n8n-nodes-custom-llm
Custom LLM node for n8n - use any LLM API via custom HTTP requests, with cURL import support.
Features
- Manual Request Mode: Manually configure HTTP requests with full control over URL, headers, body, and query parameters
- cURL Import Mode: Import HTTP request options directly from curl commands
- Dynamic Prompt Injection: Inject prompts into any location in the JSON body using dot notation (e.g., `messages[0].content`)
- Custom Parameters: Configure any LLM parameters such as `max_tokens`, `temperature`, `top_k`, etc., with Liquid template support
- Authentication Support: Built-in support for Header Auth and Basic Auth
- Liquid Templates: Full support for n8n Liquid expressions in all fields
Installation
Global Installation (Development)
- Clone or download this repository:
```
git clone <repository-url>
cd n8n-nodes-custom-llm
```
- Build the project:
```
npm install
npm run build
```
- Link the module globally:
```
npm link
```
- Navigate to your n8n directory:
```
cd ~/.n8n
```
- Link the nodes package:
```
npm link n8n-nodes-custom-llm
```
- Restart n8n
Installation in n8n Directory
- Copy the entire project to your n8n directory:
```
cp -r n8n-nodes-custom-llm ~/.n8n/nodes/
```
- Navigate to the node directory:
```
cd ~/.n8n/nodes/n8n-nodes-custom-llm
```
- Install and build:
```
npm install
npm run build
```
- Restart n8n
Docker Installation
Add the following to your n8n docker-compose.yml:
```yaml
volumes:
  - ./n8n-nodes-custom-llm:/nodes/n8n-nodes-custom-llm
```
Then rebuild and restart n8n.
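For context, the volume mount fits into a compose file roughly like the sketch below; the service name, image, and port mapping are assumptions about a typical n8n setup, not part of this package:

```yaml
# Illustrative docker-compose.yml sketch using the volume mount above.
# Service name, image, and port are assumptions; adjust to your deployment.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - ./n8n-nodes-custom-llm:/nodes/n8n-nodes-custom-llm
```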
Usage
Mode 1: Manual Configuration
- Add the "Custom LLM" node to your workflow
- Select "Manual" request mode
- Fill in:
  - URL: Your LLM API endpoint (e.g., `https://api.openai.com/v1/chat/completions`)
  - Method: POST (most LLM APIs use POST)
  - Authentication: Choose the auth type (Header Auth, etc.)
  - JSON Body: The base request template
  - Prompt Field: Where to inject the prompt (e.g., `messages[0].content`)
  - Prompt: Your prompt text (supports Liquid templates)
  - Custom Parameters: Add parameters like `max_tokens`, `temperature`, `top_k`
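The dot-notation injection described above can be sketched roughly as follows; this is an illustrative re-implementation of the idea, not the node's actual code:

```python
import re

def inject(body: dict, path: str, value) -> dict:
    """Set `value` at a dot-notation path like 'messages[0].content'.

    Illustrative sketch of the injection idea; not the node's real parser.
    """
    # Split "messages[0].content" into tokens: ["messages", 0, "content"]
    tokens = []
    for part in path.split("."):
        for m in re.finditer(r"([^\[\]]+)|\[(\d+)\]", part):
            if m.group(1) is not None:
                tokens.append(m.group(1))        # object key
            else:
                tokens.append(int(m.group(2)))   # array index
    # Walk down to the parent of the target, then assign
    node = body
    for tok in tokens[:-1]:
        node = node[tok]
    node[tokens[-1]] = value
    return body

body = {"model": "gpt-4", "messages": [{"role": "user", "content": ""}]}
inject(body, "messages[0].content", "Hello!")
print(body["messages"][0]["content"])  # Hello!
```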
Mode 2: Import from cURL
- Get a curl command from your LLM provider's documentation
- Select "Import from cURL" request mode
- Paste the curl command
- Use the Prompt Field and Custom Parameters sections to override values dynamically
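To illustrate what a cURL import does conceptually, the sketch below parses a simple curl command into a URL, headers, and body. It is a deliberately simplified illustration; the node's actual importer handles many more curl options:

```python
import shlex

def parse_curl(cmd: str) -> dict:
    """Very simplified curl parser: extracts the URL, -H headers, and -d body.

    Illustration of the import idea only; not the node's real parser.
    """
    tokens = shlex.split(cmd)
    result = {"url": None, "headers": {}, "body": None}
    i = 1  # skip the leading "curl"
    while i < len(tokens):
        tok = tokens[i]
        if tok in ("-H", "--header"):
            name, _, value = tokens[i + 1].partition(":")
            result["headers"][name.strip()] = value.strip()
            i += 2
        elif tok in ("-d", "--data"):
            result["body"] = tokens[i + 1]
            i += 2
        elif tok in ("-X", "--request"):
            result["method"] = tokens[i + 1]
            i += 2
        elif not tok.startswith("-"):
            result["url"] = tok
            i += 1
        else:
            i += 1  # ignore flags this sketch does not handle
    return result

cmd = ("curl https://api.example.com/v1/chat "
       "-H 'Content-Type: application/json' "
       "-d '{\"model\": \"gpt-4\"}'")
print(parse_curl(cmd)["url"])  # https://api.example.com/v1/chat
```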
Example Configurations
OpenAI API Setup
- URL: `https://api.openai.com/v1/chat/completions`
- Method: POST
- JSON Body:
```json
{
  "model": "gpt-4",
  "messages": [
    { "role": "user", "content": "" }
  ]
}
```
- Prompt Field: `messages[0].content`
- Custom Parameters:
  - `max_tokens`: 1000
  - `temperature`: 0.7
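With this configuration, the body the node sends would look roughly like the following, with the prompt injected into `messages[0].content` and the custom parameters merged in at the top level (the prompt text and exact merge behavior here are illustrative assumptions):

```json
{
  "model": "gpt-4",
  "messages": [
    { "role": "user", "content": "Your prompt text here" }
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}
```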
Anthropic Claude API Setup
- URL: `https://api.anthropic.com/v1/messages`
- Method: POST
- JSON Body:
```json
{
  "model": "claude-3-opus-20240229",
  "messages": [
    { "role": "user", "content": "" }
  ]
}
```
- Prompt Field: `messages[0].content`
- Custom Parameters:
  - `max_tokens`: 1000
  - `top_k`: 250
Liquid Template Examples
Using Previous Node Data
- Prompt: `{{ $json.userInput }}`
- Custom Parameter: `{{ $json.tokenCount }}`
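Conceptually, such expressions are evaluated against each incoming item's JSON. A minimal sketch of that substitution (illustrative only; n8n's real expression engine is far richer):

```python
import re

def render(template: str, json_data: dict) -> str:
    """Replace {{ $json.key }} placeholders with values from the item's JSON.

    Minimal illustration of expression resolution, not n8n's engine.
    """
    def sub(m):
        return str(json_data.get(m.group(1), ""))
    return re.sub(r"\{\{\s*\$json\.(\w+)\s*\}\}", sub, template)

print(render("{{ $json.userInput }}", {"userInput": "Summarize this text"}))
# Summarize this text
```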
Conditional Values
```
{% if $json.priority == 'high' %}
0.1
{% else %}
0.7
{% endif %}
```
Complex Prompt
```
You are a {{ $json.role }}.
Please answer the following question: {{ $json.question }}
```
Requirements
- n8n 1.0.0 or higher
License
MIT
