# n8n-nodes-torrix
Official Torrix community node for n8n. Route LLM calls through Torrix to log tokens, cost, latency, and full prompt traces without changing your workflow logic.
Torrix is a self-hosted LLM observability tool. All data stays on your machine.
## What it does
The Torrix Proxy node sends any LLM request through your local Torrix instance before forwarding it to the provider. Every call is logged with token counts (input and output), cost in USD, latency in milliseconds, the full prompt and response text, and the model name and provider.
Supports OpenAI, Anthropic, Groq, Mistral, DeepSeek, Ollama, and any OpenAI-compatible API.
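For a concrete picture, a single logged call might look something like this. This is a sketch only; the field names are illustrative assumptions, not Torrix's actual log schema:

```
{
  "model": "gpt-4o-mini",
  "provider": "openai",
  "input_tokens": 412,
  "output_tokens": 128,
  "cost_usd": 0.00014,
  "latency_ms": 830,
  "prompt": "...",
  "response": "..."
}
```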
## Prerequisites
Torrix running locally. Install it with Docker:

```
curl -o docker-compose.yml https://raw.githubusercontent.com/torrix-ai/install/main/docker-compose.community.yml
docker compose up
```

Then open http://localhost:8088, create your account, and copy your API key from Settings. You also need n8n, either self-hosted or cloud.
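To confirm Torrix is reachable before continuing, a plain HTTP check against the dashboard URL is enough (no Torrix-specific endpoint is assumed here):

```
curl -I http://localhost:8088
# expect a 200, or a redirect to the login page
```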
## Installation
- Go to Settings in n8n (bottom left cog)
- Click Community Nodes
- Click Install
- Enter `@torrix-ai/n8n-nodes-torrix`
- Click Install and accept the prompt. n8n will restart.
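If you run n8n self-hosted and prefer the terminal, community nodes can also be installed manually. A sketch, assuming the default `~/.n8n/nodes` custom directory (restart n8n afterwards):

```
cd ~/.n8n/nodes
npm install @torrix-ai/n8n-nodes-torrix
```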
## Setup
### Create a credential
- Go to Credentials in n8n
- Click Add Credential
- Search for Torrix API
- Fill in the following fields and click Save
Torrix Base URL: The URL where Torrix is running. Use http://host.docker.internal:8088 on Mac and Windows when both n8n and Torrix run in Docker. On Linux, use your machine's IP address instead.
Torrix API Key: Your key from the Torrix Settings page at http://localhost:8088.
Default LLM Provider URL: The endpoint your workflow will call most often, for example https://api.openai.com/v1/chat/completions. Individual nodes can override this.
Default Provider API Key: Your OpenAI, Anthropic, or Groq API key. Leave empty for Ollama. Individual nodes can override this.
Default Model: The model your workflow will use most often, for example gpt-4o-mini. Individual nodes can override this.
Setting these defaults in the credential means every Torrix Proxy node in your workflows picks them up automatically. You only need to fill in a field on an individual node when that specific step requires a different value.
### Add the node to a workflow
- In any workflow, click + to add a node
- Search for Torrix Proxy
- Configure the node
Model: The model to use for this step, for example gpt-4o-mini or claude-3-5-sonnet-20241022. Leave blank to use the Default Model from your Torrix credential.
User Message: The prompt text. Supports n8n expressions like `{{ $json.message }}`.
System Prompt: Optional instructions that set the behaviour of the model for this step.
Run Name: Optional label shown in the Torrix dashboard to identify this step.
LLM Provider URL: Leave blank to use the credential default. Fill in only when this step calls a different provider.
Provider API Key: Leave blank to use the credential default. Fill in only when this step uses a different API key.
The model name appears on the canvas node so you can see what each step is calling without opening it.
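A filled-in configuration for a single step might look like this. Values are illustrative; the expression assumes the incoming item carries a `message` field, as in the example above:

```
Model:         gpt-4o-mini
User Message:  {{ $json.message }}
System Prompt: You are a concise support assistant.
Run Name:      triage-reply
```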
## Grouping calls
Use Session ID to group multiple turns of a conversation together in Torrix.
Use Trace ID to group multiple LLM steps in one agent workflow into a single trace timeline.
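For example, to stitch all LLM steps of one run into a single trace, you can reuse n8n's built-in execution ID as the Trace ID. The `$execution.id` variable is standard n8n; the Session ID value below assumes a hypothetical `sessionId` field in your trigger data:

```
Trace ID:   {{ $execution.id }}
Session ID: {{ $json.sessionId }}
```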
## Example workflow
An example support triage workflow is available to help you test the node and see Torrix in action. It demonstrates three Torrix Proxy nodes sharing a single trace ID, using a cheaper model for classification and a more capable model for the response, so you can compare cost and latency across steps in the Torrix dashboard.
Download torrix-support-triage.json and import it into n8n via Workflows > Import from file. The demo folder includes step-by-step instructions for configuring the credential, running the workflow, and exploring the results in Torrix.
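On self-hosted instances, n8n's CLI offers an alternative to the UI import, assuming the JSON file has been downloaded to the current directory:

```
n8n import:workflow --input=torrix-support-triage.json
```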
## Supported providers
Torrix works with any LLM provider, covering over 300 models across OpenAI, Anthropic, Google Gemini, Azure OpenAI, Groq, Mistral, DeepSeek, Perplexity, Fireworks, Together AI, Cohere, HuggingFace, Replicate, Ollama, SAP AI Core, and any OpenAI-compatible endpoint.
Common endpoint URLs:
| Provider | URL |
|---|---|
| OpenAI | https://api.openai.com/v1/chat/completions |
| Anthropic | https://api.anthropic.com/v1/messages |
| Groq | https://api.groq.com/openai/v1/chat/completions |
| Mistral | https://api.mistral.ai/v1/chat/completions |
| DeepSeek | https://api.deepseek.com/chat/completions |
| Azure OpenAI | https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-02-01 |
| Ollama (local) | http://host.docker.internal:11434/v1/chat/completions |
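Before wiring an endpoint into the credential, it can help to verify the URL and key with a direct request. A sketch against OpenAI's API; substitute the URL, key, and model for other OpenAI-compatible providers (Anthropic uses different headers):

```
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```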
