
@torrix-ai/n8n-nodes-torrix

v0.1.11


n8n community node for Torrix — self-hosted LLM observability. Route LLM calls through Torrix to log tokens, cost, latency, and prompt traces.


n8n-nodes-torrix

Official Torrix community node for n8n. Route LLM calls through Torrix to log tokens, cost, latency, and full prompt traces without changing your workflow logic.

Torrix is a self-hosted LLM observability tool. All data stays on your machine.

What it does

The Torrix Proxy node sends any LLM request through your local Torrix instance before forwarding it to the provider. Every call is logged with token counts (input and output), cost in USD, latency in milliseconds, the full prompt and response text, and the model name and provider.
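As a concrete sketch, one logged call might look like the record below. The field names here are assumptions for illustration; the metrics themselves are the ones listed above:

```python
# Illustrative record for one proxied call. Field names are assumed, not
# Torrix's actual schema; the metrics match what the node logs: token
# counts, cost in USD, latency in milliseconds, and full prompt/response.
logged_call = {
    "model": "gpt-4o-mini",
    "provider": "openai",
    "input_tokens": 412,
    "output_tokens": 87,
    "cost_usd": 0.00021,   # cost in USD for this single call
    "latency_ms": 640,     # round-trip latency in milliseconds
    "prompt": "Summarize this support ticket...",
    "response": "The customer reports a billing issue...",
}
```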

Supports OpenAI, Anthropic, Groq, Mistral, DeepSeek, Ollama, and any OpenAI-compatible API.

Prerequisites

Torrix must be running locally. Install it by downloading the compose file and starting it with Docker:

curl -o docker-compose.yml https://raw.githubusercontent.com/torrix-ai/install/main/docker-compose.community.yml
docker compose up

Then open http://localhost:8088, create your account, and copy your API key from Settings. You also need n8n, either self-hosted or cloud.

Installation

  1. Go to Settings in n8n (bottom left cog)
  2. Click Community Nodes
  3. Click Install
  4. Enter @torrix-ai/n8n-nodes-torrix
  5. Click Install and accept the prompt. n8n will restart.

Setup

Create a credential

  1. Go to Credentials in n8n
  2. Click Add Credential
  3. Search for Torrix API
  4. Fill in the following fields and click Save

Torrix Base URL: The URL where Torrix is running. When both n8n and Torrix run in Docker on Mac or Windows, use http://host.docker.internal:8088. On Linux, use your machine's IP address instead.

Torrix API Key: Your key from the Torrix Settings page at http://localhost:8088.

Default LLM Provider URL: The endpoint your workflow will call most often, for example https://api.openai.com/v1/chat/completions. Individual nodes can override this.

Default Provider API Key: Your OpenAI, Anthropic, or Groq API key. Leave empty for Ollama. Individual nodes can override this.

Default Model: The model your workflow will use most often, for example gpt-4o-mini. Individual nodes can override this.

Setting these defaults in the credential means every Torrix Proxy node in your workflows picks them up automatically. You only need to fill in a field on an individual node when that specific step requires a different value.
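The fallback rule can be sketched in a few lines: a node-level field wins when it is set, otherwise the credential default applies. This is a minimal illustration of that rule, not the node's actual implementation; the helper name and dictionaries are hypothetical:

```python
def resolve(node_value: str, credential_default: str) -> str:
    """Return the node-level value when set, otherwise the credential default."""
    return node_value if node_value else credential_default

# Credential defaults (placeholder values)
credential = {
    "model": "gpt-4o-mini",
    "provider_url": "https://api.openai.com/v1/chat/completions",
}

# A node that overrides only the model and leaves the URL blank
node = {"model": "claude-3-5-sonnet-20241022", "provider_url": ""}

model = resolve(node["model"], credential["model"])                  # node override wins
provider_url = resolve(node["provider_url"], credential["provider_url"])  # falls back
```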

Add the node to a workflow

  1. In any workflow, click + to add a node
  2. Search for Torrix Proxy
  3. Configure the node

Model: The model to use for this step, for example gpt-4o-mini or claude-3-5-sonnet-20241022. Leave blank to use the Default Model from your Torrix credential.

User Message: The prompt text. Supports n8n expressions like {{ $json.message }}.

System Prompt: Optional instructions that set the behaviour of the model for this step.

Run Name: Optional label shown in the Torrix dashboard to identify this step.

LLM Provider URL: Leave blank to use the credential default. Fill in only when this step calls a different provider.

Provider API Key: Leave blank to use the credential default. Fill in only when this step uses a different API key.

The model name appears on the canvas node so you can see what each step is calling without opening it.

Grouping calls

Use Session ID to group multiple turns of a conversation together in Torrix.

Use Trace ID to group multiple LLM steps in one agent workflow into a single trace timeline.
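To illustrate the idea: every LLM step in one workflow run reuses the same Trace ID, so Torrix can stitch the steps into one timeline. The dictionaries below are hypothetical log entries, not Torrix's actual storage format:

```python
import uuid
from collections import defaultdict

# One trace ID per workflow run; each LLM step in that run reuses it.
trace_id = str(uuid.uuid4())

# Hypothetical log entries for a two-step agent workflow
calls = [
    {"trace_id": trace_id, "run_name": "classify", "model": "gpt-4o-mini"},
    {"trace_id": trace_id, "run_name": "respond", "model": "claude-3-5-sonnet-20241022"},
]

# Grouping by trace ID reconstructs the timeline for the run
timeline = defaultdict(list)
for call in calls:
    timeline[call["trace_id"]].append(call["run_name"])
```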

Example workflow

An example support triage workflow is available to help you test the node and see Torrix in action. It demonstrates three Torrix Proxy nodes sharing a single trace ID, using a cheaper model for classification and a more capable model for the response, so you can compare cost and latency across steps in the Torrix dashboard.

Download torrix-support-triage.json and import it into n8n via Workflows > Import from file. The demo folder includes step-by-step instructions for configuring the credential, running the workflow, and exploring the results in Torrix.

Supported providers

Torrix works with any LLM provider, covering over 300 models across OpenAI, Anthropic, Google Gemini, Azure OpenAI, Groq, Mistral, DeepSeek, Perplexity, Fireworks, Together AI, Cohere, HuggingFace, Replicate, Ollama, SAP AI Core, and any OpenAI-compatible endpoint.

Common endpoint URLs:

| Provider | URL |
|---|---|
| OpenAI | https://api.openai.com/v1/chat/completions |
| Anthropic | https://api.anthropic.com/v1/messages |
| Groq | https://api.groq.com/openai/v1/chat/completions |
| Mistral | https://api.mistral.ai/v1/chat/completions |
| DeepSeek | https://api.deepseek.com/chat/completions |
| Azure OpenAI | https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=2024-02-01 |
| Ollama (local) | http://host.docker.internal:11434/v1/chat/completions |
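Most of the endpoints above (Anthropic's messages API is the exception, with its own request shape) accept the standard OpenAI-style chat completions body. A minimal example of that shape, with placeholder values:

```python
import json

# Minimal OpenAI-style chat completions payload: the request body shape
# used by OpenAI-compatible endpoints such as Groq, Mistral, DeepSeek,
# and Ollama. Model name and message text are placeholders.
payload = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a support triage assistant."},
        {"role": "user", "content": "My invoice total looks wrong."},
    ],
}

body = json.dumps(payload)
```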

Links

Torrix website · Install docs · Support