
n8n-nodes-openrouter-selector

v0.9.3


n8n-nodes-openrouter-selector

n8n community node for intelligent OpenRouter model selection based on task, budget, and benchmarks.

Features

  • Task-Based Selection: Optimized model recommendations for:

    • Translation (Chinese ↔ English optimized)
    • Coding & Development
    • Data Analysis & Reasoning
    • Vision & Image Analysis
    • Conversational Chat
    • Text Embedding
    • Summarization
    • Mathematical Reasoning
  • Budget Awareness: Three budget tiers:

    • Cheap: Lowest cost, quality secondary
    • Balanced: Good price-performance ratio
    • Premium: Best quality, cost no concern
  • Benchmark-Based Scoring: Uses external benchmark data:

    • Artificial Analysis (Intelligence, Coding, Math indices)
    • LMSYS Chatbot Arena (Elo ratings)
    • LiveBench (Coding, Math, Reasoning scores)
  • Dynamic Model Override: Select specific models with real-time scoring preview

  • Flexible Filtering:

    • Minimum context length
    • JSON mode requirement
    • Vision/multimodal requirement
    • Cost limits
    • Provider whitelist/blacklist

Installation

In n8n

  1. Go to Settings → Community Nodes
  2. Click Install a community node
  3. Enter: n8n-nodes-openrouter-selector
  4. Click Install

Manual Installation

# In your n8n custom nodes directory
cd ~/.n8n/custom
npm install n8n-nodes-openrouter-selector

Development Installation

git clone https://github.com/ecolights/n8n-nodes-openrouter-selector.git
cd n8n-nodes-openrouter-selector
pnpm install
pnpm build

# Link to n8n
cd ~/.n8n/custom
npm link /path/to/n8n-nodes-openrouter-selector

Prerequisites

This node requires:

  1. Supabase Database with the benchmark schema (see docs/BENCHMARK_SYSTEM.md):

    • models_catalog - OpenRouter model data (synced via separate workflow)
    • model_name_mappings - Master mapping table (manually maintained)
    • model_benchmarks - Benchmark scores (auto-synced weekly)
    • task_profiles - Task-specific scoring weights
    • unmatched_models - Review queue for new models
  2. n8n Workflow: TN_benchmark_sync_artificial_analysis for weekly benchmark sync

  3. Credentials: Supabase URL and API key (service role for write access)

Quick Schema Setup

The full schema with triggers, functions, and RLS policies is documented in docs/BENCHMARK_SYSTEM.md.

Core Tables Overview:

-- Model name mappings (Source of Truth - manually maintained)
CREATE TABLE model_name_mappings (
  openrouter_id TEXT UNIQUE NOT NULL,      -- e.g. "anthropic/claude-sonnet-4"
  canonical_name TEXT NOT NULL,             -- Display name
  aa_name TEXT,                             -- Artificial Analysis name
  aa_slug TEXT,                             -- AA URL slug
  provider TEXT,                            -- anthropic, openai, google, etc.
  verified BOOLEAN DEFAULT false
);

-- Benchmark scores (auto-filled by sync workflow)
CREATE TABLE model_benchmarks (
  openrouter_id TEXT REFERENCES model_name_mappings(openrouter_id),
  -- Artificial Analysis
  aa_intelligence_index DECIMAL(5,2),
  aa_coding_index DECIMAL(5,2),
  aa_math_index DECIMAL(5,2),
  -- LMSYS Arena
  lmsys_elo INTEGER,
  -- LiveBench
  livebench_overall DECIMAL(5,2),
  livebench_coding DECIMAL(5,2),
  -- Computed composites (via trigger)
  composite_general DECIMAL(5,2),
  composite_code DECIMAL(5,2),
  composite_math DECIMAL(5,2)
);

-- Task-specific scoring weights
CREATE TABLE task_profiles (
  task_name TEXT UNIQUE NOT NULL,           -- general, code, translation, etc.
  weight_aa_intelligence DECIMAL(3,2),
  weight_aa_coding DECIMAL(3,2),
  weight_lmsys_elo DECIMAL(3,2),
  weight_livebench DECIMAL(3,2),
  boost_anthropic DECIMAL(3,2),
  boost_openai DECIMAL(3,2),
  boost_deepseek DECIMAL(3,2)
);
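To illustrate how the `task_profiles` weights are meant to combine the `model_benchmarks` columns, here is a hedged TypeScript sketch. The real composites are computed by a database trigger (see docs/BENCHMARK_SYSTEM.md); the Elo normalization below is purely an assumption for the example:

```typescript
// Sketch: combine benchmark columns into a task-weighted score using the
// weight_* columns from task_profiles. Column names follow the schema above.

interface BenchmarkRow {
  aa_intelligence_index: number | null; // 0-100 index
  aa_coding_index: number | null;       // 0-100 index
  lmsys_elo: number | null;             // Arena Elo rating
  livebench_overall: number | null;     // 0-100 score
}

interface TaskProfile {
  weight_aa_intelligence: number;
  weight_aa_coding: number;
  weight_lmsys_elo: number;
  weight_livebench: number;
}

function taskWeightedScore(b: BenchmarkRow, p: TaskProfile): number {
  // Map Elo onto a 0-100 scale (assumption: 1000 -> 0, 1400 -> 100).
  const eloNorm =
    b.lmsys_elo === null
      ? 0
      : Math.min(100, Math.max(0, (b.lmsys_elo - 1000) / 4));
  // Missing benchmarks contribute 0 rather than failing the model.
  return (
    (b.aa_intelligence_index ?? 0) * p.weight_aa_intelligence +
    (b.aa_coding_index ?? 0) * p.weight_aa_coding +
    eloNorm * p.weight_lmsys_elo +
    (b.livebench_overall ?? 0) * p.weight_livebench
  );
}
```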

Usage

Basic Usage

  1. Add the OpenRouter Model Selector node to your workflow
  2. Configure credentials (Supabase URL + API Key)
  3. Select a Task Category (e.g., "coding")
  4. Select a Budget (e.g., "balanced")
  5. Execute to get the recommended model

Output Format

Full Output (default):

{
  "recommended": {
    "modelId": "anthropic/claude-sonnet-4",
    "provider": "anthropic",
    "displayName": "Claude Sonnet 4",
    "contextLength": 200000,
    "supportsJson": true,
    "modality": "text+image->text",
    "pricing": {
      "promptPer1kUsd": 0.003,
      "completionPer1kUsd": 0.015,
      "combinedPer1kUsd": 0.009
    },
    "score": 87.5,
    "scoreBreakdown": {
      "benchmarkFit": 38,
      "taskFit": 28,
      "budgetFit": 18,
      "capabilityFit": 9,
      "providerBonus": 1.5
    },
    "reasoning": "Excellent benchmark performance for Coding & Development, ideal balanced pricing, anthropic provider bonus (+15%)."
  },
  "alternatives": [...],
  "queryMetadata": {
    "task": "coding",
    "budget": "balanced",
    "totalModelsEvaluated": 313,
    "modelsPassingFilters": 187,
    "executionTimeMs": 245
  }
}

Using the Selected Model

Connect the output to an HTTP Request node or OpenRouter integration:

[OpenRouter Model Selector] → [HTTP Request to OpenRouter API]
                                 URL: https://openrouter.ai/api/v1/chat/completions
                                 Body: { "model": "={{$json.recommended.modelId}}", ... }
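Outside of n8n, the same downstream call can be sketched in TypeScript. This is a hedged standalone equivalent of the HTTP Request node above, not part of the package itself; it assumes Node 18+ (global `fetch`) and an OpenRouter API key passed in by the caller:

```typescript
// Sketch: feed the selector's output into OpenRouter's chat completions API.
// SelectorOutput mirrors the node's documented output shape.

interface SelectorOutput {
  recommended: { modelId: string };
}

// Pure helper: build the request from the selector output.
function buildChatRequest(sel: SelectorOutput, userMessage: string) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    body: {
      model: sel.recommended.modelId,
      messages: [{ role: "user", content: userMessage }],
    },
  };
}

async function callOpenRouter(
  sel: SelectorOutput,
  userMessage: string,
  apiKey: string,
) {
  const req = buildChatRequest(sel, userMessage);
  const res = await fetch(req.url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(req.body),
  });
  return res.json();
}
```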

Scoring Algorithm

The scoring formula is deterministic and based on external benchmarks:

score = ((benchmark_fit × 0.4) + (task_fit × 0.3) + (budget_fit × 0.2) + (capability_fit × 0.1)) × provider_boost

Components

| Component | Weight | Description |
|-----------|--------|-------------|
| Benchmark Fit | 40% | Score from external benchmarks (AA, LMSYS, LiveBench) |
| Task Fit | 30% | How well the model matches task requirements |
| Budget Fit | 20% | Cost alignment with budget preference |
| Capability Fit | 10% | Context length, JSON support, verification status |
| Provider Boost | ×1.0-1.2 | Task-specific provider bonuses |
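The formula can be sketched directly from the table. The field names below mirror the `scoreBreakdown` keys in the example output; the published node's internals may differ:

```typescript
// Sketch of the documented scoring formula: a weighted sum of four fit
// components, with the task-specific provider boost applied to the total.

interface FitScores {
  benchmarkFit: number;  // 0-100, from AA / LMSYS / LiveBench
  taskFit: number;       // 0-100, match against task requirements
  budgetFit: number;     // 0-100, price alignment with the budget tier
  capabilityFit: number; // 0-100, context length, JSON mode, verification
}

function scoreModel(fit: FitScores, providerBoost = 1.0): number {
  const weighted =
    fit.benchmarkFit * 0.4 +
    fit.taskFit * 0.3 +
    fit.budgetFit * 0.2 +
    fit.capabilityFit * 0.1;
  return weighted * providerBoost; // boost is x1.0-1.2 per the table
}
```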

Provider Boosts by Task

| Task | Provider Boosts |
|------|-----------------|
| Translation | DeepSeek +20%, Qwen +15%, Anthropic +10% |
| Coding | Anthropic +15%, OpenAI +10%, DeepSeek +8% |
| Analysis | Anthropic +15%, OpenAI +10%, Google +5% |
| Vision | OpenAI +15%, Google +12%, Anthropic +8% |
| Math | DeepSeek +15%, Qwen +12%, OpenAI +10% |
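Expressed as multipliers, the table above becomes a simple lookup (any task/provider pair not listed defaults to no boost):

```typescript
// The documented task-specific provider boosts, as multipliers
// (e.g. +15% -> 1.15). Unlisted pairs get a neutral 1.0.
const providerBoosts: Record<string, Record<string, number>> = {
  translation: { deepseek: 1.2, qwen: 1.15, anthropic: 1.1 },
  coding: { anthropic: 1.15, openai: 1.1, deepseek: 1.08 },
  analysis: { anthropic: 1.15, openai: 1.1, google: 1.05 },
  vision: { openai: 1.15, google: 1.12, anthropic: 1.08 },
  math: { deepseek: 1.15, qwen: 1.12, openai: 1.1 },
};

function boostFor(task: string, provider: string): number {
  return providerBoosts[task]?.[provider] ?? 1.0;
}
```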

Configuration

Node Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| Task Category | Dropdown | Type of task (coding, translation, etc.) |
| Budget | Dropdown | Cost preference (cheap, balanced, premium) |
| Model Override | Dynamic Dropdown | Override with specific model |
| Filters | Collection | Advanced filtering options |
| Options | Collection | Output configuration |

Filter Options

| Filter | Type | Default | Description |
|--------|------|---------|-------------|
| Min Context Length | Number | 8000 | Minimum context length in tokens |
| Require JSON Mode | Boolean | false | Only JSON-capable models |
| Require Vision | Boolean | false | Only multimodal models |
| Max Cost per 1K | Number | 0 | Cost limit (0 = no limit) |
| Provider Whitelist | Multi-select | [] | Include only these providers |
| Provider Blacklist | Multi-select | [] | Exclude these providers |
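The filter step amounts to a predicate over each candidate model. This sketch mirrors the options in the table; the model field names are assumptions based on the example output shape, not the node's actual internals:

```typescript
// Sketch of the filtering step: a model must pass every enabled filter.

interface ModelInfo {
  modelId: string;
  provider: string;
  contextLength: number;
  supportsJson: boolean;
  supportsVision: boolean;
  combinedPer1kUsd: number;
}

interface Filters {
  minContextLength: number;    // default 8000
  requireJsonMode: boolean;    // default false
  requireVision: boolean;      // default false
  maxCostPer1k: number;        // 0 = no limit
  providerWhitelist: string[]; // [] = allow all
  providerBlacklist: string[]; // [] = exclude none
}

function passesFilters(m: ModelInfo, f: Filters): boolean {
  if (m.contextLength < f.minContextLength) return false;
  if (f.requireJsonMode && !m.supportsJson) return false;
  if (f.requireVision && !m.supportsVision) return false;
  if (f.maxCostPer1k > 0 && m.combinedPer1kUsd > f.maxCostPer1k) return false;
  if (f.providerWhitelist.length > 0 && !f.providerWhitelist.includes(m.provider)) return false;
  if (f.providerBlacklist.includes(m.provider)) return false;
  return true;
}
```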

Benchmark Sync Workflow

The node requires benchmark data that is synced weekly via an n8n workflow.

n8n Workflow: TN_benchmark_sync_artificial_analysis

Trigger: Weekly (Sunday 03:00 UTC) + Manual + Webhook

Data Flow:

[1. Fetch Artificial Analysis] ──────┐
                                      │
[2. Fetch OpenRouter Catalog] ────────┼──► [4. Merge All Data]
                                      │           │
[3. Fetch Existing Mappings] ────────┘            ▼
                                      [5. Process & Match Models]
                                              │
                               ┌──────────────┴──────────────┐
                               ▼                             ▼
                    [6. Upsert Benchmarks]      [7. Store Unmatched]
                               │                             │
                               └──────────────┬──────────────┘
                                              ▼
                                   [8. Merge Results]
                                              │
                                              ▼
                                   [9. Telegram Notification]

Detailed Documentation: See docs/BENCHMARK_SYSTEM.md

Manual Sync Trigger

# Via n8n Webhook
curl -X POST https://n8n.dev.ecolights.de/webhook/benchmark-sync

Development

# Install dependencies
pnpm install

# Build
pnpm build

# Watch mode
pnpm dev

# Lint
pnpm lint

# Format
pnpm format

License

MIT

Author

EcoLights ([email protected])

Links