
n8n-nodes-motherduck v0.1.3

n8n node to connect to MotherDuck (DuckDB in the cloud)

Downloads: 24

n8n-nodes-motherduck

This is an n8n community node that lets you use MotherDuck (DuckDB in the cloud) in your n8n workflows.

MotherDuck is a serverless analytics platform built on DuckDB. It allows you to run SQL queries on your data without managing infrastructure.

n8n is a fair-code licensed workflow automation platform.

Important: only the n8n:latest-debian Docker image is supported.

This node uses the @duckdb/node-api package which includes native bindings compiled for glibc-based systems. Alpine Linux uses musl libc instead, which is incompatible with these bindings. If you're running n8n on Alpine Linux (common in Docker images), you'll need to use a glibc-based distribution like Debian or Ubuntu instead.
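As a starting point, here is a minimal docker-compose sketch that pins a glibc-based image. The service layout, volume name, and port mapping are illustrative assumptions, not part of this package; adjust them to your setup.

```yaml
services:
  n8n:
    image: n8nio/n8n:latest-debian   # glibc-based; Alpine tags cannot load the DuckDB native bindings
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n     # persist credentials and workflows
volumes:
  n8n_data: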

Operations

Row Operations

  • Get: Query rows from a table with optional filters
  • Insert: Insert rows into a table
  • Upsert: Insert or update rows based on matching columns (uses MERGE)
  • Delete: Delete rows matching specified conditions

Table Operations

  • List: List all tables in a database/schema
  • Import from File: Create or load data from CSV, JSON, or Parquet files
  • Insert from Data: Batch insert an array of objects into a table (create or append)
  • Upsert from Data: Batch upsert an array of objects into a table

Credentials

You need a MotherDuck access token to use this node. You can create one at app.motherduck.com/settings/tokens.
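If you want to verify the token outside of n8n, MotherDuck connection strings take the documented form md:<database>?motherduck_token=<token>. The helper below is a hypothetical sketch (the function name and the MOTHERDUCK_TOKEN variable are assumptions, not part of this node):

```python
import os

def motherduck_dsn(database: str) -> str:
    """Build a MotherDuck connection string from the MOTHERDUCK_TOKEN env var."""
    token = os.environ.get("MOTHERDUCK_TOKEN")
    if not token:
        raise RuntimeError("Set MOTHERDUCK_TOKEN to your MotherDuck access token")
    return f"md:{database}?motherduck_token={token}"
```

You could then pass the result to a DuckDB client, e.g. duckdb.connect(motherduck_dsn("my_db")) with the duckdb Python package, assuming a database named my_db exists in your account.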

Installation

Follow the installation guide in the n8n community nodes documentation.

npm

npm install n8n-nodes-motherduck

Manual Installation

  1. Clone this repository
  2. Run npm install
  3. Run npm run build
  4. Copy the dist folder to your n8n custom nodes directory

Usage

Get Rows

Query data from a MotherDuck table:

  1. Select Row resource and Get operation
  2. Choose the database, schema, and table
  3. Optionally add filter conditions
  4. Set limit, order by, or return all rows
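The steps above can be sketched as a query builder. This is illustrative only, not the node's actual code; column filters are assumed to combine with AND and to be parameterized:

```python
def build_get_query(database, schema, table, filters=None, order_by=None, limit=None):
    """Sketch of the SELECT a Get operation conceptually issues."""
    sql = f'SELECT * FROM "{database}"."{schema}"."{table}"'
    params = []
    if filters:
        clauses = []
        for column, value in filters.items():
            clauses.append(f'"{column}" = ?')  # parameterized to avoid SQL injection
            params.append(value)
        sql += " WHERE " + " AND ".join(clauses)
    if order_by:
        sql += f' ORDER BY "{order_by}"'
    if limit is not None:
        sql += f" LIMIT {limit}"
    return sql, params
```

For example, filters={"status": "active"} with order_by="id" and limit=10 yields a single parameterized SELECT with WHERE, ORDER BY, and LIMIT clauses.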

Insert Rows

Insert rows from your workflow into a table:

Using an existing table:

  1. Select Row resource and Insert operation
  2. Choose the database, schema, and table
  3. Optionally specify an Input Data Field to map from a specific field in your input (e.g., data or user.profile); leave empty to use the entire input
  4. Map columns using the resource mapper (auto-map or define manually)

Creating a new table:

  1. Select Row resource and Insert operation
  2. Set Table Mode to "Create New Table"
  3. Enter the new table name
  4. Optionally specify an Input Data Field to map from a specific field
  5. Optionally define column types in Column Definitions (defaults to VARCHAR for all columns)

Upsert Rows

Insert new rows or update existing ones based on matching columns:

Using an existing table:

  1. Select Row resource and Upsert operation
  2. Choose the database, schema, and table
  3. Optionally specify an Input Data Field to map from a specific field in your input
  4. Select which columns to match on using the resource mapper (no unique constraint required - uses MERGE)
  5. Map the values to insert/update

Creating a new table:

  1. Select Row resource and Upsert operation
  2. Set Table Mode to "Create New Table"
  3. Enter the new table name
  4. Specify Match Columns as a comma-separated list (e.g., id, email)
  5. Optionally specify an Input Data Field and Column Definitions
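Since the node uses MERGE rather than a unique-constraint upsert, the statement it produces conceptually has the shape below. This is an illustrative sketch, not the node's actual code; the staging source name is an assumption:

```python
def build_merge(table, match_columns, value_columns):
    """Sketch of a DuckDB MERGE INTO statement for an upsert on match columns."""
    on = " AND ".join(f't."{c}" = s."{c}"' for c in match_columns)
    update = ", ".join(f'"{c}" = s."{c}"' for c in value_columns)
    all_cols = list(match_columns) + list(value_columns)
    cols = ", ".join(f'"{c}"' for c in all_cols)
    vals = ", ".join(f's."{c}"' for c in all_cols)
    return (
        f'MERGE INTO "{table}" AS t USING staging AS s ON {on} '
        f"WHEN MATCHED THEN UPDATE SET {update} "
        f"WHEN NOT MATCHED THEN INSERT ({cols}) VALUES ({vals})"
    )
```

Because MERGE matches on arbitrary columns, the target table needs no PRIMARY KEY or UNIQUE constraint for the upsert to work.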

Delete Rows

Delete rows matching specified conditions:

  1. Select Row resource and Delete operation
  2. Choose the database, schema, and table
  3. Add filter conditions (at least one required)

List Tables

List all tables in a database:

  1. Select Table resource and List operation
  2. Choose the database
  3. Optionally filter by schema

Import from File

Create or append to a table from external files:

  1. Select Table resource and Import from File operation
  2. Choose the database and schema
  3. Enter the target table name
  4. Enter the file URL(s):
    • Single file: https://example.com/data.csv
    • Multiple files: https://example.com/file1.csv, https://example.com/file2.csv
    • Glob pattern: https://example.com/data/*.parquet
    • S3: s3://bucket/path/file.json
  5. Select format (auto-detect, CSV, JSON, or Parquet)
  6. Choose import mode:
    • Create Table: Create new table (fails if exists)
    • Create or Replace: Drop and recreate table
    • Create If Not Exists: Only create if table doesn't exist
    • Append: Insert into existing table
  7. For CSV files, configure delimiter, header, quote character, etc.
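The import modes above map naturally onto DuckDB's file-reading functions (read_csv_auto, read_json_auto, read_parquet). The sketch below shows that mapping for illustration; it is not the node's actual code, and the mode names are shortened assumptions:

```python
READERS = {"csv": "read_csv_auto", "json": "read_json_auto", "parquet": "read_parquet"}

def build_import(table, url, fmt="csv", mode="create"):
    """Sketch of the statement each import mode conceptually maps to."""
    source = f"SELECT * FROM {READERS[fmt]}('{url}')"
    if mode == "create":
        return f'CREATE TABLE "{table}" AS {source}'          # fails if table exists
    if mode == "replace":
        return f'CREATE OR REPLACE TABLE "{table}" AS {source}'  # drop and recreate
    if mode == "if_not_exists":
        return f'CREATE TABLE IF NOT EXISTS "{table}" AS {source}'
    if mode == "append":
        return f'INSERT INTO "{table}" {source}'              # table must exist
    raise ValueError(f"unknown mode: {mode}")
```

Multiple URLs or glob patterns can be passed to the same reader functions as a list, which is how the multi-file and glob inputs above are handled in DuckDB.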

Insert from Data (Batch Insert)

Batch insert an array of objects into a table. Perfect for processing aggregated data from previous nodes:

  1. Select Table resource and Insert from Data operation
  2. Choose the database and schema
  3. Enter the target table name
  4. Specify the Data Field containing your array (e.g., movies, data.items), or leave empty to use input items directly
  5. Choose mode:
    • Create Table: Create new table (fails if exists)
    • Create or Replace: Drop and recreate table
    • Create If Not Exists: Only create if table doesn't exist
    • Append: Insert into existing table
  6. Optionally define column types in Column Definitions

Example: a workflow fetches 200 movies from an API, aggregates them into one item with a movies array, then uses Insert from Data to bulk insert all 200 rows in a single operation.
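Resolving a dotted Data Field path like movies or data.items against an incoming item can be sketched as follows. This is a hypothetical illustration of the behavior described above, not the node's actual code:

```python
def resolve_data_field(item, path):
    """Walk a dotted path (e.g. "data.items") into a nested dict; empty path returns the item."""
    if not path:
        return item
    value = item
    for key in path.split("."):
        value = value[key]
    return value
```

With an input item like {"data": {"items": [...]}} and a Data Field of data.items, the nested array is extracted and each of its elements becomes one inserted row.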

Upsert from Data (Batch Upsert)

Batch upsert an array of objects into a table:

  1. Select Table resource and Upsert from Data operation
  2. Choose the database and schema
  3. Enter the target table name
  4. Specify the Data Field containing your array
  5. Enter Match Columns (comma-separated, e.g., id, email)
  6. Choose mode:
    • Create or Replace Then Upsert: Drop and recreate table, then upsert
    • Create If Not Exists Then Upsert: Create table if needed, then upsert
    • Upsert Only: Table must exist
  7. Optionally define column types

Development

Quick Start with Docker

The easiest way to test the node locally is with Docker:

# Install dependencies and build
make install
make build

# Start n8n with the custom node
make start

# View logs
make logs

# Stop n8n
make stop

Then open http://localhost:5678 in your browser. The MotherDuck node will be available in the nodes panel.

Development Workflow

# Terminal 1: Watch for TypeScript changes
make dev

# Terminal 2: Run n8n (restart after changes)
make start

After making changes to the node code:

  1. The TypeScript watcher will recompile automatically
  2. Restart n8n to pick up changes: make restart

Manual Setup

# Install dependencies
npm install

# Build the node
npm run build

# Run in development mode (watch)
npm run dev

# Lint
npm run lint

Makefile Commands

| Command | Description |
|---------|-------------|
| make install | Install npm dependencies |
| make build | Build TypeScript to dist/ |
| make dev | Watch mode for development |
| make start | Build and start n8n container |
| make stop | Stop n8n container |
| make restart | Full restart (stop, build, start) |
| make logs | View n8n container logs |
| make clean | Remove containers and dist/ |
| make rebuild | Rebuild Docker image and start |

Resources

License

MIT