
databricks-execute

Runs a local Python file on a Databricks cluster, mimicking the Databricks VS Code extension “Upload and Run File” flow:

  1. Runs databricks bundle sync to upload bundle assets
  2. Executes the file on the configured cluster via the Command Execution API
  3. Parses remote stack traces and rewrites /Workspace/... paths back to local paths (see the example below)
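
For example (illustrative paths, assuming the default bundle file layout), a remote traceback frame such as

File "/Workspace/Users/alice/.bundle/my-project/dev/files/src/job.py", line 10

would be rewritten to point at the local src/job.py.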

For notebooks (.ipynb or “Databricks notebook source” files like example.py), it runs them as a workflow (Jobs API notebook task) to match the VS Code “Run File as Workflow” behavior.
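
For reference, a Python file exported from a Databricks notebook starts with the marker comment shown below; that first line is what triggers notebook mode (the print line is just illustrative file content):

# Databricks notebook source
print("hello from a notebook cell")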

Install / build (repo-local)

node .yarn/releases/yarn-3.2.1.cjs install
node .yarn/releases/yarn-3.2.1.cjs workspace @fermumen/databricks-execute build

Install the command

From npm (one-line install)

npm install -g @fermumen/databricks-execute@latest

Then run:

databricks-execute path/to/local/file.py -- arg1 arg2

Upgrade to latest:

npm install -g @fermumen/databricks-execute@latest

Local (recommended for this repo)

This makes databricks-execute available via node_modules/.bin, so yarn can resolve it as a command:

node .yarn/releases/yarn-3.2.1.cjs workspace @fermumen/databricks-execute add -D file:packages/databricks-execute

Then run it with:

node .yarn/releases/yarn-3.2.1.cjs databricks-execute path/to/local/file.py -- arg1 arg2

Global

Install the local build from this repo root:

npm install -g ./packages/databricks-execute

Usage

databricks-execute path/to/local/file.py -- arg1 arg2

This CLI shells out to the Databricks CLI (databricks), so make sure it’s installed and on your PATH.
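
For example, you can verify the CLI is available with:

databricks --version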

Notebook mode

If the input file is:

  • an .ipynb, or
  • a *.py/*.sql/*.scala/*.r file whose first line is the “Databricks notebook source” marker

then databricks-execute runs it as a workflow notebook task instead of using the Command Execution API.
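
For example (hypothetical file names):

databricks-execute analysis.ipynb
databricks-execute example.py   # first line: # Databricks notebook source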

Notes:

  • Positional args (-- arg1 arg2) and --env KEY=VALUE are only supported in Command Execution mode (plain .py files); see the example after these notes.
  • Notebook output is printed by extracting text stdout/stderr (and tracebacks) from the exported run. For rich outputs (tables/plots/HTML), open the run URL in a browser.
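
For instance, a Command Execution run with an environment variable and positional args might look like this (hypothetical paths and values):

databricks-execute src/job.py --env LOG_LEVEL=debug -- --input data.csv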

By default, it reads workspace.host, the workspace file path, and the cluster id from bundle.yml / databricks.yml, resolved via databricks bundle validate.
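
As a rough sketch, a minimal databricks.yml in the Databricks Asset Bundles format might look like this (illustrative values; exactly which keys beyond workspace.host this tool consumes comes from the databricks bundle validate output):

bundle:
  name: my-project

targets:
  dev:
    default: true
    workspace:
      host: https://adb-1234567890123456.7.azuredatabricks.net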

Authentication / .config (optional)

For authentication, prefer the standard Databricks CLI auth flow (for example databricks auth login) or set DATABRICKS_TOKEN.
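
For example, either standard approach works (placeholder host and token, matching the .config example below):

databricks auth login --host https://adb-1234567890123456.7.azuredatabricks.net
export DATABRICKS_TOKEN=dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX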

If you still want a repo-local override file, you can use .config in the bundle root with plain key=value lines (comments with # are allowed):

host=https://adb-1234567890123456.7.azuredatabricks.net
token=dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
cluster=My Cluster Name
target=dev

CLI flags override .config, and environment variables can also be used (DATABRICKS_HOST, DATABRICKS_TOKEN).

Run databricks-execute --help for the full set of options.