
@metabase/database-metadata

v1.0.6

CLI tool to extract Metabase database metadata into YAML files

Metabase Database Metadata Format

Metabase represents database metadata — synced databases, their tables, and their fields — as a tree of YAML files. Files are diff-friendly: numeric IDs are omitted entirely, and foreign keys use natural-key tuples like ["Sample Database", "PUBLIC", "ORDERS"] instead of database identifiers.
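For illustration, a field entry in this style might look like the sketch below. This is a hypothetical shape only: the exact key names are defined in core-spec/v1/spec.md, and `table` / `fk_target` here are assumptions, not the real schema. Only the natural-key tuple convention is taken from the text above.

```yaml
# Hypothetical sketch — real key names are defined in core-spec/v1/spec.md.
name: USER_ID
table: ["Sample Database", "PUBLIC", "ORDERS"]    # natural-key tuple, no numeric ID
base_type: type/Integer
fk_target: ["Sample Database", "PUBLIC", "PEOPLE"] # foreign keys also use natural keys
```

Because numeric IDs never appear, re-extracting after a resync produces stable, reviewable diffs.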

This repository contains the specification, examples, and a CLI that converts the table_metadata.json downloaded from a Metabase instance into YAML.

Specification

The format is defined in core-spec/v1/spec.md (v1.0.4). It covers entity keys, field types, folder structure, and the shape of each entity.

Reference output for the Sample Database lives in examples/v1/ — both the raw table_metadata.json and the extracted YAML tree.

Entities

| Entity | Description |
|--------|-------------|
| Database | A connected data source (Postgres, MySQL, BigQuery, etc.) |
| Table | A physical table (or view) inside a database |
| Field | A column on a table, including JSON-unfolded nested fields |

Obtaining metadata

Metadata is fetched from Metabase's GET /api/ee/serialization/metadata/export endpoint as a table_metadata.json file — a flat JSON document with three arrays (databases, tables, and fields) streamed so even warehouses with very large schemas can be exported without exhausting server memory.

The endpoint accepts three boolean query parameters that opt sections in or out — they all default to false, so requests must explicitly set the sections they want:

  • with-databases — include the databases array.
  • with-tables — include the tables array.
  • with-fields — include the fields array.

A typical full export sets all three to true:

GET /api/ee/serialization/metadata/export?with-databases=true&with-tables=true&with-fields=true
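As a concrete sketch, the request can be issued with curl. `METABASE_HOST` and the `x-api-key` authentication shown here are assumptions about your deployment, not part of this package:

```shell
# Hypothetical fetch script — adjust host and auth to your Metabase instance.
HOST="${METABASE_HOST:-https://metabase.example.com}"
URL="$HOST/api/ee/serialization/metadata/export?with-databases=true&with-tables=true&with-fields=true"
echo "$URL"
# With a Metabase API key (assumption — your instance may use session auth instead):
# curl -fsS -H "x-api-key: $METABASE_API_KEY" "$URL" -o table_metadata.json
```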

Extracting metadata to YAML

The CLI turns that JSON into the human- and agent-friendly YAML tree described in the spec:

bunx @metabase/database-metadata extract-table-metadata <input-file> <output-folder>
  • <input-file> — path to the table_metadata.json downloaded from Metabase.
  • <output-folder> — destination directory. Database folders are created directly under it.

Extracting the spec

The bundled spec can be extracted to any file — convenient for agents that need to read it locally:

bunx @metabase/database-metadata extract-spec --file ./spec.md

Omit --file to write spec.md into the current directory.

Recommended workflow

The following is the default workflow for a project that wants to use Metabase metadata. It is a convention, not a requirement — teams are free to organize things differently.

1. A .metadata/ directory at the repo root

Create a top-level .metadata/ directory and add it to .gitignore. This is where the raw table_metadata.json and the extracted databases/ YAML tree live:

.metadata/
├── table_metadata.json
└── databases/
    └── …
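Setting this up is two commands from the repo root (the `grep` guard is just a convenience to avoid duplicate `.gitignore` entries):

```shell
# Create the metadata directory and make sure git ignores it.
mkdir -p .metadata
touch .gitignore
grep -qxF '.metadata/' .gitignore || echo '.metadata/' >> .gitignore
```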

2. Why .metadata/ should not be committed

On a large data warehouse the metadata export can easily reach hundreds of megabytes or several gigabytes. Committing it:

  • bloats the repository and slows every clone and fetch,
  • produces noisy diffs on unrelated PRs whenever someone resyncs,
  • can make the repo effectively unusable for CI and for new contributors.

Each developer (or a CI job) fetches metadata on demand from their own Metabase instance instead.

3. Download from Metabase and extract

Each developer downloads table_metadata.json from their Metabase instance and drops it into .metadata/. Then run the extractor:

mkdir -p .metadata
# Drop table_metadata.json from Metabase into .metadata/

rm -rf .metadata/databases
bunx @metabase/database-metadata extract-table-metadata .metadata/table_metadata.json .metadata/databases

After this, tools and agents should read the YAML tree under .metadata/databases/ — not table_metadata.json, which exists only as input to the extractor.

Publishing to NPM

Releases are published automatically by the Release to NPM GitHub Actions workflow on every push to main. The workflow compares the version in package.json against the version published on npm and publishes (with the latest dist-tag) if they differ.

To cut a release, bump version in package.json and merge to main.
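The comparison the workflow performs can be sketched with placeholder version strings; the real workflow reads `package.json` and queries the npm registry rather than using literals like these:

```shell
# Sketch of the publish decision — "1.0.6" and "1.0.5" stand in for the
# local package.json version and the version currently published to npm.
LOCAL="1.0.6"
PUBLISHED="1.0.5"
if [ "$LOCAL" != "$PUBLISHED" ]; then
  echo "publishing $LOCAL"
else
  echo "up to date"
fi
```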

The workflow requires an NPM_RELEASE_TOKEN secret with publish access to the @metabase npm org.

Development

bun install
bun bin/cli.ts extract-table-metadata examples/v1/table_metadata.json /tmp/.metadata/databases

Scripts

  • bun run build — compile TypeScript to dist/ and bundle the spec.
  • bun run type-check — tsc --noEmit.
  • bun run lint-eslint — ESLint with no warnings allowed.
  • bun run lint-format — oxfmt format check.
  • bun run test — bun test suite.

The Lint, Test, and Validate GitHub workflows run on every push and pull request. Validate regenerates the bundled examples and fails if they drift from what's checked in.