
bq-write

v1.2.2

Query your app's BigQuery dataset (Fivetran / Datastream / Airbyte sync) in plain English — using your ORM entities as context.

Ask questions about your data in plain English. bq-write reads your app's source code to understand your schema — entity names, column types, enum values, table relationships — then generates accurate BigQuery SQL and runs it.

bq> how many users completed a conversation in project 661?

  → read src/app/conversations/conversation.entity.ts
  → read src/app/conversations/enums/conversation-status.enum.ts

SELECT COUNT(*) AS total
FROM my-project.my_dataset.conversation
WHERE project_id = 661
  AND status = 'completed'

→ Query done — 1 row(s)

  ┌───────┐
  │ total │
  ├───────┤
  │ 4     │
  └───────┘

There are 4 completed conversations in project 661.

Who is this for?

If your BigQuery dataset is a mirror of an application database — synced via Datastream, Fivetran, Airbyte, or a custom pipeline — the schema alone tells you nothing. Column names like status, type, or role are meaningless without knowing that status = 'ACT' means active, or that contributor is what your app calls a user.

bq-write bridges that gap by reading your application's source code (TypeORM entities, Django models, Prisma schema, etc.) alongside the live dataset — so queries are grounded in the actual business logic, not guesswork.
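As an illustration, here is the kind of file pair such a tool might index. Everything below is a hypothetical sketch — the file names, enum values, and class shape are invented for this example (echoing the `'ACT'` case above), not taken from any real project:

```typescript
// conversation-status.enum.ts — hypothetical enum file. The string values
// ('ACT', 'CMP') are exactly the context needed to write a correct WHERE clause.
export enum ConversationStatus {
  Active = 'ACT',
  Completed = 'CMP',
}

// conversation.entity.ts — simplified entity sketch. Decorators are omitted so
// the example stays dependency-free; a real TypeORM entity would use
// @Entity() / @Column() annotations.
export class Conversation {
  id!: number;
  projectId!: number;          // maps to the project_id column in BigQuery
  status!: ConversationStatus; // stored as the short codes above, not full words
}
```

With these files in context, a generated query can filter on `status = 'CMP'` instead of guessing a plausible-looking value like `'completed'` that matches no rows.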

How it works

An AI agent reads your entity and enum files to understand the domain — column names, status values, relationships — then writes and executes BigQuery SQL directly. No hallucinated column names, no wrong enum values.

Supports Anthropic (Opus, Sonnet, Haiku) and OpenAI (GPT-4o, GPT-4o Mini) — works with whichever API key you have.


Installation

npm install -g bq-write

Requirements:

  • Node.js 18+
  • Anthropic API key and/or OpenAI API key
  • Google Cloud credentials (see BigQuery auth)

Setup

1. API keys

Run once after installation — bq-write will prompt automatically on first run too:

bq-write setup
# ? Anthropic API key › sk-ant-...
# ? OpenAI API key (optional) › sk-...
# ✔ Saved to ~/.config/bq-write/config.json

You only need one key. Keys are stored in ~/.config/bq-write/config.json and never need to be set again.

2. BigQuery auth

gcloud auth application-default login

Don't have gcloud?

brew install google-cloud-sdk   # macOS

Usage

Run from inside your project directory:

cd ~/my-app
bq-write

On first run it will:

  1. Auto-redirect to setup if no API keys are configured
  2. Detect monorepos and ask which app to scope to
  3. Index entity/model files from your project
  4. Ask which model and dataset to use

Then you're in the REPL:

Model   : GPT-4o  (OpenAI)
Dataset : my-project.my_dataset
Project : /Users/you/my-app

bq> how many users signed up this month?
bq> break that down by country
bq> exit

REPL commands

| Command | Description |
|---|---|
| /setup | Update API keys |
| /switch | Change model or dataset |
| /reindex | Re-scan project files |
| /help | Show all commands |
| exit | Quit |

Monorepo support

If your project has an apps/ or packages/ directory, bq-write detects it and asks which app maps to your dataset:

? Monorepo detected — which app maps to this dataset?
❯ apps/api
  apps/worker
  apps/admin
  Entire repo

The selection is remembered. Run bq-write reindex after changing it.


BigQuery auth

bq-write uses Google Application Default Credentials. Run once:

gcloud auth application-default login

Your Google account needs BigQuery Data Viewer and BigQuery Job User roles on the project.
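If you administer the project, those roles can be granted with gcloud. `PROJECT_ID` and `you@example.com` below are placeholders — substitute your own project and account:

```shell
# Grant the two predefined BigQuery roles (PROJECT_ID and the email are placeholders).
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:you@example.com" \
  --role="roles/bigquery.dataViewer"

gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:you@example.com" \
  --role="roles/bigquery.jobUser"
```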


Environment variables

Optional overrides — prefer bq-write setup for API keys.

| Variable | Default | Description |
|---|---|---|
| ANTHROPIC_API_KEY | — | Overrides saved Anthropic key |
| OPENAI_API_KEY | — | Overrides saved OpenAI key |
| BQ_MAX_RESULTS | 100 | Max rows returned per query |
| CONTEXT_MAX_TOKENS | 80000 | Token budget for file reads per turn |
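For example, to override a limit for a single run without touching the saved config (the values below are arbitrary):

```shell
# One-off override: cap results at 500 rows for this invocation only.
BQ_MAX_RESULTS=500 bq-write

# Or export for the rest of the shell session:
export CONTEXT_MAX_TOKENS=40000
bq-write
```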


Contributing

Issues and PRs welcome at github.com/dinesh-choudhary-dev/bq-write.

License

MIT