
bunflare

v0.2.22


Write for Bun. Deploy to Cloudflare. A Bun bundler plugin that automatically replaces Bun-native APIs with Cloudflare Workers equivalents at build time.



Write Bun. Deploy Cloudflare.

Bunflare is a Bun bundler plugin that automatically replaces Bun-native APIs with their Cloudflare Workers equivalents at build time. Zero code changes, maximum compatibility.


🤔 Why Bunflare?

You love Bun. The DX is amazing — fast builds, great APIs, no boilerplate. You write Bun.serve, bun:sqlite, Bun.password.hash(), and life is good.

Then you try to deploy to Cloudflare Workers. And Bun is... not there.

ReferenceError: Bun is not defined

Ouch. 😬

Bunflare fixes that. It runs at build time and automatically transforms all your Bun.* calls into their Cloudflare-native equivalents — D1, KV, R2, WebCrypto, and more. Your source code stays clean and Bun-idiomatic. The bundled output is 100% Workers-compatible.

No runtime overhead. No vendor lock-in. Just write Bun, deploy Cloudflare.


✨ What Gets Shimmed

| Bun API | Cloudflare Equivalent | Status |
|---|---|---|
| Bun.env | Worker env bindings | ✅ Done |
| bun:sqlite / new Database() | Cloudflare D1 | ✅ Done |
| Bun.sql (tagged template) | Cloudflare Hyperdrive + any Postgres driver | ✅ Done |
| import { redis } from "bun" | Cloudflare KV (Redis-over-KV bridge) | ✅ Done |
| Bun.password.hash/verify | WebCrypto (PBKDF2) | ✅ Done |
| Bun.hash() | WebCrypto (SHA-256) | ✅ Done |
| Bun.file() / Bun.write() | Cloudflare R2 | ✅ Done |
| Bun.serve() + routes | Cloudflare Worker fetch handler | ✅ Done |
| Fullstack HTML/SPA Build | Cloudflare Workers Assets | ✅ Done |


🚀 Quick Start

1. Install

bun add -d bunflare

2. Configure your build (Optional)

Create a bunflare.config.ts at the root of your project. Note: Bunflare automatically discovers your bindings from wrangler.jsonc, so this file is often optional or very minimal!

// bunflare.config.ts
import type { BunflareConfig } from "bunflare";

export default {
  entrypoint: "./index.ts",
  
  // Optional: Only needed if you want to override auto-discovery
  // sqlite: { binding: "DB" }, 
  // kv:     { binding: "MY_CACHE" },
  // r2:     { binding: "MY_BUCKET" },
  
  frontend: {
    entrypoint: "./public/index.html",
    outdir: "./dist/public",
    // Optional: Custom loaders for frontend (e.g. for .wasm or .data)
    loader: { ".wasm": "file" }
  },
  
  // Optional: Custom loaders for backend
  loader: { ".txt": "text" }
} satisfies BunflareConfig;

3. TypeScript Setup (Critical)

To get full type safety for Bun.env and Cloudflare bindings, you need a global.d.ts and a proper tsconfig.json.

Create global.d.ts:

// global.d.ts
import "bun";

declare global {
  namespace Bun {
    // Merges Cloudflare Bindings into the global Bun.env object
    interface Env extends CloudflareBindings { }
  }
}

// This interface is automatically populated by 'wrangler types'
// into worker-configuration.d.ts
interface CloudflareBindings { }

interface BunflareEnv {
  ASSETS: { fetch(request: Request): Promise<Response> };
  [key: string]: any;
}

Generate Types:

Run the following command to generate worker-configuration.d.ts based on your wrangler.jsonc:

bun run cf-typegen  # bunx wrangler types --env-interface CloudflareBindings

Update tsconfig.json:

{
  "compilerOptions": {
    "types": ["bun"],
    "moduleResolution": "bundler",
    "jsx": "react-jsx",
    "strict": true
  },
  "include": [
    "global.d.ts", 
    "worker-configuration.d.ts", 
    "src/**/*"
  ]
}

4. Update your package.json scripts

{
  "scripts": {
    "dev":    "bunflare dev",
    "build":  "bunflare build",
    "deploy": "bunflare deploy"
  }
}

5. Wire up wrangler.jsonc

// wrangler.jsonc
{
  "name": "my-app",
  "main": "dist/index.js",
  "compatibility_date": "2025-02-24",
  "d1_databases": [
    { "binding": "DB", "database_name": "my-db", "database_id": "..." }
  ],
  "kv_namespaces": [
    { "binding": "MY_CACHE", "id": "..." }
  ],
  "r2_buckets": [
    { "binding": "MY_BUCKET", "bucket_name": "my-bucket" }
  ],
  "assets": {
    "directory": "dist/public"
  }
}

6. Write your Worker like you're writing Bun

[!IMPORTANT] Export Default Requirement: Cloudflare Workers require your application to be exported as an ES module default export. Whether you use export default Bun.serve(...) directly or assign it to a variable (const server = ...; export default server;), this export is strictly required.

// src/index.ts
import { Database } from "bun:sqlite";
import { redis } from "bun";

export default Bun.serve({
  routes: {
    "/api/hello": async (req) => {
      // bun:sqlite → D1 at deploy time
      const db = new Database("DB");

      // import { redis } from "bun" → Cloudflare KV at deploy time
      await redis.set("greeting", "Hello from Cloudflare!");
      const greeting = await redis.get("greeting");

      return Response.json({ greeting });
    }
  },
  development: true // enables live-reload in dev mode
});

That's it. bun run dev and you're cooking. 🔥


🧠 How It Works

Bunflare hooks into Bun's bundler via a plugin. When you run bunflare build, two things happen:

  1. Virtual Module Resolution: All bun:* imports (like bun:sqlite) are intercepted and replaced with Bunflare's own shim implementations that call Cloudflare APIs instead.

  2. Global AST Transformation: Any reference to Bun.* in your source files (like Bun.serve(), Bun.env, Bun.file()) gets a global preamble injected that maps them to the correct Cloudflare primitives.

The end result is a bundled dist/index.js that is 100% Cloudflare Workers-compatible, with no trace of Bun-specific APIs at runtime.

Your Code (Bun)         →  Bunflare Build  →  Cloudflare Worker
─────────────────────────────────────────────────────────────────
bun:sqlite              →   shim + D1       →  env.DB.prepare(...)
import { redis }        →   KV bridge       →  env.MY_CACHE.get/put(...)
Bun.file() / Bun.write  →   R2 shim         →  env.MY_BUCKET.get/put(...)
Bun.password.hash()     →   WebCrypto       →  crypto.subtle.digest(...)
Bun.serve({ routes })   →   fetch handler   →  export default { fetch }

⚙️ Configuration (bunflare.config.ts)

Bunflare is designed to be Zero Config by automatically discovering your Cloudflare bindings from wrangler.jsonc. However, for fullstack apps or complex builds, you can use bunflare.config.ts to fine-tune the behavior.

import type { BunflareConfig } from "bunflare";
import tailwind from "bun-plugin-tailwind";

export default {
  // The entry point of your Worker (Default: ./src/index.ts or ./index.ts)
  entrypoint: "./src/index.ts",

  // Backend-specific configuration
  sqlite: { binding: "DB" },
  redis:  { binding: "CACHE" },
  
  // Custom Bun plugins for the backend build
  plugins: [tailwind],

  // 📦 Loaders: How Bun treats different file types
  loader: {
    ".txt": "text",
    ".data": "binary"
  },

  // 🌐 Frontend-specific configuration (Fullstack mode)
  frontend: {
    entrypoint: "./public/index.html",
    outdir: "./dist/public",
    plugins: [tailwind],
    // Loaders specific to the frontend build
    loader: {
      ".wasm": "file"
    }
  }
} satisfies BunflareConfig;

🧠 Deep Dive: Loaders

Loaders are one of the most important concepts in the Bun/Bunflare build process. They define how Bun should interpret a file when you import it in your code.

| Loader | Effect | Use Case |
|---|---|---|
| text | Imports the file content as a plain string. | HTML templates, CSS (as string), shader code. |
| file | Returns the URL/path to the file and copies it to outdir. | Images, fonts, WASM files. |
| binary | Imports the file content as a Uint8Array. | Binary data files, certificates. |
| json | Parses the file as JSON. | Config files, local data. |

Why do we use them in Bunflare?

  1. HTML as Text: By default, Bunflare treats .html files as text during the production build. This allows us to inject the final bundled JS paths into the HTML and then return it as a string from your Worker.
  2. Assets as Files: If you have a .png or .woff2 file, using the file loader ensures that Bun copies the file to your dist/public folder so Cloudflare ASSETS can serve it.
  3. Custom Extensions: If you use a special format (like .glsl or .yaml), you must tell Bunflare how to load it, or the build will fail.
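As a concrete sketch of point 3, a config registering the custom extensions mentioned above could look like this (the extensions and loader choices here are illustrative, not defaults):

```typescript
// bunflare.config.ts: illustrative loader entries for custom extensions
import type { BunflareConfig } from "bunflare";

export default {
  entrypoint: "./index.ts",
  loader: {
    ".glsl": "text",  // shader source imported as a plain string
    ".yaml": "text",  // imported as a string; parse it yourself at runtime
  },
} satisfies BunflareConfig;
```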

🛠️ CLI Reference

The bunflare CLI automates the entire build and deploy pipeline.

bunflare init [directory]

Scaffolds a new project from a template.

  • Templates:
    • react: (Default) Modern React 19 + Tailwind + Bun.serve.
    • hono: Hono Framework + React 19 + Fullstack routing.
    • none: Minimal "Hello World" Worker.
  • Example: bunx bunflare init my-app --template hono

bunflare dev [options]

Starts a development server with live-reloading.

  • --local, -l: (Recommended for DX) Runs in Local-Only Mode. This uses Bun's native runtime directly instead of Wrangler/Miniflare. It's much faster and provides a "Pure Bun" experience while still simulating Cloudflare bindings.
  • Default: Runs wrangler dev with the Bunflare plugin. Use this to test exact Cloudflare behavior (like R2/D1 limits).

bunflare build

Compiles your application for production.

  1. Backend: Bundles your code into a single dist/index.js and applies all Cloudflare shims.
  2. Frontend: If frontend is configured, it crawls your HTML, bundles all referenced scripts/styles, and outputs them to dist/public.

bunflare deploy

The "All-in-One" command. It runs bunflare build and then immediately calls wrangler deploy to push your app to the Cloudflare global network.


⚡ Smart Auto-Discovery

Bunflare is designed to be Zero Config. When you run dev, build, or deploy, it automatically parses your wrangler.jsonc to find your bindings.

  • SQLite: Automatically uses your first D1 database.
  • KV / Redis: Automatically uses your first KV namespace.
  • R2: Automatically uses your first R2 bucket.

You only need to define these in bunflare.config.ts if you have multiple bindings and want to specify which one Bun should use as the default.
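For example, if your wrangler.jsonc hypothetically declared two KV namespaces (say CACHE and SESSIONS), you could pin the Redis bridge to the second one explicitly:

```typescript
// bunflare.config.ts: override auto-discovery (binding names are hypothetical)
import type { BunflareConfig } from "bunflare";

export default {
  // Without this, Bunflare would use the first KV namespace listed in wrangler.jsonc
  redis: { binding: "SESSIONS" },
} satisfies BunflareConfig;
```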


📖 API Reference

Bun.serve() → Cloudflare Fetch Handler

Bunflare transforms Bun.serve() into a proper Cloudflare Worker export. All routing logic is preserved.

export default Bun.serve({
  // Route handlers work just like in Bun
  routes: {
    "/api/users": async (req) => {
      return Response.json([{ id: 1, name: "Alice" }]);
    },

    // URL params supported via URLPattern
    "/api/users/:id": async (req) => {
      const { id } = req.params;
      return Response.json({ id });
    },

    "/api/data": {
      // HTTP method handlers
      GET:  async (req) => Response.json({ action: "get" }),
      POST: async (req) => Response.json({ action: "post" }),
    }
  },

  // Fallback fetch handler
  fetch: async (req) => {
    return new Response("Not Found", { status: 404 });
  },

  // Enables live-reload script injection in dev mode
  development: true
});

💡 Tip: Unknown routes automatically fall through to your Cloudflare Workers Assets (your frontend), so you get SPA routing for free if you configure assets in wrangler.jsonc.


bun:sqlite → Cloudflare D1

Write SQLite-style database code and deploy to D1. The API is intentionally Bun-idiomatic.

import { Database } from "bun:sqlite";

// Connect to D1 using the binding name from wrangler.jsonc
const db = new Database("DB");

// Queries work the same way
const stmt = db.query("SELECT * FROM users WHERE active = ?");
const users = stmt.all(1); // ⚠️ See note below

// Runs fire-and-forget (async under the hood on D1)
db.run("INSERT INTO users (name, email) VALUES (?, ?)", "Alice", "[email protected]");

⚠️ Important — Async/Await is Required: D1 is async by nature, while the native bun:sqlite API is synchronous. Bunflare's shim therefore returns Promises from all query methods (.run(), .all()), and you must await them. Because the declared bun:sqlite return types are synchronous (e.g. unknown[] for .all()) while the shim actually returns a Promise, cast with as any to satisfy TypeScript.

const db = new Database("DB");

// Always await your queries on Cloudflare:
await (db.run("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)") as any);
await (db.run("INSERT INTO users (name) VALUES (?)", ["Alice"]) as any);

// .all() returns a D1Result object, rows are under .results
const d1Result = await (db.prepare("SELECT * FROM users LIMIT 10").all() as any);
const rows = d1Result.results; // { id: 1, name: "Alice" }[]

Config:

bunflare({ sqlite: { binding: "DB" } })

Bun.sql → Cloudflare Hyperdrive + PostgreSQL 🐘

This is one of Bunflare's most powerful new features. Write standard Bun.sql tagged-template queries and deploy them to Cloudflare Workers backed by a full PostgreSQL database via Cloudflare Hyperdrive.

Hyperdrive is Cloudflare's connection pooling and caching proxy for external databases. It keeps persistent warm connections at the edge, so connecting to PostgreSQL from a Worker is fast and cheap.

// index.ts — same Bun.sql tagged template syntax you already know
export default Bun.serve({
  routes: {
    "/api/users": async () => {
      // Bun.sql tagged template → Hyperdrive → PostgreSQL
      await Bun.sql`CREATE TABLE IF NOT EXISTS users (id SERIAL PRIMARY KEY, name TEXT)`;

      await Bun.sql`INSERT INTO users (name) VALUES (${"Alice"})`;

      const users = await Bun.sql`SELECT * FROM users LIMIT 10`;

      // .values() returns rows as arrays instead of objects
      const nameList = await Bun.sql`SELECT name FROM users`.values();

      return Response.json({ users, nameList });
    }
  }
});

Step-by-Step Setup

1. Create a Hyperdrive instance on Cloudflare:

bunx wrangler hyperdrive create my-hyperdrive \
  --connection-string "postgres://user:password@your-host:5432/mydb"

This will output a Hyperdrive id — copy it.

2. Install your chosen PostgreSQL driver:

# Option A — postgres.js (recommended)
bun add postgres

# Option B — node-postgres (pg)
bun add pg && bun add -d @types/pg

3. Configure bunflare.config.ts:

import type { BunflareConfig } from "bunflare";

export default {
  sqlite: { binding: "DB" },         // D1 → bun:sqlite
  sql: {
    type: "hyperdrive",              // Use Hyperdrive backend for Bun.sql
    binding: "HYPERDRIVE",           // Must match the binding name in wrangler.jsonc
    driver: "postgres",              // "postgres" | "pg"
  },
} satisfies BunflareConfig;

4. Configure wrangler.jsonc:

{
  "name": "my-app",
  "main": "dist/index.js",
  "compatibility_date": "2025-02-24",
  // ✅ Required! Postgres drivers use Node.js built-ins.
  "compatibility_flags": ["nodejs_compat"],
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id>",
      // For local development only (not deployed):
      "localConnectionString": "postgres://user:[email protected]:5433/mydb"
    }
  ]
}

⚠️ nodejs_compat is mandatory. PostgreSQL drivers (postgres.js, pg) depend on Node.js built-ins like node:stream, node:buffer, and node:events. Without this flag, Wrangler will fail to bundle your Worker with an error like Could not resolve "node:stream".

Choosing a Driver

Bunflare is library-agnostic — you pick the driver, Bunflare adapts. The driver field in bunflare.config.ts tells Bunflare which internal adapter to use.

| driver value | Library | Install command | Notes |
|---|---|---|---|
| "postgres" (default) | postgres.js | bun add postgres | Ships a dedicated Cloudflare Workers build (/cf/) |
| "pg" | node-postgres | bun add pg | Bunflare translates templates to $1, $2 syntax internally |

💡 Recommendation: Use postgres.js. It has first-class Cloudflare Workers support and is significantly more performant. Use pg only if you are migrating an existing codebase that already depends on it.
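The "$1, $2" translation mentioned for the pg adapter can be illustrated with a small pure function. This is a sketch of the idea, not Bunflare's actual implementation:

```typescript
// Sketch: turn a tagged-template call into a pg-style parameterized query.
function toPgQuery(strings: readonly string[], ...values: unknown[]) {
  const text = strings.reduce(
    (acc, part, i) => acc + part + (i < values.length ? `$${i + 1}` : ""),
    ""
  );
  return { text, values };
}

const q = toPgQuery(["SELECT * FROM users WHERE id = ", " AND active = ", ""], 7, true);
// q.text   === "SELECT * FROM users WHERE id = $1 AND active = $2"
// q.values === [7, true]
```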

Local Development with Docker

You need a running PostgreSQL instance locally. The simplest approach is Docker:

# docker-compose.yml
services:
  postgres:
    image: postgres:latest
    container_name: my_postgres
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    ports:
      - "5433:5432"   # 5433 avoids conflicts with any local Postgres on 5432

Then start the container and run the dev server:

docker-compose up -d
bun run dev

⚠️ Use 127.0.0.1, never localhost in your localConnectionString. Modern Node.js (used internally by Wrangler/Miniflare) resolves localhost to the IPv6 address ::1 first. Docker Desktop, however, maps ports only to IPv4 (127.0.0.1). This mismatch causes a silent connection attempt failed error even though the container is running. Always be explicit:

// ❌ This may silently fail
"localConnectionString": "postgres://user:password@localhost:5433/mydb"

// ✅ This works reliably
"localConnectionString": "postgres://user:[email protected]:5433/mydb"

Using a Custom Shim

If you need complete control — e.g., to use a connection pool manager, add mTLS, or inject custom middleware — you can provide your own shim file:

// bunflare.config.ts
export default {
  sql: {
    type: "hyperdrive",
    binding: "HYPERDRIVE",
    custom: "./my-sql-shim.ts",  // path relative to project root
  },
} satisfies BunflareConfig;

Your shim file must export a sql tagged-template function compatible with the Bun.sql API.

D1 vs Hyperdrive: When to Use Each

| | D1 (bun:sqlite) | Hyperdrive (Bun.sql) |
|---|---|---|
| Database engine | Cloudflare-managed SQLite | Any external PostgreSQL |
| API style | new Database() — sync-style with forced await | Tagged templates — fully async |
| Latency | Ultra-low (edge-native) | Low (pooled + cached by Hyperdrive) |
| Best for | Sessions, flags, small tables | Complex queries, existing Postgres DBs |
| wrangler.jsonc key | d1_databases | hyperdrive |
| Extra dependency | None | postgres or pg |



🛠️ Frameworks: Hono

Bunflare has first-class support for Hono. We provide a universal serveStatic adapter that works in both Bun (local dev) and Cloudflare Workers (production) without changing any code.

1. Setup

bun add hono

2. Entrypoint Pattern

For Hono apps, we recommend a hybrid export pattern. This allows Bun's native router to handle the frontend (with automatic transpilation) while Hono handles your API.

// src/index.ts
import { Hono } from "hono";
import { serveStatic } from "bunflare/hono";
import indexHtml from "../public/index.html";

const app = new Hono();

// Universal static middleware:
// - Dev: Serves from ./public via hono/bun
// - Prod: Passes through to Cloudflare ASSETS
app.use("*", serveStatic({ root: "./public" }));

app.get("/api/hello", (c) => c.json({ message: "Hello from Hono!" }));

export default {
  fetch: app.fetch,
  routes: {
    "/": indexHtml // Bun handles the root and transpiles React automatically
  }
};

Why the hybrid export? In dev:local, Hono doesn't know how to transpile .tsx files. By putting indexHtml in the routes property, you hand off the HTML serving to Bun's native router, which handles all the on-the-fly bundling magic for you.

import { redis } from "bun" → Redis-over-KV Bridge ⚡

This is one of Bunflare's most creative features. You get a Redis-compatible API backed by Cloudflare KV. Perfect for rate-limiting, counters, caching, and session management — without any external Redis instance.

import { redis } from "bun";

// Basic CRUD
await redis.set("user:1:name", "Alice");
const name = await redis.get("user:1:name"); // "Alice"
await redis.del("user:1:name");

// Atomic counters (great for rate limiting or visitor counts!)
await redis.incr("page:views");   // 1
await redis.incr("page:views");   // 2
await redis.decr("page:views");   // 1

// Key existence check
const exists = await redis.exists("user:1:name"); // false

// TTL / Expiration (in seconds)
await redis.setex("session:abc123", 3600, "user_data_json");
await redis.expire("session:abc123", 7200); // extend TTL

Config:

bunflare({ redis: { binding: "MY_CACHE" } })
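To give a feel for how the bridge works, an incr over a KV-style get/put interface can be sketched like this. It is a simplification for illustration (the real shim lives in plugin/shims/kv/), and note that a read-modify-write like this is only as atomic as the underlying store:

```typescript
// Minimal KV-like surface, mirroring the subset of Cloudflare KV this sketch needs
interface KVLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// incr implemented as read-modify-write over KV
async function incr(kv: KVLike, key: string): Promise<number> {
  const current = parseInt((await kv.get(key)) ?? "0", 10);
  const next = current + 1;
  await kv.put(key, String(next));
  return next;
}

// In-memory stand-in for trying the idea locally
function memoryKV(): KVLike {
  const store = new Map<string, string>();
  return {
    get: async (k) => store.get(k) ?? null,
    put: async (k, v) => { store.set(k, v); },
  };
}
```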

Bun.file() & Bun.write() → Cloudflare R2

File operations map directly to R2 object storage. Same API, infinite scale.

// Write a file to R2
await Bun.write("uploads/profile.png", imageBuffer);
await Bun.write("config.json", JSON.stringify({ key: "value" }));

// Read a file from R2
const file = Bun.file("uploads/profile.png");
const text    = await file.text();
const buffer  = await file.arrayBuffer();
const json    = await file.json();
const exists  = await file.exists(); // true/false

Config:

bunflare({ r2: { binding: "MY_BUCKET" } })

Bun.password → WebCrypto

Password hashing and verification using the Web Crypto API (PBKDF2 under the hood). Works identically in both Bun and Cloudflare Workers.

// Hash a password
const hash = await Bun.password.hash("my-super-secret-password");
// → "a7b3c9..." (PBKDF2-derived hex, compatible with Workers)

// Verify it later
const isValid = await Bun.password.verify("my-super-secret-password", hash);
// → true

const isWrong = await Bun.password.verify("wrong-password", hash);
// → false

No config needed — the crypto shim is always included automatically.


Bun.hash() → WebCrypto SHA-256

Generic data hashing, useful for generating ETags, content fingerprints, or cache keys.

const hash = await Bun.hash("some data or a Buffer");
// → "2cf24dba5fb..." (hex string)

Bun.env → Worker Bindings

Environment variables work transparently. In Workers, they come from your vars, secrets, and bindings configured via Wrangler. In Bun, they come from .env.

// Works in both Bun and Cloudflare Workers!
const apiKey = Bun.env.MY_API_KEY;
const isDev  = Bun.env.NODE_ENV === "development";


🏗️ Fullstack Architecture

Bunflare is designed for fullstack apps. Here's the recommended project structure:

my-app/
├── index.ts              # Worker entry point (Bun.serve with routes)
├── src/
│   ├── App.tsx           # React/Vue/Svelte frontend
│   └── ...
├── public/
│   └── index.html        # HTML entry point for the SPA
├── dist/                 # Generated (don't commit this!)
│   ├── index.js          # Compiled Worker
│   └── public/           # Compiled frontend assets
├── bunflare.config.ts    # Bunflare configuration
├── wrangler.jsonc        # Cloudflare configuration
└── package.json

wrangler.jsonc — the Cloudflare config:

{
  "name": "my-app",
  "main": "dist/index.js",
  "compatibility_date": "2025-02-24",
  // Required when using PostgreSQL drivers
  "compatibility_flags": ["nodejs_compat"],
  "d1_databases": [
    { "binding": "DB", "database_name": "my-db", "database_id": "..." }
  ],
  "hyperdrive": [
    {
      "binding": "HYPERDRIVE",
      "id": "<your-hyperdrive-id>",
      "localConnectionString": "postgres://user:[email protected]:5433/mydb"
    }
  ],
  "kv_namespaces": [
    { "binding": "CACHE", "id": "..." }
  ],
  "r2_buckets": [
    { "binding": "STORAGE", "bucket_name": "my-storage" }
  ],
  "assets": {
    "directory": "dist/public"
  },
  "build": {
    "command": "bunflare build",
    "watch_dir": "./src"
  }
}

🔌 Using the Plugin Directly (Advanced)

If you prefer to manage the build yourself, you can use the bunflare plugin directly in your Bun.build call:

// build.ts
import { bunflare } from "bunflare";

await Bun.build({
  entrypoints: ["./index.ts"],
  outdir: "./dist",
  target: "browser",
  format: "esm",
  plugins: [
    bunflare({
      sqlite: { binding: "DB" },
      redis:  { binding: "CACHE" },
      r2:     { binding: "STORAGE" },
      frontend: {
        entrypoint: "./public/index.html",
        outdir: "./dist/public",
      }
    })
  ],
});

💡 Real-World Recipes

Visitor Counter with Redis

// index.ts
import { redis } from "bun";

export default Bun.serve({
  routes: {
    "/api/counter": async (req) => {
      const key = "global:visitors";

      if (req.method === "POST") {
        const count = await redis.incr(key);
        return Response.json({ count });
      }

      const count = await redis.get(key) || "0";
      return Response.json({ count: parseInt(count) });
    }
  }
});

File Upload to R2

"/api/upload": async (req) => {
  if (req.method !== "POST") {
    return new Response("Method Not Allowed", { status: 405 });
  }

  const formData = await req.formData();
  const file = formData.get("file") as File;

  if (!file) {
    return new Response("No file provided", { status: 400 });
  }

  // This maps to R2.put() at runtime!
  await Bun.write(file.name, file);

  const saved = Bun.file(file.name);
  const exists = await saved.exists();

  return Response.json({
    success: true,
    filename: file.name,
    size: file.size,
    exists,
    message: "Uploaded to R2! 🎉"
  });
}

User Auth with Password Hashing

"/api/register": async (req) => {
  const { email, password } = await req.json<{ email: string; password: string }>();

  // Hashed with WebCrypto — safe in Workers!
  const hashedPassword = await Bun.password.hash(password);

  const db = new Database("DB");
  db.run(
    "INSERT INTO users (email, password_hash) VALUES (?, ?)",
    email,
    hashedPassword
  );

  return Response.json({ success: true });
},

"/api/login": async (req) => {
  const { email, password } = await req.json<{ email: string; password: string }>();
  const db = new Database("DB");

  const user = db.query("SELECT * FROM users WHERE email = ?").get(email) as any;
  if (!user) {
    return new Response("Unauthorized", { status: 401 });
  }

  const valid = await Bun.password.verify(password, user.password_hash);
  if (!valid) {
    return new Response("Unauthorized", { status: 401 });
  }

  return Response.json({ success: true, userId: user.id });
}

Rate Limiting with Redis TTL

"/api/sensitive-action": async (req) => {
  const redis = Bun.redis();
  const ip = req.headers.get("cf-connecting-ip") || "unknown";
  const rateLimitKey = `rate:${ip}`;

  const attempts = await redis.incr(rateLimitKey);

  if (attempts === 1) {
    // First attempt — set 1-minute window
    await redis.expire(rateLimitKey, 60);
  }

  if (attempts > 10) {
    return Response.json({ error: "Too many requests" }, { status: 429 });
  }

  // Process the action...
  return Response.json({ success: true });
}

Full-Stack: D1 + PostgreSQL Simultaneously 🐘🗄️

This recipe shows Bunflare's most powerful capability: running two databases at once in the same Worker — D1 (SQLite) for fast edge-native access and PostgreSQL (via Hyperdrive) for full relational power.

import { Database } from "bun:sqlite";

export default Bun.serve({
  routes: {
    "/api/hybrid": async () => {
      const results: Record<string, unknown> = {};

      // 1. D1 / SQLite — fast, edge-native, no connection overhead
      const db = new Database("DB");
      await (db.run("CREATE TABLE IF NOT EXISTS sessions (id INTEGER PRIMARY KEY, token TEXT)") as any);
      await (db.run("INSERT INTO sessions (token) VALUES (?)", [`tok_${Date.now()}`]) as any);
      const d1 = await (db.prepare("SELECT * FROM sessions ORDER BY id DESC LIMIT 1").all() as any);
      results.d1 = d1.results; // [ { id: 1, token: "tok_..." } ]

      // 2. PostgreSQL / Hyperdrive — complex queries, joins, full SQL power
      await Bun.sql`CREATE TABLE IF NOT EXISTS orders (id SERIAL PRIMARY KEY, total NUMERIC, user_id INT)`;
      await Bun.sql`INSERT INTO orders (total, user_id) VALUES (${99.99}, ${1})`;
      results.postgres = await Bun.sql`
        SELECT o.id, o.total, o.user_id
        FROM orders o
        ORDER BY o.id DESC
        LIMIT 5
      `;

      return Response.json(results);
    }
  }
});

💡 Design Tip: Keep session data, feature flags, and small lookup tables in D1 (zero latency, no cold starts). Keep your main application data, complex relations, and analytics in PostgreSQL via Hyperdrive.


🗺️ Roadmap

| Version | Goal | Status |
|---|---|---|
| v0.1 | Foundation: env, sqlite, kv, redis, crypto | ✅ Done |
| v0.2 | API Parity: R2, serve, fullstack HTML, live-reload | ✅ Done |
| v0.3 | Testing: Unit + Miniflare + CI/CD | 🚧 In Progress |
| v0.4 | DX: bunflare init, binding validation, better errors | 📅 Planned |
| v0.5 | Bun.sql → Hyperdrive + multi-driver PostgreSQL support | ✅ Done |
| v0.6 | Bun.CryptoHasher (streaming), randomUUIDv7() | 📅 Planned |
| v1.0 | Stable npm release, full docs, VS Code extension | 📅 Planned |


🧪 Architecture Deep Dive

The KV/Redis Shim

One of the most elegant parts of Bunflare is the Redis-over-KV bridge. The shim has a clean separation of concerns:

plugin/shims/kv/
├── index.ts   # Generator — reads logic.ts and injects binding name
└── logic.ts   # Pure implementation — used in both shim and unit tests

logic.ts contains the actual class implementations (KV, RedisClient) that can be imported directly in your unit tests:

// Your test file
import { RedisClient } from "../plugin/shims/kv/logic";

const mockKV = {
  get: async (key) => "mock-value",
  put: async (key, val) => {},
  delete: async (key) => {},
};

// Test the real logic with a mock KV
globalThis.env = { MY_KV: mockKV };
const redis = new RedisClient("MY_KV");
const value = await redis.get("test-key"); // "mock-value"

This architecture means the production shim code is also your test code — no mocking the shim itself.

The Global Preamble Injection

For APIs like Bun.serve, Bun.redis(), and Bun.file() that aren't imported — they're accessed via the global Bun object — Bunflare injects a preamble at the top of every file that contains the word Bun:

// Auto-injected at build time:
import { redis, RedisClient } from "bunflare:kv";
import { serve } from "bunflare:serve";
import { file, write } from "bunflare:r2";
import { BunCrypto } from "bunflare:crypto";
import { env } from "bunflare:env";

if (typeof globalThis.Bun === "undefined") {
  globalThis.Bun = { redis, RedisClient, serve, file, write, ...BunCrypto, env };
}

// Your original code follows:
export default Bun.serve({ ... });

This is what we internally call the "Nuclear Option" — ensuring Bun.* is never undefined, regardless of the Workers environment restrictions.


🤝 Contributing

The project is structured as a Bun workspace:

bunflare/
├── plugin/     # The plugin source code
│   ├── index.ts          # Main plugin entry
│   ├── bin.ts            # CLI (bunflare dev/build/deploy)
│   ├── types.ts          # TypeScript types
│   ├── resolvers/        # Bun namespace resolver
│       ├── redis/        # import { redis } from "bun" → KV bridge shim
│       │   ├── index.ts  # Shim generator logic
│       │   ├── logic.ts  # Redis-over-KV class implementation
│       ├── d1/           # bun:sqlite → D1 shim
│       │   ├── database.ts  # Class-based Database/Statement API
│       │   ├── logic.ts     # Bun.sql → D1 tagged template engine
│       │   └── sql.ts       # Shim code generator
│       ├── hyperdrive/   # Bun.sql → Hyperdrive + PostgreSQL shim
│       │   ├── logic.ts     # Multi-driver adapter (postgres.js / pg / mysql2)
│       │   └── sql.ts       # Shim code generator
│       ├── kv/           # KV + Redis shims
│       │   ├── index.ts
│       │   └── logic.ts     # Pure logic — also used in unit tests
│       ├── r2.ts         # R2 shim
│       ├── crypto.ts     # WebCrypto shim
│       ├── env.ts        # Env shim
│       └── serve.ts      # Serve shim
├── tests/      # All tests live here
│   ├── hyperdrive.test.ts   # Hyperdrive driver & template translation tests
│   ├── sql.test.ts          # D1 SQL shim tests
│   └── ...
├── app/        # Example fullstack application
└── docs/       # Documentation extras

Running Tests

# From the root
bun test

# Individual test files
bun test tests/redis-bridge.test.ts
bun test tests/integration.test.ts
bun test tests/shims.test.ts

All 22 tests should pass. 💚


📄 License

MIT © fhorray