
@x12i/resource-manager

v1.5.0


Cloudflare R2 bucket access for the x12i resource buckets, with support for any bucket name using the S3-compatible API (CRUD/list; requires credentials).

Install

npm i @x12i/resource-manager

Usage

import { R2Buckets } from "@x12i/resource-manager";

const r2 = new R2Buckets(); // loads config via nx-config2 (.env, shared env, etc)

// List ALL keys under a prefix (recursive)
const keys = await r2.list("content", { prefix: "images/icons/" });

// Read object content
const text = await r2.getText("config", "some/file.json");

// Create/Update (overwrite) content
await r2.put("apps", "bundle/app.js", "console.log('hi')", {
  contentType: "application/javascript"
});

// Any bucket name (string) is also supported for authenticated S3 API operations:
await r2.put("some-other-bucket", "path/file.txt", "hello");
const otherKeys = await r2.list("some-other-bucket", { prefix: "path/" });

// Delete
await r2.delete("some-other-bucket", "path/file.txt");

Folders (prefixes)

R2 (like S3) does not have real folders. “Folders” are just prefixes in object keys.

This package supports three common ways to work with them:

// 1) List the bucket root (top-level keys + "folders")
const root = await r2.listRoot("content");
console.log(root.keys);    // e.g. ["top.txt"]
console.log(root.folders); // e.g. ["images/", "docs/"]

// 2) List a single folder (non-recursive)
const folder = await r2.listFolder("content", { prefix: "images" });
console.log(folder.prefix);  // "images/"
console.log(folder.keys);    // e.g. ["images/a.png", "images/b.txt"]
console.log(folder.folders); // e.g. ["images/icons/", "images/photos/"]

// 3) List recursively (flat list of all keys, including nested folders)
const all = await r2.listRecursive("content", { prefix: "images" });
console.log(all.keys); // e.g. ["images/a.png", "images/icons/x.svg", "images/photos/2026/01.jpg"]
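
The "folders" returned by listRoot() and listFolder() come from delimiter-based listing: a key that still contains a "/" after the prefix is collapsed into a single folder entry. The sketch below illustrates that split in pure TypeScript; it is not the package's actual implementation (the real methods delegate this to the S3 API's Delimiter support), and splitByDelimiter is a hypothetical name.

```typescript
// Illustration only: split a flat key list into direct keys and subfolder
// prefixes, the way delimiter-based S3/R2 listing does.
function splitByDelimiter(
  allKeys: string[],
  prefix: string,
  delimiter = "/"
): { keys: string[]; folders: string[] } {
  const keys: string[] = [];
  const folders = new Set<string>();
  for (const key of allKeys) {
    if (!key.startsWith(prefix)) continue;
    const rest = key.slice(prefix.length);
    const slash = rest.indexOf(delimiter);
    if (slash === -1) {
      keys.push(key); // direct child: no further delimiter after the prefix
    } else {
      folders.add(prefix + rest.slice(0, slash + 1)); // collapse into a "folder"
    }
  }
  return { keys, folders: [...folders].sort() };
}
```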

Creating, deleting, renaming folders

Because folders are prefixes, “folder operations” map to object operations:

// Create a folder placeholder (optional, useful for UIs)
await r2.createFolder("content", "images/new-folder");
// Creates a zero-byte object at key "images/new-folder/"

// Delete a folder (recursive) — deletes everything under the prefix
await r2.deleteFolder("content", "images/new-folder");

// Rename/move a folder (recursive) — copy every object, then delete originals
await r2.renameFolder("content", "images/old", "images/new");

Notes:

  • deleteFolder() and renameFolder() can be expensive for large folders because they operate on all objects under the prefix.
  • renameFolder() is implemented as copy + delete per object (the S3/R2 way; there is no atomic rename).
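
The key arithmetic behind that copy + delete loop is simple prefix substitution. The helper below is a sketch of just that mapping step (remapKey is a hypothetical name; the real renameFolder() also performs the per-object copy and delete):

```typescript
// Map a key from its old folder prefix to the new one. Prefixes are
// normalized to end with "/" so "images/old" and "images/old/" behave alike.
function remapKey(key: string, oldPrefix: string, newPrefix: string): string {
  const from = oldPrefix.endsWith("/") ? oldPrefix : oldPrefix + "/";
  const to = newPrefix.endsWith("/") ? newPrefix : newPrefix + "/";
  if (!key.startsWith(from)) {
    throw new Error(`key "${key}" is not under prefix "${from}"`);
  }
  return to + key.slice(from.length);
}
```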

Upload helpers (stream, file path, json, base64)

// Stream / multipart upload
await r2.uploadStream("content", "big/file.bin", someReadableStream);

// Upload from a disk path
await r2.uploadFile("content", "uploads/photo.jpg", "C:/tmp/photo.jpg", {
  contentType: "image/jpeg"
});

// JSON object
await r2.putJson("config", "settings.json", { a: 1, b: true });

// Base64 (raw base64 or data URL)
await r2.putBase64("content", "hello.txt", "data:text/plain;base64,SGVsbG8=");
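
Accepting "raw base64 or data URL" means the helper has to detect the data: scheme and, when present, pull the content type out of it. A minimal sketch of that parsing (decodeBase64Input is a hypothetical name; the real method's internals are not shown in this README):

```typescript
// Decode either a bare base64 string or a data URL. A data URL also
// yields the embedded content type, e.g. "text/plain".
function decodeBase64Input(input: string): { bytes: Buffer; contentType?: string } {
  const match = /^data:([^;,]+)?;base64,(.*)$/s.exec(input);
  if (match) {
    return { bytes: Buffer.from(match[2], "base64"), contentType: match[1] };
  }
  return { bytes: Buffer.from(input, "base64") };
}
```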

Connection check

const report = await r2.testConnection({
  buckets: ["config", "content", "apps", "appLocation", "sandbox"],
  listPrefix: "test/"
});

console.log(report.ok, report.checks);

fix-sandbox — rewrite sandbox URLs to resource-manager calls

When you prototype with direct pub-xxx.r2.dev URLs in your source code, fix-sandbox scans your files and rewrites those hardcoded URLs to r2.publicUrl() calls that use the correct non-sandbox bucket.

How it works

  1. Each per-bucket sandbox has a configured public URL (bucketContentSandboxURL, etc.).
  2. fix-sandbox builds a reverse map from those pub-xxx.r2.dev hostnames to their parent bucket key.
  3. It replaces every match in your source with r2.publicUrl("bucket", "object/key").
  4. For JSX string attributes (src="...", href="...") it also switches from attr="URL" to attr={r2.publicUrl(...)}.
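
Steps 2 and 3 above can be sketched as a single string transform: given a reverse map from sandbox hostnames to bucket keys, rewrite each matching URL into a publicUrl() call. This is a simplified stand-in, not the real implementation (the real fix-sandbox builds the map from your env and also handles the JSX attribute rewriting of step 4):

```typescript
// Rewrite pub-xxx.r2.dev URLs to r2.publicUrl(...) calls, leaving URLs
// for unknown hostnames untouched.
function rewriteSandboxUrls(
  source: string,
  hostToBucket: Record<string, string> // e.g. { "pub-abc123.r2.dev": "content" }
): string {
  return source.replace(
    /https:\/\/(pub-[a-z0-9]+\.r2\.dev)\/([^\s"'`)]+)/g,
    (full, host: string, key: string) => {
      const bucket = hostToBucket[host];
      return bucket ? `r2.publicUrl("${bucket}", "${key}")` : full;
    }
  );
}
```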

Programmatic API

import { fixSandboxUrls } from "@x12i/resource-manager/fix-sandbox";
import { loadR2Env } from "@x12i/resource-manager";

const env = loadR2Env();
const result = fixSandboxUrls(sourceCode, env);
// result.changed       — boolean
// result.code          — transformed source
// result.replacements  — array of { original, replacement, bucket, key }

CLI

# Dry-run (default): show what would change without touching files
x12i-resource-manager fix-sandbox src/components/App.tsx src/pages/index.tsx

# Apply changes — backs up each file to <file>.bak first
x12i-resource-manager fix-sandbox src/components/App.tsx --write

# Versioned backups: each run creates .bak, .bak.1, .bak.2, ...
# If App.tsx.bak already exists, the next run creates App.tsx.bak.1, etc.

# Skip backup (use only when VCS is handling history)
x12i-resource-manager fix-sandbox src/components/App.tsx --write --no-backup

# Filter by extension
x12i-resource-manager fix-sandbox src/components/App.tsx --write --ext tsx,jsx

Example transformation:

// Before
<img src="https://pub-e063916722df470ab3f84bb80ccec0d4.r2.dev/images/logo.png" />
const url = "https://pub-e063916722df470ab3f84bb80ccec0d4.r2.dev/data/config.json";

// After (when bucketContentSandboxURL=https://pub-e063916722df470ab3f84bb80ccec0d4.r2.dev)
<img src={r2.publicUrl("content", "images/logo.png")} />
const url = r2.publicUrl("content", "data/config.json");

Configuration (nx-config2)

This package uses nx-config2 to load environment variables (and supports shared env chaining). Copy .env.example to .env and fill in your Cloudflare R2 credentials, or load env vars any other way.

Important: your R2 credential values must be non-empty (and not just whitespace). If you accidentally set an empty secret, downstream AWS signing can fail with confusing crypto errors.
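
A cheap defensive check before constructing the client catches this early, with a readable message instead of a signing error deep inside the SDK. This guard is a hypothetical helper, not part of the package's API:

```typescript
// Fail fast when a credential is missing or whitespace-only, rather than
// letting AWS SigV4 signing fail later with an opaque crypto error.
function assertNonEmptyCredential(name: string, value: string | undefined): string {
  if (!value || value.trim() === "") {
    throw new Error(
      `${name} is missing or blank; SigV4 signing would fail (e.g. "HMAC key data must not be empty")`
    );
  }
  return value;
}
```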

Buckets: ten ways to address storage

There are nine built-in shortcut keys — five primary keys (config, content, apps, appLocation, sandbox) plus four per-bucket sandbox keys (config:sandbox, content:sandbox, apps:sandbox, appLocation:sandbox) — plus any raw R2 bucket name as a tenth style. Shortcuts are strings you pass to r2.list("content", …) etc.; each resolves to a physical bucket name and (optionally) a public HTTP base for URLs.

.env recap (names only — values are your real bucket names): bucketConfig, bucketContent or bucketResources (both map to logical content), bucketApps, appBucketLocation, bucketSandbox. Per-bucket sandbox S3 names: bucketConfigSandbox, bucketContentSandbox, bucketAppsSandbox, bucketAppLocationSandbox. Public URLs: bucketConfigSandboxURL, bucketContentSandboxURL, bucketAppsSandboxURL, bucketAppLocationSandboxURL.

| # | What you pass in code | Role | Default physical bucket (if unset in .env) | Typical “public”? |
|---|---|---|---|---|
| 1 | "config" | Configuration / settings JSON | config | Custom domain or base URL via domainConfig / configBucket |
| 2 | "content" | General assets, uploads (legacy code alias: "resources" → content) | content | domainResources / resourceBucket (defaults to resources.x12i.com style) |
| 3 | "apps" | Built app JS bundles, artifacts (legacy alias: "jsx" → apps) | apps | domainJsx / jsxBucket (defaults to jsx.x12i.com style) |
| 4 | "appLocation" | Dedicated bucket for app install / location / deployment artifacts (maps from appBucketLocation in .env) | app-location | appBucketLocationURL / domainAppLocation (defaults to app-location.x12i.com style) |
| 5 | "sandbox" | Public-facing sandbox — previews, demos, pub-…r2.dev, content meant to be read over HTTP | sandbox | Yes — treat as public read via bucketSandboxURL / sandboxBucket / domainSandbox |
| 6 | "config:sandbox" | Public sandbox mirror of config (pub-xxx.r2.dev URL) | config-sandbox | Yes — set bucketConfigSandboxURL |
| 7 | "content:sandbox" | Public sandbox mirror of content (pub-xxx.r2.dev URL) | content-sandbox | Yes — set bucketContentSandboxURL |
| 8 | "apps:sandbox" | Public sandbox mirror of apps (pub-xxx.r2.dev URL) | apps-sandbox | Yes — set bucketAppsSandboxURL |
| 9 | "appLocation:sandbox" | Public sandbox mirror of appLocation (pub-xxx.r2.dev URL) | app-location-sandbox | Yes — set bucketAppLocationSandboxURL |
| 10 | Any other string (e.g. "my-team-bucket") | Direct R2 bucket name — not a shortcut | That string is the bucket name | Only if you pass { domain: "…" } to getPublicText / publicUrl, or S3-only |

Credentials: Listing, uploading, and deleting always use the S3-compatible API and your tokenAccessKey / tokenSecretAccessKey (or aliases), regardless of shortcut. “Public” here means how you expose objects over HTTPS (custom domain / r2.dev), not “no credentials for writes.”

Multiple shortcuts → one physical bucket

Logical keys are independent. You may point two or more shortcuts at the same R2 bucket name if you want (e.g. bucketContent=shared and bucketApps=shared, or bucketContent and bucketResources both set to the same value — they both drive logical content). Objects must not collide on key paths — you usually separate by prefix (e.g. content/… vs apps/…). This is advanced; the usual setup is one physical bucket per shortcut.

Mapping shortcuts to your real bucket names (.env)

bucketConfig=config
bucketContent=resources
# Same logical bucket as bucketContent (optional duplicate; use one or both)
# bucketResources=resources
bucketApps=jsx
# appBucketLocation=<your-r2-bucket-name>
# appBucketLocationURL=https://...   # optional public base
# Sandbox: S3 name + public URL (sandbox is intended to be public-read over HTTP)
# bucketSandbox=sandbox
# bucketSandboxURL=https://pub-xxxxxxxx.r2.dev

See also: docs/cloudflare-r2-buckets.md.

Cloudflare setup (buckets, keys)

See docs/cloudflare-r2-buckets.md.

CORS (direct browser access to bucket domains)

If your browser app will fetch() assets directly from an R2 public/custom domain, you must configure CORS on the bucket domain. If you only access R2 from servers/CLIs via the S3 API, CORS does not apply.

Full guide: docs/cors.md.

Wide-open example (use only for fully public assets)

This is the common “allow everything” policy. It’s convenient, but it’s also the least restrictive:

[
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag", "Content-Length", "Content-Type"],
    "MaxAgeSeconds": 3600
  }
]

Recommended: least privilege

Prefer a specific origin (or a short allowlist) and only the methods you actually need:

[
  {
    "AllowedOrigins": ["https://app.example.com"],
    "AllowedMethods": ["GET", "HEAD"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag", "Content-Length", "Content-Type"],
    "MaxAgeSeconds": 86400
  }
]

Apply it with wrangler:

npx wrangler r2 bucket cors put <bucket-name> --file r2-cors.json

Troubleshooting

“HMAC key data must not be empty”

This almost always means your R2 secret access key is empty (or only whitespace).

This package uses Cloudflare R2’s S3-compatible API via @aws-sdk/client-s3, which signs requests using AWS SigV4 (HMAC). If the secret key is empty, the signer can throw this crypto error.

Fix:

  • Ensure your .env contains a real, non-empty secret:
    • tokenSecretAccessKey=... (recommended; see .env.example)
  • Or, if you’re using the Cloudflare Pages secrets flow, ensure the secret is set:
    • R2_SECRET_ACCESS_KEY=...
  • Also confirm the matching access key ID is non-empty:
    • tokenAccessKey=... or R2_ACCESS_KEY_ID=...

If you’re unsure which names are being read, start from .env.example and re-run x12i-resource-manager test-connection.

Tests

# Unit tests (no network)
npm test

# Live integration tests (real R2 access; uses your config)
npm run test:integration

CLI — R2 commands

After install, you get an x12i-resource-manager command.

# List a "folder" (prefix) in a bucket
x12i-resource-manager list content --prefix "test/"

# Download text to stdout
x12i-resource-manager get config "some/file.json"

# Upload a file from disk (streaming)
x12i-resource-manager put-file content "uploads/photo.jpg" --file "C:/tmp/photo.jpg" --contentType "image/jpeg"

# Upload JSON
x12i-resource-manager put-json config "settings.json" --json "{\"a\":1}"

# Upload base64
x12i-resource-manager put-base64 content "hello.txt" --base64 "data:text/plain;base64,SGVsbG8="

# Delete an object
x12i-resource-manager delete content "test/some-key.txt"

# Quick connectivity check (prints JSON report)
x12i-resource-manager test-connection --buckets "config,content,apps,appLocation,sandbox" --prefix "test/"

# Dry-run: show which sandbox URLs would be rewritten (no files changed)
x12i-resource-manager fix-sandbox src/App.tsx src/pages/index.tsx

# Apply rewrites — saves versioned backup (.bak, .bak.1, ...) before overwriting
x12i-resource-manager fix-sandbox src/App.tsx --write

# Skip backup (when VCS already tracks history)
x12i-resource-manager fix-sandbox src/App.tsx --write --no-backup

CLI — Cloudflare Pages commands

These commands are designed to be run from inside your Cloudflare Pages project (a Vite/React app, etc.). Install this package there and use these to connect your project to Cloudflare — no dashboard needed.

What you get

| Command | What it does |
|---|---|
| sync-secrets | Pushes your local .env secrets to Cloudflare Pages (encrypted, server-side only) |
| publish | Deploys your dist/ folder to Cloudflare Pages (new project or new version) |
| deploy | Chains sync-secrets + publish --build in one command |

One-time setup (in your Pages project)

1. Install this package and wrangler

npm install -D @x12i/resource-manager wrangler

2. Log in to Cloudflare (once)

npx wrangler login

This opens a browser; after you click Allow, wrangler saves an OAuth token to ~/.wrangler. That token is reused by all three commands, so no separate API key is needed.

3. Add wrangler.toml to your project root

name = "my-app"                        # your Pages project name
pages_build_output_dir = "./dist"      # where your build outputs to

4. Add scripts to your package.json

{
  "scripts": {
    "sync-secrets":   "x12i-resource-manager sync-secrets",
    "sync-secrets:dry": "x12i-resource-manager sync-secrets --dry-run",
    "publish:pages":  "x12i-resource-manager publish",
    "publish:build":  "x12i-resource-manager publish --build",
    "publish:preview":"x12i-resource-manager publish --preview",
    "deploy":         "x12i-resource-manager sync-secrets && x12i-resource-manager publish --build"
  }
}

5. Configure your .env

# --- PUBLIC variables (VITE_ prefix) ---
# These are baked into the JS bundle at build time.
# sync-secrets SKIPS these automatically.
VITE_API_BASE_URL=https://api.example.com

# --- SECRET variables (no prefix) ---
# These are pushed to Cloudflare Pages by sync-secrets.
# They are available server-side in Pages Functions as context.env.MY_VAR
# They are NEVER sent to the browser.
R2_ACCESS_KEY_ID=your-key
R2_SECRET_ACCESS_KEY=your-secret
PAGES_OUTPUT_DIR=dist          # optional — override publish directory (default: dist)

Or use env.json instead (same precedence: .env wins if both exist):

{
  "R2_ACCESS_KEY_ID": "your-key",
  "R2_SECRET_ACCESS_KEY": "your-secret",
  "PAGES_OUTPUT_DIR": "dist"
}

sync-secrets — mirror env to Cloudflare

Reads your local .env (or env.json), skips VITE_ variables, and pushes the rest as encrypted secrets to your Cloudflare Pages project.

How it works:

  1. .env is read locally — it is never committed or sent anywhere raw
  2. Each non-VITE_ key is pushed via npx wrangler pages secret put KEY --project-name PROJECT
  3. The value is written to stdin — it never appears in process arguments or logs (masked in output)
  4. On Cloudflare, these secrets are injected automatically into Pages Functions as context.env.KEY

npm run sync-secrets              # push all secrets
npm run sync-secrets:dry          # preview without pushing
npm run sync-secrets -- --verbose # show wrangler output per key
npm run sync-secrets -- --env-file .env.production
npm run sync-secrets -- --login   # run wrangler login first
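
Step 3 above (value written to stdin, never in process arguments) can be sketched with Node's spawnSync. Here a child Node process that echoes stdin stands in for the wrangler pages secret put call:

```typescript
import { spawnSync } from "node:child_process";

// Pass a secret value via stdin so it never appears in the argv of the
// child process (and therefore never in `ps` output or shell history).
function pushViaStdin(secret: string): string {
  const result = spawnSync(
    process.execPath, // current Node binary, as a portable stand-in for wrangler
    ["-e", "process.stdin.pipe(process.stdout)"],
    { input: secret, encoding: "utf8" }
  );
  return result.stdout;
}
```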

Optional allowlist — .env.secrets

If you only want to push specific keys, create .env.secrets (key names only, safe to commit):

# .env.secrets — only these keys are pushed by sync-secrets
R2_ACCESS_KEY_ID
R2_SECRET_ACCESS_KEY
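
The key-selection rules (skip VITE_-prefixed public variables; if an allowlist exists, keep only listed names) can be sketched as a small filter. selectSecretKeys is a hypothetical name for illustration:

```typescript
// Decide which env keys sync-secrets would push: never VITE_* keys,
// and only allowlisted names when an allowlist is provided.
function selectSecretKeys(
  env: Record<string, string>,
  allowlist?: string[]
): string[] {
  return Object.keys(env)
    .filter((k) => !k.startsWith("VITE_"))
    .filter((k) => (allowlist ? allowlist.includes(k) : true));
}
```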

Example output:

☁️  Target: Cloudflare Pages project "my-app"

📂 Reading .env from /your/project...

⏭️  Skipping 2 public VITE_ variable(s):
   VITE_API_BASE_URL  (public, build-time only)

🔑 Found 2 secret(s) to sync:

   R2_ACCESS_KEY_ID = your••••••••
   R2_SECRET_ACCESS_KEY = your••••••••

🚀 Pushing secrets to Cloudflare Pages...

   R2_ACCESS_KEY_ID... ✅
   R2_SECRET_ACCESS_KEY... ✅

📊 Done: 2 pushed, 0 failed.
   Project: "my-app"

publish — deploy to Cloudflare Pages

Deploys your output directory to Cloudflare Pages. Handles both first deploy (creates the project) and re-deploy (pushes a new version).

Output directory resolution (priority order):

  1. --dir flag
  2. PAGES_OUTPUT_DIR in .env / env.json
  3. pages_build_output_dir in wrangler.toml
  4. Default: ./dist

npm run publish:pages             # deploy dist/
npm run publish:build             # build first, then deploy
npm run publish:preview           # deploy as preview URL
npm run publish:pages -- --dir out        # deploy different directory
npm run publish:pages -- --message "v1.2" # with commit message
npm run publish:pages -- --dry-run        # preview without deploying
npm run publish:pages -- --login          # run wrangler login first
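
The priority order above is a first-match-wins chain. A sketch, with hypothetical shapes for the three inputs:

```typescript
// Resolve the Pages output directory: --dir beats PAGES_OUTPUT_DIR,
// which beats wrangler.toml's pages_build_output_dir, else "./dist".
function resolveOutputDir(opts: {
  dirFlag?: string;            // --dir
  envPagesOutputDir?: string;  // PAGES_OUTPUT_DIR from .env / env.json
  wranglerTomlDir?: string;    // pages_build_output_dir from wrangler.toml
}): string {
  return (
    opts.dirFlag ??
    opts.envPagesOutputDir ??
    opts.wranglerTomlDir ??
    "./dist"
  );
}
```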

First deploy (new project):

When the Pages project doesn't exist yet, the CLI detects this, asks for confirmation, creates it, then deploys:

☁️  Target: Cloudflare Pages project "my-app"
📁 Output directory: /your/project/dist

⚠️  Pages project "my-app" was not found on Cloudflare.
   Create it now? (y/N) y

🏗️  Creating project "my-app"...
✅ Project "my-app" created.

🚀 Deploying to Cloudflare Pages...

🌐 Live at: https://my-app.pages.dev

✅ Deployment complete.

💡 Next step: push your secrets to this project:
   npm run sync-secrets

deploy — full pipeline in one command

Chains sync-secrets + build + publish:

npm run deploy

This runs:

  1. sync-secrets — push all secrets to Cloudflare
  2. publish --build — build your project and deploy

Authentication

All three Pages commands use the same wrangler OAuth session:

npx wrangler login   # run once — opens browser, click Allow

The token is stored in ~/.wrangler/config/default.toml. The sync-secrets and publish commands spawn wrangler directly (already authenticated).

If you're not logged in, the CLI will detect the error and offer to run npx wrangler login for you:

Not authenticated. Run `npx wrangler login` now? (y/N)

It will never run login silently — always asks first.


How secrets reach Cloudflare (the full picture)

Your local .env / env.json
        │
        │  npm run sync-secrets
        ▼
Cloudflare Pages encrypted secrets store
        │
        │  Cloudflare injects automatically at runtime
        ▼
Your Pages Functions (context.env.MY_VAR)
  • VITE_* variables are baked into the JS bundle at build time by Vite — they never go through sync-secrets
  • Non-VITE_* variables are pushed as encrypted secrets — they live only on Cloudflare's edge, never in your bundle
  • Once pushed, Cloudflare handles injection — no nx-config2 or dotenv needed on the server side

Security reference

| What | Where it lives | Visible to browser? |
|---|---|---|
| VITE_* vars | Baked into JS bundle at build | Yes (public) |
| Non-VITE_* secrets | Cloudflare Pages encrypted store | No |
| Secret values in transit | Piped via stdin to wrangler | No |
| Secret values in CLI output | Masked (your••••••••) | No |
| .env file | Local only, in .gitignore | No |
| .env.secrets allowlist | Key names only, safe to commit | Key names only |
| wrangler.toml name/account_id | Committed to git | Yes (not secret) |
| dist/ contents | Public CDN | Yes (public) |