
@mzedstudio/s3-seam

S3-compatible storage seam: presigned uploads, signed downloads, and an optional public CDN target — behind a small set of helpers, with no tables, no setup ceremony, no provider lock-in, no framework deps.

The wire protocol is the seam: any S3-compatible storage works (AWS S3, Cloudflare R2, Backblaze B2, MinIO, DigitalOcean Spaces, Wasabi, Tigris). Swapping providers is an env-only change.

Edge-runtime safe (signs with aws4fetch). Works in Cloudflare Workers, Convex, Vercel Edge, Deno, Node 19+. The only runtime dep is aws4fetch (~3KB).

Install

npm install @mzedstudio/s3-seam

No peer dependencies.

Usage

import { createUploadIntent } from "@mzedstudio/s3-seam";

const intent = await createUploadIntent({
  // access: "private" by default — read with getSignedDownloadUrl.
  // Pass "public" if you want CDN-cacheable reads via getPublicUrl
  // (requires the public target env to be configured).
  prefix: "avatars",
  scope: userId,
  filename: "photo.png",
  contentType: "image/png",
  allowedContentTypes: ["image/png", "image/jpeg", "image/webp"],
});

// intent.uploadUrl  — browser PUTs the file directly to this URL
// intent.key        — persist this against the user/record
// intent.expiresAt  — ms epoch, when the URL stops working

The browser then PUTs the file to intent.uploadUrl and calls a follow-up mutation/handler (e.g. setAvatar({ key })) to persist intent.key.

The library does not authorize callers. Identity and permission checks are the caller's job.
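A minimal client-side sketch of that round trip. Here requestAvatarUpload and setAvatar are hypothetical stand-ins for whatever backend handlers wrap createUploadIntent and persist the key:

// Hypothetical backend endpoints (names are illustrative, not part of the library).
declare function requestAvatarUpload(args: {
  filename: string;
  contentType: string;
}): Promise<{ uploadUrl: string; key: string; expiresAt: number }>;
declare function setAvatar(args: { key: string }): Promise<void>;

async function uploadAvatar(file: File): Promise<void> {
  const intent = await requestAvatarUpload({
    filename: file.name,
    contentType: file.type,
  });

  // PUT the bytes straight to storage; the server never proxies them.
  const res = await fetch(intent.uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
  if (!res.ok) throw new Error(`Upload failed: ${res.status}`);

  // Persist the key, never the URL.
  await setAvatar({ key: intent.key });
}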

Convex example

// convex/avatars.ts
import { mutation } from "./_generated/server";
import { v, ConvexError } from "convex/values";
import { createUploadIntent, S3SeamError } from "@mzedstudio/s3-seam";

export const requestAvatarUpload = mutation({
  args: { contentType: v.string(), filename: v.string() },
  handler: async (ctx, args) => {
    // Resolve the caller's identity (swap in your own auth helper as needed).
    const identity = await ctx.auth.getUserIdentity();
    if (identity === null) {
      throw new ConvexError({ code: "UNAUTHENTICATED", message: "Sign in required" });
    }
    const userId = identity.subject;
    try {
      return await createUploadIntent({
        prefix: "avatars",
        scope: userId,
        filename: args.filename,
        contentType: args.contentType,
        allowedContentTypes: ["image/png", "image/jpeg", "image/webp"],
      });
    } catch (e) {
      // Re-throw so the error code propagates to the client.
      // (Plain Error.message is stripped by Convex by default.)
      if (e instanceof S3SeamError) {
        throw new ConvexError({ code: e.code, message: e.message });
      }
      throw e;
    }
  },
});

Two targets: private + optional public

The library exposes two named targets, each backed by its own bucket:

  • private (required) — never publicly readable. Reads go through short-lived presigned URLs (getSignedDownloadUrl). Default for every helper.
  • public (optional) — bucket is publicly readable end-to-end. Reads are CDN URLs (getPublicUrl). Use only for cacheable, low-sensitivity assets (avatars, marketing images).

Why two buckets? Public readability is bucket-wide on most providers (R2's r2.dev toggle, S3 ACLs/CloudFront origins). Mixing public and private content in one bucket is impossible without per-key signing, which defeats the point of public reads. Separate buckets keep the policy at the seam.

A project using only private storage configures the private env block and ignores the rest. Adding cacheable public assets later is an env-only change — no code restructure.
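As a sketch, opting a feature into the public target is a single flag on the upload (og-images and siteId are illustrative; the public env block from the Env section must be configured):

import { createUploadIntent, getPublicUrl } from "@mzedstudio/s3-seam";

const intent = await createUploadIntent({
  access: "public",          // opt in; the default is "private"
  prefix: "og-images",
  scope: siteId,
  filename: "card.png",
  contentType: "image/png",
  allowedContentTypes: ["image/png"],
});

// Reads are plain CDN URLs, no per-read signing:
const url = getPublicUrl(intent.key);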

Public surface

import {
  createUploadIntent,
  getPublicUrl,
  getSignedDownloadUrl,
  deleteObject,
} from "@mzedstudio/s3-seam";

| Helper | Use |
|---|---|
| createUploadIntent({ access?, prefix, scope, filename, contentType, allowedContentTypes, bytes?, maxBytes?, expiresInSec? }) | Returns { uploadUrl, key, expiresAt }. access defaults to "private"; pass "public" for cacheable assets. The browser PUTs to uploadUrl directly; the caller stores key. |
| getPublicUrl(key) | CDN/public URL. Only valid for keys uploaded with access: "public". Requires the public target to be configured. |
| getSignedDownloadUrl(key, { expiresInSec? }) | Time-limited presigned GET against the private target. Regenerate per read. |
| deleteObject(key, { access? }) | Deletes the object. Pass the same access mode the key was uploaded with. 404s are silently treated as success. |

A key uploaded with one access mode lives in that target only — the seam never copies between buckets. Match the read helper to the upload mode.
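A sketch of that matching rule (the keys shown are illustrative):

import { getSignedDownloadUrl, deleteObject } from "@mzedstudio/s3-seam";

// Key uploaded with the default private mode: regenerate a presigned GET per read.
const url = await getSignedDownloadUrl("attachments/user_42/uuid-report.pdf", {
  expiresInSec: 300,
});

// Key uploaded with access: "public": delete must name the same target.
await deleteObject("og-images/site_7/uuid-card.png", { access: "public" });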

Env

Required (private target):

STORAGE_ENDPOINT              # e.g. https://<account>.r2.cloudflarestorage.com
STORAGE_REGION                # "auto" for R2, "us-east-1" etc. for S3
STORAGE_BUCKET
STORAGE_ACCESS_KEY_ID
STORAGE_SECRET_ACCESS_KEY

Optional (public target — set both to enable; partial config throws on boot):

STORAGE_PUBLIC_BUCKET         # required to enable public target
STORAGE_PUBLIC_URL            # required to enable public target — base URL the bucket is served from
STORAGE_PUBLIC_ENDPOINT       # optional — defaults to STORAGE_ENDPOINT
STORAGE_PUBLIC_REGION         # optional — defaults to STORAGE_REGION
STORAGE_PUBLIC_ACCESS_KEY_ID  # optional — defaults to STORAGE_ACCESS_KEY_ID
STORAGE_PUBLIC_SECRET_ACCESS_KEY  # optional — defaults to STORAGE_SECRET_ACCESS_KEY

The override vars exist so you can put the public bucket on a different account/provider (e.g. AWS for the public CDN, R2 for private). Most setups leave them unset and reuse the private credentials.

Use separate buckets per environment — staging and production should never share a bucket. The env block above is per-deployment, so your platform's env-management command (e.g. convex env set, wrangler secret put) is the seam.

STORAGE_ENDPOINT is the account-level S3 host — do not include the bucket name. The bucket goes in STORAGE_BUCKET; the seam joins them at request time. R2's dashboard sometimes shows a full URL like https://<account>.r2.cloudflarestorage.com/<bucket> — strip the trailing /<bucket> when you set the env, otherwise every signed URL doubles the bucket and 404s. (Path-prefixed endpoints are supported for proxied setups like MinIO behind a sub-path, but they should never end with the bucket name.)
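For example, with an R2 account hash of abc123 and a private bucket named app-private (both placeholders):

# Correct: account-level host, bucket kept separate
STORAGE_ENDPOINT=https://abc123.r2.cloudflarestorage.com
STORAGE_BUCKET=app-private

# Wrong: bucket baked into the endpoint; signed URLs will 404
# STORAGE_ENDPOINT=https://abc123.r2.cloudflarestorage.com/app-private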

Key shape

createUploadIntent returns keys of the form:

<prefix>/<scope>/<uuid>-<sanitized-filename>

  • prefix — namespace by feature (avatars, attachments, etc.).
  • scope — ownership boundary (usually a userId or orgId).
  • uuid — uniqueness, prevents overwrites.
  • sanitized-filename — original filename, made URL-safe, extension preserved.

Callers store the key, never the URL. URLs are regenerated on each read via getPublicUrl or getSignedDownloadUrl.
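For example, the avatar upload above might yield a key like this (the userId and uuid are illustrative):

avatars/user_42/3f8a9c1e-7b2d-4e5f-9a10-6c8d0e1f2a3b-photo.png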

Upload size enforcement

When you pass both bytes (the client-declared object size) and maxBytes, the helper:

  1. Rejects early if bytes > maxBytes.
  2. Signs the URL with Content-Length: bytes. The browser must send a matching header — S3 rejects on mismatch — so callers can't upload more than maxBytes.

Without bytes you don't get this guarantee, so always send it from the client when you care about caps.
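A sketch of a capped upload (the attachments prefix and 10 MB figure are illustrative). With fetch, the browser derives Content-Length from the PUT body automatically, so the client only needs to declare file.size here:

import { createUploadIntent } from "@mzedstudio/s3-seam";

const intent = await createUploadIntent({
  prefix: "attachments",
  scope: userId,
  filename: file.name,
  contentType: file.type,
  allowedContentTypes: ["application/pdf"],
  bytes: file.size,             // client-declared size, baked into the signature
  maxBytes: 10 * 1024 * 1024,   // anything larger is rejected before signing
});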

CORS

Direct browser uploads require CORS on every bucket that accepts uploads (private and, if enabled, public). Set this once per bucket; the values below assume an origin at https://app.example.com — adjust for your deployments.

Cloudflare R2

R2 dashboard → bucket → Settings → CORS Policy:

[
  {
    "AllowedOrigins": ["https://app.example.com", "http://localhost:3000"],
    "AllowedMethods": ["PUT", "GET", "DELETE"],
    "AllowedHeaders": ["content-type", "content-length"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600
  }
]

AWS S3

Bucket → Permissions → CORS:

[
  {
    "AllowedOrigins": ["https://app.example.com", "http://localhost:3000"],
    "AllowedMethods": ["PUT", "GET", "DELETE"],
    "AllowedHeaders": ["*"],
    "ExposeHeaders": ["ETag"]
  }
]

Backblaze B2

Use the b2 update-bucket --cors-rules CLI or the dashboard's CORS editor with equivalent fields.

Provider gotchas (public target)

getPublicUrl(key) joins as <STORAGE_PUBLIC_URL>/<key> — no bucket name is added. Whatever you set must produce a loadable URL when the key is appended. If your host serves files path-style with the bucket in the path, include the bucket in STORAGE_PUBLIC_URL.

  • R2 with r2.dev public access — Cloudflare issues a bucket-scoped https://pub-<hash>.r2.dev domain (the hash is unique to that one bucket). Objects are served at the root, so set STORAGE_PUBLIC_URL=https://pub-<hash>.r2.dev with no bucket name in the path. Verify by pasting <STORAGE_PUBLIC_URL>/<some-existing-key> in a browser before wiring it up.
  • R2 with custom domain — set STORAGE_PUBLIC_URL to the custom domain (e.g. https://cdn.example.com). Custom domains are bucket-scoped, so no bucket prefix needed.
  • AWS S3 + CloudFront — set STORAGE_PUBLIC_URL to the CloudFront distribution URL. The origin bucket can stay private (origin access identity); the CDN serves reads.
  • Region "auto" — R2 wants auto; AWS S3 wants the bucket's actual region.

Errors

All thrown errors are S3SeamError instances with a stable code field. Branch on e.code, not on e.message.

import { S3SeamError } from "@mzedstudio/s3-seam";

try { /* ... */ }
catch (e) {
  if (e instanceof S3SeamError && e.code === "STORAGE_OBJECT_TOO_LARGE") { /* ... */ }
}

In a Convex mutation/action, re-throw as ConvexError to propagate the code to the client (Convex strips raw Error.message by default). See the Convex example above.

| Code | When |
|---|---|
| STORAGE_CONFIG_MISSING | A required private env var is not set |
| STORAGE_CONFIG_INVALID | An endpoint/url env is not parseable |
| STORAGE_PUBLIC_CONFIG_INCOMPLETE | One of STORAGE_PUBLIC_BUCKET / STORAGE_PUBLIC_URL is set without the other |
| STORAGE_PUBLIC_TARGET_NOT_CONFIGURED | access: "public" or getPublicUrl called without the public env |
| STORAGE_NO_ALLOWED_CONTENT_TYPES | allowedContentTypes is empty |
| STORAGE_CONTENT_TYPE_NOT_ALLOWED | Client content type isn't in the allow-list |
| STORAGE_INVALID_BYTES | bytes is negative or non-finite |
| STORAGE_OBJECT_TOO_LARGE | bytes > maxBytes |
| STORAGE_INVALID_PREFIX / STORAGE_INVALID_SCOPE | Empty or contains a path separator |
| STORAGE_DELETE_FAILED | Provider returned a non-2xx, non-404 response on delete |

Why this shape

  • One adapter for all S3-compatible providers — the wire protocol is the seam. Swap = env change, not a code change.
  • aws4fetch for signing — fetch-native, ~3KB, zero deps, edge-runtime safe (Cloudflare's own R2 docs use it). SigV4 is a frozen spec, so quiet maintenance is feature-completeness, not abandonment.
  • Two targets, not one bucket with mixed policy — public readability is bucket-wide on most providers. Splitting at the seam matches reality and keeps the policy decision in env, not in feature code.
  • Private is the default — features opt into public exposure, not out of it. Projects that never enable the public target have zero leakage surface.
  • Browser uploads direct to bucket — server code never proxies bytes. It signs, returns the URL, records the key on success.
  • Zero framework deps — only runtime dep is aws4fetch. Use it from Convex, Cloudflare Workers, Vercel Edge, Deno, or plain Node.

Development

pnpm install
pnpm test
pnpm build

License

Apache-2.0