

unified-cloud-storage

A unified, multi-cloud storage solution for Node.js and NestJS

Features

  • Multi-Cloud Support: Seamlessly work with AWS, GCP, and Azure using the same API.
  • Single & Multiple File Uploads: Stream or batch uploads with optional concurrency control.
  • Download & Zip: Generate signed download URLs, or download multiple files as a single zip.
  • File Deletion: Delete single or multiple files across all providers.
  • Signed URLs & Access Control: Easily manage public/private visibility and URL expiration.
  • NestJS & Express Ready: Works with controllers and services out-of-the-box.
  • CDN & Caching Support: Optional integration with CloudFront, Azure CDN, or GCP domains.
  • TypeScript-first: Fully typed for autocompletion and type safety.

Simplify file storage across AWS S3, Google Cloud Storage, and Azure Blob Storage with a single, consistent API. Forget about juggling different SDKs — this package provides an easy-to-use, extensible, and powerful abstraction layer for all your cloud storage needs.

CloudStorageFactory – Provider Configuration Guide

CloudStorageFactory provides a unified interface to interact with multiple cloud storage providers (AWS, GCP, Azure) using a single, consistent API.

1. Google Cloud Storage (GCS) Provider

To use Google Cloud Storage with CloudStorageFactory, set the provider to GCP and supply the following configuration.

Basic Usage

import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";

const cloudStorage = CloudStorageFactory.create({
  provider: CloudProvider.GCP,
  gcp: {
    baseBucket: "my-gcs-bucket",
    serviceAccount: '{ /* service account JSON */ }',
  },
});

GCP Configuration Options

{
  provider: CloudProvider.GCP,
  gcp: {
    baseBucket: process.env.GCS_BASE_BUCKET!,
    serviceAccount: JSON.parse(process.env.GCS_SERVICE_ACCOUNT_KEY!),

    httpProxy: process.env.HTTP_PROXY,                        // Optional
    prefix: process.env.GCS_PREFIX ?? undefined,              // Optional
    cacheProfile: process.env.CACHE_PROFILE as CacheProfile,  // Optional
    signedUrlTTL: process.env.SIGNED_URL_TTL                  // Optional
      ? Number(process.env.SIGNED_URL_TTL)
      : undefined,

    gcpCdnDomain: process.env.GCP_CDN_DOMAIN,                 // Optional, e.g. cdn.example.com
  },
}

Required Fields

baseBucket

Type: string
Description: Name of the Google Cloud Storage bucket where files will be uploaded.

serviceAccount

Type: object | string
Description: The full GCP service account JSON (credentials with access to the bucket), typically stored as a string in an environment variable.

Important:

  • Do NOT split fields
  • Do NOT remove any keys
  • Store the entire JSON exactly as downloaded

Example GCS_SERVICE_ACCOUNT_KEY

GCS_SERVICE_ACCOUNT_KEY='{
  "type": "service_account",
  "project_id": "my-gcp-project",
  "private_key_id": "abc123",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "[email protected]",
  "client_id": "123456789",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/...",
  "universe_domain": "googleapis.com"
}'

Optional Fields

httpProxy

Type: string
Description: HTTP proxy URL, if outbound traffic must go through a proxy.

prefix

Type: string
Description: Optional prefix automatically prepended to all object keys. Useful for environment-based isolation (prod/, dev/, etc.).

cacheProfile

Type: CacheProfile (enum from this package)
Description: Controls Cache-Control headers applied to uploaded objects when a CDN is used.

Available values:

  • CacheProfile.LONG_LIVED_CACHE
  • CacheProfile.SHORT_LIVED_CACHE

| Profile           | Cache-Control Header                | Use case                                     |
| ----------------- | ----------------------------------- | -------------------------------------------- |
| LONG_LIVED_CACHE  | public, max-age=31536000, immutable | Static assets, media                         |
| SHORT_LIVED_CACHE | private, max-age=600                | User-specific or frequently changing content |

signedUrlTTL

Type: number (seconds)
Description: Default expiration time for GCS signed URLs. Can be overridden per request.

(The value from the config is used if present; otherwise the value passed at upload time; otherwise it defaults to 1 hour.)

gcpCdnDomain

Type: string
Description: Custom domain backed by Google Cloud CDN (HTTPS Load Balancer + backend bucket).

Example: cdn.example.com

URL Behavior Summary

| Access Type             | Returned URL                                     |
| ----------------------- | ------------------------------------------------ |
| Public + CDN configured | https://cdn.example.com/object                   |
| Public (no CDN)         | https://storage.googleapis.com/bucket/object     |
| Signed                  | GCS signed URL (Cloud CDN caches it if enabled)  |

When gcpCdnDomain is provided:

  • Public URLs are returned using the CDN domain
  • Signed URLs are still generated by GCS and cached by Cloud CDN
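For illustration, a minimal sketch combining these options. The bucket, file, and domain names are placeholders; it assumes CacheProfile is exported from the package root and uses the uploadSingleFile() method described later in this README:

import { CloudStorageFactory, CloudProvider, CacheProfile } from "unified-cloud-storage";
import { createReadStream } from "node:fs";

const cloudStorage = CloudStorageFactory.create({
  provider: CloudProvider.GCP,
  gcp: {
    baseBucket: "my-gcs-bucket",
    serviceAccount: process.env.GCS_SERVICE_ACCOUNT_KEY!,  // full service-account JSON (string)
    cacheProfile: CacheProfile.LONG_LIVED_CACHE,           // long-lived Cache-Control for CDN-served assets
    gcpCdnDomain: "cdn.example.com",                       // custom domain backed by Cloud CDN
  },
});

async function uploadPublicAsset() {
  // With the CDN domain configured, a public upload should come back with a
  // https://cdn.example.com/... URL (see the URL behavior table above).
  const result = await cloudStorage.uploadSingleFile(
    createReadStream("./logo.png"),
    "logo.png",
    "image/png",
    { visibility: "public", urlAccess: "public" },
  );
  console.log(result.urlType, result.url);
}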

2. Amazon Web Services (AWS) S3 Provider

To use Amazon S3 with CloudStorageFactory, set the provider to AWS and supply the following configuration.

Basic Usage

import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";

const cloudStorage = CloudStorageFactory.create({
  provider: CloudProvider.AWS,
  aws: {
    bucket: "my-s3-bucket",
    region: "ap-south-1",
    accessKeyId: "AKIA...",
    secretAccessKey: "******",
  },
});

AWS Configuration Options

{
  provider: CloudProvider.AWS,
  aws: {
    bucket: process.env.AWS_S3_BUCKET!,
    region: process.env.AWS_REGION!,

    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,

    httpProxy: process.env.HTTP_PROXY,                        // Optional
    prefix: process.env.AWS_S3_PREFIX ?? undefined,           // Optional
    cacheProfile: process.env.CACHE_PROFILE as CacheProfile,  // Optional
    signedUrlTTL: process.env.SIGNED_URL_TTL                  // Optional
      ? Number(process.env.SIGNED_URL_TTL)
      : undefined,
    cloudFrontDomain: process.env.CLOUDFRONT_DOMAIN,          // Optional
    cloudFrontKeyPairId: process.env.CLOUDFRONT_KEY_PAIR_ID,  // Optional
    cloudFrontPrivateKey: process.env.CLOUDFRONT_PRIVATE_KEY, // Optional
  },
}

Required Fields

bucket

Type: string
Description: Name of the Amazon S3 bucket where files will be uploaded.

region

Type: string
Description: AWS region where the S3 bucket is hosted.
Example: ap-south-1

accessKeyId

Type: string
Description: AWS IAM access key with permission to access the S3 bucket.

secretAccessKey

Type: string
Description: AWS IAM secret key corresponding to the access key.

Optional Fields

httpProxy

Type: string
Description: HTTP proxy URL, if outbound traffic must go through a proxy.

prefix

Type: string
Description: Optional prefix automatically prepended to all S3 object keys. Useful for environment-based isolation (prod/, dev/, etc.).

cacheProfile

Type: CacheProfile (enum from this package)
Description: Controls Cache-Control headers applied to S3 objects when CloudFront is used.

Available values:

  • CacheProfile.LONG_LIVED_CACHE
  • CacheProfile.SHORT_LIVED_CACHE

| Profile           | Cache-Control Header                | Use case                                     |
| ----------------- | ----------------------------------- | -------------------------------------------- |
| LONG_LIVED_CACHE  | public, max-age=31536000, immutable | Static assets, media                         |
| SHORT_LIVED_CACHE | private, max-age=600                | User-specific or frequently changing content |

signedUrlTTL

Type: number (seconds)
Description: Default expiration time for AWS S3 signed URLs. Can be overridden per request.

(The value from the config is used if present; otherwise the value passed at upload time; otherwise it defaults to 1 hour.)

cloudFrontDomain

Type: string
Description: CloudFront distribution domain pointing to the S3 bucket.

Example: d3abcd1234xyz.cloudfront.net

To enable CloudFront signed URLs, the following parameters must be provided in addition to cloudFrontDomain.

cloudFrontKeyPairId

Type: string

Description: The CloudFront key pair ID associated with a trusted key group or the root account. CloudFront uses it to determine which public key should verify the signed URL.

Example: APKAIATXXXXXXXXXXXXXXXX

cloudFrontPrivateKey

Type: string

Description: The private key corresponding to the CloudFront Key Pair ID.

Used by the application to cryptographically sign CloudFront URLs.

Important:

  • Must be the full RSA private key
  • Store securely (environment variable or secret manager)
  • Never commit to source control

Example (environment variable):

CLOUDFRONT_PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAr...
...
-----END RSA PRIVATE KEY-----"

URL Behavior Summary

| Access Type                 | Returned URL                                                                             |
| --------------------------- | ---------------------------------------------------------------------------------------- |
| Public + CloudFront enabled | https://d3abcd1234xyz.cloudfront.net/object                                              |
| Public (no CloudFront)      | https://bucket.s3.region.amazonaws.com/object                                            |
| S3_SIGNED                   | S3 signed URL (CloudFront caches if configured)                                          |
| CLOUDFRONT_SIGNED           | https://d3abcd123.cloudfront.net/path/file.png?Expires=...&Signature=...&Key-Pair-Id=... |

Works for private CloudFront distributions with trusted key groups.
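Below is a minimal signed-URL sketch under these assumptions. The env var values and the local file are placeholders, and it uses the uploadSingleFile() method described later in this README:

import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";
import { createReadStream } from "node:fs";

const cloudStorage = CloudStorageFactory.create({
  provider: CloudProvider.AWS,
  aws: {
    bucket: process.env.AWS_S3_BUCKET!,
    region: process.env.AWS_REGION!,
    accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
    cloudFrontDomain: process.env.CLOUDFRONT_DOMAIN,           // e.g. d3abcd1234xyz.cloudfront.net
    cloudFrontKeyPairId: process.env.CLOUDFRONT_KEY_PAIR_ID,
    cloudFrontPrivateKey: process.env.CLOUDFRONT_PRIVATE_KEY,  // full RSA private key
  },
});

async function uploadPrivateInvoice() {
  // A private object requested with signed access; with the CloudFront key material
  // configured, the returned urlType should be CLOUDFRONT_SIGNED (see the table above).
  const result = await cloudStorage.uploadSingleFile(
    createReadStream("./invoice.pdf"),
    "invoice.pdf",
    "application/pdf",
    { visibility: "private", urlAccess: "signed", urlTtlSeconds: 900 },
  );
  console.log(result.urlType, result.url);
}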

3. Microsoft Azure Blob Storage Provider

To use Azure Blob Storage with CloudStorageFactory, set the provider to AZURE and supply the following configuration.

Basic Usage

import { CloudStorageFactory, CloudProvider } from "unified-cloud-storage";

const cloudStorage = CloudStorageFactory.create({
  provider: CloudProvider.AZURE,
  azure: {
    accountName: "mystorageaccount",
    accountKey: "********",
    container: "my-container",
  },
});

Azure Configuration Options

{
  provider: CloudProvider.AZURE,
  azure: {
    accountName: process.env.AZURE_STORAGE_ACCOUNT!,
    accountKey: process.env.AZURE_STORAGE_KEY!,
    container: process.env.AZURE_BLOB_CONTAINER!,

    httpProxy: process.env.HTTP_PROXY,                         // Optional
    prefix: process.env.AZURE_BLOB_PREFIX ?? undefined,        // Optional
    cacheProfile: process.env.CACHE_PROFILE as CacheProfile,  // Optional
    signedUrlTTL: process.env.SIGNED_URL_TTL                   // Optional
      ? Number(process.env.SIGNED_URL_TTL)
      : undefined,

    azureCdnDomain: process.env.AZURE_CDN_DOMAIN,              // Optional
  },
}

Required Fields

accountName

Type: string
Description: Name of the Azure Storage Account.

Example: mystorageaccount

accountKey

Type: string
Description: Access key for the Azure Storage Account.

Notes:

  • Generated in Azure Portal → Storage Account → Access keys
  • Either key1 or key2 can be used

container

Type: string
Description: Name of the Blob Storage container where files will be stored.

Optional Fields

httpProxy

Type: string
Description: HTTP proxy URL, if outbound traffic must go through a proxy.

prefix

Type: string
Description: Optional prefix automatically prepended to all blob names. Useful for environment-based isolation (prod/, dev/, etc.).

cacheProfile

Type: CacheProfile (enum from this package)
Description: Controls Cache-Control headers applied to uploaded blobs when Azure CDN is used.

Available values:

  • CacheProfile.LONG_LIVED_CACHE
  • CacheProfile.SHORT_LIVED_CACHE

| Profile           | Cache-Control Header                | Use case                                     |
| ----------------- | ----------------------------------- | -------------------------------------------- |
| LONG_LIVED_CACHE  | public, max-age=31536000, immutable | Static assets, media                         |
| SHORT_LIVED_CACHE | private, max-age=600                | User-specific or frequently changing content |

signedUrlTTL

Type: number (seconds)
Description: Default expiration time for Azure SAS URLs. Can be overridden per request.

(The value from the config is used if present; otherwise the value passed at upload time; otherwise it defaults to 1 hour.)

azureCdnDomain

Type: string
Description: Custom domain backed by Azure CDN pointing to the blob container.

Example: cdn.example.com

URL Behavior Summary

| Access Type       | Returned URL                                                                  |
| ----------------- | ----------------------------------------------------------------------------- |
| AZURE_BLOB_PUBLIC | https://account.blob.core.windows.net/container/path/file.png                 |
| AZURE_CDN_PUBLIC  | https://cdn.example.com/path/file.png                                          |
| AZURE_BLOB_SIGNED | https://account.blob.core.windows.net/container/path/file.png?sv=...&sig=...   |
| AZURE_CDN_SIGNED  | https://cdn.example.com/path/file.png?sv=...&sig=...                            |

NOTE: In Azure, public blob access is controlled at the CONTAINER level, so the container must be configured to allow public blob access.

Unlike GCS, where individual files can be made public, Azure requires one of:

  • PublicAccessType = "blob" (anonymous read for blobs only)
  • PublicAccessType = "container" (anonymous read for the container and its blobs)

Verify that the container allows public access before relying on public URLs.
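For reference, a one-time setup sketch using the @azure/storage-blob SDK directly (this is separate from unified-cloud-storage; the container name is a placeholder):

import { BlobServiceClient, StorageSharedKeyCredential } from "@azure/storage-blob";

async function allowPublicBlobAccess() {
  const accountName = process.env.AZURE_STORAGE_ACCOUNT!;
  const credential = new StorageSharedKeyCredential(accountName, process.env.AZURE_STORAGE_KEY!);
  const serviceClient = new BlobServiceClient(
    `https://${accountName}.blob.core.windows.net`,
    credential,
  );

  // "blob" = anonymous read access for individual blobs (not for listing the container)
  await serviceClient.getContainerClient("my-container").setAccessPolicy("blob");
}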

Performing Storage Operations (Upload, Download and Delete)

1. Single File Upload (Streaming to Cloud Storage)

This module provides stream-based single file upload to cloud storage (AWS S3 / GCS / Azure Blob). Files are streamed directly to the provider, so nothing is stored on the server, and large files are supported.

Provider: uploadSingleFile()

  • Uploads one file at a time
  • Streams directly to cloud storage
  • Supports public and private files
  • Returns either a public URL or a signed URL
  • Handles CDN URLs (CloudFront / Akamai / Cloud CDN / Azure CDN) if configured
  • TTL (expiry) can be set for signed URLs

How it works (simple):

  • Receives the file as a stream
  • Streams the file to cloud storage
  • Sets visibility (public/private)
  • Returns a URL you can use to download

Provider API

Input Parameters

stream — File stream from request

filename — Original file name

mimeType — File content type

options — Optional:

visibility — "public" or "private" (default "private")

urlAccess — "public" or "signed" (default "signed")

urlTtlSeconds — Signed URL expiry in seconds (default: 3600 seconds (1 hour))

Returns

{
  url: string;        // Download URL
  key: string;        // Path in cloud storage
  bucket: string;     // Bucket / container name
  filename: string;   // Original filename
  mimeType: string;   // Content type
  isPublic: boolean;  // Public or private
  urlType:
    | "GCS_PUBLIC" | "GCS_SIGNED"
    | "CLOUDFRONT_PUBLIC" | "CLOUDFRONT_SIGNED"
    | "CLOUD_CDN_PUBLIC" | "CLOUD_CDN_SIGNED"
    | "S3_PUBLIC" | "S3_SIGNED"
    | "AZURE_CDN_PUBLIC" | "AZURE_CDN_SIGNED"
    | "AZURE_BLOB_SIGNED" | "AZURE_BLOB_PUBLIC";
}

Important

  • filename must be unique (use a UUID or timestamp)
  • The provider handles streaming, ACLs, and signed URLs automatically
  • Cache headers are applied only if a CDN is configured
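For example, a simple way to keep filenames unique (a hypothetical helper, not part of the package; crypto.randomUUID requires Node 16.17+):

import { randomUUID } from "node:crypto";
import { extname } from "node:path";

// e.g. "report.pdf" -> "3f8b1c2e-....pdf", so objects with the same name are not overwritten
const uniqueName = (original: string) => `${randomUUID()}${extname(original)}`;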

How to Use the Provider

Controller

  @Post("upload-single-file")
  upload(@Req() req: Request, @Res() res: Response) {
    return this.service.handleUpload(req, res);
  }

Service

  • Validates the request content type
  • Reads optional query parameters
  • Parses multipart data using Busboy
  • Streams the file directly to cloud storage
  • Applies cache headers (if a CDN is enabled)
  • Returns a response exactly once per request

  async handleUpload(req: Request, res: Response) {
    // ------------------- VALIDATION -------------------
    if (!req.headers["content-type"]?.includes("multipart/form-data")) {
      return res
        .status(400)
        .json({ message: "Content-Type must be multipart/form-data" });
    }

    let fileReceived = false;
    let responded = false;

    const fail = (status: number, message: string) => {
      if (!responded) {
        responded = true;
        res.status(status).json({ message });
      }
    };

    // ------------------- PARSE QUERY PARAMS -------------------
    const legacyIsPublic = req.query.isPublic === "true";

    const options = {
      // visibility : public or private (default private)
      visibility:
        req.query.visibility === "public" || legacyIsPublic
          ? "public"
          : "private",

      // URL type: public or signed (default signed)
      urlAccess: req.query.urlAccess === "public" ? "public" : "signed",

      // TTL in seconds
      urlTtlSeconds: req.query.signedUrlTTL
        ? Number(req.query.signedUrlTTL)
        : undefined,
    };

    // ------------------- BUSBOY SETUP -------------------
    const busboy = Busboy({ headers: req.headers });

    busboy.on("file", (_field, file, info) => {
      fileReceived = true;

      if (!info.filename) {
        file.resume();
        return fail(400, "Filename missing");
      }

      // ------------------- UPLOAD -------------------
      this.cloudStorage
        .uploadSingleFile(file, info.filename, info.mimeType, options)
        .then((result) => {
          if (!responded) {
            responded = true;
            res.json(result);
          }
        })
        .catch((err) =>
          fail(
            500,
            err instanceof Error
              ? err.message
              : "Upload failed. Please try again",
          ),
        );

      file.on("error", (err) => fail(500, `File stream error: ${err.message}`));
    });

    // ------------------- ERROR HANDLING -------------------
    busboy.on("error", (err) => fail(500, `Busboy error: ${err.message}`));

    busboy.on("finish", () => {
      if (!fileReceived) {
        fail(400, "No file provided");
      }
    });

    // ------------------- STREAM REQUEST INTO BUSBOY -------------------
    req.pipe(busboy);

    // ------------------- CLEANUP ON DISCONNECT -------------------
    req.on("close", () => {
      console.log("Client disconnected.");
    });
  }

How to Test in Postman

Request

Method: POST
URL: http://localhost:3000/upload-single-file?visibility=private&urlAccess=signed&signedUrlTTL=900
(defaults: visibility=private, urlAccess=signed, signedUrlTTL=3600)
Headers: Content-Type: multipart/form-data

Body

  • Go to Body → form-data
  • Add key file
  • Change type Text → File
  • Select a file from your computer

| Key  | Type | Value         |
| ---- | ---- | ------------- |
| file | File | Choose a file |

Click Send
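Equivalently, from a Node 18+ script (a sketch using the global fetch, FormData, and Blob APIs; the file path is a placeholder):

import { readFile } from "node:fs/promises";

async function uploadViaFetch() {
  const form = new FormData();
  form.append(
    "file",
    new Blob([await readFile("./my-file.pdf")], { type: "application/pdf" }),
    "my-file.pdf",
  );

  const res = await fetch(
    "http://localhost:3000/upload-single-file?visibility=private&urlAccess=signed&signedUrlTTL=900",
    { method: "POST", body: form },
  );
  console.log(await res.json());
}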

Sample Response

{
  "url": "https://signed.cloud.com/uploads/my-file.pdf",
  "key": "uploads/my-file.pdf",
  "bucket": "my-cloud-bucket",
  "filename": "my-file.pdf",
  "mimeType": "application/pdf",
  "isPublic": false,
  "urlType": "GCS_SIGNED"
}

2. Multiple File Upload (Sequential / Streaming to Cloud Storage)

This module allows uploading multiple files at once to cloud storage (AWS S3 / GCS / Azure Blob). Files are streamed directly to the provider, so nothing is stored on the server, and large files are supported.

Uploads are handled with limited concurrency (a few files at a time) to avoid overloading the server.

Provider: uploadSingleFile() (Used Internally)

  • Each file is uploaded one by one using uploadSingleFile()
  • Supports public and private files
  • Returns either a public URL or a signed URL
  • Handles CDN URLs (CloudFront / Cloud CDN / Azure CDN) if configured
  • TTL (expiry) can be set for signed URLs

How it works:

  • Receives files as streams from the request
  • Streams each file to cloud storage
  • Sets visibility (public/private)
  • Returns an array of upload results

How to use the provider

Controller

  @Post("upload-multiple-files")
  uploadMultiple(@Req() req: Request, @Res() res: Response) {
    return this.service.handleMultipleFilesSequentialUpload(req, res);
  }

Service

  • Validates the request content type
  • Reads optional per-file metadata fields (visibility, URL type, TTL)
  • Streams each file to cloud storage with limited concurrency
  • Returns an upload result for each file

  async handleMultipleFilesSequentialUpload(req: Request, res: Response) {
    if (!req.headers["content-type"]?.includes("multipart/form-data")) {
      return res.status(400).json({ message: "multipart/form-data required" });
    }

    const busboy = Busboy({ headers: req.headers });

    // ---------------- CONCURRENCY CONTROL ----------------
    const MAX_CONCURRENT_UPLOADS = 2;
    let activeUploads = 0;
    const waitQueue: (() => void)[] = [];

    const acquireSlot = async () => {
      if (activeUploads < MAX_CONCURRENT_UPLOADS) {
        activeUploads++;
        return;
      }
      await new Promise<void>((resolve) => waitQueue.push(resolve));
      activeUploads++;
    };

    const releaseSlot = () => {
      activeUploads--;
      const next = waitQueue.shift();
      if (next) next();
    };

    type FileMeta = {
      visibility?: "public" | "private";
      urlAccess?: "public" | "signed";
      urlTtlSeconds?: number;
    };

    const metadata: Record<number, FileMeta> = {};
    const pendingStreams = new Map<
      number,
      {
        gate: PassThrough;
        filename: string;
        mimeType: string;
        uploadStarted: boolean;
      }
    >();

    const uploadPromises: Promise<void>[] = [];
    const results: any[] = [];
    let responded = false;

    const fail = (status: number, message: string) => {
      if (!responded) {
        responded = true;
        res.status(status).json({ message });
      }
    };

    // ---------------- START UPLOAD ----------------
    const startUpload = (index: number) => {
      const entry = pendingStreams.get(index);
      if (!entry || entry.uploadStarted) return;

      const options = metadata[index];
      if (!options) return;

      entry.uploadStarted = true;

      const uploadPromise = (async () => {
        await acquireSlot();

        try {
          const result = await this.cloudStorage.uploadSingleFile(
            entry.gate, // streamed only after metadata
            entry.filename,
            entry.mimeType,
            options,
          );
          results.push(result);
        } catch (err: any) {
          results.push({
            filename: entry.filename,
            error: err?.message ?? "Upload failed",
          });
        } finally {
          releaseSlot();
        }
      })();

      uploadPromises.push(uploadPromise);
    };

    // ---------------- FIELD HANDLING ----------------
    busboy.on("field", (name, value) => {
      const match = name.match(/^files\[(\d+)\]\[(.+)\]$/);
      if (!match) return;

      const index = Number(match[1]);
      const field = match[2];

      metadata[index] ??= {};

      switch (field) {
        case "visibility":
          metadata[index].visibility =
            value === "public" || value === "true" ? "public" : "private";
          break;

        case "urlAccess":
          metadata[index].urlAccess = value === "public" ? "public" : "signed";
          break;

        case "urlTtlSeconds":
          const ttl = Number(value);
          if (!Number.isNaN(ttl) && ttl > 0) {
            metadata[index].urlTtlSeconds = ttl;
          }
          break;
      }

      // If file already arrived, start upload now
      startUpload(index);
    });

    // ---------------- FILE HANDLING ----------------
    busboy.on("file", (name, file, info) => {
      const match = name.match(/^files\[(\d+)\]\[file\]$/);
      if (!match) {
        file.resume();
        return;
      }

      const index = Number(match[1]);
      const gate = new PassThrough();

      pendingStreams.set(index, {
        gate,
        filename: info.filename,
        mimeType: info.mimeType || "application/octet-stream",
        uploadStarted: false,
      });
      // Connect file → gate immediately
      // Upload starts only when gate is consumed
      file.pipe(gate);

      // If metadata already arrived, start upload
      startUpload(index);
    });

    // ---------------- FINISH ----------------
    busboy.on("finish", async () => {
      try {
        await Promise.all(uploadPromises);

        if (!responded) {
          responded = true;
          res.json({
            total: results.length,
            filesUploaded: results,
          });
        }
      } catch (err: any) {
        fail(500, err.message);
      }
    });

    busboy.on("error", (err) => fail(500, `Busboy error: ${err.message}`));

    req.pipe(busboy);
  }

How to Test in Postman

Request

Method: POST
URL: http://localhost:3000/upload-multiple-files
Headers: Content-Type: multipart/form-data

Important (Why Metadata First = Faster Uploads)

In this API, file upload does not start until its metadata is available.

What happens internally:

  • When a file arrives before its metadata, the server pauses the file stream
  • The file waits in memory until the metadata is received
  • The upload starts only after the metadata is known

To avoid pausing file streams and make uploads faster, send metadata first.

When metadata is sent first:

  • File streams start uploading immediately
  • No pause / resume
  • Better performance, especially for large files

Recommended Postman Usage (For Best Performance)

Send metadata first, then files.

Body

  • Go to Body → form-data
  • Add multiple keys for each file:

Metadata (Text fields)

| Key                     | Type | Value            | Notes                                  |
| ----------------------- | ---- | ---------------- | -------------------------------------- |
| files[0][visibility]    | Text | public / private | Optional (default private)             |
| files[0][urlAccess]     | Text | public / signed  | Optional (default signed)              |
| files[0][urlTtlSeconds] | Text | 900              | Optional signed URL TTL (default 3600) |
| files[1][visibility]    | Text | public / private | Optional                               |
| files[1][urlAccess]     | Text | public / signed  | Optional                               |
| files[1][urlTtlSeconds] | Text | 600              | Optional                               |

Files (File fields)

| Key            | Type | Value         | Notes          |
| -------------- | ---- | ------------- | -------------- |
| files[0][file] | File | Select file 1 | File to upload |
| files[1][file] | File | Select file 2 | File to upload |

Click Send

Important: Each file must have a key like files[index][file]. Optional metadata can be added per file.
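The same request from a Node 18+ script (a sketch using the global fetch, FormData, and Blob APIs; metadata fields are appended before the file parts so uploads can start immediately):

import { readFile } from "node:fs/promises";

async function uploadMultipleViaFetch() {
  const form = new FormData();

  // Metadata first (optional, per file)
  form.append("files[0][visibility]", "public");
  form.append("files[0][urlAccess]", "public");
  form.append("files[1][urlTtlSeconds]", "600");

  // Files last, keyed as files[index][file]
  form.append("files[0][file]", new Blob([await readFile("./a.pdf")]), "a.pdf");
  form.append("files[1][file]", new Blob([await readFile("./b.jpg")]), "b.jpg");

  const res = await fetch("http://localhost:3000/upload-multiple-files", {
    method: "POST",
    body: form,
  });
  console.log(await res.json());
}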

Sample Response

{
  "total": 2,
  "filesUploaded": [
    {
      "url": "https://signed.example.com/uploads/uuid-a.pdf",
      "key": "uploads/uuid-a.pdf",
      "bucket": "my-bucket",
      "filename": "a.pdf",
      "mimeType": "application/pdf",
      "isPublic": false,
      "urlType": "S3_SIGNED"
    },
    {
      "url": "https://cdn.example.com/uploads/uuid-b.jpg",
      "key": "uploads/uuid-b.jpg",
      "bucket": "my-bucket",
      "filename": "b.jpg",
      "mimeType": "image/jpeg",
      "isPublic": true,
      "urlType": "AKAMAI_PUBLIC"
    }
  ]
}

3. Delete Single File from Cloud Storage

This module deletes one file from cloud storage (AWS S3 / Google Cloud Storage / Azure Blob Storage).

Deletion is done using the storage key returned during upload.

Provider: deleteSingleFile()

  • Deletes one file using its storage key
  • Works for AWS, GCS, and Azure
  • Safe to call even if the file does not exist
  • Returns the deletion status

How it works:

  • Receives the file key
  • Deletes the file from cloud storage
  • Returns the delete status

Provider API

Input Parameters

  • key — File path / object key in cloud storage

Returns

{
  key: string;        // File key
  bucket: string;     // Bucket / container name
  deleted: boolean;   // true if deleted successfully
  error?: string;     // Error message (if any)
}

How to use the provider

Controller

  @Delete("delete-single-file")
  async deleteFile(@Body("key") key: string) {
    return await this.service.deleteSingleFile(key);
  }

Service

  • Calls the provider function
  • Handles any errors thrown by the provider

  async deleteSingleFile(key: string) {
    return await this.cloudStorage.deleteSingleFile(key);
  }

How to Test in Postman

Request

Method: DELETE
URL: http://localhost:3000/delete-single-file
Headers: Content-Type: application/json

Body (raw JSON):

{
  "key": "uploads/uuid-file.pdf"
}

Click Send

Sample Response

{
  "key": "uploads/my-file.pdf",
  "bucket": "my-cloud-bucket",
  "deleted": true
}

4. Delete Multiple Files from Cloud Storage

This module deletes multiple files in one request from cloud storage (AWS S3 / Google Cloud Storage / Azure Blob Storage).

Each file is deleted using its storage key.

Provider: deleteMultipleFiles()

  • Deletes multiple files in a single call
  • Works for AWS, GCS, and Azure
  • Each file is handled independently
  • Returns a delete status for each file

How it works:

  • Receives a list of file keys
  • Deletes each file from cloud storage
  • Returns a result for every file

Provider API

Input Parameters

{
  key: string; // File key in cloud storage
}[]

Returns

{
  key: string;        // File key
  bucket: string;     // Bucket / container name
  deleted: boolean;   // true if deleted successfully
  error?: string;     // Error message (if any)
}[]
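For reference, a minimal sketch of calling the provider directly, assuming cloudStorage was created with CloudStorageFactory as shown earlier and that this runs inside an async context:

const results = await cloudStorage.deleteMultipleFiles([
  { key: "uploads/file1.pdf" },
  { key: "uploads/file2.jpg" },
]);

for (const r of results) {
  console.log(`${r.key}: ${r.deleted ? "deleted" : r.error}`);
}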

How to use the function

Controller

@Delete("delete-multiple-files")
async deleteMultiple(@Body() files: { key: string }[]) {
  return await this.service.deleteMultipleFiles(files);
}

Service

  • Calls the provider function
  • Returns an array of deletion results for all requested files

  async deleteMultipleFiles(
    files: DeleteFileRequest[],
  ): Promise<DeleteFileResult[]> {
    return this.cloudStorage.deleteMultipleFiles(files);
  }

How to Test in Postman

Request

Method: DELETE
URL: http://localhost:3000/delete-multiple-files
Headers: Content-Type: application/json

Body (raw JSON):

[
  { "key": "uploads/file1.pdf" },
  { "key": "uploads/file2.jpg" }
]

Click Send

Sample Response

[
  {
    "key": "uploads/file1.pdf",
    "bucket": "my-cloud-bucket",
    "deleted": true
  },
  {
    "key": "uploads/file2.jpg",
    "bucket": "my-cloud-bucket",
    "deleted": false,
    "error": "File not found"
  }
]

5. Download Multiple Files as ZIP (Streaming from Cloud Storage)

This API downloads multiple files from cloud storage and returns them as one ZIP file.

Works with AWS S3 / GCS / Azure.

Provider: createZipStreamForMultipleFileDownloads()

What it does:

  • Takes a list of file keys
  • Fetches the files from cloud storage
  • Streams them into a ZIP archive
  • Returns the ZIP as a stream

No files are stored on the server.

Provider API

Input Parameters

{
  key: string;        // File key in cloud storage
  saveAs?: string;    // Optional filename inside ZIP
}[]

Returns

  • A Readable stream containing the ZIP archive
  • Can be piped directly to HTTP response
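Outside an HTTP handler, the stream can be piped anywhere a Node Readable can go, for example to a local file. A minimal sketch, assuming cloudStorage was created as shown earlier:

import { createWriteStream } from "node:fs";

async function saveZipLocally() {
  const zipStream = await cloudStorage.createZipStreamForMultipleFileDownloads([
    { key: "uploads/file1.pdf", saveAs: "file1.pdf" },
    { key: "uploads/file2.jpg", saveAs: "image2.jpg" },
  ]);

  // Write the archive to disk instead of piping it to an HTTP response
  zipStream.pipe(createWriteStream("files.zip"));
}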

Controller

@Post("download-zip")
async downloadZip(
  @Body() files: { key: string; saveAs?: string }[],
  @Res() res: Response
) {
  const zipStream = await this.service.getZipStream(files);

  res.setHeader("Content-Type", "application/zip");
  res.setHeader("Content-Disposition", 'attachment; filename="files.zip"');

  zipStream.pipe(res);
}

Service

  async getZipStream(files: DownloadFileRequest[]) {
    return this.cloudStorage.createZipStreamForMultipleFileDownloads(files);
  }

Important Filename Rule (Very Important)

Do NOT use / in saveAs. ZIP treats / as a folder separator.
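A simple way to derive a flat saveAs name from an object key (a hypothetical helper, not part of the package):

// e.g. "uploads/reports/q1.pdf" -> "q1.pdf"
const flatName = (key: string) => key.split("/").pop() ?? key;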

How to Test in Postman

Request

Method: POST
URL: http://localhost:3000/download-zip
Headers: Content-Type: application/json

Body (raw JSON):

[
  { "key": "uploads/file1.pdf", "saveAs": "file1.pdf" },
  { "key": "uploads/file2.jpg", "saveAs": "image2.jpg" }
]

Click Send and Download

IMPORTANT

  • The total ZIP size is validated against MAX_ZIP_SIZE (2 GB)
  • The ZIP contains all files at the root level

6. Download Single File (Streaming from Cloud Storage)

This API downloads one file from cloud storage and streams it directly to the client.

Works with AWS S3 / GCS / Azure.

Provider: downloadSingleFile()

What it does:

  • Reads a file from cloud storage using its key
  • Streams it directly to the response
  • Optionally renames the downloaded file using saveAs

No file is stored on the server.

Provider API

Input Parameters

{
  key: string;      // Required: file key in cloud storage
  saveAs?: string;  // Optional: download filename
}

Returns

void // file is streamed to response

Controller


  @Post("download-single-file")
  async downloadFile(
    @Body() file: { key: string; saveAs?: string },
    @Res() res: Response,
  ) {
    if (!file.key) {
      return res.status(400).json({ message: "key is required" });
    }

    try {
      await this.service.download(file.key, res, file.saveAs);
    } catch (err: any) {
      if (!res.headersSent) {
        res.status(404).json({ message: err.message });
      }
    }
  }

Service

  async download(key: string, res: Response, saveAs?: string) {
    return this.cloudStorage.downloadSingleFile({ key, saveAs }, res);
  }

Important Filename Rule (Very Important)

Do NOT use / in saveAs. / is treated as a folder path by browsers and servers.

How to Test in Postman

Request

Method: POST
URL: http://localhost:3000/download-single-file
Headers: Content-Type: application/json

Body (raw JSON):

{
  "key": "uploads/report.pdf",
  "saveAs": "my-report.pdf"
}

Click Send and Download