express-storage
v3.0.0
Express.js file upload middleware for AWS S3, Google Cloud Storage, Azure Blob & local disk. Unified API with presigned URLs, file validation, streaming uploads, TypeScript support, and zero-config provider switching.
Express Storage
Express.js file upload middleware for AWS S3, Google Cloud Storage, Azure Blob Storage, and local disk — one unified API, zero vendor lock-in.
Express Storage is a TypeScript-first file upload library for Node.js and Express. Upload files to AWS S3, Google Cloud Storage (GCS), Azure Blob Storage, or local disk using a single API. Switch cloud providers by changing one environment variable — no code changes needed. Built-in presigned URL support, file validation, streaming uploads, and security protection make it a production-ready alternative to multer-s3 that works with every major cloud provider.
Table of Contents
- Features
- Quick Start
- Supported Storage Providers
- Error Codes
- Security Features
- Presigned URLs: Client-Side Uploads
- Large File Uploads
- API Reference
- Environment Variables
- Lifecycle Hooks
- Type-Safe Results
- Configurable Concurrency
- Lifecycle Management
- Custom Rate Limiting
- Utilities
- Real-World Examples
- Migrating Between Providers
- Migrating from v2 to v3
- Why Express Storage over Alternatives?
- TypeScript Support
- Contributing
Features
- One API, Four Providers — Write upload code once. Deploy to AWS S3, GCS, Azure, or local disk.
- Presigned URLs — Client-side uploads that bypass your server, with per-provider constraint enforcement.
- File Validation — Size limits, MIME type checks, and extension filtering before storage.
- Security Built In — Path traversal prevention, filename sanitization, null byte protection.
- TypeScript Native — Full type safety with discriminated unions. No `any` types.
- Streaming Uploads — Automatic multipart/streaming for files over 100MB.
- Zero Config Switching — Change `FILE_DRIVER=local` to `FILE_DRIVER=s3` and you're done.
- Lifecycle Hooks — Tap into upload/delete events for logging, virus scanning, or audit trails.
- Batch Operations — Upload or delete multiple files in parallel with concurrency control and `AbortSignal` support.
- Custom Rate Limiting — Built-in in-memory limiter or plug in your own (Redis, Memcached, etc.).
- Lightweight — Install only the cloud SDK you need. No dependency bloat.
Quick Start
Installation
npm install express-storage

Then install only the cloud SDK you need:
# For AWS S3
npm install @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner
# For Google Cloud Storage
npm install @google-cloud/storage
# For Azure Blob Storage
npm install @azure/storage-blob @azure/identity

Local storage works out of the box with no additional dependencies.
Basic Setup
import express from "express";
import multer from "multer";
import { StorageManager } from "express-storage";
const app = express();
const upload = multer();
const storage = new StorageManager();
app.post("/upload", upload.single("file"), async (req, res) => {
const result = await storage.uploadFile(req.file, {
maxSize: 10 * 1024 * 1024, // 10MB limit
allowedMimeTypes: ["image/jpeg", "image/png", "application/pdf"],
});
if (result.success) {
res.json({ reference: result.reference, url: result.fileUrl });
} else {
res.status(400).json({ error: result.error });
}
});

Environment Configuration
Create a .env file:
# Choose your storage provider
FILE_DRIVER=local
# For local storage
LOCAL_PATH=uploads
# For AWS S3
FILE_DRIVER=s3
BUCKET_NAME=my-bucket
AWS_REGION=us-east-1
AWS_ACCESS_KEY=your-key
AWS_SECRET_KEY=your-secret
# For Google Cloud Storage
FILE_DRIVER=gcs
BUCKET_NAME=my-bucket
GCS_PROJECT_ID=my-project
# For Azure Blob Storage
FILE_DRIVER=azure
BUCKET_NAME=my-container
AZURE_CONNECTION_STRING=your-connection-string

That's it. Your upload code stays the same regardless of which provider you choose.
Supported Storage Providers
| Provider | Direct Upload | Presigned URLs | Best For |
| ---------------- | ------------- | ----------------- | ------------------------- |
| Local Disk | local | — | Development, small apps |
| AWS S3 | s3 | s3-presigned | Most production apps |
| Google Cloud | gcs | gcs-presigned | GCP-hosted applications |
| Azure Blob | azure | azure-presigned | Azure-hosted applications |
Error Codes
Every error result includes a code field for programmatic error handling — no more parsing error strings:
const result = await storage.uploadFile(file, {
maxSize: 5 * 1024 * 1024,
allowedMimeTypes: ["image/jpeg", "image/png"],
});
if (!result.success) {
switch (result.code) {
case "FILE_TOO_LARGE":
res.status(413).json({ error: "File is too large" });
break;
case "INVALID_MIME_TYPE":
res.status(415).json({ error: "Unsupported file type" });
break;
case "RATE_LIMITED":
res.status(429).json({ error: "Too many requests" });
break;
default:
res.status(400).json({ error: result.error });
}
}

| Code | When |
| -------------------------- | -------------------------------------------------------------- |
| NO_FILE | No file provided to upload |
| FILE_EMPTY | File has zero bytes |
| FILE_TOO_LARGE | File exceeds maxSize or maxFileSize |
| INVALID_MIME_TYPE | MIME type not in allowedMimeTypes |
| INVALID_EXTENSION | Extension not in allowedExtensions |
| INVALID_FILENAME | Filename is empty, too long, or contains illegal characters |
| INVALID_INPUT | Bad argument (e.g., non-numeric fileSize, missing fileName) |
| PATH_TRAVERSAL | Path contains .., \0, or other traversal sequences |
| FILE_NOT_FOUND | File doesn't exist (delete, validate, view) |
| VALIDATION_FAILED | Post-upload validation failed (content type or size mismatch) |
| RATE_LIMITED | Presigned URL rate limit exceeded |
| HOOK_ABORTED | A beforeUpload or beforeDelete hook threw |
| PRESIGNED_NOT_SUPPORTED | Local driver doesn't support presigned URLs |
| PROVIDER_ERROR | Cloud provider SDK error (network, auth, permissions) |
Security Features
File uploads are one of the most exploited attack vectors in web applications. Express Storage protects you by default.
Path Traversal Prevention
Attackers try filenames like ../../../etc/passwd to escape your upload directory. We block this:
// These malicious filenames are automatically rejected
"../secret.txt"; // Blocked: path traversal
"..\\config.json"; // Blocked: Windows path traversal
"file\0.txt"; // Blocked: null byte injection

Automatic Filename Sanitization
User-provided filenames can't be trusted. We transform them into safe, unique identifiers:
User uploads: "My Photo (1).jpg"
Stored as: "1706123456789_a1b2c3d4e5_my_photo_1_.jpg"

The format `{timestamp}_{random}_{sanitized_name}` prevents collisions and removes dangerous characters.
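The library's exact sanitizer isn't shown in this README, but the scheme can be sketched as follows. `sanitizeFileName` here is a hypothetical illustration, not the library's implementation:

```typescript
// Hypothetical sketch of the {timestamp}_{random}_{sanitized_name} scheme.
// For illustration only — not the library's actual code.
function sanitizeFileName(original: string): string {
  const dot = original.lastIndexOf(".");
  const ext = dot > 0 ? original.slice(dot).toLowerCase() : "";
  const base = dot > 0 ? original.slice(0, dot) : original;
  // Lowercase and collapse anything outside [a-z0-9] into underscores
  const safe = base.toLowerCase().replace(/[^a-z0-9]+/g, "_");
  const random = Math.random().toString(36).slice(2, 12); // short random suffix
  return `${Date.now()}_${random}_${safe}${ext}`;
}

sanitizeFileName("My Photo (1).jpg");
// e.g. "1706123456789_k3j2h1g9_my_photo_1_.jpg"
```

Note that the timestamp plus random suffix makes collisions between two uploads of the same filename effectively impossible.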
File Validation
Validate before processing. Reject before storing.
await storage.uploadFile(file, {
maxSize: 5 * 1024 * 1024, // 5MB limit
allowedMimeTypes: ["image/jpeg", "image/png"],
allowedExtensions: [".jpg", ".png"],
});

Presigned URL Security
For S3 and GCS, file constraints are enforced at the URL level — clients physically cannot upload the wrong file type or size. For Azure (which doesn't support URL-level constraints), we validate after upload and automatically delete invalid files.
Presigned URLs: Client-Side Uploads
Large files shouldn't flow through your server. Presigned URLs let clients upload directly to cloud storage.
The Flow
1. Client → Your Server: "I want to upload photo.jpg (2MB, image/jpeg)"
2. Your Server → Client: "Here's a presigned URL, valid for 10 minutes"
3. Client → Cloud Storage: Uploads directly (your server never touches the bytes)
4. Client → Your Server: "Upload complete, please verify"
5. Your Server: Confirms file exists, returns permanent URL

Implementation
// Step 1: Generate upload URL
app.post("/upload/init", async (req, res) => {
const { fileName, contentType, fileSize } = req.body;
const result = await storage.generateUploadUrl(
fileName,
contentType,
fileSize,
"user-uploads", // Optional folder
);
res.json({
uploadUrl: result.uploadUrl,
reference: result.reference, // Save this for later
});
});
// Step 2: Confirm upload
app.post("/upload/confirm", async (req, res) => {
const { reference, expectedContentType, expectedFileSize } = req.body;
const result = await storage.validateAndConfirmUpload(reference, {
expectedContentType,
expectedFileSize,
});
if (result.success) {
res.json({ viewUrl: result.viewUrl });
} else {
res.status(400).json({ error: result.error });
}
});

Provider-Specific Behavior
| Provider | Content-Type Enforced | File Size Enforced | Post-Upload Validation |
| -------- | --------------------- | ------------------ | ---------------------- |
| S3 | At URL level | At URL level | Optional |
| GCS | At URL level | At URL level | Optional |
| Azure | Not enforced | Not enforced | Required |
For Azure, always call validateAndConfirmUpload() with expected values. Invalid files are automatically deleted.
Large File Uploads
For files larger than 100MB, we recommend using presigned URLs instead of direct server uploads. Here's why:
Memory Efficiency
When you upload through your server, the entire file must be buffered in memory (or stored temporarily on disk). For a 500MB video file, that's 500MB of RAM per concurrent upload. With presigned URLs, the file goes directly to cloud storage — your server only handles small JSON requests.
Automatic Streaming
For files that must go through your server, Express Storage automatically uses streaming uploads for files larger than 100MB:
- S3: Uses multipart upload with 10MB chunks
- GCS: Uses resumable uploads with streaming
- Azure: Uses block upload with streaming
This happens transparently — you don't need to change your code.
Recommended Approach for Large Files
// Frontend: Request presigned URL
const { uploadUrl, reference } = await fetch("/api/upload/init", {
method: "POST",
body: JSON.stringify({
fileName: "large-video.mp4",
contentType: "video/mp4",
fileSize: 524288000, // 500MB
}),
}).then((r) => r.json());
// Frontend: Upload directly to cloud (bypasses your server!)
await fetch(uploadUrl, {
method: "PUT",
body: file,
headers: { "Content-Type": "video/mp4" },
});
// Frontend: Confirm upload
await fetch("/api/upload/confirm", {
method: "POST",
body: JSON.stringify({ reference }),
});

Size Limits
| Scenario | Recommended Limit | Reason |
| ------------------------------ | ----------------- | ------------------------------ |
| Direct upload (memory storage) | < 100MB | Node.js memory constraints |
| Direct upload (disk storage) | < 500MB | Temp file management |
| Presigned URL upload | 5GB+ | Limited only by cloud provider |
API Reference
StorageManager
The main class you'll interact with.
import { StorageManager } from "express-storage";
// Use environment variables
const storage = new StorageManager();
// Or configure programmatically
const storage = new StorageManager({
driver: "s3",
credentials: {
bucketName: "my-bucket",
awsRegion: "us-east-1",
maxFileSize: 50 * 1024 * 1024, // 50MB
},
logger: console, // Optional: enable debug logging
});

File Upload Methods
// Single file
const result = await storage.uploadFile(file, validation?, options?);
// Multiple files (processed in parallel with concurrency limits)
const results = await storage.uploadFiles(files, validation?, options?);

Presigned URL Methods
// Generate upload URL with constraints
const result = await storage.generateUploadUrl(fileName, contentType?, fileSize?, folder?);
// Generate view URL for existing file
const result = await storage.generateViewUrl(reference);
// Validate upload (required for Azure, recommended for all)
const result = await storage.validateAndConfirmUpload(reference, options?);
// Batch operations
const results = await storage.generateUploadUrls(files, folder?);
const results = await storage.generateViewUrls(references);

File Management
// Delete single file (returns DeleteResult with error details on failure)
const result = await storage.deleteFile(reference);
if (!result.success) console.log(result.error, result.code);
// Delete multiple files
const results = await storage.deleteFiles(references);
// Get file metadata without downloading
const info = await storage.getMetadata(reference);
if (info) console.log(info.name, info.size, info.contentType, info.lastModified);
// List files with pagination
const result = await storage.listFiles(prefix?, maxResults?, continuationToken?);

Upload Options
interface UploadOptions {
contentType?: string; // Override detected type
metadata?: Record<string, string>; // Custom metadata
cacheControl?: string; // e.g., 'max-age=31536000'
contentDisposition?: string; // e.g., 'attachment; filename="doc.pdf"'
}
// Example: Upload with caching headers
await storage.uploadFile(file, undefined, {
cacheControl: "public, max-age=31536000",
metadata: { uploadedBy: "user-123" },
});

Validation Options
interface FileValidationOptions {
maxSize?: number; // Maximum file size in bytes
allowedMimeTypes?: string[]; // e.g., ['image/jpeg', 'image/png']
allowedExtensions?: string[]; // e.g., ['.jpg', '.png']
}

Environment Variables
Core Settings
| Variable | Description | Default |
| ---------------------- | ----------------------------------- | ------------------------ |
| FILE_DRIVER | Storage driver to use | local |
| BUCKET_NAME | Cloud storage bucket/container name | — |
| BUCKET_PATH | Default folder path within bucket | "" (root) |
| LOCAL_PATH | Directory for local storage | public/express-storage |
| PRESIGNED_URL_EXPIRY | URL validity in seconds | 600 (10 min) |
| MAX_FILE_SIZE | Maximum upload size in bytes | 5368709120 (5GB) |
AWS S3
| Variable | Description |
| ---------------- | ----------------------------------------------- |
| AWS_REGION | AWS region (e.g., us-east-1) |
| AWS_ACCESS_KEY | Access key ID (optional if using IAM roles) |
| AWS_SECRET_KEY | Secret access key (optional if using IAM roles) |
Google Cloud Storage
| Variable | Description |
| ----------------- | ------------------------------------------------ |
| GCS_PROJECT_ID | Google Cloud project ID |
| GCS_CREDENTIALS | Path to service account JSON (optional with ADC) |
Azure Blob Storage
| Variable | Description |
| ------------------------- | ------------------------------------------------- |
| AZURE_CONNECTION_STRING | Full connection string (recommended) |
| AZURE_ACCOUNT_NAME | Storage account name (alternative) |
| AZURE_ACCOUNT_KEY | Storage account key (alternative) |
Note: Azure uses BUCKET_NAME for the container name (same as S3/GCS).
Lifecycle Hooks
Hooks let you tap into the upload/delete lifecycle without modifying drivers. Perfect for logging, virus scanning, metrics, or audit trails.
const storage = new StorageManager({
driver: "s3",
hooks: {
beforeUpload: async (file) => {
await virusScan(file.buffer); // Throw to abort upload
},
afterUpload: (result, file) => {
auditLog("file_uploaded", { result, originalName: file.originalname });
},
beforeDelete: async (reference) => {
await checkPermissions(reference);
},
afterDelete: (reference, success) => {
if (success) auditLog("file_deleted", { reference });
},
onError: (error, context) => {
metrics.increment("storage.error", { operation: context.operation });
},
},
});

All hooks are optional and async-safe. `beforeUpload` and `beforeDelete` can throw to abort the operation — the error message is included in the result.
Type-Safe Results
All result types use TypeScript discriminated unions. Check result.success and TypeScript narrows the type automatically:
const result = await storage.uploadFile(file);
if (result.success) {
console.log(result.reference); // stored file path (for delete/view/getMetadata)
console.log(result.fileUrl); // URL to access the file
} else {
console.log(result.error); // TypeScript knows this exists
}

This applies to all result types: `FileUploadResult`, `DeleteResult`, `PresignedUrlResult`, `BlobValidationResult`, and `ListFilesResult`.
Configurable Concurrency
Control how many parallel operations run in batch methods:
const storage = new StorageManager({
driver: "s3",
concurrency: 5, // Applies to uploadFiles, deleteFiles, generateUploadUrls, etc.
});

Default is 10. Lower it for rate-limited APIs or resource-constrained environments.
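Conceptually, a concurrency limit runs a fixed pool of workers over the batch instead of firing everything at once. A standalone sketch of the idea (not the library's internal implementation):

```typescript
// Minimal sketch of a concurrency limit: `limit` workers pull items from a
// shared index until the batch is exhausted. Illustration only — not the
// library's internals.
async function mapWithLimit<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  const worker = async () => {
    while (next < items.length) {
      const i = next++; // claim the next index (safe: JS is single-threaded)
      results[i] = await fn(items[i]);
    }
  };
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, worker));
  return results;
}
```

With `limit = 5`, at most five uploads are in flight at any moment, which is the behavior the `concurrency` option controls for the batch methods.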
Cancellable Batch Operations
All batch methods accept an AbortSignal for cancelling long-running operations mid-flight:
const controller = new AbortController();
// Cancel after 5 seconds
setTimeout(() => controller.abort(), 5000);
try {
const results = await storage.uploadFiles(files, validation, options, {
signal: controller.signal,
});
} catch (error) {
console.log("Upload batch was cancelled");
}
// Also works with deleteFiles, generateUploadUrls, generateViewUrls
await storage.deleteFiles(references, { signal: controller.signal });

Lifecycle Management
Clean up resources when you're done with a StorageManager instance:
const storage = new StorageManager({ driver: "s3", rateLimiter: { maxRequests: 100 } });
// ... use storage ...
// Release resources (clears factory cache entry and rate limiter)
storage.destroy();

This is especially useful in tests, serverless functions, or any environment where StorageManager instances are created and discarded frequently.
Custom Rate Limiting
The built-in rate limiter works for single-process apps. For clustered deployments, provide your own adapter:
import { StorageManager, RateLimiterAdapter } from "express-storage";
// or: import { RateLimiterAdapter } from "express-storage"; // types are always at top level
// Built-in in-memory limiter
const storage = new StorageManager({
driver: "s3",
rateLimiter: { maxRequests: 100, windowMs: 60000 },
});
// Custom Redis-backed limiter
class RedisRateLimiter implements RateLimiterAdapter {
  // Sketch using ioredis-style commands; adapt the key naming and client to your setup
  constructor(private redis: Redis, private max = 100, private windowMs = 60_000) {}
  async tryAcquire() {
    const count = await this.redis.incr("storage:presign"); // atomic INCR
    if (count === 1) await this.redis.pexpire("storage:presign", this.windowMs); // start window TTL
    return count <= this.max;
  }
  async getRemainingRequests() {
    return Math.max(0, this.max - (Number(await this.redis.get("storage:presign")) || 0));
  }
  async getResetTime() {
    return Date.now() + Math.max(0, await this.redis.pttl("storage:presign"));
  }
}
const storage = new StorageManager({
driver: "s3",
rateLimiter: new RedisRateLimiter(redisClient),
});

Utilities
Express Storage includes battle-tested utilities you can use directly.
Retry with Exponential Backoff
import { withRetry } from "express-storage/utils";
const result = await withRetry(() => storage.uploadFile(file), {
maxAttempts: 3,
baseDelay: 1000,
maxDelay: 10000,
exponentialBackoff: true,
});

File Type Helpers
import {
isImageFile,
isDocumentFile,
getFileExtension,
formatFileSize,
} from "express-storage/utils";
isImageFile("image/jpeg"); // true
isDocumentFile("application/pdf"); // true
getFileExtension("photo.jpg"); // '.jpg'
formatFileSize(1048576); // '1 MB'

Custom Logging
import { StorageManager, type Logger } from "express-storage";
const logger: Logger = {
debug: (msg, ...args) => console.debug(`[Storage] ${msg}`, ...args),
info: (msg, ...args) => console.info(`[Storage] ${msg}`, ...args),
warn: (msg, ...args) => console.warn(`[Storage] ${msg}`, ...args),
error: (msg, ...args) => console.error(`[Storage] ${msg}`, ...args),
};
const storage = new StorageManager({ driver: "s3", logger });

Real-World Examples
Profile Picture Upload
app.post("/users/:id/avatar", upload.single("avatar"), async (req, res) => {
const result = await storage.uploadFile(
req.file,
{
maxSize: 2 * 1024 * 1024, // 2MB
allowedMimeTypes: ["image/jpeg", "image/png", "image/webp"],
},
{
cacheControl: "public, max-age=86400",
metadata: { userId: req.params.id },
},
);
if (result.success) {
await db.users.update(req.params.id, { reference: result.reference, avatarUrl: result.fileUrl });
res.json({ avatarUrl: result.fileUrl });
} else {
res.status(400).json({ error: result.error });
}
});

Document Upload with Presigned URLs
// Frontend requests upload URL
app.post("/documents/request-upload", async (req, res) => {
const { fileName, fileSize } = req.body;
const result = await storage.generateUploadUrl(
fileName,
"application/pdf",
fileSize,
`documents/${req.user.id}`,
);
// Store pending upload in database
await db.documents.create({
reference: result.reference,
userId: req.user.id,
status: "pending",
});
res.json({
uploadUrl: result.uploadUrl,
reference: result.reference,
});
});
// Frontend confirms upload complete
app.post("/documents/confirm-upload", async (req, res) => {
const { reference } = req.body;
const result = await storage.validateAndConfirmUpload(reference, {
expectedContentType: "application/pdf",
});
if (result.success) {
await db.documents.update(
{ reference },
{
status: "uploaded",
size: result.actualFileSize,
},
);
res.json({ success: true, viewUrl: result.viewUrl });
} else {
await db.documents.delete({ reference });
res.status(400).json({ error: result.error });
}
});

Bulk File Upload
app.post("/gallery/upload", upload.array("photos", 20), async (req, res) => {
const files = req.files as Express.Multer.File[];
const results = await storage.uploadFiles(files, {
maxSize: 10 * 1024 * 1024,
allowedMimeTypes: ["image/jpeg", "image/png"],
});
const successful = results.filter((r) => r.success);
const failed = results.filter((r) => !r.success);
res.json({
uploaded: successful.length,
failed: failed.length,
files: successful.map((r) => ({
reference: r.reference,
url: r.fileUrl,
})),
errors: failed.map((r) => r.error),
});
});

Migrating Between Providers
Moving from local development to cloud production? Or switching cloud providers? Here's how.
Local to S3
# Before (development)
FILE_DRIVER=local
LOCAL_PATH=uploads
# After (production)
FILE_DRIVER=s3
BUCKET_NAME=my-app-uploads
AWS_REGION=us-east-1

Your code stays exactly the same. Files uploaded before migration remain in their original location — you'll need to migrate existing files separately if needed.
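Migrating already-uploaded files is outside the library's scope, but a one-off script can walk the local upload directory and re-upload each file. A hedged sketch, where the `upload` callback is a placeholder for your own call into an S3-configured StorageManager:

```typescript
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

// Recursively walk `dir` and hand each file to an upload callback.
// In a real migration, `upload` would call your S3-configured manager —
// the exact call shape depends on how you construct the file object.
async function migrateDir(
  dir: string,
  upload: (relPath: string, data: Buffer) => Promise<void>,
  prefix = "",
): Promise<void> {
  for (const entry of await readdir(dir, { withFileTypes: true })) {
    const rel = prefix ? `${prefix}/${entry.name}` : entry.name;
    if (entry.isDirectory()) await migrateDir(join(dir, entry.name), upload, rel);
    else await upload(rel, await readFile(join(dir, entry.name)));
  }
}
```

Run it once against your old `LOCAL_PATH`, verify the uploads, then switch `FILE_DRIVER`.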
S3 to Azure
# Before
FILE_DRIVER=s3
BUCKET_NAME=my-bucket
AWS_REGION=us-east-1
# After
FILE_DRIVER=azure
BUCKET_NAME=my-container
AZURE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=...

Important: If using presigned URLs, remember that Azure requires post-upload validation. Add `validateAndConfirmUpload()` calls to your confirmation endpoints.
Migrating from v2 to v3
v3 has breaking changes in dependencies, types, and configuration. Most apps require minimal code changes.
What Changed
- Cloud SDKs are optional peer dependencies. Install only what you need — no more downloading all SDKs.
- Result types are discriminated unions. `result.fileName` is guaranteed when `result.success === true`. Code that accessed properties without checking `success` may need updates.
- Presigned driver subclasses removed. `S3PresignedStorageDriver`, `GCSPresignedStorageDriver`, and `AzurePresignedStorageDriver` are no longer exported. Use the base driver classes or `StorageManager` (the `'s3-presigned'` driver string still works).
- `rateLimit` option renamed to `rateLimiter`. Now accepts either options or a custom adapter.
- `getRateLimitStatus()` is async. Returns a Promise.
- `deleteFile()` returns `DeleteResult` instead of `boolean`. Check `result.success` instead of the boolean value.
- `IStorageDriver.delete()` returns `DeleteResult` instead of `boolean`. Custom drivers must be updated.
- `ensureDirectoryExists()` is async. Returns a `Promise<void>` — add `await` to existing calls.
- Presigned URL methods return stricter types. `generateUploadUrl()` returns `PresignedUploadUrlResult` (guarantees `uploadUrl`, `fileName`, `reference`, `expiresIn` on success). `generateViewUrl()` returns `PresignedViewUrlResult` (guarantees `viewUrl`, `reference`, `expiresIn` on success).
Migration Steps
- Update the package:
npm install express-storage@3

- Install the SDK for your provider:
# If you use S3
npm install @aws-sdk/client-s3 @aws-sdk/lib-storage @aws-sdk/s3-request-presigner
# If you use GCS
npm install @google-cloud/storage
# If you use Azure
npm install @azure/storage-blob @azure/identity

- Update result type access — `fileName` is now `reference`:
// Before (v2)
const name = result.fileName!;
// After (v3) — "reference" is the stored file path used for all subsequent operations
if (result.success) {
const ref = result.reference; // pass to deleteFile(), getMetadata(), generateViewUrl()
const url = result.fileUrl; // URL to access the file
}

- Update rate limiting config (if used):
// Before (v2)
new StorageManager({ driver: "s3", rateLimit: { maxRequests: 100 } });
// After (v3)
new StorageManager({ driver: "s3", rateLimiter: { maxRequests: 100 } });

If you forget to install a required SDK, you'll get a clear error message telling you exactly what to install.
Why Express Storage over Alternatives?
If you're evaluating file upload libraries for Express.js, here's how Express Storage compares:
| Feature | Express Storage | multer-s3 | express-fileupload | uploadfs |
| --------------------------- | --------------- | --------- | ------------------ | -------- |
| AWS S3 | Yes | Yes | Manual | Yes |
| Google Cloud Storage | Yes | No | No | Yes |
| Azure Blob Storage | Yes | No | No | Yes |
| Local disk | Yes | No | Yes | Yes |
| Presigned URLs | Yes | No | No | No |
| File validation | Yes | No | Partial | No |
| TypeScript (native) | Yes | No | @types | No |
| Streaming uploads | Yes | Yes | No | No |
| Switch providers at runtime | Yes (env var) | No | No | No |
| Path traversal protection | Yes | No | No | No |
| Lifecycle hooks | Yes | No | No | No |
| Batch operations | Yes | No | No | No |
| Rate limiting | Yes | No | No | No |
multer-s3 is great if you only need S3. Express Storage covers S3 plus GCS, Azure, and local disk with the same code — and adds presigned URLs, validation, and security that multer-s3 doesn't provide.
TypeScript Support
Express Storage is written in TypeScript and exports all types:
// Core — what most users need
import {
StorageManager,
InMemoryRateLimiter,
FileUploadResult,
DeleteResult,
PresignedUploadUrlResult,
StorageOptions,
FileValidationOptions,
UploadOptions,
} from "express-storage";
// Utilities — standalone helpers (import separately to keep your bundle small)
import { withRetry, formatFileSize, withConcurrencyLimit } from "express-storage/utils";
// Drivers — for custom driver implementations or direct driver use
import { BaseStorageDriver, createDriver } from "express-storage/drivers";
// Config — environment variable loading and validation
import { validateStorageConfig, loadAndValidateConfig } from "express-storage/config";
// Discriminated unions — TypeScript narrows automatically
const result: FileUploadResult = await storage.uploadFile(file);
if (result.success) {
// TypeScript knows: result is FileUploadSuccess
console.log(result.reference); // string — stored file path
console.log(result.fileUrl); // string — URL to access
} else {
// TypeScript knows: result is FileUploadError
console.log(result.error); // string (guaranteed)
}

Contributing
Contributions are welcome!
# Clone the repository
git clone https://github.com/th3hero/express-storage.git
# Install dependencies (includes all cloud SDKs for development)
npm install
# Run tests
npm test
# Run tests in watch mode
npm run test:watch
# Build for production
npm run build
# Run linting
npm run lint

License
MIT License — use it however you want.
Support
- Issues: GitHub Issues
- Author: Alok Kumar (@th3hero)
Made for developers who are tired of writing upload code from scratch.
