# @ido_kawaz/storage-client

v7.4.0

Storage client library for S3-compatible object storage (AWS S3, MinIO, LocalStack) with multipart upload support.
## Installation

```bash
npm install @ido_kawaz/storage-client
```

## Quick Start
```typescript
import { StorageClient } from '@ido_kawaz/storage-client';
import fs from 'node:fs';

const client = new StorageClient({
  region: 'us-east-1',
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
  },
  partSize: 5 * 1024 * 1024,  // multipart upload part size in bytes (5 MiB)
  maxConcurrency: 4,          // parts uploaded in parallel
  batchSize: 10               // objects processed in parallel per batch
});

await client.uploadObject(
  'my-bucket',
  { key: 'files/report.pdf', data: fs.createReadStream('./report.pdf') },
  { ensureBucket: true }
);
```

## API
### `createStorageConfig(): StorageConfig`

Builds a validated `StorageConfig` from environment variables using Zod.

Supported environment variables:

- `AWS_ENDPOINT` (optional) - S3 endpoint URL
- `AWS_PUBLIC_ENDPOINT` (optional) - public-facing endpoint used for presigned URLs (e.g. a CDN or browser-accessible host); falls back to `AWS_ENDPOINT`
- `AWS_REGION` (optional, default `us-east-1`)
- `AWS_ACCESS_KEY_ID` (required)
- `AWS_SECRET_ACCESS_KEY` (required)
- `AWS_PART_SIZE` (optional, default `5242880`)
- `AWS_MAX_CONCURRENCY` (optional, default `4`)
- `AWS_BATCH_SIZE` (optional, default `10`)
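As a rough illustration of the defaulting and fallback rules listed above: the real package validates with Zod, and the function and type names below are hypothetical, but the variable names and default values match the documented ones.

```typescript
// Hypothetical sketch of the documented env-var defaulting -- the actual
// package uses Zod; only the variable names and defaults are taken from
// the list above.
interface EnvConfig {
  endpoint?: string;
  publicEndpoint?: string;
  region: string;
  accessKeyId: string;
  secretAccessKey: string;
  partSize: number;
  maxConcurrency: number;
  batchSize: number;
}

function configFromEnv(env: Record<string, string | undefined>): EnvConfig {
  const required = (name: string): string => {
    const value = env[name];
    if (!value) throw new Error(`${name} is required`);
    return value;
  };
  return {
    endpoint: env.AWS_ENDPOINT,
    // The public endpoint falls back to the internal endpoint when unset.
    publicEndpoint: env.AWS_PUBLIC_ENDPOINT ?? env.AWS_ENDPOINT,
    region: env.AWS_REGION ?? 'us-east-1',
    accessKeyId: required('AWS_ACCESS_KEY_ID'),
    secretAccessKey: required('AWS_SECRET_ACCESS_KEY'),
    partSize: Number(env.AWS_PART_SIZE ?? 5242880),
    maxConcurrency: Number(env.AWS_MAX_CONCURRENCY ?? 4),
    batchSize: Number(env.AWS_BATCH_SIZE ?? 10),
  };
}
```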
### `new StorageClient(config)`

Creates a new client.

`StorageConfig` extends the AWS SDK `S3ClientConfig` and includes:

- `partSize: number` - multipart upload part size in bytes
- `maxConcurrency: number` - number of parts uploaded in parallel
- `batchSize: number` - number of objects processed in parallel per batch
- `publicEndpoint?: string` - public-facing endpoint for presigned URL signing (e.g. a CDN URL)
### `ensureBucket(bucketName: string): Promise<void>`

Checks whether a bucket exists and creates it when it is missing.
### `deleteBucket(bucketName: string, onProgress?: OnProgressCallback): Promise<void>`

Deletes a bucket. If the bucket does not exist, the operation is treated as successful. If the bucket is non-empty, all objects are cleared first; `onProgress` receives `(completedBatches, totalBatches)` during that clearing phase.
### `uploadObject(bucketName: string, object: StorageObject, options?: UploadObjectOptions, onProgress?: OnProgressCallback): Promise<void>`

Uploads a single object to storage. `StorageObject` is `{ key: string; data: Readable | (() => Readable) }`.

- By default, uses a regular `PutObject` request.
- If `options?.ensureBucket` is `true`, the bucket is created automatically when missing.
- If `options?.multipartUpload` is `true`, the upload is performed as a multipart upload; `onProgress` receives `(bytesLoaded, totalBytes)` as each part is sent.
- When `data` is a factory function `() => Readable`, the stream is created fresh on each attempt, enabling automatic retry (up to 3 attempts) on transient network errors. When `data` is a plain `Readable`, no retry is attempted.
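The factory-vs-plain-stream distinction exists because a Node.js stream can only be consumed once. The retry behaviour described above could be sketched like this (a simplified illustration, not the package's actual implementation; `uploadWithRetry` and `send` are hypothetical names):

```typescript
import { Readable } from 'node:stream';

// Hypothetical sketch: retry only when the caller supplies a stream
// *factory*, because a plain Readable cannot be replayed once consumed.
type StreamSource = Readable | (() => Readable);

async function uploadWithRetry(
  source: StreamSource,
  send: (body: Readable) => Promise<void>,
  maxAttempts = 3,
): Promise<void> {
  const isFactory = typeof source === 'function';
  // A plain stream gets exactly one attempt.
  const attempts = isFactory ? maxAttempts : 1;
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      // A factory yields a fresh stream for every attempt.
      const body = isFactory ? source() : source;
      await send(body);
      return;
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```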
### `uploadObjects(bucketName: string, objects: StorageObject[], options?: UploadObjectOptions, onOperationProgress?: OnProgressCallback, onObjectProgress?: OnProgressCallback): Promise<void>`

Uploads multiple objects in parallel batches (controlled by `batchSize`). `onOperationProgress` receives `(completedBatches, totalBatches)`; `onObjectProgress` is forwarded to each individual `uploadObject` call.
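The batch-and-progress scheme described above (also used by the bulk delete operations) can be sketched as follows; `processInBatches` is a hypothetical name for illustration only:

```typescript
// Hypothetical sketch of the batching scheme: items are split into chunks
// of `batchSize`, each chunk runs in parallel, and the progress callback
// fires once per completed batch with (completedBatches, totalBatches).
type OnProgress = (completed: number, total: number) => void;

async function processInBatches<T>(
  items: T[],
  batchSize: number,
  worker: (item: T) => Promise<void>,
  onProgress?: OnProgress,
): Promise<void> {
  const totalBatches = Math.ceil(items.length / batchSize);
  for (let i = 0; i < totalBatches; i++) {
    const batch = items.slice(i * batchSize, (i + 1) * batchSize);
    await Promise.all(batch.map(worker));
    onProgress?.(i + 1, totalBatches);
  }
}
```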
`UploadObjectOptions`:

- `ensureBucket: boolean`
- `multipartUpload: boolean`
### `downloadObject(bucketName: string, objectKey: string): Promise<Readable>`

Downloads an object and returns its body as a Node.js `Readable` stream.

- Throws `StorageError` when the object retrieval fails.
- Throws `StorageError` when the storage service returns an empty body.
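Because the body comes back as a stream, callers often buffer it before use. The helper below is plain Node code, not part of this package:

```typescript
import { Readable } from 'node:stream';

// Generic helper (not part of the package): collect a Readable into a Buffer.
async function streamToBuffer(stream: Readable): Promise<Buffer> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}

// e.g.
// const body = await streamToBuffer(
//   await client.downloadObject('my-bucket', 'files/report.pdf')
// );
```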
### `getPresignedUrl(bucketName: string, objectKey: string, expiresInSeconds: number): Promise<string>`

Generates a pre-signed URL for temporary read access to an object.

- `expiresInSeconds` - how long the URL remains valid.
- Throws `StorageError` on signing failure.
### `deleteObject(bucketName: string, objectKey: string): Promise<void>`

Deletes a single object from a bucket.

- Throws `StorageError` on failure.
### `clearPrefix(bucketName: string, prefix: string, onProgress?: OnProgressCallback): Promise<void>`

Deletes all objects whose keys start with the given prefix. `onProgress` receives `(completedBatches, totalBatches)`.

- No-ops when the prefix matches no objects.
- Throws `StorageError` if listing objects fails.
## Error Handling

Storage operations can throw `StorageError`, whose message includes context about the failing operation.

```typescript
import { StorageError } from '@ido_kawaz/storage-client';
import fs from 'node:fs';

try {
  await client.uploadObject('my-bucket', { key: 'files/a.txt', data: fs.createReadStream('./a.txt') });
} catch (error) {
  if (error instanceof StorageError) {
    console.error(error.message);
  }
}
```

## Exports

- `StorageClient`
- `createStorageConfig`
- `StorageConfig`
- `StorageError`
- `StorageObject`
- `UploadObjectOptions`
- `OnProgressCallback`
## Development

```bash
npm run build
npm test
```

Useful test scripts:

- `npm test` - run tests once

## License

MIT
