S3 Plugin
A Flink plugin for AWS S3 integration, providing file storage, upload, and management operations. Works with AWS S3 and S3-compatible services.
Installation
npm install @flink-app/s3-plugin

Usage
Add and configure the plugin in your app startup:
import { s3Plugin } from "@flink-app/s3-plugin";
import { FlinkApp } from "@flink-app/flink";
import AppContext from "./ApplicationContext";

async function start() {
  await new FlinkApp<AppContext>({
    name: "My app",
    plugins: [
      s3Plugin({
        accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
        bucket: "my-bucket-name",
        region: "us-east-1",
        s3Acl: "public-read",
        enableUpload: true,
        uploadUrl: "/file-upload",
        maxFileSize: 10, // MB
        uploadPermissionRequired: "authenticated",
      }),
    ],
  }).start();
}

Add the plugin context to your application context:
import { s3PluginContext } from "@flink-app/s3-plugin";
import { FlinkContext } from "@flink-app/flink";

interface ApplicationContext extends FlinkContext<s3PluginContext> {
  // your context here
}

export default ApplicationContext;

Configuration Options
interface s3PluginOptions {
  accessKeyId: string; // AWS access key ID (required)
  secretAccessKey: string; // AWS secret access key (required)
  bucket: string; // S3 bucket name (required)
  s3Acl?: string; // Access control list (e.g., "public-read", "private")
  endpoint?: string; // Custom S3 endpoint (for S3-compatible services)
  signatureVersion?: string; // AWS signature version
  region?: string; // AWS region (e.g., "us-east-1")
  enableUpload: boolean; // Enable file upload endpoint (required)
  uploadUrl?: string; // Custom upload endpoint path (default: "/file-upload")
  maxFileSize?: number; // Max file size in MB (default: 10)
  uploadPermissionRequired?: string; // Permission required for uploads
}

Built-in Upload Endpoint
When enableUpload is set to true, the plugin registers a file upload endpoint:
POST /file-upload (or custom uploadUrl)
Upload a file using multipart form data:
curl -X POST "http://localhost:3333/file-upload?path=images/" \
  -F "file=@photo.jpg" \
  -H "Authorization: Bearer <token>"

Query Parameters:
path (optional): S3 path prefix for the uploaded file
Response:
{
  "status": 200,
  "data": {
    "url": "https://my-bucket.s3.amazonaws.com/images/photo-1234567890.jpg"
  }
}
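For reference, the same upload can be made from TypeScript. This is a minimal sketch assuming Node 18+ (or a browser) for the global fetch, FormData, and Blob APIs; the base URL, token, and file name are placeholders, and the "file" field name mirrors the curl example above.

import { readFile } from "node:fs/promises";

async function uploadPhoto(token: string): Promise<string> {
  // Build the multipart body; the "file" field name mirrors the curl example
  const data = await readFile("photo.jpg");
  const form = new FormData();
  form.append("file", new Blob([data]), "photo.jpg");

  // POST to the built-in upload endpoint, with an optional ?path= prefix
  const res = await fetch("http://localhost:3333/file-upload?path=images/", {
    method: "POST",
    headers: { Authorization: `Bearer ${token}` },
    body: form,
  });

  // The endpoint responds with { status, data: { url } }
  const json = await res.json();
  return json.data.url;
}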
Permission Protection:
Set uploadPermissionRequired to require authentication:
s3Plugin({
  // ...
  enableUpload: true,
  uploadPermissionRequired: "authenticated", // or any custom permission
})

S3Client API
Access the S3 client from your handlers:
import { Handler } from "@flink-app/flink";
import AppContext from "../ApplicationContext";

const UploadFile: Handler<AppContext, UploadReq, UploadRes> = async ({ ctx, req }) => {
  const s3Client = ctx.plugins.s3Plugin.s3Client;
  // Your S3 operations here
};

Available Methods
uploadFile(fileName, data, mime)
Upload a file to S3.
const url = await ctx.plugins.s3Plugin.s3Client.uploadFile(
  "documents/report.pdf",
  fileBuffer,
  "application/pdf"
);

getObject(key)
Retrieve a file from S3.
const file = await ctx.plugins.s3Plugin.s3Client.getObject("documents/report.pdf");

getObjects()
List all objects in the bucket.
const objects = await ctx.plugins.s3Plugin.s3Client.getObjects();

deleteObject(file, version?)
Delete a single object.
await ctx.plugins.s3Plugin.s3Client.deleteObject("documents/old-report.pdf");
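The optional version argument appears intended for versioned buckets, to delete one specific object version rather than the latest; a sketch, with a placeholder version ID:

// Delete a specific version of an object (versioned buckets); "<version-id>" is a placeholder
await ctx.plugins.s3Plugin.s3Client.deleteObject("documents/old-report.pdf", "<version-id>");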
deleteObjects(files)
Bulk delete multiple objects.
await ctx.plugins.s3Plugin.s3Client.deleteObjects([
  "file1.jpg",
  "file2.jpg",
  "file3.jpg",
]);

getSignedUrl(key, expires)
Generate a temporary signed URL for private objects.
const signedUrl = await ctx.plugins.s3Plugin.s3Client.getSignedUrl(
  "private/document.pdf",
  3600 // expires in seconds
);

checkIfExists(fileName)
Check if a file exists in the bucket.
const exists = await ctx.plugins.s3Plugin.s3Client.checkIfExists("documents/report.pdf");
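These methods compose naturally inside handlers. As a sketch (the GetLinkReq/GetLinkRes types and the /document/link route are hypothetical, not part of the plugin), a handler that returns a temporary download link for a private object might look like this:

import { Handler } from "@flink-app/flink";
import AppContext from "../ApplicationContext";

// Hypothetical request/response types for this sketch
interface GetLinkReq {
  key: string;
}

interface GetLinkRes {
  url: string;
}

export const Route = {
  path: "/document/link",
};

const GetDownloadLink: Handler<AppContext, GetLinkReq, GetLinkRes> = async ({ ctx, req }) => {
  const s3Client = ctx.plugins.s3Plugin.s3Client;

  // Respond with 404 if the object does not exist
  const exists = await s3Client.checkIfExists(req.body.key);
  if (!exists) {
    return { status: 404, data: { url: "" } };
  }

  // Otherwise return a signed URL valid for one hour
  const url = await s3Client.getSignedUrl(req.body.key, 3600);
  return { status: 200, data: { url } };
};

export default GetDownloadLink;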
Complete Example

import { Handler } from "@flink-app/flink";
import AppContext from "../ApplicationContext";

interface UploadDocumentReq {
  fileName: string;
  fileData: string; // base64 encoded
  mimeType: string;
}

interface UploadDocumentRes {
  url: string;
  exists: boolean;
}

export const Route = {
  path: "/document/upload",
};

const UploadDocument: Handler<AppContext, UploadDocumentReq, UploadDocumentRes> = async ({ ctx, req }) => {
  const { fileName, fileData, mimeType } = req.body;
  const s3Client = ctx.plugins.s3Plugin.s3Client;

  // Check if file already exists
  const exists = await s3Client.checkIfExists(fileName);
  if (exists) {
    return {
      status: 409,
      data: {
        url: "",
        exists: true,
      },
    };
  }

  // Upload file
  const buffer = Buffer.from(fileData, "base64");
  const url = await s3Client.uploadFile(fileName, buffer, mimeType);

  return {
    status: 200,
    data: {
      url,
      exists: false,
    },
  };
};

export default UploadDocument;

S3-Compatible Services
This plugin works with S3-compatible services such as MinIO and DigitalOcean Spaces:
s3Plugin({
  accessKeyId: "your-key",
  secretAccessKey: "your-secret",
  bucket: "my-bucket",
  endpoint: "https://nyc3.digitaloceanspaces.com", // Custom endpoint
  region: "nyc3",
  s3Acl: "public-read",
  enableUpload: true,
})

Access Control Lists (ACL)
Common ACL values:
- private: Owner gets full control, no one else has access
- public-read: Owner gets full control, public gets read access
- public-read-write: Owner gets full control, public gets read/write access
- authenticated-read: Owner gets full control, authenticated users get read access
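For example, to keep uploaded objects private and hand out access only through temporary links, you can combine a private ACL with getSignedUrl (a sketch using the options and methods documented above; the key and expiry are placeholders):

s3Plugin({
  // ...required options
  s3Acl: "private", // objects are not publicly readable
  enableUpload: true,
})

// In a handler: return a time-limited link instead of a public URL
const url = await ctx.plugins.s3Plugin.s3Client.getSignedUrl("private/report.pdf", 900); // 15 minutes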
Environment Variables
Recommended setup using environment variables:
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_REGION=us-east-1
S3_BUCKET=my-bucket-name

s3Plugin({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
  bucket: process.env.S3_BUCKET!,
  region: process.env.AWS_REGION,
  s3Acl: "public-read",
  enableUpload: true,
})

Features
- ✅ AWS S3 integration
- ✅ S3-compatible service support
- ✅ Built-in file upload endpoint
- ✅ Permission-based access control
- ✅ Multipart file upload support
- ✅ File size limits
- ✅ Signed URL generation
- ✅ Bulk operations
- ✅ File existence checking
Requirements
- AWS account with S3 access (or S3-compatible service)
- Valid AWS credentials (access key ID and secret access key)
- S3 bucket created and accessible
