
@tus/s3-store

👉 Note: since 1.0.0, the packages are split and published under the @tus scope. The old package, tus-node-server, is considered unstable and will only receive security fixes. Make sure to use the new package.

Install

In Node.js (16.0+), install with npm:

npm install @tus/s3-store

Use

const {Server} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024, // each uploaded part will be ~8MiB
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
  },
})
const server = new Server({path: '/files', datastore: s3Store})
// ...

API

This package exports S3Store. There is no default export.

new S3Store(options)

Creates a new AWS S3 store with options.

options.bucket

The bucket name.

options.partSize

The preferred size for the parts sent to S3. It cannot be lower than 5MiB or higher than 5GiB. The server calculates the optimal part size, taking this preference into account, but may increase it so the upload does not exceed S3's 10,000-part limit.
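
To make that calculation concrete, here is a minimal sketch of how a part size could be derived from the preferred size, the 5MiB S3 minimum, and the 10,000-part limit. It is illustrative only, not the package's exact logic:

// illustrative only: derive an actual part size from a preferred size,
// the S3 minimum of 5MiB, and the 10,000 parts-per-upload limit
const MIN_PART_SIZE = 5 * 1024 * 1024
const MAX_PARTS = 10_000

function optimalPartSize(uploadSize, preferredPartSize) {
  // never go below the S3 minimum part size
  let partSize = Math.max(preferredPartSize, MIN_PART_SIZE)
  // grow the parts if the upload would otherwise need more than 10,000 of them
  if (uploadSize / partSize > MAX_PARTS) {
    partSize = Math.ceil(uploadSize / MAX_PARTS)
  }
  return partSize
}

// a 500GiB upload forces parts of ~51MiB even if you preferred 8MiB parts
console.log(optimalPartSize(500 * 1024 ** 3, 8 * 1024 * 1024))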

options.s3ClientConfig

Options to pass to the AWS S3 SDK. Check out the S3ClientConfig docs for the supported options. You need to at least set the region, the bucket name, and your preferred method of authentication.

options.expirationPeriodInMilliseconds

Enables the expiration extension and sets the expiration period of an upload URL in milliseconds. Once the expiration period has passed, the upload URL will return a 410 Gone status code.

options.useTags

Some S3 providers don't support tagging objects. If you are using certain features like the expiration extension and your provider doesn't support tagging, you can set this option to false to disable tagging.
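
These two options often appear together. A sketch of a store configured for an S3-compatible provider without tagging support (the values are illustrative):

const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024,
  expirationPeriodInMilliseconds: 24 * 60 * 60 * 1000, // expire unfinished uploads after a day
  useTags: false, // e.g. for a provider without object tagging
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
  },
})

Note that with tagging disabled, the tag-based lifecycle cleanup described under Expiration below presumably cannot apply, so expired uploads would need to be removed by calling server.cleanUpExpiredUploads() instead.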

options.cache

An optional cache implementation (KvStore).

The default is an in-memory cache (MemoryKvStore). When running multiple instances of the server, you need to provide a cache implementation that is shared between all instances, such as the RedisKvStore.

See the exported KV stores from @tus/server for more information.
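
A sketch of wiring up a shared cache, assuming @tus/server exports RedisKvStore and that its constructor wraps a connected node-redis client (check the @tus/server docs for the exact signature):

const redis = require('redis')
const {RedisKvStore} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')

const client = redis.createClient({url: process.env.REDIS_URL})
// node-redis clients must be connected before use; await this in real code
client.connect()

const s3Store = new S3Store({
  cache: new RedisKvStore(client), // shared by every server instance
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
  },
})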

options.maxConcurrentPartUploads

This setting determines the maximum number of simultaneous part uploads to an S3 storage service. The default value is 60, chosen in conjunction with the typical partSize of 8MiB to aim for an effective transfer rate of 3.84Gbit/s (60 parts of roughly 8MB each, each completing in about a second, works out to about 480MB/s, or 3.84Gbit/s).

Considerations: The ideal value for maxConcurrentPartUploads varies based on your partSize and the upload bandwidth to your S3 bucket. A larger partSize means less overall upload bandwidth available for other concurrent uploads. A tuning sketch follows the list below.

  • Lowering the Value: Reducing maxConcurrentPartUploads decreases the number of simultaneous upload requests to S3. This can be beneficial for conserving memory, CPU, and disk I/O resources, especially in environments with limited system resources, where the upload speed is low, or where the part size is large.

  • Increasing the Value: A higher value potentially enhances the data transfer rate to the server, but at the cost of increased resource usage (memory, CPU, and disk I/O). This can be advantageous when the goal is to maximize throughput, and sufficient system resources are available.

  • Bandwidth Considerations: It's important to note that if your upload bandwidth to S3 is a limiting factor, increasing maxConcurrentPartUploads won’t lead to higher throughput. Instead, it will result in additional resource consumption without proportional gains in transfer speed.
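
Putting those considerations together, a minimal sketch of tuning the option for a small host with a modest uplink (the numbers are illustrative, not recommendations):

const {S3Store} = require('@tus/s3-store')

// on a ~1Gbit/s uplink there is little point keeping 60 parts in flight:
// ~15 concurrent 8MiB parts already saturate the link while holding a
// quarter of the default's parts in memory
const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024,
  maxConcurrentPartUploads: 15, // illustrative value for a ~1Gbit/s uplink
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
  },
})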

Extensions

The tus protocol supports optional extensions. Below is a table of the supported extensions in @tus/s3-store.

| Extension            | @tus/s3-store |
| -------------------- | ------------- |
| Creation             | ✅            |
| Creation With Upload | ✅            |
| Expiration           | ✅            |
| Checksum             | ❌            |
| Termination          | ✅            |
| Concatenation        | ❌            |

Termination

After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to set an S3 Lifecycle configuration to abort incomplete multipart uploads.
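
For reference, a lifecycle rule that aborts incomplete multipart uploads could look like the following; the two-day window is illustrative, and the exact schema is documented in the AWS lifecycle configuration docs:

{
  "Rules": [
    {
      "Status": "Enabled",
      "Filter": {
        "Prefix": ""
      },
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 2
      }
    }
  ]
}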

Expiration

Unlike with other stores, you do not need to call server.cleanUpExpiredUploads() to use the expiration extension with the S3 store. The store creates a Tus-Complete tag for all objects, including .part and .info files, to indicate whether an upload is finished. This means you can set up a lifecycle policy to clean them up automatically, without a CRON job.

{
  "Rules": [
    {
      "Filter": {
        "Tag": {
          "Key": "Tus-Complete",
          "Value": "false"
        }
      },
      "Expiration": {
        "Days": 2
      }
    }
  ]
}

If you want more granularity, it is still possible to configure a CRON job to call server.cleanUpExpiredUploads() yourself.
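
A minimal sketch of such a periodic sweep, using a plain interval rather than a real CRON job, and assuming (as a hypothesis) that the method resolves with the number of removed uploads:

// `server` is the @tus/server instance created in the Use section;
// run an hourly sweep instead of a CRON job
setInterval(async () => {
  try {
    const removed = await server.cleanUpExpiredUploads()
    console.log(`cleaned up ${removed} expired upload(s)`)
  } catch (err) {
    console.error('expired upload cleanup failed', err)
  }
}, 60 * 60 * 1000)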

Examples

Example: using credentials to fetch credentials inside an AWS container

The credentials config is passed directly to the AWS SDK, so you can refer to the AWS docs for the supported values of credentials.

const aws = require('aws-sdk')
const {Server} = require('@tus/server')
const {S3Store} = require('@tus/s3-store')

const s3Store = new S3Store({
  partSize: 8 * 1024 * 1024,
  s3ClientConfig: {
    bucket: process.env.AWS_BUCKET,
    region: process.env.AWS_REGION,
    credentials: new aws.ECSCredentials({
      httpOptions: {timeout: 5000},
      maxRetries: 10,
    }),
  },
})
const server = new Server({path: '/files', datastore: s3Store})
// ...

Types

This package is fully typed with TypeScript.

Compatibility

This package requires Node.js 16.0+.

Contribute

See contributing.md.

License

MIT © tus