
s3-s3 (v1.4.0)

A Node.js library for S3 redundancy, making sure your calls to S3 keep working even if there is an issue with one S3 location. This library is intended to be used with two buckets set up with cross-region replication.

This library tries to look like a subset of the API from AWS.S3 for easy use. Normally, calls with this library will be sent to the primary S3 location. If there are any unexpected issues with an S3 call, however, it will use a secondary, failover S3 location.

Usage

Before using this library, you should have two buckets set up for cross-region replication, each replicating to the other. See Amazon's guide for setup. A few additional tips on setup:

  • Don't forget to turn on versioning for both buckets
  • After you finish the replication steps on the first bucket, remember to go back and set up replication on the second bucket as well
  • If you are starting with one bucket that already has data, make sure to use the AWS SDK for an initial copy of files from one bucket to another

Once you have two buckets to use, you can set up s3-s3 in your code by first setting up the two buckets using aws-sdk in the normal way you would set them up. Something like:

  var AWS = require('aws-sdk'),
    // your location for the AWS config of the primary bucket
    awsConfig = require('./config.json'),
    // your location for the AWS config of the secondary bucket
    awsSecondaryConfig = require('./secondary-config.json'),
    // primary bucket S3 setup
    s3Primary = new AWS.S3(awsConfig),
    // secondary bucket S3 setup
    s3Secondary = new AWS.S3(awsSecondaryConfig);

With the above, you can then set up the s3-s3 object:

  var S3S3 = require('s3-s3'),
    s3 = new S3S3(s3Primary, primaryBucketName, s3Secondary, secondaryBucketName);

You can then use s3 to make many of the same calls that you would make with AWS.S3:

  var request = s3.putObject();
  request.params = {
    'Key': key,
    'Body': cmdStream,
    'ACL': 'public-read'
  };
  request.on('success', function (response) {
    console.log('success!');
    callback();
  }).on('error', function (err, response) {
    console.log('error!');
    callback(err);
  }).on('failover', function (err) {
    // if you are streaming data in a Body param, you will need to reinitialize
    // request.params.Body here for it to work properly in failover
    console.log('failover!');
    // no callback, as we will still get an error or success
  }).send();

Differences from AWS.S3

While the API attempts to mimic AWS.S3, it's not exactly the same. Some differences:

  1. Using the request object returned from an API call is required with this library. AWS.S3 also allows you to pass parameters directly to putObject/getObject/etc., but that is not a current feature of this library.
  2. 'Bucket' is usually given in request.params. This cannot be done with this library; you always specify the buckets when initializing s3-s3.
  3. Not all methods and events are implemented. You're welcome to create a PR to add more support.
  4. The failover event used above is the one addition to the normal event list from AWS.S3. It indicates that a failover to the secondary location is being attempted due to issues communicating with the primary location.

Usage with Streams

Whenever you have a stream as part of your parameters, as the Body or elsewhere, you need to reinitialize that stream in the failover handler for the retry to work properly. For example:

  var request = s3.putObject(),
    setupBody = function () {
      // just pretend doing this makes sense
      var getFile = child_process.spawn('cat', ['myfile.txt']);
      return getFile.stdout;
    };
  request.params = {
    'Key': key,
    'Body': setupBody(),
    'ACL': 'public-read'
  };
  request.on('success', function (response) {
    console.log('success!');
    callback();
  }).on('error', function (err, response) {
    console.log('error!');
    callback(err);
  }).on('failover', function (err, response) {
    // reinitialize Body as needed during failover
    request.params.Body = setupBody();
    console.log('failover!');
    // no callback, as we will still get an error or success
  }).send();

APIs

New s3-s3 object:

  var S3S3 = require('s3-s3'),
    s3 = new S3S3(new AWS.S3(awsConfig), primaryBucketName, new AWS.S3(secondaryConfig), secondaryBucketName);

S3 APIs:

request = s3.putObject();
request = s3.deleteObject();
request = s3.deleteObjects();
request = s3.listObjects();
request = s3.getObject();

request events and send:

request.on('send', function(response) {})
       .on('retry', function(response) {})
       .on('extractError', function(response) {})
       .on('extractData', function(response) {})
       .on('success', function(response) {})
       .on('httpData', function(chunk, response) {})
       .on('complete', function(response) {})
       .on('error', function(error, response) {})
       .on('failover', function(error, response) {})
       .send();

Submitting changes

Thanks for considering making any updates to this project! Here are the steps to take in your fork:

  1. Run "npm install"
  2. Add your changes in your fork as well as any new tests needed
  3. Run "npm test"
  4. Update the HEAD section in CHANGES.md with a description of what you have done
  5. Push your changes and create the PR, and we'll try to get this merged in right away