

@statewalker/webrun-files-s3

S3 implementation of the FilesApi interface from @statewalker/webrun-files.

Overview

This package provides a FilesApi implementation that stores files in Amazon S3 or S3-compatible object storage services (MinIO, DigitalOcean Spaces, Backblaze B2, Cloudflare R2, etc.). It maps filesystem-like operations to S3 API calls, providing:

  • Virtual directory structure using key prefixes
  • Range reads via HTTP Range headers for efficient partial access
  • Server-side copy for copy/move operations (no data transfer through client)

Installation

npm install @statewalker/webrun-files-s3 @statewalker/webrun-files @aws-sdk/client-s3

Usage

Basic Usage

import { S3Client } from '@aws-sdk/client-s3';
import { S3FilesApi } from '@statewalker/webrun-files-s3';
import { readText, writeText } from '@statewalker/webrun-files';

// Create S3 client
const s3Client = new S3Client({
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'YOUR_ACCESS_KEY',
    secretAccessKey: 'YOUR_SECRET_KEY',
  },
});

// Create S3-backed files API
const files = new S3FilesApi({
  client: s3Client,
  bucket: 'my-bucket',
  prefix: 'my-app/data', // optional key prefix
});

// Write a file
await writeText(files, '/docs/hello.txt', 'Hello, S3!');

// Read a file
const content = await readText(files, '/docs/hello.txt');
console.log(content); // "Hello, S3!"

// List directory contents
for await (const entry of files.list('/docs')) {
  console.log(entry.name, entry.kind, entry.size);
}

With S3-Compatible Storage (MinIO)

import { S3Client } from '@aws-sdk/client-s3';
import { S3FilesApi } from '@statewalker/webrun-files-s3';

const s3Client = new S3Client({
  endpoint: 'http://localhost:9000',
  region: 'us-east-1',
  credentials: {
    accessKeyId: 'minioadmin',
    secretAccessKey: 'minioadmin',
  },
  forcePathStyle: true, // Required for MinIO
});

const files = new S3FilesApi({
  client: s3Client,
  bucket: 'my-bucket',
});

With AWS IAM Roles (EC2, Lambda, ECS)

import { S3Client } from '@aws-sdk/client-s3';
import { S3FilesApi } from '@statewalker/webrun-files-s3';

// Credentials are automatically loaded from environment/IAM role
const s3Client = new S3Client({ region: 'us-east-1' });

const files = new S3FilesApi({
  client: s3Client,
  bucket: 'my-bucket',
});

API Reference

S3FilesApi

interface S3FilesApiOptions {
  /** Pre-configured S3Client instance. */
  client: S3Client;
  /** S3 bucket name. */
  bucket: string;
  /** Optional key prefix (acts as root directory). */
  prefix?: string;
  /** Part size for multipart uploads (default: 5MB, S3 minimum). */
  multipartPartSize?: number;
}

class S3FilesApi implements FilesApi {
  constructor(options: S3FilesApiOptions);

  // All FilesApi methods
  read(path: string, options?: ReadOptions): AsyncIterable<Uint8Array>;
  write(path: string, content: Iterable<Uint8Array> | AsyncIterable<Uint8Array>): Promise<void>;
  mkdir(path: string): Promise<void>;
  list(path: string, options?: ListOptions): AsyncIterable<FileInfo>;
  stats(path: string): Promise<FileStats | undefined>;
  exists(path: string): Promise<boolean>;
  remove(path: string): Promise<boolean>;
  move(source: string, target: string): Promise<boolean>;
  copy(source: string, target: string): Promise<boolean>;
}

How It Works

Path to Key Mapping

Virtual paths are mapped to S3 keys by combining the optional prefix with the path:

prefix: "my-app/data"
path:   "/docs/file.txt"
key:    "my-app/data/docs/file.txt"
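The mapping above can be sketched as a small helper. This is an illustrative sketch only; `pathToKey` is a hypothetical name, not necessarily the package's internal function:

```javascript
// Hypothetical sketch of the path-to-key mapping described above.
// Strips leading slashes from the path and trailing slashes from the
// prefix, then joins them with a single "/".
function pathToKey(prefix, path) {
  const cleanPath = String(path).replace(/^\/+/, '');
  const cleanPrefix = (prefix ?? '').replace(/\/+$/, '');
  return cleanPrefix ? `${cleanPrefix}/${cleanPath}` : cleanPath;
}

console.log(pathToKey('my-app/data', '/docs/file.txt'));
// → "my-app/data/docs/file.txt"
```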

Directory Listing

S3 doesn't have real directories, but this implementation simulates them using:

  • ListObjectsV2 with Delimiter="/", so CommonPrefixes become "subdirectory" entries
  • Keys listed under Contents become file entries

// List /docs with prefix "my-app"
// S3 request: ListObjectsV2(Prefix="my-app/docs/", Delimiter="/")
for await (const entry of files.list('/docs')) {
  // entry.kind is "file" or "directory"
}

Reading Files

Reads use GetObject with HTTP Range headers for efficient partial access:

// Read bytes 1000-1499 from a file
for await (const chunk of files.read('/large-file.bin', { start: 1000, length: 500 })) {
  // Streams directly from S3, no full file download
}
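For illustration, a `{ start, length }` read option translates to an inclusive HTTP `Range` header like so. This is a hedged sketch of the translation, not the package's actual code; `toRangeHeader` is a hypothetical name:

```javascript
// Hypothetical sketch: mapping { start, length } read options to an
// HTTP Range header value (inclusive byte range, per RFC 9110).
function toRangeHeader({ start = 0, length } = {}) {
  // Open-ended read to the end of the object when no length is given
  if (length === undefined) return `bytes=${start}-`;
  return `bytes=${start}-${start + length - 1}`;
}

console.log(toRangeHeader({ start: 1000, length: 500 }));
// → "bytes=1000-1499"
```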

Writing Files

Files are uploaded using a streaming approach that minimizes memory usage:

  • Small files (< 5MB): Uses simple PutObject for efficiency
  • Large files (>= 5MB): Uses streaming multipart upload, buffering only one part at a time

// Small file - uses PutObject
await writeText(files, '/data/file.txt', 'small content');

// Large file - automatically uses multipart upload
const largeContent = generateLargeContent(); // AsyncIterable<Uint8Array>
await files.write('/data/large-file.bin', largeContent);
// Only one 5MB part is buffered at a time

The multipartPartSize option controls part size (default: 5MB, S3 minimum).
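The one-part-at-a-time buffering could look roughly like the following re-chunking of an async byte stream. This is a sketch under the assumption that parts are accumulated in memory up to the part size; `toParts` is an illustrative name, not the package's API:

```javascript
// Hypothetical sketch: re-chunk an async iterable of Uint8Array into
// fixed-size parts, buffering at most one part's worth of bytes.
async function* toParts(source, partSize) {
  let buffer = new Uint8Array(0);
  for await (const chunk of source) {
    // Append the incoming chunk to the pending buffer
    const merged = new Uint8Array(buffer.length + chunk.length);
    merged.set(buffer);
    merged.set(chunk, buffer.length);
    buffer = merged;
    // Emit full parts as soon as they are available
    while (buffer.length >= partSize) {
      yield buffer.slice(0, partSize);
      buffer = buffer.slice(partSize);
    }
  }
  if (buffer.length > 0) yield buffer; // final, possibly short, part
}
```

Each yielded part would then be sent as one UploadPart call, so memory use stays bounded by the part size regardless of the total file size.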

Copy and Move

  • Copy uses CopyObject for single files or multiple CopyObject calls for directories
  • Move is implemented as copy + delete
  • Both operations happen server-side without transferring data through the client
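The copy-plus-delete composition for move can be sketched against the FilesApi shape shown earlier. This is a simplified illustration, not the package's actual implementation; `files` stands for any FilesApi instance:

```javascript
// Hypothetical sketch of move as copy + delete, assuming copy() and
// remove() resolve to booleans as in the FilesApi interface above.
async function move(files, source, target) {
  const copied = await files.copy(source, target);
  if (!copied) return false; // nothing to delete if the copy failed
  return files.remove(source);
}
```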

Directory Creation

S3 directories are implicit (they exist if files exist within them). The mkdir() method creates an empty directory marker object:

await files.mkdir('/empty-dir');
// Creates object: "prefix/empty-dir/" with 0 bytes

S3-Compatible Storage

This implementation works with any S3-compatible storage:

| Service             | Configuration Notes                                     |
|---------------------|---------------------------------------------------------|
| AWS S3              | Standard configuration                                  |
| MinIO               | Set forcePathStyle: true                                |
| DigitalOcean Spaces | Use endpoint: "https://<region>.digitaloceanspaces.com" |
| Backblaze B2        | Use S3-compatible endpoint                              |
| Cloudflare R2       | Use account-specific endpoint                           |
| Wasabi              | Use region-specific endpoint                            |

Testing

Tests use testcontainers with MinIO:

import { MinioContainer } from '@testcontainers/minio';
import { S3Client, CreateBucketCommand } from '@aws-sdk/client-s3';
import { S3FilesApi } from '@statewalker/webrun-files-s3';

const minioContainer = await new MinioContainer().start();

const s3Client = new S3Client({
  endpoint: minioContainer.getConnectionUrl(),
  region: 'us-east-1',
  credentials: {
    accessKeyId: minioContainer.getUsername(),
    secretAccessKey: minioContainer.getPassword(),
  },
  forcePathStyle: true,
});

await s3Client.send(new CreateBucketCommand({ Bucket: 'test-bucket' }));

const files = new S3FilesApi({
  client: s3Client,
  bucket: 'test-bucket',
});

License

MIT