# @chainsaws/s3
Python-style S3 wrapper for Node.js with async APIs, multipart upload helpers, presigned URLs, directory sync helpers, stream utilities, and an in-memory testing backend.
## Requirements

- Node.js >= 22
- AWS credentials available through the normal AWS SDK resolution chain, or explicit credentials in `S3APIConfig`
This package is ESM-only.
## Installation

```bash
npm install @chainsaws/s3
yarn add @chainsaws/s3
pnpm add @chainsaws/s3
```

## Quick Start

```typescript
import { S3API } from "@chainsaws/s3";

const s3 = new S3API("my-bucket", {
  region: "ap-northeast-2",
});

await s3.init_s3_bucket();
await s3.put("hello.txt", "Hello, world!");

const content = await s3.get("hello.txt");
console.log(content?.toString("utf-8"));

for await (const obj of s3.list("logs/")) {
  console.log(obj.Key, obj.Size);
}
```

## Design Notes
- The API keeps Python naming where it helps migration: `put`, `get`, `list`, `copy`, `put_many`, `get_many`, `query_json`, `sync_directory`, and so on.
- Node methods are naturally `async` or `AsyncGenerator` based.
- The package exposes a high-level `S3API` and a lower-level `S3` adapter. Most applications should stay on `S3API`.
- Tests can use an in-memory backend with no AWS credentials.
## Package Surface

The package is split into a few layers:

- `S3API`: high-level Python-style client
- `S3`: thin AWS SDK adapter
- stream helpers from `stream.ts`
- in-memory testing helpers from `testing.ts`

Most applications should stay on `S3API` plus the testing helpers.
## Creating the Client

```typescript
import { S3API } from "@chainsaws/s3";

const s3 = new S3API("app-assets", {
  region: "ap-northeast-2",
  max_pool_connections: 100,
  use_accelerate: false,
});
```

### Config

```typescript
type S3APIConfig = {
  region: string;
  endpoint?: string;
  force_path_style?: boolean;
  credentials?: AwsCredentialsInput;
  acl: "private" | "public-read" | "public-read-write" | "authenticated-read";
  use_accelerate: boolean;
  max_pool_connections: number;
};
```

Use `endpoint` and `force_path_style` for MinIO, LocalStack, or other S3-compatible targets.

The config keys intentionally line up with the shared AWS client contract used throughout the repo: `region`, `endpoint`, `credentials`, and `max_pool_connections`, plus S3-specific options such as `acl`, `force_path_style`, and `use_accelerate`.
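For example, a config aimed at a local MinIO container might look like this. This is a sketch only: the endpoint URL and values are placeholders, not defaults shipped by the package.

```typescript
// Illustrative config for a local S3-compatible endpoint (values are placeholders).
const localConfig = {
  region: "us-east-1",
  endpoint: "http://localhost:9000",
  force_path_style: true, // MinIO serves buckets on the path, not a subdomain
  acl: "private",
  use_accelerate: false,
  max_pool_connections: 25,
};
```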
## Simple Operations

### Put and Get

```typescript
await s3.put("notes/hello.txt", "hello");
await s3.put("data/report.json", JSON.stringify({ ok: true }), "application/json");

const buffer = await s3.get("notes/hello.txt");
console.log(buffer?.toString("utf-8"));

await s3.get("data/report.json", "./downloads/report.json");
```

TypeScript callers can also use object-style overloads when autocomplete is more valuable than Python-style positional args:
```typescript
await s3.put({
  object_key: "notes/hello.txt",
  data: "hello",
  acl: "public-read",
});

await s3.get({
  object_key: "data/report.json",
  local_path: "./downloads/report.json",
  max_retries: 5,
});
```

### Exists, Size, Delete
```typescript
if (await s3.exists("data/report.json")) {
  console.log(await s3.size("data/report.json"));
  await s3.delete("data/report.json");
}
```

## Listing Objects
```typescript
for await (const obj of s3.list("logs/", 100)) {
  console.log(obj.Key, obj.ETag);
}

for await (const obj of s3.generate_object_keys("images/")) {
  console.log(obj.Key);
}
```

## Copying Objects
```typescript
await s3.copy("incoming/a.txt", "archive/a.txt");

await s3.copy(
  "reports/daily.csv",
  "reports/daily-public.csv",
  undefined,
  "public-read"
);
```

If you want failures to throw instead of returning `false`, use `strict: true` in `copy(...)`.
## Upload Modes

### Auto Upload

```typescript
await s3.upload("small.txt", "hello");

await s3.upload(
  "big/video.mp4",
  largeBuffer,
  undefined,
  10 * 1024 * 1024,
  undefined,
  "private",
  true
);
```

`upload(...)` chooses multipart upload automatically for payloads larger than 5 MiB unless you override with `large_file`.
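The size-based dispatch can be pictured as a small pure function. The 5 MiB threshold comes from the note above; the function name and shape here are illustrative, not part of the package's API:

```typescript
const MULTIPART_THRESHOLD = 5 * 1024 * 1024; // 5 MiB, per the note above

// Illustrative only: decide between a single PutObject and a multipart upload.
function chooseUploadMode(
  payloadBytes: number,
  largeFileOverride?: boolean
): "single" | "multipart" {
  if (largeFileOverride !== undefined) {
    // An explicit large_file flag wins over the size heuristic.
    return largeFileOverride ? "multipart" : "single";
  }
  return payloadBytes > MULTIPART_THRESHOLD ? "multipart" : "single";
}
```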
### Multipart Upload

```typescript
await s3.upload_large_file(
  "exports/big.ndjson",
  hugeBuffer,
  "application/octet-stream",
  8 * 1024 * 1024,
  (current, total) => {
    console.log(`uploaded ${current}/${total}`);
  }
);
```

## Presigned URLs
```typescript
const downloadUrl = await s3.url("reports/monthly.pdf", 3600);
const uploadUrl = await s3.upload_url("incoming/photo.jpg", "image/jpeg", 900);

const explicitPutUrl = await s3.create_presigned_url_put_object(
  "incoming/raw.bin",
  undefined,
  undefined,
  900
);
```

## Batch Operations
### Multiple Uploads

```typescript
const uploadResult = await s3.put_many(
  [
    { key: "a.txt", data: "A" },
    { key: "b.txt", data: "B" },
    { key: "data.json", data: JSON.stringify({ ok: true }), content_type: "application/json" },
  ],
  4
);

console.log(uploadResult.successful);
console.log(uploadResult.failed);
```

### Multiple Downloads
```typescript
const downloadResult = await s3.get_many(
  ["a.txt", "nested/b.txt"],
  "./downloads",
  4
);

console.log(downloadResult.successful);
console.log(downloadResult.failed);
```

### Multiple Deletes
```typescript
const deleteResult = await s3.delete_many(["tmp/a.txt", "tmp/b.txt"]);
console.log(deleteResult.successful);
console.log(deleteResult.failed);
```

### Reusable Session Defaults
```typescript
const uploader = await s3.upload_session({
  acl: "public-read",
  cache_control: "max-age=300",
});

await uploader.put("assets/logo.svg", "<svg>...</svg>", "image/svg+xml");

const batch = await s3.batch_operation({
  max_workers: 8,
  chunk_size: 4 * 1024 * 1024,
});

await batch.bulk_upload([
  { object_key: "a.txt", data: "A" },
  { object_key: "b.txt", data: "B" },
]);
```

## Object Metadata and Tags
```typescript
const info = await s3.info("assets/logo.png");
console.log(info?.ContentLength, info?.ContentType);

await s3.tags("assets/logo.png", {
  env: "prod",
  service: "web",
});

const currentTags = await s3.tags("assets/logo.png");
console.log(currentTags.env);
```

For raw shapes, use `get_object_tags(...)`, `put_object_tags(...)`, or `get_object_metadata(...)`.
## S3 Select

### Query JSON Lines

```typescript
for await (const row of s3.query_json(
  "logs/app.jsonl",
  "SELECT * FROM s3object s WHERE s.level = 'ERROR'"
)) {
  console.log(row);
}
```

### Query CSV
```typescript
for await (const row of s3.query_csv(
  "users.csv",
  "SELECT * FROM s3object s WHERE CAST(s.age AS INT) >= 20",
  true,
  ","
)) {
  console.log(row);
}
```

### Advanced Select
```typescript
for await (const row of s3.select_query(
  "sample.csv",
  "SELECT * FROM s3object",
  "CSV",
  "CSV",
  "LINES",
  undefined,
  { file_header_info: "USE", delimiter: ";" },
  { delimiter: "|" }
)) {
  console.log(row);
}
```

## Streaming
### Raw Chunks

```typescript
for await (const chunk of s3.stream_object("logs/app.log", 8192)) {
  console.log(chunk.length);
}
```

### Stream Utilities
```typescript
const textStream = await s3.stream.stream_context("logs/app.log", {
  mode: "TEXT",
  encoding: "utf-8",
});
console.log(await textStream.read());

for await (const line of s3.stream.stream_lines("logs/app.log")) {
  console.log(line);
}
```

The stream manager also supports `stream_json(...)`, `stream_csv(...)`, and `stream_process(...)`.
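Line streaming over chunked reads has one subtlety worth knowing: a line can span two chunks, so a buffer must carry the partial tail forward. A minimal sketch of that pattern (illustrative only, not the package's actual implementation):

```typescript
// Illustrative: split an async stream of text chunks into complete lines,
// carrying partial lines across chunk boundaries.
async function* splitLines(chunks: AsyncIterable<string>): AsyncGenerator<string> {
  let tail = "";
  for await (const chunk of chunks) {
    const parts = (tail + chunk).split("\n");
    tail = parts.pop() ?? ""; // last piece may be an incomplete line
    yield* parts;
  }
  if (tail.length > 0) yield tail; // final line without a trailing newline
}
```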
## Directory Operations

### Upload a Directory

```typescript
const result = await s3.upload_directory("./dist", "releases/site", [
  "*.map",
  "*.tmp",
]);

console.log(result.successful);
console.log(result.failed);
```

### Download a Directory
```typescript
await s3.download_directory("releases/site/", "./local-site", ["*.html", "*.css"]);
```

### Sync a Directory
```typescript
const result = await s3.sync_directory("./public", "site", false, "size_mtime");

console.log(result.uploaded);
console.log(result.updated);
console.log(result.deleted);
console.log(result.failed);
```

`multipart_compare_mode`:

- `"size_mtime"` compares multipart objects conservatively using size and modified time.
- `"head_checksum"` uses checksum fields from `HeadObject` when available.
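The `"size_mtime"` rule can be sketched as a pure comparison: an object is considered stale when sizes differ or the local file is newer than the remote copy. The field names below are illustrative, not the package's real types:

```typescript
// Illustrative stat shape; the package's actual types may differ.
type FileStat = { size: number; mtimeMs: number };

// "size_mtime" comparison: re-upload when sizes differ or the local copy is newer.
function needsSync(local: FileStat, remote: FileStat): boolean {
  if (local.size !== remote.size) return true;
  return local.mtimeMs > remote.mtimeMs;
}
```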
## Bucket Configuration

### Bucket Policy

```typescript
await s3.put_bucket_policy({
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "PublicReadGetObject",
      Effect: "Allow",
      Principal: "*",
      Action: "s3:GetObject",
      Resource: "arn:aws:s3:::my-bucket/*",
    },
  ],
});

const policy = await s3.get_bucket_policy();
console.log(policy);
```

### Static Website Hosting
```typescript
await s3.configure_website({
  IndexDocument: { Suffix: "index.html" },
  ErrorDocument: { Key: "error.html" },
});

console.log(s3.get_website_endpoint());
```

### Public / Private Helpers

```typescript
await s3.make_bucket_public();
await s3.make_bucket_private();
```

### Lambda Notifications
```typescript
await s3.add_lambda_notification(
  "arn:aws:lambda:ap-northeast-2:123456789012:function:image-processor",
  ["s3:ObjectCreated:Put"],
  "images/",
  ".png"
);

await s3.remove_lambda_notification(
  "LambdaTrigger-deadbeef",
  "arn:aws:lambda:ap-northeast-2:123456789012:function:image-processor"
);
```

This helper also manages the corresponding Lambda invoke permission.
## Testing With In-Memory Storage

```typescript
import { create_inmemory_s3_api, seed_s3_object } from "@chainsaws/s3";

const s3 = create_inmemory_s3_api("test-bucket");

await seed_s3_object(s3, "fixtures/hello.txt", "hello", {
  content_type: "text/plain",
  tags: { env: "test" },
});

const body = await s3.get("fixtures/hello.txt");
console.log(body?.toString("utf-8"));
```

The in-memory backend supports:

- normal `put`/`get`/`list`/`delete` flows
- multipart upload helpers
- bucket policy and website config storage
- object tags
- presigned URL generation stubs
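Conceptually, an in-memory backend reduces S3 to a key/value map, which is what makes it usable in tests without credentials or network access. A stripped-down sketch of that idea (not the package's actual implementation):

```typescript
// Illustrative only: the essence of an in-memory object store.
class InMemoryBucket {
  private objects = new Map<string, Buffer>();

  put(key: string, data: string | Buffer): void {
    this.objects.set(key, Buffer.isBuffer(data) ? data : Buffer.from(data));
  }

  get(key: string): Buffer | undefined {
    return this.objects.get(key);
  }

  // Prefix listing mirrors S3's flat-namespace "directories".
  list(prefix: string): string[] {
    return [...this.objects.keys()].filter((k) => k.startsWith(prefix));
  }

  delete(key: string): boolean {
    return this.objects.delete(key);
  }
}
```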
## Current Scope

The package already covers the high-value Python parity surface, but a few gaps remain:

- some Python ergonomics map to async Node readers instead of Python context managers

For details while editing, prefer the exported API docs: the public `S3API` surface is documented with JSDoc examples.
