@maydotinc/s3-syncer (v0.1.5)
Full-featured CLI to sync local directories to S3-compatible buckets via GitHub Actions or locally. You may also pull files from a remote target.
Sync local folders to S3-compatible storage (AWS S3, Cloudflare R2, MinIO, etc.) with minimal setup.
Most teams can start immediately with one command:
```sh
npx @maydotinc/s3-syncer sync
```

- If `s3-syncer.json` is missing, the CLI offers to create it or run a one-time sync (no config written).
- If credentials are missing, the CLI can prompt you, or you can run in non-interactive env mode (`--env` / `--env-file`).
Requirements
- Node.js 20+ (the generated GitHub Actions workflow uses Node 22)
- S3 credentials with permissions for `ListObjectsV2`, `PutObject`, `GetObject` (for `pull`), and optionally `DeleteObject` (when `delete: true`)
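As a reference point, an AWS IAM policy granting those permissions might look like the sketch below (the bucket name `my-assets` is a placeholder; the `ListObjectsV2` API call is covered by the `s3:ListBucket` action, and non-AWS providers such as R2 or MinIO manage permissions through their own mechanisms):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::my-assets"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-assets/*"
    }
  ]
}
```

Drop `s3:DeleteObject` if you never use `delete: true`, and `s3:GetObject` if you never use `pull`.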
Guides
Quick start
- Run sync (interactive):

  ```sh
  npx @maydotinc/s3-syncer sync
  ```

- If prompted, choose:
  - one-time sync (no config file), or
  - create a reusable `s3-syncer.json`
- Provide credentials (interactive or via env):
  - from `.env`, or
  - enter them once in the prompt
Common examples
Create config only (multi-target, interactive):
```sh
npx @maydotinc/s3-syncer init --config-only
```

Create config + GitHub Actions workflow:

```sh
npx @maydotinc/s3-syncer init --full-setup
```

Run sync in non-interactive mode (fails if creds are missing from env):

```sh
npx @maydotinc/s3-syncer sync --env
```

Pull a subpath to a specific output directory (skip confirmation):

```sh
npx @maydotinc/s3-syncer pull path/to/remote -o ./pulled --yes
```

Environment variables
Credentials (required for sync/pull)
- `AWS_S3_ACCESS_KEY_ID`
- `AWS_S3_SECRET_ACCESS_KEY`

Optional

- `AWS_S3_ENDPOINT`
- `SLACK_WEBHOOK_URL`
- `DISCORD_WEBHOOK_URL`
AWS_S3_ENDPOINT notes:
- `pull`: used as a global endpoint fallback when present.
- `sync`: used as a global endpoint fallback only when none of your targets specify an `endpoint`. If you have mixed providers/endpoints, set `endpoint` per target (often via `${...}` placeholders).
Example .env:
```sh
AWS_S3_ACCESS_KEY_ID=your_access_key
AWS_S3_SECRET_ACCESS_KEY=your_secret_key
AWS_S3_ENDPOINT=
```

Commands
- `npx @maydotinc/s3-syncer init` - interactive initializer (config-only or full setup)
- `npx @maydotinc/s3-syncer init --config-only` - generate `s3-syncer.json` (no workflow)
- `npx @maydotinc/s3-syncer init --full-setup` - generate `s3-syncer.json` + `.github/workflows/s3-syncer.yml`
- `npx @maydotinc/s3-syncer setup <directory>` - configure one target and (by default) generate/update the workflow
- `npx @maydotinc/s3-syncer setup <directory> --env-file <path>` - store an env file path on that target for local runs (no `sync --env-file` needed)
- `npx @maydotinc/s3-syncer sync` - run sync now (interactive if config/credentials are missing)
- `npx @maydotinc/s3-syncer sync --env` - load `.env` from the current working directory (non-interactive for credentials)
- `npx @maydotinc/s3-syncer sync --env-file ../.env.local` - load a specific env file (implies env mode; non-interactive for credentials)
- `npx @maydotinc/s3-syncer pull [remotePath]` - pull files from S3 to a local directory (supports config mode or "direct pull" flags)
init --full-setup vs setup
They are not identical, but very close in outcome.

`init --full-setup`

- guided multi-step flow
- can add multiple targets in one run
- meant for onboarding and multi-target repos

`setup <directory>`

- direct command for one target at a time
- better for scripted/power usage with flags

Both end up producing the same core artifacts:

- `s3-syncer.json`
- `.github/workflows/s3-syncer.yml`
So they are functionally aligned, but the interaction model differs.
Config file
Primary config file: `s3-syncer.json`

Example:

```json
{
  "targets": [
    {
      "directory": "cdn",
      "bucket": "my-assets",
      "region": "auto",
      "endpoint": "https://<account>.r2.cloudflarestorage.com",
      "prefix": "assets",
      "delete": true,
      "envFile": "./.env.cdn",
      "accessKeyId": "${AWS_S3_ACCESS_KEY_ID}",
      "secretAccessKey": "${AWS_S3_SECRET_ACCESS_KEY}"
    }
  ],
  "branch": "main",
  "notifications": {
    "slack": false,
    "discord": false
  }
}
```

envFile (per target, optional)
If `targets[].envFile` is set, `sync` / `pull` will automatically load that env file for that target (using dotenv), so you can run:

```sh
npx @maydotinc/s3-syncer sync
```

without `--env` / `--env-file`.
Notes:
- `envFile` is for local runs. The generated GitHub Actions workflow uses repo secrets (not env files).
- Env files are loaded per target and isolated, so variables from one target don't leak into the next.
${ENV_VAR} placeholders
You can reference environment variables in these target fields using `${VAR_NAME}`:

- `bucket`
- `endpoint`
- `accessKeyId`
- `secretAccessKey`
If a placeholder is present and the env var is missing/empty, the run fails with a clear error pointing to the target + field.
Example:

```json
{
  "targets": [
    {
      "directory": "cdn",
      "bucket": "${SYNC_BUCKET}",
      "region": "auto",
      "endpoint": "${SYNC_ENDPOINT}",
      "prefix": "assets",
      "delete": true,
      "accessKeyId": "${SYNC_ACCESS_KEY}",
      "secretAccessKey": "${SYNC_SECRET_KEY}"
    }
  ]
}
```

When target-level `accessKeyId` / `secretAccessKey` are set, they are preferred over the global `AWS_S3_ACCESS_KEY_ID` / `AWS_S3_SECRET_ACCESS_KEY`.
Use this pattern when you have multiple providers/targets.
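The fail-fast behavior described above can be sketched like this (an illustrative, hypothetical `resolvePlaceholder` helper, not the package's actual implementation):

```javascript
// Illustrative: expand ${VAR_NAME} placeholders in a target field and
// fail with an error naming the target + field when the variable is
// missing or empty.
function resolvePlaceholder(value, field, targetDir, env = process.env) {
  return value.replace(/\$\{([A-Za-z_][A-Za-z0-9_]*)\}/g, (_, name) => {
    const v = env[name];
    if (!v) {
      throw new Error(
        `Missing or empty env var "${name}" for target "${targetDir}", field "${field}"`
      );
    }
    return v;
  });
}

// With SYNC_BUCKET=my-assets set:
//   resolvePlaceholder("${SYNC_BUCKET}", "bucket", "cdn") -> "my-assets"
// With SYNC_BUCKET unset, the same call throws a descriptive error.
```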
Workflow behavior
Generated file:
.github/workflows/s3-syncer.yml
Behavior:
- triggers on the configured branch
- triggers only when target paths change, or when `s3-syncer.json` / the workflow file itself changes
- supports manual runs (`workflow_dispatch`)
- runs a pinned package version (the version you used when generating the workflow):

  ```sh
  npx --yes @maydotinc/[email protected] sync
  ```
GitHub secrets for Actions:
- `AWS_S3_ACCESS_KEY_ID`
- `AWS_S3_SECRET_ACCESS_KEY`
- `AWS_S3_ENDPOINT` (optional)
- optional: `SLACK_WEBHOOK_URL`, `DISCORD_WEBHOOK_URL`
- plus any `${VAR_NAME}` referenced in target `bucket`, `endpoint`, `accessKeyId`, `secretAccessKey`
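For orientation, a workflow with the behavior listed above is roughly of the following shape (an illustrative sketch only; generate the real file with `init --full-setup` rather than copying this, since the actual output may differ in details):

```yaml
name: s3-syncer
on:
  push:
    branches: [main]          # the configured branch
    paths:
      - "cdn/**"              # your target directories
      - "s3-syncer.json"
      - ".github/workflows/s3-syncer.yml"
  workflow_dispatch:          # manual runs
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
      - run: npx --yes @maydotinc/[email protected] sync
        env:
          AWS_S3_ACCESS_KEY_ID: ${{ secrets.AWS_S3_ACCESS_KEY_ID }}
          AWS_S3_SECRET_ACCESS_KEY: ${{ secrets.AWS_S3_SECRET_ACCESS_KEY }}
          AWS_S3_ENDPOINT: ${{ secrets.AWS_S3_ENDPOINT }}  # optional
```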
Pull behavior
`pull` lists remote files first, shows the total file count/size, and asks for confirmation before downloading (unless `--yes` is passed).

- If the output path is not provided via `-o, --output`, it always prompts (even with `--yes`).
- `pull <remotePath>` pulls only that subpath under the selected target prefix.
- If you have multiple targets, you can pick one with `--target <directory>`.
- If `s3-syncer.json` is missing, pull can still run by prompting for bucket/region/endpoint/prefix.
- Power users can skip prompts with direct flags: `--bucket`, `--region`, `--endpoint`, `--prefix`, `--access-key-id`, `--secret-access-key`.
How s3-syncer works
Per target:
- fingerprint local files (MD5)
- list remote objects in bucket/prefix
- compare local MD5 vs remote ETag
- upload changed/new files
- optionally delete stale remote files (`delete: true`)
If the configured prefix does not exist yet, s3-syncer treats it as empty and starts uploading (no manual prefix creation needed).
No GitHub cache is required for correctness.
Notes:
- Local dotfiles are ignored during fingerprinting (for example, `.well-known/...` will not be uploaded).
- ETag matching is used for change detection. If objects were uploaded outside of s3-syncer (for example, via multipart uploads), ETags may not match the local MD5 and those objects may be re-uploaded.
Notifications (optional)
Slack and Discord support with summary stats on successful runs.
R2 tip
For Cloudflare R2:
- region: `auto`
- endpoint: `https://<account_id>.r2.cloudflarestorage.com`
Troubleshooting
- Missing credentials: set `AWS_S3_ACCESS_KEY_ID` and `AWS_S3_SECRET_ACCESS_KEY`
- Invalid config: fix fields in `s3-syncer.json`
- Missing placeholder env var: ensure the referenced `${VAR_NAME}` exists and is non-empty in your shell / `.env`
- Missing directory: ensure build output exists before running `sync`