fmeld
v1.5.2
Move and sync files between local drives, FTP, FTPS, SFTP, Google Cloud Storage, Google Drive, Dropbox, Amazon S3, Box, Windows network shares, Android devices, and more — from one command line tool or Node.js library.
# Copy a local folder up to an FTP server
fmeld -s ~/photos -d ftp://user:pass@ftp.example.com/photos cp -r
# Sync a Google Drive folder down to an SFTP server
fmeld -S ./gdrive-creds.json -s gdrive://backups \
  -d sftp://user@example.com/backups sync -Ur
# Clean up temp files older than one day
fmeld -s /tmp clean --before "1 day ago" --clean-all
Table of contents
- Install
- Supported backends
- URL format
- Credentials
- Commands
- Options reference
- Examples
- Using as a library
- Testing
- Setting up cloud credentials
- Alternatives
Install
npm install -g fmeld

Or install locally into your project:
npm install fmeld

FTP and SFTP are included out of the box. Cloud and network backends (S3, GCS, Google Drive, Dropbox, Azure, OneDrive, WebDAV, SMB) are optional — run the interactive setup wizard to choose which you need:
fmeld setup

This presents a checkbox menu. Use arrow keys to navigate, space to toggle, and Enter to install. Alternatively, install backends individually:
npm install -g @aws-sdk/client-s3 @aws-sdk/lib-storage # S3
npm install -g @google-cloud/storage # Google Cloud Storage

If you try to use a backend whose package isn't installed, fmeld will ask whether to install it on the spot.
Supported backends
| Backend | URL scheme | Notes |
|---|---|---|
| Local filesystem | file:// or bare path | /abs, ./rel, ../rel, ~/home all work without a prefix |
| ZIP archive | zip:// or .zip path | Read and write ZIP files as a virtual directory |
| FTP | ftp:// | Active and passive mode — installed by default |
| FTPS | ftps:// | FTP over TLS (explicit) — installed by default |
| SFTP | sftp:// | SSH key or password auth — installed by default |
| Google Cloud Storage | gs:// or gcs:// | Service account JSON |
| Google Drive | gdrive:// | OAuth2, token cached after first login |
| Dropbox | dropbox:// | OAuth2, token cached after first login |
| Amazon S3 | s3:// | IAM credentials JSON or environment variables |
| WebDAV | webdav:// or webdavs:// | Nextcloud, ownCloud, NAS, and any WebDAV server |
| Azure Blob Storage | azure:// or azblob:// | Connection string or account key JSON |
| OneDrive | onedrive:// | OAuth2, token cached after first login |
| Windows Network Share | smb:// or cifs:// | SMB2/CIFS — NAS, Windows shares, Samba |
| Box | box:// | Box.com cloud storage, JWT app auth or developer token |
| Android Device (ADB) | adb:// | Android Debug Bridge — USB or TCP/IP connected devices |
URL format
For local paths, just pass the path directly — no scheme required:
fmeld -s ~/photos ls
fmeld -s ./backups ls
fmeld -s /mnt/nas/data ls
fmeld -s ~/archive.zip ls -r # .zip extension routes to zip:// automatically

For remote or protocol-specific connections use a full URL:
scheme://[user[:password]@]host[:port]/path[?key=value&...]

Examples:
zip:///home/user/archive.zip
zip:///home/user/archive.zip?password=secret
zip:///home/user/archive.zip?passwordfile=~/.zippass
zip:///home/user/archive.zip?compression=6&method=deflate
ftp://alice:secret@ftp.example.com:21/uploads
ftps://alice:secret@ftp.example.com/uploads
sftp://user@example.com:22/backups
gs://my-bucket/some/prefix
gdrive://My Drive/project-files
dropbox:///camera-uploads
s3://my-bucket/some/prefix
s3://my-bucket/path?region=eu-west-1
webdav://alice:secret@cloud.example.com/remote.php/dav/files/alice/documents
webdavs://alice:secret@cloud.example.com:8443/remote.php/dav/files/alice/photos
azure://my-container/some/prefix
onedrive://Documents/project-files
smb://user:pass@server/sharename
smb://DOMAIN;user:pass@server/sharename/path/to/dir
cifs://user:pass@nas.example.com/backups/archive
box:///My Box Folder/subfolder
adb:///sdcard/DCIM/
adb://192.168.1.100:5555/sdcard/

Query string parameters are passed through as extra options to the backend driver.
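The query-string pass-through can be sketched as follows. A minimal illustration (not fmeld's internal parser) using Node's WHATWG `URL` class; the archive path is a made-up example:

```javascript
// Illustrative sketch: split the ?key=value pairs of a backend URL into a
// plain options object and hand the bare URL to the driver.
function splitUrlOptions(rawUrl) {
  const u = new URL(rawUrl);
  const opts = {};
  for (const [key, value] of u.searchParams.entries())
    opts[key] = value;             // every ?key=value pair becomes an option
  u.search = '';                   // strip the query from the URL itself
  return { url: u.toString(), opts };
}

const { url, opts } = splitUrlOptions('zip:///home/user/archive.zip?compression=6&method=deflate');
console.log(url);   // zip:///home/user/archive.zip
console.log(opts);  // { compression: '6', method: 'deflate' }
```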
Credentials
Credentials can be supplied in three ways:
1. In the URL (convenient, but passwords appear in shell history)
fmeld -s ftp://user:pass@host/path ls

2. From a file — pass the path to a plain-text password file with -S / -E
fmeld -S /run/secrets/ftp-password -s ftp://user@host/path ls

3. From a credential root directory — fmeld searches the directory for a file whose name matches the target hostname
fmeld -c /etc/fmeld/creds -s sftp://myserver.com/backups ls
# looks for /etc/fmeld/creds/myserver.com (or similar match)

The credential root path can also be an environment variable name:
fmeld -c '$MY_CRED_DIR' -s sftp://myserver.com/backups ls

For cloud services (Google Drive, Dropbox, GCS), -S / -E should point to the downloaded JSON credentials file from the respective developer console. OAuth tokens are cached next to the credentials file (.token.json suffix) so you only need to log in once.
Commands
| Command | Description |
|---|---|
| ls | List files and directories at the source |
| cp | Copy files from source to destination |
| sync | Sync source to destination (only transfer what changed) |
| md | Create a directory at the source path |
| rm | Remove a directory at the source path |
| unlink | Remove a single file at the source path |
| clean | Delete files matching age / size / name filters |
| dupes | Find duplicate files and interactively review / delete / hardlink them |
| setup | Interactively install optional backend packages |
Multiple commands can be chained in a single invocation and will run in sequence.
Options reference
fmeld [options] [ls|cp|sync|md|rm|unlink|clean|dupes]
--- SOURCE / DESTINATION ---
-s --source [arg] Source URL or local path. Bare paths are automatically
treated as file:// — /abs, ./rel, ../rel, and ~/home
all work without a scheme prefix. Paths with a recognised
file extension are further routed to the matching backend
(e.g. archive.zip → zip://).
-S --source-cred [arg] Source credentials: path to a file, directory, or
an environment variable name (prefix with $)
-d --dest [arg] Destination URL or local path (same path expansion as --source)
-E --dest-cred [arg] Destination credentials (same formats as --source-cred)
-c --cred-root [arg] Shared credentials root directory or env variable.
fmeld searches here for a file matching the hostname.
--- TRANSFER BEHAVIOUR ---
-r --recursive Recurse into sub-directories
-U --upload (sync) Upload changed or missing files to destination
-D --download (sync) Download files missing from destination back
to source. Changed files are not downloaded; swap
source and destination if you need that.
-G --flatten Flatten the directory tree into the destination root
-k --skip Skip individual files that fail instead of aborting
-x --retry [arg] Number of times to retry on failure (default: 1)
-b --batch [arg] Max concurrent operations (default: 1)
--- FILTERING ---
-f --filter-files [arg] Keep only files whose names match this regex
-F --filter-dirs [arg] Keep only directories whose names match this regex
--before [arg] Only match files modified before this time
Accepts natural language: "1 day ago", "last Friday"
--after [arg] Only match files modified after this time
--minsize [arg] Minimum file size: bytes or unit string (10MB, 1.5GiB, …)
--maxsize [arg] Maximum file size: bytes or unit string (10MB, 1.5GiB, …)
--fnametime [arg] Regex to extract a timestamp from the file name
rather than using filesystem mtime.
Example: ([0-9]{4}-[0-9]{2}-[0-9]{2})
--- CLEAN ---
--clean-files Delete files when cleaning (required to actually
delete — omitting this lets you do a dry run)
--clean-dirs Delete directories when cleaning
--clean-all Delete both files and directories
--- DUPES ---
--by [arg] Duplicate detection mode (default: sha256)
name — same filename (case-insensitive, NFC)
name,size — same filename AND size
md5 — MD5 content hash (uses backend metadata
where available: GDrive, GCS)
sha1 — SHA-1 content hash
sha256 — SHA-256 content hash
--session [arg] Path to a YAML session file. Saves scan results so you
can review and apply in separate steps. If the file
already exists the scan is skipped and the session is
loaded directly.
--apply Apply the session non-interactively (requires --session).
Blocked if any group is still marked "review" or has a
link action without a keep. Groups where every file is
"none" are silently skipped.
--force With --apply: skip blocked groups instead of failing.
--keep [arg] Pre-populate keep decisions before review or apply:
first — keep the file that appears first in the
scan order
newest — keep the file with the latest mtime
oldest — keep the file with the earliest mtime
shortest-path — keep the file with the shortest full path
longest-path — keep the file with the longest full path
regex — keep the first file whose path matches
--keep-pattern
--keep-pattern [arg] Regex used when --keep regex is set.
--remaining [arg] Action for non-kept files when --keep is set:
review (default) — leave them for interactive review
delete — mark them for deletion
link — replace them with hardlinks (local
filesystem only)
--force-preset Re-apply preset even when a group already has decisions
from a previously loaded session.
--include-empty Include zero-byte files in duplicate detection (they are
excluded by default).
--- OUTPUT ---
-l --less Show less console output
-z --raw-size Show raw byte count instead of human-readable sizes
-t --timestamp Always prefix output lines with a timestamp
-i --detailed Show per-file transfer speed and ETA
--- AUTH ---
-p --authport [arg] Local port for OAuth redirect (default: 19227)
-u --uncached [arg] Ignore cached OAuth tokens, force re-authentication
--- MISC ---
-v --version Show version
-V --verbose Verbose logging
-h --help Show this help text
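The unit strings accepted by --minsize / --maxsize can be parsed along these lines. An illustrative sketch, not fmeld's actual parser; it assumes decimal units (KB, MB, …) use powers of 1000 and binary units (KiB, MiB, …) use powers of 1024:

```javascript
// Illustrative size-string parser for values like "10MB" or "1.5GiB".
function parseSize(s) {
  const m = /^([\d.]+)\s*([KMGTP]?i?B?)$/i.exec(s.trim());
  if (!m) throw new Error(`bad size: ${s}`);
  const n = parseFloat(m[1]);
  const unit = m[2].toUpperCase();
  if (!unit || unit === 'B') return n;       // plain byte count
  const exp = 'KMGTP'.indexOf(unit[0]) + 1;  // K=1, M=2, G=3, ...
  const base = unit.includes('I') ? 1024 : 1000;
  return n * Math.pow(base, exp);
}

console.log(parseSize('10MB'));   // 10000000
console.log(parseSize('1.5GiB')); // 1610612736
```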
Examples
Installing backends
Backend packages are optional. The setup command presents an interactive checklist — already-installed packages are pre-ticked, missing ones can be selected and installed in one step:
fmeld setup

In a non-interactive environment (CI, Docker) the same command prints the current status of every backend without prompting:
fmeld setup
# fmeld backends
#
# sftp SFTP (SSH) installed (default)
# ftp FTP installed (default)
# webdav WebDAV webdav (2 MB)
# smb Windows Network Share @marsaud/smb2 (1 MB)
# ...

If you try to use a backend whose package is not installed and a terminal is attached, fmeld prompts automatically:
'@marsaud/smb2' is required for smb://
Install @marsaud/smb2 now? [y/N]

To install a specific backend manually:
# ZIP archives
npm install -g unzipper archiver archiver-zip-encrypted
# Amazon S3
npm install -g @aws-sdk/client-s3 @aws-sdk/lib-storage
# Google Cloud Storage
npm install -g @google-cloud/storage
# Google Drive
npm install -g googleapis
# Dropbox
npm install -g dropbox-v2-api
# WebDAV
npm install -g webdav
# Azure Blob Storage
npm install -g @azure/storage-blob
# OneDrive
npm install -g @azure/msal-node
# Windows Network Shares (SMB/CIFS)
npm install -g @marsaud/smb2
# Box.com
npm install -g box-node-sdk
# Android devices (ADB)
npm install -g @devicefarmer/adbkit
List files
# List a local directory
fmeld -s ~/photos ls
# List files on an FTP server
fmeld -s ftp://user:pass@ftp.example.com/uploads ls
# List recursively with human-readable sizes
fmeld -s sftp://user@example.com/data ls -r
# List a Google Cloud Storage bucket
fmeld -S ./gcs-credentials.json -s gs://my-bucket/reports ls
# List an S3 bucket
fmeld -S ./s3-credentials.json -s s3://my-bucket/reports ls
# List an S3 bucket using environment variables for credentials
fmeld -s s3://my-bucket/reports ls

Copy files
# Copy a local directory to an FTP server
fmeld -s ~/photos -d ftp://user:pass@ftp.example.com/photos cp -r
# Copy from FTP to a local directory
fmeld -s ftp://user:pass@ftp.example.com/photos -d /tmp/photos cp -r
# Copy from SFTP to local, using a password file
fmeld -S /run/secrets/sftp-pass -s sftp://user@example.com/data \
  -d ~/data cp -r
# Copy and flatten all files into one directory (no sub-folders)
fmeld -s sftp://user@example.com/archive -d /tmp/flat cp -rG

Sync files
sync is like cp but only transfers files that are missing or have changed (comparing size and modification time). Use -U to push changes up, -D to pull changes down, or both together to mirror.
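The comparison rule can be sketched as a single predicate. This illustrates the size-and-mtime check described above; it is not fmeld's actual code, and the mtime tolerance is an assumption, added because many servers report coarse timestamps:

```javascript
// Sketch: a file needs transfer when it is missing from the destination
// or differs in size or modification time.
function needsTransfer(srcEntry, dstEntry, mtimeToleranceMs = 2000) {
  if (!dstEntry) return true;                        // missing at destination
  if (srcEntry.size !== dstEntry.size) return true;  // size differs
  return Math.abs(srcEntry.mtimeMs - dstEntry.mtimeMs) > mtimeToleranceMs;
}
```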
# Upload new or changed files from local to SFTP
fmeld -s ~/site -d sftp://user@example.com/www sync -Ur
# Pull any files missing locally from SFTP (but don't overwrite changed ones)
fmeld -s sftp://user@example.com/www -d ~/site sync -Dr
# Two-way mirror between Google Drive and SFTP
fmeld -S ./gdrive-creds.json \
  -s 'gdrive://My Drive/project' \
  -d sftp://user@example.com/project sync -UDr
# Sync Google Drive to Dropbox, 4 files at a time
fmeld -S ./gdrive-creds.json -s gdrive://backups \
  -E ./dropbox-creds.json -d dropbox:///backups sync -Ur -b 4
# Sync a local directory up to S3
fmeld -S ./s3-creds.json -s ~/backups -d s3://my-bucket/backups sync -Ur
# Sync from S3 to a local directory
fmeld -S ./s3-creds.json -s s3://my-bucket/backups -d ~/backups sync -Dr
# Sync only .log files
fmeld -s sftp://user@example.com/logs -d /var/logs/remote \
  sync -Ur --filter-files '\.log$'

ZIP archives
fmeld treats a ZIP file as a virtual directory. Just use a .zip path directly — fmeld detects the extension and routes to the ZIP backend automatically. All writes go to a disk staging area; the original archive is never modified in place. On close the archive is rebuilt, verified, and atomically swapped into place.
For large archives, fmeld prints real-time progress to stderr during compression and prints a final success line when done:
zip: compressing 1234/5678 files, 234.5 MB written (21%)
zip: saved /home/user/archive.zip (1.2 GB)

# List the contents of a ZIP archive
fmeld -s ~/archive.zip ls -r
# Copy a local directory into a new ZIP archive
fmeld -s ~/photos -d ~/photos.zip cp -r
# Extract a ZIP archive to a local directory
fmeld -s ~/photos.zip -d /tmp/extracted cp -r
# Sync a local directory into an existing ZIP archive (only changed files)
fmeld -s ~/docs -d ~/docs.zip sync -Ur
# Sync files from an Android device into a ZIP archive
fmeld -s adb:///sdcard -d ./backup.zip sync -r
# Copy from an SFTP server directly into a local ZIP archive
fmeld -S ~/.sftp-pass -s sftp://user@example.com/exports -d ~/exports.zip cp -r
# Create an AES-256 encrypted ZIP from a credential file
fmeld -E ~/.zip-pass -s ~/sensitive -d ~/sensitive.zip cp -r
# Use an inline password (shows in shell history — testing only)
fmeld -s ~/data -d 'zip:///home/user/data.zip?password=mysecret' cp -r
# Maximum compression
fmeld -s ~/logs -d 'zip:///home/user/logs.zip?compression=9' cp -r

Staging files are written beside the archive as <archive>.staging.<token>/. If the process is interrupted, the original archive is untouched and the staging directory is left on disk with a console warning. Orphan staging directories older than 24 hours are automatically removed on the next run.
Make / remove directories
# Create a directory on an SFTP server
fmeld -s sftp://user@example.com/new-folder md
# Remove a directory from Google Drive
fmeld -S ./gdrive-creds.json -s gdrive://old-folder rm
# Delete a single file from Dropbox
fmeld -E ./dropbox-creds.json -s dropbox:///notes/draft.txt unlink

Clean old files
The clean command deletes files that match your filters. By default it just reports what it would delete — you must add --clean-files, --clean-dirs, or --clean-all to actually remove anything.
# Dry run: show what would be deleted from /tmp that is older than 1 day
fmeld -s /tmp clean --before "1 day ago"
# Actually delete those files
fmeld -s /tmp clean --before "1 day ago" --clean-files
# Delete entire directories older than 7 days
fmeld -s /var/archive clean --before "7 days ago" --clean-dirs
# Delete files and directories, recursing into sub-directories
fmeld -s sftp://user@example.com/tmp clean --before "1 week ago" --clean-all -r
# Delete log files larger than 100 MB
fmeld -s /var/log clean --minsize 100MB --clean-files --filter-files '\.log$'
# Use a timestamp embedded in the file name instead of filesystem mtime
# This regex captures a date like "2024-01-31" from names like "backup-2024-01-31.tar.gz"
fmeld -s ~/backups clean \
--before "30 days ago" \
--fnametime '(\d{4}-\d{2}-\d{2})' \
--clean-files
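The --fnametime mechanism boils down to extracting a capture group and parsing it as a date. An illustrative sketch; fmeld's own handling may accept more timestamp formats:

```javascript
// Sketch: pull a timestamp out of a file name with a capturing regex
// instead of trusting filesystem mtime.
function timeFromName(name, pattern) {
  const m = pattern.exec(name);
  if (!m) return null;                 // no embedded timestamp: skip the file
  const t = new Date(m[1]);
  return isNaN(t) ? null : t;
}

const cutoff = new Date('2024-02-01');
const t = timeFromName('backup-2024-01-31.tar.gz', /(\d{4}-\d{2}-\d{2})/);
console.log(t < cutoff);  // true — this backup would match --before 2024-02-01
```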
Find and remove duplicates
The dupes command scans a source for duplicate files and lets you decide what to do with each group — keep one copy, delete the rest, or replace duplicates with hardlinks to save space.
By default dupes opens an interactive terminal UI. Use --session to save progress between sessions.
Detection modes (--by):
| Mode | What it compares |
|---|---|
| sha256 (default) | SHA-256 content hash — most accurate |
| sha1 | SHA-1 content hash |
| md5 | MD5 content hash (uses backend metadata on GDrive / GCS) |
| name | Filename only (case-insensitive, Unicode-normalized) |
| name,size | Filename AND size — faster than hashing |
# Scan a local directory for duplicates and review interactively
fmeld -s ~/photos dupes -r
# Scan a ZIP archive for internal duplicates
fmeld -s ~/Backup/archive.zip dupes -r --session dupes.yml
# Use a session file — scan once, resume review later
fmeld -s ~/photos dupes -r --session ~/photos-dupes.yml
# Resume an existing session (no rescan)
fmeld -s ~/photos dupes --session ~/photos-dupes.yml
# Detect duplicates by name only (fast, no hashing)
fmeld -s ~/photos dupes -r --by name
# Pre-mark the newest file in each group as "keep" before reviewing
fmeld -s ~/photos dupes -r --keep newest
# Auto-delete all non-kept files without interactive review
fmeld -s ~/photos dupes -r \
--keep newest --remaining delete \
--session ~/photos-dupes.yml --apply
# Keep whichever copy lives under /archive/, delete the rest
fmeld -s ~/photos dupes -r \
--keep regex --keep-pattern '/archive/' \
--remaining delete --session ~/photos-dupes.yml --apply
# Replace duplicates with hardlinks (saves disk space, local filesystem only)
fmeld -s ~/photos dupes -r \
--keep newest --remaining link \
--session ~/photos-dupes.yml --apply
# Skip groups that still need a decision instead of failing (non-interactive)
fmeld -s ~/photos dupes --session ~/photos-dupes.yml --apply --force

Interactive UI key bindings:
| Key | Action |
|---|---|
| ↑ / ↓ | Move between files in a group |
| ← / → | Move between duplicate groups |
| k | Mark current file as keep |
| d | Mark current file as delete |
| l | Mark current file as link (hardlink, local only) |
| r | Mark current file as review (clear decision) |
| n | Mark current file as none (no action) |
| s | Save session to current file |
| S | Save session to a new file (prompts for path) |
| a | Apply decisions (shows confirmation screen first) |
| R | Rescan source and carry forward existing decisions |
| A | Abort — discard all staged changes and restore the source to its original state (only shown for backends that support staged writes, e.g. zip://) |
| q | Quit |
Applied and skipped groups are highlighted in the UI: the group title and a banner line are colored (green for applied, yellow for skipped, red for failed), all file rows are dimmed, and a progress bar at the bottom shows the status of every group at a glance. Each bar segment is colored by the state of the groups it covers — green (applied), yellow (skipped), red (failed), or dim (pending) — so you can immediately see where unresolved groups are. Your current position is marked with a bold block in the bar.
Using as a library
fmeld exports all of its connection types and helper functions so you can use them directly in your own Node.js code.
const path = require('path');
const os = require('os');
const fmeld = require('fmeld');
async function example()
{
// Bare paths work just like in the CLI
const local = fmeld.getConnection(path.join(os.tmpdir(), 'data'), null, {});
const ftp = fmeld.getConnection('ftp://guest:pass@ftp.example.com/data', null, {verbose: true});
// Connect both
await Promise.all([local.connect(), ftp.connect()]);
// List remote files
const files = await ftp.ls('/');
console.log(files);
// Sync remote -> local (upload any missing or changed files)
await fmeld.syncDir(ftp, local, ftp.makePath(), local.makePath(),
{
recursive : true,
upload : true,
progress : fmeld.stdoutProgress
});
// Clean up
await Promise.all([ftp.close(), local.close()]);
console.log('Done');
}
example().catch(console.error);

Available exports
const fmeld = require('fmeld');
fmeld.getConnection(url, credFile, opts) // Create a backend client from a URL or bare path
fmeld.copyFile(src, dst, from, to, size, opts) // Copy a single file
fmeld.copyDir(src, dst, from, to, opts) // Copy a directory
fmeld.syncDir(src, dst, from, to, opts) // Sync two directories
fmeld.cleanDir(src, from, opts) // Clean a directory
fmeld.findDuplicates(src, from, opts) // Scan for duplicate files; returns Promise<sessionData>
fmeld.dupeSession // Session helpers: loadSession, saveSession, applyPreset,
// validateGroup, applySession, carryForward
fmeld.dupeUI // Interactive terminal review UI: runInteractive(state)
fmeld.stdoutProgress(args, opts) // Built-in progress reporter
fmeld.toHuman(bytes) // Format bytes as "1.23 MB" etc.
// Low-level client constructors (if you want to instantiate directly)
fmeld.fakeClient(args, opts) // In-memory fake tree — useful for testing
fmeld.fileClient(args, opts)
fmeld.zipClient(args, opts)
fmeld.ftpClient(args, opts)
fmeld.sftpClient(args, opts)
fmeld.gcsClient(args, opts)
fmeld.gdriveClient(args, opts)
fmeld.dropboxClient(args, opts)
fmeld.s3Client(args, opts)
fmeld.webdavClient(args, opts)
fmeld.azblobClient(args, opts)
fmeld.onedriveClient(args, opts)
fmeld.smbClient(args, opts)
fmeld.boxClient(args, opts)
fmeld.adbClient(args, opts)
// Backend registry and optional-dependency helpers
fmeld.setup.BACKENDS // Array of backend descriptors (key, label, pkgs, size, schemes, extensions)
fmeld.setup.pkgAvailable(name) // Returns true if an npm package is installed
fmeld.setup.requireBackend(pkg, hint) // require() with a typed BACKEND_NOT_INSTALLED error
fmeld.setup.getBackendByPkg(pkg) // Look up a backend descriptor by package name
fmeld.setup.installPackages(pkgs) // Install packages into fmeld's node_modules

All client objects expose the same interface:
client.connect() // Returns Promise
client.close() // Returns Promise
client.ls(path) // Returns Promise<FileList>
client.mkDir(path, opts) // Returns Promise
client.rmFile(path) // Returns Promise
client.rmDir(path, opts) // Returns Promise
client.createReadStream(path) // Returns Promise<ReadableStream>
client.createWriteStream(path) // Returns Promise<WritableStream>
client.makePath(suffix) // Returns the full path string
client.getPrefix(suffix) // Returns the URL prefix string
client.isConnected() // Returns boolean
// Backends that use staging (e.g. zip://) also expose:
client.abort() // Returns Promise — discard staged changes, leave original untouched
Setting up cloud credentials
Amazon S3
fmeld can authenticate with S3 in two ways:
Option 1 — Credentials JSON file (recommended for explicit control)
Create a JSON file with your IAM access key:
{
"access_key_id": "AKIAIOSFODNN7EXAMPLE",
"secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"region": "us-east-1"
}

Pass the file with -S (source) or -E (destination):
fmeld -S ./s3-creds.json -s s3://my-bucket/backups ls
fmeld -S ./s3-creds.json -s ~/data -d s3://my-bucket/data cp -r

Option 2 — Environment variables (no credential file needed)
Set the standard AWS environment variables and omit -S / -E:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
fmeld -s s3://my-bucket/backups ls

The AWS SDK also honours ~/.aws/credentials and IAM instance roles automatically.
Specifying the region or a custom endpoint via the URL
# Override region in the URL query string
fmeld -S ./s3-creds.json -s 's3://my-bucket/data?region=eu-west-1' ls
# Use an S3-compatible service (MinIO, Wasabi, Cloudflare R2, …)
fmeld -S ./s3-creds.json -s 's3://my-bucket/data?endpoint=https://s3.example.com' ls

To generate IAM credentials, go to the AWS IAM Console, create a user with AmazonS3FullAccess (or a least-privilege policy), and create an access key under Security credentials.
Google Cloud Storage
- Go to the Google Cloud Console and create or select a project.
- Enable the Cloud Storage API.
- Create a Service Account, then download its JSON key file.
- Pass the key file with -S (source) or -E (destination):
fmeld -S ./service-account.json -s gs://my-bucket/path ls

Google Drive
- Go to the Google Cloud Console and create or select a project.
- Enable the Google Drive API.
- Create an OAuth 2.0 Client ID (Desktop application type) and download the JSON file.
- On the first run, fmeld will print a URL — open it in your browser to authorize access. The resulting token is saved alongside the credentials file and reused automatically on future runs.
fmeld -S ./gdrive-oauth-client.json -s gdrive://My\ Drive/backups ls

To force re-authentication (e.g. after revoking access):
fmeld -S ./gdrive-oauth-client.json -u 1 -s gdrive://My\ Drive/backups ls

Dropbox
- Go to the Dropbox App Console and create an app.
- Set the redirect URI to http://localhost:19227 (or your chosen --authport).
- Download or create a JSON credentials file with at least these fields:
{
"client_id": "your-app-key",
"client_secret": "your-app-secret",
"redirect_uris": ["http://localhost:19227"]
}

- On the first run, fmeld will print a URL to authorize. The token is then cached for future runs.
fmeld -E ./dropbox-creds.json -d dropbox:///uploads ls
WebDAV
Pass credentials directly in the URL or via a plain-text password file (-S / -E):
# HTTP WebDAV with user/pass in URL
fmeld -s 'webdav://alice:secret@cloud.example.com/remote.php/dav/files/alice/docs' ls
# HTTPS WebDAV using a password file
fmeld -S ./webdav-pass.txt -s webdavs://alice@cloud.example.com/remote.php/dav/files/alice/docs ls
# Sync local to Nextcloud
fmeld -S ./webdav-pass.txt \
-s ~/documents \
  -d webdavs://alice@cloud.example.com/remote.php/dav/files/alice/documents \
  sync -Ur

The password file should contain only the password on a single line.
Azure Blob Storage
Create a JSON credentials file — pick one of these formats:
Option 1 — Connection string (easiest, found in the Azure Portal under your storage account → Access keys):
{
"connection_string": "DefaultEndpointsProtocol=https;AccountName=mystorageaccount;AccountKey=base64key==;EndpointSuffix=core.windows.net"
}

Option 2 — Account name + key:
{
"account_name": "mystorageaccount",
"account_key": "base64encodedkey=="
}

Option 3 — SAS token:
{
"account_name": "mystorageaccount",
"sas_token": "sv=2021-06-08&ss=b&srt=co&sp=rwdlacuptfx&..."
}

Without a credentials file, fmeld reads AZURE_STORAGE_CONNECTION_STRING from the environment.
The URL hostname is the container name; the path is an optional blob prefix:
fmeld -S ./azure-creds.json -s azure://my-container/backups ls
fmeld -S ./azure-creds.json -s ~/data -d azure://my-container/data cp -r
fmeld -S ./azure-creds.json -s azure://my-container/data -d ~/data sync -Dr
OneDrive
- Go to Azure App registrations and create a new registration.
- Set the redirect URI to http://localhost:19227 (Web type, or your chosen --authport).
- Under Certificates & secrets, create a new client secret and copy the value.
- Under API permissions, add Microsoft Graph → Files.ReadWrite (Delegated), then grant admin consent.
- Create a JSON credentials file:
{
"client_id": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"client_secret": "your-client-secret-value",
"tenant_id": "common"
}

On the first run, fmeld will print an authorization URL. After you log in and grant access, the token is cached alongside the credentials file (.token.json) for future runs.
fmeld -S ./onedrive-creds.json -s onedrive://Documents/backups ls
fmeld -S ./onedrive-creds.json -s ~/docs -d onedrive://Documents/docs sync -Ur

To force re-authentication:
fmeld -S ./onedrive-creds.json -u 1 -s onedrive://Documents/backups ls
Windows Network Shares (SMB/CIFS)
fmeld connects to SMB2/CIFS shares (Windows file shares, NAS devices, Samba servers) using the @marsaud/smb2 package — no native binaries or smbclient install required.
URL format:
smb://[domain;]user:pass@server/sharename[/sub/path]
cifs://[domain;]user:pass@server/sharename[/sub/path]

The first path component after the hostname is always the share name. Any remaining path is the subdirectory within the share.
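That share-name rule can be sketched with Node's URL class (the server name below is made up):

```javascript
// Sketch: in an smb:// URL, the first path segment is the share name and
// the rest is a path within the share.
function parseSmbPath(rawUrl) {
  const u = new URL(rawUrl);
  const [share, ...rest] = u.pathname.split('/').filter(Boolean);
  return { host: u.hostname, share, subpath: rest.join('/') };
}

console.log(parseSmbPath('smb://server.example.com/documents/reports/2024'));
// { host: 'server.example.com', share: 'documents', subpath: 'reports/2024' }
```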
Examples:
# List a share
fmeld -s smb://alice:secret@server.example.com/documents ls
# Sync local → share
fmeld -s ~/docs -d smb://alice:secret@server.example.com/documents sync -Ur
# Include a Windows domain (two equivalent forms)
fmeld -s 'smb://CORP;alice:secret@server.example.com/shared/reports' ls
fmeld -s 'smb://alice:secret@server.example.com/shared/reports?domain=CORP' ls
# Use a password file instead of embedding credentials in the URL
fmeld -S /run/secrets/smb-pass -s smb://alice@server.example.com/backups ls
# Copy from a Windows share to local
fmeld -s smb://alice:s3cr3t@winserver/myshare/exports -d ~/exports cp -r

The password file (passed with -S / -E) should contain only the password on a single line.
SMB port 445 is used by default. To use a non-standard port, append it to the hostname:
fmeld -s smb://alice:secret@server.example.com:4450/share ls
Box
fmeld connects to Box.com using the box-node-sdk package and supports two credential modes:
Option 1 — JWT app config (recommended for production)
- Go to the Box Developer Console and create a new app with Server Authentication (JWT).
- Generate an RSA key pair and download the app config JSON.
- Approve the app in your Box Admin Console.
fmeld -S ./box-app-config.json -s 'box:///My Folder' ls

Option 2 — Developer token (quick testing only — expires after 60 minutes)
{
"client_id": "your-client-id",
"client_secret": "your-client-secret",
"token": "your-developer-token"
}

fmeld -S ./box-dev-token.json -s 'box:///My Folder' ls
fmeld -S ./box-dev-token.json -s ~/docs -d box:///Documents sync -Ur

The URL path is the folder tree within Box. The root maps to the top of the authenticated account.
Android Devices (ADB)
fmeld talks to Android devices via the Android Debug Bridge using the @devicefarmer/adbkit package. No credentials file is required — ADB handles its own device authorization via the on-device prompt.
Prerequisites:
- Install Android SDK Platform Tools (provides the adb binary)
- Enable Developer options and USB debugging on the device
- For TCP/IP connections, run adb tcpip 5555 on the device first
URL formats:
adb:///sdcard/DCIM/ — first available (USB or already-connected TCP/IP) device
adb://SERIALNUMBER/sdcard/ — specific device by USB serial number
adb://192.168.1.100:5555/sdcard/ — TCP/IP connected device

The serial number of connected devices can be found with adb devices.
Examples:
# List files on the first connected Android device
fmeld -s adb:///sdcard/DCIM/ ls
# Copy photos from a specific device to local
fmeld -s adb://R58M123ABCD/sdcard/DCIM/Camera -d ~/photos cp -r
# Sync from a TCP/IP connected device
fmeld -s adb://192.168.1.100:5555/sdcard/Documents -d ~/android-docs sync -Dr
# Upload files to the device
fmeld -s ~/music -d adb:///sdcard/Music cp -r
Testing
Unit tests
The unit test suite uses the built-in node:test runner (Node 18+):
node --test test/test.js

Tests cover toHuman, promiseWhile/promiseWhileBatch, parseParams, getConnection protocol dispatch (including bare-path expansion and extension-based routing), all client constructors, copyDir, syncDir, cleanDir, loadConfig, and the dupes command (normalizeFileName, findDuplicates, applyPreset, validateGroup, carryForward, session save/load round-trips, and applySession including delete, hardlink, all-none auto-skip, and force-skip). The zip:// backend is covered including writes, deletes, directory operations, orphan cleanup, and abort(). Filesystem tests create and clean up their own temporary directories under os.tmpdir().
Docker smoke tests
The smoke test suite runs fmeld against real protocol servers inside Docker and verifies every backend end-to-end:
npm run smoketest
# or directly:
node run-smoketests.js
Backends covered:
| Backend | Service |
|---|---|
| ftp:// | pyftpdlib |
| ftps:// | pyftpdlib with TLS (explicit) |
| sftp:// | OpenSSH sshd |
| webdav:// | rclone serve webdav |
| webdavs:// | rclone serve webdav with TLS |
| smb:// | Samba |
| s3:// | MinIO |
| azblob:// | Azurite |
| gcs:// | fsouza/fake-gcs-server |
| zip:// | local filesystem (no Docker service) |
Each backend is exercised through: recursive listing, seed download with byte-exact content verification (including binary files), upload round-trip, cross-backend copy, unlink, rm, sync delta (v1 → v2), and duplicate detection. The runner writes a Markdown and JSON report to reports/ on every run.
To clean up Docker containers, images, and volumes after a run:
npm run smoketest:cleanup
Full details are in docker/live-test/README.md.
Live cloud smoke tests
For backends that require real vendor accounts (gdrive://, dropbox://, onedrive://, box://), a separate manifest-driven runner is provided. It is designed for beta testers to run against their own accounts.
Create a manifest file (see todo/smoketests.md for the full schema):
version: 1
report_dir: ./reports
allow_destructive: true
backends:
  gdrive:
    enabled: true
    cred_file: ~/.config/fmeld/gdrive.json
    root: "fmeld-smoketests"
    ops: [ls, cp, sync, unlink, rm]
Validate the manifest and environment without writing anything:
node run-live-cloud-smoketests.js --config live-tests.yml --doctor
# or:
npm run smoke:doctor -- --config live-tests.yml
Run the smoke tests:
node run-live-cloud-smoketests.js --config live-tests.yml
# or:
npm run smoke:live -- --config live-tests.yml
Each enabled backend runs through: listing, upload round-trip with binary content verification, cross-backend copy, unlink, rm, and sync delta (v1 → v2). Backends with missing credentials are skipped and reported as Skipped (no credentials) rather than a failure. The runner writes reports/live-smoke-<timestamp>.md and reports/live-smoke-<timestamp>.json, suitable for attaching to bug reports.
License
MIT — see LICENSE
Alternatives
| Project | Language / Runtime | Type | Backends | Notable features | License |
|---|---|---|---|---|---|
| rclone | Go (static binary) | CLI + HTTP API | 70+ | Widest backend coverage, battle-tested, no runtime dep | MIT |
| Cyberduck / duck | Java | GUI + CLI | ~30 | Desktop GUI, bookmarks, drag-and-drop | GPL v3 |
| lftp | C++ | CLI | ~10 | Parallel transfers, resumable, rich scripting language | GPL v3 |
| Flysystem | PHP | Library | ~15 | Uniform API for PHP apps, community adapters | MIT |
| Apache Commons VFS | Java | Library | ~15 | Standard Java VFS abstraction, Apache ecosystem | Apache 2.0 |
| fmeld | Node.js | CLI + Library | 15 | ZIP archives as virtual directories, interactive dedup UI, embeddable in JS apps | MIT |
rclone
rclone is the most widely used tool in this category. It supports more backends than any other project listed here, ships as a single static binary with no runtime dependency, and has years of production use behind it. Choose rclone when you need the broadest possible backend coverage, are working in a polyglot or shell-scripting environment, or need something you can drop onto any machine without installing a runtime.
Cyberduck / duck
Cyberduck is primarily a desktop GUI application for macOS and Windows; duck is its companion CLI. It covers a wide range of cloud and server backends and is well-suited to interactive use. The GUI provides visual browsing, bookmarks, and drag-and-drop transfers. Choose Cyberduck if your primary workflow is interactive file management rather than scripted or automated transfers, or if your team prefers a GUI tool.
lftp
lftp is a mature, Unix-native command-line client focused on FTP, FTPS, SFTP, HTTP, and HTTPS. It supports parallel transfers, mirroring, complex scripting via its built-in command language, and resumable downloads. It does not cover cloud object storage (S3, GCS, etc.). Choose lftp when your transfers are FTP/SFTP-centric, you need fine-grained control over connection behaviour, or you require a lightweight dependency on traditional Unix systems.
Flysystem
Flysystem is a PHP filesystem abstraction library. It provides a uniform API across local, FTP, SFTP, S3, Azure, GCS, and other storage backends via community adapters. It is a library only — there is no CLI. Choose Flysystem when building PHP applications that need to read and write files across multiple storage providers without coupling your code to a specific backend.
Apache Commons VFS
Apache Commons VFS is a Java library that exposes a virtual filesystem API over FTP, SFTP, HTTP, HTTPS, SMB, local, and compressed archives. It is tightly integrated with the Java ecosystem and is often used inside larger Apache projects. There is no CLI. Choose Commons VFS when building Java applications that need a standard, well-tested abstraction for accessing remote filesystems, particularly in environments that already use Apache libraries.
fmeld
fmeld is a Node.js package with a CLI and a library API. It requires Node.js and is early-stage compared to the others listed here. Backend coverage is narrower than rclone's, there is no static binary, and it has a much smaller community and track record.
It differs from the others in a few specific ways: .zip files are treated as a mountable backend rather than just a transfer target; and there is a built-in interactive UI for finding and resolving duplicate files. The library API exposes each backend as an object with a uniform interface, which can be useful when embedding file operations in a Node.js application.
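The uniform-interface idea can be illustrated with a toy in-memory client. Every name here (MemClient, write, read, ls, copyAll) is hypothetical and written only for this sketch — it is not fmeld's actual API; see the Using as a library section for the real signatures:

```javascript
// All names here are hypothetical illustrations, not fmeld's real API.
class MemClient {
  constructor() { this.files = new Map(); }
  async write(path, data) { this.files.set(path, data); }
  async read(path) { return this.files.get(path); }
  async ls(prefix) {
    return [...this.files.keys()].filter((p) => p.startsWith(prefix));
  }
}

// Code written against the shared shape works with any backend object,
// regardless of what storage sits behind it.
async function copyAll(src, dst, prefix) {
  for (const p of await src.ls(prefix)) await dst.write(p, await src.read(p));
}

// Usage: copy everything under /photos from one "backend" to another.
(async () => {
  const a = new MemClient(), b = new MemClient();
  await a.write('/photos/1.jpg', 'bytes');
  await copyAll(a, b, '/photos');
  console.log(await b.read('/photos/1.jpg')); // 'bytes'
})();
```

The value of the pattern is that transfer logic is written once against the interface, and each backend only has to implement the same small set of methods.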
Links
- GitHub: https://github.com/wheresjames/fmeld-js
- Issues: https://github.com/wheresjames/fmeld-js/issues
- npm: https://www.npmjs.com/package/fmeld
