diff --git a/MANUAL.html b/MANUAL.html
index 29c5bfac1..0cbfcecd6 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -81,8 +81,74 @@
+
NAME
+rclone - manage files on cloud storage
+SYNOPSIS
+Usage:
+ rclone [flags]
+ rclone [command]
+
+Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn't already exist.
+ mount Mount the remote as file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command.
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+Use "rclone [command] --help" for more information about a command.
+Use "rclone help flags" to see the global flags.
+Use "rclone help backends" for a list of supported services.
+
Rclone syncs your files to cloud storage

@@ -411,7 +477,7 @@ kill %1
Snap installation

Make sure you have Snapd installed
-$ sudo snap install rclone
+$ sudo snap install rclone
+Due to the strict confinement of Snap, the rclone snap cannot access the real /home/$USER/.config/rclone directory; the default config path is as below.
- Default config directory:
@@ -585,7 +651,7 @@ rclone copy /local/path remote:path # copies /local/path to the remote
rclone sync --interactive /local/path remote:path # syncs /local/path to the remote
rclone config
Enter an interactive configuration session.
-Synopsis
+Synopsis
Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config [flags]
Options
@@ -613,7 +679,7 @@ rclone sync --interactive /local/path remote:path # syncs /local/path to the rem
rclone copy
Copy files from source to dest, skipping identical files.
-Synopsis
+Synopsis
Copy the source to the destination. Does not transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. If you want to also delete files from destination, to make it match source, use the sync command instead.
Note that it is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
To copy single files, use the copyto command instead.
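As a quick illustration of the difference (a sketch only; the remote name, paths and file names below are placeholders for your own):

```shell
# Copy the *contents* of /local/dir into remote:backup -- the directory
# name itself is not recreated on the remote.
rclone copy /local/dir remote:backup

# Copy a single file, optionally under a new name, with copyto:
rclone copyto /local/dir/report.txt remote:backup/report-2024.txt
```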
@@ -716,7 +782,7 @@ destpath/sourcepath/two.txt
rclone sync
Make source and dest identical, modifying destination only.
-Synopsis
+Synopsis
Sync the source to the destination, changing the destination only. Doesn't transfer files that are identical on source and destination, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary (except duplicate objects, see below). If you don't want to delete files from destination, use the copy command instead.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
rclone sync --interactive SOURCE remote:DESTINATION
@@ -857,7 +923,7 @@ destpath/sourcepath/two.txt
rclone move
Move files from source to dest.
-Synopsis
+Synopsis
Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server-side directory move operation.
To move single files, use the moveto command instead.
If no filters are in use and if possible, this will server-side move source:path into dest:path. After this, source:path will no longer exist.
@@ -948,7 +1014,7 @@ destpath/sourcepath/two.txt
rclone delete
Remove the files in path.
-Synopsis
+Synopsis
Remove the files in path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files.
rclone delete only deletes files but leaves the directory structure alone. If you want to delete a directory and all of its contents use the purge command.
If you supply the --rmdirs flag, it will remove all empty directories along with it. You can also use the separate command rmdir or rmdirs to delete empty directories only.
@@ -1003,8 +1069,9 @@ rclone --dry-run --min-size 100M delete remote:path
rclone purge
Remove the path and all of its contents.
-Synopsis
+Synopsis
Remove the path and all of its contents. Note that this does not obey include/exclude filters - everything will be removed. Use the delete command if you want to selectively delete files. To delete empty directories only, use command rmdir or rmdirs.
+The concurrency of this operation is controlled by the --checkers global flag. However, some backends will implement this command directly, in which case --checkers will be ignored.
Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.
rclone purge remote:path [flags]
Options
@@ -1036,7 +1103,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone rmdir
Remove the empty directory at path.
-Synopsis
+Synopsis
This removes the empty directory given by path. It will not remove the path if it has any objects in it, not even empty subdirectories. Use command rmdirs (or delete with option --rmdirs) to do that.
To delete a path and any objects in it, use the purge command.
rclone rmdir remote:path [flags]
@@ -1054,7 +1121,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone check
Checks the files in the source and destination match.
-Synopsis
+Synopsis
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination.
For the crypt remote there is a dedicated command, cryptcheck, that is able to check the checksums of the encrypted files.
If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
@@ -1121,7 +1188,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone ls
List the objects in the path with size and path.
-Synopsis
+Synopsis
Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default.
Eg
$ rclone ls swift:bucket
@@ -1180,7 +1247,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone lsd
List all directories/containers/buckets in the path.
-Synopsis
+Synopsis
Lists the directories in the source path to standard output. Does not recurse by default. Use the -R flag to recurse.
This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of the directory. Eg
$ rclone lsd swift:
@@ -1244,7 +1311,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone lsl
List the objects in path with modification time, size and path.
-Synopsis
+Synopsis
Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default.
Eg
$ rclone lsl swift:bucket
@@ -1303,7 +1370,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone md5sum
Produces an md5sum file for all the objects in the path.
-Synopsis
+Synopsis
Produces an md5sum file for all the objects in the path. This is in the same format as the standard md5sum tool produces.
By default, the hash is requested from the remote. If MD5 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling MD5 for any remote.
For other algorithms, see the hashsum command. Running rclone md5sum remote:path is equivalent to running rclone hashsum MD5 remote:path.
@@ -1350,7 +1417,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone sha1sum
Produces an sha1sum file for all the objects in the path.
-Synopsis
+Synopsis
Produces an sha1sum file for all the objects in the path. This is in the same format as the standard sha1sum tool produces.
By default, the hash is requested from the remote. If SHA-1 is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling SHA-1 for any remote.
For other algorithms, see the hashsum command. Running rclone sha1sum remote:path is equivalent to running rclone hashsum SHA1 remote:path.
@@ -1398,7 +1465,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone size
Prints the total size and number of objects in remote:path.
-Synopsis
+Synopsis
Counts objects in the path and calculates the total size. Prints the result to standard output.
By default the output shows values both in human-readable format and as raw numbers (the global option --human-readable is not considered). Use option --json to format output as JSON instead.
Recurses by default, use --max-depth 1 to stop the recursion.
@@ -1442,7 +1509,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone version
Show the version number.
-Synopsis
+Synopsis
Show the rclone version number, the go version, the build target OS and architecture, the runtime OS and kernel version and bitness, build tags and the type of executable (static or dynamic).
For example:
$ rclone version
@@ -1478,7 +1545,7 @@ beta: 1.42.0.5 (released 2018-06-17)
rclone cleanup
Clean up the remote if possible.
-Synopsis
+Synopsis
Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes.
rclone cleanup remote:path [flags]
Options
@@ -1495,7 +1562,7 @@ beta: 1.42.0.5 (released 2018-06-17)
rclone dedupe
Interactively find duplicate filenames and delete/rename them.
-Synopsis
+Synopsis
By default dedupe interactively finds files with duplicate names and offers to delete all but one or rename them to be different. This is known as deduping by name.
Deduping by name is only useful with a small group of backends (e.g. Google Drive, Opendrive) that can have duplicate file names. It can be run on wrapping backends (e.g. crypt) if they wrap a backend which supports duplicate file names.
However if --by-hash is passed in then dedupe will find files with duplicate hashes instead which will work on any backend which supports at least one hash. This can be used to find files with duplicate content. This is known as deduping by hash.
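For example, the two modes might be invoked like this (a sketch; the remote name is a placeholder, and --dedupe-mode selects the non-interactive resolution strategy):

```shell
# Preview what dedupe would do, matching duplicates by content hash:
rclone dedupe --by-hash --dry-run remote:path

# Non-interactively keep the newest copy of each duplicated name:
rclone dedupe --dedupe-mode newest remote:path
```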
@@ -1579,7 +1646,7 @@ two-3.txt: renamed from: two.txt
rclone about
Get quota information from the remote.
-Synopsis
+Synopsis
Prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.
E.g. Typical output from rclone about remote: is:
Total: 17 GiB
@@ -1625,7 +1692,7 @@ Other: 8849156022
rclone authorize
Remote authorization.
-Synopsis
+Synopsis
Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.
Use --auth-no-open-browser to prevent rclone from automatically opening the auth link in the default browser.
Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
@@ -1641,7 +1708,7 @@ Other: 8849156022
rclone backend
Run a backend-specific command.
-Synopsis
+Synopsis
This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
@@ -1670,7 +1737,7 @@ rclone backend help <backendname>
rclone bisync
Perform bidirectional synchronization between two paths.
-Synopsis
+Synopsis
Perform bidirectional synchronization between two paths.
Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
- Propagate changes on Path1 to Path2, and vice-versa.
Bisync is in beta and is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the Limitations section) before using, or data loss can result. Questions can be asked in the Rclone Forum.
@@ -1773,7 +1840,7 @@ rclone backend help <backendname>
rclone cat
Concatenates any files and sends them to stdout.
-Synopsis
+Synopsis
Sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
@@ -1833,7 +1900,7 @@ rclone backend help <backendname>
rclone checksum
Checks the files in the destination against a SUM file.
-Synopsis
+Synopsis
Checks that hashsums of destination files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.
The sumfile is treated as the source and the dst:path is treated as the destination for the purposes of the output.
If you supply the --download flag, it will download the data from the remote and calculate the content hash on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
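The SUM file uses the standard "<hash>  <path>" layout that tools like sha1sum produce. A local sketch of building one (the rclone invocation at the end is illustrative only; remote:path is a placeholder):

```shell
# Build a SUM file in the standard "<hash>  <path>" layout:
work=$(mktemp -d)
printf 'data' > "$work/a.txt"
(cd "$work" && sha1sum a.txt > SHA1SUMS && cat SHA1SUMS)

# Against a remote you would then verify with something like (sketch):
#   rclone checksum sha1 "$work/SHA1SUMS" remote:path
```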
@@ -1895,7 +1962,7 @@ rclone backend help <backendname>
rclone completion
Output completion script for a given shell.
-Synopsis
+Synopsis
Generates a shell completion script for rclone. Run with --help to list the supported shells.
Options
-h, --help help for completion
@@ -1910,7 +1977,7 @@ rclone backend help <backendname>
rclone completion bash
Output bash completion script for rclone.
-Synopsis
+Synopsis
Generates a bash shell autocompletion script for rclone.
By default, when run without any arguments,
rclone completion bash
@@ -1933,7 +2000,7 @@ rclone backend help <backendname>
rclone completion fish
Output fish completion script for rclone.
-Synopsis
+Synopsis
Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
sudo rclone completion fish
@@ -1951,7 +2018,7 @@ rclone backend help <backendname>
rclone completion powershell
Output powershell completion script for rclone.
-Synopsis
+Synopsis
Generate the autocompletion script for powershell.
To load completions in your current shell session:
rclone completion powershell | Out-String | Invoke-Expression
@@ -1967,7 +2034,7 @@ rclone backend help <backendname>
rclone completion zsh
Output zsh completion script for rclone.
-Synopsis
+Synopsis
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
sudo rclone completion zsh
@@ -1985,7 +2052,7 @@ rclone backend help <backendname>
rclone config create
Create a new remote with name, type and options.
-Synopsis
+Synopsis
Create a new remote of name with type and options. The options should be passed in pairs of key value or as key=value.
For example, to make a swift remote of name myremote using auto config you would do:
rclone config create myremote swift env_auth true
@@ -2066,7 +2133,7 @@ rclone config create myremote swift env_auth=true
rclone config disconnect
Disconnects user from remote
-Synopsis
+Synopsis
This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
To reconnect use "rclone config reconnect".
@@ -2090,7 +2157,7 @@ rclone config create myremote swift env_auth=true
rclone config edit
Enter an interactive configuration session.
-Synopsis
+Synopsis
Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
rclone config edit [flags]
Options
@@ -2102,7 +2169,7 @@ rclone config create myremote swift env_auth=true
rclone config encryption
set, remove and check the encryption for the config file
-Synopsis
+Synopsis
This command sets, clears and checks the encryption for the config file using the subcommands below.
Options
-h, --help help for encryption
@@ -2116,7 +2183,7 @@ rclone config create myremote swift env_auth=true
rclone config encryption check
Check that the config file is encrypted
-Synopsis
+Synopsis
This checks the config file is encrypted and that you can decrypt it.
It will attempt to decrypt the config using the password you supply.
If decryption fails it will return a non-zero exit code if using --password-command, otherwise it will prompt again for the password.
@@ -2131,7 +2198,7 @@ rclone config create myremote swift env_auth=true
rclone config encryption remove
Remove the config file encryption password
-Synopsis
+Synopsis
Remove the config file encryption password
This removes the config file encryption, returning it to un-encrypted.
If --password-command is in use, this will be called to supply the old config password.
@@ -2146,12 +2213,12 @@ rclone config create myremote swift env_auth=true
rclone config encryption set
Set or change the config file encryption password
-Synopsis
+Synopsis
This command sets or changes the config file encryption password.
If there was no config password set then it sets a new one, otherwise it changes the existing config password.
Note that if you are changing an encryption password using --password-command then this will be called once to decrypt the config using the old password and then again to read the new password to re-encrypt the config.
-When --password-command is called to change the password then the environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if changing passwords programatically you can use the environment variable to distinguish which password you must supply.
-Alternatively you can remove the password first (with rclone config encryption remove), then set it again with this command which may be easier if you don't mind the unecrypted config file being on the disk briefly.
+When --password-command is called to change the password then the environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if changing passwords programmatically you can use the environment variable to distinguish which password you must supply.
+Alternatively you can remove the password first (with rclone config encryption remove), then set it again with this command which may be easier if you don't mind the unencrypted config file being on the disk briefly.
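As a sketch of what such a --password-command helper could look like (the password storage locations here are hypothetical; rclone only requires that the command print the password on stdout):

```shell
# Hypothetical helper: prints the current config password normally,
# and the new one when rclone sets RCLONE_PASSWORD_CHANGE=1.
passdir=$(mktemp -d)                      # stand-in for your real secret store
printf 'old-secret' > "$passdir/current"
printf 'new-secret' > "$passdir/next"

get_config_password() {
    if [ "${RCLONE_PASSWORD_CHANGE:-0}" = "1" ]; then
        cat "$passdir/next"     # re-encrypting: supply the new password
    else
        cat "$passdir/current"  # decrypting: supply the existing password
    fi
}
```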
rclone config encryption set [flags]
Options
-h, --help help for set
@@ -2172,7 +2239,7 @@ rclone config create myremote swift env_auth=true
rclone config password
Update password in an existing remote.
-Synopsis
+Synopsis
Update an existing remote's password. The password should be passed in pairs of key password or as key=password. The password should be passed in clear text (unobscured).
For example, to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
@@ -2208,7 +2275,7 @@ rclone config password myremote fieldname=mypassword
rclone config reconnect
Re-authenticates user with remote.
-Synopsis
+Synopsis
This reconnects remote: passed in to the cloud storage system.
To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
@@ -2222,7 +2289,7 @@ rclone config password myremote fieldname=mypassword
rclone config redacted
Print redacted (decrypted) config file, or the redacted config for a single remote.
-Synopsis
+Synopsis
This prints a redacted copy of the config file, either the whole config file or for a given remote.
The config file will be redacted by replacing all passwords and other sensitive info with XXX.
This makes the config file suitable for posting online for support.
@@ -2257,7 +2324,7 @@ rclone config password myremote fieldname=mypassword
rclone config update
Update options in an existing remote.
-Synopsis
+Synopsis
Update an existing remote's options. The options should be passed in pairs of key value or as key=value.
For example, to update the env_auth field of a remote of name myremote you would do:
rclone config update myremote env_auth true
@@ -2328,7 +2395,7 @@ rclone config update myremote env_auth=true
rclone config userinfo
Prints info about logged in user of remote.
-Synopsis
+Synopsis
This prints the details of the person logged in to the cloud storage system.
rclone config userinfo remote: [flags]
Options
@@ -2341,7 +2408,7 @@ rclone config update myremote env_auth=true
rclone copyto
Copy files from source to dest, skipping identical files.
-Synopsis
+Synopsis
If source:path is a file or directory then it copies it to a file or directory named dest:path.
This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
So
@@ -2433,13 +2500,13 @@ if src is directory
rclone copyurl
Copy the contents of the URL supplied to dest:path.
-Synopsis
+Synopsis
Download a URL's content and copy it to the destination without saving it in temporary storage.
Setting --auto-filename will attempt to automatically determine the filename from the URL (after any redirections) and use it in the destination path.
With --auto-filename-header in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename in addition, the resulting file name will be printed.
Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name.
Setting --stdout or making the output file name - will cause the output to be written to standard output.
-Troublshooting
+Troubleshooting
If you can't get rclone copyurl to work then here are some things you can try:
--disable-http2 rclone will use HTTP2 if available - try disabling it
@@ -2468,7 +2535,7 @@ if src is directory
rclone cryptcheck
Cryptcheck checks the integrity of an encrypted remote.
-Synopsis
+Synopsis
Checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the encrypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
@@ -2536,7 +2603,7 @@ if src is directory
rclone cryptdecode
Cryptdecode returns unencrypted file names.
-Synopsis
+Synopsis
Returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
If you supply the --reverse flag, it will return encrypted file names.
Use it like this
@@ -2555,7 +2622,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone deletefile
Remove a single file from remote.
-Synopsis
+Synopsis
Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
Options
@@ -2572,7 +2639,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone gendocs
Output markdown docs for rclone to the directory supplied.
-Synopsis
+Synopsis
This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
Options
@@ -2584,36 +2651,36 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone gitannex
Speaks with git-annex over stdin/stdout.
-Synopsis
+Synopsis
Rclone's gitannex subcommand enables git-annex to store and retrieve content from an rclone remote. It is meant to be run by git-annex, not directly by users.
Installation on Linux
Skip this step if your version of git-annex is 10.20240430 or newer. Otherwise, you must create a symlink somewhere on your PATH with a particular name. This symlink helps git-annex tell rclone it wants to run the "gitannex" subcommand.
-# Create the helper symlink in "$HOME/bin".
-ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
-
-# Verify the new symlink is on your PATH.
-which git-annex-remote-rclone-builtin
+# Create the helper symlink in "$HOME/bin".
+ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
+
+# Verify the new symlink is on your PATH.
+which git-annex-remote-rclone-builtin
Add a new remote to your git-annex repo. This new remote will connect git-annex with the rclone gitannex subcommand.
Start by asking git-annex to describe the remote's available configuration parameters.
-# If you skipped step 1:
-git annex initremote MyRemote type=rclone --whatelse
-
-# If you created a symlink in step 1:
-git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
+# If you skipped step 1:
+git annex initremote MyRemote type=rclone --whatelse
+
+# If you created a symlink in step 1:
+git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
NOTE: If you're porting an existing git-annex-remote-rclone remote to use rclone gitannex, you can probably reuse the configuration parameters verbatim without renaming them. Check parameter synonyms with --whatelse as shown above.
The following example creates a new git-annex remote named "MyRemote" that will use the rclone remote named "SomeRcloneRemote". That rclone remote must be one configured in your rclone.conf file, which can be located with rclone config file.
-git annex initremote MyRemote \
- type=external \
- externaltype=rclone-builtin \
- encryption=none \
- rcloneremotename=SomeRcloneRemote \
- rcloneprefix=git-annex-content \
- rclonelayout=nodir
+git annex initremote MyRemote \
+ type=external \
+ externaltype=rclone-builtin \
+ encryption=none \
+ rcloneremotename=SomeRcloneRemote \
+ rcloneprefix=git-annex-content \
+ rclonelayout=nodir
Before you trust this command with your precious data, be sure to test the remote. This command is very new and has not been tested on many rclone backends. Caveat emptor!
-git annex testremote MyRemote
+git annex testremote MyRemote
Happy annexing!
rclone gitannex [flags]
@@ -2626,7 +2693,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone hashsum
Produces a hashsum file for all the objects in the path.
-Synopsis
+Synopsis
Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
By default, the hash is requested from the remote. If the hash is not supported by the remote, no hash will be returned. With the download flag, the file will be downloaded from the remote and hashed locally enabling any hash for any remote.
For the MD5 and SHA1 algorithms there are also dedicated commands, md5sum and sha1sum.
@@ -2684,7 +2751,7 @@ Supported hashes are:
rclone link
Generate public link to file/folder.
-Synopsis
+Synopsis
Create, retrieve or remove a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
@@ -2705,7 +2772,7 @@ rclone link --expire 1d remote:path/to/file
rclone listremotes
List all the remotes in the config file and defined in environment variables.
-Synopsis
+Synopsis
Lists all the available remotes from the config file, or the remotes matching an optional filter.
Prints the result in human-readable format by default, and as a simple list of remote names, or if used with flag --long a tabular format including the remote names, types and descriptions. Using flag --json produces machine-readable output instead, which always includes all attributes - including the source (file or environment).
Result can be filtered by a filter argument which applies to all attributes, and/or filter flags specific for each attribute. The values must be specified according to regular rclone filtering pattern syntax.
@@ -2726,7 +2793,7 @@ rclone link --expire 1d remote:path/to/file
rclone lsf
List directories and objects in remote:path formatted for parsing.
-Synopsis
+Synopsis
List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.
Eg
$ rclone lsf swift:bucket
@@ -2852,7 +2919,7 @@ rclone lsf remote:path --format pt --time-format max
rclone lsjson
List directories and objects in the path in JSON format.
-Synopsis
+Synopsis
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this:
{
@@ -2958,7 +3025,7 @@ rclone lsf remote:path --format pt --time-format max
rclone mount
Mount the remote as file system on a mountpoint.
-Synopsis
+Synopsis
Rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config. Check it works with rclone ls etc.
On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.
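A typical invocation on Linux might look like this (a sketch; the remote name and mountpoint are placeholders):

```shell
# Foreground mount (stop with Ctrl-C, or `fusermount -u` from elsewhere):
mkdir -p /path/to/mountpoint
rclone mount remote:path /path/to/mountpoint

# Background (daemon) mode on Linux/macOS:
rclone mount remote:path /path/to/mountpoint --daemon
```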
@@ -3328,7 +3395,7 @@ WantedBy=multi-user.target
rclone moveto
Move file or directory from source to dest.
-Synopsis
+Synopsis
If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
@@ -3421,7 +3488,7 @@ if src is directory
rclone ncdu
Explore a remote with a text based user interface.
-Synopsis
+Synopsis
This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are:
@@ -3497,7 +3564,7 @@ if src is directory
rclone nfsmount
Mount the remote as file system on a mountpoint.
-Synopsis
+Synopsis
Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config. Check it works with rclone ls etc.
On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.
@@ -3872,7 +3939,7 @@ WantedBy=multi-user.target
rclone obscure
Obscure password for use in the rclone config file.
-Synopsis
+Synopsis
In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.
@@ -3889,7 +3956,7 @@ WantedBy=multi-user.target
rclone rc
Run a command against a running rclone.
-Synopsis
+Synopsis
This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".
A username and password can be passed in with --user and --pass.
Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
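The --url shorthand rules above can be sketched as a small helper (an illustration of the documented expansion rules, not rclone's own code):

```python
def rc_url(url: str) -> str:
    """Expand the shorthand accepted by --url:
    ":port"     -> "http://localhost:port"
    "host:port" -> "http://host:port"
    Full URLs are left unchanged."""
    if url.startswith(":"):
        return "http://localhost" + url
    if "://" not in url:
        return "http://" + url
    return url

print(rc_url(":5572"))        # http://localhost:5572
print(rc_url("myhost:5572"))  # http://myhost:5572
```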
@@ -3931,7 +3998,7 @@ rclone rc --unix-socket /tmp/my.socket core/stats
rclone rcat
Copies standard input to file on remote.
-Synopsis
+Synopsis
Reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
@@ -3956,7 +4023,7 @@ ffmpeg - | rclone rcat remote:path/to/file
rclone rcd
Run rclone listening to remote control commands only.
-Synopsis
+Synopsis
This runs rclone so that it only listens to remote control commands.
This is useful if you are controlling rclone via the rc API.
If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.
@@ -4142,7 +4209,7 @@ htpasswd -B htpasswd anotherUser
rclone rmdirs
Remove empty directories under the path.
-Synopsis
+Synopsis
This recursively removes any empty directories (including directories that only contain empty directories), that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root flag.
Use the rmdir command to delete just the empty directory given by path, without recursing.
This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete command will delete files but leave the directory structure (unless used with option --rmdirs).
@@ -4164,7 +4231,7 @@ htpasswd -B htpasswd anotherUser
rclone selfupdate
Update the rclone binary.
-Synopsis
+Synopsis
This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature; see the release signing docs for details.
If used without flags (or with implied --stable flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta flag, i.e. rclone selfupdate --beta. You can check in advance what version would be installed by adding the --check flag, then repeat the command without it when you are satisfied.
Sometimes the rclone team may recommend a specific beta or stable rclone release to troubleshoot your issue or to get a bleeding edge feature. The --version VER flag, if given, will update to that specific version instead of the latest one. If you omit the micro version from VER (for example 1.53), the latest matching micro version will be used.
@@ -4189,7 +4256,7 @@ htpasswd -B htpasswd anotherUser
rclone serve
Serve a remote over a protocol.
-Synopsis
+Synopsis
Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g.
rclone serve http remote:
Each subcommand has its own options which you can see in their help.
@@ -4212,7 +4279,7 @@ htpasswd -B htpasswd anotherUser
rclone serve dlna
Serve remote:path over DLNA
-Synopsis
+Synopsis
Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
Rclone will add external subtitle files (.srt) to videos if they have the same filename as the video file itself (except the extension), either in the same directory as the video, or in a "Subs" subdirectory.
@@ -4435,7 +4502,7 @@ htpasswd -B htpasswd anotherUser
rclone serve docker
Serve any remote on docker's volume plugin API.
-Synopsis
+Synopsis
This command implements the Docker volume plugin API allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides a docker volume plugin based on it.
To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin. It then listens for commands from the docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:
sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
@@ -4679,7 +4746,7 @@ htpasswd -B htpasswd anotherUser
rclone serve ftp
Serve remote:path over FTP.
-Synopsis
+Synopsis
Run a basic FTP server to serve a remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type FTP to read and write it.
Server options
Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
@@ -4935,7 +5002,7 @@ htpasswd -B htpasswd anotherUser
rclone serve http
Serve the remote over HTTP.
-Synopsis
+Synopsis
Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
@@ -5337,7 +5404,7 @@ htpasswd -B htpasswd anotherUser
rclone serve nfs
Serve the remote as an NFS mount
-Synopsis
+Synopsis
Create an NFS server that serves the given remote over the network.
This implements an NFSv3 server to serve any rclone remote via NFS.
The primary purpose for this command is to enable the mount command on recent macOS versions where installing FUSE is very cumbersome.
@@ -5346,7 +5413,7 @@ htpasswd -B htpasswd anotherUser
Modifying files through the NFS protocol requires VFS caching. Usually you will need to specify --vfs-cache-mode in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, the mount will be read-only.
--nfs-cache-type controls the type of the NFS handle cache. By default this is memory where new handles will be randomly allocated when needed. These are stored in memory. If the server is restarted the handle cache will be lost and connected NFS clients will get stale handle errors.
--nfs-cache-type disk uses an on disk NFS handle cache. Rclone hashes the path of the object and stores it in a file named after the hash. These hashes are stored on disk in the directory controlled by --cache-dir, or the exact directory may be specified with --nfs-cache-dir. Using this means that the NFS server can be restarted at will without affecting the connected clients.
---nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only. It requres running rclone as root or with CAP_DAC_READ_SEARCH. You can run rclone with this extra permission by doing this to the rclone binary sudo setcap cap_dac_read_search+ep /path/to/rclone.
+--nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only. It requires running rclone as root or with CAP_DAC_READ_SEARCH. You can run rclone with this extra permission by doing this to the rclone binary sudo setcap cap_dac_read_search+ep /path/to/rclone.
--nfs-cache-handle-limit controls the maximum number of cached NFS handles stored by the caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. This is only used by the memory type cache.
To serve NFS over the network, use the following command:
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
@@ -5568,7 +5635,7 @@ htpasswd -B htpasswd anotherUser
rclone serve restic
Serve the remote for restic's REST API.
-Synopsis
+Synopsis
Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command-line program for doing backups.
The server will log errors. Use -v to see access logs.
@@ -5667,7 +5734,7 @@ htpasswd -B htpasswd anotherUser
rclone serve s3
Serve remote:path over s3.
-Synopsis
+Synopsis
serve s3 implements a basic s3 server that serves a remote via s3. This can be viewed with an s3 client, or you can make an s3 type remote to read and write to it with rclone.
serve s3 is considered Experimental so use with care.
S3 server supports Signature Version 4 authentication. Just use --auth-key accessKey,secretKey and set the Authorization header correctly in the request. (See the AWS docs).
@@ -5693,7 +5760,7 @@ endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false is to work around a bug which will be fixed in due course.
Bugs
When uploading multipart files serve s3 holds all the parts in memory (see #7453). This is a limitation of the library rclone uses for serving S3 and will hopefully be fixed at some point.
Multipart server side copies do not work (see #7454). These take a very long time and eventually fail. The default threshold for multipart server side copies is 5G which is the maximum it can be, so files above this size will fail to be server side copied.
@@ -5988,7 +6055,7 @@ htpasswd -B htpasswd anotherUser
rclone serve sftp
Serve the remote over SFTP.
-Synopsis
+Synopsis
Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will respond to a small number of shell commands, mainly md5sum, sha1sum and df, which enable it to provide support for checksums and the about feature when accessed from an sftp remote.
@@ -6255,7 +6322,7 @@ htpasswd -B htpasswd anotherUser
rclone serve webdav
Serve remote:path over WebDAV.
-Synopsis
+Synopsis
Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.
WebDAV options
--etag-hash
@@ -6671,7 +6738,7 @@ htpasswd -B htpasswd anotherUser
rclone settier
Changes storage class/tier of objects in remote.
-Synopsis
+Synopsis
Changes storage tier or class at remote if supported. Some cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier; Azure Blob storage - Hot, Cool and Archive; Google Cloud Storage - Regional Storage, Nearline, Coldline etc.
Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, moving S3 objects to Glacier makes them inaccessible.
You can use it to tier a single object
@@ -6690,7 +6757,7 @@ htpasswd -B htpasswd anotherUser
rclone test
Run a test command
-Synopsis
+Synopsis
Rclone test is used to run test commands.
Select which test command you want with the subcommand, eg
rclone test memory remote:
@@ -6722,7 +6789,7 @@ htpasswd -B htpasswd anotherUser
rclone test histogram
Makes a histogram of file name characters.
-Synopsis
+Synopsis
This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.
The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.
rclone test histogram [remote:path] [flags]
@@ -6735,7 +6802,7 @@ htpasswd -B htpasswd anotherUser
rclone test info
Discovers file name or other limitations for paths.
-Synopsis
+Synopsis
Discovers what filenames and upload methods are possible to write to the paths passed in, and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of Go code for each one.
NB this can create undeletable files and other hazards - use with care
rclone test info [remote:path]+ [flags]
@@ -6807,7 +6874,7 @@ htpasswd -B htpasswd anotherUser
rclone touch
Create new file or change file modification time.
-Synopsis
+Synopsis
Set the modification time on file(s) as specified by remote:path to have the current time.
If remote:path does not exist then a zero sized file will be created, unless --no-create or --recursive is provided.
If --recursive is used then recursively sets the modification time on all existing files that are found under the path. Filters are supported, and you can test with the --dry-run or the --interactive/-i flag.
@@ -6865,7 +6932,7 @@ htpasswd -B htpasswd anotherUser
rclone tree
List the contents of the remote in a tree like fashion.
-Synopsis
+Synopsis
Lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -7168,6 +7235,7 @@ rclone sync --interactive /path/to/files remote:current-backup
Options
Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.
+Options documented to take a stringArray parameter accept multiple values. To pass more than one value, repeat the option; for example: --include value1 --include value2.
Time or duration options
TIME or DURATION options can be specified as a duration string or a time string.
A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Default units are seconds or the following abbreviations are valid:
@@ -7530,55 +7598,55 @@ y/n/s/!/q> n
ID is the source ID of the object if known.
Metadata is the backend specific metadata as described in the backend docs.
-{
- "SrcFs": "gdrive:",
- "SrcFsType": "drive",
- "DstFs": "newdrive:user",
- "DstFsType": "onedrive",
- "Remote": "test.txt",
- "Size": 6,
- "MimeType": "text/plain; charset=utf-8",
- "ModTime": "2022-10-11T17:53:10.286745272+01:00",
- "IsDir": false,
- "ID": "xyz",
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain1.com",
- "permissions": "...",
- "description": "my nice file",
- "starred": "false"
- }
-}
-The program should then modify the input as desired and send it to STDOUT. The returned Metadata field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:
{
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain2.com",
- "permissions": "...",
- "description": "my nice file [migrated from domain1]",
- "starred": "false"
- }
-}
+ "SrcFs": "gdrive:",
+ "SrcFsType": "drive",
+ "DstFs": "newdrive:user",
+ "DstFsType": "onedrive",
+ "Remote": "test.txt",
+ "Size": 6,
+ "MimeType": "text/plain; charset=utf-8",
+ "ModTime": "2022-10-11T17:53:10.286745272+01:00",
+ "IsDir": false,
+ "ID": "xyz",
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain1.com",
+ "permissions": "...",
+ "description": "my nice file",
+ "starred": "false"
+ }
+}
+The program should then modify the input as desired and send it to STDOUT. The returned Metadata field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:
+{
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain2.com",
+ "permissions": "...",
+ "description": "my nice file [migrated from domain1]",
+ "starred": "false"
+ }
+}
Metadata can be removed here too.
An example python program might look something like this to implement the above transformations.
-import sys, json
-
-i = json.load(sys.stdin)
-metadata = i["Metadata"]
-# Add tag to description
-if "description" in metadata:
- metadata["description"] += " [migrated from domain1]"
-else:
- metadata["description"] = "[migrated from domain1]"
-# Modify owner
-if "owner" in metadata:
- metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
-o = { "Metadata": metadata }
-json.dump(o, sys.stdout, indent="\t")
+import sys, json
+
+i = json.load(sys.stdin)
+metadata = i["Metadata"]
+# Add tag to description
+if "description" in metadata:
+ metadata["description"] += " [migrated from domain1]"
+else:
+ metadata["description"] = "[migrated from domain1]"
+# Modify owner
+if "owner" in metadata:
+ metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
+o = { "Metadata": metadata }
+json.dump(o, sys.stdout, indent="\t")
You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.
If you want to see the input to the metadata mapper and the output returned from it in the log you can use -vv --dump mapper.
See the metadata section for more info.
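The mapper protocol above (one JSON item on STDIN, a JSON object with a Metadata field on STDOUT) can be sketched from the calling side like this - a minimal illustration of the exchange, not rclone's actual implementation:

```python
import json
import subprocess
import sys

def run_mapper(mapper_cmd, item):
    """Feed one item to a metadata mapper program as JSON on stdin
    and return the JSON object it writes to stdout."""
    proc = subprocess.run(mapper_cmd, input=json.dumps(item),
                          capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)

# A trivial identity mapper, using python itself so the example is self-contained:
identity_mapper = [sys.executable, "-c",
                   "import sys, json; d = json.load(sys.stdin); "
                   "json.dump({'Metadata': d['Metadata']}, sys.stdout)"]

item = {"Remote": "test.txt",
        "Metadata": {"owner": "user1@domain1.com", "description": "my nice file"}}
print(run_mapper(identity_mapper, item))
```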
@@ -8051,7 +8119,7 @@ export RCLONE_CONFIG_PASS
Verbosity is slightly different, the environment variable equivalent of --verbose or -v is RCLONE_VERBOSE=1, or for -vv, RCLONE_VERBOSE=2.
The same parser is used for the options and the environment variables so they take exactly the same form.
The options set by environment variables can be seen with the -vv flag, e.g. rclone version -vv.
-Options that can appear multiple times (type stringArray) are treated slighly differently as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a CSV encoded string. For example
+Options that can appear multiple times (type stringArray) are treated slightly differently as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a CSV encoded string. For example
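The CSV treatment can be illustrated with a short sketch (an illustration of the documented behaviour, not rclone's parser) showing how one environment variable value splits into multiple flag values:

```python
import csv
import io

def split_string_array(value: str) -> list:
    """Split a CSV-encoded environment variable value into the
    list of values it represents for a stringArray option."""
    return next(csv.reader(io.StringIO(value)))

# RCLONE_EXCLUDE="*.jpg,*.png" acts like --exclude "*.jpg" --exclude "*.png"
print(split_string_array('*.jpg,*.png'))
# A value that itself contains a comma must be quoted CSV-style:
print(split_string_array('"a,b",c'))
```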
@@ -10160,7 +10228,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
},
],
}
-The expiry time is the time until the file is elegible for being uploaded in floating point seconds. This may go negative. As rclone only transfers --transfers files at once, only the lowest --transfers expiry times will have uploading as true. So there may be files with negative expiry times for which uploading is false.
+The expiry time is the time until the file is eligible for being uploaded in floating point seconds. This may go negative. As rclone only transfers --transfers files at once, only the lowest --transfers expiry times will have uploading as true. So there may be files with negative expiry times for which uploading is false.
This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
vfs/queue-set-expiry: Set the expiry time for an item queued for upload.
Use this to adjust the expiry time for an item in the upload queue. You will need to read the id of the item using vfs/queue before using this call.
@@ -12221,7 +12289,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
Flags helpful for increasing performance.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -13319,7 +13387,7 @@ systemctl start docker-volume-rclone.socket
systemctl restart docker
Or run the service directly:
- run systemctl daemon-reload to let systemd pick up new config
- run systemctl enable docker-volume-rclone.service to make the new service start automatically when you power on your machine.
- run systemctl start docker-volume-rclone.service to start the service now.
- run systemctl restart docker to restart docker daemon and let it detect the new plugin socket. Note that this step is not needed in managed mode where docker knows about plugin state changes.
The two methods are equivalent from the user perspective, but I personally prefer socket activation.
-Troubleshooting
+Troubleshooting
You can see managed plugin settings with
docker plugin list
docker plugin inspect rclone
@@ -13516,7 +13584,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync, either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.
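The safety check described above amounts to a simple percentage test; a minimal sketch (illustrative only, not bisync's actual code):

```python
def exceeds_max_delete(deleted: int, total: int, max_delete: float = 50.0) -> bool:
    """Return True when more than max_delete percent of the files on one
    side were deleted, which would make bisync abort with a warning."""
    if total == 0:
        return False
    return 100.0 * deleted / total > max_delete

print(exceeds_max_delete(60, 100))                   # over the default 50% limit
print(exceeds_max_delete(60, 100, max_delete=75.0))  # allowed with --max-delete 75
```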
Also see the all files changed check.
--filters-file
-By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for synching with Dropbox.
+By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.
To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as your filters file. On the next run with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in the .md5 file. If they don't match, the run aborts with a critical error and thus forces you to do a --resync, likely avoiding a disaster.
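The hash comparison works like the following sketch (illustrative; details such as the .md5 file naming are bisync's own):

```python
import hashlib
import os
import tempfile

def filters_file_md5(path: str) -> str:
    """MD5 hex digest of the filters file contents."""
    with open(path, "rb") as f:
        return hashlib.md5(f.read()).hexdigest()

# Demonstrate: changing the filters file changes the hash, which is
# what forces a --resync on the next run.
fd, path = tempfile.mkstemp()
os.write(fd, b"- *.tmp\n")
os.close(fd)
old = filters_file_md5(path)
with open(path, "a") as f:
    f.write("- *.bak\n")
new = filters_file_md5(path)
os.remove(path)
print(new != old)  # True: hashes differ, so the run would abort
```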
--conflict-resolve CHOICE
@@ -13556,7 +13624,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
--check-sync
Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This check-sync integrity check is performed at the end of the sync run by default. Any untrapped failing copy/deletes between the two paths might result in differences between the two listings and in untracked file content differences between the two paths. A resync run would correct the error.
Note that the default-enabled integrity check locally executes a load of both the final Path1 and Path2 listings, and thus adds to the run time of a sync. Using --check-sync=false will disable it and may significantly reduce the sync run times for very large numbers of files.
-The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually synching.
+The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually syncing.
Note that currently, --check-sync only checks listing snapshots and NOT the actual files on the remotes. Note also that the listing snapshots will not know about any changes that happened during or after the latest bisync run, as those will be discovered on the next run. Therefore, while listings should always match each other at the end of a bisync run, it is expected that they will not match the underlying remotes, nor will the remotes match each other, if there were changes during or after the run. This is normal, and any differences will be detected and synced on the next run.
For a robust integrity check of the current state of the remotes (as opposed to just their listing snapshots), consider using check (or cryptcheck, if at least one path is a crypt remote) instead of --check-sync, keeping in mind that differences are expected if files changed during or after your last bisync run.
For example, a possible sequence could look like this:
@@ -13786,7 +13854,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
See filtering documentation for how filter rules are written and interpreted.
Bisync's --filters-file flag slightly extends rclone's --filter-from filtering mechanism. For a given bisync run you may provide only one --filters-file. The --include*, --exclude*, and --filter flags are also supported.
How to filter directories
-Filtering portions of the directory tree is a critical feature for synching.
+Filtering portions of the directory tree is a critical feature for syncing.
Examples of directory trees (always beneath the Path1/Path2 root level) you may want to exclude from your sync:
- Directory trees containing only software build intermediate files.
- Directory trees containing application temporary files and data such as the Windows C:\Users\MyLogin\AppData\ tree.
- Directory trees containing files that are large, less important, or are getting thrashed continuously by ongoing processes.
On the other hand, there may be only select directories that you actually want to sync, and exclude all others. See the Example include-style filters for Windows user directories below.
Filters file writing guidelines
@@ -13853,7 +13921,7 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]
Note also that Windows implements several "library" links such as C:\Users\MyLogin\My Documents\My Music pointing to C:\Users\MyLogin\Music. rclone sees these as links, so you must add --links to the bisync command line if you wish to follow these links. I find that I get permission errors in trying to follow the links, so I don't include the rclone --links flag, but then you get lots of Can't follow symlink… noise from rclone about not following the links. This noise can be quashed by adding --quiet to the bisync command line.
Example exclude-style filters files for use with Dropbox
-- Dropbox disallows synching the listed temporary and configuration/data files. The `- ` filters exclude these files where ever they may occur in the sync tree. Consider adding similar exclusions for file types you don't need to sync, such as core dump and software build files.
+- Dropbox disallows syncing the listed temporary and configuration/data files. The `- ` filters exclude these files wherever they may occur in the sync tree. Consider adding similar exclusions for file types you don't need to sync, such as core dump and software build files.
- bisync testing creates /testdir/ at the top level of the sync tree, and usually deletes the tree after the test. If a normal sync should run while the /testdir/ tree exists the --check-access phase may fail due to unbalanced RCLONE_TEST files. The `- /testdir/` filter blocks this tree from being synced. You don't need this exclusion if you are not doing bisync development testing.
- Everything else beneath the Path1/Path2 root will be synced.
- RCLONE_TEST files may be placed anywhere within the tree, including the root.
@@ -14013,7 +14081,7 @@ Options:
Note: unlike rclone flags which must be prefixed by double dash (--), the test command flags can be equally prefixed by a single - or double dash.
Running tests
-go test . -case basic -remote local -remote2 local runs the test_basic test case using only the local filesystem, synching one local directory with another local directory. Test script output is to the console, while commands within scenario.txt have their output sent to the .../workdir/test.log file, which is finally compared to the golden copy.
+go test . -case basic -remote local -remote2 local runs the test_basic test case using only the local filesystem, syncing one local directory with another local directory. Test script output is to the console, while commands within scenario.txt have their output sent to the .../workdir/test.log file, which is finally compared to the golden copy.
- The first argument after go test should be a relative name of the directory containing bisync source code. If you run tests right from there, the argument will be . (current directory) as in most examples below. If you run bisync tests from the rclone source directory, the command should be go test ./cmd/bisync ....
- The test engine will mangle rclone output to ensure comparability with golden listings and logs.
- Test scenarios are located in ./cmd/bisync/testdata. The test -case argument should match the full name of a subdirectory under that directory. Every test subdirectory name on disk must start with test_; this prefix can be omitted on the command line for brevity. Also, underscores in the name can be replaced by dashes for convenience.
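The naming rules above mean these invocations run the same scenario (a sketch; they require a checkout of the rclone source):

```
# full case name, run from the bisync source directory
go test . -case test_basic -remote local -remote2 local
# test_ prefix omitted for brevity
go test . -case basic -remote local -remote2 local
# same test, run from the rclone source root
go test ./cmd/bisync -case basic -remote local -remote2 local
```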
@@ -14150,6 +14218,10 @@ Options:
Bisync adopts the differential synchronization technique, which is based on keeping history of changes performed by both synchronizing sides. See the Dual Shadow Method section in Neil Fraser's article.
Also note a number of academic publications by Benjamin Pierce about Unison and synchronization in general.
Changelog
+v1.69.1
+
+- Fixed an issue causing listings to not capture concurrent modifications under certain conditions
+
v1.68
- Fixed an issue affecting backends that round modtimes to a lower precision.
@@ -15072,7 +15144,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
- This is a policy that can be used when creating a bucket. It assumes that USER_NAME has been created.
- The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects.
-- When using s3-no-check-bucket and the bucket already exsits, the
"arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
+- When using s3-no-check-bucket and the bucket already exists, the
"arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
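A minimal policy sketch illustrating the two-ARN requirement (ACCOUNT_ID and the exact action list are assumptions, not from the manual):

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::ACCOUNT_ID:user/USER_NAME" },
      "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME",
        "arn:aws:s3:::BUCKET_NAME/*"
      ]
    }
  ]
}
```

The first ARN grants the bucket-level action (s3:ListBucket); the second grants the object-level actions.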
For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.
Key Management System (KMS)
@@ -17632,7 +17704,7 @@ region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
Rclone Serve S3
-Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.
+Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.
For example, to serve remote:path over s3, run the server like this:
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
This will be compatible with an rclone remote which is defined like this:
@@ -17643,7 +17715,7 @@ endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false is to work around a bug which will be fixed in due course.
Scaleway
Scaleway's Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool.
Scaleway provides an S3 interface which can be configured for use with rclone like this:
@@ -18581,27 +18653,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \ (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \ (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \ (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \ (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \ (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \ (es-mad-1.linodeobjects.com)
+10 / Melbourne (Australia), au-mel-1
+ \ (au-mel-1.linodeobjects.com)
+11 / Miami, FL (USA), us-mia-1
+ \ (us-mia-1.linodeobjects.com)
+12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+14 / Osaka (Japan), jp-osa-1
+ \ (jp-osa-1.linodeobjects.com)
+15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+16 / São Paulo (Brazil), br-gru-1
+ \ (br-gru-1.linodeobjects.com)
+17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+19 / Singapore 2, sg-sin-1
+ \ (sg-sin-1.linodeobjects.com)
+20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
-10 / Washington, DC, (USA), us-iad-1
+21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
-endpoint> 3
+endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -18958,7 +19052,7 @@ cos s3
For Netease NOS configure as per the configurator rclone config setting the provider Netease. This will automatically set force_path_style = false which is necessary for it to run properly.
Petabox
Here is an example of making a Petabox configuration. First run:
-
+
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
@@ -21755,7 +21849,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
8 bytes magic string RCLONE\x00\x00
24 bytes Nonce (IV)
-The initial nonce is generated from the operating systems crypto strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.
+The initial nonce is generated from the operating system's cryptographically strong random number generator. The nonce is incremented for each chunk read, making sure each nonce is unique for each block written. The chance of a nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of reusing a nonce.
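The quoted probability can be reproduced with a birthday-bound estimate: one exabyte split into 64 KiB chunks consumes roughly 1.5×10¹³ of the 2¹⁹² possible nonces.

```python
# Back-of-envelope check of the nonce-reuse probability quoted above.
chunk_size = 64 * 1024                    # bytes per encrypted chunk
chunks = 10**18 // chunk_size             # nonces consumed for one exabyte
p_collision = chunks * chunks / 2**193    # birthday bound: n^2 / 2^(192+1)
print(f"{p_collision:.1e}")               # on the order of 2e-32
```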
Chunk
Each chunk will contain 64 KiB of data, except for the last one which may have less data. The data chunk is in standard NaCl SecretBox format. SecretBox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.
Each chunk contains:
@@ -27047,7 +27141,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
-[koofr]
+[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -27060,6 +27154,13 @@ d) Delete this remote
y/e/d> y
Advanced Data Protection
ADP is currently unsupported and needs to be disabled
+On iPhone, Settings > Apple Account > iCloud > 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF.
+Troubleshooting
+Missing PCS cookies from the request
+This means you have Advanced Data Protection (ADP) turned on. This is not supported at the moment. If you want to use rclone you will have to turn it off. See above for how to turn it off.
+You will need to clear the cookies and the trust_token fields in the config. Or you can delete the remote config and start again.
+You should then run rclone reconnect remote:.
+Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running rclone reconnect remote: until rclone functions properly.
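One way to clear those fields from the command line, assuming the remote is named iclouddrive (you can equally remove them with rclone config edit):

```
rclone config update iclouddrive cookies "" trust_token ""
rclone reconnect iclouddrive:
```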
Standard options
Here are the Standard options specific to iclouddrive (iCloud Drive).
--iclouddrive-apple-id
@@ -27801,7 +27902,7 @@ y/e/d> y
Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.
Jottacloud only supports filenames up to 255 characters in length.
-Troubleshooting
+Troubleshooting
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
Koofr
Paths are specified as remote:path
@@ -30461,7 +30562,7 @@ y/e/d> y
"de"
-- Microsoft Cloud Germany
+- Microsoft Cloud Germany (deprecated - try global region first).
"cn"
@@ -30834,75 +30935,75 @@ rclone rc vfs/refresh recursive=true
Permissions are also supported, if --onedrive-metadata-permissions is set. The accepted values for --onedrive-metadata-permissions are "read", "write", "read,write", and "off" (the default). "write" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "read,write" instead of "write" if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the OneDrive API, which differs slightly between OneDrive Personal and Business.
Example for OneDrive Personal:
-[
- {
- "id": "1234567890ABC!123",
- "grantedTo": {
- "user": {
- "id": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- },
- "invitation": {
- "email": "ryan@contoso.com"
- },
- "link": {
- "webUrl": "https://1drv.ms/t/s!1234567890ABC"
- },
- "roles": [
- "read"
- ],
- "shareId": "s!1234567890ABC"
- }
-]
-Example for OneDrive Business:
[
{
- "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
- "grantedToIdentities": [
- {
- "user": {
- "displayName": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- }
- ],
- "link": {
- "type": "view",
- "scope": "users",
- "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
- },
- "roles": [
- "read"
- ],
- "shareId": "u!LKj1lkdlals90j1nlkascl"
- },
- {
- "id": "5D33DD65C6932946",
- "grantedTo": {
- "user": {
- "displayName": "John Doe",
- "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
- },
- "application": {},
- "device": {}
- },
- "roles": [
- "owner"
- ],
- "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
- }
-]
+ "id": "1234567890ABC!123",
+ "grantedTo": {
+ "user": {
+ "id": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ },
+ "invitation": {
+ "email": "ryan@contoso.com"
+ },
+ "link": {
+ "webUrl": "https://1drv.ms/t/s!1234567890ABC"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "s!1234567890ABC"
+ }
+]
+Example for OneDrive Business:
+[
+ {
+ "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+ "grantedToIdentities": [
+ {
+ "user": {
+ "displayName": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "link": {
+ "type": "view",
+ "scope": "users",
+ "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "u!LKj1lkdlals90j1nlkascl"
+ },
+ {
+ "id": "5D33DD65C6932946",
+ "grantedTo": {
+ "user": {
+ "displayName": "John Doe",
+ "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
+ },
+ "application": {},
+ "device": {}
+ },
+ "roles": [
+ "owner"
+ ],
+ "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
+ }
+]
To write permissions, pass in a "permissions" metadata key using this same format. The --metadata-mapper tool can be very helpful for this.
When adding permissions, an email address can be provided in the User.ID or DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an ObjectID can be provided in User.ID. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if Link.Scope is set to "anonymous".
Example request to add a "read" permission with --metadata-mapper:
-{
- "Metadata": {
- "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
- }
-}
+{
+ "Metadata": {
+ "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
+ }
+}
Note that adding a permission can fail if a conflicting permission already exists for the file/folder.
To update an existing permission, include both the Permission ID and the new roles to be assigned. roles is the only property that can be changed.
To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the owner role will be ignored, as it cannot be removed.
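For example, a --metadata-mapper reply that keeps no permissions at all (removing every removable one) would pass an empty list, in the same format as the example above:

```
{
  "Metadata": {
    "permissions": "[]"
  }
}
```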
@@ -31052,6 +31153,26 @@ rclone rc vfs/refresh recursive=true
See the metadata docs for more info.
+Impersonate other users as Admin
+Unlike Google Drive, where a service account can impersonate any domain user, OneDrive requires you to authenticate as an admin account and manually set up a remote for each user you wish to impersonate.
+
+- In Microsoft 365 Admin Center, open each user you need to "impersonate" and go to the OneDrive section. There is a heading called "Get access to files"; you need to click to create the link. This creates a link of the format https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/ but also changes the permissions so your admin user has access.
+- Then in PowerShell run the following commands:
+
+Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+Import-Module Microsoft.Graph.Files
+Connect-MgGraph -Scopes "Files.ReadWrite.All"
+# Follow the steps to allow access to your admin user
+# Then run this for each user you want to impersonate to get the Drive ID
+Get-MgUserDefaultDrive -UserId '{emailaddress}'
+# This will give you output of the format:
+# Name Id DriveType CreatedDateTime
+# ---- -- --------- ---------------
+# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
+
+
+- Then in rclone add a onedrive remote type, and use the Type in driveID option with the DriveID you got in the previous step. One remote per user. It will then confirm the drive ID, and hopefully give you a message of Found drive "root" of type "business" and then include the URL of the format https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents
+
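The resulting per-user remote then looks roughly like this in the config file (the remote name and drive ID are placeholders; the token section is filled in by rclone config):

```
[onedrive-user1]
type = onedrive
drive_id = b!XYZ123
drive_type = business
```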
Limitations
If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token.
Naming
@@ -31097,7 +31218,7 @@ rclone rc vfs/refresh recursive=true
rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir
NB Onedrive personal can't currently delete versions
-Troubleshooting
+Troubleshooting
Excessive throttling or blocked on SharePoint
If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: --user-agent "ISV|rclone.org|rclone/v1.55.1"
The specific details can be found in the Microsoft document: Avoid getting throttled or blocked in SharePoint Online
@@ -33067,7 +33188,7 @@ rclone lsd myremote:
Limitations
The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
-Troubleshooting
+Troubleshooting
Rclone gives Failed to create file system for "remote:": Bad Request
Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.
@@ -38190,6 +38311,43 @@ $ tree /tmp/c
"error": return an error based on option value
Changelog
+v1.69.1 - 2025-02-14
+See commits
+
+- Bug Fixes
+
+- lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
+- bisync: Fix listings missing concurrent modifications (nielash)
+- serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+- fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
+- doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
+- build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
+
+- VFS
+
+- Fix the cache failing to upload symlinks when --links was specified (Nick Craig-Wood)
+- Fix race detected by race detector (Nick Craig-Wood)
+- Close the change notify channel on Shutdown (izouxv)
+
+- B2
+
+- Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+
+- Iclouddrive
+
+- Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+
+- Onedrive
+
+- Mark German (de) region as deprecated (Nick Craig-Wood)
+
+- S3
+
+- Added new storage class to magalu provider (Bruno Fernandes)
+- Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+- Add latest Linode Object Storage endpoints (jbagwell-akamai)
+
+
v1.69.0 - 2025-01-12
See commits
diff --git a/MANUAL.md b/MANUAL.md
index 8d2f405fe..889d4fec9 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,7 +1,78 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Jan 12, 2025
+% Feb 14, 2025
+# NAME
+
+rclone - manage files on cloud storage
+
+# SYNOPSIS
+
+```
+Usage:
+ rclone [flags]
+ rclone [command]
+
+Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied content to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn't already exist.
+ mount Mount the remote as file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+Use "rclone [command] --help" for more information about a command.
+Use "rclone help flags" for to see the global flags.
+Use "rclone help backends" for a list of supported services.
+
+```
# Rclone syncs your files to cloud storage
@@ -1690,6 +1761,9 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](https://rclone.org/commands/rclone_rmdir/) or [rmdirs](https://rclone.org/commands/rclone_rmdirs/).
+The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
+implement this command directly, in which case `--checkers` will be ignored.
+
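+A hedged example of the interaction described above (remote and path
+are hypothetical):
+
+```
+rclone delete remote:logs --checkers 16 --dry-run
+```
+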
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
@@ -3745,12 +3819,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
-changing passwords programatically you can use the environment
+changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
-easier if you don't mind the unecrypted config file being on the disk
+easier if you don't mind the unencrypted config file being on the disk
briefly.
@@ -4290,7 +4364,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
-## Troublshooting
+## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
@@ -10581,7 +10655,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
-only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
+only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
@@ -11408,7 +11482,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
-Note that setting `disable_multipart_uploads = true` is to work around
+Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
@@ -14444,6 +14518,11 @@ it to `false`. It is also possible to specify `--boolean=false` or
parsed as `--boolean` and the `false` is parsed as an extra command
line argument for rclone.
+Options documented to take a `stringArray` parameter accept multiple
+values. To pass more than one value, repeat the option; for example:
+`--include value1 --include value2`.
+
+
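+For example, repeated `--include` flags accumulate (patterns
+illustrative):
+
+```
+rclone ls remote: --include "*.jpg" --include "*.png"
+```
+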
### Time or duration options {#time-option}
TIME or DURATION options can be specified as a duration string or a
@@ -16755,7 +16834,7 @@ so they take exactly the same form.
The options set by environment variables can be seen with the `-vv` flag, e.g. `rclone version -vv`.
Options that can appear multiple times (type `stringArray`) are
-treated slighly differently as environment variables can only be
+treated slightly differently as environment variables can only be
defined once. In order to allow a simple mechanism for adding one or
many items, the input is treated as a [CSV encoded](https://godoc.org/encoding/csv)
string. For example
@@ -19937,7 +20016,7 @@ the `--vfs-cache-mode` is off, it will return an empty result.
],
}
-The `expiry` time is the time until the file is elegible for being
+The `expiry` time is the time until the file is eligible for being
uploaded in floating point seconds. This may go negative. As rclone
only transfers `--transfers` files at once, only the lowest
`--transfers` expiry times will have `uploading` as `true`. So there
@@ -21018,7 +21097,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
```
@@ -23066,7 +23145,7 @@ See the [bisync filters](#filtering) section and generic
[--filter-from](https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file)
documentation.
An [example filters file](#example-filters-file) contains filters for
-non-allowed files for synching with Dropbox.
+non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run
with `--resync`. This is a safety feature, which prevents existing files
@@ -23243,7 +23322,7 @@ Using `--check-sync=false` will disable it and may significantly reduce the
sync run times for very large numbers of files.
The check may be run manually with `--check-sync=only`. It runs only the
-integrity check and terminates without actually synching.
+integrity check and terminates without actually syncing.
Note that currently, `--check-sync` **only checks listing snapshots and NOT the
actual files on the remotes.** Note also that the listing snapshots will not
@@ -23720,7 +23799,7 @@ The `--include*`, `--exclude*`, and `--filter` flags are also supported.
### How to filter directories
-Filtering portions of the directory tree is a critical feature for synching.
+Filtering portions of the directory tree is a critical feature for syncing.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync:
@@ -23829,7 +23908,7 @@ quashed by adding `--quiet` to the bisync command line.
## Example exclude-style filters files for use with Dropbox {#exclude-filters}
-- Dropbox disallows synching the listed temporary and configuration/data files.
+- Dropbox disallows syncing the listed temporary and configuration/data files.
The `- ` filters exclude these files wherever they may occur
in the sync tree. Consider adding similar exclusions for file types
you don't need to sync, such as core dump and software build files.
@@ -24163,7 +24242,7 @@ test command flags can be equally prefixed by a single `-` or double dash.
- `go test . -case basic -remote local -remote2 local`
runs the `test_basic` test case using only the local filesystem,
- synching one local directory with another local directory.
+ syncing one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the `.../workdir/test.log` file,
which is finally compared to the golden copy.
@@ -24394,6 +24473,9 @@ about _Unison_ and synchronization in general.
## Changelog
+### `v1.69.1`
+* Fixed an issue causing listings to not capture concurrent modifications under certain conditions
+
### `v1.68`
* Fixed an issue affecting backends that round modtimes to a lower precision.
@@ -25680,7 +25762,7 @@ Notes on above:
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
-3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exsits, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
+3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
@@ -28658,7 +28740,7 @@ location_constraint = au-nsw
### Rclone Serve S3 {#rclone}
Rclone can serve any remote over the S3 protocol. For details see the
-[rclone serve s3](https://rclone.org/commands/rclone_serve_http/) documentation.
+[rclone serve s3](https://rclone.org/commands/rclone_serve_s3/) documentation.
For example, to serve `remote:path` over s3, run the server like this:
@@ -28678,8 +28760,8 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
-Note that setting `disable_multipart_uploads = true` is to work around
-[a bug](https://rclone.org/commands/rclone_serve_http/#bugs) which will be fixed in due course.
+Note that setting `use_multipart_uploads = false` is to work around
+[a bug](https://rclone.org/commands/rclone_serve_s3/#bugs) which will be fixed in due course.
### Scaleway
@@ -29775,27 +29857,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \ (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \ (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \ (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \ (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \ (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \ (es-mad-1.linodeobjects.com)
+10 / Melbourne (Australia), au-mel-1
+ \ (au-mel-1.linodeobjects.com)
+11 / Miami, FL (USA), us-mia-1
+ \ (us-mia-1.linodeobjects.com)
+12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+14 / Osaka (Japan), jp-osa-1
+ \ (jp-osa-1.linodeobjects.com)
+15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+16 / São Paulo (Brazil), br-gru-1
+ \ (br-gru-1.linodeobjects.com)
+17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+19 / Singapore 2, sg-sin-1
+ \ (sg-sin-1.linodeobjects.com)
+20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
-10 / Washington, DC, (USA), us-iad-1
+21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
-endpoint> 3
+endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -34415,7 +34519,7 @@ strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
The chance of a nonce being reused is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
-approximately 2×10⁻³² of re-using a nonce.
+approximately 2×10⁻³² of reusing a nonce.
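As a back-of-envelope check of the figure above (assuming the crypt backend's 24-byte secretbox nonce and 64 KiB chunks), the birthday approximation can be computed directly:

```python
# Sanity check of the ~2x10^-32 collision probability quoted above.
# Assumptions: 24-byte (192-bit) nonces, 64 KiB encrypted chunks.
chunks = 10**18 // (64 * 1024)   # chunks needed to write an exabyte
nonce_space = 2**192             # possible 24-byte nonces

# Birthday-bound approximation for at least one repeated nonce:
p = chunks * (chunks - 1) / (2 * nonce_space)
print(p)  # roughly 1.9e-32, i.e. about 2x10^-32
```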
#### Chunk
@@ -41561,7 +41665,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
-[koofr]
+[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -41578,6 +41682,20 @@ y/e/d> y
ADP is currently unsupported and need to be disabled
+On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF.
+
+## Troubleshooting
+
+### Missing PCS cookies from the request
+
+This means you have Advanced Data Protection (ADP) turned on. ADP is not currently supported; to use rclone you will have to turn it off (see above for how).
+
+You will need to clear the `cookies` and the `trust_token` fields in the config. Or you can delete the remote config and start again.
+
+You should then run `rclone reconnect remote:`.
+
+Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running `rclone reconnect remote:` until rclone functions properly.
+
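A minimal sketch of clearing those fields, demonstrated on a throwaway copy of a config file (the remote name and path here are examples; locate your real config with `rclone config file` and edit that instead):

```shell
# Demonstrated on a scratch copy; field names are as in the docs above.
cat > /tmp/demo-rclone.conf <<'EOF'
[iclouddrive]
type = iclouddrive
apple_id = APPLEID
cookies = STALE_COOKIES
trust_token = STALE_TOKEN
EOF

# Blank the stale session fields:
sed -i -e 's/^cookies = .*/cookies =/' \
       -e 's/^trust_token = .*/trust_token =/' /tmp/demo-rclone.conf

grep '^cookies =$' /tmp/demo-rclone.conf
# After editing your real config the same way, re-authenticate with:
# rclone reconnect remote:
```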
### Standard options
@@ -46035,7 +46153,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- - Microsoft Cloud Germany
+ - Microsoft Cloud Germany (deprecated - try global region first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -46652,6 +46770,28 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+### Impersonate other users as Admin
+
+Unlike Google Drive, where a service account can impersonate any domain user, OneDrive requires you to authenticate as an admin account and manually set up a remote for each user you wish to impersonate.
+
+1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user you need to "impersonate" and go to the OneDrive section. Under the heading "Get access to files", click to create the link; this creates a link of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` and also changes the permissions so that your admin user has access.
+2. Then in PowerShell run the following commands:
+```console
+Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+Import-Module Microsoft.Graph.Files
+Connect-MgGraph -Scopes "Files.ReadWrite.All"
+# Follow the steps to allow access to your admin user
+# Then run this for each user you want to impersonate to get the Drive ID
+Get-MgUserDefaultDrive -UserId '{emailaddress}'
+# This will give you output of the format:
+# Name Id DriveType CreatedDateTime
+# ---- -- --------- ---------------
+# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
+
+```
+3. Then add a new onedrive remote in rclone, choosing the `Type in driveID` option and supplying the DriveID from the previous step (one remote per user). rclone will then confirm the drive ID and hopefully report `Found drive "root" of type "business"`, along with a URL of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents`
+
+
## Limitations
If you don't use rclone for 90 days the refresh token will
@@ -56509,6 +56649,32 @@ Options:
# Changelog
+## v1.69.1 - 2025-02-14
+
+[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
+
+* Bug Fixes
+ * lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
+ * bisync: Fix listings missing concurrent modifications (nielash)
+ * serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+ * fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
+ * doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
+ * build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
+* VFS
+ * Fix the cache failing to upload symlinks when `--links` was specified (Nick Craig-Wood)
+ * Fix race detected by race detector (Nick Craig-Wood)
+ * Close the change notify channel on Shutdown (izouxv)
+* B2
+ * Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+* Iclouddrive
+ * Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+* Onedrive
+ * Mark German (de) region as deprecated (Nick Craig-Wood)
+* S3
+ * Added new storage class to magalu provider (Bruno Fernandes)
+ * Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+ * Add latest Linode Object Storage endpoints (jbagwell-akamai)
+
## v1.69.0 - 2025-01-12
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
diff --git a/MANUAL.txt b/MANUAL.txt
index 9247df8dc..ea2b464f9 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,75 @@
rclone(1) User Manual
Nick Craig-Wood
-Jan 12, 2025
+Feb 14, 2025
+
+NAME
+
+rclone - manage files on cloud storage
+
+SYNOPSIS
+
+ Usage:
+ rclone [flags]
+ rclone [command]
+
+ Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied content to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn't already exist.
+ mount Mount the remote as file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+ Use "rclone [command] --help" for more information about a command.
+ Use "rclone help flags" for to see the global flags.
+ Use "rclone help backends" for a list of supported services.
Rclone syncs your files to cloud storage
@@ -1600,6 +1669,10 @@ include/exclude filters - everything will be removed. Use the delete
command if you want to selectively delete files. To delete empty
directories only, use command rmdir or rmdirs.
+The concurrency of this operation is controlled by the --checkers global
+flag. However, some backends will implement this command directly, in
+which case --checkers will be ignored.
+
Important: Since this can cause data loss, test first with the --dry-run
or the --interactive/-i flag.
@@ -3467,12 +3540,12 @@ re-encrypt the config.
When --password-command is called to change the password then the
environment variable RCLONE_PASSWORD_CHANGE=1 will be set. So if
-changing passwords programatically you can use the environment variable
+changing passwords programmatically you can use the environment variable
to distinguish which password you must supply.
Alternatively you can remove the password first (with
rclone config encryption remove), then set it again with this command
-which may be easier if you don't mind the unecrypted config file being
+which may be easier if you don't mind the unencrypted config file being
on the disk briefly.
rclone config encryption set [flags]
@@ -3949,7 +4022,7 @@ there is one with the same name.
Setting --stdout or making the output file name - will cause the output
to be written to standard output.
-Troublshooting
+Troubleshooting
If you can't get rclone copyurl to work then here are some things you
can try:
@@ -10102,7 +10175,7 @@ uses an on disk cache, but the cache entries are held as symlinks.
Rclone will use the handle of the underlying file as the NFS handle
which improves performance. This sort of cache can't be backed up and
restored as the underlying handles will change. This is Linux only. It
-requres running rclone as root or with CAP_DAC_READ_SEARCH. You can run
+requires running rclone as root or with CAP_DAC_READ_SEARCH. You can run
rclone with this extra permission by doing this to the rclone binary
sudo setcap cap_dac_read_search+ep /path/to/rclone.
@@ -10903,8 +10976,8 @@ which is defined like this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a
-bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false is to work around a bug
+which will be fixed in due course.
Bugs
@@ -13895,6 +13968,10 @@ also possible to specify --boolean=false or --boolean=true. Note that
--boolean false is not valid - this is parsed as --boolean and the false
is parsed as an extra command line argument for rclone.
+Options documented to take a stringArray parameter accept multiple
+values. To pass more than one value, repeat the option; for example:
+--include value1 --include value2.
+
Time or duration options
TIME or DURATION options can be specified as a duration string or a time
@@ -16177,7 +16254,7 @@ The options set by environment variables can be seen with the -vv flag,
e.g. rclone version -vv.
Options that can appear multiple times (type stringArray) are treated
-slighly differently as environment variables can only be defined once.
+slightly differently as environment variables can only be defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded string. For example
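The CSV treatment can be sketched as follows (illustrative values only; any stringArray option's `RCLONE_*` variable is handled the same way):

```python
import csv
import io

# One environment-variable value carrying two option values as a
# single CSV record (values here are made up for illustration).
value = '*.jpg,"name with, comma.txt"'
parsed = next(csv.reader(io.StringIO(value)))
print(parsed)  # ['*.jpg', 'name with, comma.txt']
```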
@@ -19420,7 +19497,7 @@ This is only useful if --vfs-cache-mode > off. If you call it when the
],
}
-The expiry time is the time until the file is elegible for being
+The expiry time is the time until the file is eligible for being
uploaded in floating point seconds. This may go negative. As rclone only
transfers --transfers files at once, only the lowest --transfers expiry
times will have uploading as true. So there may be files with negative
@@ -20569,7 +20646,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
Performance
@@ -22531,7 +22608,7 @@ Also see the all files changed check.
By using rclone filter features you can exclude file types or directory
sub-trees from the sync. See the bisync filters section and generic
--filter-from documentation. An example filters file contains filters
-for non-allowed files for synching with Dropbox.
+for non-allowed files for syncing with Dropbox.
If you make changes to your filters file then bisync requires a run with
--resync. This is a safety feature, which prevents existing files on the
@@ -22704,7 +22781,7 @@ of a sync. Using --check-sync=false will disable it and may
significantly reduce the sync run times for very large numbers of files.
The check may be run manually with --check-sync=only. It runs only the
-integrity check and terminates without actually synching.
+integrity check and terminates without actually syncing.
Note that currently, --check-sync only checks listing snapshots and NOT
the actual files on the remotes. Note also that the listing snapshots
@@ -23237,7 +23314,7 @@ supported.
How to filter directories
Filtering portions of the directory tree is a critical feature for
-synching.
+syncing.
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync: - Directory trees containing
@@ -23348,7 +23425,7 @@ This noise can be quashed by adding --quiet to the bisync command line.
Example exclude-style filters files for use with Dropbox
-- Dropbox disallows synching the listed temporary and
+- Dropbox disallows syncing the listed temporary and
configuration/data files. The `- ` filters exclude these files where
ever they may occur in the sync tree. Consider adding similar
exclusions for file types you don't need to sync, such as core dump
@@ -23668,7 +23745,7 @@ dash.
Running tests
- go test . -case basic -remote local -remote2 local runs the
- test_basic test case using only the local filesystem, synching one
+ test_basic test case using only the local filesystem, syncing one
local directory with another local directory. Test script output is
to the console, while commands within scenario.txt have their output
sent to the .../workdir/test.log file, which is finally compared to
@@ -23901,6 +23978,11 @@ Unison and synchronization in general.
Changelog
+v1.69.1
+
+- Fixed an issue causing listings to not capture concurrent
+ modifications under certain conditions
+
v1.68
- Fixed an issue affecting backends that round modtimes to a lower
@@ -25192,7 +25274,7 @@ Notes on above:
that USER_NAME has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
-3. When using s3-no-check-bucket and the bucket already exsits, the
+3. When using s3-no-check-bucket and the bucket already exists, the
"arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
For reference, here's an Ansible script that will generate one or more
@@ -28155,8 +28237,8 @@ this:
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a
-bug which will be fixed in due course.
+Note that setting use_multipart_uploads = false is to work around a bug
+which will be fixed in due course.
Scaleway
@@ -29203,27 +29285,49 @@ This will guide you through an interactive setup process.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \ (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\ (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \ (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\ (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\ (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \ (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \ (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \ (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \ (es-mad-1.linodeobjects.com)
+ 10 / Melbourne (Australia), au-mel-1
+ \ (au-mel-1.linodeobjects.com)
+ 11 / Miami, FL (USA), us-mia-1
+ \ (us-mia-1.linodeobjects.com)
+ 12 / Milan (Italy), it-mil-1
\ (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+ 13 / Newark, NJ (USA), us-east-1
\ (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+ 14 / Osaka (Japan), jp-osa-1
+ \ (jp-osa-1.linodeobjects.com)
+ 15 / Paris (France), fr-par-1
\ (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+ 16 / São Paulo (Brazil), br-gru-1
+ \ (br-gru-1.linodeobjects.com)
+ 17 / Seattle, WA (USA), us-sea-1
\ (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+ 18 / Singapore, ap-south-1
\ (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+ 19 / Singapore 2, sg-sin-1
+ \ (sg-sin-1.linodeobjects.com)
+ 20 / Stockholm (Sweden), se-sto-1
\ (se-sto-1.linodeobjects.com)
- 10 / Washington, DC, (USA), us-iad-1
+ 21 / Washington, DC, (USA), us-iad-1
\ (us-iad-1.linodeobjects.com)
- endpoint> 3
+ endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -33757,7 +33861,7 @@ The initial nonce is generated from the operating systems crypto strong
random number generator. The nonce is incremented for each chunk read
making sure each nonce is unique for each block written. The chance of a
nonce being reused is minuscule. If you wrote an exabyte of data (10¹⁸
-bytes) you would have a probability of approximately 2×10⁻³² of re-using
+bytes) you would have a probability of approximately 2×10⁻³² of reusing
a nonce.
Chunk
@@ -40978,7 +41082,7 @@ This will guide you through an interactive setup process:
config_2fa> 2FACODE
Remote config
--------------------
- [koofr]
+ [iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -40994,6 +41098,27 @@ Advanced Data Protection
ADP is currently unsupported and need to be disabled
+On iPhone, Settings > Apple Account > iCloud > 'Access iCloud Data on
+the Web' must be ON, and 'Advanced Data Protection' OFF.
+
+Troubleshooting
+
+Missing PCS cookies from the request
+
+This means you have Advanced Data Protection (ADP) turned on. ADP is
+not currently supported; to use rclone you will have to turn it off
+(see above for how).
+
+You will need to clear the cookies and the trust_token fields in the
+config. Or you can delete the remote config and start again.
+
+You should then run rclone reconnect remote:.
+
+Note that changing the ADP setting may not take effect immediately - you
+may need to wait a few hours or a day before you can get rclone to work
+- keep clearing the config entry and running rclone reconnect remote:
+until rclone functions properly.
+
Standard options
Here are the Standard options specific to iclouddrive (iCloud Drive).
@@ -45589,7 +45714,8 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- - Microsoft Cloud Germany
+ - Microsoft Cloud Germany (deprecated - try global region
+ first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
@@ -46248,6 +46374,38 @@ Here are the possible system metadata items for the onedrive backend.
See the metadata docs for more info.
+Impersonate other users as Admin
+
+Unlike Google Drive, where a service account can impersonate any
+domain user, OneDrive requires you to authenticate as an admin account
+and manually set up a remote for each user you wish to impersonate.
+
+1. In Microsoft 365 Admin Center, open each user you need to
+   "impersonate" and go to the OneDrive section. Under the heading
+   "Get access to files", click to create the link; this creates a
+   link of the format
+   https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/
+   and also changes the permissions so that your admin user has access.
+2. Then in PowerShell run the following commands:
+
+ Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+ Import-Module Microsoft.Graph.Files
+ Connect-MgGraph -Scopes "Files.ReadWrite.All"
+ # Follow the steps to allow access to your admin user
+ # Then run this for each user you want to impersonate to get the Drive ID
+ Get-MgUserDefaultDrive -UserId '{emailaddress}'
+ # This will give you output of the format:
+ # Name Id DriveType CreatedDateTime
+ # ---- -- --------- ---------------
+ # OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
+
+3. Then add a new onedrive remote in rclone, choosing the
+   Type in driveID option and supplying the DriveID from the previous
+   step (one remote per user). rclone will then confirm the drive ID
+   and hopefully report Found drive "root" of type "business", along
+   with a URL of the format
+   https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents
+
Limitations
If you don't use rclone for 90 days the refresh token will expire. This
@@ -56157,6 +56315,37 @@ Options:
Changelog
+v1.69.1 - 2025-02-14
+
+See commits
+
+- Bug Fixes
+ - lib/oauthutil: Fix redirect URL mismatch errors (Nick
+ Craig-Wood)
+ - bisync: Fix listings missing concurrent modifications (nielash)
+ - serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+ - fs: Fix confusing "didn't find section in config file" error
+ (Nick Craig-Wood)
+ - doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt
+ Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
+ - build: Added parallel docker builds and caching for go build in
+ the container (Anagh Kumar Baranwal)
+- VFS
+ - Fix the cache failing to upload symlinks when --links was
+ specified (Nick Craig-Wood)
+ - Fix race detected by race detector (Nick Craig-Wood)
+ - Close the change notify channel on Shutdown (izouxv)
+- B2
+ - Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+- Iclouddrive
+ - Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+- Onedrive
+ - Mark German (de) region as deprecated (Nick Craig-Wood)
+- S3
+ - Added new storage class to magalu provider (Bruno Fernandes)
+ - Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+ - Add latest Linode Object Storage endpoints (jbagwell-akamai)
+
v1.69.0 - 2025-01-12
See commits
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index e3170bbf7..0bdf7c317 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -5,6 +5,32 @@ description: "Rclone Changelog"
# Changelog
+## v1.69.1 - 2025-02-14
+
+[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
+
+* Bug Fixes
+ * lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
+ * bisync: Fix listings missing concurrent modifications (nielash)
+ * serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+ * fs: Fix confusing "didn't find section in config file" error (Nick Craig-Wood)
+ * doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick Craig-Wood, Tim White, Zachary Vorhies)
+ * build: Added parallel docker builds and caching for go build in the container (Anagh Kumar Baranwal)
+* VFS
+ * Fix the cache failing to upload symlinks when `--links` was specified (Nick Craig-Wood)
+ * Fix race detected by race detector (Nick Craig-Wood)
+ * Close the change notify channel on Shutdown (izouxv)
+* B2
+ * Fix "fatal error: concurrent map writes" (Nick Craig-Wood)
+* Iclouddrive
+ * Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+* Onedrive
+ * Mark German (de) region as deprecated (Nick Craig-Wood)
+* S3
+ * Added new storage class to magalu provider (Bruno Fernandes)
+ * Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+ * Add latest Linode Object Storage endpoints (jbagwell-akamai)
+
## v1.69.0 - 2025-01-12
[See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 0921995de..3890f0b91 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -965,7 +965,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
diff --git a/docs/content/commands/rclone_config_encryption_set.md b/docs/content/commands/rclone_config_encryption_set.md
index b02dff900..780c086dc 100644
--- a/docs/content/commands/rclone_config_encryption_set.md
+++ b/docs/content/commands/rclone_config_encryption_set.md
@@ -21,12 +21,12 @@ password to re-encrypt the config.
When `--password-command` is called to change the password then the
environment variable `RCLONE_PASSWORD_CHANGE=1` will be set. So if
-changing passwords programatically you can use the environment
+changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
Alternatively you can remove the password first (with `rclone config
encryption remove`), then set it again with this command which may be
-easier if you don't mind the unecrypted config file being on the disk
+easier if you don't mind the unencrypted config file being on the disk
briefly.
diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md
index 061644fa3..2bdccf5fd 100644
--- a/docs/content/commands/rclone_copyurl.md
+++ b/docs/content/commands/rclone_copyurl.md
@@ -28,7 +28,7 @@ destination if there is one with the same name.
Setting `--stdout` or making the output file name `-`
will cause the output to be written to standard output.
-## Troublshooting
+## Troubleshooting
If you can't get `rclone copyurl` to work then here are some things you can try:
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index ab191f57e..c1fded41c 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -15,6 +15,9 @@ include/exclude filters - everything will be removed. Use the
delete files. To delete empty directories only, use command
[rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/).
+The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will
+implement this command directly, in which case `--checkers` will be ignored.
+
**Important**: Since this can cause data loss, test first with the
`--dry-run` or the `--interactive`/`-i` flag.
diff --git a/docs/content/commands/rclone_serve_nfs.md b/docs/content/commands/rclone_serve_nfs.md
index ed8358ec2..39deb36a9 100644
--- a/docs/content/commands/rclone_serve_nfs.md
+++ b/docs/content/commands/rclone_serve_nfs.md
@@ -7,8 +7,6 @@ versionIntroduced: v1.65
---
# rclone serve nfs
-*Not available in Windows.*
-
Serve the remote as an NFS mount
## Synopsis
@@ -55,7 +53,7 @@ that it uses an on disk cache, but the cache entries are held as
symlinks. Rclone will use the handle of the underlying file as the NFS
handle which improves performance. This sort of cache can't be backed
up and restored as the underlying handles will change. This is Linux
-only. It requres running rclone as root or with `CAP_DAC_READ_SEARCH`.
+only. It requires running rclone as root or with `CAP_DAC_READ_SEARCH`.
You can run rclone with this extra permission by doing this to the
rclone binary `sudo setcap cap_dac_read_search+ep /path/to/rclone`.
diff --git a/docs/content/commands/rclone_serve_s3.md b/docs/content/commands/rclone_serve_s3.md
index 40813321b..8785d2065 100644
--- a/docs/content/commands/rclone_serve_s3.md
+++ b/docs/content/commands/rclone_serve_s3.md
@@ -82,7 +82,7 @@ secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
```
-Note that setting `disable_multipart_uploads = true` is to work around
+Note that setting `use_multipart_uploads = false` is to work around
[a bug](#bugs) which will be fixed in due course.
## Bugs
diff --git a/docs/content/flags.md b/docs/content/flags.md
index f9d32c490..e022ed0ec 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -9,8 +9,6 @@ description: "Rclone Global Flags"
This describes the global flags available to every rclone command
split into groups.
-See the [Options section](/docs/#options) for syntax and usage advice.
-
## Copy
@@ -118,7 +116,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.1")
```
diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md
index c3549d158..26faedcc6 100644
--- a/docs/content/googlephotos.md
+++ b/docs/content/googlephotos.md
@@ -384,7 +384,7 @@ Use the gphotosdl proxy for downloading the full resolution images
The Google API will deliver images and video which aren't full
resolution, and/or have EXIF data missing.
-However if you use the gphotosdl proxy then you can download original,
+However if you use the gphotosdl proxy then you can download original,
unchanged images.
This runs a headless browser in the background.
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index 91c852083..1927123ba 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -319,7 +319,7 @@ Properties:
- "us"
- Microsoft Cloud for US Government
- "de"
- - Microsoft Cloud Germany
+ - Microsoft Cloud Germany (deprecated - try global region first).
- "cn"
- Azure and Office 365 operated by Vnet Group in China
diff --git a/rclone.1 b/rclone.1
index 28e0c3304..57bda9b82 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,8 +1,79 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
-.TH "rclone" "1" "Jan 12, 2025" "User Manual" ""
+.TH "rclone" "1" "Feb 14, 2025" "User Manual" ""
.hy
+.SH NAME
+.PP
+rclone - manage files on cloud storage
+.SH SYNOPSIS
+.IP
+.nf
+\f[C]
+Usage:
+ rclone [flags]
+ rclone [command]
+
+Available commands:
+ about Get quota information from the remote.
+ authorize Remote authorization.
+ backend Run a backend-specific command.
+ bisync Perform bidirectional synchronization between two paths.
+ cat Concatenates any files and sends them to stdout.
+ check Checks the files in the source and destination match.
+ checksum Checks the files in the destination against a SUM file.
+ cleanup Clean up the remote if possible.
+ completion Output completion script for a given shell.
+ config Enter an interactive configuration session.
+ copy Copy files from source to dest, skipping identical files.
+ copyto Copy files from source to dest, skipping identical files.
+ copyurl Copy the contents of the URL supplied content to dest:path.
+ cryptcheck Cryptcheck checks the integrity of an encrypted remote.
+ cryptdecode Cryptdecode returns unencrypted file names.
+ dedupe Interactively find duplicate filenames and delete/rename them.
+ delete Remove the files in path.
+ deletefile Remove a single file from remote.
+ gendocs Output markdown docs for rclone to the directory supplied.
+ gitannex Speaks with git-annex over stdin/stdout.
+ hashsum Produces a hashsum file for all the objects in the path.
+ help Show help for rclone commands, flags and backends.
+ link Generate public link to file/folder.
+ listremotes List all the remotes in the config file and defined in environment variables.
+ ls List the objects in the path with size and path.
+ lsd List all directories/containers/buckets in the path.
+ lsf List directories and objects in remote:path formatted for parsing.
+ lsjson List directories and objects in the path in JSON format.
+ lsl List the objects in path with modification time, size and path.
+ md5sum Produces an md5sum file for all the objects in the path.
+ mkdir Make the path if it doesn\[aq]t already exist.
+ mount Mount the remote as file system on a mountpoint.
+ move Move files from source to dest.
+ moveto Move file or directory from source to dest.
+ ncdu Explore a remote with a text based user interface.
+ nfsmount Mount the remote as file system on a mountpoint.
+ obscure Obscure password for use in the rclone config file.
+ purge Remove the path and all of its contents.
+ rc Run a command against a running rclone.
+ rcat Copies standard input to file on remote.
+ rcd Run rclone listening to remote control commands only.
+ rmdir Remove the empty directory at path.
+ rmdirs Remove empty directories under the path.
+ selfupdate Update the rclone binary.
+ serve Serve a remote over a protocol.
+ settier Changes storage class/tier of objects in remote.
+ sha1sum Produces an sha1sum file for all the objects in the path.
+ size Prints the total size and number of objects in remote:path.
+ sync Make source and dest identical, modifying destination only.
+ test Run a test command
+ touch Create new file or change file modification time.
+ tree List the contents of the remote in a tree like fashion.
+ version Show the version number.
+
+Use \[dq]rclone [command] --help\[dq] for more information about a command.
+Use \[dq]rclone help flags\[dq] to see the global flags.
+Use \[dq]rclone help backends\[dq] for a list of supported services.
+\f[R]
+.fi
.SH Rclone syncs your files to cloud storage
.PP
.IP \[bu] 2
@@ -2238,6 +2309,11 @@ To delete empty directories only, use command
rmdir (https://rclone.org/commands/rclone_rmdir/) or
rmdirs (https://rclone.org/commands/rclone_rmdirs/).
.PP
+The concurrency of this operation is controlled by the
+\f[C]--checkers\f[R] global flag.
+However, some backends will implement this command directly, in which
+case \f[C]--checkers\f[R] will be ignored.
+.PP
\f[B]Important\f[R]: Since this can cause data loss, test first with the
\f[C]--dry-run\f[R] or the \f[C]--interactive\f[R]/\f[C]-i\f[R] flag.
.IP
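As a usage sketch of the flag described above (remote:path is a placeholder and 16 an arbitrary value):

```shell
# Preview which empty directories rmdirs would remove, running up to
# 16 checkers in parallel. On backends that implement this command
# natively, --checkers is ignored.
rclone rmdirs remote:path --checkers 16 --dry-run
```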
@@ -4652,12 +4728,12 @@ password to re-encrypt the config.
.PP
When \f[C]--password-command\f[R] is called to change the password then
the environment variable \f[C]RCLONE_PASSWORD_CHANGE=1\f[R] will be set.
-So if changing passwords programatically you can use the environment
+So if changing passwords programmatically you can use the environment
variable to distinguish which password you must supply.
.PP
Alternatively you can remove the password first (with
\f[C]rclone config encryption remove\f[R]), then set it again with this
-command which may be easier if you don\[aq]t mind the unecrypted config
+command which may be easier if you don\[aq]t mind the unencrypted config
file being on the disk briefly.
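As a sketch of how a `--password-command` script might use this environment variable (the file names and passwords below are hypothetical placeholders, not part of rclone):

```shell
#!/bin/sh
# Hypothetical --password-command script. rclone sets
# RCLONE_PASSWORD_CHANGE=1 when it is asking for the new password
# during a password change; otherwise the variable is unset.
if [ "${RCLONE_PASSWORD_CHANGE:-0}" = "1" ]; then
    echo "new-config-password"      # placeholder for the new password
else
    echo "current-config-password"  # placeholder for the current password
fi
```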
.IP
.nf
@@ -5273,7 +5349,7 @@ destination if there is one with the same name.
.PP
Setting \f[C]--stdout\f[R] or making the output file name \f[C]-\f[R]
will cause the output to be written to standard output.
-.SS Troublshooting
+.SS Troubleshooting
.PP
If you can\[aq]t get \f[C]rclone copyurl\f[R] to work then here are some
things you can try:
@@ -12993,7 +13069,8 @@ which improves performance.
This sort of cache can\[aq]t be backed up and restored as the underlying
handles will change.
This is Linux only.
-It requres running rclone as root or with \f[C]CAP_DAC_READ_SEARCH\f[R].
+It requires running rclone as root or with
+\f[C]CAP_DAC_READ_SEARCH\f[R].
You can run rclone with this extra permission by doing this to the
rclone binary
\f[C]sudo setcap cap_dac_read_search+ep /path/to/rclone\f[R].
@@ -13973,7 +14050,7 @@ use_multipart_uploads = false
\f[R]
.fi
.PP
-Note that setting \f[C]disable_multipart_uploads = true\f[R] is to work
+Note that setting \f[C]use_multipart_uploads = false\f[R] is to work
around a bug which will be fixed in due course.
.SS Bugs
.PP
@@ -17806,6 +17883,11 @@ It is also possible to specify \f[C]--boolean=false\f[R] or
Note that \f[C]--boolean false\f[R] is not valid - this is parsed as
\f[C]--boolean\f[R] and the \f[C]false\f[R] is parsed as an extra
command line argument for rclone.
+.PP
+Options documented to take a \f[C]stringArray\f[R] parameter accept
+multiple values.
+To pass more than one value, repeat the option; for example:
+\f[C]--include value1 --include value2\f[R].
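As a concrete usage sketch (remote: and the patterns are placeholders):

```shell
# Pass two include patterns by repeating the stringArray option.
rclone ls remote: --include "*.jpg" --include "*.png"
```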
.SS Time or duration options
.PP
TIME or DURATION options can be specified as a duration string or a time
@@ -20455,8 +20537,8 @@ The options set by environment variables can be seen with the
\f[C]rclone version -vv\f[R].
.PP
Options that can appear multiple times (type \f[C]stringArray\f[R]) are
-treated slighly differently as environment variables can only be defined
-once.
+treated slightly differently as environment variables can only be
+defined once.
In order to allow a simple mechanism for adding one or many items, the
input is treated as a CSV encoded (https://godoc.org/encoding/csv)
string.
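For example, assuming the usual mapping of `--exclude` to `RCLONE_EXCLUDE`:

```shell
# Two values in one CSV-encoded environment variable,
# equivalent to: --exclude '*.jpg' --exclude '*.png'
export RCLONE_EXCLUDE='*.jpg,*.png'
# A value that itself contains a comma must be CSV-quoted:
export RCLONE_EXCLUDE='"a,b.txt",*.bak'
```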
@@ -24731,7 +24813,7 @@ return an empty result.
\f[R]
.fi
.PP
-The \f[C]expiry\f[R] time is the time until the file is elegible for
+The \f[C]expiry\f[R] time is the time until the file is eligible for
being uploaded in floating point seconds.
This may go negative.
As rclone only transfers \f[C]--transfers\f[R] files at once, only the
@@ -28442,7 +28524,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.69.0\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.69.1\[dq])
\f[R]
.fi
.SS Performance
@@ -30761,7 +30843,7 @@ See the bisync filters section and generic
--filter-from (https://rclone.org/filtering/#filter-from-read-filtering-patterns-from-a-file)
documentation.
An example filters file contains filters for non-allowed files for
-synching with Dropbox.
+syncing with Dropbox.
.PP
If you make changes to your filters file then bisync requires a run with
\f[C]--resync\f[R].
@@ -30987,7 +31069,7 @@ reduce the sync run times for very large numbers of files.
.PP
The check may be run manually with \f[C]--check-sync=only\f[R].
It runs only the integrity check and terminates without actually
-synching.
+syncing.
.PP
Note that currently, \f[C]--check-sync\f[R] \f[B]only checks listing
snapshots and NOT the actual files on the remotes.\f[R] Note also that
@@ -31701,7 +31783,7 @@ flags are also supported.
.SS How to filter directories
.PP
Filtering portions of the directory tree is a critical feature for
-synching.
+syncing.
.PP
Examples of directory trees (always beneath the Path1/Path2 root level)
you may want to exclude from your sync: - Directory trees containing
@@ -31859,7 +31941,7 @@ This noise can be quashed by adding \f[C]--quiet\f[R] to the bisync
command line.
.SS Example exclude-style filters files for use with Dropbox
.IP \[bu] 2
-Dropbox disallows synching the listed temporary and configuration/data
+Dropbox disallows syncing the listed temporary and configuration/data
files.
The \[ga]- \[ga] filters exclude these files where ever they may occur
in the sync tree.
@@ -32246,7 +32328,7 @@ single \f[C]-\f[R] or double dash.
.SS Running tests
.IP \[bu] 2
\f[C]go test . -case basic -remote local -remote2 local\f[R] runs the
-\f[C]test_basic\f[R] test case using only the local filesystem, synching
+\f[C]test_basic\f[R] test case using only the local filesystem, syncing
one local directory with another local directory.
Test script output is to the console, while commands within scenario.txt
have their output sent to the \f[C].../workdir/test.log\f[R] file, which
@@ -32579,6 +32661,10 @@ Also note a number of academic publications by Benjamin
Pierce (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization)
about \f[I]Unison\f[R] and synchronization in general.
.SS Changelog
+.SS \f[C]v1.69.1\f[R]
+.IP \[bu] 2
+Fixed an issue causing listings to not capture concurrent modifications
+under certain conditions
.SS \f[C]v1.68\f[R]
.IP \[bu] 2
Fixed an issue affecting backends that round modtimes to a lower
@@ -34293,7 +34379,7 @@ It assumes that \f[C]USER_NAME\f[R] has been created.
The Resource entry must include both resource ARNs, as one implies the
bucket and the other implies the bucket\[aq]s objects.
.IP "3." 3
-When using s3-no-check-bucket and the bucket already exsits, the
+When using s3-no-check-bucket and the bucket already exists, the
\f[C]\[dq]arn:aws:s3:::BUCKET_NAME\[dq]\f[R] doesn\[aq]t have to be
included.
.PP
@@ -38469,7 +38555,7 @@ location_constraint = au-nsw
.PP
Rclone can serve any remote over the S3 protocol.
For details see the rclone serve
-s3 (https://rclone.org/commands/rclone_serve_http/) documentation.
+s3 (https://rclone.org/commands/rclone_serve_s3/) documentation.
.PP
For example, to serve \f[C]remote:path\f[R] over s3, run the server like
this:
@@ -38495,8 +38581,8 @@ use_multipart_uploads = false
\f[R]
.fi
.PP
-Note that setting \f[C]disable_multipart_uploads = true\f[R] is to work
-around a bug (https://rclone.org/commands/rclone_serve_http/#bugs) which
+Note that setting \f[C]use_multipart_uploads = false\f[R] is to work
+around a bug (https://rclone.org/commands/rclone_serve_s3/#bugs) which
will be fixed in due course.
.SS Scaleway
.PP
@@ -39689,27 +39775,49 @@ Option endpoint.
Endpoint for Linode Object Storage API.
Choose a number from below, or type in your own value.
Press Enter to leave empty.
- 1 / Atlanta, GA (USA), us-southeast-1
+ 1 / Amsterdam (Netherlands), nl-ams-1
+ \[rs] (nl-ams-1.linodeobjects.com)
+ 2 / Atlanta, GA (USA), us-southeast-1
\[rs] (us-southeast-1.linodeobjects.com)
- 2 / Chicago, IL (USA), us-ord-1
+ 3 / Chennai (India), in-maa-1
+ \[rs] (in-maa-1.linodeobjects.com)
+ 4 / Chicago, IL (USA), us-ord-1
\[rs] (us-ord-1.linodeobjects.com)
- 3 / Frankfurt (Germany), eu-central-1
+ 5 / Frankfurt (Germany), eu-central-1
\[rs] (eu-central-1.linodeobjects.com)
- 4 / Milan (Italy), it-mil-1
+ 6 / Jakarta (Indonesia), id-cgk-1
+ \[rs] (id-cgk-1.linodeobjects.com)
+ 7 / London 2 (Great Britain), gb-lon-1
+ \[rs] (gb-lon-1.linodeobjects.com)
+ 8 / Los Angeles, CA (USA), us-lax-1
+ \[rs] (us-lax-1.linodeobjects.com)
+ 9 / Madrid (Spain), es-mad-1
+ \[rs] (es-mad-1.linodeobjects.com)
+10 / Melbourne (Australia), au-mel-1
+ \[rs] (au-mel-1.linodeobjects.com)
+11 / Miami, FL (USA), us-mia-1
+ \[rs] (us-mia-1.linodeobjects.com)
+12 / Milan (Italy), it-mil-1
\[rs] (it-mil-1.linodeobjects.com)
- 5 / Newark, NJ (USA), us-east-1
+13 / Newark, NJ (USA), us-east-1
\[rs] (us-east-1.linodeobjects.com)
- 6 / Paris (France), fr-par-1
+14 / Osaka (Japan), jp-osa-1
+ \[rs] (jp-osa-1.linodeobjects.com)
+15 / Paris (France), fr-par-1
\[rs] (fr-par-1.linodeobjects.com)
- 7 / Seattle, WA (USA), us-sea-1
+16 / S\[~a]o Paulo (Brazil), br-gru-1
+ \[rs] (br-gru-1.linodeobjects.com)
+17 / Seattle, WA (USA), us-sea-1
\[rs] (us-sea-1.linodeobjects.com)
- 8 / Singapore ap-south-1
+18 / Singapore, ap-south-1
\[rs] (ap-south-1.linodeobjects.com)
- 9 / Stockholm (Sweden), se-sto-1
+19 / Singapore 2, sg-sin-1
+ \[rs] (sg-sin-1.linodeobjects.com)
+20 / Stockholm (Sweden), se-sto-1
\[rs] (se-sto-1.linodeobjects.com)
-10 / Washington, DC, (USA), us-iad-1
+21 / Washington, DC, (USA), us-iad-1
\[rs] (us-iad-1.linodeobjects.com)
-endpoint> 3
+endpoint> 5
Option acl.
Canned ACL used when creating buckets and storing or copying objects.
@@ -45295,7 +45403,7 @@ The nonce is incremented for each chunk read making sure each nonce is
unique for each block written.
The chance of a nonce being reused is minuscule.
If you wrote an exabyte of data (10\[S1]\[u2078] bytes) you would have a
-probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of re-using a
+probability of approximately 2\[tmu]10\[u207B]\[S3]\[S2] of reusing a
nonce.
.SS Chunk
.PP
@@ -54838,7 +54946,7 @@ Enter a value.
config_2fa> 2FACODE
Remote config
--------------------
-[koofr]
+[iclouddrive]
- type: iclouddrive
- apple_id: APPLEID
- password: *** ENCRYPTED ***
@@ -54854,6 +54962,28 @@ y/e/d> y
.SS Advanced Data Protection
.PP
ADP is currently unsupported and needs to be disabled.
+.PP
+On iPhone, Settings \f[C]>\f[R] Apple Account \f[C]>\f[R] iCloud
+\f[C]>\f[R] \[aq]Access iCloud Data on the Web\[aq] must be ON, and
+\[aq]Advanced Data Protection\[aq] OFF.
+.SS Troubleshooting
+.SS Missing PCS cookies from the request
+.PP
+This means you have Advanced Data Protection (ADP) turned on.
+ADP is not currently supported, so if you want to use rclone you will
+have to turn it off (see above for how).
+.PP
+You will need to clear the \f[C]cookies\f[R] and the
+\f[C]trust_token\f[R] fields in the config.
+Or you can delete the remote config and start again.
+.PP
+You should then run \f[C]rclone reconnect remote:\f[R].
+.PP
+Note that changing the ADP setting may not take effect immediately; you
+may need to wait a few hours or a day before rclone works.
+Keep clearing the config entry and running
+\f[C]rclone reconnect remote:\f[R] until rclone functions properly.
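After clearing, the relevant part of the config file might look like this (the remote name and values are illustrative):

```
[iclouddrive]
type = iclouddrive
apple_id = APPLEID
cookies =
trust_token =
```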
.SS Standard options
.PP
Here are the Standard options specific to iclouddrive (iCloud Drive).
@@ -60946,7 +61076,7 @@ Microsoft Cloud for US Government
\[dq]de\[dq]
.RS 2
.IP \[bu] 2
-Microsoft Cloud Germany
+Microsoft Cloud Germany (deprecated - try global region first).
.RE
.IP \[bu] 2
\[dq]cn\[dq]
@@ -61951,6 +62081,43 @@ T}
.TE
.PP
See the metadata (https://rclone.org/docs/#metadata) docs for more info.
+.SS Impersonate other users as Admin
+.PP
+Unlike Google Drive, which lets service accounts impersonate any domain
+user, OneDrive requires you to authenticate as an admin account and
+manually set up a remote for each user you wish to impersonate.
+.IP "1." 3
+In the Microsoft 365 Admin Center (https://admin.microsoft.com), open
+each user you need to \[dq]impersonate\[dq] and go to the OneDrive
+section.
+Under the heading \[dq]Get access to files\[dq], click to create the
+link.
+This creates a link of the format
+\f[C]https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/\f[R]
+and also changes the permissions so your admin user has access.
+.IP "2." 3
+Then in PowerShell run the following commands:
+.IP
+.nf
+\f[C]
+Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
+Import-Module Microsoft.Graph.Files
+Connect-MgGraph -Scopes \[dq]Files.ReadWrite.All\[dq]
+# Follow the steps to allow access to your admin user
+# Then run this for each user you want to impersonate to get the Drive ID
+Get-MgUserDefaultDrive -UserId \[aq]{emailaddress}\[aq]
+# This will give you output of the format:
+# Name Id DriveType CreatedDateTime
+# ---- -- --------- ---------------
+# OneDrive b!XYZ123 business 14/10/2023 1:00:58\[u202F]pm
+\f[R]
+.fi
+.IP "3." 3
+Then in rclone add a remote of type \f[C]onedrive\f[R] and choose
+\f[C]Type in driveID\f[R], entering the DriveID you got in the previous
+step.
+One remote per user.
+Rclone will then confirm the drive ID and should give you a message of
+\f[C]Found drive \[dq]root\[dq] of type \[dq]business\[dq]\f[R] and then
+include the URL of the format
+\f[C]https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents\f[R]
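The resulting per-user remote in the config file might look something like this (the remote name, token, and drive ID are illustrative):

```
[user1-onedrive]
type = onedrive
token = {"access_token":"..."}
drive_id = b!XYZ123
drive_type = business
```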
.SS Limitations
.PP
If you don\[aq]t use rclone for 90 days the refresh token will expire.
@@ -74872,6 +75039,67 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
+.SS v1.69.1 - 2025-02-14
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.69.0...v1.69.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+lib/oauthutil: Fix redirect URL mismatch errors (Nick Craig-Wood)
+.IP \[bu] 2
+bisync: Fix listings missing concurrent modifications (nielash)
+.IP \[bu] 2
+serve s3: Fix list objects encoding-type (Nick Craig-Wood)
+.IP \[bu] 2
+fs: Fix confusing \[dq]didn\[aq]t find section in config file\[dq] error
+(Nick Craig-Wood)
+.IP \[bu] 2
+doc fixes (Christoph Berger, Dimitri Papadopoulos, Matt Ickstadt, Nick
+Craig-Wood, Tim White, Zachary Vorhies)
+.IP \[bu] 2
+build: Added parallel docker builds and caching for go build in the
+container (Anagh Kumar Baranwal)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix the cache failing to upload symlinks when \f[C]--links\f[R] was
+specified (Nick Craig-Wood)
+.IP \[bu] 2
+Fix race detected by race detector (Nick Craig-Wood)
+.IP \[bu] 2
+Close the change notify channel on Shutdown (izouxv)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Fix \[dq]fatal error: concurrent map writes\[dq] (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Iclouddrive
+.RS 2
+.IP \[bu] 2
+Add notes on ADP and Missing PCS cookies (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Mark German (de) region as deprecated (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Added new storage class to magalu provider (Bruno Fernandes)
+.IP \[bu] 2
+Add DigitalOcean regions SFO2, LON1, TOR1, BLR1 (jkpe)
+.IP \[bu] 2
+Add latest Linode Object Storage endpoints (jbagwell-akamai)
+.RE
.SS v1.69.0 - 2025-01-12
.PP
See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.69.0)