From 729799af7c6a13b42c535d5c5a0d112ea9409aef Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood

Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.

Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support. Users call rclone "The Swiss army knife of cloud storage" and "Technology indistinguishable from magic".

Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth, over intermittent connections, or subject to quota can be restarted from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server side transfers to minimise local bandwidth use, and transfers from one provider to another without using local disk.

Virtual backends wrap local and cloud file systems to apply encryption, caching, chunking and joining. Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.

For beta installation, run the install script. Note that this script checks the version of rclone installed first and won't re-download if not needed.

To install with Docker, fetch and unpack the image. You need to mount the host rclone config dir and a host data dir into the Docker container. By default, the rclone binary inside a Docker container runs with UID=0 (root). As a result, all files created in a run will have UID=0. If your config and data files reside on the host with a non-root UID:GID, you need to pass these on the container start command line. It is possible to use rclone mount inside the container as well; you also need to mount the relevant host resources for fuse to work inside the container. Here are some commands tested on an Ubuntu 18.04.3 host:

To build from source, make sure you have at least Go 1.10 installed. Download go if necessary. The latest release is recommended. You can also build and install rclone in the GOPATH (which defaults to ~/go), and this will build the binary in $GOPATH/bin. This will leave you a checked out version of rclone you can modify and send pull requests with. If you use make instead of go build then the rclone build will have the correct version information in it.

You can also build the latest stable rclone, or the latest version (equivalent to the beta). These will build the binary in $(go env GOPATH)/bin.

Installation with Ansible can be done with Stefan Weichinger's ansible role; see its instructions.
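As a sketch of the Docker points above (assuming the official rclone/rclone image; host paths are illustrative):

```sh
# Run a one-off rclone command in a container, mounting the host
# config dir and a data dir, and passing the host UID:GID so that
# created files are not owned by root.
docker run --rm \
    -v ~/.config/rclone:/config/rclone \
    -v ~/data:/data \
    --user $(id -u):$(id -g) \
    rclone/rclone \
    listremotes
```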
First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the config documentation for where this file lives.) The easiest way to make the config is to run rclone with the config option:

See the following for detailed instructions for each backend.

Rclone syncs a directory tree from one storage system to another. Its syntax is like this

Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive. You can define as many storage paths as you like in the config file.

rclone uses a system of subcommands. For example

Copy files from source to dest, skipping already copied.

Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination. Note that it is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. If dest:path doesn't exist, it is created and the source:path contents go there.

For example, let's say there are two files in sourcepath. Copying sourcepath to destpath copies them into destpath directly, not into a destpath/sourcepath subdirectory. If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory".

See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.

For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this:

Note: Use the -P/--progress flag to view real-time transfer statistics.

Make source and dest identical, modifying destination only. Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.

Important: Since this can cause data loss, test first with the --dry-run flag.

Note that files in the destination won't be deleted if there were any errors at any point. It is always the contents of the directory that is synced, not the directory itself, so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See the extended explanation in the copy command above if unsure. If dest:path doesn't exist, it is created and the source:path contents go there.
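A minimal sketch of the copy and sync usage described above (remote and path names are illustrative):

```sh
# Copy only files changed in the last 24h, without listing the
# whole destination (per the example above):
rclone copy --max-age 24h --no-traverse /path/to/src remote:dest

# Preview a sync before letting it delete anything, then run it
# with real-time transfer statistics:
rclone sync --dry-run /path/to/src remote:dest
rclone sync -P /path/to/src remote:dest
```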
Note: Use the -P/--progress flag to view real-time transfer statistics.

Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation.

If no filters are in use and if possible this will server side move source:path into dest:path. Otherwise, each file in source:path selected by the filters (if any) will be moved into dest:path.

If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.

See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.

Important: Since this can cause data loss, test first with the --dry-run flag.

Note: Use the -P/--progress flag to view real-time transfer statistics.

Remove the files in path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files. If you supply the --rmdirs flag, it will remove all empty directories along with it.

Eg delete all files bigger than 100MBytes: check what would be deleted first, then delete - see the sketch after this group of commands. That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.

Make the path if it doesn't already exist.

Remove the path if empty.

Remove the path. Note that you can't remove a path with objects in it, use purge for that.
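A sketch of the delete workflow described above, using the --min-size filter (remote name is illustrative):

```sh
# Check what would be deleted first (use either):
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path

# Then delete all files bigger than 100MBytes:
rclone --min-size 100M delete remote:path
```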
Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.

If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.

If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.

If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.

There are several related list commands, and any of the filtering options can be applied to them. Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion. The other list commands lsd, lsf and lsjson do not recurse by default - use -R to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

If you just want the directory names use "rclone lsf --dirs-only".

If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta.

Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.

Use the --full flag to see the numbers written out in full, eg

Use the --json flag for a computer readable output, eg

Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config. Use the --auth-no-open-browser flag to prevent rclone from opening the auth link in the default browser automatically.

Run a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions. You can discover what commands a backend implements by using the backend help command.

rclone cat sends any files to standard output. Or like this to output any .txt files in dir or its subdirectories.
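For instance (paths illustrative):

```sh
# Print a single file to standard output:
rclone cat remote:path/to/file

# Output any .txt files in dir or its subdirectories:
rclone --include "*.txt" cat remote:path/to/dir
```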
Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.

For example to make a swift remote of name myremote using auto config you would do:

Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken.

If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.

NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.

So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:

This disconnects the remote: passed in to the cloud storage system. This normally means revoking the oauth token. To reconnect use "rclone config reconnect".

Update password in an existing remote. Update an existing remote's password. The password should be passed in pairs of key value.

For example to set password of a remote of name myremote you would do:

This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.

Re-authenticates user with remote. This reconnects remote: passed in to the cloud storage system. This normally means going through the interactive oauth flow again. To disconnect the remote use "rclone config disconnect".
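A sketch of these config commands (remote name, field and values are illustrative; the swift example mirrors the one described above):

```sh
# Make a swift remote named myremote using auto config:
rclone config create myremote swift env_auth true

# Set (and obscure) a password field on an existing remote:
rclone config password myremote fieldname mypassword
```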
Update options in an existing remote. Update an existing remote's options. The options should be passed in pairs of key value.

For example to update the env_auth field of a remote of name myremote you would do:

If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.

NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.

If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus:

This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.

Note: Use the -P/--progress flag to view real-time transfer statistics.

Copy url content to dest. Download a URL's content and copy it to the destination without saving it in temporary storage.

Setting --auto-filename will cause the file name to be retrieved from the URL (after any redirections) and used in the destination path. Setting --no-clobber will prevent overwriting a file on the destination if there is one with the same name. Setting --stdout or making the output file name "-" will cause the output to be written to standard output.

You can use it like this also, but that will involve downloading all the files in remote:path. After it has run it will log the status of the encryptedremote:.

If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.

Cryptdecode returns unencrypted file names.
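A sketch of copyurl (URL and destination are illustrative):

```sh
# Fetch a URL into remote:path, taking the file name from the URL:
rclone copyurl --auto-filename https://example.com/file.zip remote:path

# Refuse to overwrite an existing destination file:
rclone copyurl --no-clobber https://example.com/file.zip remote:path/file.zip
```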
rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

If you supply the --reverse flag, it will return encrypted file names. Use it like this:

Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the file specified exists it will always be removed.

Output completion script for a given shell. Generates a shell completion script for rclone. Run with --help to list the supported shells.

See the global flags page for global options not listed here.

rclone link will create or retrieve a public link to the given file or folder. If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints - e.g. no expiry, no password protection, accessible without account.

Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. Eg

If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.

For example to emulate the md5sum command you can use

(Though "rclone md5sum ." is an easier way of typing this.) By default the separator is ";"; this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy. Eg
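For instance, using the format and separator options above (remote path illustrative; the second command mirrors the md5sum emulation mentioned in the text):

```sh
# List modification time, size and path, with the path last:
rclone lsf --format "tsp" remote:path

# Emulate md5sum output:
rclone lsf -R --hash MD5 --format hp --separator "  " --files-only remote:path
```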
Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.

For example to find all the files modified within one day and copy those only (without traversing the whole directory structure):

There are several related list commands, and any of the filtering options can be applied to them. Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion. The other list commands lsd, lsf and lsjson do not recurse by default - use -R to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

List directories and objects in the path in JSON format. The output is an array of Items, where each Item looks like this:

```
{
  "Hashes" : {
    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
    "MD5" : "b1946ac92492d2347c6235b4d2611184",
    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "ID": "y2djkhiujf83u33",
  "OrigID": "UYOJVTUW00Q1RzTDA",
  "IsBucket" : false,
  "IsDir" : false,
  "MimeType" : "application/octet-stream",
  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
  "Name" : "file.txt",
  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
  "Path" : "full/path/goes/here/file.txt",
  "Size" : 6,
  "Tier" : "hot"
}
```

If --hash is not specified the Hashes property won't be emitted.
The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash.

If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift).

If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift).

If --encrypted is not specified the Encrypted won't be emitted.

If --dirs-only is not specified files in addition to directories are returned. If --files-only is not specified directories in addition to the files will be returned.

The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.

If the directory is a bucket in a bucket based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".

The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00").

The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.

There are several related list commands, and any of the filtering options can be applied to them. Note that ls and lsl recurse by default - use --max-depth 1 to stop the recursion. The other list commands lsd, lsf and lsjson do not recurse by default - use -R to make them recurse. Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).

Mount the remote as file system on a mountpoint. rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

First set up your remote using rclone config and check it works.

You can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default, use the --daemon flag to specify background mode. Background mode is only supported on Linux and OSX, you can only run mount in foreground mode on Windows.

On Linux/macOS/FreeBSD start the mount like this, where /path/to/local/mount is an empty existing directory. Or on Windows like this, where X: is an unused drive letter.

When running in background mode the user will have to stop the mount manually (specified below). When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.

The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.

Stopping the mount manually:
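A sketch of starting and stopping a mount (paths and drive letter are illustrative):

```sh
# Linux/macOS/FreeBSD: mount in the background:
rclone mount remote:path/to/files /path/to/local/mount --daemon

# Windows: mount to an unused drive letter:
rclone mount remote:path/to/files X:

# Stop a background mount manually:
fusermount -u /path/to/local/mount   # Linux
umount /path/to/local/mount          # macOS/FreeBSD
```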
By default, rclone will mount the remote as a normal drive. However, you can also mount it as a Network Drive (or Network Share, as mentioned in some places).

Unlike other systems, Windows provides a different filesystem type for network drives. Windows and other programs treat network drives and fixed/removable drives differently: in network drives, many I/O operations are optimized, as the high latency and low reliability (compared to a normal drive) of a network is expected.

Although many people prefer network shares to be mounted as normal system drives, this might cause some issues, such as programs not working as expected or freezes and errors while operating with the mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares, as Windows expects normal drives to be fast and reliable, while cloud storage is far from that. See also the Limitations section below for more info.

Add "--fuse-flag --VolumePrefix=" to your "mount" command, replacing "share" with any other name of your choice if you are mounting more than one remote (see the sketch below). Otherwise, the mountpoints will conflict and your mounted filesystems will overlap.
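As a sketch of the network drive mount above (the volume prefix value is illustrative; replace "share" with a distinct name per remote):

```sh
# Mount as a Windows network drive rather than a fixed drive:
rclone mount remote:path/to/files X: --fuse-flag --VolumePrefix=\server\share
```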
Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section for more info.

The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

Only supported on Linux, FreeBSD, OS X and Windows at the moment.

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.

You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries. The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.

In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories.

The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above. If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above. If files don't change on the remote outside of the control of rclone then there is no chance of corruption.

This is the same as setting the attr_timeout option in mount.fuse.

Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.
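For instance, trading fewer kernel callbacks against the corruption risk described in the attribute caching notes above (value illustrative):

```sh
# Cache attributes for 10s instead of the default 1s:
rclone mount remote:path/to/files /path/to/local/mount --attr-timeout 10s
```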
--vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.

With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.

Chunked reading will only work with --vfs-cache-mode < full, as the file will always be copied to the vfs cache before opening with --vfs-cache-mode full.

Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.

The --buffer-size flag determines the amount of memory used to buffer data in advance. Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system. You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details. Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.

If run with -vv rclone will print the location of the file cache. The cache has 4 different modes selected by --vfs-cache-mode.

Note that files are written back to the remote only when they are closed, so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible.
This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible.

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried up to --low-level-retries times.

In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first. This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.

In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age. This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times.

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
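A sketch pulling together the VFS flags above (values are illustrative, not recommendations):

```sh
# Mount with growing chunked reads and cached writes:
rclone mount remote:path/to/files /path/to/local/mount \
    --vfs-read-chunk-size 100M \
    --vfs-read-chunk-size-limit 500M \
    --vfs-cache-mode writes \
    --vfs-case-insensitive
```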
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.

Important: Since this can cause data loss, test first with the --dry-run flag.

Note: Use the -P/--progress flag to view real-time transfer statistics.

Explore a remote with a text based user interface. This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.

Here are the keys - press '?' to toggle the help on and off.

This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands. Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.

Obscure password for use in the rclone config file. In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident. Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token. If you want to encrypt the config file then please use config file encryption - see rclone config for more info.

Run a command against a running rclone. Use the --url flag to specify a non-default URL to connect on.
This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".

A username and password can be passed in with --user and --pass. Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.

Arguments should be passed in as parameter=value. The result will be returned as a JSON object by default.

The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.

The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings. This will place the options in the "opt" value.

The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. This will place the strings in the "arg" value.

Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, eg:

Use "rclone rc" to see a list of all possible commands.
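For instance (the first command assumes a running rclone rc server; rc/noop simply echoes its parameters back):

```sh
# Pass simple values to a running rclone:
rclone rc rc/noop param1=one param2=two

# Test a command without a running server, via the loopback instance:
rclone rc --loopback operations/about fs=/
```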
If the remote file already exists, it will be overwritten. rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff. Note that the upload cannot be retried either, because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching it locally and then using rclone move to send it to the destination.

Remove empty directories under the path. This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it. If you supply the --leave-root flag, it will not remove the root directory. This is useful for tidying up remotes that rclone has left a lot of empty directories in.

rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.

Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. Use --name to choose the friendly server name, which is by default "rclone (hostname)". Use --log-trace in conjunction with -vv to enable additional debug logging of all UPNP traffic.
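A sketch of serving a remote over DLNA with the flags above (remote and name are illustrative):

```sh
# Listen on all IPs on port 8080, with a friendly name and UPNP tracing:
rclone serve dlna remote:media --addr :8080 --name my-media -vv --log-trace
```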
Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.

The --buffer-size flag determines the amount of memory used to buffer data in advance. Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

These flags control the VFS file caching options. The VFS layer is used to make a cloud storage system work more like a normal file system. You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details. Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.

If run with -vv rclone will print the location of the file cache. The cache has 4 different modes selected by --vfs-cache-mode.

Note that files are written back to the remote only when they are closed, so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible.

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible.

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried up to --low-level-retries times.

In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first. This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.

In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age. This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times.

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with an ftp client or you can make a remote of type ftp to read and write it.

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using authentication is advised - see the next section for info.

By default this will serve files without needing a login. You can set a single username and password with the --user and --pass flags.
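A minimal password-protected invocation might look like this (the remote path and credentials are placeholders):

    rclone serve ftp remote:path --addr :2121 --user myuser --pass mypassword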
Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.

The --buffer-size flag determines the amount of memory used to buffer data in advance. Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system. You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both. If run with -vv rclone will print the location of the file cache. The cache has 4 different modes selected by --vfs-cache-mode.

Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

In this mode (--vfs-cache-mode off) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible.

This mode (--vfs-cache-mode minimal) is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. Some operations are still not possible.

In this mode (--vfs-cache-mode writes) files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried up to --low-level-retries times.

In this mode (--vfs-cache-mode full) all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first. This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files. In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age. This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times.

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together. There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config generated must have this extra parameter - _root - root to use for the backend. And it may have this parameter - _obscure - comma separated strings for parameters to obscure.

If password authentication was used by the client, input to the proxy process (on STDIN) would be a JSON object containing the user and pass, as in the sketch below. This would mean that an SFTP backend would be created on the fly for the user and pass returned in the output to the host given. The program can manipulate the supplied user in any way, for example to make a proxy to many different sftp backends. Note that an internal cache is keyed on user so only use that for configuration, don't use pass. This can be used to build general purpose proxies to any kind of backend that rclone supports.
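To illustrate the exchange described above, here is a sketch (all values are placeholders): rclone sends the credentials on STDIN, and the proxy replies on STDOUT with a complete backend config:

    {"user": "me", "pass": "mypassword"}

    {"type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com"}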
Serve the remote over HTTP. rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it. You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports markup to be used within the template to serve pages.

By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags. Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
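To create an htpasswd file, one option is Apache's htpasswd utility (the user names here are placeholders):

    touch htpasswd
    htpasswd -B htpasswd user
    htpasswd -B htpasswd anotherUser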
The password file can be updated while rclone is running. Use --realm to set the authentication realm.

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
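A sketch of serving over https (the remote path and certificate file paths are placeholders):

    rclone serve http remote:path --cert cert.pem --key key.pem --addr :8443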
Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.

The --buffer-size flag determines the amount of memory used to buffer data in advance. Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system. You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both. If run with -vv rclone will print the location of the file cache. The cache has 4 different modes selected by --vfs-cache-mode.

Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

In this mode (--vfs-cache-mode off) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible.

This mode (--vfs-cache-mode minimal) is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. Some operations are still not possible.

In this mode (--vfs-cache-mode writes) files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried up to --low-level-retries times.

In this mode (--vfs-cache-mode full) all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first. This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files. In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age. This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times.

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

Serve the remote for restic's REST API. rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command line program for doing backups.

The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing.

First set up a remote for your chosen cloud provider. Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

Now start the rclone restic server with "rclone serve restic -v remote:backup", where you can replace "backup" by whatever path in the remote you wish to use. By default this will serve on "localhost:8080"; you can change this with use of the "--addr" flag. You might wish to start this server on boot.

Now you can follow the restic instructions on setting up restic. Note that you will need restic 0.8.2 or later to interoperate with rclone. For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.

The "--private-repos" flag can be used to limit users to repositories starting with a path of /<username>/.
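Putting the steps together, a sketch of the workflow ("backup" is the example path used above; restic's rest: URL scheme is assumed):

    rclone serve restic -v remote:backup
    export RESTIC_REPOSITORY=rest:http://localhost:8080/
    restic init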
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports markup to be used within the template to serve pages.

By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags. Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. To create an htpasswd file, see the example above. The password file can be updated while rclone is running. Use --realm to set the authentication realm.

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

Serve the remote over SFTP. rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it. You can use the filter flags (eg --include, --exclude) to control what is served.

The server will log errors. Use -v to see access logs. --bwlimit will be respected for file transfers. Use --stats to control the stats printing.
You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.

Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.

If you don't supply a --key then rclone will generate one and cache it for later use. By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example. Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.
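For example, a password-protected server reachable externally might be started like this (the remote path and credentials are placeholders):

    rclone serve sftp remote:path --addr :2022 --user myuser --pass mypassword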
Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.

The --buffer-size flag determines the amount of memory used to buffer data in advance. Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system. You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both. If run with -vv rclone will print the location of the file cache. The cache has 4 different modes selected by --vfs-cache-mode.

Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

In this mode (--vfs-cache-mode off) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible.

This mode (--vfs-cache-mode minimal) is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. Some operations are still not possible.

In this mode (--vfs-cache-mode writes) files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried up to --low-level-retries times.

In this mode (--vfs-cache-mode full) all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first. This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files. In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age. This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times.

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.
Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together. There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config generated must have this extra parameter - _root - root to use for the backend. And it may have this parameter - _obscure - comma separated strings for parameters to obscure.

If password authentication was used by the client, input to the proxy process (on STDIN) would be a JSON object containing the user and pass, as in the example shown earlier. This would mean that an SFTP backend would be created on the fly for the user and pass returned in the output to the host given. The program can manipulate the supplied user in any way, for example to make a proxy to many different sftp backends. Note that an internal cache is keyed on user so only use that for configuration, don't use pass. This can be used to build general purpose proxies to any kind of backend that rclone supports.

rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.

The --etag-hash flag controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object. If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1". Use "rclone hashsum" to see the full list.
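For example, serving a remote over webdav with MD5-based ETags (the remote path is a placeholder; the --etag-hash flag name is as reconstructed above):

    rclone serve webdav remote:path --addr :8080 --etag-hash MD5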
Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using authentication is advised - see the next section for info.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports markup to be used within the template to serve pages.

By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags. Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended. To create an htpasswd file, see the example above. The password file can be updated while rclone is running. Use --realm to set the authentication realm.

By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are.

The --buffer-size flag determines the amount of memory used to buffer data in advance. Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system. You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.

Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both. If run with -vv rclone will print the location of the file cache. The cache has 4 different modes selected by --vfs-cache-mode.

Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache. If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.

In this mode (--vfs-cache-mode off) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible.

This mode (--vfs-cache-mode minimal) is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. Some operations are still not possible.

In this mode (--vfs-cache-mode writes) files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried up to --low-level-retries times.

In this mode (--vfs-cache-mode full) all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first. This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files. In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age. This mode should support all normal file system operations.
If an upload or download fails it will be retried up to --low-level-retries times.

Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.

Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying mounted file system.

Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together. There is an example program bin/test_proxy.py in the rclone source code.

The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config generated must have this extra parameter - _root - root to use for the backend. And it may have this parameter - _obscure - comma separated strings for parameters to obscure.

If password authentication was used by the client, input to the proxy process (on STDIN) would be a JSON object containing the user and pass, as in the example shown earlier. This would mean that an SFTP backend would be created on the fly for the user and pass returned in the output to the host given. The program can manipulate the supplied user in any way, for example to make a proxy to many different sftp backends. Note that an internal cache is keyed on user so only use that for configuration, don't use pass. This can be used to build general purpose proxies to any kind of backend that rclone supports.

Create new file or change file modification time.
Set the modification time on object(s) as specified by remote:path to have the current time. If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided. If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of: 'YYMMDD' - eg 17.10.30, or 'YYYY-MM-DDTHH:MM:SS' - eg 2006-01-02T15:04:05. Note that --timestamp is in UTC. If you want local time then add the --localtime flag.

You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list. The tree command has many options for controlling the listing which are compatible with the unix tree command. Note that not all of them have short options as they conflict with rclone's short options.

rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give an error if you try to use a file as a destination. For example, suppose you have a remote with a file in it called test.jpg. You could copy just that file with rclone copy remote:test.jpg /tmp/download, and the file test.jpg will be copied to /tmp/download.

This refers to the local file system. On Windows only \ may be used instead of / in local paths. These paths needn't start with a leading / - if they don't then they will be treated as relative to the current directory.

This refers to a directory on the remote as defined in the config file. On most backends remote:/path/to/dir refers to the same directory as remote:path/to/dir.

This is an advanced form for creating remotes on the fly.

The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell. If your names have spaces in you need to put them in quotes. If you are using the root directory on its own then don't quote it (see #464 for why).

Most remotes (but not all - see the overview) support server side copy. This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place. Eg rclone copy s3:oldbucket s3:newbucket will copy the contents of oldbucket to newbucket without downloading and re-uploading. Remotes which don't support server side copy will download and re-upload in this case. Server side copies are used with sync and copy. Server side copies will only be attempted if the remote names are the same.

This can be used when scripting to make aged backups efficiently.
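For example, a sketch along these lines (the remote and paths are placeholders):

    rclone sync remote:current-backup remote:previous-backup
    rclone sync /path/to/files remote:current-backup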
Rclone has a number of options to control its behaviour. Options that take parameters can have the values passed in two ways, --option=value or --option value.

Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".

Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes or P for PBytes may be used.

When using sync, copy or move, any files which would have been overwritten or deleted are moved in their original hierarchy into the backup directory. The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory. For example rclone sync /path/to/local remote:current --backup-dir remote:old will sync /path/to/local to remote:current, but store any files which would have been updated or deleted in remote:old.

If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files.
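A sketch of such a script invocation (the paths are placeholders; date -I prints eg 2020-08-07 and depends on your shell and date utility):

    rclone sync /path/to/files remote:current --backup-dir remote:old-$(date -I)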
Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.

This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable. Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0 which means no limit. For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M.

It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as HH:MM,BANDWIDTH pairs, optionally prefixed with a weekday. An example of a typical timetable to avoid link saturation during daytime working hours could be: --bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off". In this example, the transfer bandwidth will be every day set to 512kBytes/sec at 8am. At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.

Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc. Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M parameter for rclone.

On Unix systems (Linux, macOS, ...) the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. If you configure rclone with a remote control then you can change the bwlimit dynamically with rclone rc core/bwlimit rate=1M.

Use this sized buffer to speed up file transfers. Each transfer will use this much memory for buffering. When using mount each open file descriptor will use this much memory for buffering. Set to 0 to disable buffering for the benefit of low memory systems. Note that the memory allocation of the buffers is influenced by the --use-mmap flag.

If this flag is set then in a sync, copy or move, rclone will do all the checks before starting any of the transfers. This flag can be useful on IO limited systems where transfers interfere with checking. Using this flag can use more memory as it effectively sets --max-backlog to infinite.

The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel. The default is to run 8 checkers in parallel.

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal. This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size. This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section. When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

When using this flag the given directory is checked in addition to the destination for files. You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.

Specify the location of the rclone config file. Normally the config file is in your home directory as a file called .config/rclone/rclone.conf. If you run rclone config file rclone will tell you where the config file is in use. Use this flag to override the config location.

Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds or 10m for 10 minutes. The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m by default.

When using this flag the given directory is checked in addition to the destination for files, and matching files are server side copied rather than uploaded. The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.

Mode to run dedupe command in.

This disables a comma separated list of optional features. For example to disable server side move and server side copy use --disable move,copy. The features can be put in any case. See the overview features and optional features to get an idea of which feature does what. This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).

Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync command which deletes files in the destination.

This specifies the amount of time to wait for a server's first response headers after fully writing the request headers if the request has an "Expect: 100-continue" header. Not all backends support using this.
By default, rclone will exit with return code 0 if there were no errors. This option allows rclone to return exit code 9 if no files were transferred between the source and destination. This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not.

NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly!

Add an HTTP header for all transactions. The flag can be repeated to add multiple headers. If you want to add headers only for uploads use

This flag is supported for all HTTP based backends even those not supported by

Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers. See the GitHub issue here for currently supported backends.

Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers. See the GitHub issue here for currently supported backends.

Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.

Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't. You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.

Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files. While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If

It will also cause rclone to skip verifying the sizes are the same after transfer. This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 for more info).

Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination. Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using

Treat source and destination files as immutable and disallow modification. With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error
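A minimal sketch of the immutable mode just described, with placeholder paths:

    # New files are uploaded, but a changed source file produces an error
    # instead of updating the destination copy
    rclone copy --immutable /backups remote:archive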
Note that only commands which transfer files (e.g.

This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.

During rmdirs it will not remove the root directory, even if it's empty.

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the

Note that if you are using the

Comma separated list of log format options.

This sets the log level for rclone. The default log level is

This switches the log format to JSON for rclone. The fields of the json log are level, msg, source, time.

This controls the number of low level retries rclone does. A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the

This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the

Disable low level retries with

This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred. This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use on the order of N kB of memory when the backlog is in use.

Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make

Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.

Setting this to a negative number will make the backlog as large as possible.

This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.

This modifies the recursion depth for all the commands except purge. So if you do

For historical reasons the

You can use this command to disable recursion (with

Note that if you use this with

Rclone will stop scheduling new transfers when it has run for the duration specified. Defaults to off. When the limit is reached any existing transfers will complete. Rclone won't exit with an error if the transfer limit is reached.

Rclone will stop transferring when it has reached the size specified. Defaults to off. When the limit is reached all transfers will stop immediately. Rclone will exit with exit code 8 if the transfer limit is reached.

This modifies the behavior of

Specifying

Specifying

Specifying

When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent. The default is

This command line flag allows you to override that computed default.
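A sketch of overriding the window, assuming a destination filesystem with coarse timestamps (paths are placeholders):

    # Treat files whose modification times differ by up to 2s as unchanged
    rclone sync /source remote:dest --modify-window 2s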
When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).

Rclone preallocates the file (using

The number of threads used to download is controlled by

Use

This will work with the

NB that this only works for a local destination but will work with any source.

NB that multi thread copies are disabled for local to local copies as they are faster without unless

NB on Windows using multi-thread downloads will cause the resulting files to be sparse. Use

When using multi thread downloads (see above

Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the

So if

The

This means that:

This flag is useful to minimise the transactions if you know that none of the files are on the destination.

This is a specialized flag which should be ignored by most users!

Don't set

There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.

The

If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then

However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use

See rclone copy for an example of how to use it.

Don't normalize unicode characters in filenames during the sync routine.

Sometimes, an operating system will store filenames containing unicode parts in their decomposed form (particularly macOS). Some cloud storage systems will then recompose the unicode, resulting in duplicate files if the data is ever copied back to a local filesystem. Using this flag will disable that functionality, treating each unicode character as unique. For example, by default é and é will be normalized into the same character. With

When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally. This can be used if the remote is being synced with another tool also (eg the Google Drive client).

The

The order by string is constructed like this. The first part describes what aspect is being measured:

The

Rclone will do its best to transfer the best file it has so in practice this should not cause a problem. Think of

This flag supplies a program which should supply the config password when run. This is an alternative to rclone prompting for the password or setting the

The argument to this should be a command with a space separated list of arguments. If one of the arguments has a space in it then enclose it in

Eg

See the Configuration Encryption for more info. See a Windows PowerShell example on the Wiki.

This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer. Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.
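A sketch of enabling the realtime stats block described above (paths are placeholders):

    # Show a live transfer overview; log messages scroll above the stats block
    rclone copy /source remote:dest --progress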
Normally this is updated every 500mS but this period can be overridden with the

This can be used with the

Note: On Windows until this bug is fixed all non-ASCII characters will be replaced with

This flag will limit rclone's output to error messages only.

Retry the entire sync if it fails this many times (default 3). Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.

Disable retries with

This sets the interval between each retry specified by

The default is

Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.

This can be useful transferring files from Dropbox which have been modified by the desktop sync client which doesn't set checksums or modification times in the same way as rclone.

Commands which transfer data (

This sets the interval. The default is

If you set the stats interval then all commands can show stats. This can be useful when running other commands,

Stats are logged at

Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.

By default, the

Log level to show

When this is specified, rclone condenses the stats into a single line showing the most important stats only.

When this is specified, rclone enables the single-line stats and prepends the display with a date string. The default is

When this is specified, rclone enables the single-line stats and prepends the display with a user-supplied date string. The date string MUST be enclosed in quotes. Follow golang specs for date formatting syntax.

By default, data transfer rates will be printed in bytes/second. This option allows the data rate to be printed in bits/second. Data transfer volume will still be reported in bytes. The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.

The default is

When using

The remote in use must support server side move or copy and you must use the same remote as the destination of the sync.

This is for use with files to add the suffix in the current directory or with

For example will sync

When using

So let's say we had

On capable OSes (not Windows or Plan9) send all log output to syslog. This can be useful for running rclone in a script or

If using

Limit HTTP transactions per second to this. Default is 0 which is used to mean unlimited transactions per second. For example to limit rclone to 10 HTTP transactions per second use

Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited). This can be very useful for

See also

Max burst of transactions for

Normally

For example if you provide

This may be used to increase performance of
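Completing the transactions-per-second example above, a sketch with placeholder paths:

    # Cap rclone at 10 HTTP transactions per second to avoid rate limiting
    rclone copy /source remote:dest --tpslimit 10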
By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.

If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during

Files will be matched by size and hash - if both match then a rename will be considered.

If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.

Note: Encrypted destinations are not supported by

Note that

Note also that

This option changes the matching criteria for

This option allows you to specify when files on your destination are deleted when you sync folders.

Specifying the value

Specifying

Specifying

When doing anything which involves a directory listing (eg

However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).

If you use the

rclone should always give identical results with and without

If you pay for transactions and can fit your entire sync listing into memory then

If you use

This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected. The default is

The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote. The default is to run 4 file transfers in parallel.

This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file. This can be useful when transferring to a remote which doesn't support mod times directly (or when using

If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different. If

If an existing destination file is older than the source file then it will be updated if the size or checksum differs from the source file.

On remotes which don't support mod time directly (or when using

If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by

If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.

It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.

Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.

Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync using

Using this flag on a sync operation without also using

With

With
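A sketch of the combination described above, with placeholder paths:

    # Compare against the server's upload time instead of fetching the stored
    # modtime metadata, skipping files whose destination copy is newer
    rclone sync --update --use-server-modtime /source s3:bucket/path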
Prints the version number

The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.

This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to. If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.

This loads the PEM encoded client side certificate. This is used for mutual TLS authentication. The

This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with

This option defaults to

This should be used only for testing.

Your configuration file contains information for logging in to your cloud services. This means that you should keep your

If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to supply the password every time you start rclone.

To add a password to your rclone configuration, execute

One useful example of this is using the

If the

If you are running rclone inside a script, unless you are using the

These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name eg

Write CPU profile to file. This can be analysed with

The

Note that some headers including

The available flags are:

Dump HTTP headers with

Use

Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only. Note that the bodies are buffered in memory so don't use this for enormous files.

Like

Like

Dump HTTP headers - will contain sensitive info such as

Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.

This dumps a list of the running go-routines at the end of the command to standard output.

This dumps a list of the open files at the end of the command. It uses the

Write memory profile to file. This can be analysed with

For the filtering options

The filters are applied for the

Each path as it passes through rclone is matched against the include and exclude rules like
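A sketch of such rules on the command line; the patterns and paths are placeholders:

    # Copy only jpg and png files
    rclone copy /source remote:dest --include "*.{jpg,png}"

    # Sync everything except a directory tree
    rclone sync /source remote:dest --exclude "Trash/**"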
The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.

If the pattern starts with a

With

Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so

Rclone keeps track of directories that could match any file patterns. Eg if you add the include rule

Rclone will synthesize the directory include rule

If you put any rules which end in

Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.

Rclone implements bash style

Rclone always does a wildcard match so

Rclone maintains a combined list of include rules and exclude rules.

Add a single include rule with

This flag can be repeated. See above for the order the flags are processed in. Eg

This adds an implicit

Add include rules from a file. This flag can be repeated. See above for the order the flags are processed in. Then use as

This is useful if you have a lot of rules.

This adds an implicit

This can be used to add a single include or exclude rule. Include rules start with

This flag can be repeated. See above for the order the flags are processed in.

Rclone will traverse the file system if you use

If you use

This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.

Paths within the

For example, suppose you had

This will transfer these files only (if they exist)

To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:

To copy these you'd find a common subdirectory - in this case

This option is the same as

This option controls the minimum size file which will be transferred. This defaults to

For example

This option controls the maximum size file which will be transferred. This defaults to

For example

This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:

For example

This can also be an absolute time in one of these formats

This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see

For example

You can exclude

Currently only one filename is supported, i.e.

Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change. Run this command in a terminal and rclone will download and then display the GUI in a web browser.

When you run the

If rclone is run with the

If you just want to run a remote control then see the rcd command.
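A sketch of launching the GUI described above:

    # Download (if needed) and serve the web GUI, then open it in a browser
    rclone rcd --rc-web-gui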
Flag to start the HTTP server listening on remote requests

IPaddress:Port or :Port to bind server to. (default "localhost:5572")

SSL PEM key (concatenation of certificate and CA certificate)

Client certificate authority to verify clients with

htpasswd file - if not provided no authentication is done

SSL PEM Private key

Maximum size of request header (default 4096)

User name for authentication.

Password for authentication.

Realm for authentication (default "rclone")

Timeout for server reading data (default 1h0m0s)

Timeout for server writing data (default 1h0m0s)

Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object

Default Off.

Path to local files to serve on the HTTP server. If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.

If

Default Off.

Enable OpenMetrics/Prometheus compatible endpoint at

Default Off.

Set this flag to serve the default web gui on the same port as rclone.

Default Off.

Set the allowed Access-Control-Allow-Origin for rc requests. Can be used with --rc-web-gui if rclone is running on a different IP than the web-gui.

Default is IP address on which rc is running.

Set the URL to fetch the rclone-web-gui files from.

Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.

Set this flag to check and update rclone-webui-react from the rc-web-fetch-url.

Default Off.

Set this flag to force update rclone-webui-react from the rc-web-fetch-url.

Default Off.

Set this flag to disable opening the browser automatically when using web-gui.

Default Off.

Expire finished async jobs older than DURATION (default 60s).

Interval duration to check for expired async jobs (default 10s).

By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg

If this is set then no authorisation will be required on the server to use these methods. The alternative is to use

Default Off.

This takes the following parameters

Note that this is the direct equivalent of using this "backend" command:

Note that arguments must be preceded by the "-a" flag

See the backend command for more information.

Authentication is required for this call.

Ensure the specified file chunks are cached on disk. The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is the 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.

Some valid examples are:
":5,-5:" -> the first and last five chunks
"0,-2" -> the first and the second last chunk
"0:10" -> the first ten chunks

Any parameter with a key that starts with "file" can be used to specify files to fetch, eg

File names will automatically be encrypted when a crypt remote is used on top of the cache.

This takes the following parameters

This takes the following parameters

See the config password command for more information on the above.

Authentication is required for this call.

This takes the following parameters

The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.

In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number.

This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.

This returns a list of stats groups currently in memory.

Returns the following values:

Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.

This deletes the entire stats group

Parameters

This shows the current version of go and the go runtime

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2

This takes the following parameters

The mount types are strings like "mount", "mount2", "cmount" and can be passed to mount/mount as the mountType parameter. Eg

Authentication is required for this call.

rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

This takes the following parameters

This takes the following parameters

The result is as returned from rclone about --json

See the about command for more information on the above.

Authentication is required for this call.
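A sketch of calling the mount API described above through the rc interface; the remote name and mount point are placeholders:

    # Mount a remote via the remote control API
    rclone rc mount/mount fs=remote: mountPoint=/mnt/remote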
This takes the following parameters

See the cleanup command for more information on the above.

Authentication is required for this call.

This takes the following parameters

Authentication is required for this call.

This takes the following parameters

This takes the following parameters

See the delete command for more information on the above.

Authentication is required for this call.

This takes the following parameters

See the deletefile command for more information on the above.

Authentication is required for this call.

This takes the following parameters

This returns info about the remote passed in;

This takes the following parameters

This takes the following parameters

See the mkdir command for more information on the above.

Authentication is required for this call.

This takes the following parameters

Authentication is required for this call.

This takes the following parameters

Returns

This takes the following parameters

See the purge command for more information on the above.

Authentication is required for this call.

This takes the following parameters

See the rmdir command for more information on the above.

Authentication is required for this call.

This takes the following parameters

See the rmdirs command for more information on the above.

This takes the following parameters

Returns

This takes the following parameters

See the copy command for more information on the above.

Authentication is required for this call.

This takes the following parameters

See the move command for more information on the above.

This takes the following parameters

See the sync command for more information on the above.

Authentication is required for this call.

Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, eg

If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.

Rclone implements a simple HTTP based protocol. Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.

All calls must be made using POST.

The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using

The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.

If an error occurs then there will be an HTTP error status (eg 500) and the body of the response will contain a JSON encoded error object, eg

The keys in the error response are:
- error - error string
- input - the input parameters to the call
- status - the HTTP status code
- path - the path of the call

The server implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.
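A sketch using curl against the built-in rc/noop test endpoint on the default port, showing both parameter styles:

    # Input as URL parameters
    curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'

    # Input as a JSON body
    curl -X POST -H 'Content-Type: application/json' \
        -d '{"potato": 1, "sausage": 2}' http://localhost:5572/rc/noop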
Response

Note that curl doesn't return errors to the shell unless you use the

If you use the

To use these, first install go.

To profile rclone's memory use you can run:

This should open a page in your browser showing what is using what memory.

You can also use the

See the net/http/pprof docs for more info on how to use the profiling and for a general overview see the Go team's blog post on profiling go programs.

The profiling hook is zero overhead unless it is used.

Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.

The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the

To verify checksums when transferring between cloud storage systems they must support a common hash type.

† Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.

‡ SFTP supports checksums if the same login has shell access and

†† WebDAV supports hashes when used with Owncloud and Nextcloud only.

††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.

‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.

‡‡‡ Mail.ru uses its own modified SHA1 hash

The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the

All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.

If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg

This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.

The local filesystem and SFTP may or may not be case sensitive depending on OS. Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
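For the profiling hooks mentioned above, a sketch assuming rclone is running with the rc server on the default port:

    # Inspect live heap usage of a running rclone
    go tool pprof -web http://localhost:5572/debug/pprof/heap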
When a replacement character is found in a filename, this character will be escaped with the

Each cloud storage backend can use a different set of characters, which will be specified in the documentation for each backend.

In this case all invalid UTF-8 bytes will be replaced with a quoted representation of the byte value to allow uploading a file to such a backend. For example, the invalid byte

A common source of invalid UTF-8 bytes are local filesystems that store names in a different encoding than UTF-8 or UTF-16, like latin1. See the local filenames section for details.

Most backends have an encoding option, specified as a flag

This will have a default value which encodes and decodes characters in such a way as to preserve the maximum number of characters (see above).

However this can be incorrect in some scenarios, for example if you have a Windows file system with characters such as

The

To take a specific example, the FTP backend's default encoding is

However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names. So you would add the Windows set which are

to the existing ones, giving:

This can be specified using the

Or let's say you have a Windows server but you want to preserve

This can be specified using the

This deletes a directory quicker than just deleting all the files in the directory.

† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.

‡ StreamUpload is not supported with Nextcloud

Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use

If the server doesn't support

Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in

If the server isn't capable of

This is used to implement

This is used to implement

This is used for emptying the trash for a remote by

If the server can't do

The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the
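Returning to the encoding override described above, a hedged sketch: the --ftp-encoding flag is the documented mechanism, but the token list here is illustrative rather than the exact documented set (paths are placeholders):

    # Override the FTP backend's filename encoding (token list illustrative)
    rclone copy /source ftp:backup \
        --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot,BackSlash,Colon,Question,Asterisk,Pipe,DoubleQuote"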
Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g.

Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.

This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash. This is also used to return the space used, available for

If the server can't do

The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.

These flags are available for every command. They control the backends and may be set in the config file.

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Here are the standard options specific to fichier (1Fichier).

Your API Key, get it from https://1fichier.com/console/params.pl

Here are the advanced options specific to fichier (1Fichier).

If you want to download a shared folder, add this parameter

This sets the encoding for the backend. See: the encoding section in the overview for more info.

Here are the standard options specific to alias (Alias for an existing remote).

Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".

Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.

Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.

For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.

If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!

The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser.

The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.
Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own

Note also if you are not using Amazon's

Here is an example of how to make a remote called

This will guide you through an interactive setup process:

To copy a local directory to an Amazon Drive directory called backup

Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing. It does store MD5SUMs so for a more accurate sync, you can use the

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.

Let's say you usually use

Here are the standard options specific to amazon cloud drive (Amazon Drive).

Amazon Application Client ID.

Amazon Application Client Secret.

Here are the advanced options specific to amazon cloud drive (Amazon Drive).

Auth server URL. Leave blank to use Amazon's.

Token server url. Leave blank to use Amazon's.

Checkpoint for internal polling (debug).

Additional time per GB to wait after a failed complete upload to see if it appears.

Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.

The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.

You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.

These values were determined empirically by observing lots of uploads of big files for a range of file sizes.

Upload with the "-v" flag to see more info about what rclone is doing in this situation.

Files >= this size will be downloaded via their tempLink.

Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB.
The default for this is 9GB which shouldn't need to be changed. To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.

This sets the encoding for the backend. See: the encoding section in the overview for more info.

Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see

Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail. At the time of writing (Jan 2016) it is in the area of 50GB per file. This means that larger files are likely to fail.

This remote supports

As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.

For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using

The modified time is stored as metadata on the object as

If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification if the object can be copied in a single part. In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archive storage the object will be uploaded rather than copied.

S3 allows any valid UTF-8 string as a key. Invalid UTF-8 bytes will be replaced, as they can't be used in XML.

The following characters are replaced since these are problematic when dealing with the REST API:

The encoding will also encode these file names as they don't seem to work with the SDK properly:

Notes on above:

For reference, here's an Ansible script that will generate one or more buckets that will work with

If you are using server side encryption with KMS then you will find you can't transfer small objects. As a work-around you can use the

A proper fix is being worked on in issue #1824.

You can upload objects using the glacier storage class or transition them to glacier using a lifecycle policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.

Note that rclone only speaks the S3 API, it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults.

Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).

Choose your S3 provider.
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.

AWS Access Key ID. Leave blank for anonymous access or runtime credentials.

AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials.

Region to connect to.

Region to connect to. Leave blank if you are using an S3 clone and you don't have a region.

Endpoint for S3 API. Leave blank if using AWS to use the default endpoint for the region.

Endpoint for IBM COS S3 API. Specify if using an IBM COS On Premise.

Endpoint for OSS API.

Endpoint for StackPath Object Storage.

Endpoint for S3 API. Required when using an S3 clone.

Location constraint - must be set to match the Region. Used when creating buckets only.

Location constraint - must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter.

Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only.

Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3 doesn't copy the ACL from the source but rather writes a fresh one.

The server-side encryption algorithm used when storing this object in S3.

If using KMS ID you must provide the ARN of Key.

The storage class to use when storing new objects in S3.

The storage class to use when storing new objects in OSS.

Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)).

Canned ACL used when creating buckets.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied only when creating buckets. If it isn't set then "acl" is used instead.

If using SSE-C, the server-side encryption algorithm used when storing this object in S3.

If using SSE-C you must provide the secret encryption key used to encrypt/decrypt your data.

If using SSE-C you must provide the secret encryption key MD5 checksum.

Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.

Chunk size to use for uploading.

When uploading files larger than upload_cutoff or files with unknown size (eg from "rclone rcat" or uploaded with "rclone mount" or google photos or google docs) they will be uploaded as multipart uploads using this chunk size.
Note that "--s3-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers. Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit. Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5MB and there can be at most 10,000 chunks, this means that by default the maximum size of file you can stream upload is 48GB. If you wish to stream upload larger files then you will need to increase chunk_size. Cutoff for switching to multipart copy Any files larger than this that need to be server side copied will be copied in chunks of this size. The minimum is 0 and the maximum is 5GB. Don’t store MD5 checksum with object metadata Don't store MD5 checksum with object metadata Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. An AWS session token Concurrency for multipart uploads. This is the number of chunks of the same file that are uploaded concurrently. If you are uploading small numbers of large file over high speed link and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers. If true use path style access if false use virtual hosted style. If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info. Some providers (eg AWS, Aliyun OSS or Netease COS) require this set to false - rclone will do this automatically based on the provider setting. If true use v2 authentication. If this is false (the default) then rclone will use v4 authentication. If it is set then rclone will use v2 authentication. Use this only if v4 signatures don’t work, eg pre Jewel/v10 CEPH. Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH. If true use the AWS S3 accelerated endpoint. See: AWS S3 Transfer acceleration If true avoid calling abort upload on a failure, leaving all successfully uploaded parts on S3 for manual recovery. It should be set to true for resuming uploads across different sessions. WARNING: Storing parts of an incomplete multipart upload counts towards space usage on S3 and will add additional costs if not cleaned up. Size of listing chunk (response list for each ListObject S3 request). This option is also known as “MaxKeys”, “max-items”, or “page-size” from the AWS S3 specification. Most services truncate the response list to 1000 objects even if requested more than that. In AWS S3 this is a global maximum and cannot be changed, see AWS S3. In Ceph, this can be increased with the “rgw list buckets max chunk” option. This option is also known as "MaxKeys", "max-items", or "page-size" from the AWS S3 specification. Most services truncate the response list to 1000 objects even if requested more than that. In AWS S3 this is a global maximum and cannot be changed, see AWS S3. In Ceph, this can be increased with the "rgw list buckets max chunk" option. This sets the encoding for the backend. See: the encoding section in the overview for more info. How often internal memory buffer pools will be flushed. Uploads which requires additional buffers (f.e multipart) will use memory pool for allocations. 
How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool. Whether to use mmap buffers in internal memory pool. Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean. To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config. When prompted for a region or location_constraint, press enter to use the default value. Going through the whole process of creating a new remote by running rclone config For Netease NOS configure as per the configurator rclone config, setting the provider Netease. B2 is Backblaze's cloud storage system. Paths are specified as Here is an example of making a b2 configuration. First run rclone config B2 supports multiple Application Keys for different access permission to B2 Buckets. You can use these with rclone too; you will need to use rclone version 1.43 or later. Follow Backblaze's docs to create an Application Key with the required permission and add the Note that you must put the applicationKeyId as the This remote supports The modified time is stored as metadata on the object as Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process. Large files (bigger than the limit in For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1. Sources which don't support SHA1, in particular File sizes below Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most When rclone uploads a new version of a file, B2 creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the Old versions of files, where available, are visible using the NB Note that If you wish to remove all the old versions then you can use the Note that When you Clean up all the old versions and show that they've gone. Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them. Note that when using Rclone supports generating file share links for private B2 buckets.
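A minimal sketch of generating such links with the rclone link command (bucket and paths are hypothetical):

    # Link to a single file
    rclone link B2:bucket/path/to/file.txt
    # Link to a directory
    rclone link B2:bucket/path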
They can either be for a file, as in the first sketch above, or for a whole directory. Here are the standard options specific to b2 (Backblaze B2). Account ID or Application Key ID Application Key Permanently delete files on remote removal, otherwise hide files. Here are the advanced options specific to b2 (Backblaze B2). Endpoint for the service. Leave blank normally. A flag string for X-Bz-Test-Mode header for debugging. This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors: These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist. Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them. Cutoff for switching to chunked upload. Files above this size will be uploaded in chunks of "--b2-chunk-size". This value should be set no larger than 4.657GiB (== 5GB). Upload chunk size. Must fit in memory. When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size. Disable checksums for large (> upload cutoff) files. Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. Custom endpoint for downloads. This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze. Time before the authorization token will expire in s or suffix ms|s|m|h|d. The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. This sets the encoding for the backend. See: the encoding section in the overview for more info. To copy a local directory to a Box directory called backup, see the sketch at the end of this section. If you have an "Enterprise" account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, "Account" Tab, and then set the password in the "Authentication" field. Once you have done this, you can set up your Enterprise Box account using the same procedure detailed above, using the password you have just set. According to the box docs: This means that if you Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
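Returning to the backup copy mentioned earlier in this section, the invocation would look like this sketch (the remote name box: is an assumption):

    rclone copy /home/source box:backup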
For files above 50MB rclone will use a chunked transfer. Rclone will upload up to So if the folder you want rclone to use has a URL which looks like Here are the standard options specific to box (Box). Box App Client Id. Leave blank normally. Box App Client Secret Leave blank normally. Box App config.json location Leave blank normally. Here are the advanced options specific to box (Box). Fill in for rclone to use a non root folder as its starting point. Cutoff for switching to multipart upload (>= 50MB). Max number of times to try committing a multipart file. This sets the encoding for the backend. See: the encoding section in the overview for more info. Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Box file names can't have the Box only supports filenames up to 255 characters in length. The cache remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount. The cache backend code is working but it currently doesn't have a maintainer so there are outstanding bugs which aren't getting fixed. The cache backend is due to be phased out in favour of the VFS caching layer eventually which is more tightly integrated into rclone. Until this happens we recommend only using the cache backend if you find you can't work without it. There are many docs online describing the use of the cache backend to minimize API hits and by-and-large these are out of date and the cache backend isn't needed in those scenarios any more. To get started you just need to have an existing remote which can be configured with Here is an example of how to make a remote called Files are uploaded in sequence and only one file is uploaded at a time. Uploads will be stored in a queue and be processed based on the order they were added. The queue and the temporary storage are persistent across restarts but can be cleared on startup with the When the Plex server is configured to only accept secure connections, it is possible to use The format for these URLs is the following: https://ip-with-dots-replaced.server-hash.plex.direct:32400/ To get the server hash, you can query https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token This page will list all the available Plex servers for your account with at least one .plex.direct link for each. --dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set There are a couple of issues with Windows Future iterations of the cache backend will make use of the pooling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures.
There are a couple of enhancements in the works to add these but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts. Some recommendations: - don't use a very small interval for entry information ( Future enhancements: One common scenario is to keep your data encrypted in the cloud provider using the There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt This behavior is irrelevant for most backend types, but there are backends where a leading Cache supports the new --rc mode in rclone and can be remote controlled through the following end points. Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt. Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional, false by default) Here are the standard options specific to cache (Cache a remote). Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). The URL of the Plex server The username of the Plex user The password of the Plex user NB Input to this must be obscured - see rclone obscure. The size of a chunk (partial file data). Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur. How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time. The total size that the chunks can take up on the local disk. If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value. Here are the advanced options specific to cache (Cache a remote). The plex token for authentication - auto set normally Skip all certificate verification when connecting to the Plex server Directory to store file structure metadata DB. The remote name is used as the DB file name. Directory to cache chunk files. Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path. This config follows the "--cache-db-path".
If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path". Clear all the cached data for this remote on start. How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over “cache-chunk-total-size” too often then try to lower this value to force it to perform cleanups more often. How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often. How many times to retry a read from a cache storage. Since reading from a cache stream is independent from downloading file data, readers can get to a point where there’s no more data in the cache. Most of the times this can indicate a connectivity issue if cache isn’t able to provide file data anymore. Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the times this can indicate a connectivity issue if cache isn't able to provide file data anymore. For really slow connections, increase this to a point where the stream is able to provide data but your experience will be very stuttering. How many workers should run in parallel to download chunks. Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits, more stress on the hardware that rclone runs on but it also means that streams will be more fluid and data will be available much more faster to readers. Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use. Disable the in-memory cache for storing chunks during streaming. By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible. This transient data is evicted as soon as it is read and the number of chunks stored doesn’t exceed the number of workers. However, depending on other settings like “cache-chunk-size” and “cache-workers” this footprint can increase if there are parallel streams too (multiple files being read at the same time). This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time). If the hardware permits it, use this feature to provide an overall better performance during streaming but it can also be disabled if RAM is not available on the local machine. Limits the number of requests per second to the source FS (-1 to disable) This setting places a hard limit on the number of requests per second that cache will be doing to the cloud provider remote and try to respect that value by setting waits between reads. If you find that you’re getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that. 
If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that. A good balance of all the other settings should make this setting useless but it is available to set for more special cases. NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass. Cache file data on writes through the FS. If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload. Directory to keep temporary files until they are uploaded. This is the path that cache will use as a temporary storage for new files that need to be uploaded to the cloud provider. Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider. How long should files be stored in local cache before being uploaded. This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload. Note that only one file is uploaded at a time and it can take longer to start the upload if a queue has formed for this purpose. How long to wait for the DB to be available - 0 is unlimited. Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error. If you set it to 0 then it will wait forever. Run them with The help below will explain what arguments each command takes. See the "rclone backend" command for more info on how to pass options and arguments. These can be run on a running backend using the rc command backend/command. Print stats on the cache backend in JSON format. The chunker overlay transparently splits large files into smaller chunks during upload to a wrapped remote and transparently assembles them back when the file is downloaded. To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote. First check your chosen remote is working - we'll call it Now configure In normal use, make sure the remote has a When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut data in pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one which may have less data. If file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process. When upload completes, temporary chunk files are finally renamed.
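To illustrate the scheme just described: assuming the default chunk name format (*.rclone_chunk.###), the default simplejson metadata and a 100M chunk size, a 250M file might be stored on the wrapped remote roughly as follows (all names illustrative):

    big_file.avi.rclone_chunk.001    # 100M data chunk
    big_file.avi.rclone_chunk.002    # 100M data chunk
    big_file.avi.rclone_chunk.003    # remaining 50M
    big_file.avi                     # small metadata object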
This scheme guarantees that operations can be run in parallel and look from outside as atomic. A similar method with hidden temporary chunks is used for other operations (copy/move/rename etc). If an operation fails, hidden chunks are normally destroyed, and the target composite file stays intact. When a composite file download is requested, chunker transparently assembles it by concatenating data chunks in order. As the split is trivial one could even manually concatenate data chunks together to obtain the original content. When the There is no field for composite file name as it's simply equal to the name of meta object on the wrapped remote. Please refer to respective sections for details on hashsums and modified time handling. You can disable meta objects by setting the meta format option to Chunker supports hashsums only when a compatible metadata is present. Hence, if you choose metadata format of Please note that by default metadata is stored only for composite files. If a file is smaller than configured chunk size, chunker will transparently redirect hash requests to wrapped remote, so support depends on that. You will see the empty string as a hashsum of requested type for small files if the wrapped remote doesn't support it. Many storage backends support MD5 and SHA1 hash types, so does chunker. With chunker you can choose one or another but not both. MD5 is set by default as the most supported type. Since chunker keeps hashes for composite files and falls back to the wrapped remote hash for non-chunked ones, we advise you to choose the same hash type as supported by wrapped remote so that your file listings look coherent. If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with Normally, when a file is copied to a chunker controlled remote, chunker will ask the file source for a compatible file hash and revert to on-the-fly calculation if none is found. This involves some CPU overhead but provides a guarantee that the given hashsum is available. Also, chunker will reject a server-side copy or move operation if source and destination hashsum types are different, resulting in extra network bandwidth use. In some rare cases this may be undesired, so chunker provides two optional choices: Chunker stores modification times using the wrapped remote so support depends on that. For a small non-chunked file the chunker overlay simply manipulates modification time of the wrapped remote file. For a composite file with metadata chunker will get and set modification time of the metadata object on the wrapped remote. If file is chunked but metadata format is If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may stay in the directory.
They will not be shown by the Chunker requires wrapped remote to support server side Chunker encodes chunk number in file name, so with default Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers. Chunker will not automatically rename existing chunks when you run If wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can't have a file called "Hello.doc" and "hello.doc" in the same directory). Here are the standard options specific to chunker (Transparently chunk/split large files). Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). Files larger than chunk size will be split in chunks. Choose how chunker handles hash sums. All modes but "none" require metadata. Here are the advanced options specific to chunker (Transparently chunk/split large files). String format of chunk file names. The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If the chunk number has fewer digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match the given format. Minimum valid chunk number. Usually 0 or 1. By default chunk numbers start from 1. Format of the metadata object or "none". By default "simplejson". Metadata is a small JSON file named after the composite file. Choose how chunker should handle files with missing or invalid chunks. For files above 128MB rclone will use a chunked transfer. Rclone will upload up to Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". ShareFile only supports filenames up to 256 characters in length. In addition to the default restricted characters set the following characters are also replaced: Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Here are the standard options specific to sharefile (Citrix Sharefile). ID of the root folder
Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID). Here are the advanced options specific to sharefile (Citrix Sharefile). Cutoff for switching to multipart upload. Upload chunk size. Must a power of 2 >= 256k. Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer. Reducing this will reduce memory usage but decrease performance. Endpoint for API calls. This is usually auto discovered as part of the oauth process, but can be set manually to something like: https://XXX.sharefile.com This sets the encoding for the backend. See: the encoding section in the overview for more info. The To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example. First check your chosen remote is working - we’ll call it First check your chosen remote is working - we'll call it Now configure Important The password is stored in the config file is lightly obscured so it isn’t immediately obvious what it is. It is in no way secure unless you use config file encryption. Important The password is stored in the config file is lightly obscured so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption. A long passphrase is recommended, or you can use a random one. The obscured password is created by using AES-CTR with a static key, with the salt stored verbatim at the beginning of the obscured password. This static key is shared by between all versions of rclone. If you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible, but the obscured version will be different due to the different salt. In normal use, make sure the remote has a If you specify the remote as Note that unless you want encrypted bucket names (which are difficult to manage because you won’t know what directory they represent in web interfaces etc), you should probably specify a bucket, eg Note that unless you want encrypted bucket names (which are difficult to manage because you won't know what directory they represent in web interfaces etc), you should probably specify a bucket, eg To test I made a little directory of files using “standard” file name encryption. To test I made a little directory of files using "standard" file name encryption. If don’t use file name encryption then the remote will look like this - note the If don't use file name encryption then the remote will look like this - note the Here are some of the features of the file name encryption modes Off Standard Obfuscation This is a simple “rotate” of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called “hello” may become “53.jgnnq”. This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it’s an intermediate between “off” and “standard”. The advantage is that it allows for longer path segment names. This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called "hello" may become "53.jgnnq". 
This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it's an intermediate between "off" and "standard". The advantage is that it allows for longer path segment names. There is a possibility with some unicode based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents. You cannot rely on this for strong protection. Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers. There may be an even more secure file name encryption mode in the future which will address the long file name problem. Crypt offers the option of encrypting dir names or leaving them intact. There are two options: Crypt stores modification times using the underlying remote so support depends on that. Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator. Note that you should use the Here are the standard options specific to crypt (Encrypt/Decrypt a remote). Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended). How to encrypt the filenames. Option to either encrypt directory names or leave them intact. NB If filename_encryption is "off" then this option will do nothing. Password or pass phrase for encryption. NB Input to this must be obscured - see rclone obscure. Password or pass phrase for salt. Optional but recommended. Should be different to the previous password. NB Input to this must be obscured - see rclone obscure. Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). For all files listed show how the names encrypt. If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name. This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes. Run them with The help below will explain what arguments each command takes. See the "rclone backend" command for more info on how to pass options and arguments. These can be run on a running backend using the rc command backend/command. Encode the given filename(s) For example, let's say you have your original remote at To sync the two remotes, and to check their integrity, you would use commands along the lines of the sketch below.
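A sketch of those two commands, assuming the underlying encrypted directories are at remote:crypt and remote2:crypt (names illustrative):

    # Sync the two encrypted remotes directly - the data stays encrypted in transit
    rclone sync remote:crypt remote2:crypt
    # Verify the integrity of the copy
    rclone check remote:crypt remote2:crypt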
64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big. This uses a 32 byte (256 bit) key derived from the user password. A 1 byte file will encrypt to File names are encrypted segment by segment - the path is broken up into File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption. They are then encrypted with EME using AES with a 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway. This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system. This means that This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password. After encryption they are written out using a modified version of standard Rclone uses Paths are specified as A leading Dropbox supports modified times, but the only way to set a modification time is to re-upload the file. This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use Dropbox supports its own hash type which is checked for all transfers. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Here are the standard options specific to dropbox (Dropbox). Dropbox App Client Id Leave blank normally. Dropbox App Client Secret Leave blank normally. Here are the advanced options specific to dropbox (Dropbox). Upload chunk size. (< 150M). Any files larger than this will be uploaded in chunks of this size. Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory. Impersonate this user when using a business account. This sets the encoding for the backend. See: the encoding section in the overview for more info. Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are some file names such as If you have more than 10,000 files in a directory then
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users. Here is how to create your own Dropbox App ID for rclone: Log into the Dropbox App console with your Dropbox Account (It need not be the same account as the Dropbox you want to access) Choose an API => Usually this should be Choose the type of access you want to use => Name your App. The app name is global, so you can't use Click the button to create the App. Fill in the fields as required. Find the App key and App secret to use in the rclone config. FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package. Paths are specified as Here is an example of making an FTP configuration. First run rclone config This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. With an anonymous FTP server, you will need to use FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is 990. Here are the standard options specific to ftp (FTP Connection). FTP host to connect to FTP username, leave blank for current username, $USER FTP port, leave blank to use default (21) FTP password NB Input to this must be obscured - see rclone obscure. Use FTP over TLS (Implicit) Here are the advanced options specific to ftp (FTP Connection). Maximum number of FTP simultaneous connections, 0 for unlimited Do not verify the TLS certificate of the server Disable using EPSV even if server advertises support This sets the encoding for the backend. See: the encoding section in the overview for more info. Note that since FTP isn't HTTP based the following flags don't work with it: Note that FTP could support server side move but doesn't yet. Note that the ftp backend does not support the Note that while implicit FTP over TLS is supported, explicit FTP over TLS is not. Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket. You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines. To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the If no other source of credentials is provided, rclone will fall back to Application Default Credentials; this is useful both when you already have configured authentication for your developer account, or in production when running on a google compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - see this page. Note that in the case application default credentials are used, there is no need to explicitly configure a project number.
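As an example, here is a sketch of a config section pointing a google cloud storage remote at a service account credentials file (the remote name, project number and path are placeholders):

    [gcs]
    type = google cloud storage
    project_number = 123456789012
    service_account_file = /path/to/service-account.json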
This remote supports You can set custom upload headers with the Eg Note that the last of these is for setting custom metadata in the form Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). Google Application Client Id Leave blank normally. Google Application Client Secret Leave blank normally. Project number. Optional - needed only for list/create/delete buckets - see your developer console. Service Account Credentials JSON file path Leave blank normally. Needed only if you want to use SA instead of interactive login. Service Account Credentials JSON blob Leave blank normally. Needed only if you want to use SA instead of interactive login. Access Control List for new objects. Access Control List for new buckets. Access checks should use bucket-level IAM policies. If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this. When it is set, rclone: Location for the newly created buckets. The storage class to use when storing objects in Google Cloud Storage. Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). This sets the encoding for the backend. See: the encoding section in the overview for more info. The scopes are: This is the default scope and allows full access to all files, except for the Application Data Folder (see below). Choose this one if you aren't sure. This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted. This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone. Files created with this scope are visible in the web interface. This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either. This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories. You can set the Normally you will leave this blank and rclone will determine the correct root to use itself.
However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go). In order to do this you will have to find the So if the folder you want rclone to use has a URL which looks like NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone. There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise! Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet. You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines. To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual's Drive account, who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com. There are a few steps we need to go through to accomplish this: This remote supports It does this by combining multiple This works by combining many Google drive stores modification times accurate to 1 ms. Only Invalid UTF-8 bytes will be replaced, as they can't be used in JSON strings. In contrast to other backends, Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file. By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the In March 2020 Google introduced a new feature in Google Drive called drive shortcuts (API). These will (by September 2020) replace the ability for files or folders to be in multiple folders at once.
Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data (eg the inode in unix terms) so they don't break if the source is renamed or moved about. By default rclone treats these as follows. For shortcuts pointing to files: Google documents can be exported from and uploaded to Google Drive. When rclone downloads a Google doc it chooses a format to download depending upon the When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list. If you prefer an archive copy then you might use Note that rclone adds the extension to the google doc, so if it is called When importing files into Google Drive, rclone will convert all files with an extension in Here are the standard options specific to drive (Google Drive). Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance. Google Application Client Secret Setting your own is recommended. Scope that rclone should use when requesting access from drive. ID of the root folder Leave blank normally. Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point. Note that if this is blank, the first time rclone runs it will fill it in with the ID of the root folder. Service Account Credentials JSON file path Leave blank normally. Needed only if you want to use SA instead of interactive login. Here are the advanced options specific to drive (Google Drive). Service Account Credentials JSON blob Leave blank normally. Needed only if you want to use SA instead of interactive login. ID of the Team Drive Only consider files owned by the authenticated user. Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use Skip google documents in all listings. If given, gdocs practically become invisible to rclone. Skip MD5 checksum on Google photos and videos only. Use this if you get checksum errors when transferring Google photos or videos. Setting this flag will cause Google photos and videos to return a blank MD5 checksum. Google photos are identified by being in the "photos" space. Corrupted checksums are caused by Google modifying the image/video but not updating the checksum. Only show files that are shared with me. Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you). This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.
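For example (a sketch; the remote name drive: and the folder name are assumptions):

    # List the top level of your "Shared with me" folder
    rclone lsd drive: --drive-shared-with-me
    # Copy a folder someone shared with you to local disk
    rclone copy drive:shared-folder /local/backup --drive-shared-with-me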
This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too. Only show files that are in the trash. This will show trashed files in their original directory structure. Deprecated: see export_formats Comma separated list of preferred formats for downloading Google docs. Comma separated list of preferred formats for uploading Google docs. Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time. Use file created date instead of modified date., Useful when downloading data and you want the creation date used in place of the last modified date. WARNING: This flag may have some unexpected consequences. When uploading to your drive all files will be overwritten unless they haven’t been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the “–checksum” flag. This feature was implemented to retain photos capture date as recorded by google photos. You will first need to check the “Create a Google Photos folder” option in your google drive settings. You can then copy or move the photos locally and use the date the image was taken (created) set as the modification date. When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag. This feature was implemented to retain photos capture date as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and use the date the image was taken (created) set as the modification date. Use date file was shared instead of modified date. Note that, as with “–drive-use-created-date”, this flag may have unexpected consequences when uploading/downloading files. If both this flag and “–drive-use-created-date” are set, the created date is used. Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files. If both this flag and "--drive-use-created-date" are set, the created date is used. Size of listing chunk 100-1000. 0 to disable. Impersonate this user when using a service account. Note that if this is used then “root_folder_id” will be ignored. Use alternate export URLs for google documents export., If this option is set this instructs rclone to use an alternate set of export URLs for drive documents. Users have reported that the official export URLs can’t export large documents, whereas these unofficial ones can. If this option is set this instructs rclone to use an alternate set of export URLs for drive documents. Users have reported that the official export URLs can't export large documents, whereas these unofficial ones can. See rclone issue #2243 for background, this google drive issue and this helpful post. Cutoff for switching to chunked upload Upload chunk size. Must a power of 2 >= 256k. Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer. Reducing this will reduce memory usage but decrease performance. Set to allow files which return cannotDownloadAbusiveFile to be downloaded. 
If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway. Keep new head revision of each file forever. Show sizes as storage quota usage, not actual size. Show the size of a file as the storage quota used. This is the current version plus any older versions that have been set to keep forever. WARNING: This flag may have some unexpected consequences. It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only. If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also. If Objects are greater, use drive v2 API to download. Minimum time to sleep between API calls. Number of API calls to allow without sleeping. Allow server side operations (eg copy) to work across different drive configs. This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations. Disable drive using http2. There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed. See: https://github.com/rclone/rclone/issues/3631 Make upload limit errors be fatal. At the time of writing it is only possible to upload 750GB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync. Note that this detection is relying on error message strings which Google don't document so it may break in the future. See: https://github.com/rclone/rclone/issues/3857 If set skip shortcut files. Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely. This sets the encoding for the backend. See: the encoding section in the overview for more info. Run them with The help below will explain what arguments each command takes. See the "rclone backend" command for more info on how to pass options and arguments. These can be run on a running backend using the rc command backend/command.
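Invocation follows the generic backend command pattern; for the get and set commands described below, a sketch looks like this (remote name and values are illustrative):

    # Fetch the current chunk_size and service account file
    rclone backend get drive: -o chunk_size -o service_account_file
    # Update the chunk_size
    rclone backend set drive: -o chunk_size=67108864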
If set skip shortcut files Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely. This sets the encoding for the backend. See: the encoding section in the overview for more info. Run them with rclone backend COMMAND remote:. The help below will explain what arguments each command takes. See the "rclone backend" command for more info on how to pass options and arguments. These can be run on a running backend using the rc command backend/command. Get command for fetching the drive config parameters Options: Set command for updating the drive config parameters Options: Create shortcuts from files or directories Usage:
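Reconstructing the two usages described below (the remote names drive: and drive2: are the ones used in the surrounding text):

```
rclone backend shortcut drive: source_item destination_shortcut
rclone backend shortcut drive: source_item drive2: destination_shortcut
```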
In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The "source_item" and the "destination_shortcut" should be relative paths from "drive:" In the second example this creates a shortcut from the "source_item" relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:". Options: Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time. Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server side copies with --disable copy to download and upload the files if you prefer. Google docs will appear as size -1 in rclone ls. This is because rclone can't find out the size of the Google docs without downloading them. Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer. However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount. Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files. Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use rclone dedupe to fix duplicated files. Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes. The most likely cause of this is the duplicated file issue above - run rclone dedupe and check your logs for duplicate object or directory messages. This can also be caused by a delay/caching on google drive's end when comparing directory listings. Specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list. Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem. When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google. It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second so it is recommended to stay under that number, as using more will cause rclone to rate limit and make things slower. Here is how to create your own Google Drive client ID for rclone: Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access) Select a project or create a new project. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API". Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials" If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen. (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far). Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
Choose an application type of "Desktop app" if you are using a Google account or "Other" if you are using a GSuite account and click "Create". (the default name is fine) It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote. Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal). (Thanks to @balazer on github for these instructions.) Sometimes, creation of an OAuth consent in Google API Console fails due to an error message "The request failed because changes to one of the field of the resource is not supported". As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console.
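Once you have them, the remote's entry in the rclone config file might look something like this (a hedged sketch; the remote name and placeholder values are illustrative, not from the original text):

```
[drive]
type = drive
client_id = 123456789012-xxxx.apps.googleusercontent.com
client_secret = your_client_secret
scope = drive
```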
The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos. NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use. As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it. The directories under Note that all your photos and videos will appear somewhere under There are two writable parts of the tree, the Directories within the and the images directory contains This means that you can use the Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item. Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode. When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115. The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort. When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044. If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known. This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes. The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check. It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the read_size option (see the advanced options below). If you want to use the backend with rclone mount you should enable this option (see below). Rclone can only upload files to albums it created. This is a limitation of the Google Photos API. Rclone can only remove files it uploaded from albums it created. The Google Photos API does not support deleting albums - see bug #135714733.
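For instance, since rclone can only upload into albums it created, an initial upload might look like this (a sketch; the remote name remote: and the album name are placeholders):

```
# Create and populate a new album via the album part of the tree
rclone copy /home/local/images remote:album/newAlbum
```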
Here are the standard options specific to google photos (Google Photos). Google Application Client Id Leave blank normally. Google Application Client Secret Leave blank normally. Set to make the Google Photos backend read only. If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access. Here are the advanced options specific to google photos (Google Photos). Set to read the size of media items. Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media. Year limits the photos to be downloaded to those which are uploaded after the given year. The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!) Paths are specified as Here is an example of how to make a remote called Sync the remote This remote is read only - you can't upload files to an HTTP server. Most HTTP servers store time accurate to 1 second. Here are the standard options specific to http (http Connection). URL of http host to connect to Here are the advanced options specific to http (http Connection). Set HTTP headers for all transactions Use this to set additional HTTP headers for all transactions The input format is comma separated list of key,value pairs. Standard CSV encoding may be used. For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'. You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'. Set this if the site doesn't end directories with / Use this if your target website does not use / on the end of directories. A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them. Note that this may cause rclone to confuse genuine HTML files with directories. Don't use HEAD requests to find file sizes in dir listing If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to: If you set this option, rclone will not do the HEAD request. This will mean some files that don't exist may be in the listing.
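As a quick sketch of reading such a server without a saved config (the URL is a placeholder):

```
# List the top level of a website's file listing via an on-the-fly remote
rclone lsd --http-url https://example.com :http:
```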
If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default directory. This remote supports The modified time is stored as metadata on the object as Note that Hubic wraps the Swift backend, so most of the properties of the swift backend are the same. Here are the standard options specific to hubic (Hubic). Hubic Client Id Leave blank normally. Hubic Client Secret Leave blank normally. Here are the advanced options specific to hubic (Hubic). Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value. Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations. This sets the encoding for the backend. See: the encoding section in the overview for more info. This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API. The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these. Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. In addition to the official service at jottacloud.com, there are also several whitelabel versions which should work with this backend. To copy a local directory to a Jottacloud directory called backup
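A sketch of that copy, assuming the remote is called remote::

```
rclone copy /home/source remote:backup
```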
The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you'll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config. The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will only support them to a very limited degree. Generally you should avoid these, unless you know what you are doing. This remote supports Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown. Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. Jottacloud supports MD5 type hashes, so you can use the --checksum flag. Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR environment variable points to) before it is uploaded. In addition to the default restricted characters set the following characters are also replaced: Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings. By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the --jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command. Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website. To view your current quota you can use the rclone about remote: command, which will display your usage limit (unless it is unlimited) and the current usage. Here are the advanced options specific to jottacloud (Jottacloud). Files bigger than this will be cached on disk to calculate the MD5 if required. Only show files that are in the trash. This will show trashed files in their original directory structure. Delete files permanently rather than putting them into the trash. Remove existing public link to file/folder with link command rather than creating. Default is false, meaning link command will create or retrieve public link. Files bigger than this can be resumed if the upload fails. This sets the encoding for the backend. See: the encoding section in the overview for more info. Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ (a full-width question mark) instead. Jottacloud only supports filenames up to 255 characters in length. Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
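For example, to bypass the trash as described above, and to empty it (remote name assumed):

```
# Delete permanently instead of moving to the trash
rclone delete remote:path/to/dir --jottacloud-hard-delete
# Empty the trash
rclone cleanup remote:
```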
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings. Here are the standard options specific to koofr (Koofr). Your Koofr user name Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password) NB Input to this must be obscured - see rclone obscure. Here are the advanced options specific to koofr (Koofr). The Koofr API endpoint to use Mount ID of the mount to use. If omitted, the primary mount is used. Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. This sets the encoding for the backend. See: the encoding section in the overview for more info. Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Mail.ru Cloud is a cloud storage provided by a Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available only on Windows. (Please note that official sites are in Russian) Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until it gets eventually implemented. Sync Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970". Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 block size (20 bytes), its hash is simply its data right-padded with zero bytes. The hash sum of a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal representation of the data length.
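A rough shell sketch of the large-file case of this algorithm (GNU coreutils assumed; this illustrates the description above, it is not rclone's actual implementation):

```
# SHA1 over the file's bytes followed by the decimal byte length
f=path/to/file
{ cat "$f"; printf '%s' "$(stat -c%s "$f")"; } | sha1sum
```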
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. File size limits depend on your account. A single file size is limited by 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits. Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". Here are the standard options specific to mailru (Mail.ru Cloud). User name (usually email) Password NB Input to this must be obscured - see rclone obscure. Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization. Here are the advanced options specific to mailru (Mail.ru Cloud). Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters. This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust your RAM or disk space) Files larger than the size given below will always be hashed on disk. What should copy do if file checksum is mismatched or invalid HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line. Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist gzip insecure retry400 This sets the encoding for the backend. See: the encoding section in the overview for more info. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
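A hedged illustration of the speedup file patterns option above (the flag spelling --mailru-speedup-file-patterns follows rclone's usual backend-flag naming; the remote name and patterns are placeholders):

```
# Restrict put-by-hash to common media files only
rclone copy /local/media mailru:media --mailru-speedup-file-patterns "*.mp3,*.mp4,*.avi"
```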
Mega can have two files with exactly the same name and path (unlike a normal file system). Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use rclone dedupe to fix duplicated files. Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands. For example, executing this command 90 times in a row You can mitigate this issue by mounting the remote with rclone mount. Rclone does not currently close mega sessions (you can see them in the web interface), however closing the sessions does not solve the issue. If you space rclone commands by 3 seconds it will avoid blocking the remote. We haven't identified the exact blocking rules, so perhaps one could execute the command 80 times without waiting and avoid blocking by waiting 3 seconds, then continuing... Note that this has been observed by trial and error and might not be set in stone. Other tools seem not to produce this blocking effect, as they use a different working approach (state-based, using sessionIDs instead of log-in) which isn't compatible with the current stateless rclone approach. Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 minutes, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though. Investigation is continuing in relation to workarounds based on timeouts, pacers, retrials and tpslimits - if you discover something relevant, please post on the forum. So, if rclone was working nicely and suddenly you are unable to log-in and you are sure the user and the password are correct, likely you have got the remote blocked for a while. Here are the standard options specific to mega (Mega). User name Password. NB Input to this must be obscured - see rclone obscure. Here are the advanced options specific to mega (Mega). Output more debug from Mega. If this flag is set (along with -vv) it will print further debugging information from the mega backend. Delete files permanently rather than putting them into the trash. Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead. This sets the encoding for the backend. See: the encoding section in the overview for more info. This backend uses the go-mega go library which is an opensource go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library. Mega allows duplicate files which may confuse rclone. The memory backend is an in RAM backend. It does not persist its data - use the local backend for that. Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, eg
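Reconstructed sketch of such usage (the :memory: on-the-fly remote syntax is rclone's standard connection-string form):

```
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:
```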
Sync This remote supports The modified time is stored as metadata on the object with the Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk. You can also list the single container from the root. This will only show the container specified by the SAS URL. Note that you can't see or access any other containers - this will fail Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server. Rclone supports multipart uploads with Azure Blob storage. Files bigger than 256MB will be uploaded using chunked upload by default. The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to "--transfers" of them stored at once in memory. Files can't be split into more than 50,000 chunks, so by default the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates less than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M. Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks. Here are the standard options specific to azureblob (Microsoft Azure Blob Storage). Storage Account Name (leave blank to use SAS URL or Emulator) Storage Account Key (leave blank to use SAS URL or Emulator) SAS URL for container level access only (leave blank if using account/key or Emulator) Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint) Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage). Endpoint for the service Leave blank normally. Cutoff for switching to chunked upload (<= 256MB). Upload chunk size (<= 100MB). Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory. Size of blob list. This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out (source). This can be used to limit the number of blobs items to return, to avoid the time out. Access tier of blob: hot, cool or archive. Archived blobs can be restored by setting access tier to hot or cool. Leave blank if you intend to use default access tier, which is set at account level. If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs "Set Tier" operation on blobs while uploading, if objects are not modified, specifying "access tier" to new one will have no effect. If blobs are in "archive tier" at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to "Hot" or "Cool".
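For instance, a hedged sketch of uploading straight into the cool tier (the remote, container and path names are placeholders):

```
rclone copy /local/archive azureblob:container/archive --azureblob-access-tier cool
```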
Don't store MD5 checksum with object metadata. Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading. How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool. Whether to use mmap buffers in internal memory pool. This sets the encoding for the backend. See: the encoding section in the overview for more info. To copy a local directory to a OneDrive directory called backup You can use your own Client ID if the default ( If you are having problems with them (E.g., seeing a lot of throttling), you can get your own Client ID and Key by following the steps below: Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website. Here are the standard options specific to onedrive (Microsoft OneDrive). Microsoft App Client Id Leave blank normally. Microsoft App Client Secret Leave blank normally. Here are the advanced options specific to onedrive (Microsoft OneDrive). Chunk size to upload files with - must be multiple of 320k (327,680 bytes). Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory. The ID of the drive to use The type of the drive ( personal | business | documentLibrary ) Set to make OneNote files show up in directory listings. By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.
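A sketch of listing with that option enabled (the flag spelling --onedrive-expose-onenote-files follows rclone's usual backend-flag naming; the remote name is assumed):

```
rclone lsf onedrive: --onedrive-expose-onenote-files
```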
Allow server side operations (eg copy) to work across different onedrive configs. This can be useful if you wish to do a server side copy between two different Onedrives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations. This sets the encoding for the backend. See: the encoding section in the overview for more info. Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a The largest allowed file sizes are 15GB for OneDrive for Business and 100GB for OneDrive Personal (Updated 19 May 2020). Source: https://support.office.com/en-us/article/upload-photos-and-files-to-onedrive-b00ad3fe-6643-4b16-9212-de00ef02b586 Note: Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting: Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met. User Weropol has found a method to disable versioning on OneDrive It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. To use rclone with such affected files on Sharepoint, you may disable these checks with the --ignore-checksum and --ignore-size command line arguments. It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the --backup-dir <BACKUPDIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory.
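Putting those two workarounds together, a sync might look like this (a sketch; the remote and paths are placeholders):

```
# Relax size/hash checks for SharePoint-modified Office files and move
# replaced/deleted files into a backup directory instead of deleting them
rclone sync /local/docs onedrive:docs --ignore-checksum --ignore-size --backup-dir onedrive:docs-backup
```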
This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins. However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config and edit your OneDrive remote, refreshing the token when asked. Paths are specified as Paths may be as deep as required, eg Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Here are the standard options specific to opendrive (OpenDrive). Username Password. NB Input to this must be obscured - see rclone obscure. Here are the advanced options specific to opendrive (OpenDrive). This sets the encoding for the backend. See: the encoding section in the overview for more info. Files will be uploaded in chunks this size. Note that these chunks are buffered in memory so increasing them will increase memory use. Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a Paths are specified as Here is an example of making a QingStor configuration. First run rclone config Sync This remote supports rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM. Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket With QingStor you can list buckets ( The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Here are the standard options specific to qingstor (QingCloud Object Storage). Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. QingStor Access Key ID Leave blank for anonymous access or runtime credentials. QingStor Secret Access Key (password) Leave blank for anonymous access or runtime credentials. Enter an endpoint URL to connect to the QingStor API. Leave blank to use the default value "https://qingstor.com:443" Zone to connect to. Default is "pek3a". Here are the advanced options specific to qingstor (QingCloud Object Storage). Number of connection retries. Cutoff for switching to chunked upload Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB. Chunk size to use for uploading. When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size. Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer. If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
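A hedged sketch of tuning the chunk size (remote and bucket names are placeholders; see the concurrency caveat just below):

```
rclone copy bigfile.bin remote:bucket --qingstor-chunk-size 16M
```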
Concurrency for multipart uploads. This is the number of chunks of the same file that are uploaded concurrently. NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though). This sets the encoding for the backend. See: the encoding section in the overview for more info. When you run through the config, make sure you choose true for env_auth and leave everything else blank. rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library. If your OpenStack installation uses a non-standard authentication method that might not be yet supported by rclone or the underlying swift library, you can authenticate externally (e.g. calling manually the openstack commands to get a token). You can use rclone with swift without a config file, if desired, like this:
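A reconstructed sketch of that config-less invocation (the remote name myremote and the credentials file are placeholders):

```
source openstack-credentials-file
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
```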
This remote supports As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata. For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded. Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). Get swift credentials from environment variables in standard OpenStack form. User name to log in (OS_USERNAME). API key or password (OS_PASSWORD). Authentication URL for server (OS_AUTH_URL). User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME) Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID) Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) Region name - optional (OS_REGION_NAME) Storage URL - optional (OS_STORAGE_URL) Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) Application Credential ID (OS_APPLICATION_CREDENTIAL_ID) Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME) Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET) AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) The storage policy to use when creating a new container This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider. Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)). Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value. Don't chunk files during streaming upload. When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files. This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM. Rclone will still chunk files bigger than chunk_size when doing normal copy operations. This sets the encoding for the backend. See: the encoding section in the overview for more info. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these. Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift. So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag. This may also be caused by specifying the region when you shouldn't have (eg OVH). This is most likely caused by forgetting to specify your tenant when setting up a swift remote. Paths are specified as Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. So if the folder you want rclone to use has a URL which looks like Here are the standard options specific to pcloud (Pcloud). Pcloud App Client Id Leave blank normally. Pcloud App Client Secret Leave blank normally. Here are the advanced options specific to pcloud (Pcloud). This sets the encoding for the backend. See: the encoding section in the overview for more info. Fill in for rclone to use a non root folder as its starting point. Hostname to connect to. This is normally set when rclone initially does the oauth connection. Paths are specified as Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Here are the standard options specific to premiumizeme (premiumize.me). API Key. This is not normally used - use oauth instead. Here are the advanced options specific to premiumizeme (premiumize.me). This sets the encoding for the backend. See: the encoding section in the overview for more info. Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". premiumize.me file names can't have the premiumize.me only supports filenames up to 255 characters in length. Paths are specified as Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Here are the advanced options specific to putio (Put.io).
This sets the encoding for the backend. See: the encoding section in the overview for more info. This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x and 7.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
There are two distinct modes you can setup your remote: - you point your remote to the root of the server, meaning you don't specify a library during the configuration: Paths are specified as Here is an example of making a seafile configuration for a user with no two-factor authentication. First run rclone config. This remote is called See all libraries Create a new library Sync Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and it will attempt to authenticate you: You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once. You specified See all files in the library: Sync Seafile version 7+ supports --fast-list. In addition to the default restricted characters set the following characters are also replaced: Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings. Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory: Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link. It has been actively tested using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly. Here are the standard options specific to seafile (seafile). URL of seafile host to connect to User name (usually email address) Password NB Input to this must be obscured - see rclone obscure. Two-factor authentication ('true' if the account has 2FA enabled) Name of the library. Leave blank to access all non-encrypted libraries. Library password (for encrypted libraries only). Leave blank if you pass it through the command line. NB Input to this must be obscured - see rclone obscure. Authentication token Here are the advanced options specific to seafile (seafile). Should rclone create a library if it doesn't exist This sets the encoding for the backend. See: the encoding section in the overview for more info.
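For the share links described above, usage might look like this (a sketch; the remote name seafile: and the paths are placeholders):

```
# Share link for a file
rclone link seafile:seafile-tutorial.doc
# Share link for a directory
rclone link seafile:dir
```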
Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you:

You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.

You specified

See all files in the library:

Sync

Seafile version 7+ supports

In addition to the default restricted characters set the following characters are also replaced:

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:

Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.

It has been actively tested using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition

Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

Here are the standard options specific to seafile (seafile).

URL of seafile host to connect to

User name (usually email address)

Password. NB Input to this must be obscured - see rclone obscure.

Two-factor authentication ('true' if the account has 2FA enabled)

Name of the library. Leave blank to access all non-encrypted libraries.

Library password (for encrypted libraries only). Leave blank if you pass it through the command line. NB Input to this must be obscured - see rclone obscure.

Authentication token

Here are the advanced options specific to seafile (seafile).

Should rclone create a library if it doesn't exist

This sets the encoding for the backend. See: the encoding section in the overview for more info.

SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

Paths are specified as

Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.

Here is an example of making an SFTP configuration. First run

Key files should be PEM-encoded private key files. For instance

The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e.

key_pem = -----BEGIN RSA PRIVATE KEY-----0gAMbMbaSsd-----END RSA PRIVATE KEY-----

This will generate it correctly for key_pem for use in the config:
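The one-liner used for this is along these lines (key path assumed):

awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa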
If you don't specify

You can also specify

Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.

If you set the

Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option

Here are the standard options specific to sftp (SSH/SFTP Connection).

SSH host to connect to

SSH username, leave blank for current username, ncw

SSH port, leave blank to use default (22)

SSH password, leave blank to use ssh-agent. NB Input to this must be obscured - see rclone obscure.

Raw PEM-encoded private key. If specified, will override key_file parameter.

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.

The passphrase to decrypt the PEM-encoded private key file. Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used. NB Input to this must be obscured - see rclone obscure.

When set forces the usage of the ssh-agent. When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows to avoid

Enable the use of insecure ciphers and key exchange methods. This enables the use of the following insecure ciphers and key exchange methods:

Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.

Here are the advanced options specific to sftp (SSH/SFTP Connection).

Allow asking for SFTP password when needed. If this is set and no password is supplied then rclone will: - ask for a password - not contact the ssh agent

Override path used by SSH connection. This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes. Shared folders can be found in directories representing volumes. Home directory can be found in a shared folder called "home".

Set the modified time on the remote if set.

The command used to read md5 hashes. Leave blank for autodetect.

The command used to read sha1 hashes. Leave blank for autodetect.

Set to skip any symlinks and any other non regular files.

SFTP supports checksums if the same login has shell access and

SFTP also supports

Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using

The only ssh agent supported under Windows is Putty's pageant.

The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the

Note that SFTP isn't supported under plan9 until this issue is fixed.

Note that since SFTP isn't HTTP based the following flags don't work with it:

Note that C14 is supported through the SFTP backend.

rsync.net is supported through the SFTP backend. See rsync.net's documentation of rclone examples.

SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. Note that the config asks for your email and password but doesn't store them, it only uses them to get the initial token.

Once configured you can then use

List directories (sync folders) in top level of your SugarSync

List all the files in your SugarSync folder "Test"

To copy a local directory to an SugarSync folder called backup
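The matching commands might look like this (the remote name remote: is assumed):

rclone lsd remote:
rclone ls remote:Test
rclone copy /home/source remote:backup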
Paths are specified as

Paths may be as deep as required, eg

NB you can't create files in the top level folder you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.

SugarSync does not support modification times or hashes, therefore syncing will default to

SugarSync replaces the default restricted characters set except for DEL.

Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

Deleted files will be moved to the "Deleted items" folder by default. However you can supply the flag

Here are the standard options specific to sugarsync (Sugarsync).

Sugarsync App ID. Leave blank to use rclone's.

Sugarsync Access Key ID. Leave blank to use rclone's.

Sugarsync Private Access Key. Leave blank to use rclone's.

Permanently delete files if true otherwise put them in the deleted files.

Here are the advanced options specific to sugarsync (Sugarsync).

Sugarsync refresh token. Leave blank normally, will be auto configured by rclone.

Sugarsync authorization. Leave blank normally, will be auto configured by rclone.

Sugarsync authorization expiry. Leave blank normally, will be auto configured by rclone.

Sugarsync user. Leave blank normally, will be auto configured by rclone.

Sugarsync root id. Leave blank normally, will be auto configured by rclone.

Sugarsync deleted folder id. Leave blank normally, will be auto configured by rclone.

This sets the encoding for the backend. See: the encoding section in the overview for more info.

Paths are specified as

Once configured you can then use

Use the

Use a folder in the local path to upload all its objects. Only modified files will be copied.

Use the

Use a folder in the remote path to download all its objects.

Use the

Since this can cause data loss, test first with the

The sync can be done also from Tardigrade to the local file system.

Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).

Choose an authentication method.

Access Grant.

Satellite Address. Custom satellite address should match the format:

API Key.

Encryption Passphrase. To access existing objects enter passphrase used for uploading.

Here are the standard options specific to union (Union merges the contents of several upstream fs).

List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.

Policy to choose upstream on ACTION category.

Policy to choose upstream on CREATE category.

Policy to choose upstream on SEARCH category.

Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
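As a hedged sketch, a union remote combining two upstreams could look like this in the config file (remote and directory names are illustrative):

[union]
type = union
upstreams = remote1:dir1 remote2:dir2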
Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.

Here are the standard options specific to webdav (Webdav).

URL of http host to connect to

Name of the Webdav site/service/software you are using

User name

Password. NB Input to this must be obscured - see rclone obscure.

Bearer token instead of user/pass (eg a Macaroon)

Here are the advanced options specific to webdav (Webdav).

Command to run to get a bearer token

This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (

Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner github#1975

This means that these accounts can't be added using the official API (other Accounts should work with the "onedrive" option). However, it is possible to access them using webdav.

To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL:

You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive.

Add the remote to rclone like this: Configure the

Your config file should look like this:

As SharePoint does some special things with uploaded documents, you won't be able to use the documents size or the documents hash to compare if a file has been changed since the upload / which file is newer.

For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents:
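The flags in question are most likely these (verify against your rclone version), eg:

rclone sync --ignore-size --ignore-checksum --update /path/to/local remote:path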
dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens.

Configure as normal using the

The config will end up looking something like this.

Note

Before the

The rclone

Configure as a normal WebDAV endpoint, using the 'other' vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g.,

The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.

To view your current quota you can use the

The default restricted characters set are replaced.

Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

When uploading very large files (bigger than about 5GB) you will need to increase the

Here are the standard options specific to yandex (Yandex Disk).

Yandex Client Id. Leave blank normally.

Yandex Client Secret. Leave blank normally.

Here are the advanced options specific to yandex (Yandex Disk).

Remove existing public link to file/folder with link command rather than creating. Default is false, meaning link command will create or retrieve public link.

This sets the encoding for the backend. See: the encoding section in the overview for more info.

Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second on OS X.

Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.

There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded files names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the

If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name

Invalid UTF-8 bytes will also be replaced, as they can't be converted to UTF-16.

Rclone handles long paths automatically, by converting all paths to long UNC paths which allows paths up to 32,767 characters. This is why you will see that your paths, for instance

Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).

If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage. The text file will contain the target of the symbolic link (see example). This flag applies to all commands.

For example, supposing you have a directory structure like this

Copying the entire directory with '-l'
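For instance, the copy might be run like this (paths illustrative):

rclone copy -l /tmp/a/ remote:/tmp/a/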
The remote files are created with a '.rclonelink' suffix

Copying them back with '-l'

However, if copied back without '-l'

Note that this flag is incompatible with

Normally rclone will recurse through filesystems as mounted. However if you set

For example if you have a directory hierarchy like this

NB Rclone (like most unix tools such as

NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will be ignored.

Here are the standard options specific to local (Local Disk).

Disable UNC (long path names) conversion on Windows

Here are the advanced options specific to local (Local Disk).

Follow symlinks and copy the pointed to item.

Translate symlinks to/from regular files with a '.rclonelink' extension

Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.

Don't apply unicode normalization to paths and filenames (Deprecated)

This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.

Don't check to see if the files change during upload

Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload. However on some file systems this modification time check may fail (eg Glusterfs #2206) so this check can be disabled with this flag.

Don't cross filesystem boundaries (unix/macOS only).

Force the filesystem to report itself as case sensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.

Force the filesystem to report itself as case insensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.

Disable sparse files for multi-thread downloads. On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with.

This sets the encoding for the backend. See: the encoding section in the overview for more info.

Run them with

The help below will explain what arguments each command takes.

See the "rclone backend" command for more info on how to pass options and arguments.

These can be run on a running backend using the rc command backend/command.

A null operation for testing backend commands. This is a test command which has some options you can try to change the output.

Options:

Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.

Currently rclone loads each directory entirely into memory before using it. Since each Rclone object takes 0.5k-1k of memory this can take a very long time and use an extremely large amount of memory.

Millions of files in a directory tend to be caused by software writing cloud storage (eg S3 buckets).

Bucket based remotes (eg S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket based remote will tend to disappear.

Some software creates empty keys ending in

Bugs are stored in rclone's GitHub project:

You can use rclone from multiple places at the same time if you choose different subdirectory for the output, eg
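For example (paths and remote names illustrative):

Server A> rclone copy /tmp/whatever remote:ServerA
Server B> rclone copy /tmp/whatever remote:ServerB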
If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, eg

The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates.

Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system.

Cloud storage systems (at least none I've come across yet) don't support partially uploading an object. You can't take an existing object, and change some bytes in the middle of it.

It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system.

All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However to make this work efficiently this would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects.

Note that the ftp backend does not support

This means that

Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.

The two environment variables

Note that you may need to add the

Likely this means that you are running rclone on a Linux version not supported by the go runtime, ie earlier than version 2.6.23. See the system requirements section in the go install docs for full details.

This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats

This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g.

rclone(1) User Manual
Rclone syncs your files to cloud storage
About rclone
--dry-run protection. It is used at the command line, in scripts or via its API.

curl https://rclone.org/install.sh | sudo bash
curl https://rclone.org/install.sh | sudo bash -s beta

Linux installation from precompiled binary
curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
/config/rclone into the Docker container. Due to the fact that rclone updates tokens inside its config file, and that the update process involves a file rename, you need to mount the whole host rclone config dir, not just the single host rclone config file.

/data into the Docker container.

rclone mount inside a userspace Docker container, and expose the resulting fuse mount to the host. The exact docker run options to do that might vary slightly between hosts. See, e.g. the discussion in this thread.

/etc/passwd and /etc/group for fuse to work inside the container.

Install from source
git clone https://github.com/rclone/rclone.git
cd rclone
go build
./rclone version

~/go) with:
make instead of go build then the rclone build will have the correct version information in it.

go get github.com/rclone/rclone

go get github.com/rclone/rclone@master

$(go env GOPATH)/bin (~/go/bin/rclone by default) after downloading the source to the go module cache. Note - do not use the -u flag here. This causes go to try to update the dependencies that rclone uses and sometimes these don't work with the current version of rclone.

Installation with Ansible
git clone https://github.com/stefangweichinger/ansible-rclone.git into your local roles-directory

Configure
--config entry for how to find the config file and choose its location.)

rclone config
Syntax: [options] subcommand <parameters> <parameters...>

Subcommands
Synopsis
rclone copy source:sourcepath dest:destpath

sourcepath/one.txt
sourcepath/two.txt
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt

rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.

rclone copy --max-age 24h --no-traverse /path/to/src remote:

-P/--progress flag to view real-time transfer statistics

rclone sync
Synopsis
--dry-run flag to see exactly what would be copied and deleted.

copy command above if unsure.

-P/--progress flag to view real-time transfer statistics

rclone sync source:path dest:path [flags]

Options
source:path into dest:path. After this source:path will no longer exist.

source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.

-P/--progress flag to view real-time transfer statistics.

rclone move source:path dest:path [flags]

Options
Synopsis
purge it obeys include/exclude filters so can be used to selectively delete files. rclone delete only deletes objects but leaves the directory structure alone. If you want to delete a directory and all of its contents use rclone purge

rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
rclone --min-size 100M delete remote:path

rclone delete remote:path [flags]

Options
-h, --help help for delete
rclone mkdir

Synopsis

rclone mkdir remote:path [flags]

Options
-h, --help help for mkdir

rclone rmdir
Synopsis
rclone rmdir remote:path [flags]

Options

-h, --help help for rmdir

rclone check
Synopsis
rclone check source:path dest:path [flags]

Options
--download Check by downloading rather than with hash.
lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

rclone ls remote:path [flags]

Options
-1 2016-10-17 17:41:53 -1 1000files
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files
-h, --help help for ls

lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

rclone lsd remote:path [flags]

Options
-h, --help help for lsd
lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

rclone lsl remote:path [flags]

Options
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
-h, --help help for lsl

$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022

{
"total": 18253611008,
"used": 7993453766,
Synopsis

rclone authorize [flags]

Options
--auth-no-open-browser Do not automatically open auth link in default browser
rclone backend
Synopsis
rclone backend help remote:
rclone backend help <backendname>

rclone cat remote:path/to/dir
rclone --include "*.txt" cat remote:path/to/dir

rclone cat remote:path [flags]

Options
--count int Only print N characters. (default -1)
rclone config create myremote swift env_auth true
rclone config create mydrive drive config_is_local false

rclone config create `name` `type` [`key` `value`]* [flags]

Synopsis
rclone config disconnect remote: [flags]

Options

-h, --help help for disconnect

rclone config password
Synopsis
key value.

rclone config password myremote fieldname mypassword

rclone config password `name` [`key` `value`]+ [flags]

Options

-h, --help help for password

Synopsis
rclone config reconnect remote: [flags]

Options
rclone config update
Synopsis
key value.

rclone config update myremote swift env_auth true
rclone config update myremote swift env_auth true config_refresh_token false

rclone config update `name` [`key` `value`]+ [flags]

Options
if src is directory
copy it to dst, overwriting existing files if they exist
see copy command for full details
-P/--progress flag to view real-time transfer statistics

rclone copyto source:path dest:path [flags]

Options
rclone copyurl
Synopsis
rclone copyurl https://example.com dest:path [flags]

Options

-a, --auto-filename Get the file name from the URL and use it for destination file path
rclone cryptcheck remote:path encryptedremote:path

rclone cryptcheck remote:path cryptedremote:path [flags]

Options
-h, --help help for cryptcheck
Synopsis
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
rclone cryptdecode --reverse encryptedremote: filename1 filename2

rclone deletefile
Synopsis
delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.

rclone deletefile remote:path [flags]

Options

-h, --help help for deletefile

rclone genautocomplete
Synopsis
Options
-h, --help help for genautocomplete
rclone link remote:path/to/file
rclone link remote:path/to/folder/

rclone link remote:path [flags]

Options

canole
diwogej7
ferejej3gux/
fubuwic

-h, --help help for link
-p - path
s - size
t - modification time
o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, eg "Hot" or "Cool"
$ rclone lsf --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .

$ rclone lsf --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
cd65ac234e6fea5925974a51cdd865cc canole
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path

lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

rclone lsf remote:path [flags]

Options
--absolute Put a leading / in front of path names.
Synopsis

lsjson to list objects and directories in JSON format

ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.

ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.

lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.

rclone lsjson remote:path [flags]

Options
--dirs-only Show only directories in the listing.
rclone mount
Synopsis
rclone config. Check it works with rclone ls etc.

/path/to/local/mount is an empty existing directory.

rclone mount remote:path/to/files /path/to/local/mount

X: is an unused drive letter or use a path to non-existent directory.
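On Windows the mount might then be run like this (the drive letter is illustrative):

rclone mount remote:path/to/files X: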
# Linux
fusermount -u /path/to/local/mount
umount /path/to/local/mount

Limitations

rclone mount vs rclone sync/copy

Attribute caching

Filters

systemd

chunked reading

Directory Cache
--dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
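The usual command for this (as given in the rclone docs) is:

kill -SIGHUP $(pidof rclone)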
rclone rc vfs/forget file=path/to/file dir=path/to/dir

File Buffering

--buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

--buffer-size * open files.

File Caching
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

--vfs-cache-mode off
--vfs-cache-mode minimal
--vfs-cache-mode writes
--vfs-cache-mode full

--vfs-cache-max-age.

Case Sensitivity
rclone mount remote:path /path/to/mountpoint [flags]

Options
if src is directory
move it to dst, overwriting existing files if they exist
see move command for full details
--allow-non-empty Allow mounting over a non-empty directory (not Windows).

-P/--progress flag to view real-time transfer statistics.

rclone moveto source:path dest:path [flags]

Options
rclone ncdu
Synopsis
↑,↓ or k,j to Move
→,l to enter
←,h to return
? to toggle help on and off
q/ESC/c-C to quit

rclone ncdu remote:path [flags]

Options
-h, --help help for ncdu

rclone obscure
Synopsis
rclone obscure password [flags]

rclone rc
Synopsis
-o key=value -o key2

{"key":"value", "key2","")

-a value -a value2

["value", "value2"]

rclone rc --loopback operations/about fs=/

rclone rc commands parameter [flags]

Options
-a, --arg stringArray Argument placed in the "arg" array.
ffmpeg - | rclone rcat remote:path/to/file

--streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance.

rclone move it to the destination.

rclone rcat remote:path [flags]

Options
-h, --help help for rcat

Synopsis

rclone rmdirs remote:path [flags]

Options
Server options
Directory Cache
--dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

File Buffering
--buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

--buffer-size * open files.

File Caching
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

--vfs-cache-mode off
--vfs-cache-mode minimal
--vfs-cache-mode writes
--vfs-cache-mode full

--vfs-cache-max-age.

Case Sensitivity
rclone serve dlna remote:path [flags]

Options
--addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
Synopsis
Server options
Authentication
Directory Cache
--dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

File Buffering
--buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

--buffer-size * open files.

File Caching
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

--vfs-cache-mode off
--vfs-cache-mode minimal
--vfs-cache-mode writes
--vfs-cache-mode full

--vfs-cache-max-age.

Case Sensitivity
Auth Proxy
--auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

--auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

_root - root to use for the backend

_obscure - comma separated strings for parameters to obscure

user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
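As a sketch of the exchange (field values illustrative), the proxy program might receive this on STDIN:

{"user": "me", "pass": "mypassword"}

and return a complete backend config like this on STDOUT:

{"type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com"}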
user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

rclone serve ftp remote:path [flags]

Options
Synopsis
Server options
Allows for creating a relative navigation

-- .Link
The relative to the root link of the Text.

-- .Text
The Name of the directory.
Information about a specific file/directory.
-- .URL
The 'url' of an entry.

-- .Leaf
Currently same as 'URL' but intended to be 'just' the name.

-- .IsDir
Boolean for if an entry is a directory or not.

-- .Size
Size in Bytes of the entry.

-- .ModTime
The UTC timestamp of an entry.
Authentication
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

SSL/TLS
Directory Cache
--dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

File Buffering
--buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

--buffer-size * open files.

File Caching
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

--vfs-cache-mode off
--vfs-cache-mode minimal
--vfs-cache-mode writes
--vfs-cache-mode full

--vfs-cache-max-age.

Case Sensitivity
rclone serve http remote:path [flags]

Options
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
rclone serve restic

Synopsis

Setting up rclone for use by restic

rclone serve restic -v remote:backup

Setting up restic to use rclone
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
$ export RESTIC_PASSWORD=yourpassword
snapshot 45c8fdd8 saved

Private repositories

/<username>/.

Server options
Allows for creating a relative navigation

-- .Link
The relative to the root link of the Text.

-- .Text
The Name of the directory.
Information about a specific file/directory.
-- .URL
The 'url' of an entry.

-- .Leaf
Currently same as 'URL' but intended to be 'just' the name.

-- .IsDir
Boolean for if an entry is a directory or not.

-- .Size
Size in Bytes of the entry.

-- .ModTime
The UTC timestamp of an entry.
Authentication
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

SSL/TLS

rclone serve restic remote:path [flags]

Options
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
Synopsis
Directory Cache
--dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

File Buffering
--buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

--buffer-size * open files.

File Caching
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

--vfs-cache-mode off
--vfs-cache-mode minimal
--vfs-cache-mode writes
--vfs-cache-mode full

--vfs-cache-max-age.

Case Sensitivity
Auth Proxy
--auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.

--auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.

_root - root to use for the backend

_obscure - comma separated strings for parameters to obscure

user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

rclone serve sftp remote:path [flags]

Options
Synopsis
Webdav options
--etag-hash
Server options
Allows for creating a relative navigation

-- .Link
The relative to the root link of the Text.

-- .Text
The Name of the directory.
Information about a specific file/directory.
-- .URL
The 'url' of an entry.

-- .Leaf
Currently same as 'URL' but intended to be 'just' the name.

-- .IsDir
Boolean for if an entry is a directory or not.

-- .Size
Size in Bytes of the entry.

-- .ModTime
The UTC timestamp of an entry.
Authentication
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser

SSL/TLS

Directory Cache
--dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

rclone rc vfs/forget file=path/to/file dir=path/to/dir

File Buffering
--buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

--buffer-size * open files.

File Caching
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.

--vfs-cache-mode off
--vfs-cache-mode minimal
--vfs-cache-mode writes
--vfs-cache-mode full

--vfs-cache-max-age.

Case Sensitivity
Auth Proxy
--auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocl with input on STDIN and output on STDOUT.--auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config._root - root to use for the backend_obscure - comma separated strings for parameters to obscureuser and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you’d probably want to restrict the host to a limited list.user so only use that for configuration, don’t use pass or public_key. This also means that if a user’s password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.rclone serve webdav remote:path [flags]Options
rclone serve webdav remote:path [flags]

Options
Synopsis
rclone touch remote:path [flags]

Options
└── file5
1 directories, 5 files
-h, --help help for touch
rclone tree remote:path [flags]

Options
-a, --all All files are listed (list . files too).
Copying single files
rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory if it isn't.

For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

rclone copy remote:test.jpg /tmp/download

The file test.jpg will be placed inside /tmp/download.

/path/to/dir
This refers to the local file system.

On Windows \ may be used instead of / in local paths only, non local paths must use /.

These paths needn't start with a leading / - if they don't then they will be relative to the current directory.

remote:path/to/dir
This refers to a directory path/to/dir on remote: as defined in the config file (configured with rclone config).

remote:/path/to/dir
On most backends this refers to the same directory as remote:path/to/dir and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading / will refer to your "home" directory and paths with a leading / will refer to the root.

:backend:path/to/dir
This is an advanced form for creating remotes on the fly. backend should be the name or prefix of a backend (the type in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).

Quoting and the shell

To stop the shell interpreting special characters in file names you can wrap them in single quotes, eg

rclone copy 'Important files?' remote:backup

If you want to send a ' character you will need to use ", eg
-rclone copy "O'Reilly Reviews" remote:backupWindows
", eg
-rclone copy "E:\folder name\folder name\folder name" remote:backuprclone copy E:\ remote:backupCopying files or directories with
rclone uses : to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a : up to the first / so if you need to act on a file or directory like this then use the full path starting with a /, or use ./ as a current directory prefix.

So to sync a directory called sync:me to a remote called remote: use

rclone sync ./sync:me remote:path

or

rclone sync /full/path/to/sync:me remote:path

Server Side Copy
rclone copy s3:oldbucket s3:newbucket

This will copy the contents of oldbucket to newbucket without downloading and re-uploading.

Server side copies are used with sync and copy and will be identified in the log when using the -v flag. The move command may also use them if the remote doesn't support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.

This can be used when scripting the backup function, eg

rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup

Options
Rclone has a number of options to control its behaviour. Options that take parameters can be passed as --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.

Options which use SIZE use kByte by default. However, a suffix of b for bytes, k for kBytes, M for MBytes, G for GBytes, T for TBytes and P for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
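To illustrate the boolean parsing rule with a hypothetical transfer:

rclone copy --check-first=false /src remote:dst
rclone copy --check-first /src remote:dst

are both valid, whereas

rclone copy --check-first false /src remote:dst

is not - false would be parsed as an extra path argument.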
--backup-dir=DIR
When using sync, copy or move any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.

If --suffix is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.

For example

rclone sync /path/to/local remote:current --backup-dir remote:old

will sync /path/to/local to remote:current, but any files which would have been updated or deleted will be stored in remote:old.

If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's date.

See --compare-dest and --copy-dest.
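For example, a script might date-stamp the backup directory like this (a sketch; the date format is only an example):

rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)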
--bind string

--bwlimit=BANDWIDTH_SPEC
The default is 0 which means to not limit bandwidth.

For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M

It is also possible to specify a "timetable" of limits, formatted as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH... where: WEEKDAY is an optional element. It can be written as a whole word or only using the first 3 characters. HH:MM is an hour from 00:00 to 23:59.

--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"

--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"

--bwlimit "Mon-00:00,512 Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"

Note that the units are Bytes/s, not Bits/s: a 5 Mbit/s connection is 5/8 = 0.625 MByte/s, so you would use a --bwlimit 0.625M parameter for rclone.

On Unix systems the bandwidth limiter can be toggled by sending a SIGUSR2 signal to rclone. This allows you to remove the limit from a long running rclone transfer and to restore it back to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:

kill -SIGUSR2 $(pidof rclone)
If rclone is running with the remote control enabled then you can change the bwlimit dynamically:

rclone rc core/bwlimit rate=1M

--buffer-size=SIZE
Use this sized buffer to speed up file transfers. Each --transfer will use this much memory for buffering.

When using mount or cmount each open file descriptor will use this much memory for buffering. See the mount documentation for more details.

Set to 0 to disable the buffering for the minimum memory usage.

--check-first
If this flag is set then in a sync, copy or move, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.

Using this flag can use more memory as it effectively sets --max-backlog to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.

--checkers=N
-c, --checksum
For example, rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

--compare-dest=DIR
When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.

See --copy-dest and --backup-dir.

--config=CONFIG_FILE
Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version). If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf.

If there is a file rclone.conf in the same directory as the rclone executable it will be preferred. This file must be created manually for Rclone to use it, it will never be created automatically.

If you run rclone config file you will see where the default location is for you.

Use this flag to override the config location, eg rclone --config=".myconfig" .config.

--contimeout=TIME
Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m. It is 1m by default.

--copy-dest=DIR
When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup.

See --compare-dest and --backup-dir.

--dedupe-mode MODE
Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.

--disable FEATURE,FEATURE,...
This disables a comma separated list of optional features. For example to disable server side move and server side copy use:

--disable move,copy

To see a list of which features can be disabled use:

--disable help

-n, --dry-run
Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. This is useful when setting up the sync command which deletes files in the destination.

--expect-continue-timeout=TIME

The default is 1s. Set to 0 to disable.

--error-on-no-transfer
--header
Add an HTTP header for all transactions. The flag can be repeated to add multiple headers. If you want to add headers only for uploads use --header-upload and if you want to add headers only for downloads use --header-download.

This flag is supported for all HTTP based backends even those not supported by --header-upload and --header-download so may be used as a workaround for those with care.
rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"

--header-download
rclone sync s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"

--header-upload
rclone sync ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"

--ignore-case-sync
--ignore-checksum

--ignore-existing
--ignore-size
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum is set then it only checks the checksum.

-I, --ignore-times
Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination. Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum).

--immutable
Treat source and destination files as immutable and disallow modification. With this option set, files will be created or deleted as necessary, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.

Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.
-–log-file=FILE
--v flag. See the Logging section for more info.logrotate program to manage rclone’s logs, then you should use the copytruncate option as rclone doesn’t have a signal to rotate logs.–log-format LIST
-date, time, microseconds, longfile, shortfile, UTC. The default is “date,time”.–log-level LEVEL
+--leave-root
+--log-file=FILE
+-v flag. See the Logging section for more info.logrotate program to manage rclone's logs, then you should use the copytruncate option as rclone doesn't have a signal to rotate logs.--log-format LIST
+date, time, microseconds, longfile, shortfile, UTC. The default is "date,time".--log-level LEVEL
This sets the log level for rclone. The default log level is NOTICE.

DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.

NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.

ERROR is equivalent to -q. It only outputs error messages.
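For example, to capture a full debug log in a file while running a sync (a sketch):

rclone sync /src remote:dst --log-level DEBUG --log-file rclone.log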
--use-json-log
--low-level-retries NUMBER
Low level retries are shown in the log with the -v flag.

If you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries flag) quicker. Disable low level retries with --low-level-retries 1.

--max-backlog=N
This is the maximum allowable backlog of files in a sync/copy/move queue. Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make --order-by work more accurately.

--max-delete=N
--max-depth=N
This modifies the recursion depth for all the commands except purge.

So if you do rclone --max-depth 1 ls remote:path you will see only the files in the top level directory. Using --max-depth 2 means you will see all the files in first two directory levels and so on.

For historical reasons the lsd command defaults to using a --max-depth of 1 - you can override this with the command line flag.

You can use this command to disable recursion (with --max-depth 1).

Note that if you use this with sync and --delete-excluded the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run if you are not sure what will happen.

--max-duration=TIME
--max-transfer=SIZE
--cutoff-mode=hard|soft|cautious
This modifies the behavior of --max-transfer. Defaults to --cutoff-mode=hard.

Specifying --cutoff-mode=hard will stop transferring immediately when Rclone reaches the limit.

Specifying --cutoff-mode=soft will stop starting new transfers when Rclone reaches the limit.

Specifying --cutoff-mode=cautious will try to prevent Rclone from reaching the limit.

--modify-window=TIME
When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent. The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.

--multi-thread-cutoff=SIZE
When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).

Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows both of which takes no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.

The number of threads used to download is controlled by --multi-thread-streams. Use -vv if you wish to see info about the threads.

This will work with the sync/copy/move commands and friends copyto/moveto. Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

Note that multi thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams is set explicitly.

On Windows, use --local-no-sparse to disable sparse files (which may cause long delays at the start of downloads) or disable multi-thread downloads with --multi-thread-streams 0

--multi-thread-streams=N
When using multi thread downloads (see above --multi-thread-cutoff) this sets the maximum number of streams to use. Set to 0 to disable multi thread downloads (Default 4).

Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the --multi-thread-cutoff and rounds up, up to the maximum set with --multi-thread-streams.

So if --multi-thread-cutoff 250MB and --multi-thread-streams 4 are in effect (the defaults):

0MB..250MB files will be downloaded with 1 stream
250MB..500MB files will be downloaded with 2 streams
500MB..750MB files will be downloaded with 3 streams
750MB+ files will be downloaded with 4 streams

--no-check-dest
--no-check-dest can be used with move or copy and it causes rclone not to check the destination at all when copying files.
Using --retries 1 is recommended otherwise you'll transfer everything again on a retry.

--no-gzip-encoding

Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.

--no-traverse
The --no-traverse flag controls whether the destination file system is traversed when using the copy or move commands. --no-traverse is not compatible with sync and will be ignored if you supply it with sync.

If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse.

--no-unicode-normalization

If this flag is set, rclone will not apply unicode normalization to file names; they will be treated as unique characters.

--no-update-modtime
--order-by string

The --order-by flag controls the order in which files in the backlog are processed in rclone sync, rclone copy and rclone move.
The --order-by flag does not do a separate pass over the data. This means that it may transfer some files out of the order specified if

there are no files in the backlog or the source has not been fully scanned yet
there are more than --max-backlog files in the backlog

Rclone will do its best to transfer the best file it has so in practice this should not cause a problem. Think of --order-by as being more of a best efforts flag rather than a perfect ordering.

--password-command SpaceSepList
This flag supplies a program which should supply the config password when run. This is an alternative to rclone prompting for the password or setting the RCLONE_CONFIG_PASS variable.

The argument to this should be a command with a space separated list of arguments. If one of the arguments has a space in then enclose it in ", if you want a literal " in an argument then enclose the argument in " and double the ". See CSV encoding for more info.

-P, --progress
This flag makes rclone update the stats in a static block in the terminal, providing a realtime overview of the transfer.

Normally this is updated every 500ms but this period can be overridden with the --stats flag.

This can be used with the --stats-one-line flag for a simpler display.

Note: On Windows until this bug is fixed all non-ASCII characters will be replaced with . when --progress is in use.

-q, --quiet

--retries int
Retry the entire sync if it fails this many times (default 3). Disable retries with --retries 1.

--retries-sleep=TIME
This sets the interval to sleep between each retry specified by --retries. The default is 0. Use 0 to disable.

--size-only
--stats=TIME
Commands which transfer data (sync, copy, copyto, move, moveto) will print data transfer stats at regular intervals to show their progress.

This sets the interval. The default is 1m. Use 0 to disable.

If you set the stats interval then all commands can show stats. This can be useful when running other commands, check or mount for example.

Stats are logged at INFO level by default which means they won't show at default log level NOTICE. Use --stats-log-level NOTICE or -v to make them show. See the Logging section for more info on log levels.

--stats-file-name-length integer
By default, the --stats output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40. Use --stats-file-name-length 0 to disable any truncation of file names printed by stats.

--stats-log-level string

Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or ERROR. The default is INFO. This means at the default level of logging which is NOTICE the stats won't show - if you want them to then use --stats-log-level NOTICE. See the Logging section for more info on log levels.

--stats-one-line

--stats-one-line-date
This enables --stats-one-line and prepends the display with a date string. The default date format is 2006/01/02 15:04:05 -

--stats-one-line-date-format
--stats-unit=bits|bytes
By default, data transfer rates will be printed in bytes/second. This option allows the data rate to be printed in bits/second instead. The default is bytes.

--suffix=SUFFIX
When using sync, copy or move any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.

This can be used with --backup-dir. See --backup-dir for more info.

For example

rclone sync /path/to/local/file remote:current --suffix .bak

will sync /path/to/local to remote:current, but any files which would have been updated or deleted will have .bak added.

--suffix-keep-extension
When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.

So let's say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.

--syslog
On capable OSes (not Windows or Plan9) send all log output to syslog. This can be useful for running rclone in a script or with rclone mount.

--syslog-facility string
If using --syslog this sets the syslog facility (eg KERN, USER). See man syslog for a list of possible facilities. The default facility is DAEMON.

--tpslimit float
Limit HTTP transactions per second to this. The default is 0 which means no limit.

For example to limit rclone to 10 HTTP transactions per second use --tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.

This can be very useful for rclone mount to control the behaviour of applications using it.

See also --tpslimit-burst.

--tpslimit-burst int
Max burst of transactions for --tpslimit (default 1).

Normally --tpslimit will do exactly the number of transaction per second specified. However if you supply --tpslimit-burst then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.

For example if you provide --tpslimit-burst 10 then if rclone has been idle for more than 10*--tpslimit then it can do 10 transactions very quickly before they are limited again.

This may be used to increase performance of --tpslimit without changing the long term average number of transactions per second.

--track-renames

If you use this flag, rclone will keep track of file renames during sync operations and perform renaming server-side. Note: Encrypted destinations are not currently supported by --track-renames.

Note that --track-renames is incompatible with --no-traverse and that it uses extra memory to keep track of all the rename candidates.

Note also that --track-renames is incompatible with --delete-before and will select --delete-after instead of --delete-during.

--track-renames-strategy (hash,modtime)
This option changes the matching criteria for --track-renames to match by any combination of modtime, hash, size. Matching by size is always enabled no matter what option is selected here. This also means that it enables --track-renames support for encrypted destinations. If nothing is specified, the default option is matching by hashes.

--delete-(before,during,after)
This option allows you to specify when files on your destination are deleted when you sync folders.

Specifying the value --delete-before will delete all files present on the destination, but not on the source before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.

Specifying --delete-during will delete files while checking and uploading files. This is the fastest option and uses the least memory.

Specifying --delete-after (the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors.
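For example, to be explicit about using the default, safest variant (a sketch):

rclone sync /src remote:dst --delete-after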
--fast-list
When doing anything which involves a directory listing (eg sync, copy, ls - in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.

However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. If you use the --fast-list flag then rclone will use this method for listing directories. This will have the following consequences for the listing:

It will use fewer transactions (important if you pay for them)
It will use more memory. Rclone has to load the whole listing into memory.
It may be faster because it uses fewer transactions
It may be slower because it can't be parallelized

rclone should always give identical results with and without --fast-list.

If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.

If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.

--timeout=TIME
This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected. The default is 5m. Set to 0 to disable.

--transfers=N
-u, --update
This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.

This can be useful when transferring to a remote which doesn't support mod times directly (or when using --use-server-modtime to avoid extra API calls) as it is more accurate than a --size-only check and faster than using --checksum. If --checksum is set then rclone will update the destination if the checksums differ too.

On remotes which don't support mod time directly (or when using --use-server-modtime) the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.

--use-mmap
If this flag is set then rclone will use anonymous memory allocated by mmap for its transfer buffers (size controlled by --buffer-size). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.

--use-server-modtime
Some object-store backends (e.g, Swift, S3) do not preserve file modification times (modtime); on these, rclone stores the original modtime as metadata on the object and normally makes an extra API call to read it. Use this flag to rely instead on the server's modified time.

In cases such as a local to remote sync using --update, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.

Using this flag on a sync operation without also using --update would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.

-v, -vv, --verbose
With -v rclone will tell you about each file that is transferred and a small number of significant events.

With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.

-V, --version
SSL/TLS options
--ca-cert string
--client-cert string
This loads the PEM encoded client side certificate. The --client-key flag is required too when using this.

--client-key string
This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with --client-cert.

--no-check-certificate=true/false

--no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.

The default is false.

Configuration Encryption
Your configuration file contains sensitive information, so you should keep your .rclone.conf file in a secure location. You can additionally encrypt the configuration with a password; to set this up, execute rclone config.

>rclone config
Current remotes:
One useful example of the --password-command method is using the passwordstore application to retrieve the password:

export RCLONE_PASSWORD_COMMAND="pass rclone/config"

If the passwordstore password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the passwordstore system, and is never embedded in the clear in scripts, nor available for examination using the standard commands available. It is quite possible with long running rclone sessions for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.

If you are running rclone inside a script, unless you are using the --password-command method, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password, and --password-command has not been supplied.

Developer options
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with the remote name, eg --drive-test-option - see the docs for the remote in question.

--cpuprofile=FILE
Write CPU profile to file. This can be analysed with go tool pprof.

--dump flag,flag,flag

The --dump flag takes a comma separated list of flags to dump info about.

Note that some headers including Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.

--dump headers
Dump HTTP headers with Authorization: lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.

Use --dump auth if you do want the Authorization: headers.

--dump bodies
--dump requests
Like --dump bodies but dumps the request bodies and the response headers. Useful for debugging download problems.

--dump responses
Like --dump bodies but dumps the response bodies and the request headers. Useful for debugging upload problems.

--dump auth
Dump HTTP headers including Authorization: headers - use --dump headers to dump without Authorization: headers. Can be very verbose. Useful for debugging only.

--dump filters
--dump goroutines
--dump openfiles

This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.

--memprofile=FILE
Write memory profile to file. This can be analysed with go tool pprof.

Filtering
4 - File not found
5 - Temporary error (one that more retries might fix) (Retry errors)
6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
8 - Transfer exceeded - limit set by --max-transfer reached
9 - Operation successful, but no files transferred

Environment Variables
HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).
HTTPS_PROXY takes precedence over HTTP_PROXY for https requests.
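For example, to route rclone through a proxy for a single session (a sketch; the proxy address is hypothetical):

export HTTPS_PROXY=http://proxy.example.com:3128
rclone lsd remote: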
Configuring rclone on a remote / headless machine
The filters are applied for the copy, sync, move, ls, lsl, md5sum, sha1sum, size, delete and check operations. Note that purge does not obey the filters.

Each path as it passes through rclone is matched against the include and exclude rules like --include, --exclude, --include-from, --exclude-from, --filter, or --filter-from. The simplest way to try them out is using the ls command, or --dry-run together with -v. --filter-from, --exclude-from, --include-from, --files-from, --files-from-raw understand - as a file name to mean read from standard input.

Patterns
If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn't start with / then it is matched starting at the end of the path, but it will only match a complete path element:

file.jpg - matches "file.jpg"
- matches "directory/file.jpg"
- doesn't match "afile.jpg"
--ignore-case
potato - matches "potato"
 - matches "POTATO"

Note that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir

Directories
If the pattern ends with a / (eg /a/) then it will only match directories.

Differences between rsync and rclone patterns
Rclone implements bash style {a,b,c} glob matching which rsync doesn't.

Rclone always does a wildcard match so \ must always escape a \.

How the rules are used
--include - Include files matching pattern

Add a single include rule with --include.

This flag can be repeated, eg --include *.{png,jpg} to include all png and jpg files in the backup and no others.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--include-from - Read include patterns from file

Add include rules from a file, eg --include-from include-file.txt. This will sync all jpg, png files and file2.avi.

This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.

--filter - Add a file-filtering rule

This can be used to add a single include or exclude rule. Include rules start with + and exclude rules start with -. A special rule called ! can be used to clear the existing rules.

--files-from - Read list of source-file names

This reads a list of file names from the file passed in and only these files are transferred, effectively using the files in --files-from as a set of filters. Rclone will not error if any of the files are missing.

If you use --no-traverse as well as --files-from then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files.

Paths within the --files-from file will be interpreted as starting with the root specified in the command. Leading / characters are ignored. See --files-from-raw if you need the input to be processed in a raw manner.

For example, suppose you had files-from.txt with this content:

# comment
file1.jpg
subdir/file2.jpg
You could then use it like this:

rclone copy --files-from files-from.txt /home/me/pics remote:pics

This will transfer these files only (if they exist)

/home/me/pics/file1.jpg → remote:pics/file1.jpg
/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:

/home/user1/important
/home/user1/dir/file
/home/user2/stuff

To copy these you'd find a common subdirectory - in this case /home and put the remaining files in files-from.txt with or without leading /, eg
user1/important
user1/dir/file
user2/stuff

You could then copy these to a remote like this

rclone copy --files-from files-from.txt /home remote:backup

You could of course choose / as the root too, in which case your files-from.txt would use full paths and you would transfer it like this

rclone copy --files-from files-from.txt / remote:backup

In this case the transferred files will arrive with paths like

/home/user2/stuff → remote:backup/home/user2/stuff

--files-from-raw - Read list of source-file names without any processing

This option is the same as --files-from with the only difference being that the input is read in a raw manner. This means that lines with leading/trailing whitespace and lines starting with ; or # are read without any processing. rclone lsf has a compatible format that can be used to export file lists from remotes, which can then be used as an input to --files-from-raw.
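As an illustration, one way to use this (a sketch) is to export a listing with rclone lsf and feed it straight back in:

rclone lsf --files-only -R remote:path > files.txt
rclone copy --files-from-raw files.txt remote:path /tmp/restore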
--min-size - Don't transfer any file smaller than this

This option controls the minimum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --min-size 50k means no files smaller than 50kByte will be transferred.
--max-size - Don't transfer any file larger than this

This option controls the maximum size file which will be transferred. This defaults to kBytes but a suffix of k, M, or G can be used.

For example --max-size 1G means no files larger than 1GByte will be transferred.
--max-age - Don't transfer any file older than this

This option controls the maximum age of files to transfer. Give in seconds or with a suffix, eg:

ms - Milliseconds
s - Seconds
m - Minutes
h - Hours
d - Days
w - Weeks
M - Months
y - Years

For example --max-age 2d means no files older than 2 days will be transferred.
--min-age - Don't transfer any file younger than this

This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age for list of suffixes).

For example --min-age 2d means no files younger than 2 days will be transferred.
dir1/dir2/dir3/file3
dir1/dir2/dir3/.ignore
--delete-excluded - Delete files on dest excluded from sync

--exclude-if-present - Exclude directories if filename is present

With the example above, you can exclude dir3 from sync by running the following command:

rclone sync --exclude-if-present .ignore dir1 remote:backup

Currently only one filename is supported, i.e. --exclude-if-present should not be used multiple times.

GUI (Experimental)
How it works
When you run the rclone rcd --rc-web-gui command, this is what happens:
If rclone is run with the --rc flag then it starts an http server which can be used to remote control rclone using its API.
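For example, a minimal way to start the remote control server with authentication (a sketch; the credentials are placeholders):

rclone rcd --rc-user=myuser --rc-pass=mypass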
Supported parameters

--rc

--rc-addr=IP

--rc-cert=KEY

--rc-client-ca=PATH

--rc-htpasswd=PATH

--rc-key=PATH

--rc-max-header-bytes=VALUE

--rc-user=VALUE

--rc-pass=VALUE

--rc-realm=VALUE

--rc-server-read-timeout=DURATION

--rc-server-write-timeout=DURATION

--rc-serve

--rc-files /path/to/directory
If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style.

--rc-enable-metrics

Enable OpenMetrics/Prometheus compatible endpoint at /metrics.

--rc-web-gui
--rc-allow-origin

--rc-web-fetch-url

--rc-web-gui-update

--rc-web-gui-force-update

--rc-web-gui-no-open-browser

--rc-job-expire-duration=DURATION

--rc-job-expire-interval=DURATION

--rc-no-auth
By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list is denied as it involves creating a remote, as is sync/copy.

If this flag is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user and --rc-pass and use these credentials in the request.
}
}
}
rclone backend noop . -o echo=yes -o blue path1 path2

cache/expire: Purge a remote from cache
rclone rc cache/expire remote=/ withData=true
cache/fetch: Fetch file chunks
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye

cache/stats: Get cache stats
rclone rc core/bwlimit rate=1M

{
"bytesPerSecond": 1048576,
"rate": "1M"
}
core/gc: Runs a garbage collection.

core/group-list: Returns list of stats.
core/stats-delete: Delete stats group.
core/version: Shows the current version of rclone and the go runtime.
mount/mount: Create a new mount point
rclone rc mount/types

mount/unmount: Unmount all active mounts
operations/about: Return the space used on the remote
operations/cleanup: Remove trashed files in the remote or path
operations/copyfile: Copy a file from source remote to destination remote
operations/copyurl: Copy the URL to the object
operations/delete: Remove files in the path
operations/deletefile: Remove the single file pointed to
operations/fsinfo: Return information about the remote
{
operations/list: List the given remote and path in JSON format
operations/mkdir: Make a destination directory or container
operations/movefile: Move a file from source remote to destination remote
operations/publiclink: Create or retrieve a public link to the given file or folder.
operations/purge: Remove a directory or container and all of its contents
operations/rmdir: Remove an empty directory or container
operations/rmdirs: Remove all the empty directories in the path
operations/size: Count the number of bytes and files in remote
sync/copy: copy a directory from source remote to destination remote
sync/move: move a directory from source remote to destination remote
sync/sync: sync a directory from source remote to destination remote
rclone rc vfs/refresh
rclone rc vfs/refresh dir=home/junk dir2=data/misc

Accessing the remote control via HTTP
The examples below use curl.

Error returns
CORS
Using POST with URL parameters only
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'

Note that curl doesn't return errors to the shell unless you use the -f option.

$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
If you use the --rc flag this will also enable the use of the go profiling tools on the same port.

Debugging memory use
To profile rclone's memory use you can run:

go tool pprof -web http://localhost:5572/debug/pprof/heap

You can also use the -text flag to produce a textual summary. To see mutex contention:

go tool pprof http://localhost:5572/debug/pprof/mutex

Overview of cloud storage systems
The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command. (For SFTP, hashes are only supported when md5sum or sha1sum as well as echo are in the remote's PATH.)

ModTime
If a cloud storage system does not support setting modification times then only the size will be checked by default, though a hash can be checked with the --checksum flag.

Case Insensitive
If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.
Duplicate files
If a cloud storage system allows duplicate files then it can have two objects with the same name. This confuses rclone greatly when syncing - use the rclone dedupe command to rename or remove duplicates.

Restricted characters are transparently replaced on upload, and the reverse transformation is applied when downloading a file or parsing rclone arguments. For example, when uploading a file named my file?.txt to Onedrive, it will be displayed as my file?.txt on the console, but stored as my file？.txt (the ? gets replaced by the similar looking ？ character) on Onedrive. The reverse transformation allows reading a file unusual/name.txt from Google Drive, by passing the name unusual／name.txt (the / needs to be replaced by the similar looking ／ character) on the command line.

Default restricted characters
Replacements use the ‛ character to avoid ambiguous file names (e.g. a file named ␀.txt would be shown as ‛␀.txt).
For example, the invalid byte 0xFE will be encoded as ‛FE.

Encoding option
This can be specified using the --backend-encoding flag where backend is the name of the backend, or as a config parameter encoding (you'll need to select the Advanced config in rclone config to see it).

The default can be incorrect in some scenarios, for example if you have a Windows file system with characters such as * and ? that you want to remain as those characters on the remote rather than being translated to ＊ and ？.

The --backend-encoding flags allow you to change that. You can disable the encoding completely with --backend-encoding None or set encoding = None in the config file.
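As an illustration, expressed as a config file parameter this might look like the following (a sketch with a hypothetical remote name):

[myftp]
type = ftp
encoding = Slash,Del,Ctl,RightSpace,Dot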
To take a specific example, the FTP backend's default encoding is

--ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"

However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You would add the Windows set, which is

Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

to the existing ones, giving:

Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del,RightSpace

This can be specified using the --ftp-encoding flag or using an encoding parameter in the config file.

Or let's say you have a Windows server but you want to preserve * and ?, you would then have this as the encoding (the Windows encoding minus Asterisk and Question):

Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot

This can be specified using the --local-encoding flag or using an encoding parameter in the config file.

MIME Type
Purge
Copy
Used when copying an object to and from the same remote. This is known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy or rclone move if the remote doesn't support Move directly.

If the server doesn't support Copy directly then for copy operations the file is downloaded then re-uploaded.

Move
Used when moving/renaming an object on the same remote. This is used in rclone move if the server doesn't support DirMove.

If the server isn't capable of Move then rclone simulates it with Copy then delete. If the server doesn't support Copy then rclone will download the file and re-upload it.

DirMove
This is used by rclone move to move a directory if possible. If it isn't then it will use Move on each file (which falls back to Copy then download and upload - see Move section).

CleanUp
This is used for emptying the trash for a remote by rclone cleanup.

If the server can't do CleanUp then rclone cleanup will return an error.

ListR
The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list flag to work. See the rclone docs for more details.

StreamUpload
Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat.

LinkSharing
About
This is used to fetch quota information from the remote and is also used by rclone mount.

If the server can't do About then rclone about will return an error.

EmptyDir
Global Flags
--use-json-log Use json log format.
--use-mmap Use mmap allocator (see docs).
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.52.3")
-v, --verbose count Print lots more stuff (repeat for more)
Backend Flags
' 0x27 ＇
Standard Options
--fichier-api-key
Advanced Options
--fichier-shared-folder

--fichier-encoding
rclone copy /home/source remote:source

Standard Options
--alias-remote
Amazon Drive
Status
Setup
The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.

You can either use your own client_id and client_secret with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id, client_secret, auth_url and token_url.

Note also that if you are not using the standard auth_url and token_url (ie you filled in something for those), then if setting up on a remote machine you can only use the copying the config method of configuration - rclone authorize will not work.

Here is an example of how to make a remote called remote. First run:

rclone config

To copy a local directory to an Amazon Drive directory called backup

rclone copy /home/source remote:backup

Modified time and MD5SUMs
Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing. It does store MD5SUMs so for a more accurate sync, you can use the --checksum flag.

Restricted filename characters
Deleting files
Using with non-.com Amazon accounts

Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.

Standard Options
--acd-client-id

--acd-client-secret
Advanced Options
--acd-auth-url

--acd-token-url
--acd-checkpoint

--acd-upload-wait-per-gb

--acd-templink-threshold

--acd-encoding
Limitations
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see the --retries flag) which should hopefully work around this problem.

--fast-list
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

--update and --use-server-modtime
Using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.

Modified time
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch accurate to 1 ns.

Restricted filename characters
This uses the standard AWS credentials file (typically ~/.aws/credentials on unix based systems) and the "default" profile, to change set these environment variables:
AWS_SHARED_CREDENTIALS_FILE to control which file.
AWS_PROFILE to control which profile to use.
The example policy assumes that a user called USER_NAME has been created. Note that both resource ARNs are required, as they are used by rclone sync.

Key Management System (KMS)
If you are using server side encryption with KMS then you will find you can't transfer small objects as checksums will fail. A work-around is to use the --ignore-checksum flag.

Glacier and Glacier Deep Archive
Standard Options
--s3-provider

--s3-env-auth

--s3-access-key-id

--s3-secret-access-key

--s3-region

--s3-region

--s3-endpoint

--s3-endpoint

--s3-endpoint

--s3-endpoint

--s3-endpoint

--s3-location-constraint

--s3-location-constraint

--s3-location-constraint

--s3-acl

--s3-server-side-encryption

--s3-sse-kms-key-id

--s3-storage-class

--s3-storage-class
Advanced Options
--s3-bucket-acl

--s3-sse-customer-algorithm

--s3-sse-customer-key

--s3-sse-customer-key-md5

--s3-upload-cutoff
--s3-chunk-size

--s3-copy-cutoff

--s3-disable-checksum

--s3-session-token

--s3-upload-concurrency

--s3-force-path-style

--s3-v2-auth

--s3-use-accelerate-endpoint
--s3-leave-parts-on-error

--s3-list-chunk

--s3-encoding
--s3-memory-pool-flush-time

--s3-memory-pool-use-mmap
DigitalOcean Spaces
To connect to DigitalOcean Spaces you will need an access key and secret key. These will be needed when prompted by rclone config for your access_key_id and secret_access_key.

When prompted for a region or location_constraint, press enter to use the default value. The region must be included in the endpoint setting (e.g. nyc3.digitaloceanspaces.com). The default values can be used for other settings.

Going through the whole process of creating a new remote by running rclone config, each prompt should be answered as shown below:
env_auth> 1
@@ -8176,7 +8180,7 @@ rclone copy /path/to/files spaces:my-new-space name> <YOUR NAME>
-
Choose a number from below, or type in your own value
1 / Alias for an existing remote
\ "tor01-flex"
location_constraint>1
Canned ACL used when creating buckets and/or storing objects in S3.
For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
\ "authenticated-read"
acl> 1
[xxx]
type = s3
y/e/d> y

Netease NOS
For Netease NOS configure as per the configurator rclone config setting the provider Netease. This will automatically set force_path_style = false which is necessary for it to run properly.

Backblaze B2
B2 is Backblaze's cloud storage system.

Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
rclone config

Application Keys
B2 supports multiple Application Keys for different access permission to B2 Buckets. You can use these with rclone by adding the applicationKeyId as the account and the Application Key itself as the key.

Note that you must put the applicationKeyId as the account – you can't use the master Account ID. If you try then B2 will return 401 errors.

--fast-list
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.

Modified time
The modified time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.

SHA1 checksums
Large files (bigger than the limit in --b2-upload-cutoff) which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.

Sources which don't support SHA1, in particular crypt, will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).

Files sized below --b2-upload-cutoff will always have an SHA1 regardless of the source.

Transfers
Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests the optimum setting is about --transfers 32 though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4 is definitely too low for Backblaze B2 though.

Note that uploading big files will use a RAM buffer per chunk. There can be at most --transfers of these in use at any moment, so this sets the upper limit on the memory used.
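For example, a B2 upload tuned along these lines might look like this (a sketch; the bucket name is hypothetical):

rclone copy /path/to/files b2:my-bucket --transfers 32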
Versions

When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. If you wish to remove files permanently instead of hiding them, use the --b2-hard-delete flag.

Old versions of files, where available, are visible using the --b2-versions flag. Note that --b2-versions does not work with crypt at the moment #1627. Using --backup-dir with rclone is the recommended way of working around this.

If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff.

Note that cleanup will remove partially uploaded files from the bucket if they are more than a day old.

When you purge a bucket, the current and the old versions will be deleted then the bucket will be deleted.

$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
$ rclone -q --b2-versions ls b2:cleanup-test
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt

Note that when using --b2-versions no file write operations are permitted, so you can't upload files or delete them.

B2 and rclone link
./rclone link B2:bucket/path/to/file.txt
Standard Options
--b2-account

--b2-key

--b2-hard-delete
Advanced Options
--b2-endpoint

--b2-test-mode

--b2-versions

--b2-upload-cutoff

--b2-chunk-size

--b2-disable-checksum
--b2-download-url
--b2-download-auth-duration
--b2-encoding
rclone copy /home/source remote:backup

Using rclone with an Enterprise account with SSO
Invalid refresh token
Transfers
For large files rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing --transfers will increase memory use.

Deleting files
So if the folder you want rclone to use has a URL which looks like https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use 11xxxxxxxxx8 as the root_folder_id in the config.

Standard Options
--box-client-id

--box-client-secret

--box-box-config-file

--box-box-sub-type
Advanced Options
--box-root-folder-id

--box-upload-cutoff

--box-commit-retries

--box-encoding
Limitations
Box file names can't have the \ character in. rclone maps this to and from an identical looking unicode equivalent ＼.

Cache (BETA)
The cache remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount.

Status
Setup
To get started you just need to have an existing remote which can then be configured with cache. We will call this one test-cache. First run:

rclone config

When --cache-tmp-wait-time passes and the file is next in line, rclone move is used to move the file to the cloud provider. If the file is read through cache when it's actually deleted from the temporary path then cache will simply swap the source to the cloud provider without interrupting the reading (small blip can happen though).

The cache database can be wiped by starting rclone with the --cache-db-purge flag.

Write Support
When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.

The ip-with-dots-replaced part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.

To get the server-hash part, the easiest way is to visit the Plex resources API page, which lists all the available Plex servers for your account with at least one .plex.direct link for each. Copy one URL and replace the IP address with the desired address. This can be used as the plex_url value.

Known issues
Mount and --dir-cache-time

--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache backend, it will manage its own entries based on the configured time.

To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set --dir-cache-time to a lower time than --cache-info-age. Default values are already configured in this way.

Windows support - Experimental
mount functionality that still require some investigations. It should be considered as experimental thus far as fixes come in for this OS.Risk of throttling
--cache-info-age) - while writes aren’t yet optimised, you can still write through cache which gives you the advantage of adding the file in the cache at the same time if configured to do so.--cache-info-age) - while writes aren't yet optimised, you can still write through cache which gives you the advantage of adding the file in the cache at the same time if configured to do so.
-cache and crypt
crypt remote. crypt uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.absolute remote paths
cache can not differentiate between relative and absolute paths for the wrapped remote. Any path given in the remote config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading / character./ changes the effective directory, e.g. in the sftp backend paths starting with a / are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin and sftp:/bin will share the same cache folder, even if they represent a different directory on the SSH server.Cache and Remote Control (–rc)
+/ changes the effective directory, e.g. in the sftp backend paths starting with a / are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin and sftp:/bin will share the same cache folder, even if they represent a different directory on the SSH server.Cache and Remote Control (--rc)
--rc mode in rclone and can be remote controlled through the following end points: By default, the listener is disabled if you do not add the flag.rc cache/expire
Standard Options
–cache-remote
---cache-remote
+
-–cache-plex-url
+--cache-plex-url
-–cache-plex-username
+--cache-plex-username
-–cache-plex-password
+--cache-plex-password
@@ -9312,7 +9316,7 @@ chunk_total_size = 10G
-–cache-chunk-size
+--cache-chunk-size
@@ -9322,21 +9326,21 @@ chunk_total_size = 10G
-
-
–cache-info-age
+--cache-info-age
-
-
–cache-chunk-total-size
+--cache-chunk-total-size
@@ -9369,15 +9373,15 @@ chunk_total_size = 10G
-
-
Advanced Options
–cache-plex-token
+--cache-plex-token
-–cache-plex-insecure
+--cache-plex-insecure
-–cache-db-path
+--cache-db-path
-–cache-chunk-path
+--cache-chunk-path
-–cache-db-purge
+--cache-db-purge
-–cache-chunk-clean-interval
---cache-chunk-clean-interval
+
-–cache-read-retries
+--cache-read-retries
-–cache-workers
+--cache-workers
–cache-chunk-no-memory
+--cache-chunk-no-memory
-–cache-rps
+--cache-rps
@@ -9478,7 +9482,7 @@ chunk_total_size = 10G
-–cache-writes
+--cache-writes
@@ -9487,7 +9491,7 @@ chunk_total_size = 10G
-–cache-tmp-upload-path
+--cache-tmp-upload-path
–cache-tmp-wait-time
+--cache-tmp-wait-time
–cache-db-wait-time
+--cache-db-wait-time
rclone backend COMMAND remote:stats
Chunker (BETA)
chunker overlay transparently splits large files into smaller chunks during upload to wrapped remote and transparently assembles them back when the file is downloaded. This allows to effectively overcome size limits imposed by storage providers.remote:path here. Note that anything inside remote:path will be chunked and anything outside won’t. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket.remote:path here. Note that anything inside remote:path will be chunked and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket.chunker using rclone config. We will call this one overlay to separate it from the remote itself.No remotes found - make a new one
n) New remote
@@ -9590,7 +9594,7 @@ y/e/d> ySpecifying the remote
: in. If you specify the remote without a : then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files then rclone will chunk stuff in that directory. If you use a remote of name then rclone will put files in a directory called name in the current directory.Chunking
-list rclone command scans a directory on wrapped remote, the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden.md5 - MD5 hashsum of composite file (if present)sha1 - SHA1 hashsum (if present)No metadata
none. In this mode chunker will scan directory for all files that follow configured chunk name format, group them by detecting chunks with the same base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially missing last chunk) than format with metadata enabled.Hashsums
none, chunker will report hashsum as UNSUPPORTED.md5all or sha1all. These two modes guarantee given hash for all files. If wrapped remote doesn’t support it, chunker will then add metadata to all files, even small. However, this can double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at expense of sidecar meta objects by setting eg. chunk_type=sha1all to force hashsums and chunk_size=1P to effectively disable chunking.md5all or sha1all. These two modes guarantee given hash for all files. If wrapped remote doesn't support it, chunker will then add metadata to all files, even small. However, this can double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at expense of sidecar meta objects by setting eg. chunk_type=sha1all to force hashsums and chunk_size=1P to effectively disable chunking.sha1quick and md5quick. If the source does not support primary hash type and the quick mode is enabled, chunker will try to fall back to the secondary type. This will save CPU and bandwidth but can result in empty hashsums at destination. Beware of consequences: the sync command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found.Modified time
none then chunker will use modification time of the first data chunk.list command but will eat up your account quota. Please note that the deletefile command deletes only active chunks of a file. As a workaround, you can use remote of the wrapped file system to see them. An easy way to get rid of hidden garbage is to copy littered directory somewhere using the chunker remote and purge the original directory. The copy command will copy only active chunks while the purge will remove everything including garbage.Caveats and Limitations
move (or copy + delete) operations, otherwise it will explicitly refuse to start. This is because it internally renames temporary chunk files to their final names when an operation completes successfully.name_format setting it adds 17 characters. Also chunker adds 7 characters of temporary suffix during operations. Many file systems limit base file name without path by 255 characters. Using rclone’s crypt remote as a base file system limits file name by 143 characters. Thus, maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change name format to eg. *.rcc## and save 10 characters (provided at most 99 chunks per file).name_format setting it adds 17 characters. Also chunker adds 7 characters of temporary suffix during operations. Many file systems limit base file name without path by 255 characters. Using rclone's crypt remote as a base file system limits file name by 143 characters. Thus, maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change name format to eg. *.rcc## and save 10 characters (provided at most 99 chunks per file).rclone config on a live remote and change the chunk name format. Beware that in result of this some files which have been treated as chunks before the change can pop up in directory listings as normal files and vice versa. The same warning holds for the chunk size. If you desperately need to change critical chunking settings, you should run data migration as described above.Standard Options
–chunker-remote
---chunker-remote
+
-–chunker-chunk-size
+--chunker-chunk-size
-–chunker-hash-type
---chunker-hash-type
+
-
-
Advanced Options
–chunker-name-format
---chunker-name-format
+
-–chunker-start-from
+--chunker-start-from
-–chunker-meta-format
---chunker-meta-format
+
-
-
-
–chunker-fail-hard
+--chunker-fail-hard
-
Transfers
--transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 64MB so increasing --transfers will increase memory use.Limitations
-Restricted filename characters
Standard Options
–sharefile-root-folder-id
+--sharefile-root-folder-id
Advanced Options
–sharefile-upload-cutoff
+--sharefile-upload-cutoff
-–sharefile-chunk-size
+--sharefile-chunk-size
–sharefile-endpoint
+--sharefile-endpoint
@@ -9969,7 +9973,7 @@ y/e/d> y
-–sharefile-encoding
+--sharefile-encoding
@@ -9981,7 +9985,7 @@ y/e/d> y
Crypt
crypt remote encrypts and decrypts another remote.remote:path in these docs. Note that anything inside remote:path will be encrypted and anything outside won’t. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket. If you just use s3: then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.remote:path in these docs. Note that anything inside remote:path will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket. If you just use s3: then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.crypt using rclone config. We will call this one secret to differentiate it from the remote.
-No remotes found - make a new one
n) New remote
@@ -10052,7 +10056,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> ySpecifying the remote
: in. If you specify the remote without a : then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files then rclone will encrypt stuff to that directory. If you use a remote of name then rclone will put files in a directory called name in the current directory.remote:path/to/dir then rclone will store encrypted files in path/to/dir on the remote. If you are using file name encryption, then when you save files to secret:subdir/subfile this will store them in the unencrypted path path/to/dir but the subdir/subpath bit will be encrypted.remote:secretbucket when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.remote:secretbucket when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.Example
-
-plaintext/
├── file0.txt
├── file1.txt
@@ -10095,7 +10099,7 @@ $ rclone -q ls secret:
8 file2.txt
9 file3.txt
10 subsubdir/file4.txt.bin extensions added to prevent the cloud provider attempting to interpret the data..bin extensions added to prevent the cloud provider attempting to interpret the data.$ rclone -q ls remote:path
54 file0.txt.bin
57 subdir/file3.txt.bin
@@ -10106,22 +10110,22 @@ $ rclone -q ls secret:
-
-Directory name encryption
Modified time and hashes
rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can’t check the checksums properly.rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly.Standard Options
–crypt-remote
---crypt-remote
+
-–crypt-filename-encryption
+--crypt-filename-encryption
-
-
-
–crypt-directory-name-encryption
+--crypt-directory-name-encryption
-
-
-
–crypt-password
+--crypt-password
@@ -10204,7 +10208,7 @@ $ rclone -q ls secret:
-–crypt-password2
+--crypt-password2
@@ -10215,7 +10219,7 @@ $ rclone -q ls secret:
Advanced Options
–crypt-show-mapping
+--crypt-show-mapping
rclone backend COMMAND remote:encode
-rclone sync will check the checksums while copyingrclone check between the encrypted remotesremote: with the encrypted version at eremote: with path remote:crypt. You would then set up the new remote remote2: and then the encrypted version eremote2: with path remote2:crypt using the same passwords as eremote:.remote: with the encrypted version at eremote: with path remote:crypt. You would then set up the new remote remote2: and then the encrypted version eremote2: with path remote2:crypt using the same passwords as eremote:.rclone sync remote:crypt remote2:cryptExamples
Name encryption
/ separated strings and these are encrypted individually.
base32 encoding as described in RFC4648. The standard encoding is modified in two ways:base32 is used rather than the more efficient base64 so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).Key derivation
-scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn’t supply a salt then rclone uses an internal one.scrypt with parameters N=16384, r=8, p=1 with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.Dropbox
remote:path/ for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.Modified time and Hashes
--size-only or --checksum flag to stop it.--size-only or --checksum flag to stop it.Restricted filename characters
@@ -10413,10 +10417,10 @@ y/e/d> y
-Standard Options
–dropbox-client-id
+--dropbox-client-id
-–dropbox-client-secret
+--dropbox-client-secret
Advanced Options
–dropbox-chunk-size
+--dropbox-chunk-size
–dropbox-impersonate
+--dropbox-impersonate
-–dropbox-encoding
+--dropbox-encoding
@@ -10462,23 +10466,24 @@ y/e/d> y
Limitations
-thumbs.db which Dropbox can’t store. There is a full list of them in the “Ignored Files” section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won’t fail.thumbs.db which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading if it attempts to upload one of those file names, but the sync won't fail.rclone purge dropbox:dir will return the error Failed to purge: There are too many files involved in this operation. As a work-around do an rclone delete dropbox:dir followed by an rclone rmdir dropbox:dir.Get your own Dropbox App ID
-
Dropbox APIFull Dropbox or App Folderrclone for examplerclone for exampleCreate AppRedirect URIs as http://localhost:53682/App key and App secret Use these values in rclone config to add a new remote or edit an existing remote.FTP
remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.rclone configanonymous as username and your email address as the password.990 so the port will likely have to be explicitly set in the config for the remote.Standard Options
–ftp-host
+--ftp-host
-
-
–ftp-user
+--ftp-user
-–ftp-port
+--ftp-port
-–ftp-pass
+--ftp-pass
@@ -10632,7 +10637,7 @@ y/e/d> y
-–ftp-tls
+--ftp-tls
Advanced Options
–ftp-concurrency
+--ftp-concurrency
-–ftp-no-check-certificate
+--ftp-no-check-certificate
-–ftp-disable-epsv
+--ftp-disable-epsv
-–ftp-encoding
+--ftp-encoding
@@ -10676,10 +10681,10 @@ y/e/d> y
Limitations
---dump-headers, --dump-bodies, --dump-auth--timeout isn’t supported (but --contimeout is).--bind isn’t supported.--dump-headers, --dump-bodies, --dump-auth--timeout isn't supported (but --contimeout is).--bind isn't supported.ftp_proxy environment variable yet.Google Cloud Storage
@@ -10818,13 +10823,13 @@ y/e/d> y
/home/local/directory to the remote bucket, deleting any excess files in the bucket.rclone sync /home/local/directory remote:bucketService Account support
-User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account’s credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.service_account_file prompt and rclone won’t use the browser based authentication flow. If you’d rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.User permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.Application Default Credentials
–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.Custom upload headers
--header-upload flag. Google Cloud Storage supports the headers as described in the working with metadata documentation--header-upload "Content-Type text/potato"--header-upload "x-goog-meta-key: value"Modified time
-Restricted filename characters
@@ -10872,10 +10877,10 @@ y/e/d> y
-Standard Options
–gcs-client-id
+--gcs-client-id
-–gcs-client-secret
+--gcs-client-secret
-–gcs-project-number
+--gcs-project-number
-–gcs-service-account-file
+--gcs-service-account-file
-–gcs-service-account-credentials
+--gcs-service-account-credentials
-–gcs-object-acl
+--gcs-object-acl
-
-
–gcs-bucket-acl
+--gcs-bucket-acl
-
-
–gcs-bucket-policy-only
+--gcs-bucket-policy-only
–gcs-location
+--gcs-location
-
-
–gcs-storage-class
+--gcs-storage-class
-
Advanced Options
–gcs-encoding
+--gcs-encoding
@@ -11226,7 +11231,7 @@ y/e/d> y
drive
drive.readonly
drive.file
@@ -11235,42 +11240,42 @@ y/e/d> y
drive.appfolder
-drive.metadata.readonly
Root folder ID
root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your drive.Folder ID of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh as the root_folder_id in the config.Service Account support
-service_account_file prompt during rclone config and rclone won’t use the browser based authentication flow. If you’d rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.service_account_file prompt during rclone config and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.Use case - Google Apps/G-suite account and individual Drive
-1. Create a service account for example.com
2. Allowing API access to example.com Google Drive
-
https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.3. Configure rclone, assuming a new install
-rclone config
@@ -11285,7 +11290,7 @@ root_folder_id> # Can be left blank
service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
y/n> # Auto config, y
4. Verify that it’s working
+4. Verify that it's working
-rclone -v --drive-impersonate foo@example.com lsf gdrive:backup–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.list calls into a single API request.'%s' in parents filters into one expression. To list the contents of directories a, b and c, the following requests will be send by the regular List function:Modified time
Restricted filename characters
-/ can also be used in names and . or .. are valid names.Revisions
--drive-use-trash=false flag, or set the equivalent environment variable.Shortcuts
@@ -11394,7 +11399,7 @@ trashed=false and 'c' in parents
-Import/Export of google documents
--drive-export-formats setting. By default the export formats are docx,xlsx,pptx,svg which are a sensible default for an editable document.--drive-export-formats pdf, or if you prefer openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp.My Spreadsheet on google docs, it will be exported as My Spreadsheet.xlsx or My Spreadsheet.pdf etc.--drive-import-formats to their associated document type. rclone will not convert any files by default, since the conversion is lossy process.Standard Options
–drive-client-id
+--drive-client-id
-–drive-client-secret
+--drive-client-secret
-–drive-scope
+--drive-scope
-
-
–drive-root-folder-id
+--drive-root-folder-id
-–drive-service-account-file
+--drive-service-account-file
Advanced Options
–drive-service-account-credentials
+--drive-service-account-credentials
-–drive-team-drive
+--drive-team-drive
-–drive-auth-owner-only
+--drive-auth-owner-only
-–drive-use-trash
+--drive-use-trash
--drive-use-trash=false to delete files permanently instead.
-–drive-skip-gdocs
+--drive-skip-gdocs
-–drive-skip-checksum-gphotos
+--drive-skip-checksum-gphotos
-–drive-shared-with-me
+--drive-shared-with-me
-–drive-trashed-only
+--drive-trashed-only
-–drive-formats
+--drive-formats
-–drive-export-formats
+--drive-export-formats
-–drive-import-formats
+--drive-import-formats
-–drive-allow-import-name-change
---drive-allow-import-name-change
+
-–drive-use-created-date
+--drive-use-created-date
-–drive-use-shared-date
+--drive-use-shared-date
-–drive-list-chunk
+--drive-list-chunk
-–drive-impersonate
+--drive-impersonate
-–drive-alternate-export
+--drive-alternate-export
-–drive-upload-cutoff
+--drive-upload-cutoff
-–drive-chunk-size
+--drive-chunk-size
–drive-acknowledge-abuse
+--drive-acknowledge-abuse
-–drive-keep-revision-forever
+--drive-keep-revision-forever
-–drive-size-as-quota
+--drive-size-as-quota
-–drive-v2-download-min-size
---drive-v2-download-min-size
+
-–drive-pacer-min-sleep
+--drive-pacer-min-sleep
-–drive-pacer-burst
+--drive-pacer-burst
-–drive-server-side-across-configs
+--drive-server-side-across-configs
-–drive-disable-http2
+--drive-disable-http2
–drive-stop-on-upload-limit
+--drive-stop-on-upload-limit
-–drive-skip-shortcuts
+--drive-skip-shortcuts
@@ -11942,7 +11945,7 @@ trashed=false and 'c' in parents
-–drive-encoding
+--drive-encoding
@@ -11956,7 +11959,7 @@ trashed=false and 'c' in parents
rclone backend COMMAND remote:get
-
set
-
shortcut
-rclone backend shortcut drive: source_item destination_shortcut
rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut
-
Limitations
--disable copy to download and upload the files if you prefer.Limitations of Google Docs
rclone ls and as size 0 in anything which uses the VFS layer, eg rclone mount, rclone serve.rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer.rclone mount. If it doesn’t work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work on not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!rclone mount. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work on not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!Duplicated files
-rclone dedupe to fix duplicated files.Rclone appears to be re-copying files it shouldn’t
+Rclone appears to be re-copying files it shouldn't
rclone dedupe and check your logs for duplicate object or directory messages.Making your own client_id
-
-
-
-
-Google Photos
Layout
media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)media, but they may not appear under album unless you’ve put them into albums.media, but they may not appear under album unless you've put them into albums.
- file1.jpg
- file2.jpg
/
- upload
- file1.jpg
@@ -12159,7 +12163,7 @@ y/e/d> yupload directory and sub directories of the album directory.upload directory is for uploading files you don’t want to put into albums. This will be empty to start with and will contain the files you’ve uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to album will work better.upload directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to album will work better.album directory are also writeable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them. For example if you dorclone copy /path/to/images remote:album/imagesalbum path pretty much like a normal filesystem and it is a good target for repeated syncing.shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.Limitations
-Downloading Images
Downloading Videos
Duplicates
file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn’t cause too many problems.upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn't cause too many problems.Modified time
Size
--gphotos-read-size option or the read_size = true config parameter.rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You’ll need to experiment to see if it works for you without the flag.rclone mount you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.Albums
Standard Options
–gphotos-client-id
+--gphotos-client-id
-–gphotos-client-secret
+--gphotos-client-secret
-–gphotos-read-only
+--gphotos-read-only
@@ -12242,16 +12246,16 @@ y/e/d> y
Advanced Options
–gphotos-read-size
+--gphotos-read-size
-–gphotos-start-year
+--gphotos-start-year
HTTP
-remote: or remote:path/to/dir.remote. First run:
@@ -12314,7 +12318,7 @@ e/n/d/r/c/s/q> q
rclone configdirectory to /home/local/directory, deleting any excess files.rclone sync remote:directory /home/local/directoryRead only
-Modified time
Checksum
@@ -12324,7 +12328,7 @@ e/n/d/r/c/s/q> q
rclone lsd --http-url https://beta.rclone.org :http:Standard Options
–http-url
+--http-url
-
-
Advanced Options
–http-headers
+--http-headers
-–http-no-slash
---http-no-slash
+–http-no-head
---http-no-head
+
rclone copy /home/source remote:backupdefault directory
-rclone copy /home/source remote:default/backup–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.Modified time
X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.Standard Options
–hubic-client-id
+--hubic-client-id
-–hubic-client-secret
+--hubic-client-secret
Advanced Options
–hubic-chunk-size
+--hubic-chunk-size
@@ -12475,8 +12479,8 @@ y/e/d> y
-–hubic-no-chunk
---hubic-no-chunk
+–hubic-encoding
+--hubic-encoding
@@ -12497,7 +12501,7 @@ y/e/d> y
Limitations
Jottacloud
rclone copy /home/source remote:backupDevices and Mountpoints
-–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.Modified time and hashes
--checksum flag.TMPDIR environment variable points to) before it is uploaded. Small files will be cached in memory - see the –jottacloud-md5-memory-limit flag.TMPDIR environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag.Restricted filename characters
@@ -12629,16 +12633,16 @@ y/e/d> y
-Deleting files
-Versions
Quota information
rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage.Advanced Options
–jottacloud-md5-memory-limit
+--jottacloud-md5-memory-limit
-–jottacloud-trashed-only
+--jottacloud-trashed-only
-–jottacloud-hard-delete
+--jottacloud-hard-delete
-–jottacloud-unlink
+--jottacloud-unlink
-–jottacloud-upload-resume-limit
---jottacloud-upload-resume-limit
+
-–jottacloud-encoding
+--jottacloud-encoding
@@ -12688,8 +12692,8 @@ y/e/d> y
Limitations
-Troubleshooting
Standard Options
–koofr-user
+--koofr-user
-–koofr-password
+--koofr-password
@@ -12791,15 +12795,15 @@ y/e/d> y
Advanced Options
–koofr-endpoint
+--koofr-endpoint
-–koofr-mountid
+--koofr-mountid
-–koofr-setmtime
+--koofr-setmtime
-–koofr-encoding
+--koofr-encoding
@@ -12825,14 +12829,14 @@ y/e/d> y
Limitations
-Mail.ru Cloud
Features highlights
remote:directory/subdirectorylast modified time property, directories don’tlast modified time property, directories don't/home/local/directory to the remote path, deleting any excess files in the path.rclone sync /home/local/directory remote:directoryModified time
-Hash checksums
Emptying Trash
@@ -12967,13 +12971,13 @@ y/e/d> y
-Limitations
Standard Options
–mailru-user
+--mailru-user
-–mailru-pass
+--mailru-pass
@@ -12990,8 +12994,8 @@ y/e/d> y
-–mailru-speedup-enable
---mailru-speedup-enable
+
-
Advanced Options
–mailru-speedup-file-patterns
---mailru-speedup-file-patterns
+
-
–mailru-speedup-max-disk
+--mailru-speedup-max-disk
-
-
–mailru-speedup-max-memory
+--mailru-speedup-max-memory
-
-
–mailru-check-hash
+--mailru-check-hash
-
-
–mailru-user-agent
---mailru-user-agent
+
-–mailru-quirks
+--mailru-quirks
-–mailru-encoding
+--mailru-encoding
@@ -13203,25 +13207,25 @@ y/e/d> y
-
Duplicated files
rclone dedupe to fix duplicated files.Failure to log-in
-rclone link remote:file will cause the remote to become “blocked”. This is not an abnormal situation, for example if you wish to get the public links of a directory with hundred of files… After more or less a week, the remote will remote accept rclone logins normally again.rclone link remote:file will cause the remote to become "blocked". This is not an abnormal situation, for example if you wish to get the public links of a directory with hundred of files... After more or less a week, the remote will remote accept rclone logins normally again.rclone mount. This will log-in when mounting and a log-out when unmounting only. You can also run rclone rcd and then use rclone rc to run the commands over the API to avoid logging in each time.Standard Options
–mega-user
+--mega-user
-–mega-pass
+--mega-pass
@@ -13240,7 +13244,7 @@ y/e/d> y
Advanced Options
–mega-debug
+--mega-debug
@@ -13249,7 +13253,7 @@ y/e/d> y
-–mega-hard-delete
+--mega-hard-delete
@@ -13258,7 +13262,7 @@ y/e/d> y
-–mega-encoding
+--mega-encoding
@@ -13268,7 +13272,7 @@ y/e/d> y
Limitations
-Memory
@@ -13351,7 +13355,7 @@ y/e/d> y
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:rclone ls remote:container/home/local/directory to the remote container, deleting any excess files in the container.
-rclone sync /home/local/directory remote:container–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.Modified time
mtime key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.Hashes
Authenticating with Azure Blob Storage
@@ -13411,17 +13415,17 @@ y/e/d> y
-$ rclone lsd azureblob:
container/rclone ls azureblob:othercontainerMultipart uploads
--transfers of them being uploaded at once.--azureblob-chunk-size 100M.--azureblob-chunk-size 100M.Standard Options
–azureblob-account
+--azureblob-account
-–azureblob-key
+--azureblob-key
-–azureblob-sas-url
+--azureblob-sas-url
-–azureblob-use-emulator
---azureblob-use-emulator
+
Advanced Options
–azureblob-endpoint
+--azureblob-endpoint
-–azureblob-upload-cutoff
+--azureblob-upload-cutoff
-–azureblob-chunk-size
+--azureblob-chunk-size
-–azureblob-list-chunk
+--azureblob-list-chunk
-–azureblob-access-tier
+--azureblob-access-tier
-–azureblob-disable-checksum
---azureblob-disable-checksum
+
-–azureblob-memory-pool-flush-time
+--azureblob-memory-pool-flush-time
-–azureblob-memory-pool-use-mmap
+--azureblob-memory-pool-use-mmap
-–azureblob-encoding
+--azureblob-encoding
@@ -13625,10 +13629,10 @@ y/e/d> y
rclone copy /home/source remote:backupGetting your own Client ID and Key
-client_id left blank) one doesn’t work for you or you see lots of throttling. The default Client ID and Key is shared by all rclone users when performing requests.client_id left blank) one doesn't work for you or you see lots of throttling. The default Client ID and Key is shared by all rclone users when performing requests.
-
New registration.New registration.Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect URI Enter http://localhost:53682/ and click Register. Copy and keep the Application (client) ID under the app name for later use.manage select Certificates & secrets, click New client secret. Copy and keep that secret for later use.manage select API permissions, click Add a permission and select Microsoft Graph then select delegated permissions.Deleting files
-Standard Options
–onedrive-client-id
+--onedrive-client-id
-–onedrive-client-secret
+--onedrive-client-secret
Advanced Options
–onedrive-chunk-size
+--onedrive-chunk-size
@@ -13778,7 +13782,7 @@ y/e/d> y
-–onedrive-drive-id
+--onedrive-drive-id
-–onedrive-drive-type
+--onedrive-drive-type
-–onedrive-expose-onenote-files
+--onedrive-expose-onenote-files
-–onedrive-server-side-across-configs
+--onedrive-server-side-across-configs
-–onedrive-encoding
+--onedrive-encoding
@@ -13823,8 +13827,8 @@ y/e/d> y
Limitations
Naming
-? in it will be mapped to ? instead.? in it will be mapped to ? instead.File sizes
Path length
@@ -13837,43 +13841,43 @@ y/e/d> y
copy is the only rclone command affected by this as we copy the file and then afterwards set the modification time to match the source file.
-
-Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven’t installed this already)Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven't installed this already)Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameCheckingConnect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials)Set-SPOTenant -EnableMinimumVersionRequirement $FalseDisconnect-SPOService (to disconnect from the server)
Troubleshooting
Unexpected file size/hash differences on Sharepoint
---ignore-checksum --ignore-sizeReplacing/deleting existing files on Sharepoint gets “item not found”
---backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:Replacing/deleting existing files on Sharepoint gets "item not found"
+--backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:--backup-dir mysharepoint:rclone-backup-diraccess_denied (AADSTS65005)
-Error: access_denied
Code: AADSTS65005
Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.invalid_grant (AADSTS50076)
-Error: invalid_grant
Code: AADSTS50076
Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.rclone config, and choose to edit your OneDrive backend. Then, you don’t need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.rclone config, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.OpenDrive
remote:pathremote:directory/subdirectory.Standard Options
–opendrive-username
+--opendrive-username
-–opendrive-password
+--opendrive-password
@@ -14040,7 +14044,7 @@ y/e/d> y
Advanced Options
–opendrive-encoding
+--opendrive-encoding
@@ -14049,7 +14053,7 @@ y/e/d> y
-–opendrive-chunk-size
+--opendrive-chunk-size
@@ -14059,8 +14063,8 @@ y/e/d> y
Limitations
-? in it will be mapped to ? instead.? in it will be mapped to ? instead.QingStor
remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.rclone ls remote:bucket/home/local/directory to the remote bucket, deleting any excess files in the bucket.
-rclone sync /home/local/directory remote:bucket–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.Multipart uploads
-rclone cleanup remote:bucket just for one bucket rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.Buckets and Zone
rclone lsd) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone.Restricted filename characters
Standard Options
–qingstor-env-auth
+--qingstor-env-auth
-
-
–qingstor-access-key-id
+--qingstor-access-key-id
-–qingstor-secret-access-key
+--qingstor-secret-access-key
-–qingstor-endpoint
---qingstor-endpoint
+
-–qingstor-zone
---qingstor-zone
+
-
-
Advanced Options
–qingstor-connection-retries
+--qingstor-connection-retries
-–qingstor-upload-cutoff
+--qingstor-upload-cutoff
@@ -14247,10 +14251,10 @@ y/e/d> y
-–qingstor-chunk-size
+--qingstor-chunk-size
-–qingstor-upload-concurrency
+--qingstor-upload-concurrency
–qingstor-encoding
+--qingstor-encoding
@@ -14412,21 +14416,21 @@ tenant = $OS_TENANT_NAME
true for env_auth and leave everything else blank.Using an alternate authentication method
-openstack commands to get a token). Then, you just need to pass the two configuration variables auth_token and storage_url. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.openstack commands to get a token). Then, you just need to pass the two configuration variables auth_token and storage_url. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.Using rclone without a config file
-source openstack-credentials-file
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.–update and –use-server-modtime
+--update and --use-server-modtime
--update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.--update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.Standard Options
–swift-env-auth
+--swift-env-auth
-
-
–swift-user
+--swift-user
-–swift-key
+--swift-key
-–swift-auth
+--swift-auth
-
-
–swift-user-id
+--swift-user-id
-–swift-domain
+--swift-domain
-–swift-tenant
+--swift-tenant
-–swift-tenant-id
+--swift-tenant-id
-–swift-tenant-domain
+--swift-tenant-domain
-–swift-region
+--swift-region
-–swift-storage-url
+--swift-storage-url
-–swift-auth-token
+--swift-auth-token
-–swift-application-credential-id
+--swift-application-credential-id
-–swift-application-credential-name
+--swift-application-credential-name
-–swift-application-credential-secret
+--swift-application-credential-secret
-–swift-auth-version
+--swift-auth-version
-–swift-endpoint-type
+--swift-endpoint-type
-
-
–swift-storage-policy
+--swift-storage-policy
@@ -14629,11 +14633,11 @@ rclone lsd myremote:
Advanced Options
–swift-chunk-size
+--swift-chunk-size
@@ -14650,8 +14654,8 @@ rclone lsd myremote:
-–swift-no-chunk
---swift-no-chunk
+–swift-encoding
+--swift-encoding
@@ -14695,15 +14699,15 @@ rclone lsd myremote:
-
Limitations
-Troubleshooting
-Rclone gives Failed to create file system for “remote:”: Bad Request
-Rclone gives Failed to create file system for "remote:": Bad Request
+--dump-bodies flag.Rclone gives Failed to create file system: Response didn’t have storage url and auth token
+Rclone gives Failed to create file system: Response didn't have storage url and auth token
pCloud
remote:pathDeleting files
rclone cleanup can be used to empty the trash.Root folder ID
@@ -14791,7 +14795,7 @@ y/e/d> y
https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.Standard Options
–pcloud-client-id
+--pcloud-client-id
-–pcloud-client-secret
+--pcloud-client-secret
Advanced Options
–pcloud-encoding
+--pcloud-encoding
@@ -14818,13 +14822,22 @@ y/e/d> y
-–pcloud-root-folder-id
+--pcloud-root-folder-id
+--pcloud-hostname
+
+
premiumize.me
remote:pathStandard Options
–premiumizeme-api-key
+--premiumizeme-api-key
@@ -14917,7 +14930,7 @@ y/e/d>
Advanced Options
–premiumizeme-encoding
+--premiumizeme-encoding
@@ -14927,8 +14940,8 @@ y/e/d>
Limitations
-\ or " characters in. rclone maps these to and from an identical looking unicode equivalents \ and "\ or " characters in. rclone maps these to and from an identical looking unicode equivalents \ and "put.io
remote:pathAdvanced Options
–putio-encoding
+--putio-encoding
@@ -15028,7 +15041,7 @@ e/n/d/r/c/s/q> q
Seafile
Root mode vs Library mode
-remote:library. You may put subdirectories in too, eg remote:library/path/to/dir. - you point your remote to a specific library during the configuration: Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode)remote:library. You may put subdirectories in too, eg remote:library/path/to/dir. - you point your remote to a specific library during the configuration: Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode)Configuration in root mode
@@ -15096,7 +15109,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-rclone configseafile. It’s pointing to the root of your seafile server and can now be used like this:seafile. It's pointing to the root of your seafile server and can now be used like this:rclone lsd seafile:/home/local/directory to the remote library, deleting any excess files in the library.rclone sync /home/local/directory seafile:libraryConfiguration in library mode
-
-No remotes found - make a new one
n) New remote
s) Set configuration password
@@ -15174,7 +15187,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> yMy Library during the configuration. The root of the remote is pointing at the root of the library My Library:
@@ -15184,7 +15197,7 @@ y/e/d> y
rclone lsd seafile:rclone ls seafile:directory/home/local/directory to the remote library, deleting any excess files in the library.
-rclone sync /home/local/directory seafile:–fast-list
+--fast-list
--fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.xRestricted filename characters
Seafile and rclone link
rclone link seafile:seafile-tutorial.doc
@@ -15226,10 +15239,10 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/Compatibility
Standard Options
–seafile-url
+--seafile-url
-
-
–seafile-user
+--seafile-user
-–seafile-pass
+--seafile-pass
@@ -15261,15 +15274,15 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
-–seafile-2fa
---seafile-2fa
+
-–seafile-library
+--seafile-library
-–seafile-library-key
+--seafile-library-key
@@ -15286,7 +15299,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
-–seafile-auth-token
+--seafile-auth-token
Advanced Options
–seafile-create-library
---seafile-create-library
+
-–seafile-encoding
+--seafile-encoding
@@ -15321,7 +15334,7 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user’s home directory.remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.
@@ -15385,11 +15398,11 @@ y/e/d> y
rclone config/home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.
-awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsapass, key_file, or key_pem then rclone will attempt to contact an ssh-agent.pass, key_file, or key_pem then rclone will attempt to contact an ssh-agent.key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent.--sftp-ask-password option, rclone will prompt for a password when needed and no password has been configured.set_modtime = false in your RClone backend configuration to disable this behaviour.Standard Options
–sftp-host
+--sftp-host
-
-
–sftp-user
+--sftp-user
-–sftp-port
+--sftp-port
-–sftp-pass
+--sftp-pass
@@ -15445,7 +15458,7 @@ y/e/d> y
-–sftp-key-pem
+--sftp-key-pem
-–sftp-key-file
+--sftp-key-file
-–sftp-key-file-pass
+--sftp-key-file-pass
-–sftp-key-use-agent
+--sftp-key-use-agent
Too many authentication failures for *username* errors when the ssh-agent contains many keys.Too many authentication failures for *username* errors when the ssh-agent contains many keys.
-–sftp-use-insecure-cipher
+--sftp-use-insecure-cipher
@@ -15499,17 +15512,17 @@ y/e/d> y
-
-
–sftp-disable-hashcheck
+--sftp-disable-hashcheck
Advanced Options
–sftp-ask-password
+--sftp-ask-password
@@ -15528,12 +15541,12 @@ y/e/d> y
-–sftp-path-override
+--sftp-path-override
-rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directoryrclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
-–sftp-set-modtime
+--sftp-set-modtime
-–sftp-md5sum-command
+--sftp-md5sum-command
-–sftp-sha1sum-command
+--sftp-sha1sum-command
-–sftp-skip-links
+--sftp-skip-links
Limitations
-md5sum or sha1sum as well as echo are in the remote’s PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.about if the same login has shell access and df are in the remote’s PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote’s PATH.disable_hashcheck is a good idea.md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.about if the same login has shell access and df are in the remote's PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote's PATH.disable_hashcheck is a good idea.use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found [in this paper] (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).--dump-headers, --dump-bodies, --dump-auth--timeout isn’t supported (but --contimeout is).--dump-headers, --dump-bodies, --dump-auth--timeout isn't supported (but --contimeout is).C14
rsync.net
SugarSync
rclone config walks you through it.rclone like this,
-rclone lsd remote:rclone ls remote:Testrclone copy /home/source remote:backupremote:pathremote:directory/subdirectory.Modified time and hashes
--size-only checking. Note that using --update will work as rclone can read the time files were uploaded.Restricted filename characters
Deleting files
---sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.Standard Options
–sugarsync-app-id
+--sugarsync-app-id
-–sugarsync-access-key-id
+--sugarsync-access-key-id
-–sugarsync-private-access-key
+--sugarsync-private-access-key
-–sugarsync-hard-delete
+--sugarsync-hard-delete
Advanced Options
–sugarsync-refresh-token
+--sugarsync-refresh-token
@@ -15710,7 +15723,7 @@ y/e/d> y
-–sugarsync-authorization
+--sugarsync-authorization
@@ -15719,7 +15732,7 @@ y/e/d> y
-–sugarsync-authorization-expiry
+--sugarsync-authorization-expiry
@@ -15728,7 +15741,7 @@ y/e/d> y
-–sugarsync-user
+--sugarsync-user
@@ -15737,7 +15750,7 @@ y/e/d> y
-–sugarsync-root-id
+--sugarsync-root-id
@@ -15746,7 +15759,7 @@ y/e/d> y
-–sugarsync-deleted-id
+--sugarsync-deleted-id
@@ -15755,7 +15768,7 @@ y/e/d> y
-–sugarsync-encoding
+--sugarsync-encoding
@@ -15867,7 +15880,7 @@ y/e/d> y
remote:bucket (or remote: for the lsf command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.rclone like this.Create a new bucket
-mkdir command to create new bucket, e.g. bucket.mkdir command to create new bucket, e.g. bucket.rclone mkdir remote:bucketList all buckets
lsf command to list all buckets.Upload objects
copy command to upload an object.
-rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/--progress flag is for displaying progress information. Remove it if you don’t need this information.--progress flag is for displaying progress information. Remove it if you don't need this information.rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/Download objects
copy command to download an object.
-rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/--progress flag is for displaying progress information. Remove it if you don’t need this information.--progress flag is for displaying progress information. Remove it if you don't need this information.rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/Delete objects
@@ -15909,7 +15922,7 @@ y/e/d> y
Sync two Locations
sync command to sync the source to the destination, changing the destination only, deleting any excess files.
-rclone sync --progress /home/local/directory/ remote:bucket/path/to/dir/--progress flag is for displaying progress information. Remove it if you don’t need this information.--progress flag is for displaying progress information. Remove it if you don't need this information.--dry-run flag to see exactly what would be copied and deleted.
@@ -15919,26 +15932,26 @@ y/e/d> y
rclone sync --progress remote:bucket/path/to/dir/ /home/local/directory/rclone sync --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/Standard Options
–tardigrade-provider
+--tardigrade-provider
-
-
–tardigrade-access-grant
+--tardigrade-access-grant
-–tardigrade-satellite-address
+--tardigrade-satellite-address
<nodeid>@<address>:<port>.
-
-
–tardigrade-api-key
+--tardigrade-api-key
-–tardigrade-passphrase
+--tardigrade-passphrase
rclone copy C:\source remote:sourceStandard Options
–union-upstreams
---union-upstreams
+
-–union-action-policy
+--union-action-policy
-–union-create-policy
+--union-create-policy
-–union-search-policy
+--union-search-policy
-–union-cache-time
+--union-cache-time
Standard Options
–webdav-url
+--webdav-url
-
-
–webdav-vendor
+--webdav-vendor
-
-
–webdav-user
+--webdav-user
-–webdav-pass
+--webdav-pass
@@ -16389,7 +16402,7 @@ y/e/d> y
-–webdav-bearer-token
+--webdav-bearer-token
Advanced Options
–webdav-bearer-token-command
+--webdav-bearer-token-command
rcat) whereas Owncloud does. This may be fixed in the future.Sharepoint
-https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspxurl as https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents and use your normal account email and password for user and pass. If you have 2FA enabled, you have to generate an app password. Set the vendor to sharepoint.[sharepoint]
@@ -16432,12 +16445,12 @@ vendor = other
user = YourEmailAddress
pass = encryptedpasswordRequired Flags for SharePoint
---ignore-size --ignore-checksum --updatedCache
other type. Don’t enter a username or password, instead enter your Macaroon as the bearer_token.other type. Don't enter a username or password, instead enter your Macaroon as the bearer_token.[dcache]
type = webdav
@@ -16456,7 +16469,7 @@ eyJraWQ[...]QFXDt0
paul@celebrimbor:~$oidc-token command will work, the refresh token must be loaded into the oidc agent. This is done with the oidc-add command (e.g., oidc-add XDC). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the oidc-agent documentation.bearer_token_command configuration option is used to fetch the access token from oidc-agent.oidc-agent XDC).oidc-agent XDC).[dcache]
type = webdav
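The rest of this config is a sketch under assumptions: the URL is illustrative, and bearer_token_command uses the oidc-token command discussed above.
url = https://webdav.example.org/dcache
vendor = other
bearer_token_command = oidc-token XDC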
@@ -16527,12 +16540,12 @@ y/e/d> y
You can use the rclone about remote: command which will display your usage limit (quota) and the current usage.
Restricted filename characters
Limitations
-When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you’ll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
+When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
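Following the two-minutes-per-GB rule above, a 30GB upload would be run with a 60 minute timeout; a sketch (the file and remote name are illustrative):
rclone copy --timeout 60m /path/to/30GB-file.bin yandex:backup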
Standard Options
-–yandex-client-id
+--yandex-client-id
-–yandex-client-secret
+--yandex-client-secret
Advanced Options
-–yandex-unlink
+--yandex-unlink
-–yandex-encoding
+--yandex-encoding
@@ -16576,7 +16589,7 @@ y/e/d> y
Filenames
-You can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions’ package managers.
+You can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.
If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name gro\xdf will be transferred as gro‛DF. rclone will emit a debug message in this case (use -v to see), eg
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
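A sketch of a convmv run converting a tree from Latin-1 to UTF-8; the source encoding and path are assumptions, and convmv only renames files once --notest is given:
convmv -f ISO-8859-1 -t UTF-8 -r /path/to/files            # dry run, prints what would change
convmv -f ISO-8859-1 -t UTF-8 -r --notest /path/to/files   # actually rename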
Restricted characters
@@ -16841,7 +16854,7 @@ y/e/d> y
-Long paths on Windows
c:\files is converted to the UNC path \\?\c:\files in the output, and \\server\share is converted to \\?\UNC\server\share.
-–links, -l
+--links, -l
-$ rclone copyto -l /tmp/a/file1 remote:/tmp/a/
@@ -16900,14 +16913,14 @@ nounc = true
$ rclone cat remote:/tmp/a/file2.rclonelink
/home/user/file3
-$ rclone ls remote:/tmp/a
5 file1.rclonelink
14 file2.rclonelink
-$ rclone copyto -l remote:/tmp/a/ /tmp/b/
$ tree /tmp/b
/tmp/b
├── file1 -> ./file4
└── file2 -> /home/user/file3
$ rclone copyto remote:/tmp/a/ /tmp/b/
$ tree /tmp/b
@@ -16915,7 +16928,7 @@ $ tree /tmp/b
├── file1.rclonelink
└── file2.rclonelink
Note that this flag is incompatible with --copy-links / -L.
-Restricting filesystems with –one-file-system
+Restricting filesystems with --one-file-system
If you supply --one-file-system or -x this tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
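For example, a sketch that backs up a mount point without descending into other filesystems mounted beneath it (the paths and remote name are illustrative):
rclone sync --one-file-system /mnt/data remote:data-backup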
Standard Options
-–local-nounc
+--local-nounc
-
Advanced Options
-–copy-links / -L
+--copy-links / -L
-–links / -l
+--links / -l
-–skip-links
+--skip-links
-–local-no-unicode-normalization
+--local-no-unicode-normalization
-–local-no-check-updated
+--local-no-check-updated
-–one-file-system / -x
+--one-file-system / -x
-–local-case-sensitive
+--local-case-sensitive
@@ -17016,7 +17029,7 @@ $ tree /tmp/b
-–local-case-insensitive
+--local-case-insensitive
@@ -17025,7 +17038,7 @@ $ tree /tmp/b
-–local-no-sparse
+--local-no-sparse
@@ -17034,7 +17047,7 @@ $ tree /tmp/b
-–local-encoding
+--local-encoding
@@ -17048,7 +17061,7 @@ $ tree /tmp/b
rclone backend COMMAND remote:
noop
-
Changelog
+v1.52.3 - 2020-08-07
+
+
+
+
+
+
+
+
+
+
+
v1.52.2 - 2020-06-24
@@ -17075,13 +17133,13 @@ $ tree /tmp/b
@@ -17131,7 +17189,7 @@ $ tree /tmp/b
-
@@ -17140,7 +17198,7 @@ $ tree /tmp/b
@@ -17220,12 +17278,12 @@ $ tree /tmp/b
-
@@ -17246,14 +17304,14 @@ $ tree /tmp/b
-
-
--delete-before (Nick Craig-Wood)
--local-no-sparse flag for disabling sparse files (Nick Craig-Wood)
rclone backend noop for testing purposes (Nick Craig-Wood)
@@ -17345,7 +17403,7 @@ $ tree /tmp/b
--fast-list and --drive-shared-with-me (Nick Craig-Wood)
--drive-shared-with-me (Nick Craig-Wood)
--drive-stop-on-upload-limit to respond to teamDriveFileLimitExceeded. (harry)
--header-upload and --header-download (Tim Gallant)
@@ -17412,7 +17470,7 @@ $ tree /tmp/b
--header-upload and --header-download (Tim Gallant)
@@ -17445,12 +17503,12 @@ $ tree /tmp/b
session.New() with session.NewSession() (Lars Lehtonen)
--s3-disable-checksum (Nick Craig-Wood)
@@ -17511,7 +17569,7 @@ $ tree /tmp/b
--order-by flag to order transfers (Nick Craig-Wood)
-
--vfs-cache-mode writes (Nick Craig-Wood)
@@ -17669,7 +17727,7 @@ $ tree /tmp/b
--sftp-skip-links to skip symlinks and non regular files (Nick Craig-Wood)
v1.50.1 - 2019-11-02
@@ -17703,7 +17761,7 @@ $ tree /tmp/b
DropboxHash and CRC-32 (Nick Craig-Wood)
@@ -17767,15 +17825,15 @@ $ tree /tmp/b
@@ -17978,7 +18036,7 @@ $ tree /tmp/b
-
---update/-u not transfer files that haven’t changed (Nick Craig-Wood)
+--update/-u not transfer files that haven't changed (Nick Craig-Wood)
--files-from without --no-traverse doing a recursive scan (Nick Craig-Wood)
@@ -17788,15 +17846,15 @@ $ tree /tmp/b
--progress work in git bash on Windows (Nick Craig-Wood)
--size-only and --ignore-size together. (Nick Craig-Wood)
--files-from is in use (Michele Caci)
@@ -17912,7 +17970,7 @@ $ tree /tmp/b
--ignore-checksum (Nick Craig-Wood)
--size-only mode (Nick Craig-Wood)
-
--no-traverse (buengese)
--local-case-sensitive and --local-case-insensitive (Nick Craig-Wood)
--backup-dir (Nick Craig-Wood)
---ignore-checksum is in effect, don’t calculate checksum (Nick Craig-Wood)
+--ignore-checksum is in effect, don't calculate checksum (Nick Craig-Wood)
--rc-serve (Nick Craig-Wood)
--s3-use-accelerate-endpoint (Nick Craig-Wood)
---fast-list for listing operations where it won’t use more memory (Nick Craig-Wood)
+--fast-list for listing operations where it won't use more memory (Nick Craig-Wood)
ListR
dedupe, serve restic
lsf, ls, lsl, lsjson, lsd, md5sum, sha1sum, hashsum, size, delete, cat, settier
--files-only and --dirs-only flags (calistri)
rclone link (Nick Craig-Wood)
-
-
-
--dir-perms and --file-perms flags to set default permissions (Nick Craig-Wood)
--dry-run set (Nick Craig-Wood)
--fast-list flag
-
--files-from and non-existent files (Nick Craig-Wood)
-
-
---files-from only read the objects specified and don’t scan directories (Nick Craig-Wood)
+--files-from only read the objects specified and don't scan directories (Nick Craig-Wood)
--ignore-case flag (Nick Craig-Wood)
--json flag for structured JSON input (Nick Craig-Wood)
--progress update the stats correctly at the end (Nick Craig-Wood)
--dry-run (Nick Craig-Wood)
@@ -18717,8 +18775,8 @@ $ tree /tmp/b
--config (albertony)
--progress on windows (Nick Craig-Wood)
-
---files-from work-around
+--files-from work-around
--absolute flag to add a leading / onto path names
--csv flag for compliant CSV output
@@ -19078,7 +19136,7 @@ $ tree /tmp/b
-
--drive-acknowledge-abuse to download flagged files
--drive-alternate-export to fix large doc export
-
@@ -19212,11 +19270,11 @@ $ tree /tmp/b
. and .. from directory listing
-
-
@@ -19272,8 +19330,8 @@ $ tree /tmp/b
-
@@ -19312,7 +19370,7 @@ $ tree /tmp/b
rc: enable the remote control of a running rclone
-
@@ -19395,12 +19453,12 @@ $ tree /tmp/b
---backup-dir don’t delete files if we can’t set their modtime
+--backup-dir don't delete files if we can't set their modtime
--backup-dir
serve http: fix serving files with : in - fixes
---exclude-if-present to ignore directories which it doesn’t have permission for (Iakov Davydov)
+--exclude-if-present to ignore directories which it doesn't have permission for (Iakov Davydov)
--no-traverse flag because it is obsolete
-
-
-
-
-
-
-
@@ -19644,7 +19702,7 @@ $ tree /tmp/b
-
@@ -19679,7 +19737,7 @@ $ tree /tmp/b
dedupe - implement merging of duplicate directories
check and cryptcheck made more consistent and use less memory
cleanup for remaining remotes (thanks ishuah)
---immutable for ensuring that files don’t change (thanks Jacob McNamee)
+--immutable for ensuring that files don't change (thanks Jacob McNamee)
--user-agent option (thanks Alex McGrath Kraak)
--disable flag to disable optional features
--bind flag for choosing the local addr on outgoing connections
-
-
rclone mount to limit external apps
-
@@ -19819,7 +19877,7 @@ $ tree /tmp/b
@@ -19843,9 +19901,9 @@ $ tree /tmp/b
-
-
-
@@ -20037,28 +20095,28 @@ $ tree /tmp/b
--stats flag
-
-rclone check shows count of hashes that couldn’t be checked
+rclone check shows count of hashes that couldn't be checked
rclone listremotes command
Authorization: lines from --dump-headers output
rclone check on crypted file systems
-q
-no-seek flag to disable
-
@@ -20178,7 +20236,7 @@ $ tree /tmp/b
-
v1.33 - 2016-08-24
@@ -20245,7 +20303,7 @@ $ tree /tmp/b
-
@@ -20282,22 +20340,22 @@ $ tree /tmp/b
-
X-Bz-Test-Mode header.
-
v1.30 - 2016-06-18
@@ -20307,17 +20365,17 @@ $ tree /tmp/b
--max-size 0b
-b suffix so we can specify bytes in –bwlimit, –min-size etc
+b suffix so we can specify bytes in --bwlimit, --min-size etc
@@ -20349,7 +20407,7 @@ $ tree /tmp/b
-
--size-only flag.
@@ -20397,7 +20455,7 @@ $ tree /tmp/b
--size-only.
-
@@ -20424,7 +20482,7 @@ $ tree /tmp/b
-
--dry-run set
move command
--log-file
delete command to wait until all finished - fixes missing deletes.
more than one upload using auth token
@@ -20474,7 +20532,7 @@ $ tree /tmp/b
@@ -20642,11 +20700,11 @@ $ tree /tmp/b
-
@@ -20539,7 +20597,7 @@ $ tree /tmp/b
-
--dry-run!
-
-
v1.23 - 2015-10-03
@@ -20603,7 +20661,7 @@ $ tree /tmp/b
-
-
--drive-use-trash flag so rclone trashes instead of deletes
@@ -20720,32 +20778,32 @@ $ tree /tmp/b
v1.16 - 2015-06-09
v1.15 - 2015-06-06
-
v1.14 - 2015-05-21
v1.13 - 2015-05-10
v1.12 - 2015-03-15
@@ -20756,9 +20814,9 @@ $ tree /tmp/b
v1.10 - 2015-02-12
@@ -20783,7 +20841,7 @@ $ tree /tmp/b
v1.06 - 2014-12-12
-
@@ -20814,9 +20872,9 @@ $ tree /tmp/b
v1.01 - 2014-07-04
@@ -20828,7 +20886,7 @@ $ tree /tmp/b
v0.99 - 2014-06-26
-
v0.98 - 2014-05-30
@@ -20845,7 +20903,7 @@ $ tree /tmp/b
v0.95 - 2014-03-28
@@ -20865,7 +20923,7 @@ $ tree /tmp/b
v0.92 - 2014-03-15
-
v0.91 - 2014-03-15
@@ -20881,16 +20939,16 @@ $ tree /tmp/b
Bugs and Limitations
Limitations
-Directory timestamps aren’t preserved
+Directory timestamps aren't preserved
Rclone struggles with millions of files in a directory
Bucket based remotes and folders
-/ as directory markers. Rclone doesn’t do this as it potentially creates more objects and costs more. It may do in future (probably with a flag).
+/ as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. It may do in future (probably with a flag).
Bugs
-
-Server A> rclone sync /tmp/whatever remote:ServerA
Server B> rclone sync /tmp/whatever remote:ServerB
Server A> rclone copy /tmp/whatever remote:Backup
Server B> rclone copy /tmp/whatever remote:Backup
-Why doesn’t rclone support partial transfers / binary diffs like rsync?
+Why doesn't rclone support partial transfers / binary diffs like rsync?
Can rclone do bi-directional sync?
@@ -20933,13 +20991,13 @@ Server B> rclone copy /tmp/whatever remote:Backup
export https_proxy=$http_proxy
export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$http_proxy
-NO_PROXY allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance “foo.com” also matches “bar.foo.com”.
+NO_PROXY allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com".
export no_proxy=localhost,127.0.0.0/8,my.host.name
export NO_PROXY=$no_proxy
Note that the FTP backend does not support ftp_proxy yet.
Rclone gives x509: failed to load system roots and no roots provided error
-rclone can’t find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.
+rclone can't find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
@@ -20950,13 +21008,13 @@ export NO_PROXY=$no_proxy
The SSL_CERT_FILE and SSL_CERT_DIR environment variables, mentioned in the x509 package, provide an additional way to provide the SSL root certificates.
-Note that you may need to add the --insecure option to the curl command line if it doesn’t work without.
+Note that you may need to add the --insecure option to the curl command line if it doesn't work without.
curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
Rclone gives Failed to load config file: function not implemented error
All my uploaded docx/xlsx/pptx files appear as archive/zip
Rclone gives tcp lookup some.domain.com no such host errors
# both should print a long list of possible IP addresses
@@ -20965,7 +21023,7 @@ dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
If you are using systemd-resolved (default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which causes not all domains to be resolved properly.
Additionally, the Go resolver decision can be influenced with the GODEBUG=netdns= environment variable. This can also help resolve certain issues with DNS resolution. See the name resolution section in the go docs.
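For instance, a sketch of forcing each resolver while reproducing the failure (the remote name is illustrative):
GODEBUG=netdns=go rclone lsd remote:    # force the pure Go resolver
GODEBUG=netdns=cgo rclone lsd remote:   # force the system (cgo) resolver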
-It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the –max-backlog flag.
+It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the --max-backlog flag.
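As a sketch, raising the backlog for a large sync might look like this; the value and paths are illustrative, and a bigger backlog uses more memory:
rclone sync --max-backlog 200000 /home/local/directory remote:backup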
Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled.
However it is possible to tune the garbage collector to use less memory by setting GOGC to a lower value, say export GOGC=20. This will make the garbage collector work harder, reducing memory size at the expense of CPU usage.
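For example, a run with a more aggressive garbage collector might look like this (the paths are illustrative):
GOGC=20 rclone sync /home/local/directory remote:backup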
-The project’s repository is located at:
+The project's repository is located at:
-Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood. Please don’t email me requests for help - those are better directed to the forum. Thanks!
+Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood. Please don't email me requests for help - those are better directed to the forum. Thanks!