Nick Craig-Wood, Jun 15, 2019 (updated Aug 26, 2019)

Rclone is a command line program to sync files and directories to and from a range of cloud storage providers. See the following for detailed instructions for each command. Every command below also accepts the global flags; see the global flags page for global options not listed here.

- rclone copy: Copy files from source to dest, skipping already copied.
- rclone sync: Make source and dest identical, modifying destination only.
- rclone move: Move files from source to dest.
- rclone delete: Remove the contents of path.
- rclone purge: Remove the path and all of its contents.
- rclone mkdir: Make the path if it doesn't already exist.
- rclone rmdir: Remove the path if empty.
- rclone check: Checks the files in the source and destination match.
- rclone ls: List the objects in the path with size and path.
- rclone lsd: List all directories/containers/buckets in the path.
- rclone lsl: List the objects in path with modification time, size and path.
- rclone md5sum: Produces an md5sum file for all the objects in the path.
- rclone sha1sum: Produces a sha1sum file for all the objects in the path.
- rclone size: Prints the total size and number of objects in remote:path.
- rclone version: Show the version number.
- rclone cleanup: Clean up the remote if possible.
- rclone dedupe: Interactively find duplicate files and delete/rename them.
- rclone about: Get quota information from the remote.
- rclone authorize: Remote authorization.
- rclone cachestats: Print cache stats for a remote.
- rclone cat: Concatenates any files and sends them to stdout.

rclone config create: Create a new remote with name, type and options.

rclone config delete: Delete an existing remote.

rclone config disconnect: Disconnects user from remote. This disconnects the remote: passed in to the cloud storage system. This normally means revoking the oauth token. To reconnect use "rclone config reconnect".

rclone config dump: Dump the config file as JSON.

rclone config edit: Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

rclone config file: Show path of configuration file in use.
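The config family can also be driven non-interactively. A minimal sketch, where the remote name, type and option values are assumed for illustration and not taken from this page:

```
# Create a new remote with name, type and options
rclone config create myremote swift env_auth true

# Show the path of the configuration file in use
rclone config file
```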
rclone config password: Update password in an existing remote. The password should be passed in as pairs of key and password. For example, to set the password of a remote named myremote you would supply the field name followed by the new password. This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.

rclone config providers: List in JSON format all the providers and options.

rclone config reconnect: Re-authenticates user with remote. This reconnects the remote: passed in to the cloud storage system, which normally means going through the interactive oauth flow again. To disconnect the remote use "rclone config disconnect".

rclone config show: Print (decrypted) config file, or the config for a single remote.

rclone config update: Update options in an existing remote. The options should be passed in as pairs of key and value. For example, to update the env_auth field of a remote named myremote you would supply env_auth followed by its new value. If the remote uses oauth the token will be updated; if you don't require this, add an extra parameter to suppress it.

rclone config userinfo: Prints info about the logged in user of the remote, i.e. the details of the person logged in to the cloud storage system.

rclone copyto: Copy files from source to dest, skipping already copied. If source:path is a file or directory then it copies it to a file or directory named dest:path. This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command. This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. Note: Use the -P/--progress flag to view real-time transfer statistics.

rclone copyurl: Copy url content to dest. Download the url's content and copy it to the destination without saving it in temporary storage.

rclone cryptcheck: Checks the integrity of a crypted remote. rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote. For it to work the underlying remote of the cryptedremote must support some kind of checksum. It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted. After it has run it will log the status of the encryptedremote:. If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around.
Meaning extra files in destination that are not in the source will not trigger an error. See the global flags page for global options not listed here. Cryptdecode returns unencrypted file names. rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items. If you supply the –reverse flag, it will return encrypted file names. use it like this See the global flags page for global options not listed here. Produces a Dropbox hash file for all the objects in the path. Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum. See the global flags page for global options not listed here. Remove a single file from remote. Remove a single file from remote. Unlike See the global flags page for global options not listed here. Output completion script for a given shell. Generates a shell completion script for rclone. Run with –help to list the supported shells. See the global flags page for global options not listed here. Output bash completion script for rclone. Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg If you supply a command line argument the script will be written there. See the global flags page for global options not listed here. Output zsh completion script for rclone. Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg If you supply a command line argument the script will be written there. See the global flags page for global options not listed here. Output markdown docs for rclone to the directory supplied. This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website. See the global flags page for global options not listed here. Produces an hashsum file for all the objects in the path. Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool. Run without a hash to see the list of supported hashes, eg Then See the global flags page for global options not listed here. Generate public link to file/folder. rclone link will create or retrieve a public link to the given file or folder. If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account. See the global flags page for global options not listed here. List all the remotes in the config file. rclone listremotes lists all the available remotes from the config file. When uses with the -l flag it lists the types too. See the global flags page for global options not listed here. List directories and objects in remote:path formatted for parsing List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. 
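By way of illustration, a listing in this format might look like the following (remote and file names assumed, not taken from this page):

```
$ rclone lsf remote:path
canole
diwogej7
ferejej3gux/
fubuwic
```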
The other list commands behave consistently with this. Listing a non existent directory will produce an error, except for remotes which can't have empty directories (eg s3, swift, gcs, etc: the bucket based remotes).

rclone lsjson: List directories and objects in the path in JSON format. The output is an array of Items, where each Item looks like this:

```
{
  "Hashes": {
    "SHA-1": "f572d396fae9206628714fb2ce00f72e94f2258f",
    "MD5": "b1946ac92492d2347c6235b4d2611184",
    "DropboxHash": "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
  },
  "ID": "y2djkhiujf83u33",
  "OrigID": "UYOJVTUW00Q1RzTDA",
  "IsBucket": false,
  "IsDir": false,
  "MimeType": "application/octet-stream",
  "ModTime": "2017-05-31T16:15:57.034468261+01:00",
  "Name": "file.txt",
  "Encrypted": "v0qpsdq8anpci8n929v3uu9338",
  "EncryptedPath": "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
  "Path": "full/path/goes/here/file.txt",
  "Size": 6,
  "Tier": "hot"
}
```

As with the other list commands, listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc: the bucket based remotes).

rclone mount: Mount the remote as file system on a mountpoint. rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. First set up your remote using rclone config, then start the mount by giving rclone mount the remote path and a local mount point. On Windows the easiest way around mounting problems is to start the drive from a normal command prompt. It is also possible to start a drive from the SYSTEM account (using the WinFsp.Launcher infrastructure), which creates drives accessible for everyone on the system, or alternatively using the nssm service manager.

Without the use of --vfs-cache-mode this can only write files sequentially; it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes or --vfs-cache-mode full. See the File Caching section for more info.

The bucket based remotes (eg Swift, S3, Google Cloud Storage, B2, Hubic) won't work from the root; you will need to specify a bucket, or a path within the bucket. These remotes also do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

Only supported on Linux, FreeBSD, OS X and Windows at the moment.

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at file caching for solutions to make mount more reliable. This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times.

rclone moveto: Move file or directory from source to dest. If source:path is a file or directory then it moves it to a file or directory named dest:path. This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command; see the sketch below. Important: Since this can cause data loss, test first with the --dry-run flag. Note: Use the -P/--progress flag to view real-time transfer statistics.
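For the moveto behaviour described above, a sketch (the paths are assumed for illustration):

```
# Move/rename a single file on the remote
rclone moveto remote:path/to/old-name.txt remote:path/to/new-name.txt

# With a directory source this acts exactly like "rclone move"
rclone moveto /local/dir remote:backup/dir
```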
Explore a remote with a text based user interface. This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - “What is using all my disk space?”. To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along. This an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands. Note that it might take some time to delete big files/folders. The UI won’t respond in the meantime since the deletion is done synchronously. See the global flags page for global options not listed here. Obscure password for use in the rclone.conf Obscure password for use in the rclone.conf See the global flags page for global options not listed here. Run a command against a running rclone. This runs a command against a running rclone. Use the –url flag to specify an non default URL to connect on. This can be either a “:port” which is taken to mean “http://localhost:port” or a “host:port” which is taken to mean “http://host:port” A username and password can be passed in with –user and –pass. Note that –rc-addr, –rc-user, –rc-pass will be read also for –url, –user, –pass. Use “rclone rc” to see a list of all possible commands. See the global flags page for global options not listed here. Copies standard input to file on remote. rclone rcat reads from standard input (stdin) and copies it to a single remote file. rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you’re better off caching locally and then See the global flags page for global options not listed here. Run rclone listening to remote control commands only. This runs rclone so that it only listens to remote control commands. This is useful if you are controlling rclone via the rc API. If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run. See the rc documentation for more info on the rc flags. See the global flags page for global options not listed here. Remove empty directories under the path. This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in. If you supply the –leave-root flag, it will not remove the root directory. This is useful for tidying up remotes that rclone has left a lot of empty directories in. See the global flags page for global options not listed here. Serve a remote over a protocol. rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg Each subcommand has its own options which you can see in their help. See the global flags page for global options not listed here. Serve remote:path over DLNA rclone serve dlna is a DLNA media server for media stored in a rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. 
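A typical invocation of the DLNA server just described might look like this (the remote path is assumed):

```
rclone serve dlna remote:media
```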
VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs. Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly. This mode should support all normal file system operations. If an upload or download fails it will be retried up to –low-level-retries times. See the global flags page for global options not listed here. Serve remote:path over FTP. rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it. Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to This mode should support all normal file system operations. If an upload or download fails it will be retried up to –low-level-retries times. If you supply the parameter There is an example program bin/test_proxy.py in the rclone source code. The program’s job is to take a This config generated must have this extra parameter - And it may have this parameter - For example the program might take this on STDIN And return this on STDOUT This would mean that an SFTP backend would be created on the fly for the The progam can manipulate the supplied Note that an internal cache is keyed on This can be used to build general purpose proxies to any kind of backend that rclone supports. See the global flags page for global options not listed here. Serve the remote over HTTP. rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it. You can use the filter flags (eg –include, –exclude) to control what is served. The server will log errors. Use -v to see access logs. If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. –server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer. –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. –baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used –baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” and –baseurl “/rclone/” are all treated identically. By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags. This mode should support all normal file system operations. If an upload or download fails it will be retried up to –low-level-retries times. See the global flags page for global options not listed here. Serve the remote for restic’s REST API. 
rclone serve restic implements restic’s REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly. Restic is a command line program for doing backups. The server will log errors. Use -v to see access logs. If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. –server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer. –max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. –baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used –baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” and –baseurl “/rclone/” are all treated identically. By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags. By default this will serve over http. If you want you can serve over https. You will need to supply the –cert and –key flags. If you wish to do client side certificate validation then you will need to supply –client-ca also. –cert should be a either a PEM encoded certificate or a concatenation of that with the CA certificate. –key should be the PEM encoded private key and –client-ca should be the PEM encoded client certificate authority certificate. See the global flags page for global options not listed here. Serve the remote over SFTP. rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it. You can use the filter flags (eg –include, –exclude) to control what is served. The server will log errors. Use -v to see access logs. In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to This mode should support all normal file system operations. If an upload or download fails it will be retried up to –low-level-retries times. If you supply the parameter There is an example program bin/test_proxy.py in the rclone source code. The program’s job is to take a This config generated must have this extra parameter - And it may have this parameter - For example the program might take this on STDIN And return this on STDOUT This would mean that an SFTP backend would be created on the fly for the The progam can manipulate the supplied Note that an internal cache is keyed on This can be used to build general purpose proxies to any kind of backend that rclone supports. See the global flags page for global options not listed here. Serve remote:path over webdav. rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it. If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info. –server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. 
Note that this is the total time for a transfer. --max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header. --baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags. In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age. This mode should support all normal file system operations. If an upload or download fails it will be retried up to --low-level-retries times.

If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly. There is an example program bin/test_proxy.py in the rclone source code. The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. The config generated must have an extra parameter giving the type of the new backend, and it may have a parameter listing values to obscure. For example the program might take a login request on STDIN and return the generated config on STDOUT, meaning that an SFTP backend would be created on the fly for that user. The program can manipulate the supplied user in any way. Note that an internal cache is keyed on the user, so only use that for configuration. This can be used to build general purpose proxies to any kind of backend that rclone supports.

rclone settier: Changes storage class/tier of objects in remote. rclone settier changes the storage tier or class at the remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage (Hot, Cool and Archive), and Google Cloud Storage (Regional Storage, Nearline, Coldline etc).

Note that certain tier changes make objects not available to access immediately. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; a user can restore them by setting the tier to Hot/Cool. Similarly, S3 to Glacier makes the object inaccessible. You can use it to tier a single object, or just provide a remote directory and all files in the directory will be tiered.

rclone touch: Create new file or change file modification time.

rclone tree: List the contents of the remote in a tree like fashion. rclone tree lists the contents of a remote in a similar way to the unix tree command. You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list. The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.

rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory; if it points to a file, rclone will give an error. For example, suppose you have a remote with a single file in it. This can be used when scripting to make aged backups efficiently, eg as in the sketch below. Rclone has a number of options to control its behaviour.
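The aged-backup idea referenced above can be sketched like this (remote and path names assumed for illustration):

```
# Rotate the previous backup out of the way, then refresh it
rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup
```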
Options that take parameters can have the values passed in two ways, Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”. will sync If running rclone from a script you might want to use today’s date as the directory name passed to See Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn’t resolve or resolves to more than one IP address it will give an error. This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section. Eg When using this flag, rclone won’t update mtimes of remote files if they are incorrect as it would normally. When using You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory. See Specify the location of the rclone config file. Normally the config file is in your home directory as a file called Set the connection timeout. This should be in go time format which looks like The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is When using The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory. See Mode to run dedupe command in. One of This switches the log format to JSON for rclone. The fields of json log are level, msg, source, time. This controls the number of low level retries rclone does. A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the Use This will work with the NB that this only works for a local destination but will work with any source. NB that multi thread copies are disabled for local to local copies as they are faster without unless When using multi thread downloads (see above Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s. The default is This is for use with See When using The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. This is for use with files to add the suffix in the current directory or with For example will sync When using So let’s say we had This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file. If an existing destination file has a modification time equal (within the computed modify window precision) to the source file’s, it will be updated if the sizes are different. On remotes which don’t support mod time directly the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file. 
This can be useful when transferring to a remote which doesn't support mod times directly (or when using the server's modified time as described above) as it is more accurate than a size-only check and faster than a full checksum comparison.

If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size). If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator, which may use more memory as memory pages are returned less aggressively to the OS. It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.

Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation. Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync, knowing the local file is newer than the time it was last uploaded to the remote is sufficient; in those cases, this flag can speed up the process and reduce the number of API calls necessary. Using this flag on a sync operation without also using a modification-time based comparison is probably not what you want.

Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries. Every option in rclone can have its default set by environment variable. To find the name of the environment variable, first take the long option name, strip the leading --, change any - to _, make it upper case and prepend RCLONE_. For example, to always set a given option, export the corresponding RCLONE_ variable in your shell.

This will transfer these files only (if they exist). To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths: The 3 files will arrive in You could of course choose And you would transfer it like this In this case there will be an extra This option controls the minimum size file which will be transferred. This defaults to For example You can exclude Currently only one filename is supported, i.e.

Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change. Run this command in a terminal and rclone will download and then display the GUI in a web browser. This will produce logs as it starts, and rclone needs to continue to run to serve the GUI. This assumes you are running rclone locally on your machine. It is possible to separate the rclone and the GUI - see below for details. If you wish to update to the latest API version then you can add the appropriate flag. Once the GUI opens, you will be looking at the dashboard which has an overall overview. On the left hand side you will see a series of view buttons you can click on. (More docs and walkthrough video to come!)

When you run the GUI command, rclone starts its remote control server with a set of default flags; these flags can be overridden as desired. See also the rclone rcd documentation. For example the GUI could be served on a public port over SSL using an htpasswd file, behind a proxy, or with a single user and password instead of htpasswd; see the sketches below. The GUI is being developed in the rclone/rclone-webui-react repository.
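For the GUI flag combinations mentioned above, hedged sketches (certificate paths and credentials assumed, not taken from this page):

```
# Serve the GUI on a public port over SSL using an htpasswd file
rclone rcd --rc-web-gui --rc-addr :443 \
  --rc-htpasswd /path/to/htpasswd \
  --rc-cert /path/to/ssl.crt --rc-key /path/to/ssl.key

# Or a single username and password instead of htpasswd
rclone rcd --rc-web-gui --rc-user me --rc-pass mypassword
```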
Bug reports and contributions are very welcome :-) If you have questions then please ask them on the rclone forum.

If rclone is run with the --rc flag then it starts an HTTP server which can be used to remote control rclone. If you just want to run a remote control then see the rcd command.

--rc-files: If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions. Default Off.

--rc-web-gui: Set this flag to serve the default web gui on the same port as rclone. Default Off.

--rc-allow-origin: Set the allowed Access-Control-Allow-Origin for rc requests. Can be used with --rc-web-gui if rclone is running on a different IP than the web-gui. Default is the IP address on which rc is running.

--rc-web-fetch-url: Set the URL to fetch the rclone-web-gui files from. Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.

--rc-web-gui-update: Set this flag to download / force update rclone-webui-react from the rc-web-fetch-url. Default Off.

--rc-job-expire-duration: Expire finished async jobs older than DURATION (default 60s).

The rc interface supports some special parameters which apply to all commands. These start with "_" to show they are different. Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created, i.e. synchronously. If a job is started asynchronously instead, the call returns immediately with a job id and the task runs in the background. It is recommended that potentially long running jobs, eg the sync and purge operations, are run in the background. Starting a job in the background returns a job id which can be queried for its status later.

Each rc call has its own stats group for tracking its metrics. By default grouping is done by a composite group name derived from a prefix and the job id. Stats for a specific group can be accessed by passing the group name.

cache/expire: Purge a remote from the cache backend. Supports either a directory or a file. Params:
- remote = path to remote (required)
- withData = true/false to delete cached data (chunks) as well (optional)
An example call is sketched after this section.

cache/fetch: Ensure the specified file chunks are cached on disk. The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]. start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is the 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file. Any parameter with a key that starts with "file" can be used to specify files to fetch. File names will automatically be encrypted when a crypt remote is used on top of the cache.

cache/stats: Show statistics for the cache remote.

config/create: This takes the following parameters. See the config create command for more information on the above. Authentication is required for this call.

config/delete: Parameters:
- name - name of remote to delete
See the config delete command for more information on the above. Authentication is required for this call.

config/dump: Returns a JSON object:
- key: value
Where keys are remote names and values are the config parameters. See the config dump command for more information on the above. Authentication is required for this call.

config/get: Parameters:
- name - name of remote to get
See the config dump command for more information on the above. Authentication is required for this call.

config/listremotes: Returns
- remotes - array of remote names
See the listremotes command for more information on the above. Authentication is required for this call.

config/password: This takes the following parameters. See the config password command for more information on the above. Authentication is required for this call.
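The cache/expire call described above can be exercised like this (paths assumed for illustration):

```
# Expire a subtree in the cache backend
rclone rc cache/expire remote=path/to/sub/folder/

# Expire the whole cache, dropping cached data too
rclone rc cache/expire remote=/ withData=true
```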
config/providers: Returns a JSON object:
- providers - array of objects
See the config providers command for more information on the above. Authentication is required for this call.

config/update: This takes the following parameters. See the config update command for more information on the above. Authentication is required for this call.

core/bwlimit: This sets the bandwidth limit to that passed in. If the rate parameter is not supplied then the bandwidth is queried. The format of the parameter is exactly the same as passed to --bwlimit, except only one bandwidth may be specified. In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number.

core/gc: This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.

core/memstats: This returns the memory statistics of the running program. What the values mean are explained in the go docs: https://golang.org/pkg/runtime/#MemStats The most interesting values for most people are:

core/obscure: Pass a clear string and rclone will obscure it for the config file:
- clear - string
Returns
- obscured - string

core/pid: This returns the PID of the current process. Useful for stopping the rclone process.

core/stats: This returns all available stats. Returns the following values:

```
{
  "speed": average speed in bytes/sec since start of the process,
  "bytes": total transferred bytes since the start of the process,
  "errors": number of errors,
  "fatalError": whether there has been at least one FatalError,
  "retryError": whether there has been at least one non-NoRetryError,
  "checks": number of checked files,
  "transfers": number of transferred files,
  "deletes": number of deleted files,
  "elapsedTime": time in seconds since the start of the process,
  "lastError": last occurred error,
  "transferring": an array of currently active file transfers:
    [
      {
        "bytes": total transferred bytes for this file,
        "eta": estimated time in seconds until file transfer completion,
        "name": name of the file,
        "percentage": progress of the file transfer in percent,
        "speed": speed in bytes/sec,
        "speedAvg": speed in bytes/sec as an exponentially weighted moving average,
        "size": size of the file in bytes
      }
    ],
  "checking": an array of names of currently active file checks
    []
}
```

Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.

core/group-list: This returns a list of stats groups currently in memory.

core/transferred: This returns stats about completed transfers:

```
{
  "transferred": an array of completed transfers (including failed ones):
    [
      {
        "name": name of the file,
        "size": size of the file in bytes,
        "bytes": total transferred bytes for this file,
        "checked": if the transfer is only checked (skipped, deleted),
        "timestamp": integer representing millisecond unix epoch,
        "error": string description of the error (empty if successful),
        "jobid": id of the job that this transfer belongs to
      }
    ]
}
```

core/version: This shows the current version of rclone and the go runtime:
- version - rclone version, eg "v1.44"
- decomposed - version number as [major, minor, patch, subpatch] - note patch and subpatch will be 999 for a git compiled version
- isGit - boolean - true if this was compiled from the git version
- os - OS in use as according to Go
- arch - cpu architecture in use according to Go
- goVersion - version of Go runtime in use

job/list: Parameters - None. Results:
- jobids - array of integer job ids

job/status: Parameters:
- jobid - id of the job (integer)
Results:
- finished - boolean
- duration - time in seconds that the job ran for
- endTime - time the job finished (eg
“2018-10-26T18:50:20.528746884+01:00”) - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above - startTime - time the job started (eg “2018-10-26T18:50:20.528336039+01:00”) - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously Results - finished - boolean - duration - time in seconds that the job ran for - endTime - time the job finished (eg “2018-10-26T18:50:20.528746884+01:00”) - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above - startTime - time the job started (eg “2018-10-26T18:50:20.528336039+01:00”) - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously - progress - output of the progress related to the underlying job Parameters - jobid - id of the job (integer) This takes the following parameters The result is as returned from rclone about –json See the about command command for more information on the above. Authentication is required for this call. This takes the following parameters See the cleanup command command for more information on the above. Authentication is required for this call. This takes the following parameters Authentication is required for this call. This takes the following parameters See the copyurl command command for more information on the above. Authentication is required for this call. This takes the following parameters See the delete command command for more information on the above. Authentication is required for this call. This takes the following parameters See the deletefile command command for more information on the above. Authentication is required for this call. This takes the following parameters This command does not have a command line equivalent so use this instead: This takes the following parameters See the lsjson command for more information on the above and examples. Authentication is required for this call. This takes the following parameters See the mkdir command command for more information on the above. Authentication is required for this call. This takes the following parameters Authentication is required for this call. This takes the following parameters See the link command command for more information on the above. Authentication is required for this call. This takes the following parameters See the purge command command for more information on the above. Authentication is required for this call. This takes the following parameters See the rmdir command command for more information on the above. Authentication is required for this call. This takes the following parameters See the rmdirs command command for more information on the above. Authentication is required for this call. This takes the following parameters See the size command command for more information on the above. Authentication is required for this call. Returns - options - a list of the options block names Returns an object where keys are option block names and values are an object with the current option values in. This shows the internal names of the option within rclone which should map to the external options very easily with a few exceptions. Parameters And this sets NOTICE level logs (normal without -v) This returns an error with the input as part of its error string. Useful for testing error handling. 
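A quick way to smoke-test the rc interface end to end is the echo call covered next (the parameter names here are arbitrary):

```
rclone rc rc/noop param1=one param2=two
```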
rc/list: This lists all the registered remote control commands as a JSON map in the commands response.

rc/noop: This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

rc/noopauth: This echoes the input parameters to the output parameters for testing purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly. Authentication is required for this call.

sync/copy: This takes the following parameters. See the copy command for more information on the above. Authentication is required for this call.

sync/move: This takes the following parameters. See the move command for more information on the above. Authentication is required for this call.

sync/sync: This takes the following parameters. See the sync command for more information on the above. Authentication is required for this call.

vfs/forget: This forgets the paths in the directory cache, causing them to be re-read from the remote when needed. If no paths are passed in then it will forget all the paths in the directory cache. Otherwise pass files or dirs in as file=path or dir=path. Any parameter key starting with file will forget that file and any starting with dir will forget that dir.

vfs/poll-interval: Without any parameter given this returns the current status of the poll-interval setting. When the interval=duration parameter is set, the poll-interval value is updated and the polling function is notified. Setting interval=0 disables poll-interval. The timeout=duration parameter can be used to specify a time to wait for the current poll function to apply the new value. If timeout is less than or equal to 0, which is the default, it waits indefinitely. The new poll-interval value will only be active when the timeout is not reached. If poll-interval is updated or disabled temporarily, some changes might not get picked up by the polling function, depending on the remote used.

vfs/refresh: This reads the directories for the specified paths and freshens the directory cache. If no paths are passed in then it will refresh the root directory.

This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash. This is also used to return the space used, available for rclone mount. If the server can't do about then an error is returned.

The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.

This describes the global flags available to every rclone command, split into two groups: non backend and backend flags. The non backend flags are available for every command. The backend flags are also available for every command; they control the backends and may be set in the config file.

This is a backend for the 1Fichier cloud storage service. Note that a Premium subscription is required to use the API. Paths are specified as remote:path and may be as deep as required, eg remote:directory/subdirectory. The initial setup for 1Fichier involves getting the API key from the website, which you need to do in your browser. Running rclone config will guide you through an interactive setup process. Once configured you can then use rclone to list directories in the top level of your 1Fichier account, list all the files in your account, or copy a local directory to a 1Fichier directory called backup.

1Fichier does not support modification times. It supports the Whirlpool hash algorithm. 1Fichier can have two files with exactly the same name and path (unlike a normal file system).
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. 1Fichier does not support the characters Here are the standard options specific to fichier (1Fichier). Your API Key, get it from https://1fichier.com/console/params.pl Here are the advanced options specific to fichier (1Fichier). If you want to download a shared folder, add this parameter The Paths may be as deep as required or a local path, eg Copy another local directory to the alias directory called source Here are the standard options specific to alias (Alias for an existing remote). Remote or path to alias. Can be “myremote:path/to/dir”, “myremote:bucket”, “myremote:” or “/local/path”. Let’s say you usually use Here are the standard options specific to amazon cloud drive (Amazon Drive). Amazon Application Client ID. Here are the advanced options specific to amazon cloud drive (Amazon Drive). Auth server URL. Leave blank to use Amazon’s. When using the Example policy: Notes on above: In this case you need to restore the object(s) in question before using rclone. Note that rclone only speaks the S3 API it does not speak the Glacier Vault API, so rclone cannot directly access Glacier Vaults. Here are the standard options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)). Choose your S3 provider. Here are the advanced options specific to s3 (Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)). Canned ACL used when creating buckets. Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them. Note that when using Rclone supports generating file share links for private B2 buckets. They can either be for a file for example: or if run on a directory you will get: you can then use the authorization token (the part of the url from the Here are the standard options specific to b2 (Backblaze B2). Account ID or Application Key ID Here are the advanced options specific to b2 (Backblaze B2). Endpoint for the service. Leave blank normally. Custom endpoint for downloads. This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Leave blank if you want to use the endpoint provided by Backblaze. This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze. Time before the authorization token will expire in s or suffix ms|s|m|h|d. The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. Paths are specified as Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. Box supports SHA1 type hashes, so you can use the Depending on the enterprise settings for your user, the item will either be actually deleted from Box or moved to the trash. Here are the standard options specific to box (Box). Box App Client Id. Leave blank normally. Here are the advanced options specific to box (Box). Cutoff for switching to multipart upload (>= 50MB). Purge a remote from the cache backend. Supports either a directory or a file. 
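The passage that follows discusses B2 file versions. An assumed illustration of such a listing (the bucket name, sizes and timestamps here are invented):

```
$ rclone -q --b2-versions ls b2:cleanup-test
        9 one.txt
        8 one-v2016-07-04-141032-000.txt
```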
It supports both encrypted and unencrypted file names if cache is wrapped by crypt. Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional, false by default) Here are the standard options specific to cache (Cache a remote). Remote to cache. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended). Here are the advanced options specific to cache (Cache a remote). The plex token for authentication - auto set normally Encrypts the whole file path including directory names Example: False Only encrypts file names, skips directory names Example: Crypt stores modification times using the underlying remote so support depends on that. Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator. Note that you should use the Here are the standard options specific to crypt (Encrypt/Decrypt a remote). Remote to encrypt/decrypt. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended). Here are the advanced options specific to crypt (Encrypt/Decrypt a remote). For all files listed show how the names encrypt. If you wish to see Team Folders you must use a leading You can then use team folders like this A leading Dropbox supports modified times, but the only way to set a modification time is to re-upload the file. This means that if you uploaded your data with an older version of rclone which didn’t support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don’t want this to happen use Dropbox supports its own hash type which is checked for all transfers. Here are the standard options specific to dropbox (Dropbox). Dropbox App Client Id Leave blank normally. Here are the advanced options specific to dropbox (Dropbox). Upload chunk size. (< 150M). FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is Here are the standard options specific to ftp (FTP Connection). FTP host to connect to Here are the advanced options specific to ftp (FTP Connection). Maximum number of FTP simultaneous connections, 0 for unlimited Google google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the “mtime” key in RFC3339 format accurate to 1ns. Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). Google Application Client Id Leave blank normally. Here are the standard options specific to drive (Google Drive). Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance. Here are the advanced options specific to drive (Google Drive). Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login. This is because rclone can’t find out the size of the Google docs without downloading them. Google docs will transfer correctly with However an unfortunate consequence of this is that you can’t download Google docs using Sometimes, for no reason I’ve been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files. 
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use rclone dedupe to fix duplicated files.

To make your own Google Drive client ID:

1. Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access.)
2. Select a project or create a new project.
3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".
4. Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
5. Choose an application type of "other", and click "Create". (The default name is fine.)
6. It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.

(Thanks to @balazer on github for these instructions.)

The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos. NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.

The initial setup for google photos involves getting a token from Google Photos, which you need to do in your browser. Running rclone config will guide you through an interactive setup process. Note that rclone runs a webserver on a local port on your machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. Once configured, the remote can be used to see all the albums in your photos, make a new album, list the contents of an album, and sync files to Google Photos.

As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it. The directories under Note that all your photos and videos will appear somewhere under There are two writable parts of the tree, the The Directories within the and the images directory contains Then rclone will create the following albums with the following files in This means that you can use the

Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item. Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.

When images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115. When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.

If a file name is duplicated in a directory then rclone will add the file ID into its name, so two files with the same name are disambiguated. If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone.
As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it. The directories under media show different ways of categorising the media. Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you’ve put them into albums.
There are two writable parts of the tree: the upload directory and sub directories of the album directory. Directories within the album directory are also writeable, and if you copy a directory hierarchy in there (for example where the images directory contains subdirectories) then rclone will create the corresponding albums with the corresponding files in them. This means that you can use the album directory much like a normal filesystem.
Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn’t understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.
Note that all media items uploaded to Google Photos through the API are stored in full resolution at “original quality” and will count towards your storage quota in your Google Account. The API does not offer a way to upload in “high quality” mode.
When images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044.
If a file name is duplicated in a directory then rclone will add the file ID into its name, so the two files can be told apart.
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album, the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to the album.
The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known. This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.
The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check. It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter. If you want to use the backend with rclone mount you will need this flag.
Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.
Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781.
Rclone cannot delete files anywhere except under album.
The Google Photos API does not support deleting albums - see bug #135714733.
Here are the standard options specific to google photos (Google Photos).
Google Application Client Id. Leave blank normally.
Google Application Client Secret. Leave blank normally.
Set to make the Google Photos backend read only. If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.
Here are the advanced options specific to google photos (Google Photos).
Set to read the size of media items. Normally rclone does not read the size of media items since this takes another transaction. This isn’t necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.
The HTTP remote is a read only remote for reading files off a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn’t then please file an issue, or send a pull request!)
Paths are specified as remote: or remote:path/to/dir.
This remote is read only - you can’t upload files to an HTTP server. Most HTTP servers store time accurate to 1 second. No checksums are stored.
Since the http remote only has one config parameter it is easy to use without a config file:
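For example, to list the files served by a webserver without creating a remote (the URL is illustrative):
rclone lsd --http-url https://beta.rclone.org :http: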
Here are the standard options specific to http (http Connection).
URL of http host to connect to.
Here are the advanced options specific to http (http Connection).
Set HTTP headers for all transactions. Use this to set additional HTTP headers for all transactions. The input format is comma separated list of key,value pairs. Standard CSV encoding may be used. For example to set a Cookie use ‘Cookie,name=value’, or ‘“Cookie”,“name=value”’. You can set multiple headers, eg ‘“Cookie”,“name=value”,“Authorization”,“xxx”’.
Set this if the site doesn’t end directories with /. Use this if your target website does not use / on the end of directories.
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory.
The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch. This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
Note that Hubic wraps the Swift backend, so most of the properties of the Swift backend are the same.
Here are the standard options specific to hubic (Hubic).
Hubic Client Id. Leave blank normally.
Here are the advanced options specific to hubic (Hubic).
Above this size files will be chunked into a _segments container.
This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to long wait time before the first results are shown.
Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR environment variable points to) before it is uploaded.
By default rclone will send all files to the trash when deleting files. Due to a lack of API documentation emptying the trash is currently only possible via the Jottacloud website. If deleting permanently is required then use the hard delete option.
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
Jottacloud requires each ‘device’ to be registered. Rclone brings such a registration to easily access your account, but if you want to use Jottacloud together with rclone on multiple machines you NEED to create a separate deviceID/deviceSecret on each machine. You will be asked for it when setting up the remote. Please be aware that this also means that copying the rclone config from one machine to another does NOT work with Jottacloud accounts. You have to create it on each machine.
Here are the standard options specific to jottacloud (JottaCloud).
User Name:
Here are the advanced options specific to jottacloud (JottaCloud).
Files bigger than this will be cached on disk to calculate the MD5 if required.
Note that Jottacloud is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
There are quite a few characters that can’t be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.
Jottacloud only supports filenames up to 255 characters in length.
To copy a local directory to a Koofr directory called backup:
Here are the standard options specific to koofr (Koofr).
Your Koofr user name.
Here are the advanced options specific to koofr (Koofr).
The Koofr API endpoint to use.
Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
Note that Koofr is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
Mega is a cloud storage and file hosting service known for its security feature where all files are encrypted locally before they are uploaded. This prevents anyone (including employees of Mega) from accessing the files without knowledge of the key used for encryption.
To copy a local directory to a Mega directory called backup:
Mega does not support modification times or hashes yet.
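The two copy operations mentioned above would look like this, assuming remotes named koofr: and mega: (the remote names and paths are placeholders):
rclone copy /home/source koofr:backup
rclone copy /home/source mega:backup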
Mega can have two files with exactly the same name and path (unlike a normal file system). Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use rclone dedupe to fix duplicated files.
Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.
So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, likely you have got the remote blocked for a while.
Here are the standard options specific to mega (Mega).
User name.
Here are the advanced options specific to mega (Mega).
Output more debug from Mega.
This backend uses the go-mega go library which is an opensource go library implementing the Mega API. There doesn’t appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library. Mega allows duplicate files which may confuse rclone.
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory.
The modified time is stored as metadata on the object with the mtime key. MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk.
Files can’t be split into more than 50,000 chunks, so by default the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates less than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using a larger chunk size.
Note that rclone doesn’t commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won’t allow more than that amount of uncommitted blocks.
Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
Storage Account Name (leave blank to use SAS URL or Emulator).
Storage Account Key (leave blank to use SAS URL or Emulator).
SAS URL for container level access only (leave blank if using account/key or Emulator).
Uses local storage emulator if provided as ‘true’ (leave blank if using real azure storage endpoint).
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
Endpoint for the service. Leave blank normally.
MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.
You can test rclone with the storage emulator locally. To do this make sure the azure storage emulator is installed locally and set up a new remote with use_emulator set to ‘true’.
Paths are specified as remote:path. Paths may be as deep as required, eg remote:directory/subdirectory.
Now the application is complete. Run rclone config to create or edit the remote.
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive personal supports SHA1 type hashes. OneDrive for business and Sharepoint Server support QuickXorHash. For all types of OneDrive you can use the --checksum flag.
Any files you delete with rclone will end up in the trash. Microsoft doesn’t provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft’s apps or via the OneDrive website.
Here are the standard options specific to onedrive (Microsoft OneDrive).
Microsoft App Client Id. Leave blank normally.
Here are the advanced options specific to onedrive (Microsoft OneDrive).
Chunk size to upload files with - must be multiple of 320k.
Note that OneDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
There are quite a few characters that can’t be in OneDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.
The largest allowed file sizes are 15GB for OneDrive for Business and 35GB for OneDrive Personal (Updated 4 Jan 2019).
OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Here are the standard options specific to opendrive (OpenDrive).
Username.
Note that OpenDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
There are quite a few characters that can’t be in OpenDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it, it will be mapped to ？ instead.
Here are the standard options specific to qingstor (QingCloud Object Storage).
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
Here are the advanced options specific to qingstor (QingCloud Object Storage).
Number of connection retries.
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is “dirty”. By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
Here are the standard options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Get swift credentials from environment variables in standard OpenStack form.
Here are the advanced options specific to swift (Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
Above this size files will be chunked into a _segments container.
The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch. This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.
To copy a local directory to a pCloud directory called backup:
pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.
pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum flag.
Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash.
Here are the standard options specific to pcloud (Pcloud).
Pcloud App Client Id. Leave blank normally.
Paths are specified as remote:path. Paths may be as deep as required, eg remote:directory/subdirectory.
The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser.
Here is an example of how to make a remote called remote. This will guide you through an interactive setup process:
See the remote setup docs for how to set it up on a machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this:
List directories in top level of your premiumize.me
List all the files in your premiumize.me
To copy a local directory to a premiumize.me directory called backup
premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking.
Here are the standard options specific to premiumizeme (premiumize.me).
API Key. This is not normally used - use oauth instead.
Note that premiumize.me is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”. premiumize.me file names can’t have the \ or " characters in; rclone maps these to and from identical looking unicode equivalents. premiumize.me only supports filenames up to 255 characters in length.
Paths are specified as remote:path. put.io paths may be as deep as required, eg remote:directory/subdirectory.
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. Here is an example of how to make a remote called remote. This will guide you through an interactive setup process:
Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
You can then use it like this:
List directories in top level of your put.io
List all the files in your put.io
To copy a local directory to a put.io directory called backup
SFTP is the Secure (or SSH) File Transfer Protocol. The SFTP backend can be used with a number of different providers. SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.
Paths are specified as remote:path. Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.
And then at the end of the session the ssh-agent can be stopped. These commands can be used in scripts of course.
Modified times are stored on the server to 1 second precision. Modified times are used in syncing and are fully supported. Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your RClone backend configuration to disable this behaviour.
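A backend configured this way might look like the following sketch (the host and user values are placeholders):
[sftpremote]
type = sftp
host = example.com
user = sftpuser
set_modtime = false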
Here are the standard options specific to sftp (SSH/SFTP Connection).
SSH host to connect to.
Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
Here are the advanced options specific to sftp (SSH/SFTP Connection).
Allow asking for SFTP password when needed.
The command used to read md5 hashes. Leave blank for autodetect.
The command used to read sha1 hashes. Leave blank for autodetect.
SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote’s PATH. SFTP also supports about if the same login has shell access and df is in the remote’s PATH.
Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can’t be calculated properly. For them using disable_hashcheck is a good idea.
SFTP isn’t supported under plan9 until this issue is fixed.
Note that since SFTP isn’t HTTP based the following flags don’t work with it: --dump-headers, --dump-bodies, --dump-auth.
Note that C14 is supported through the SFTP backend.
rsync.net is supported through the SFTP backend. See rsync.net’s documentation of rclone examples.
The union remote presents several remotes as one. Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.
Copy another local directory to the union directory called source, which will be placed into the last remote in the union (the last remote is the one written to).
Here are the standard options specific to union (Union merges the contents of several remotes).
List of space separated remotes. Can be ‘remotea:test/dir remoteb:’, ‘“remotea:test/space dir” remoteb:’, etc. The last remote is used to write to.
To copy a local directory to a WebDAV directory called backup
Plain WebDAV does not support modified times. However when used with Owncloud or Nextcloud rclone will support modified times. Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
Here are the standard options specific to webdav (Webdav).
URL of http host to connect to.
Here are the advanced options specific to webdav (Webdav).
Command to run to get a bearer token.
See below for notes on specific providers.
Owncloud supports modified times using the X-OC-Mtime header.
This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat) whereas Owncloud does.
put.io can be accessed in a read only way using webdav. Configure the url as https://webdav.put.io and use your put.io credentials. Your config file should end up looking like this:
For more help see the put.io webdav docs.
Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner github#1975. This means that these accounts can’t be added using the official API (other Accounts should work with the “onedrive” option). However, it is possible to access them using webdav.
dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens.
Configure as normal using the ‘other’ vendor. The config will end up looking something like this.
There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache.
dCache also supports authenticating with OpenID-Connect access tokens. OpenID-Connect is a protocol (based on OAuth 2.0) that allows services to identify users who have authenticated with some central service.
Support for OpenID-Connect in rclone is currently achieved using another software package called oidc-agent. This is a command-line tool that facilitates obtaining an access token. Once installed and configured, an access token is obtained by running the oidc-token command.
Note Before the oidc-token command will work, the refresh token must be loaded into the oidc agent. This is typically done once per login session.
The rclone bearer_token_command configuration option is used to fetch the access token from oidc-agent.
Configure as a normal WebDAV endpoint, using the ‘other’ vendor, leaving the username and password empty.
When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., oidc-token XDC).
The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.
Yandex Disk is a cloud storage solution created by Yandex. Yandex paths may be as deep as required, eg remote:directory/subdirectory.
Sync /home/local/directory to the remote path, deleting any excess files in the path.
Modified times are supported and are stored accurate to 1 ns in custom metadata in RFC3339 with nanoseconds format.
MD5 checksums are natively supported by Yandex Disk.
If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files.
To view your current quota you can use the rclone about remote: command.
When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter.
Here are the standard options specific to yandex (Yandex Disk).
Yandex Client Id. Leave blank normally.
Here are the advanced options specific to yandex (Yandex Disk).
Remove existing public link to file/folder with link command rather than creating. Default is false, meaning link command will create or retrieve public link.
Local paths are specified as normal filesystem paths, so rclone sync /home/source /tmp/destination will sync /home/source to /tmp/destination.
These can be configured into the config file for consistency’s sake, but it is probably easier not to.
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn’t supported (eg Windows) it will be ignored.
Here are the standard options specific to local (Local Disk).
Disable UNC (long path names) conversion on Windows.
Here are the advanced options specific to local (Local Disk).
Follow symlinks and copy the pointed to item.
Force the filesystem to report itself as case sensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
Force the filesystem to report itself as case insensitive. Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
With remotes that have a concept of directory, eg Local and Drive, empty directories may be left behind, or not created when one was expected. This is because rclone doesn’t have a concept of a directory - it only works on objects. Most of the object storage systems can’t actually store a directory so there is nowhere for rclone to store anything about directories.
You can work round this to some extent with the purge command which will delete everything under the path, inclusive of path. This may be fixed at some point in Issue #100.
Rclone doesn’t currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
Currently rclone loads each directory entirely into memory before using it. Since each Rclone object takes 0.5k-1k of memory this can take a very long time and use an extremely large amount of memory. Millions of files in a directory tend to be caused by software writing to cloud storage (eg S3 buckets).
Bucket based remotes (eg S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket based remote will tend to disappear. Some software creates empty keys ending in / to simulate directories.
Bugs are stored in rclone’s Github project:
Yes they do.
All the rclone commands (eg sync, copy) will work on all the remote storage systems.
rclone(1) User Manual
-Rclone
-
+Rclone - rsync for cloud storage
+
@@ -155,6 +159,7 @@ go build
rclone config
+
rclone config [flags]
Options
+ -h, --help help for configSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone copy
Synopsis
@@ -242,11 +253,11 @@ destpath/sourcepath/two.txt
Options
+ --create-empty-src-dirs Create empty source dirs on destination after copy
-h, --help help for copySEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone sync
Synopsis
@@ -260,11 +271,11 @@ destpath/sourcepath/two.txt
Options
+ --create-empty-src-dirs Create empty source dirs on destination after sync
-h, --help help for syncSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone move
Synopsis
@@ -280,11 +291,11 @@ destpath/sourcepath/two.txt
+ --create-empty-src-dirs Create empty source dirs on destination after move
--delete-empty-src-dirs Delete empty source dirs after move
-h, --help help for moveSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone delete
Synopsis
@@ -300,11 +311,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone delete remote:path [flags]Options
+ -h, --help help for deleteSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone purge
Synopsis
@@ -312,11 +323,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone purge remote:path [flags]Options
+ -h, --help help for purgeSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone mkdir
Synopsis
@@ -324,11 +335,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone mkdir remote:path [flags]Options
+ -h, --help help for mkdirSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone rmdir
Synopsis
@@ -336,11 +347,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone rmdir remote:path [flags]Options
+ -h, --help help for rmdirSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone check
Synopsis
@@ -353,11 +364,11 @@ rclone --dry-run --min-size 100M delete remote:path
+ --download Check by downloading rather than with hash.
-h, --help help for check
--one-way Check one way only, source files must exist on remoteSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone ls
Synopsis
@@ -384,11 +395,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone ls remote:path [flags]Options
+ -h, --help help for lsSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone lsd
Synopsis
@@ -420,11 +431,11 @@ rclone --dry-run --min-size 100M delete remote:path
Options
+ -h, --help help for lsd
-R, --recursive Recurse into the listing.SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone lsl
Synopsis
@@ -451,11 +462,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone lsl remote:path [flags]Options
+ -h, --help help for lslSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone md5sum
Synopsis
@@ -463,11 +474,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone md5sum remote:path [flags]Options
+ -h, --help help for md5sumSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone sha1sum
Synopsis
@@ -475,11 +486,11 @@ rclone --dry-run --min-size 100M delete remote:path
rclone sha1sum remote:path [flags]Options
+ -h, --help help for sha1sumSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone size
Synopsis
@@ -488,11 +499,11 @@ rclone --dry-run --min-size 100M delete remote:path
Options
+ -h, --help help for size
--json format output as JSONSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone version
Synopsis
@@ -518,11 +529,11 @@ beta: 1.42.0.5 (released 2018-06-17)
Options
+ --check Check for new version.
-h, --help help for versionSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone cleanup
Synopsis
@@ -530,11 +541,11 @@ beta: 1.42.0.5 (released 2018-06-17)
rclone cleanup remote:path [flags]Options
+ -h, --help help for cleanupSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone dedupe
Synopsis
@@ -601,11 +612,11 @@ two-3.txt: renamed from: two.txt
Options
+ --dedupe-mode string Dedupe mode interactive|skip|first|newest|oldest|rename. (default "interactive")
-h, --help help for dedupeSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone about
Synopsis
@@ -645,11 +656,11 @@ Other: 8849156022
+ --full Full numbers instead of SI units
-h, --help help for about
--json Format output as JSONSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone authorize
Synopsis
@@ -657,11 +668,11 @@ Other: 8849156022
rclone authorize [flags]Options
+ -h, --help help for authorizeSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone cachestats
Synopsis
@@ -669,11 +680,11 @@ Other: 8849156022
rclone cachestats source: [flags]Options
+ -h, --help help for cachestatsSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone cat
Synopsis
@@ -693,11 +704,11 @@ Other: 8849156022
-h, --help help for cat
--offset int Start printing at offset N (or from end if -ve).
--tail int Only print the last N characters.
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone config create
Synopsis
@@ -711,11 +722,11 @@ Other: 8849156022
rclone config create <name> <type> [<key> <value>]* [flags]Options
+ -h, --help help for createSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone config delete
Synopsis
@@ -723,89 +734,117 @@ Other: 8849156022
rclone config delete <name> [flags]Options
+ -h, --help help for deleteSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
-rclone config dump
-rclone config disconnect
+Synopsis
-
+rclone config dump [flags]rclone config disconnect remote: [flags]Options
-
+ -h, --help help for dump
+ -h, --help help for disconnectSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
-rclone config edit
-rclone config dump
+Synopsis
-
+rclone config edit [flags]rclone config dump [flags]Options
-
+ -h, --help help for edit
+ -h, --help help for dumpSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
-rclone config file
-rclone config edit
+Synopsis
-
+rclone config file [flags]rclone config edit [flags]Options
-
+ -h, --help help for file
+ -h, --help help for editSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
+rclone config file
+Synopsis
+
+rclone config file [flags]Options
+
+ -h, --help help for fileSEE ALSO
+
+
rclone config password
Synopsis
+Synopsis
rclone config password myremote fieldname mypassword
-rclone config password <name> [<key> <value>]+ [flags]Options
-
- -h, --help help for passwordSEE ALSO
-
-
-Auto generated by spf13/cobra on 15-Jun-2019
-rclone config providers
-Synopsis
-rclone config providers [flags]Options
-
+ -h, --help help for providers
+ -h, --help help for passwordSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
-rclone config show
-rclone config providers
+Synopsis
-
+rclone config show [<remote>] [flags]rclone config providers [flags]Options
-
+ -h, --help help for show
+ -h, --help help for providersSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
+rclone config reconnect
+Synopsis
+
+rclone config reconnect remote: [flags]Options
+
+ -h, --help help for reconnectSEE ALSO
+
+
+rclone config show
+Synopsis
+
+rclone config show [<remote>] [flags]Options
+
+ -h, --help help for showSEE ALSO
+
+
rclone config update
Synopsis
+Synopsis
@@ -813,16 +852,29 @@ Other: 8849156022
rclone config update myremote swift env_auth true
rclone config update myremote swift env_auth true config_refresh_token false
-rclone config update <name> [<key> <value>]+ [flags]Options
+Options
- -h, --help help for updateSEE ALSO
+SEE ALSO
+
+
+rclone config userinfo
+Synopsis
+
+rclone config userinfo remote: [flags]Options
+
+ -h, --help help for userinfo
+ --json Format output as JSONSEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone copyto
Synopsis
+Synopsis
-P/--progress flag to view real-time transfer statistics
-rclone copyto source:path dest:path [flags]Options
+Options
- -h, --help help for copytoSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone copyurl
Synopsis
+Synopsis
-rclone copyurl https://example.com dest:path [flags]Options
+Options
- -h, --help help for copyurlSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone cryptcheck
Synopsis
+Synopsis
-rclone cryptcheck remote:path cryptedremote:path [flags]Options
+Options
- -h, --help help for cryptcheck
--one-way Check one way only, source files must exist on destinationSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone cryptdecode
Synopsis
+Synopsis
-rclone cryptdecode encryptedremote: encryptedfilename [flags]Options
+Options
- -h, --help help for cryptdecode
--reverse Reverse cryptdecode, encrypts filenamesSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone dbhashsum
Synopsis
+Synopsis
-rclone dbhashsum remote:path [flags]Options
+Options
- -h, --help help for dbhashsumSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone deletefile
Synopsis
+Synopsis
Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn’t obey include/exclude filters - if the specified file exists, it will always be removed.
-rclone deletefile remote:path [flags]Options
+Options
- -h, --help help for deletefileSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone genautocomplete
Synopsis
+Synopsis
Options
+Options
- -h, --help help for genautocompleteSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone genautocomplete bash
Synopsis
+Synopsis
@@ -942,16 +994,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
sudo rclone genautocomplete bash
. /etc/bash_completion
-rclone genautocomplete bash [output_file] [flags]Options
+Options
- -h, --help help for bashSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone genautocomplete zsh
Synopsis
+Synopsis
@@ -959,28 +1011,28 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
sudo rclone genautocomplete zsh
autoload -U compinit && compinit
-rclone genautocomplete zsh [output_file] [flags]Options
+Options
- -h, --help help for zshSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone gendocs
Synopsis
+Synopsis
-rclone gendocs output_directory [flags]Options
+Options
- -h, --help help for gendocsSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone hashsum
Synopsis
+Synopsis
-$ rclone hashsum
@@ -992,45 +1044,45 @@ Supported hashes are:
$ rclone hashsum MD5 remote:path
-rclone hashsum <hash> remote:path [flags]Options
+Options
- -h, --help help for hashsumSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone link
Synopsis
+Synopsis
rclone link remote:path/to/file
rclone link remote:path/to/folder/
-rclone link remote:path [flags]Options
+Options
- -h, --help help for linkSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone listremotes
Synopsis
+Synopsis
-rclone listremotes [flags]Options
+Options
- -h, --help help for listremotes
--long Show the type as well as names.SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone lsf
Synopsis
+Synopsis
$ rclone lsf swift:bucket
@@ -1100,7 +1152,7 @@ rclone copy --files-from new_files /path/to/local remote:path
lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.
-rclone lsf remote:path [flags]Options
+Options
-h, --help help for lsf
-R, --recursive Recurse into the listing.
-s, --separator string Separator for the items in the format. (default ";")
--absolute Put a leading / in front of path names.
--csv Output in CSV format.
-d, --dir-slash Append a slash to directory names. (default true)
@@ -1111,14 +1163,14 @@ rclone copy --files-from new_files /path/to/local remote:path
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone lsjson
Synopsis
+Synopsis
lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.
-rclone lsjson remote:path [flags]Options
+Options
--no-modtime Don't read the modification time (can speed things up).
--original Show the ID of the underlying Object.
-R, --recursive Recurse into the listing.
- --dirs-only Show only directories in the listing.
-M, --encrypted Show the encrypted names.
--files-only Show only files in the listing.
@@ -1154,14 +1206,14 @@ rclone copy --files-from new_files /path/to/local remote:path
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone mount
Synopsis
+Synopsis
rclone config. Check it works with rclone ls etc.
Limitations
The bucket based remotes cannot be mounted from the root, so swift: won’t work whereas swift:bucket will as will swift:bucket/path. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
rclone mount vs rclone sync/copy
-rclone mount remote:path /path/to/mountpoint [flags]Options
+Options
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
--volname string Set the volume name (not supported by all OSes).
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
- --allow-non-empty Allow mounting over a non-empty directory.
--allow-other Allow access to other users.
--allow-root Allow access to root user.
@@ -1292,14 +1344,14 @@ umount /path/to/local/mount
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone moveto
Synopsis
+Synopsis
-P/--progress flag to view real-time transfer statistics.
-rclone moveto source:path dest:path [flags]Options
+Options
- -h, --help help for movetoSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone ncdu
Synopsis
+Synopsis
-rclone ncdu remote:path [flags]Options
+Options
- -h, --help help for ncduSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone obscure
Synopsis
+Synopsis
-rclone obscure password [flags]Options
+Options
- -h, --help help for obscureSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone rc
Synopsis
+Synopsis
rclone rc --loopback operations/about fs=/
-rclone rc commands parameter [flags]Options
+Options
- -h, --help help for rc
--json string Input JSON - use instead of key=value args.
--loopback If set connect to this rclone instance not via HTTP.
@@ -1382,14 +1435,14 @@ if src is directory
--pass string Password to use to connect to rclone remote control.
--url string URL to connect to rclone remote control. (default "http://localhost:5572/")
--user string Username to use to rclone remote control.SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone rcat
Synopsis
+Synopsis
@@ -1397,53 +1450,54 @@ ffmpeg - | rclone rcat remote:path/to/file
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
What is considered a small file may be set through --streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance. If you need to transfer a lot of data, you may be better off caching it locally and then using rclone move to send it to the destination.
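For example, to raise the cutoff so that streamed files up to 1M are buffered in RAM and uploaded in a single request (the value is illustrative):
echo "hello world" | rclone rcat --streaming-upload-cutoff 1M remote:path/to/file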
-rclone rcat remote:path [flags]Options
+Options
- -h, --help help for rcatSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone rcd
Synopsis
+Synopsis
-rclone rcd <path to files to serve>* [flags]Options
+Options
- -h, --help help for rcdSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone rmdirs
Synopsis
+Synopsis
-rclone rmdirs remote:path [flags]Options
+Options
- -h, --help help for rmdirs
--leave-root Do not remove root directory if emptySEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone serve
Synopsis
+Synopsis
rclone serve http remote:
-rclone serve <protocol> [opts] <remote> [flags]Options
+Options
- -h, --help help for serveSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone serve dlna
Synopsis
+Synopsis
Server options
@@ -1520,7 +1573,7 @@ ffmpeg - | rclone rcat remote:path/to/file
-rclone serve dlna remote:path [flags]Options
+Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
@@ -1542,14 +1595,14 @@ ffmpeg - | rclone rcat remote:path/to/file
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone serve ftp
Synopsis
+Synopsis
Server options
--vfs-cache-max-age.
Auth Proxy
+If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
+The program’s job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+The config generated must have this extra parameter: _root - root to use for the backend
+And it may have this parameter: _obscure - comma separated strings for parameters to obscure
+For example the program might take this on STDIN:
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+And return this on STDOUT:
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an sftp backend would be created on the fly for the user and pass returned in the output, connecting to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
+The program can manipulate the supplied user in any way. For example to make a proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you’d probably want to restrict the host to a limited list.
+Note that an internal cache is keyed on user so only use that for configuration, don’t use pass. This also means that if a user’s password is changed the cache will need to expire (which takes 5 mins) before it takes effect.
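A minimal proxy program might look like this sketch (hypothetical; a real proxy should validate the user and pass in the request rather than, as here, ignoring the input and returning a fixed config):
#!/bin/sh
# Read and discard the JSON request carrying user/pass on STDIN.
cat > /dev/null
# Emit a complete backend config on STDOUT.
cat << 'EOF'
{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "me",
  "pass": "mypassword",
  "host": "sftp.example.com"
}
EOF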
-rclone serve ftp remote:path [flags]Options
+Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2121")
+ --auth-proxy string A program to use to create the backend from the auth.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
@@ -1638,14 +1716,14 @@ ffmpeg - | rclone rcat remote:path/to/file
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone serve http
Synopsis
+Synopsis
Authentication
-rclone serve http remote:path [flags]Options
+Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -1755,14 +1835,14 @@ htpasswd -B htpasswd anotherUser
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone serve restic
Synopsis
+Synopsis
Authentication
-rclone serve restic remote:path [flags]Options
+Options
--server-write-timeout duration Timeout for server writing data (default 1h0m0s)
--stdio run an HTTP2 server on stdin/stdout
--user string User name for authentication.
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
--append-only disallow deletion of repository data
+ --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
-h, --help help for restic
@@ -1837,14 +1919,14 @@ htpasswd -B htpasswd anotherUser
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone serve sftp
Synopsis
+Synopsis
--vfs-cache-max-age.
Auth Proxy
+If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
+The program’s job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+The config generated must have this extra parameter: _root - root to use for the backend
+And it may have this parameter: _obscure - comma separated strings for parameters to obscure
+For example the program might take this on STDIN:
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+And return this on STDOUT:
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an sftp backend would be created on the fly for the user and pass returned in the output, connecting to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
+The program can manipulate the supplied user in any way. For example to make a proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you’d probably want to restrict the host to a limited list.
+Note that an internal cache is keyed on user so only use that for configuration, don’t use pass. This also means that if a user’s password is changed the cache will need to expire (which takes 5 mins) before it takes effect.
-rclone serve sftp remote:path [flags]Options
+Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:2022")
+ --auth-proxy string A program to use to create the backend from the auth.
--authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
@@ -1936,14 +2043,14 @@ htpasswd -B htpasswd anotherUser
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone serve webdav
Synopsis
+Synopsis
Webdav options
–etag-hash
@@ -1955,6 +2062,7 @@ htpasswd -B htpasswd anotherUser
Authentication
--vfs-cache-max-age.
Auth Proxy
+If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
+The program’s job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+The config generated must have this extra parameter: _root - root to use for the backend
+And it may have this parameter: _obscure - comma separated strings for parameters to obscure
+For example the program might take this on STDIN:
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+And return this on STDOUT:
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an sftp backend would be created on the fly for the user and pass returned in the output, connecting to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
+The program can manipulate the supplied user in any way. For example to make a proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you’d probably want to restrict the host to a limited list.
+Note that an internal cache is keyed on user so only use that for configuration, don’t use pass. This also means that if a user’s password is changed the cache will need to expire (which takes 5 mins) before it takes effect.
-rclone serve webdav remote:path [flags]Options
+Options
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
- --addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
+ --auth-proxy string A program to use to create the backend from the auth.
+ --baseurl string Prefix for URLs - leave blank for root.
--cert string SSL PEM key (concatenation of certificate and CA certificate)
--client-ca string Client certificate authority to verify clients with
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
@@ -2057,14 +2191,14 @@ htpasswd -B htpasswd anotherUser
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone settier
Synopsis
+Synopsis
rclone settier tier remote:path/dir
-rclone settier tier remote:path [flags]Options
+Options
- -h, --help help for settierSEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone touch
Synopsis
+Synopsis
-rclone touch remote:path [flags]Options
+Options
- -h, --help help for touch
-C, --no-create Do not create the file if it does not exist.
-t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
rclone tree
Synopsis
+Synopsis
$ rclone tree remote:path
@@ -2113,7 +2247,7 @@ htpasswd -B htpasswd anotherUser
-rclone tree remote:path [flags]Options
+Options
-r, --sort-reverse Reverse the order of the sort.
-U, --unsorted Leave files unsorted.
--version Sort files alphanumerically by version.
- -a, --all All files are listed (list . files too).
-C, --color Turn colorization on always.
-d, --dirs-only List directories only.
@@ -2135,11 +2269,11 @@ htpasswd -B htpasswd anotherUser
SEE ALSO
+SEE ALSO
-Auto generated by spf13/cobra on 15-Jun-2019
Copying single files
If the path points at a file rclone will just copy that file; if it isn’t a file you will see an error like Failed to create file system for "remote:file": is a file not a directory. For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this
-rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup
Options
+Options
Options that take a parameter can be given as --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.
rclone sync /path/to/local remote:current --backup-dir remote:old
will sync /path/to/local to remote:current, but any files which would have been updated or deleted will be stored in remote:old.
If running rclone from a script you might want to use today’s date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today’s date.
See also --compare-dest and --copy-dest.
–bind string
–bwlimit=BANDWIDTH_SPEC
@@ -2253,6 +2388,10 @@ rclone sync /path/to/files remote:current-backup
rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.
–compare-dest=DIR
+When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.
+See --copy-dest and --backup-dir.
–config=CONFIG_FILE
Normally the config file is in your home directory as a file called .config/rclone/rclone.conf (or .rclone.conf if created with an older version). If $XDG_CONFIG_HOME is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf.
–contimeout=TIME
Set the connection timeout. This should be in go time format which looks like 5s for 5 seconds, 10m for 10 minutes, or 3h30m. It is 1m by default.
–copy-dest=DIR
+When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup.
+See --compare-dest and --backup-dir.
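A typical incremental backup using this flag might look like the following (the paths are illustrative):
rclone sync /path/to/source remote:current --copy-dest remote:previous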
–dedupe-mode MODE
Mode to run dedupe command in. One of interactive, skip, first, newest, oldest, rename. The default is interactive. See the dedupe command for more information as to what these options mean.
–disable FEATURE,FEATURE,…
@@ -2307,6 +2450,8 @@ rclone sync /path/to/files remote:current-backup
INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.
NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
ERROR is equivalent to -q. It only outputs error messages.
–use-json-log
+–low-level-retries NUMBER
These retries can be seen in the log with the -v flag. Use -vv if you wish to see info about the threads.
Multi thread downloads work with the sync/copy/move commands and friends copyto/moveto. Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above. They won’t be used unless --multi-thread-streams is set explicitly.
–multi-thread-streams=N
When using multi thread downloads (see --multi-thread-cutoff) this sets the maximum number of streams to use. Set to 0 to disable multi thread downloads. (Default 4)
Rclone divides the size of the file by --multi-thread-cutoff and rounds up to work out how many streams to use, up to the maximum set with --multi-thread-streams. The units are bytes.
–suffix=SUFFIX
-This is for use with --backup-dir only. If this isn’t set then --backup-dir will move files with their original name. If it is set then the files will have SUFFIX added on to them. See --backup-dir for more info.
+When using sync, copy or move any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten. This can also be used with --backup-dir. See --backup-dir for more info.
+rclone sync /path/to/local/file remote:current --suffix .bak
+will sync /path/to/local to remote:current, but any files which would have been updated or deleted will have .bak added.
When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.
So let’s say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.
-u, –update
This can be useful when transferring to a remote which doesn’t support mod times directly as it is more accurate than a --size-only check and faster than using --checksum.
On remotes which don’t support mod time directly (or when using --use-server-modtime) the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
This can be useful when transferring to a remote which doesn’t support mod times directly (or when using --use-server-modtime to avoid extra API calls) as it is more accurate than a --size-only check and faster than using --checksum.
–use-mmap
If this flag is set then rclone will use anonymous memory allocated by mmap for its transfer buffers (size controlled by --buffer-size). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.
–use-server-modtime
In cases such as a local to remote sync using --update, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
Using this flag on a sync operation without also using --update would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.
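For example, a sync which relies on upload times instead of per-object metadata reads might be run like this (the paths are illustrative):
rclone sync --update --use-server-modtime /path/to/source remote:container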
-v, -vv, –verbose
With -v rclone will tell you about each file that is transferred and a small number of significant events.
With -vv rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.
Environment Variables
Options
Every option in rclone can have its default set by environment variable. To find the name of the environment variable, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_. For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
–files-from - Read list of source-file names
Suppose files-from.txt contains file1.jpg and subdir/file2.jpg. You could then use it like this:
rclone copy --files-from files-from.txt /home/me/pics remote:pics
This will transfer these files only (if they exist in /home/me/pics):
/home/me/pics/file1.jpg → remote:pics/file1.jpg
/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
To copy these you'd find a common subdirectory - in this case /home - and put the remaining files in files-from.txt with or without leading /, eg
user1/important
user1/dir/file
user2/stuff
You could then copy these to a remote like this:
rclone copy --files-from files-from.txt /home remote:backup
The 3 files will arrive in remote:backup with the paths as in the files-from.txt like this:
/home/user1/important → remote:backup/user1/important
/home/user1/dir/file → remote:backup/user1/dir/file
/home/user2/stuff → remote:backup/user2/stuff
You could of course choose / as the root too, in which case your files-from.txt might look like this:
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
And you would transfer it like this:
rclone copy --files-from files-from.txt / remote:backup
In this case there will be an extra home directory on the remote:
/home/user1/important → remote:backup/home/user1/important
/home/user1/dir/file → remote:backup/home/user1/dir/file
/home/user2/stuff → remote:backup/home/user2/stuff
–min-size - Don't transfer any file smaller than this
This defaults to kBytes but a suffix of k, M, or G can be used. For example --min-size 50k means no files smaller than 50kByte will be transferred.
–exclude-if-present
This flag can be used to exclude a directory if a given file is present in it. For example you could exclude dir3 from the sync by running the following command:
rclone sync --exclude-if-present .ignore dir1 remote:backup
Currently --exclude-if-present should not be used multiple times.
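For reference, the directory tree this example assumes (implied by the surrounding context) looks like the following; only dir3, which contains the .ignore file, is excluded:

    dir1/file1
    dir1/dir2/file2
    dir1/dir2/dir3/file3
    dir1/dir2/dir3/.ignore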
GUI (Experimental)
Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change. Run this command in a terminal and rclone will download and then display the GUI in a web browser.
+rclone rcd --rc-web-gui
+2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
+2019/08/25 11:40:14 NOTICE: Downloading webgui binary. Please wait. [Size: 3813937, Path : /home/USER/.cache/rclone/webgui/v0.0.6.zip]
+2019/08/25 11:40:16 NOTICE: Unzipping
2019/08/25 11:40:16 NOTICE: Serving remote control on http://127.0.0.1:5572/
Rclone must continue running to serve the GUI and its API. If you wish to force rclone to update the GUI to the latest version, add --rc-web-gui-update to the command line.
Using the GUI
+
+
+How it works
When you run the rclone rcd --rc-web-gui this is what happens
Rclone starts but only runs the remote control API ("rc").
The API is bound to localhost with an auto generated username and password.
If the API bundle is missing then rclone will download it.
rclone will start serving the files from the API bundle over the same port as the API.
rclone will open the browser with a login_token so it can log straight in.
Advanced use
rclone rcd may use any of the flags documented on the rc page. The flag --rc-web-gui is shorthand for
+
--rc-user gui
--rc-pass <random password>
--rc-serve
These flags can be overridden as desired.
Example: Running a public GUI
+
+
For example the GUI could be served on a public port over SSL using an htpasswd file with the following flags:
--rc-web-gui
--rc-addr :443
--rc-htpasswd /path/to/htpasswd
--rc-cert /path/to/ssl.crt
--rc-key /path/to/ssl.key
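Put together as a single command this might look like the following (the certificate and htpasswd paths are hypothetical):

    rclone rcd --rc-web-gui --rc-addr :443 --rc-htpasswd /path/to/htpasswd --rc-cert /path/to/ssl.crt --rc-key /path/to/ssl.key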
Example: Running a GUI behind a proxy
If you want to run the GUI behind a proxy at /rclone you could use these flags:
+
--rc-web-gui
--rc-baseurl rclone
--rc-htpasswd /path/to/htpasswd
+
Or instead of htpasswd if you just want a single user and password:
--rc-user me
--rc-pass mypassword
Project
+Remote controlling rclone
If rclone is run with the --rc flag then it starts an http server which can be used to remote control rclone. If --rc-user or --rc-pass is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/ style.
–rc-web-gui
–rc-allow-origin
Set the allowed origin for CORS.
–rc-web-fetch-url
URL to fetch the releases for the webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
–rc-web-gui-update
Update or force update to the latest version of the web gui.
–rc-job-expire-duration=DURATION
Expire finished async jobs older than this value. (default 1m0s)
–rc-job-expire-interval=DURATION
Interval to check for expired async jobs. (default 10s)
@@ -3000,6 +3232,7 @@ dir1/dir2/dir3/.ignore
Special parameters
The rc interface supports some special parameters which apply to all commands. These start with _ to show they are different.
Running asynchronous jobs with _async = true
If _async has a true value when supplied to an rc call then it will return immediately with a job id and the task will be run in the background. The job/status call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished.
It is recommended that potentially long running jobs, eg sync/sync, sync/copy, sync/move, operations/purge are run with the _async flag to avoid any potential problems with the HTTP request and response timing out.
Starting a job with the _async flag:
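A sketch using the harmless rc/noop command (the jobid shown is illustrative):

    $ rclone rc --json '{ "_async": true }' rc/noop
    {
        "jobid": 1
    }
    $ rclone rc --json '{ "jobid": 1 }' job/status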
Assigning operations to groups with _group = value
Each rc call has its own stats group for tracking its metrics. By default grouping is done by the composite group name from prefix job/ and id of the job like so job/1. If _group has a value then stats for that request will be grouped under that value. This allows the caller to group stats under their own name. Stats for a specific group can be accessed by passing group to core/stats:
$ rclone rc --json '{ "group": "job/1" }' core/stats
+{
+ "speed": 12345
+ ...
}
Supported commands
cache/expire: Purge a remote from cache {#cache/expire}
rclone rc cache/expire remote=path/to/sub/folder/
rclone rc cache/expire remote=/ withData=true
cache/fetch: Fetch file chunks {#cache/fetch}
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
cache/stats: Get cache stats {#cache/stats}
config/create: create the config for a remote. {#config/create}
config/delete: Delete a remote in the config file. {#config/delete}
config/dump: Dumps the config file. {#config/dump}
config/get: Get a remote in the config file. {#config/get}
config/listremotes: Lists the remotes in the config file. {#config/listremotes}
config/password: password the config for a remote. {#config/password}
config/providers: Shows how providers are configured in the config file. {#config/providers}
config/update: update the config for a remote. {#config/update}
core/bwlimit: Set the bandwidth limit. {#core/bwlimit}
This sets the bandwidth limit to that passed in. Eg
rclone rc core/bwlimit rate=off
{
    "bytesPerSecond": -1,
    "rate": "off"
}
rclone rc core/bwlimit rate=1M
{
    "bytesPerSecond": 1048576,
    "rate": "1M"
}
If the rate parameter is not supplied then the bandwidth is queried:
rclone rc core/bwlimit
{
    "bytesPerSecond": 1048576,
    "rate": "1M"
}
core/gc: Runs a garbage collection. {#core/gc}
core/group-list: Returns list of stats. {#core/group-list}
This returns a list of stats groups currently in memory.
Returns the following values:
{
    "groups": an array of group names:
    [
        "group1",
        "group2",
        ...
    ]
}
+### core/memstats: Returns the memory statistics {#core/memstats}
+
+This returns the memory statistics of the running program. What the values mean
+are explained in the go docs: https://golang.org/pkg/runtime/#MemStats
+
+The most interesting values for most people are:
+
+* HeapAlloc: This is the amount of memory rclone is actually using
+* HeapSys: This is the amount of memory rclone has obtained from the OS
+* Sys: this is the total amount of memory requested from the OS
+ * It is virtual memory so may include unused memory
+
+### core/obscure: Obscures a string passed in. {#core/obscure}
+
+Pass a clear string and rclone will obscure it for the config file:
+- clear - string
+
+Returns
+- obscured - string
+
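For example (output omitted, as the obscured value includes random salt):

    rclone rc core/obscure clear=mysecretpassword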
+### core/pid: Return PID of current process {#core/pid}
+
This returns the PID of the current process.
Useful for stopping the rclone process.
+
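For example (the PID shown is illustrative):

    rclone rc core/pid
    {
        "pid": 12345
    }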
+### core/stats: Returns stats about current transfers. {#core/stats}
+
+This returns all available stats:
+
+ rclone rc core/stats
+
+If group is not provided then summed up stats for all groups will be
+returned.
+
+Parameters
+- group - name of the stats group (string)
+
Returns the following values:

    {
        "speed": average speed in bytes/sec since start of the process,
        "bytes": total transferred bytes since the start of the process,
        "errors": number of errors,
        "fatalError": whether there has been at least one FatalError,
        "retryError": whether there has been at least one non-NoRetryError,
        "checks": number of checked files,
        "transfers": number of transferred files,
        "deletes" : number of deleted files,
        "elapsedTime": time in seconds since the start of the process,
        "lastError": last occurred error,
        "transferring": an array of currently active file transfers:
            [
                {
                    "bytes": total transferred bytes for this file,
                    "eta": estimated time in seconds until file transfer completion
                    "name": name of the file,
                    "percentage": progress of the file transfer in percent,
                    "speed": speed in bytes/sec,
                    "speedAvg": speed in bytes/sec as an exponentially weighted moving average,
                    "size": size of the file in bytes
                }
            ],
        "checking": an array of names of currently active file checks
            []
    }

Values for "transferring", "checking" and "lastError" are only assigned if data is available.
The value for "eta" is null if an eta cannot be determined.
+
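For example, to fetch the stats for a single group (the group name is illustrative):

    rclone rc core/stats group=job/1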
+### core/stats-reset: Reset stats. {#core/stats-reset}
+
This clears counters and errors for all stats, or for a specific stats group if group
is provided.
+
+Parameters
+- group - name of the stats group (string)
+
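For example, to reset the counters for one group only (the group name is illustrative):

    rclone rc core/stats-reset group=job/1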
+### core/transferred: Returns stats about completed transfers. {#core/transferred}
+
+This returns stats about completed transfers:
+
+ rclone rc core/transferred
+
+If group is not provided then completed transfers for all groups will be
+returned.
+
+Parameters
+- group - name of the stats group (string)
+
Returns the following values:

    {
        "transferred": an array of completed transfers (including failed ones)
    }

### core/version: Shows the current version of rclone and the go runtime. {#core/version}
job/list: Lists the IDs of the running jobs {#job/list}
job/status: Reads the status of the job ID {#job/status}
job/stop: Stop the running job {#job/stop}
operations/about: Return the space used on the remote {#operations/about}
operations/cleanup: Remove trashed files in the remote or path {#operations/cleanup}
operations/copyfile: Copy a file from source remote to destination remote {#operations/copyfile}
operations/copyurl: Copy the URL to the object {#operations/copyurl}
operations/delete: Remove files in the path {#operations/delete}
operations/deletefile: Remove the single file pointed to {#operations/deletefile}
operations/fsinfo: Return information about the remote {#operations/fsinfo}
rclone rc --loopback operations/fsinfo fs=remote:
operations/list: List the given remote and path in JSON format {#operations/list}
operations/mkdir: Make a destination directory or container {#operations/mkdir}
operations/movefile: Move a file from source remote to destination remote {#operations/movefile}
operations/publiclink: Create or retrieve a public link to the given file or folder. {#operations/publiclink}
operations/purge: Remove a directory or container and all of its contents {#operations/purge}
operations/rmdir: Remove an empty directory or container {#operations/rmdir}
operations/rmdirs: Remove all the empty directories in the path {#operations/rmdirs}
operations/size: Count the number of bytes and files in remote {#operations/size}
options/blocks: List all the option blocks {#options/blocks}
options/get: Get all the options {#options/get}
options/set: Set an option {#options/set}
rclone rc options/set --json '{"main": {"LogLevel": 7}}'
rclone rc options/set --json '{"main": {"LogLevel": 6}}'
rc/error: This returns an error {#rc/error}
rc/list: List all the registered remote control commands {#rc/list}
rc/noop: Echo the input to the output parameters {#rc/noop}
rc/noopauth: Echo the input to the output parameters requiring auth {#rc/noopauth}
sync/copy: copy a directory from source remote to destination remote {#sync/copy}
sync/move: move a directory from source remote to destination remote {#sync/move}
sync/sync: sync a directory from source remote to destination remote {#sync/sync}
vfs/forget: Forget files or directories in the directory cache. {#vfs/forget}
rclone rc vfs/forget
rclone rc vfs/forget file=hello file2=goodbye dir=home/junk
vfs/poll-interval: Get the status or update the value of the poll-interval option. {#vfs/poll-interval}
rclone rc vfs/poll-interval interval=5m
vfs/refresh: Refresh the directory cache. {#vfs/refresh}
@@ -3552,6 +3841,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
rclone rc vfs/refresh
Name / Hash / ModTime / Case Insensitive / Duplicate Files / MIME Type
1Fichier / Whirlpool / No / No / Yes / R
Amazon Drive / MD5 / No / Yes / No / R
Amazon S3 / MD5 / Yes / No / No / R/W
Backblaze B2 / SHA1 / Yes / No / No / R/W
Box / SHA1 / Yes / Yes / No / -
Dropbox / DBHASH † / Yes / Yes / No / -
FTP / - / No / No / No / -
Google Cloud Storage / MD5 / Yes / No / No / R/W
Google Drive / MD5 / Yes / No / Yes / R/W
Google Photos / - / No / No / Yes / R
HTTP / - / No / No / No / R
…
premiumize.me / - / No / Yes / No / R
put.io / CRC-32 / Yes / No / Yes / R
QingStor / MD5 / No / No / No / R/W
…
Name / Purge / Copy / Move / DirMove / CleanUp / ListR / StreamUpload / LinkSharing / About / EmptyDir
1Fichier / No / No / No / No / No / No / No / No / No / Yes
Amazon Drive / Yes / No / … / No / No #2178 / No / Yes
Amazon S3 / No / Yes / … / Yes / No #2178 / No / No
Backblaze B2 / No / Yes / … / Yes / Yes / Yes / Yes / No / No
Box / Yes / Yes / … / Yes / Yes / No / Yes
Dropbox / Yes / Yes / … / Yes / Yes / Yes / Yes
FTP / No / No / … / Yes / No #2178 / No / Yes
Google Cloud Storage / Yes / Yes / … / Yes / No #2178 / No / No
Google Drive / Yes / Yes / … / Yes / Yes / Yes / Yes
Google Photos / No / No / No / No / No / No / No / No / No / No
HTTP / … / No / No #2178 / No / Yes
Hubic / … / Yes / No #2178 / Yes / No
Jottacloud / … / No / Yes / Yes / Yes
Mega / … / No / No #2178 / Yes / Yes
Microsoft Azure Blob Storage / … / No / No #2178 / No / No
Microsoft OneDrive / … / No / Yes / Yes / Yes
OpenDrive / … / No / No / No / Yes
Openstack Swift / … / Yes / No #2178 / Yes / No
pCloud / … / No / No #2178 / Yes / Yes
premiumize.me / Yes / No / Yes / Yes / No / No / No / Yes / Yes / Yes
put.io / Yes / No / Yes / Yes / Yes / No / Yes / No #2178 / Yes / Yes
QingStor / … / No / No #2178 / No / No
SFTP / … / Yes / No #2178 / Yes / Yes
WebDAV / … / Yes ‡ / No #2178 / Yes / Yes
Yandex Disk / … / Yes / Yes / Yes / Yes
The local filesystem / … / Yes / No / Yes / Yes
About
This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash. It is also used to return the space used, available for rclone mount. If the remote doesn't support About then rclone about will return an error.
EmptyDir
The remote supports empty directories. See the Limitations section below for details. Most Object/Bucket based remotes do not support this.
+Global Flags
+Non Backend Flags
+
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --ca-cert string CA certificate used to verify servers
+ --cache-dir string Directory rclone will use for caching. (default "$HOME/.cache/rclone")
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum (if available) & size, not mod-time & size
+ --client-cert string Client SSL certificate (PEM) for mutual TLS auth
+ --client-key string Client SSL private key (PEM) for mutual TLS auth
      --compare-dest string                  Use DIR to server side copy files from.
+ --config string Config file. (default "$HOME/.config/rclone/rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ --copy-dest string Compare dest to DIR also.
+ --cpuprofile string Write cpu profile to file
+ --delete-after When synchronizing, delete files on destination after transferring (default)
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump DumpFlags List of items to dump from: headers,bodies,requests,responses,auth,filters,goroutines,openfiles
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --ignore-case Ignore case in filters (case insensitive)
+ --ignore-case-sync Ignore case when synchronizing
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-errors delete even if there are I/O errors
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --log-file string Log everything to this file
+ --log-format string Comma separated list of log format options (default "date,time")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-backlog int Maximum number of objects in sync or check backlog. (default 10000)
+ --max-delete int When synchronizing, limit the number of deletes (default -1)
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in k or suffix b|k|M|G (default off)
+ --max-stats-groups int Maximum number of stats groups to keep in memory. On max oldest is discarded. (default 1000)
+ --max-transfer SizeSuffix Maximum size of data to transfer. (default off)
+ --memprofile string Write memory profile to file
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --multi-thread-cutoff SizeSuffix Use multi-thread downloads for files above this size. (default 250M)
+ --multi-thread-streams int Max number of streams to use for multi-thread downloads. (default 4)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ -P, --progress Show progress during transfer.
+ -q, --quiet Print as little stuff as possible
+ --rc Enable the remote control server.
+ --rc-addr string IPaddress:Port or :Port to bind server to. (default "localhost:5572")
+ --rc-allow-origin string Set the allowed origin for CORS.
+ --rc-baseurl string Prefix for URLs - leave blank for root.
+ --rc-cert string SSL PEM key (concatenation of certificate and CA certificate)
+ --rc-client-ca string Client certificate authority to verify clients with
+ --rc-files string Path to local files to serve on the HTTP server.
+ --rc-htpasswd string htpasswd file - if not provided no authentication is done
+ --rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s)
+ --rc-job-expire-interval duration interval to check for expired async jobs (default 10s)
+ --rc-key string SSL PEM Private key
+ --rc-max-header-bytes int Maximum size of request header (default 4096)
+ --rc-no-auth Don't require auth for certain methods.
+ --rc-pass string Password for authentication.
+ --rc-realm string realm for authentication (default "rclone")
+ --rc-serve Enable the serving of remote objects.
+ --rc-server-read-timeout duration Timeout for server reading data (default 1h0m0s)
+ --rc-server-write-timeout duration Timeout for server writing data (default 1h0m0s)
+ --rc-user string User name for authentication.
+ --rc-web-fetch-url string URL to fetch the releases for webgui. (default "https://api.github.com/repos/rclone/rclone-webui-react/releases/latest")
+ --rc-web-gui Launch WebGUI on localhost
+ --rc-web-gui-update Update / Force update to latest version of web gui
+ --retries int Retry operations this many times if they fail (default 3)
+ --retries-sleep duration Interval between retrying operations if they fail, e.g 500ms, 60s, 5m. (0 to disable)
+ --size-only Skip based on size only, not mod-time or checksum
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-file-name-length int Max file name length in stats. 0 for no limit (default 45)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-one-line Make the stats fit on one line.
+ --stats-one-line-date Enables --stats-one-line and add current date/time prefix.
+ --stats-one-line-date-format string Enables --stats-one-line-date and uses custom formatted date. Enclose date string in double quotes ("). See https://golang.org/pkg/time/#Time.Format
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff SizeSuffix Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix to add to changed files.
+ --suffix-keep-extension Preserve the extension when using --suffix.
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --use-cookies Enable session cookiejar.
+ --use-json-log Use json log format.
+ --use-mmap Use mmap allocator (see docs).
+ --use-server-modtime Use server modified time instead of object metadata
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.49.0")
  -v, --verbose count                        Print lots more stuff (repeat for more)
Backend Flags
+
+ --acd-auth-url string Auth server URL.
+ --acd-client-id string Amazon Application Client ID.
+ --acd-client-secret string Amazon Application Client Secret.
+ --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-token-url string Token server url.
+ --acd-upload-wait-per-gb Duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --alias-remote string Remote or path to alias.
+ --azureblob-access-tier string Access tier of blob: hot, cool or archive.
+ --azureblob-account string Storage Account Name (leave blank to use SAS URL or Emulator)
+ --azureblob-chunk-size SizeSuffix Upload chunk size (<= 100MB). (default 4M)
+ --azureblob-endpoint string Endpoint for the service
+ --azureblob-key string Storage Account Key (leave blank to use SAS URL or Emulator)
+ --azureblob-list-chunk int Size of blob list. (default 5000)
+ --azureblob-sas-url string SAS URL for container level access only
+ --azureblob-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (<= 256MB). (default 256M)
+ --azureblob-use-emulator Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
+ --b2-account string Account ID or Application Key ID
+ --b2-chunk-size SizeSuffix Upload chunk size. Must fit in memory. (default 96M)
+ --b2-disable-checksum Disable checksums for large (> upload cutoff) files
+ --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d. (default 1w)
+ --b2-download-url string Custom endpoint for downloads.
+ --b2-endpoint string Endpoint for the service.
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-key string Application Key
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging.
+ --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. (default 200M)
+ --b2-versions Include old versions in directory listings.
+ --box-client-id string Box App Client Id.
+ --box-client-secret string Box App Client Secret
+ --box-commit-retries int Max number of times to try committing a multipart file. (default 100)
+ --box-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (>= 50MB). (default 50M)
+ --cache-chunk-clean-interval Duration How often should the cache perform cleanups of the chunk storage. (default 1m0s)
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming.
+ --cache-chunk-path string Directory to cache chunk files. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-chunk-size SizeSuffix The size of a chunk (partial file data). (default 5M)
+ --cache-chunk-total-size SizeSuffix The total size that the chunks can take up on the local disk. (default 10G)
+ --cache-db-path string Directory to store file structure metadata DB. (default "$HOME/.cache/rclone/cache-backend")
+ --cache-db-purge Clear all the cached data for this remote on start.
+ --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-info-age Duration How long to cache file structure information (directory listings, file size, times etc). (default 6h0m0s)
+ --cache-plex-insecure string Skip all certificate verifications when connecting to the Plex server
+ --cache-plex-password string The password of the Plex user
+ --cache-plex-url string The URL of the Plex server
+ --cache-plex-username string The username of the Plex user
+ --cache-read-retries int How many times to retry a read from a cache storage. (default 10)
+ --cache-remote string Remote to cache.
+ --cache-rps int Limits the number of requests per second to the source FS (-1 to disable) (default -1)
+ --cache-tmp-upload-path string Directory to keep temporary files until they are uploaded.
+ --cache-tmp-wait-time Duration How long should files be stored in local cache before being uploaded (default 15s)
+ --cache-workers int How many workers should run in parallel to download chunks. (default 4)
+ --cache-writes Cache file data on writes through the FS
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact. (default true)
+ --crypt-filename-encryption string How to encrypt the filenames. (default "standard")
+ --crypt-password string Password or pass phrase for encryption.
+ --crypt-password2 string Password or pass phrase for salt. Optional but recommended.
+ --crypt-remote string Remote to encrypt/decrypt.
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
+ --drive-allow-import-name-change Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+ --drive-alternate-export Use alternate export URLs for google documents export.,
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size SizeSuffix Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-client-id string Google Application Client Id
+ --drive-client-secret string Google Application Client Secret
+ --drive-export-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-formats string Deprecated: see export_formats
+ --drive-impersonate string Impersonate this user when using a service account.
+ --drive-import-formats string Comma separated list of preferred formats for uploading Google docs.
+ --drive-keep-revision-forever Keep new head revision of each file forever.
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-pacer-burst int Number of API calls to allow without sleeping. (default 100)
+ --drive-pacer-min-sleep Duration Minimum time to sleep between API calls. (default 100ms)
+ --drive-root-folder-id string ID of the root folder
+ --drive-scope string Scope that rclone should use when requesting access from drive.
+ --drive-server-side-across-configs Allow server side operations (eg copy) to work across different drive configs.
+ --drive-service-account-credentials string Service Account Credentials JSON blob
+ --drive-service-account-file string Service Account Credentials JSON file path
+ --drive-shared-with-me Only show files that are shared with me.
+ --drive-size-as-quota Show storage quota usage for file size.
+ --drive-skip-checksum-gphotos Skip MD5 checksum on Google photos and videos only.
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-team-drive string ID of the Team Drive
+ --drive-trashed-only Only show files that are in the trash.
+ --drive-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 8M)
+ --drive-use-created-date Use file created date instead of modified date.,
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --drive-v2-download-min-size SizeSuffix If Object's are greater, use drive v2 API to download. (default off)
+ --dropbox-chunk-size SizeSuffix Upload chunk size. (< 150M). (default 48M)
+ --dropbox-client-id string Dropbox App Client Id
+ --dropbox-client-secret string Dropbox App Client Secret
+ --dropbox-impersonate string Impersonate this user when using a business account.
+ --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
+ --fichier-shared-folder string If you want to download a shared folder, add this parameter
+ --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
+ --ftp-host string FTP host to connect to
+ --ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-pass string FTP password
+ --ftp-port string FTP port, leave blank to use default (21)
+ --ftp-tls Use FTP over TLS (Implicit)
+ --ftp-user string FTP username, leave blank for current username, $USER
+ --gcs-bucket-acl string Access Control List for new buckets.
+ --gcs-bucket-policy-only Access checks should use bucket-level IAM policies.
+ --gcs-client-id string Google Application Client Id
+ --gcs-client-secret string Google Application Client Secret
+ --gcs-location string Location for the newly created buckets.
+ --gcs-object-acl string Access Control List for new objects.
+ --gcs-project-number string Project number.
+ --gcs-service-account-file string Service Account Credentials JSON file path
+ --gcs-storage-class string The storage class to use when storing objects in Google Cloud Storage.
+ --gphotos-client-id string Google Application Client Id
+ --gphotos-client-secret string Google Application Client Secret
+ --gphotos-read-only Set to make the Google Photos backend read only.
+ --gphotos-read-size Set to read the size of media items.
+ --http-headers CommaSepList Set HTTP headers for all transactions
+ --http-no-slash Set this if the site doesn't end directories with /
+ --http-url string URL of http host to connect to
+ --hubic-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --hubic-client-id string Hubic Client Id
+ --hubic-client-secret string Hubic Client Secret
+ --hubic-no-chunk Don't chunk files during streaming upload.
+ --jottacloud-hard-delete Delete files permanently rather than putting them into the trash.
+ --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required. (default 10M)
+ --jottacloud-unlink Remove existing public link to file/folder with link command rather than creating.
+ --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail's. (default 10M)
+ --koofr-endpoint string The Koofr API endpoint to use (default "https://app.koofr.net")
+ --koofr-mountid string Mount ID of the mount to use. If omitted, the primary mount is used.
+ --koofr-password string Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
+ --koofr-setmtime Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend. (default true)
+ --koofr-user string Your Koofr user name
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
+ --local-case-insensitive Force the filesystem to report itself as case insensitive
+ --local-case-sensitive Force the filesystem to report itself as case sensitive.
+ --local-no-check-updated Don't check to see if the files change during upload
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames (Deprecated)
+ --local-nounc string Disable UNC (long path names) conversion on Windows
+ --mega-debug Output more debug from Mega.
+ --mega-hard-delete Delete files permanently rather than putting them into the trash.
+ --mega-pass string Password.
+ --mega-user string User name
+ -x, --one-file-system Don't cross filesystem boundaries (unix/macOS only).
+ --onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k. (default 10M)
+ --onedrive-client-id string Microsoft App Client Id
+ --onedrive-client-secret string Microsoft App Client Secret
+ --onedrive-drive-id string The ID of the drive to use
+ --onedrive-drive-type string The type of the drive ( personal | business | documentLibrary )
+ --onedrive-expose-onenote-files Set to make OneNote files show up in directory listings.
+ --opendrive-password string Password.
+ --opendrive-username string Username
+ --pcloud-client-id string Pcloud App Client Id
+ --pcloud-client-secret string Pcloud App Client Secret
+ --qingstor-access-key-id string QingStor Access Key ID
+ --qingstor-chunk-size SizeSuffix Chunk size to use for uploading. (default 4M)
+ --qingstor-connection-retries int Number of connection retries. (default 3)
+ --qingstor-endpoint string Enter a endpoint URL to connection QingStor API.
+ --qingstor-env-auth Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+ --qingstor-secret-access-key string QingStor Secret Access Key (password)
+ --qingstor-upload-concurrency int Concurrency for multipart uploads. (default 1)
+ --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --qingstor-zone string Zone to connect to.
+ --s3-access-key-id string AWS Access Key ID.
+ --s3-acl string Canned ACL used when creating buckets and storing or copying objects.
+ --s3-bucket-acl string Canned ACL used when creating buckets.
+ --s3-chunk-size SizeSuffix Chunk size to use for uploading. (default 5M)
+ --s3-disable-checksum Don't store MD5 checksum with object metadata
+ --s3-endpoint string Endpoint for S3 API.
+ --s3-env-auth Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+ --s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
+ --s3-location-constraint string Location constraint - must be set to match the Region.
+ --s3-provider string Choose your S3 provider.
+ --s3-region string Region to connect to.
+ --s3-secret-access-key string AWS Secret Access Key (password)
+ --s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3.
+ --s3-session-token string An AWS session token
+ --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key.
+ --s3-storage-class string The storage class to use when storing new objects in S3.
+ --s3-upload-concurrency int Concurrency for multipart uploads. (default 4)
+ --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200M)
+ --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint.
+ --s3-v2-auth If true use v2 authentication.
+ --sftp-ask-password Allow asking for SFTP password when needed.
+ --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available.
+ --sftp-host string SSH host to connect to
+ --sftp-key-file string Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
+ --sftp-key-file-pass string The passphrase to decrypt the PEM-encoded private key file.
+ --sftp-key-use-agent When set forces the usage of the ssh-agent.
+ --sftp-md5sum-command string The command used to read md5 hashes. Leave blank for autodetect.
+ --sftp-pass string SSH password, leave blank to use ssh-agent.
+ --sftp-path-override string Override path used by SSH connection.
+ --sftp-port string SSH port, leave blank to use default (22)
+ --sftp-set-modtime Set the modified time on the remote if set. (default true)
+ --sftp-sha1sum-command string The command used to read sha1 hashes. Leave blank for autodetect.
+ --sftp-use-insecure-cipher Enable the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
+ --sftp-user string SSH username, leave blank for current username, ncw
+ --skip-links Don't warn about skipped symlinks.
+ --swift-application-credential-id string Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
+ --swift-application-credential-name string Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
+ --swift-application-credential-secret string Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
+ --swift-auth string Authentication URL for server (OS_AUTH_URL).
+ --swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container. (default 5G)
+ --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
+ --swift-env-auth Get swift credentials from environment variables in standard OpenStack form.
+ --swift-key string API key or password (OS_PASSWORD).
+ --swift-no-chunk Don't chunk files during streaming upload.
+ --swift-region string Region name - optional (OS_REGION_NAME)
+ --swift-storage-policy string The storage policy to use when creating a new container
+ --swift-storage-url string Storage URL - optional (OS_STORAGE_URL)
+ --swift-tenant string Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ --swift-tenant-domain string Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ --swift-tenant-id string Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ --swift-user string User name to log in (OS_USERNAME).
+ --swift-user-id string User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ --union-remotes string List of space separated remotes.
+ --webdav-bearer-token string Bearer token instead of user/pass (eg a Macaroon)
+ --webdav-bearer-token-command string Command to run to get a bearer token
+ --webdav-pass string Password.
+ --webdav-url string URL of http host to connect to
+ --webdav-user string User name
+ --webdav-vendor string Name of the Webdav site/service/software you are using
+ --yandex-client-id string Yandex Client Id
+ --yandex-client-secret string Yandex Client Secret
      --yandex-unlink                        Remove existing public link to file/folder with link command rather than creating.
1Fichier
Paths are specified as remote:path. Paths may be as deep as required, eg remote:directory/subdirectory. Here is an example of how to make a remote called remote. First run:
+ rclone config
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / 1Fichier
+ \ "fichier"
+[snip]
+Storage> fichier
+** See help for fichier backend at: https://rclone.org/fichier/ **
+
+Your API Key, get it from https://1fichier.com/console/params.pl
+Enter a string value. Press Enter for the default ("").
+api_key> example_key
+
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n>
+Remote config
+--------------------
+[remote]
+type = fichier
+api_key = example_key
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
y/e/d> y
Once configured you can then use rclone like this,
List directories in the top level of your 1Fichier account
rclone lsd remote:
List all the files in your 1Fichier account
rclone ls remote:
To copy a local directory to a 1Fichier directory called backup
rclone copy /home/source remote:backup
Modified time and hashes
1Fichier does not support modification times. It supports the Whirlpool hash algorithm.
Duplicated files
1Fichier can have two files with exactly the same name and path (unlike a normal file system). Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use rclone dedupe to fix duplicated files.
Forbidden characters
Some characters that can not be used in 1Fichier file or directory names are \ < > " ' ` $ and spaces at the beginning of folder names. rclone automatically escapes these to a unicode equivalent. The exception is /, which cannot be escaped and will therefore lead to errors.
Standard Options
Here are the standard options specific to fichier (1Fichier).
–fichier-api-key
Your API Key, get it from https://1fichier.com/console/params.pl
Advanced Options
Here are the advanced options specific to fichier (1Fichier).
–fichier-shared-folder
If you want to download a shared folder, add this parameter.
Alias
The alias remote provides a new name for another remote. Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.
rclone copy /home/source remote:source
Standard Options
–alias-remote
Using with non .com Amazon accounts
Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.
Standard Options
–acd-client-id
Advanced Options
+Advanced Options
–acd-auth-url
PutObject and PutObjectACL. When using the lsd subcommand, the ListAllMyBuckets permission is required.
{
"Version": "2012-10-17",
@@ -4655,7 +5409,12 @@ In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archi
"arn:aws:s3:::BUCKET_NAME/*",
"arn:aws:s3:::BUCKET_NAME"
]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
]
}
Standard Options
–s3-provider
+
+
–s3-storage-class
@@ -5531,7 +6294,7 @@ In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archi
Advanced Options
–s3-bucket-acl
When you use --b2-versions no file write operations are permitted, so you can't upload files or delete them.
B2 and rclone link
Rclone supports generating file share links for private B2 buckets. They can either be for a file, for example:
./rclone link B2:bucket/path/to/file.txt
https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
or if run on a directory you will get:
./rclone link B2:bucket/path
https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
You can then use the authorization token (the part of the url from the ?Authorization= on) on any file path under that directory. For example:
https://f002.backblazeb2.com/file/bucket/path/file1?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
Standard Options
–b2-account
Advanced Options
–b2-endpoint
–b2-download-url
Custom endpoint for downloads. This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Leave blank if you want to use the endpoint provided by Backblaze.
–b2-download-auth-duration
Time before the download authorization token will expire, in s or suffix ms|s|m|h|d. The minimum value is 1 second. The maximum value is one week. (default 1w)
Box
Paths are specified as remote:path
Modified time and hashes
Box allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. Box supports SHA1 type hashes, so you can use the --checksum flag.
Transfers
@@ -6560,7 +7294,7 @@ y/e/d> y
Deleting files
Standard Options
–box-client-id
Advanced Options
–box-upload-cutoff
Standard Options
–cache-remote
Advanced Options
–cache-plex-token
With standard file name encryption, 1/12/123.txt is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0; with directory name encryption off, 1/12/123.txt is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
Modified time and hashes
Use the rclone cryptcheck command to check the integrity of a crypted remote instead of rclone check which can't check the checksums properly.
Standard Options
–crypt-remote
Advanced Options
–crypt-show-mapping
To see Team Folders you must use a leading / in the path, so rclone lsd remote:/ will refer to the root and show you all Team Folders and your User Folder. You can then use team folders like this: remote:/TeamFolder and remote:/TeamFolder/path/to/file. A leading / for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.
Modified time and Hashes
If you don't want this to happen, use the --size-only or --checksum flag to stop it.
Standard Options
–dropbox-client-id
Advanced Options
–dropbox-chunk-size
Implicit TLS
Note that implicit FTP over TLS servers normally use port 990 so the port will likely have to be explicitly set in the config for the remote.
Standard Options
–ftp-host
Advanced Options
–ftp-concurrency
Modified time
Standard Options
–gcs-client-id
Standard Options
–drive-client-id
Advanced Options
–drive-service-account-credentials
Google docs will transfer correctly with rclone sync, rclone copy etc as rclone knows to ignore the size when doing the transfer. However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount - you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable.
Duplicated files
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates. Use rclone dedupe to fix duplicated files.
Google Photos
+Configuring Google Photos
The initial setup for google photos involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run:
+ rclone config
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Google Photos
+ \ "google photos"
+[snip]
+Storage> google photos
+** See help for google photos backend at: https://rclone.org/googlephotos/ **
+
+Google Application Client Id
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_id>
+Google Application Client Secret
+Leave blank normally.
+Enter a string value. Press Enter for the default ("").
+client_secret>
+Set to make the Google Photos backend read only.
+
+If you choose read only then rclone will only request read only access
+to your photos, otherwise rclone will request full access.
+Enter a boolean value (true or false). Press Enter for the default ("false").
+read_only>
+Edit advanced config? (y/n)
+y) Yes
+n) No
+y/n> n
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+
+*** IMPORTANT: All media items uploaded to Google Photos with rclone
+*** are stored in full resolution at original quality. These uploads
+*** will count towards storage in your Google Account.
+
+--------------------
+[remote]
+type = google photos
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2019-06-28T17:38:04.644930156+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
y/e/d> y
Note that rclone runs a webserver on your local machine to collect the token as returned from Google if you use auto config mode. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
This remote is called remote and can now be used like this
See all the albums in your photos
+rclone lsd remote:album
+rclone mkdir remote:album/newAlbum
rclone ls remote:album/newAlbum
Sync /home/local/images to Google Photos, removing any excess files in the album.
rclone sync /home/local/images remote:album/newAlbum
Layout
As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it. The directories under media show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month. (NB remote:media/by-day is rather slow at the moment so avoid for syncing.)
Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.
+/
+- upload
+ - file1.jpg
+ - file2.jpg
+ - ...
+- media
+ - all
+ - file1.jpg
+ - file2.jpg
+ - ...
+ - by-year
+ - 2000
+ - file1.jpg
+ - ...
+ - 2001
+ - file2.jpg
+ - ...
+ - ...
+ - by-month
+ - 2000
+ - 2000-01
+ - file1.jpg
+ - ...
+ - 2000-02
+ - file2.jpg
+ - ...
+ - ...
+ - by-day
+ - 2000
+ - 2000-01-01
+ - file1.jpg
+ - ...
+ - 2000-01-02
+ - file2.jpg
+ - ...
+ - ...
+- album
+ - album name
+ - album name/sub
+- shared-album
+ - album name
  - album name/sub
There are two writable parts of the tree: the upload directory and sub directories of the album directory.
The upload directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to album will work better.
Directories within the album directory are also writeable and you may create new directories (albums) under album. If you copy files with a directory hierarchy in there then rclone will create albums with the / character in them. For example if you do
+rclone copy /path/to/images remote:album/images
images
  - file1.jpg
  dir
    file2.jpg
  dir2
    dir3
      file3.jpg
Then rclone will create the following albums with the following files in:
- images: file1.jpg
- images/dir: file2.jpg
- images/dir2/dir3: file3.jpg
This means that you can use the album path pretty much like a normal filesystem and it is a good target for repeated syncing.
The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
Limitations
Downloading Images
When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
Downloading Videos
When videos are downloaded they are downloaded in a compressed version of the video compared to downloading the original (unfortunately). This is covered by bug #113672044.
Duplicates
If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).
If you upload the same image twice then Google Photos will deduplicate it, retaining the filename from the first upload, which may confuse rclone. For example if you uploaded an image to upload then uploaded the same image to album/my_album, the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn't cause too many problems.
Modified time
The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known. This is not changeable by rclone and is not the modification date of the media on local disk.
Size
The Google Photos API does not return the size of media, so when syncing rclone can only do a file existence check. It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size option or the read_size = true config parameter. If you want to use the backend with rclone mount you will need to enable this flag, otherwise you will not be able to read media off the mount.
Albums
Deleting files
Rclone can remove files from albums it created, but note that the Google Photos API does not allow media to be deleted permanently so this media will still remain. See bug #109759781. Rclone cannot delete files anywhere except under album.
Deleting albums
The Google Photos API does not support deleting albums - see bug #135714733.
Standard Options
Here are the standard options specific to google photos.
–gphotos-client-id
Google Application Client Id. Leave blank normally.
–gphotos-client-secret
Google Application Client Secret. Leave blank normally.
–gphotos-read-only
Set to make the Google Photos backend read only. If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.
Advanced Options
Here are the advanced options specific to google photos.
–gphotos-read-size
Set to read the size of media items. Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so set this if you want to mount the backend.
HTTP
Paths are specified as remote: or remote:path/to/dir. To sync the remote directory to /home/local/directory, deleting any excess files:
rclone sync remote:directory /home/local/directory
Read only
Modified time
Checksum
rclone lsd --http-url https://beta.rclone.org :http:
Standard Options
–http-url
Advanced Options
–http-headers
Set HTTP headers for all transactions. The input format is a comma separated list of key,value pairs.
–http-no-slash
Set this if the site doesn't end directories with /.
rclone copy /home/source remote:default/backup
–fast-list
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified time
The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
Standard Options
–hubic-client-id
Advanced Options
–hubic-chunk-size
Limitations
Jottacloud
@@ -9045,15 +9894,12 @@ Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
XX / JottaCloud
\ "jottacloud"
[snip]
Storage> jottacloud
** See help for jottacloud backend at: https://rclone.org/jottacloud/ **
Edit advanced config? (y/n)
y) Yes
n) No
@@ -9067,6 +9913,7 @@ Rclone has it's own Jottacloud API KEY which works fine as long as one only
y) Yes
n) No
y/n> y
+Username> 0xC4KE@gmail.com
Your Jottacloud password is only required during setup and will not be stored.
password:
@@ -9078,7 +9925,7 @@ y/n> y
Please select the device to use. Normally this will be Jotta
Choose a number from below, or type in an existing value
1 > DESKTOP-3H31129
 2 > fla1
3 > Jotta
Devices> 3
Please select the mountpoint to use. Normally this will be Archive
@@ -9113,11 +9960,11 @@ y/e/d> y
–fast-list
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Modified time and hashes
Jottacloud supports MD5 type hashes, so you can use the --checksum flag. Note that Jottacloud requires the MD5 hash before upload, so if the source does not have an MD5 checksum the file will be cached temporarily on disk (wherever the TMPDIR environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag.
+Deleting files
By default rclone will send all files to the trash when deleting files. If deleting permanently is required then use the --jottacloud-hard-delete flag, or set the equivalent environment variable.
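For example, to delete files permanently instead of sending them to the trash (a sketch; the path is an example):
rclone delete --jottacloud-hard-delete remote:backup/old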
Versions
Device IDs
Standard Options
-–jottacloud-user
-
-
-Advanced Options
+Advanced Options
–jottacloud-md5-memory-limit
Limitations
+Limitations
-rclone copy /home/source remote:backup
-Standard Options
+Standard Options
–koofr-user
Advanced Options
+Advanced Options
–koofr-endpoint
–koofr-setmtime
+
+
-Limitations
+Limitations
Mega
rclone ls remote:
-rclone copy /home/source remote:backup
-Modified time and hashes
+Modified time and hashes
Duplicated files
+Duplicated files
Use rclone dedupe to fix duplicated files.
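For example, to interactively resolve duplicates in a directory (a sketch; the path is an example):
rclone dedupe remote:dir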
+Standard Options
–mega-user
Advanced Options
+Advanced Options
–mega-debug
Limitations
+Limitations
Microsoft Azure Blob Storage
@@ -9453,40 +10244,10 @@ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Box
- \ "box"
- 5 / Dropbox
- \ "dropbox"
- 6 / Encrypt/Decrypt a remote
- \ "crypt"
- 7 / FTP Connection
- \ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 9 / Google Drive
- \ "drive"
-10 / Hubic
- \ "hubic"
-11 / Local Disk
- \ "local"
-12 / Microsoft Azure Blob Storage
+[snip]
+XX / Microsoft Azure Blob Storage
\ "azureblob"
-13 / Microsoft OneDrive
- \ "onedrive"
-14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-15 / SSH/SFTP Connection
- \ "sftp"
-16 / Yandex Disk
- \ "yandex"
-17 / http Connection
- \ "http"
+[snip]
Storage> azureblob
Storage Account Name
account> account_name
@@ -9515,7 +10276,7 @@ y/e/d> y
rclone sync /home/local/directory remote:container
–fast-list
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
+Modified time
The modified time is stored as metadata on the object with the mtime key. It is stored using RFC3339 format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.
Hashes
Larger files are uploaded in chunks, and the chunk size can be raised if needed, eg --azureblob-chunk-size 100M.
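For example, to upload with larger chunks (a sketch; the local path and container name are examples):
rclone copy --azureblob-chunk-size 100M /path/to/local remote:container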
+Standard Options
–azureblob-account
-
–azureblob-key
-
–azureblob-sas-url
-
-Advanced Options
+–azureblob-use-emulator
+
+
+Advanced Options
–azureblob-endpoint
Limitations
+Limitations
Azure Storage Emulator Support
+To test rclone against the Azure storage emulator locally, make sure the emulator is installed, then run rclone config and follow the instructions described in the introduction, setting the use_emulator config option to true. You do not need to provide a default account name or key when using the emulator.
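+The resulting config entry might look like this (a minimal sketch; the remote name is an example):
+[emulator]
+type = azureblob
+use_emulator = true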
Microsoft OneDrive
Paths are specified as remote:path and may be as deep as required, eg remote:directory/subdirectory.
Run rclone config to create or edit a OneDrive remote. Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
+Modified time and hashes
OneDrive personal supports SHA1 type hashes, while OneDrive for business and Sharepoint Server support QuickXorHash. For all types of OneDrive you can use the --checksum flag.
+Deleting files
Standard Options
+Standard Options
–onedrive-client-id
Advanced Options
+Advanced Options
–onedrive-chunk-size
Limitations
+Limitations
Any file with a ? in its name will be mapped to ？ (a fullwidth question mark) instead.
Modified time and MD5SUMs
Standard Options
+Standard Options
–opendrive-username
Limitations
+Limitations
Any file with a ? in its name will be mapped to ？ (a fullwidth question mark) instead.
QingStor
@@ -9923,37 +10670,11 @@ n/r/c/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
-10 / Local Disk
- \ "local"
-11 / Microsoft OneDrive
- \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-13 / QingStor Object Storage
+[snip]
+XX / QingStor Object Storage
\ "qingstor"
-14 / SSH/SFTP Connection
- \ "sftp"
-15 / Yandex Disk
- \ "yandex"
-Storage> 13
+[snip]
+Storage> qingstor
Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter QingStor credentials in the next step
@@ -10027,7 +10748,7 @@ y/e/d> y
-Standard Options
+Standard Options
–qingstor-env-auth
Advanced Options
+Advanced Options
–qingstor-connection-retries
If you use --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
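For example (a sketch; the local path and container name are examples):
rclone copy --update --use-server-modtime /home/local/directory remote:container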
+Standard Options
–swift-env-auth
Advanced Options
+Advanced Options
–swift-chunk-size
Modified time
+Modified time
The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
+Limitations
Troubleshooting
Rclone gives Failed to create file system for “remote:”: Bad Request
@@ -10590,44 +11273,10 @@ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Box
- \ "box"
- 5 / Dropbox
- \ "dropbox"
- 6 / Encrypt/Decrypt a remote
- \ "crypt"
- 7 / FTP Connection
- \ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 9 / Google Drive
- \ "drive"
-10 / Hubic
- \ "hubic"
-11 / Local Disk
- \ "local"
-12 / Microsoft Azure Blob Storage
- \ "azureblob"
-13 / Microsoft OneDrive
- \ "onedrive"
-14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-15 / Pcloud
+[snip]
+XX / Pcloud
\ "pcloud"
-16 / QingCloud Object Storage
- \ "qingstor"
-17 / SSH/SFTP Connection
- \ "sftp"
-18 / Yandex Disk
- \ "yandex"
-19 / http Connection
- \ "http"
+[snip]
Storage> pcloud
Pcloud App Client Id - leave blank normally.
client_id>
@@ -10663,13 +11312,13 @@ y/e/d> y
rclone ls remote:
-rclone copy /home/source remote:backup
-Modified time and hashes
+Modified time and hashes
pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum flag.
+Deleting files
Deleted files will be moved to the trash. rclone cleanup can be used to empty the trash.
+Standard Options
–pcloud-client-id
premiumize.me
+Paths are specified as remote:path and may be as deep as required, eg remote:directory/subdirectory.
+The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run:
+ rclone config
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / premiumize.me
+ \ "premiumizeme"
+[snip]
+Storage> premiumizeme
+** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = premiumizeme
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
+Once configured you can then use rclone like this,
+rclone lsd remote:
+rclone ls remote:
+rclone copy /home/source remote:backup
+Modified time and hashes
+premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work.
+Standard Options
+–premiumizeme-api-key
+
+
+
+Limitations
+premiumize.me file names can't have the \ or " characters in them. rclone maps these to and from identical looking unicode equivalents ＼ and ＂.
+put.io
+Paths are specified as remote:path and may be as deep as required, eg remote:directory/subdirectory.
+The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called putio. First run:
+ rclone config
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> putio
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Put.io
+ \ "putio"
+[snip]
+Storage> putio
+** See help for putio backend at: https://rclone.org/putio/ **
+
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[putio]
+type = putio
+token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+putio putio
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+Note that rclone runs a webserver on your local machine to collect the token. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
+You can then use rclone like this,
+rclone lsd remote:
+rclone ls remote:
+
+
rclone copy /home/source remote:backup
SFTP
+
Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.
If you use an ssh-agent for authentication it can be stopped at the end of the session with eval `ssh-agent -k`.
+Modified time
Some SFTP servers disable setting or modifying the modification time; in this case set set_modtime = false in your rclone backend configuration to disable this behaviour.
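A config entry with modification time setting disabled might look like this (a minimal sketch; host and user are examples):
[remote]
type = sftp
host = example.com
user = sftpuser
set_modtime = false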
+Standard Options
–sftp-host
–sftp-use-insecure-cipher
-
-
Advanced Options
+Advanced Options
–sftp-ask-password
–sftp-md5sum-command
+
+
+–sftp-sha1sum-command
+
+
-Limitations
+Limitations
SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.
SFTP also supports about if the same login has shell access and df are in the remote's PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote's PATH.
Note that with some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.
Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth. Note that --timeout isn't supported (but --contimeout is).
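Checksumming can be disabled either in the config (disable_hashcheck = true) or per invocation with the equivalent backend flag (a sketch; the paths are examples):
rclone copy --sftp-disable-hashcheck /home/source remote:backup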
C14
+rsync.net
+Union
The union remote provides a unification similar to UnionFS using other remotes. Paths may be as deep as required or a local path, eg remote:directory/subdirectory or /directory/subdirectory.
During the initial setup with rclone config you will specify the target remotes as a space separated list; these can be local paths or other remotes, eg C:\dir1 C:\dir2 C:\dir3
-rclone copy C:\source remote:source
-Standard Options
+Standard Options
+–union-remotes
@@ -11067,7 +11807,7 @@ name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
-22 / Webdav
+XX / Webdav
\ "webdav"
[snip]
Storage> webdav
@@ -11099,7 +11839,7 @@ password:
Confirm the password:
password:
Bearer token instead of user/pass (eg a Macaroon)
-bearer_token>
+bearer_token>
Remote config
--------------------
[remote]
@@ -11108,7 +11848,7 @@ url = https://example.com/remote.php/webdav/
vendor = nextcloud
user = user
pass = *** ENCRYPTED ***
-bearer_token =
+bearer_token =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -11121,11 +11861,11 @@ y/e/d> y
rclone ls remote:
-rclone copy /home/source remote:backupModified time and hashes
+Modified time and hashes
Standard Options
+Standard Options
–webdav-url
Advanced Options
+–webdav-bearer-token-command
+
+
Provider notes
Owncloud
Owncloud supports modified times using the X-OC-Mtime header.
Nextcloud
This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat) whereas Owncloud does. This may be fixed in the future.
Put.io
-url as https://webdav.put.io and use your normal account username and password for user and pass. Set the vendor to other.
-[putio]
-type = webdav
-url = https://webdav.put.io
-vendor = other
-user = YourUserName
-pass = encryptedpassword
-If you wish to mount put.io with rclone mount then use the --read-only flag to signal to the OS that it can't write to the mount.
Sharepoint
dCache
-Configure the remote as a webdav remote of other type. Don't enter a username or password, instead enter your Macaroon as the bearer_token.
-[dcache]
@@ -11242,6 +11980,22 @@ user =
pass =
bearer_token = your-macaroon
OpenID-Connect
+An access token can be obtained from an OIDC Provider with the oidc-agent's oidc-token command. The following example shows a (shortened) access token obtained from the XDC OIDC Provider.
+paul@celebrimbor:~$ oidc-token XDC
+eyJraWQ[...]QFXDt0
+paul@celebrimbor:~$
+Note: before the oidc-token command will work, the refresh token must be loaded into the oidc agent. This is done with the oidc-add command (e.g., oidc-add XDC). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the oidc-agent documentation.
+The rclone bearer_token_command configuration option is used to fetch the access token from oidc-agent (e.g., oidc-token XDC):
+[dcache]
+type = webdav
+url = https://dcache.example.org/
+vendor = other
+bearer_token_command = oidc-token XDC
Yandex Disk
Yandex paths may be as deep as required, eg remote:directory/subdirectory.
rclone ls remote:directory
Sync /home/local/directory to the remote path, deleting any excess files in the path.
-rclone sync /home/local/directory remote:directory
-Modified time
+Modified time
The modified time is stored as metadata on the object as rclone_modified in RFC3339 with nanoseconds format.
MD5 checksums
If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.
Quota information
To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.
Limitations
+Limitations
When uploading very large files you may need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
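For example, to upload a 30GB file (a sketch; the file path is an example):
rclone copy --timeout 60m /path/to/30GB-file remote:backup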
+Standard Options
–yandex-client-id
Advanced Options
+Advanced Options
–yandex-unlink
rclone sync /home/source /tmp/destination
Will sync /home/source to /tmp/destination.
Modified time
+Modified time
Filenames
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
Standard Options
+Standard Options
–local-nounc
Advanced Options
+Advanced Options
–copy-links
–local-case-sensitive
+
+
+–local-case-insensitive
+
+
Changelog
+v1.49.0 - 2019-08-26
+
+
+
+
--compare-dest & --copy-dest (yparitcher)
--suffix without --backup-dir for backup to current dir (yparitcher)
--use-json-log for JSON logging (justinalin)
config reconnect, config userinfo and config disconnect subcommands. (Nick Craig-Wood)
+
--ignore-checksum (Nick Craig-Wood)
--size-only mode (Nick Craig-Wood)
+
+
--baseurl for rcd and web-gui (Chaitanya Bankanhal)
+
+
--auth-proxy (Nick Craig-Wood)
--baseurl (Nick Craig-Wood)
--baseurl (Nick Craig-Wood)
+
+
--baseurl (Nick Craig-Wood)
--auth-proxy (Nick Craig-Wood)
+
--no-traverse (buengese)
+
--loopback with rc/list and others (Nick Craig-Wood)
+
--daemon-timout to 15 minutes on macOS and FreeBSD (Nick Craig-Wood)
+
--vfs-cache-mode minimal and writes ignoring cached files (Nick Craig-Wood)
+
--local-case-sensitive and --local-case-insensitive (Nick Craig-Wood)
+
+
+
--drive-trashed-only (ginvine)
+
+
+
--http-headers flag for setting arbitrary headers (Nick Craig-Wood)
+
+
+
+
+
+
+
+
--webdav-bearer-token-command (Nick Craig-Wood)
--webdav-bearer-token-command (Nick Craig-Wood)
v1.48.0 - 2019-06-15
-
--progress update the stats correctly at the end (Nick Craig-Wood)
--dry-run (Nick Craig-Wood)
--log-format flag for more control over log output (dcpu)
--config (albertony)
--progress on windows (Nick Craig-Wood)
-
--azureblob-list-chunk parameter (Santiago Rodríguez)
-
--drive-import-formats - google docs can now be imported (Fabian Möller)
--drive-v2-download-min-size a workaround for slow downloads (Fabian Möller)
--fast-list support (albertony)
--jottacloud-hard-delete (albertony)
--s3-v2-auth flag (Nick Craig-Wood)
--backup-dir on union backend (Nick Craig-Wood)
@@ -12252,7 +13180,7 @@ $ tree /tmp/b
--progress and --stats 0 (Nick Craig-Wood)
-Bugs and Limitations
-Empty directories are left behind / not created
-purge command which will delete everything under the path, including empty directories.
-Bugs and Limitations
+Limitations
Directory timestamps aren’t preserved
-Rclone struggles with millions of files in a directory
+Bucket based remotes and folders
+Bucket based remotes (eg S3, GCS, Swift, B2) do not have a concept of directories. Some object storage systems simulate them with zero length objects ending with / as directory markers, but rclone doesn't do this as it potentially creates more objects and costs more. It may do in future (probably with a flag).
+Bugs
+Frequently Asked Questions
Do all cloud storage systems support all rclone commands
Yes they do. All the rclone commands (eg sync, copy, etc) will work on all the remote storage systems.
This is free software under the terms of the MIT license (check the COPYING file included with the source code).
-Copyright (C) 2012 by Nick Craig-Wood https://www.craig-wood.com/nick/
+Copyright (C) 2019 by Nick Craig-Wood https://www.craig-wood.com/nick/
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
@@ -14649,6 +15585,26 @@ THE SOFTWARE.
forgems forgems@gmail.com
Florian Apolloner florian@apolloner.eu
Aleksandar Jankovic office@ajankovic.com
+Maran maran@protonmail.com
+nguyenhuuluan434 nguyenhuuluan434@gmail.com
+Laura Hausmann zotan@zotan.pw laura@hausmann.dev
+yparitcher y@paritcher.com
+AbelThar abela.tharen@gmail.com
+Matti Niemenmaa matti.niemenmaa+git@iki.fi
+Russell Davis russelldavis@users.noreply.github.com
+Yi FU yi.fu@tink.se
+Paul Millar paul.millar@desy.de
+justinalin justinalin@qnap.com
+EliEron subanimehd@gmail.com
+justina777 chiahuei.lin@gmail.com
+Chaitanya Bankanhal bchaitanya15@gmail.com
+Michał Matczuk michal@scylladb.com
+Macavirus macavirus@zoho.com
+Abhinav Sharma abhi18av@users.noreply.github.com
+ginvine 34869051+ginvine@users.noreply.github.com
+Patrick Wang mail6543210@yahoo.com.tw
+Cenk Alti cenkalti@gmail.com
+Andreas Chlupka andy@chlupka.com
Contact the rclone project
Forum
@@ -14668,6 +15624,6 @@ THE SOFTWARE.
Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood
+Or if all else fails or you want to ask something private or confidential, email Nick Craig-Wood. Please don't email me requests for help - those are better directed to the forum - thanks!