From e90537b2e93d194848560b83e2c5164641a02595 Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Mar 14, 2023 Jun 30, 2023

If you are planning to use the rclone mount feature then you will need to install the third party utility WinFsp also.

Winget comes pre-installed with the latest versions of Windows. If not, update the App Installer package from the Microsoft store.

To install rclone
To uninstall rclone
Make sure you have Choco installed

Checks the files in the source and destination match.

Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files that don't match. It doesn't alter the source or destination. For the crypt remote there is a dedicated command, cryptcheck, that is able to check the checksums of the encrypted files.

If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
If you supply the --checkfile HASH flag with a valid hash name, the source:path must point to a text file in the SUM format.

The default number of parallel checks is 8. See the --checkers=N option for more information.

Counts objects in the path and calculates the total size. Prints the result to standard output.

By default the output is in human-readable format, but shows values in both human-readable format as well as the raw numbers (global option --human-readable is not considered). Use option --json to format output as JSON instead. Recurses by default, use --max-depth 1 to stop the recursion.

Some backends do not always provide file sizes, see for example Google Photos and Google Docs. Rclone will then show a notice in the log indicating how many such files were encountered, and count them in as empty files in the output of the size command.

Or like this to output any .txt files in dir or its subdirectories.

Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle.
Use the --separator flag to print a separator value between files. Be sure to shell-escape special characters. For example, to print a newline between files, use:
bash:
powershell:

See the global flags page for global options not listed here.

The default number of parallel checks is 8. See the --checkers=N option for more information.

Generate the autocompletion script for the specified shell

Output completion script for a given shell.
Generate the autocompletion script for rclone for the specified shell. See each sub-command's help for details on how to use the generated script.
Generates a shell completion script for rclone. Run with --help to list the supported shells.

See the global flags page for global options not listed here.

Generate the autocompletion script for bash

Output bash completion script for rclone.
Generate the autocompletion script for the bash shell. This script depends on the 'bash-completion' package. If it is not installed already, you can install it via your OS's package manager.

To load completions in your current shell session:
To load completions for every new session, execute once:
You will need to start a new shell for this setup to take effect.

Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.
Logout and login again to use the autocompletion scripts, or source them directly.
If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout.

See the global flags page for global options not listed here.

Generate the autocompletion script for fish

Output fish completion script for rclone.
Generate the autocompletion script for the fish shell.
To load completions in your current shell session:
To load completions for every new session, execute once:
You will need to start a new shell for this setup to take effect.

Generates a fish autocompletion script for rclone. This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
Logout and login again to use the autocompletion scripts, or source them directly.
If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout.

See the global flags page for global options not listed here.

Generate the autocompletion script for powershell

Generate the autocompletion script for powershell.
To load completions in your current shell session:
To load completions for every new session, add the output of the above command to your powershell profile.

See the global flags page for global options not listed here.

Generate the autocompletion script for zsh

Output zsh completion script for rclone.
Generate the autocompletion script for the zsh shell. If shell completion is not already enabled in your environment you will need to enable it. You can execute the following once:
To load completions in your current shell session:
To load completions for every new session, execute once:
You will need to start a new shell for this setup to take effect.

Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
Logout and login again to use the autocompletion scripts, or source them directly.
If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout.

See the global flags page for global options not listed here.

Create a new remote with name, type and options.

Cryptcheck checks the integrity of an encrypted remote.

rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the encrypted remote. For it to work the underlying remote of the cryptedremote must support some kind of checksum.

It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.

Use it like this

The default number of parallel checks is 8. See the --checkers=N option for more information.

Output completion script for a given shell.
Generates a shell completion script for rclone. Run with --help to list the supported shells.

See the global flags page for global options not listed here.

Output bash completion script for rclone.
Generates a bash shell autocompletion script for rclone. This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, e.g.
If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout.

See the global flags page for global options not listed here.

Output fish completion script for rclone.
Generates a fish autocompletion script for rclone.
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.
If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout.

See the global flags page for global options not listed here.

Output zsh completion script for rclone.
Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.
If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout.

See the global flags page for global options not listed here.

List all the remotes in the config file.
List all the remotes in the config file and defined in environment variables.
rclone listremotes lists all the available remotes from the config file. When used with the

Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.

Mounting on macOS can be done either via macFUSE (also known as osxfuse) or FUSE-T. macFUSE is a traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server.

If installing macFUSE using dmg packages from the website, rclone will locate the macFUSE libraries without any further intervention. If, however, macFUSE is installed using the macports package manager, the following additional steps are required.

There are some limitations, caveats, and notes about how it works. These are current as of FUSE-T version 1.0.14.

or create systemd mount units:
optionally accompanied by systemd automount unit

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through Use the

Note that the upload cannot be retried because the data is not stored. If the backend supports multipart uploading then individual chunks can be retried. If you need to transfer a lot of data, you may be better off caching it locally and then

If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run.

See the rc documentation for more info on the rc flags.

Use If you set Use If you set You can use a unix socket by setting the url to

By default this will serve over http. If you want you can serve over https. You will need to supply the
--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
By default this will serve over http. If you want you can serve over https. You will need to supply the
--rc-min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the Use
You can either use an htpasswd file which can take lots of users, or set a single username and password with the
If no static users are configured by either of the above methods, and client certificates are required by the Use
To create an htpasswd file:
The password file can be updated while rclone is running.
Use Use Use Use

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the
If no static users are configured by either of the above methods, and client certificates are required by the Use
To create an htpasswd file:

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the
If no static users are configured by either of the above methods, and client certificates are required by the Use
To create an htpasswd file:

If On the client you need to set

The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from being used. Omitting "restrict" and using

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

By default this will serve files without needing a login.
You can either use an htpasswd file which can take lots of users, or set a single username and password with the
If no static users are configured by either of the above methods, and client certificates are required by the Use
To create an htpasswd file:

Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk.

Run a test command

Rclone test is used to run test commands.
Select which test command you want with the subcommand, eg
Each subcommand has its own options which you can see in their help.
NB Be careful running these commands, they may do strange things so reading their documentation first is recommended. Eg

When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.

Specify when colors (and other ANSI codes) should be added to the output.

Note that passwords are in obscured form. Also, many storage systems use token-based authentication instead of passwords, and this requires additional steps. It is easier, and safer, to use the interactive command

The configuration file will typically contain login information, and should therefore have restricted permissions so that only the current user can read it. Rclone tries to ensure this when it writes the file. You may also choose to encrypt the file.
When token-based authentication is used, the configuration file must be writable, because rclone needs to update the tokens inside it.

To reduce risk of corrupting an existing configuration file, rclone will not write directly to it when saving changes. Instead it will first write to a new, temporary, file. If a configuration file already existed, it will (on Unix systems) try to mirror its permissions to the new file. Then it will rename the existing file to a temporary name as backup. Next, rclone will rename the new file to the correct name, before finally cleaning up by deleting the backup file.

If the configuration file path used by rclone is a symbolic link, then this will be evaluated and rclone will write to the resolved path, instead of overwriting the symbolic link. Temporary files used in the process (described above) will be written to the same parent directory as that of the resolved configuration file, but if this directory is also a symbolic link it will not be resolved and the temporary files will be written to the location of the directory symbolic link.

Set the connection timeout. This should be in go time format which looks like
The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is

Mode to run dedupe command in. One of

If a file or directory does not have a modification time rclone can read then rclone will display this fixed time instead. The default is
For example

This disables a comma separated list of optional features. For example to disable server-side move and server-side copy use:
The features can be put in any case. To see a list of which features can be disabled use:
The features a remote has can be seen in JSON format with:
See the overview features and optional features to get an idea of which feature does what.
Note that some features can be set to (Note that

This flag can be useful for debugging and in exceptional circumstances (e.g. Google Drive limiting the total volume of Server Side Copies to 100 GiB/day).

This stops rclone from trying to use HTTP/2 if available. This can sometimes speed up transfers due to a problem in the Go standard library.

rclone(1) User Manual
-Rclone syncs your files to cloud storage
Windows package manager (Winget)
+winget install Rclone.Rclone
winget uninstall Rclone.Rclone --force
Chocolatey package manager
choco search rclone
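For example, with Chocolatey available the usual workflow is roughly the following (run from an elevated shell; the package name is the one shown by choco search above):

choco install rclone
choco upgrade rclone
choco uninstall rclone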
@@ -289,10 +358,16 @@ rclone v1.49.1
# config on host at ~/.config/rclone/rclone.conf
# data on host at ~/data
+# add a remote interactively
+docker run --rm -it \
+ --volume ~/.config/rclone:/config/rclone \
+ --user $(id -u):$(id -g) \
+ rclone/rclone \
+ config
+
# make sure the config is ok by listing the remotes
docker run --rm \
--volume ~/.config/rclone:/config/rclone \
- --volume ~/data:/data:shared \
--user $(id -u):$(id -g) \
rclone/rclone \
listremotes
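As a sketch of an actual transfer with the same image, a copy from the host data directory to a remote could look like this (remote:backup is a hypothetical remote path; note that the data volume is mounted in addition to the config):

docker run --rm \
    --volume ~/.config/rclone:/config/rclone \
    --volume ~/data:/data:shared \
    --user $(id -u):$(id -g) \
    rclone/rclone \
    copy /data remote:backup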
@@ -417,10 +492,11 @@ go build
Synopsis
--size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
--download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
--checkfile HASH flag with a valid hash name, the source:path must point to a text file in the SUM format.
! path means there was an error reading or hashing the source or dest.
rclone check source:path dest:path [flags]
Options
-C, --checkfile string Treat source:path as a SUM file with hashes of given type
@@ -785,7 +862,7 @@ rclone --dry-run --min-size 100M delete remote:path
--human-readable is not considered). Use option --json to format output as JSON instead.
--max-depth 1 to stop the recursion.
rclone size remote:path [flags]
Options
-h, --help help for size
@@ -1043,14 +1120,22 @@ rclone backend help <backendname>
rclone --include "*.txt" cat remote:path/to/dir
--head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
--separator flag to print a separator value between files. Be sure to shell-escape special characters. For example, to print a newline between files, use:
+
rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
rclone --include "*.txt" --separator "`n" cat remote:path/to/dir
rclone cat remote:path [flags]
Options
-      --count int          Only print N characters (default -1)
-      --discard            Discard the output instead of printing
-      --head int           Only print the first N characters
-  -h, --help               help for cat
-      --offset int         Start printing at offset N (or from end if -ve)
-      --tail int           Only print the last N characters
+      --count int          Only print N characters (default -1)
+      --discard            Discard the output instead of printing
+      --head int           Only print the first N characters
+  -h, --help               help for cat
+      --offset int         Start printing at offset N (or from end if -ve)
+      --separator string   Separator to use between objects when printing multiple files
+      --tail int           Only print the last N characters
SEE ALSO
@@ -1072,6 +1157,7 @@ rclone backend help <backendname>
+! path means there was an error reading or hashing the source or dest.
rclone checksum <hash> sumfile src:path [flags]
Options
--combined string Make a combined report of changes to this file
@@ -1089,98 +1175,88 @@ rclone backend help <backendname>
rclone completion
-Synopsis
---help to list the supported shells.
Options
-h, --help help for completion
SEE ALSO
rclone completion bash
-Synopsis
-
-source <(rclone completion bash)Linux:
-
-rclone completion bash > /etc/bash_completion.d/rclonemacOS:
-
-rclone completion bash > $(brew --prefix)/etc/bash_completion.d/rclone
+rclone completion bash
+sudo rclone genautocomplete bash
. /etc/bash_completion
rclone completion bash [output_file] [flags]
Options
-
+ -h, --help help for bash
- --no-descriptions disable completion descriptions -h, --help help for bashSEE ALSO
-
rclone completion fish
-Synopsis
-
-rclone completion fish | source
-rclone completion fish > ~/.config/fish/completions/rclone.fish
+rclone completion fish [flags]
+sudo rclone genautocomplete fish
. /etc/fish/completions/rclone.fish
rclone completion fish [output_file] [flags]
Options
-
+ -h, --help help for fish
- --no-descriptions disable completion descriptions -h, --help help for fishSEE ALSO
-
rclone completion powershell
Synopsis
+Synopsis
rclone completion powershell | Out-String | Invoke-Expression
-rclone completion powershell [flags]
Options
+Options
-h, --help help for powershell
--no-descriptions disable completion descriptions
SEE ALSO
+SEE ALSO
rclone completion zsh
-Synopsis
-
-echo "autoload -U compinit; compinit" >> ~/.zshrc
-source <(rclone completion zsh); compdef _rclone rcloneLinux:
-
-rclone completion zsh > "${fpath[1]}/_rclone"macOS:
-
-rclone completion zsh > $(brew --prefix)/share/zsh/site-functions/_rclone
+rclone completion zsh [flags]
+sudo rclone genautocomplete zsh
+autoload -U compinit && compinit
rclone completion zsh [output_file] [flags]
Options
-
+ -h, --help help for zsh
- --no-descriptions disable completion descriptions -h, --help help for zshSEE ALSO
-
rclone config create
rclone cryptcheck
-Synopsis
-! path means there was an error reading or hashing the source or dest.
rclone cryptcheck remote:path cryptedremote:path [flags]
Options
--combined string Make a combined report of changes to this file
@@ -1576,12 +1653,12 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2rclone genautocomplete
Synopsis
+Synopsis
---help to list the supported shells.
Options
+Options
-h, --help help for genautocomplete
SEE ALSO
+SEE ALSO
rclone genautocomplete bash
Synopsis
+Synopsis
@@ -1599,16 +1676,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
sudo rclone genautocomplete bash
-rclone genautocomplete bash [output_file] [flags]
Options
+Options
-h, --help help for bash
SEE ALSO
+SEE ALSO
rclone genautocomplete fish
Synopsis
+Synopsis
@@ -1617,16 +1694,16 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
sudo rclone genautocomplete fish
-rclone genautocomplete fish [output_file] [flags]
Options
+Options
-h, --help help for fish
SEE ALSO
+SEE ALSO
rclone genautocomplete zsh
Synopsis
+Synopsis
@@ -1635,10 +1712,10 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
sudo rclone genautocomplete zsh
-rclone genautocomplete zsh [output_file] [flags]
Options
+Options
-h, --help help for zsh
SEE ALSO
+SEE ALSO
@@ -1710,7 +1787,7 @@ rclone link --expire 1d remote:path/to/file
rclone listremotes
-Synopsis
--long flag it lists the types too.
Mounting on macOS
macFUSE Notes
+sudo mkdir /usr/local/lib
+cd /usr/local/lib
+sudo ln -s /opt/local/lib/libfuse.2.dylib
FUSE-T Limitations, Caveats, and Notes
ModTime update on read
@@ -1985,17 +2067,16 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
+Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
# /etc/systemd/system/mnt-data.mount
[Unit]
-After=network-online.target
+Description=Mount for /mnt/data
[Mount]
Type=rclone
What=sftp1:subdir
Where=/mnt/data
-Options=rw,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
# /etc/systemd/system/mnt-data.automount
[Unit]
-After=network-online.target
-Before=remote-fs.target
+Description=AutoMount for /mnt/data
[Automount]
Where=/mnt/data
TimeoutIdleSec=600
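Assuming the unit files above are saved as /etc/systemd/system/mnt-data.mount and /etc/systemd/system/mnt-data.automount, they would typically be activated with something like:

sudo systemctl daemon-reload
sudo systemctl start mnt-data.automount

(use systemctl enable to start it at boot, provided the units carry an [Install] section).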
@@ -2039,14 +2120,15 @@ WantedBy=multi-user.target
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.--vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
--streaming-upload-cutoff. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance.--size flag to preallocate the file in advance at the remote end and actually stream it, even if remote backend doesn't support streaming.--size should be the exact size of the input stream in bytes. If the size of the stream is different in length to the --size passed in then the transfer will likely fail.rclone move it to the destination.rclone move it to the destination which can use retries.rclone rcat remote:path [flags]Options
-h, --help help for rcat
@@ -2332,19 +2415,19 @@ ffmpeg - | rclone rcat remote:path/to/fileServer options
---addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.--addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.--rc-addr to specify which IP address and port the server should listen on, eg --rc-addr 1.2.3.4:8000 or --rc-addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.--rc-addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.--addr may be repeated to listen on multiple IPs/ports/sockets.--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.--rc-addr may be repeated to listen on multiple IPs/ports/sockets.--rc-server-read-timeout and --rc-server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.--rc-max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.--rc-baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --rc-baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --rc-baseurl, so --rc-baseurl "rclone", --rc-baseurl "/rclone" and --rc-baseurl "/rclone/" are all treated identically.TLS (SSL)
---cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also. --cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+--rc-cert and --rc-key flags. If you wish to do client side certificate validation then you will need to supply --rc-client-ca also. --rc-cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --rc-key should be the PEM encoded private key and --rc-client-ca should be the PEM encoded client certificate authority certificate.
Template
---template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:
+--rc-template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:
Authentication
--user and --pass flags.--htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.--rc-user and --rc-pass flags.--client-ca flag passed to the server, the client certificate common name will be considered as the username.--rc-htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser--realm to set the authentication realm.--salt to change the password hashing salt from the default.--rc-realm to set the authentication realm.--rc-salt to change the password hashing salt from the default.rclone rcd <path to files to serve>* [flags]Options
@@ -2539,14 +2623,15 @@ htpasswd -B htpasswd anotherUser
-h, --help help for rcd
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.--vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.--vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.--vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
Authentication
--user and --pass flags.--client-ca flag passed to the server, the client certificate common name will be considered as the username.--htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.touch htpasswd
@@ -3180,14 +3269,15 @@ htpasswd -B htpasswd anotherUser
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.--vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
Authentication
--user and --pass flags.--client-ca flag passed to the server, the client certificate common name will be considered as the username.--htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.touch htpasswd
@@ -3456,7 +3547,7 @@ htpasswd -B htpasswd anotherUser
--stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:
restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
--transfers 1 when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system.
--sftp-path-override to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case.
VFS - Virtual File System
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.--vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
Authentication
--user and --pass flags.--client-ca flag passed to the server, the client certificate common name will be considered as the username.--htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.touch htpasswd
@@ -3785,14 +3878,15 @@ htpasswd -B htpasswd anotherUser
--cache-dir string Directory rclone will use for caching.
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max age of objects in the cache (default 1h0m0s)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)-vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.--vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.--vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.--vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
Synopsis
rclone test memory remote:
rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.
--color WHEN
-AUTO (default) only allows ANSI codes when the output is a terminal
NEVER never allow ANSI codes
ALWAYS always add ANSI codes, regardless of the output format (terminal or file)
rclone config instead of manually editing the configuration file.
--contimeout=TIME
5s for 5 seconds, 10m for 10 minutes, or 3h30m.
1m by default.
--dedupe-mode MODE
interactive, skip, first, newest, oldest, rename. The default is interactive.
See the dedupe command for more information as to what these options mean.
--default-time TIME
+2000-01-01 00:00:00 UTC. This can be configured in any of the ways shown in the time or duration options.
--default-time 2020-06-01 to set the default time to the 1st of June 2020 or --default-time 0s to set the default time to the time rclone started up.
--disable FEATURE,FEATURE,...
--disable move,copy
+--disable help
rclone backend features remote:
true if they are true/false feature flag features by prefixing them with !. For example the CaseInsensitive feature can be forced to false with --disable CaseInsensitive and forced to true with --disable '!CaseInsensitive'. In general it isn't a good idea doing this but it may be useful in extremis.
! is a shell command which you will need to escape with single quotes or a backslash on unix like platforms.)
--disable-http2
With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified.
Note that only commands which transfer files (e.g. sync, copy, move) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete, purge) or implicitly (e.g. sync, move). Use copy --immutable if it is desired to avoid deletion as well as modification.
This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
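For example, a hedged sketch of protecting an append-only backup area (remote:archive and the local path are hypothetical):

rclone copy --immutable /path/to/backups remote:archive

Any attempt to overwrite an already uploaded file would then fail with the error shown above rather than propagating the change.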
+The --inplace flag changes the behaviour of rclone when uploading files to some backends (backends with the PartialUploads feature flag set) such as:
Without --inplace (the default) rclone will first upload to a temporary file with an extension like this where XXXXXX represents a random string.
original-file-name.XXXXXX.partial
+(rclone will make sure the final name is no longer than 100 characters by truncating the original-file-name part if necessary).
When the upload is complete, rclone will rename the .partial file to the correct name, overwriting any existing file at that point. If the upload fails then the .partial file will be deleted.
This prevents other users of the backend from seeing partially uploaded files in their new names and prevents overwriting the old file until the new one is completely uploaded.
+If the --inplace flag is supplied, rclone will upload directly to the final name without creating a .partial file.
This means that an incomplete file will be visible in the directory listings while the upload is in progress and any existing files will be overwritten as soon as the upload starts. If the transfer fails then the file will be deleted. This can cause data loss of the existing file if the transfer fails.
+Note that on the local file system if you don't use --inplace hard links (Unix only) will be broken. And if you do use --inplace you won't be able to update in use executables.
Note also that versions of rclone prior to v1.63.0 behave as if the --inplace flag is always supplied.
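For instance, to upload straight to the final name (hypothetical paths), skipping the intermediate .partial file at the cost of exposing the in-progress upload:

rclone copy --inplace /path/to/local-file remote:dir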
This flag can be used to tell rclone that you wish a manual confirmation before destructive operations.
It is recommended that you use this flag while learning rclone especially with rclone sync.
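Assuming this describes the -i / --interactive flag, a typical learning-phase invocation would be something like (hypothetical paths):

rclone sync --interactive /path/to/source remote:dest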
When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.
The default is 1ns unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s by default.
This command line flag allows you to override that computed default.
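Assuming this is the --modify-window flag, relaxing the comparison for a filesystem with one-second timestamp resolution might look like this (hypothetical paths):

rclone sync --modify-window 1s /path/to/source remote:dest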
+When downloading with multiple threads, rclone will buffer SIZE bytes in memory before writing to disk for each thread.
+This can improve performance if the underlying filesystem does not deal well with a lot of small writes in different positions of the file, so if you see downloads being limited by disk write speed, you might want to experiment with different values. Especially for magnetic drives and remote file systems a higher value can be useful.
+Nevertheless, the default of 128k should be fine for almost all use cases, so before changing it ensure that network is not really your bottleneck.
As a final hint, size is not the only factor: block size (or similar concept) can have an impact. In one case, we observed that exact multiples of 16k performed much better than other values.
When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).
Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows both of which takes no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.
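A hedged example of tuning this for a large download (remote:path and the local directory are hypothetical; --multi-thread-streams is assumed here as the companion flag controlling the number of download threads):

rclone copy --multi-thread-cutoff 250M --multi-thread-streams 4 remote:path /path/to/local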
When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.
So let's say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.
If a file has two (or more) extensions and the second (or subsequent) extension is recognised as a valid mime type, then the suffix will go before that extension. So file.tar.gz would be backed up to file-2019-01-01.tar.gz whereas file.badextension.gz would be backed up to file.badextension-2019-01-01.gz.
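Combining the two flags, a backup sync might look roughly like this (the suffix value and paths are examples):
rclone sync --suffix=-2019-01-01 --suffix-keep-extension /home/source remote:backup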
On capable OSes (not Windows or Plan9) send all log output to syslog.
This can be useful for running rclone in a script or rclone mount.
Not
{{start.*end\.jpg}}
Which will match a directory called start with a file called end.jpg in it as the .* will match / characters.
Note that you can use -vv --dump filters to show the filter patterns in regexp format - rclone implements the glob patterns by transforming them into regular expressions.
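For example, to see how a regexp filter is transformed before running a real transfer, something like the following can be used (the pattern and remote name are placeholders):
rclone ls remote: --include "/{{start.*end\.jpg}}" -vv --dump filters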
| PikPak | MD5 | R | No | No | R | - |
| premiumize.me | - | - |
| put.io | CRC-32 | R/W |
| QingStor | MD5 | - ⁹ |
| Seafile | - | - |
| SFTP | MD5, SHA1 ² | R/W |
| Sia | - | - |
| SMB | - | - |
| SugarSync | - | - |
| Storj | - | R |
| Uptobox | - | - |
| WebDAV | MD5, SHA1 ³ | R ⁴ |
| Yandex Disk | MD5 | R/W |
| Zoho WorkDrive | - | - |
| The local filesystem | All | R/W |

| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| premiumize.me | Yes | No |
| put.io | Yes | No |
| QingStor | No | Yes |
| Seafile | Yes | Yes |
| SFTP | No | No |
| Sia | No | No |
| SMB | No | No |
| SugarSync | Yes | Yes |
| Storj | Yes ☨ | Yes |
| Uptobox | No | Yes |
| WebDAV | Yes | Yes |
| Yandex Disk | Yes | Yes |
| Zoho WorkDrive | Yes | Yes |
| The local filesystem | Yes | No |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
Get swift credentials from environment variables in standard OpenStack form.
Properties:
Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
If true avoid calling abort upload on a failure.
It should be set to true for resuming uploads across different sessions.
PikPak is a private cloud drive.
Paths are specified as remote:path, and may be as deep as required, e.g. remote:directory/subdirectory.
Here is an example of making a remote for PikPak.
First run:
rclone config
This will guide you through an interactive setup process:
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n

Enter name for new remote.
name> remote

Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
XX / PikPak
   \ (pikpak)
Storage> XX

Option user.
Pikpak username.
Enter a value.
user> USERNAME

Option pass.
Pikpak password.
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:

Edit advanced config?
y) Yes
n) No (default)
y/n>

Configuration complete.
Options:
- type: pikpak
- user: USERNAME
- pass: *** ENCRYPTED ***
- token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"}
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Here are the Standard options specific to pikpak (PikPak).
Pikpak username.
Properties:
Pikpak password.
NB Input to this must be obscured - see rclone obscure.
Properties:
Here are the Advanced options specific to pikpak (PikPak).
OAuth Client Id.
Leave blank normally.
Properties:
OAuth Client Secret.
Leave blank normally.
Properties:
OAuth Access Token as a JSON blob.
Properties:
Auth server URL.
Leave blank to use the provider defaults.
Properties:
Token server url.
Leave blank to use the provider defaults.
Properties:
ID of the root folder. Leave blank normally.
Fill in for rclone to use a non root folder as its starting point.
Properties:
Send files to the trash instead of deleting permanently.
Defaults to true, namely sending files to the trash. Use --pikpak-use-trash=false to delete files permanently instead.
Properties:
Only show files that are in the trash.
This will show trashed files in their original directory structure.
Properties:
Files bigger than this will be cached on disk to calculate hash if required.
Properties:
The encoding for the backend.
See the encoding section in the overview for more info.
Properties:
Here are the commands specific to the pikpak backend.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the backend command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Add offline download task for url
rclone backend addurl remote: [options] [<arguments>+]
This command adds an offline download task for a url.
Usage:
rclone backend addurl pikpak:dirpath url
Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, the download will fall back to the default 'My Pack' folder.
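A concrete, illustrative invocation with a hypothetical directory and URL might be:
rclone backend addurl pikpak:downloads "https://example.com/file.zip"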
Request decompress of a file/files in a folder
rclone backend decompress remote: [options] [<arguments>+]
This command requests decompress of file/files in a folder.
Usage:
rclone backend decompress pikpak:dirpath {filename} -o password=password
rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
An optional argument 'filename' can be specified for a file located in 'pikpak:dirpath'. You may want to pass '-o password=password' for password-protected files. Also, pass '-o delete-src-file' to delete source files after decompression has finished.
Result:
{
    "Decompressed": 17,
    "SourceDeleted": 0,
    "Errors": 0
}
PikPak supports the MD5 hash, but it is sometimes empty, especially for user-uploaded files.
Deleted files will still be visible with --pikpak-trashed-only even after the trash is emptied. This goes away after a few days.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to premiumizeme (premiumize.me).
API Key.
Here are the Advanced options specific to premiumizeme (premiumize.me).
The encoding for the backend.
Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
premiumize.me file names can't have the \ or " characters in them. rclone maps these to and from identical looking unicode equivalents \ and "
premiumize.me only supports filenames up to 255 characters in length.
Paths are specified as remote:path
put.io paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Advanced options specific to putio (Put.io).
The encoding for the backend.
put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.
If you want to avoid ever hitting these limits, you may use the --tpslimit flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.
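For example, to stay well under the limits you might cap transactions per second like this (the value and remote name are illustrative):
rclone sync --tpslimit 2 /home/source remote:backup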
This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
- Using a Library API Token is not supported.
There are two distinct modes you can set up your remote:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration: paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
- You point your remote to a specific library during the configuration: paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
It has been actively developed using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
- 9.0.10 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
Each new version of rclone is automatically tested against the latest docker image of the seafile community server.
Here are the Standard options specific to seafile (seafile).
URL of seafile host to connect to.
Here are the Advanced options specific to seafile (seafile).
Should rclone create a library if it doesn't exist.
Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory for the remote machine (i.e. /)
Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.
Note that by default rclone will try to execute shell commands on the server, see shell access considerations.
Here is an example of making an SFTP configuration. First run
rclone config
This will guide you through an interactive setup process.
The about command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.
SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.
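For example, assuming a remote called remote configured as above:
rclone about remote: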
Here are the Standard options specific to sftp (SSH/SFTP).
SSH host to connect to.
Here are the Advanced options specific to sftp (SSH/SFTP).
Optional path to known_hosts file.
to be passed to the sftp client and to any commands run (eg md5sum).
Pass multiple variables space separated, eg
VAR1=value VAR2=value
and pass variables with spaces in quotes, eg
"VAR3=value with space" "VAR4=value with space" VAR5=nospacehere
Properties:
Space separated list of host key algorithms, ordered by preference.
At least one must match with server configuration. This can be checked for example using ssh -Q HostKeyAlgorithms.
Note: This can affect the outcome of key negotiation with the server even if server host key validation is not enabled.
Example:
ssh-ed25519 ssh-rsa ssh-dss
Properties:
On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.
The only ssh agent supported under Windows is Putty's pageant.
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper.
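A minimal config sketch enabling it (the host and user are placeholders; prefer leaving this off unless you must talk to such a server):
[remote]
type = sftp
host = example.com
user = sftpuser
use_insecure_cipher = true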
This relies on the go-smb2 library for communication with the SMB protocol.
Paths are specified as remote:sharename (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:item/path/to/dir.
The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in smb.conf (usually in /etc/samba/) file. You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).
You can't access shared printers from rclone.
You can't use Anonymous access for logging in. You have to use the guest user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, by \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.
Here is an example of making a SMB configuration.
First run
rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> d
Here are the Standard options specific to smb (SMB / CIFS).
SMB server hostname to connect to.
Here are the Advanced options specific to smb (SMB / CIFS).
Max time before closing idle connections.
To make a new Storj configuration you need one of the following:
- Access Grant that someone else shared with you.
- API Key of a Storj project you are a member of.
Here is an example of how to make a remote called remote. First run:
rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).
Choose an authentication method.
rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
rclone about is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
To fix these, please raise your system limits. You can do this by issuing a ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
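For example, either of the following (the remote name and path are illustrative) will bypass the "Deleted items" folder:
rclone delete --sugarsync-hard-delete remote:old-backups
rclone config update remote hard_delete true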
Here are the Standard options specific to sugarsync (Sugarsync).
Sugarsync App ID.
Here are the Advanced options specific to sugarsync (Sugarsync).
Sugarsync refresh token.
rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
To configure an Uptobox backend you'll need your personal API token. You'll find it in your account settings.
Here is an example of how to make a remote called remote with the default setup. First run:
rclone config
To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
Uptobox supports neither modified times nor checksums. All timestamps will read as that set by --default-time.
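For instance, passing --default-time 0s makes the reported timestamps read as the time rclone started up; the path here is illustrative:
rclone lsl remote:backup --default-time 0s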
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Here are the Standard options specific to uptobox (Uptobox).
Your access token.
Here are the Advanced options specific to uptobox (Uptobox).
Set to make uploaded files private.
Properties:
The encoding for the backend.
See the encoding section in the overview for more info.
Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about is not supported by this backend. An overview of used space can however be seen in the uptobox web interface.
Attributes :ro and :nc can be attached to the end of a path to tag the remote as read only or no create, e.g. remote:directory/subdirectory:ro or remote:directory/subdirectory:nc.
Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.
There will be no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.
Here is an example of how to make a union called remote for local folders. First run:
rclone config
This will guide you through an interactive setup process:
Here are the Standard options specific to union (Union merges the contents of several upstream fs).
List of space separated upstreams.
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
Minimum viable free space for lfs/eplfs policies.
@@ -28708,7 +29528,7 @@ e/n/d/r/c/s/q> qPaths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote. First run:
rclone config
Choose a number from below, or type in your own value
url> https://example.com/remote.php/webdav/
Name of the WebDAV site/service/software you are using
Choose a number from below, or type in your own value
 1 / Fastmail Files
   \ (fastmail)
 2 / Nextcloud
   \ (nextcloud)
 3 / Owncloud
   \ (owncloud)
 4 / Sharepoint Online, authenticated by Microsoft account
   \ (sharepoint)
 5 / Sharepoint with NTLM authentication, usually self-hosted or on-premises
   \ (sharepoint-ntlm)
 6 / Other site/service or software
   \ (other)
vendor> 2
User name
user> user
Password.
To copy a local directory to a WebDAV directory called backup
rclone copy /home/source remote:backup
Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
Here are the Standard options specific to webdav (WebDAV).
URL of http host to connect to.
Here are the Advanced options specific to webdav (WebDAV).
Command to run to get a bearer token.
Minimum time to sleep between API calls.
Properties:
Nextcloud upload chunk size.
We recommend configuring your NextCloud instance to increase the max chunk size to 1 GB for better upload performance. See https://docs.nextcloud.com/server/latest/admin_manual/configuration_files/big_file_upload_configuration.html#adjust-chunk-size-on-nextcloud-side
Set to 0 to disable chunked uploading.
Properties:
See below for notes on specific providers.
Use https://webdav.fastmail.com/ or a subdirectory as the URL, and your Fastmail email username@domain.tld as the username. Follow this documentation to create an app password with access to Files (WebDAV) and use this as the password.
Fastmail supports modified times using the X-OC-Mtime header.
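A sketch of the resulting config section (the username and app password are placeholders, and the password must be obscured, e.g. with rclone obscure):
[fastmail]
type = webdav
vendor = fastmail
url = https://webdav.fastmail.com/
user = username@domain.tld
pass = <obscured app password>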
Click on the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone needs in the config step. It will look something like https://example.com/remote.php/webdav/.
Owncloud supports modified times using the X-OC-Mtime header.
Yandex Disk is a cloud storage solution created by Yandex.
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to yandex (Yandex Disk).
OAuth Client Id.
Here are the Advanced options specific to yandex (Yandex Disk).
OAuth Access Token as a JSON blob.
When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
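For example, for a file of about 30 GiB, following the rule of thumb above (the file path and remote name are illustrative):
rclone copy --timeout 60m /path/to/big-file remote:backup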
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho WorkDrive is a cloud storage solution created by Zoho.
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
To view your current quota you can use the rclone about remote: command which will display your current usage.
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
Here are the Standard options specific to zoho (Zoho).
OAuth Client Id.
Here are the Advanced options specific to zoho (Zoho).
OAuth Access Token as a JSON blob.
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so
rclone sync --interactive /home/source /tmp/destination
Will sync /home/source to /tmp/destination.
For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
Here are the Advanced options specific to local (Local Disk).
Disable UNC (long path names) conversion on Windows.
See the metadata docs for more info.
Here are the commands specific to the local backend.
Run them with
rclone backend COMMAND remote:
.partial when copying to local,ftp,sftp then renamed at the end of the transfer. (Janne Hellsten, Nick Craig-Wood)
- sftp wrapped with crypt.
- --default-time 0s which will set this time to the time rclone started up.
- --separator option to cat command (Loren Gordon)
- config create making invalid config files (Nick Craig-Wood)
- size to JSON logs when moving or copying an object (Nick Craig-Wood)
- --disable !Feature (Nick Craig-Wood)
- completion with alias to the old name (Nick Craig-Wood)
- librclone with Go (alankrit)
- --stat more efficient (Nick Craig-Wood)
- --multi-thread-write-buffer-size for speed improvements on downloads (Paulo Schreiner)
- check --download and cat (Nick Craig-Wood)
- config/listremotes includes remotes defined with environment variables (kapitainsky)
- --no-check-certificate flag (Nick Craig-Wood)
- --suffix-keep-extension preserve 2 part extensions like .tar.gz (Nick Craig-Wood)
- core/stats (Nick Craig-Wood)
- maxDelete parameter being ignored via the rc (Nick Craig-Wood)
- --files-from (douchen)
- --progress and --interactive (Nick Craig-Wood)
- operations/stat with trailing / (Nick Craig-Wood)
- --rc flags (Nick Craig-Wood)
- options/get (Nick Craig-Wood)
- --mount-case-insensitive to force the mount to be case insensitive (Nick Craig-Wood)
- -l/--links flag (Nick Craig-Wood)
- -l/--links is in use (Nick Craig-Wood)
- --metadata on Android (Nick Craig-Wood)
- --crypt-suffix option to set a custom suffix for encrypted files (jladbrook)
- --crypt-pass-bad-blocks to allow corrupted file output (Nick Craig-Wood)
- base32768 encoding (Nick Craig-Wood)
- --drive-env-auth to get IAM credentials from runtime (Peter Brunner)
- --dropbox-pacer-min-sleep flag (Nick Craig-Wood)
- --fichier-cdn option to use the CDN for download (Nick Craig-Wood)
- SetModTime is not supported to debug (Tobias Gion)
- --gcs-user-project needed for requester pays (Christopher Merry)
- serve restic from the username in the client cert. (Peter Fern)
- --onedrive-av-override flag to download files flagged as virus (Nick Craig-Wood)
- rclone cleanup (albertony)
- --s3-versions on individual objects (Nick Craig-Wood)
- --sftp-host-key-algorithms to allow specifying SSH host key algorithms (Joel)
- --sftp-key-use-agent and --sftp-key-file together needing private key file (Arnav Singh)
- --uptobox-private flag to make all uploaded files private (Nick Craig-Wood)
- --webdav-pacer-min-sleep (ed)
- Range: header returning the wrong data (Nick Craig-Wood)
- rclone backend decode/encode commands to replicate functionality of cryptdecode (Anagh Kumar Baranwal)
- rcat - read from standard input and stream upload
- tree - shows a nicely formatted recursive listing
- cryptdecode - decode encrypted file names (thanks ishuah)
- config show - print the config file
- config file - print the config file location
- rclone check on encrypted file systems
Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
If you are using systemd-resolved (default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which causes not all domains to be resolved properly.
The Go resolver decision can be influenced with the GODEBUG=netdns=... environment variable. This also allows to resolve certain issues with DNS resolution. On Windows or MacOS systems, try forcing use of the internal Go resolver by setting GODEBUG=netdns=go at runtime. On other systems (Linux, *BSD, etc) try forcing use of the system name resolver by setting GODEBUG=netdns=cgo (and recompile rclone from source with CGO enabled if necessary). See the name resolution section in the go docs.
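For example, to force the pure Go resolver for a single invocation (the remote name is a placeholder):
GODEBUG=netdns=go rclone lsd remote: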
Error: config failed to refresh token: failed to start auth webserver: listen tcp 127.0.0.1:53682: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
...
yyyy/mm/dd hh:mm:ss Fatal error: config failed to refresh token: failed to start auth webserver: listen tcp 127.0.0.1:53682: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
This is sometimes caused by the Host Network Service causing issues with opening the port on the host.
A simple solution may be restarting the Host Network Service with e.g. PowerShell
Restart-Service hns
It is likely you have more than 10,000 files that need to be synced. By default, rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the --max-backlog flag.
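For example, to let rclone queue up to 200,000 files ahead (the value and paths are illustrative):
rclone sync --max-backlog 200000 /home/source remote:backup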
{{< rem email addresses removed from here need to be added to bin/.ignore-emails to make sure update-authors.py doesn't immediately put them back in again. >}}