From b6013a5e689ff4ff8a869aa262c9d04d454f5a71 Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood

See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly. For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this: `rclone copy --max-age 24h --no-traverse /path/to/src remote:`

Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the --metadata flag. Note that the modification time and metadata for the root directory will not be synced. See https://github.com/rclone/rclone/issues/7652 for more info.

Note: Use the -P/--progress flag to view real-time transfer statistics.

Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.

It is always the contents of the directory that is synced, not the directory itself. So when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command if unsure. If dest:path doesn't exist, it is created and the source:path contents go there.

It is not possible to sync overlapping remotes. However, you may exclude the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory.

Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the --metadata flag. Note that the modification time and metadata for the root directory will not be synced. See https://github.com/rclone/rclone/issues/7652 for more info.

Note: Use the -P/--progress flag to view real-time transfer statistics.

Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors.

The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. The --dest-after flag writes a list file using the same format flags as lsf.

Note that these logger flags have a few limitations, and certain scenarios are not currently supported. Note also that each file is logged during the sync, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file (which may or may not match what actually DID.)

Flags for anything which can Copy a file.

Otherwise, for each file in source:path selected by the filters (if any), this will move it into dest:path. If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.

See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.

Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the --metadata flag. Note that the modification time and metadata for the root directory will not be synced. See https://github.com/rclone/rclone/issues/7652 for more info.

Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag.

Note: Use the -P/--progress flag to view real-time transfer statistics.

Perform bidirectional synchronization between two paths.

Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
- Propagate changes on Path1 to Path2, and vice-versa.

Bisync is in beta and is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the Limitations section) before using, or data loss can result. Questions can be asked in the Rclone Forum. See full bisync description for details.

Flags for anything which can Copy a file.

Copy url content to dest. Copy the contents of the URL supplied to dest:path. Download a URL's content and copy it to the destination without saving it in temporary storage.
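As a hedged illustration of copyurl (the URL and remote name are placeholders, not from the original text):

```
# download straight to the remote, naming the file explicitly
rclone copyurl https://example.com/my-file.zip remote:dir/my-file.zip

# or derive the file name from the URL with -a/--auto-filename
rclone copyurl -a https://example.com/my-file.zip remote:dir
```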
List all the remotes in the config file and defined in environment variables. rclone listremotes lists all the available remotes from the config file. When used with the --long flag it lists the types and the descriptions of the remotes too. See the global flags page for global options not listed here.

For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): `rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files`, followed by `rclone copy --files-from-raw new_files /path/to/local remote:path`. The default time format is `2006-01-02 15:04:05`. Any of the filtering options can be applied to this command.

There are several related list commands: ls to list size and path of objects only, lsl to list modification time, size and path of objects only, lsd to list directories only, lsf to list objects and directories in easy to parse format, and lsjson to list objects and directories in JSON format. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

Flags for filtering directory listings.

Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.

Mounting on macOS can be done either via built-in NFS server, macFUSE (also known as osxfuse) or FUSE-T. macFUSE is a traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server. It is highly recommended to keep the default of

This method spins up an NFS server using the serve nfs command and mounts it to the specified mountpoint. If you run this in background mode using --daemon, you will need to send a SIGTERM signal to the rclone process using the kill command to stop the mount. Note that the --nfs-cache-handle-limit flag controls the maximum number of cached NFS handles used by the caching handler; it should not be set too low or you may experience errors when trying to access files.

If installing macFUSE using dmg packages from the website, rclone will locate the macFUSE libraries without any further intervention. If however, macFUSE is installed using the macports package manager, the following additional steps are required.

File access and modification times cannot be set separately as it seems to be an issue with the NFS client which always modifies both. Can be reproduced with 'touch -m' and 'touch -a' commands. This means that viewing files with various tools, notably macOS Finder, will cause rclone to update the modification time of the file. This may make rclone upload a full new copy of the file.

Rclone includes flags for unicode normalization with macFUSE that should be updated for FUSE-T. See this forum post and FUSE-T issue #16. The following flag should be added to the mount command: -o modules=iconv,from_code=UTF-8,to_code=UTF-8

When mounting with

The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

In the (probably unlikely) event that a directory has multiple duplicate filenames after applying case and unicode normalization, the --vfs-block-norm-dupes flag allows hiding these duplicates.

The --vfs-disk-space-total-size flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.
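Where those statistics are wrong or missing, a hedged sketch of overriding them (the size value is illustrative):

```
rclone mount remote: /mnt/remote --vfs-disk-space-total-size 100G
```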
Mount the remote as file system on a mountpoint. rclone nfsmount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. First set up your remote using rclone config. Check it works with rclone ls etc.

On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the --daemon flag to force background mode. In background mode rclone acts as a generic Unix mount program: the main program starts, spawns a background rclone process to setup and maintain the mount, waits until success or timeout and exits with appropriate code (killing the child process if it fails).

On Linux/macOS/FreeBSD start the mount like this, where /path/to/local/mount is an empty existing directory: `rclone nfsmount remote:path/to/files /path/to/local/mount`. On Windows you can start a mount in different ways. See below for details. If foreground mount is used interactively from a console window, rclone will serve the mount and occupy the console, so another window should be used to work with the mount until rclone is interrupted, e.g. by pressing Ctrl-C. The following examples will mount to an automatically assigned drive, to a specific drive letter X:, and to a path C:\path\parent\mount (where the parent directory or drive must exist, and the mount must not).

When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped. When running in background mode the user will have to stop the mount manually: `fusermount -u /path/to/local/mount` (Linux), or `umount /path/to/local/mount` (OS X). The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.

The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.

To run rclone nfsmount on Windows, you will need to download and install WinFsp. WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which rclone uses in combination with cgofuse. Both of these packages are by Bill Zissimopoulos who was very helpful during the implementation of rclone nfsmount for Windows.

Unlike other operating systems, Microsoft Windows provides a different filesystem type for network and fixed drives. It optimises access on the assumption fixed disk drives are fast and reliable, while network drives have relatively high latency and less reliability. Some settings can also be differentiated between the two types, for example that Windows Explorer should just display icons and not create preview thumbnails for image and video files on network drives.

In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described as a network share. If you mount an rclone remote using the default, fixed drive mode and experience unexpected program errors, freezes or other issues, consider mounting as a network drive instead.

When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a nonexistent subdirectory of an existing parent directory or drive. Using the special value * will tell rclone to automatically assign the next available drive letter, starting with Z: and moving backward. Option --volname can be used to set a custom volume name for the mounted file system.

To mount as network drive, you can add option --network-mode to your nfsmount command. Mounting to a directory path is not supported in this mode; the remote must always be mounted to a drive letter. A volume name specified with --volname will be used to create the network share path. If you specify a full network share UNC path with --volname, this will implicitly set the --network-mode option. You may also specify the network share UNC path as the mountpoint itself; then rclone will automatically assign a drive letter, same as with *, and use the UNC path specified as the shared folder name.
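A hedged sketch of the Windows variants described above (the remote and paths are placeholders):

```
rclone nfsmount remote:path/to/files *
rclone nfsmount remote:path/to/files X:
rclone nfsmount remote:path/to/files C:\path\parent\mount
rclone nfsmount remote:path/to/files X: --network-mode
```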
There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: `--fuse-flag --VolumePrefix=\server\share`. Note: In previous versions of rclone this was the only supported method. See also Limitations section below.

The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL). The mounted filesystem will normally get three entries in its access-control list (ACL), representing permissions for the POSIX permission scopes: Owner, group and others. By default, the owner and group will be taken from the current user, and the built-in group "Everyone" will be used to represent others. The user/group can be customized with FUSE options "UserName" and "GroupName", e.g. `-o UserName=user123 -o GroupName="Active Directory Users"`. The default permissions correspond to `--file-perms 0666 --dir-perms 0777`, i.e. read and write permissions for everyone.

The mapping of permissions is not always trivial, and the result you see in Windows Explorer may not be exactly like you expected. For example, when setting a value that includes write access for the group or others scope, this will be mapped to individual permissions "write attributes", "write data" and "append data", but not "write extended attributes". Windows will then show this as basic permission "Special" instead of "Write", because "Write" also covers the "write extended attributes" permission. When setting digit 0 for group or others, to indicate no permissions, they will still get individual permissions "read attributes", "read extended attributes" and "read permissions". This is done for compatibility reasons, e.g. to allow users without additional permissions to be able to read basic metadata about files like in Unix.

WinFsp 2021 (version 1.9) introduced a new FUSE option "FileSecurity", that allows the complete specification of file security descriptors using SDDL. With this you get detailed control of the resulting permissions, compared to use of the POSIX permissions described above, and no additional permissions will be added automatically for compatibility with Unix. Some example use cases follow.

If you set POSIX permissions for only allowing access to the owner, using `--file-perms 0600 --dir-perms 0700`, the user group and the built-in "Everyone" group will still be given some special permissions, as described above. When setting write permissions then, except for the owner, this does not include the "write extended attributes" permission, as mentioned above. This may prevent applications from writing to files, giving permission denied error instead. To set working write permissions for the built-in "Everyone" group, similar to what it gets by default but with the addition of the "write extended attributes", you can specify `-o FileSecurity="D:P(A;;FRFW;;;WD)"`.

Drives created as Administrator are not visible to other accounts, not even an account that was elevated to Administrator with the User Account Control (UAC) feature. A result of this is that if you mount to a drive letter from a Command Prompt run as Administrator, and then try to access the same drive from Windows Explorer (which does not run as Administrator), you will not be able to see the mounted drive. If you don't need to access the drive from applications running with administrative privileges, the easiest way around this is to always create the mount from a non-elevated command prompt. To make mapped drives available to the user account that created them regardless if elevated or not, there is a special Windows setting called linked connections that can be enabled.
It is also possible to make a drive mount available to everyone on the system, by running the process creating it as the built-in SYSTEM account. There are several ways to do this: One is to use the command-line utility PsExec, from Microsoft's Sysinternals suite, which has option -s to start processes as the SYSTEM account.

Note that mapping to a directory path, instead of a drive letter, does not suffer from the same limitations.

Mounting on macOS can be done either via built-in NFS server, macFUSE (also known as osxfuse) or FUSE-T. macFUSE is a traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server. It is highly recommended to keep the default of

This method spins up an NFS server using the serve nfs command and mounts it to the specified mountpoint. If you run this in background mode using --daemon, you will need to send a SIGTERM signal to the rclone process using the kill command to stop the mount. Note that the --nfs-cache-handle-limit flag controls the maximum number of cached NFS handles used by the caching handler; it should not be set too low or you may experience errors when trying to access files.

If installing macFUSE using dmg packages from the website, rclone will locate the macFUSE libraries without any further intervention. If however, macFUSE is installed using the macports package manager, the following additional steps are required.

There are some limitations, caveats, and notes about how it works. These are current as of FUSE-T version 1.0.14. As per the FUSE-T wiki: "File access and modification times cannot be set separately as it seems to be an issue with the NFS client which always modifies both. Can be reproduced with 'touch -m' and 'touch -a' commands." This means that viewing files with various tools, notably macOS Finder, will cause rclone to update the modification time of the file. This may make rclone upload a full new copy of the file.

When mounting with

Without the use of --vfs-cache-mode this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File Caching section for more info.

The bucket-based remotes (e.g. Swift, S3, Google Compute Storage, B2) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.

Only supported on Linux, FreeBSD, OS X and Windows at the moment.

File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone nfsmount can't use retries in the same way without making local copies of the uploads. Look at the VFS File Caching for solutions to make nfsmount more reliable.

You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time, etc.) for directory entries. The default is 1s which caches files just long enough to avoid too many callbacks to rclone from the kernel. In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories. The kernel can cache the info about a file for the time given by --attr-timeout. If you set it higher (10s or 1m say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue arising. If files don't change on the remote outside of the control of rclone then there is no chance of corruption. This is the same as setting the attr_timeout option in mount.fuse.

Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

When running rclone nfsmount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone nfsmount service specified as a requirement will see all files and folders immediately in this mode.
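A minimal sketch of such a unit, assuming a mountpoint of /mnt/remote and a config at /etc/rclone.conf (the unit name, paths and the fusermount ExecStop are my assumptions, not from the original text):

```
# /etc/systemd/system/rclone-nfsmount.service
[Unit]
Description=rclone nfsmount of remote:
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone nfsmount remote: /mnt/remote --config /etc/rclone.conf --vfs-cache-mode writes
ExecStop=/bin/fusermount -u /mnt/remote

[Install]
WantedBy=multi-user.target
```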
Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.

The core Unix program /bin/mount normally takes the -t FSTYPE argument and then runs the /sbin/mount.FSTYPE helper program, passing it mount options. rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone, and optionally /usr/bin/rclonefs; rclone will detect it and translate command-line arguments appropriately.

Now you can run classic mounts like this: `mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem`

or create systemd mount units, optionally accompanied by a systemd automount unit, or add a line in /etc/fstab, or use classic Automountd. Remember to provide explicit config=...,cache-dir=... as a workaround for mount units being run without HOME.

Rclone in the mount helper mode will split -o argument(s) by comma, replace _ by - and prepend -- to get the command-line flags. Options containing commas or spaces can be wrapped in single or double quotes. Mount option syntax includes a few extra options treated specially.

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory.

Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are: `kill -SIGHUP $(pidof rclone)`. If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache: `rclone rc vfs/forget`. Or individual files or directories: `rclone rc vfs/forget file=path/to/file dir=path/to/dir`

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance. Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.

The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space. Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.

If using --vfs-cache-max-size or --vfs-cache-min-free-space, note that the cache may exceed these quotas for two reasons: firstly because it is only checked every --vfs-cache-poll-interval, and secondly because open files cannot be evicted from the cache. You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using --vfs-cache-mode > off, as this can potentially cause data corruption.

--vfs-cache-mode off: In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible.

--vfs-cache-mode minimal: This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible.

--vfs-cache-mode writes: In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

--vfs-cache-mode full: In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well.
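A hedged example of a full-mode mount with a bounded cache (sizes and paths are illustrative):

```
rclone nfsmount remote: /mnt/remote --daemon \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 24h
```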
In full mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.

When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk. When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from size, modification time and hash, where available on an object. On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object). For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.

If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files. If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended. Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again.

When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests. These flags control the chunking: rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.

These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature. In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache.
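A hedged illustration of the chunked-reading flags discussed above (the values are arbitrary):

```
rclone nfsmount remote: /mnt/remote \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 2G
```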
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default.

The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".

In the (probably unlikely) event that a directory has multiple duplicate filenames after applying case and unicode normalization, the --vfs-block-norm-dupes flag allows hiding these duplicates.

The --vfs-disk-space-total-size flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. WARNING: contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

Flags for filtering directory listings. See the global flags page for global options not listed here.

Obscure password for use in the rclone config file. In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident. Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token. This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline. If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. If you want to encrypt the config file then please use config file encryption - see rclone config for more info. See the global flags page for global options not listed here.

Run a command against a running rclone. This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on; this can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port". A username and password can be passed in with --user and --pass. Note that --rc-addr, --rc-user and --rc-pass will be read also for --url, --user and --pass. See the global flags page for global options not listed here.

Copies standard input to file on remote. rclone rcat reads from standard input (stdin) and copies it to a single remote file. Note that the upload cannot be retried because the data is not stored. If the backend supports multipart uploading then individual chunks can be retried. If you need to transfer a lot of data, you may be better off caching it locally and then using rclone move to transfer it to the destination. See the global flags page for global options not listed here.

Run rclone listening to remote control commands only. This runs rclone so that it only listens to remote control commands. This is useful if you are controlling rclone via the rc API. If you pass in a path to a directory, rclone will serve that directory for GET requests on the URL passed in. It will also open the URL in the browser when rclone is run. Flags to control the Remote Control API. See the global flags page for global options not listed here.

Remove empty directories under the path. This recursively removes any empty directories (including directories that only contain empty directories) that it finds under the path. The root path itself will also be removed if it is empty, unless you supply the --leave-root flag. Use command rmdir to delete just the empty directory given by path, not recurse. This is useful for tidying up remotes that rclone has left a lot of empty directories in. For example the delete command will delete files but leave the directory structure (unless used with option --rmdirs). To delete a path and any objects in it, use the purge command.
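Hedged examples of the commands above (the remote paths are placeholders):

```
# read a password from stdin and print the obscured form
echo "secretpassword" | rclone obscure -

# stream data straight to a remote file
echo "hello world" | rclone rcat remote:path/to/file

# remove empty directories but keep the root
rclone rmdirs remote:path/to/dir --leave-root
```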
See the global flags page for global options not listed here.

Update the rclone binary. This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature; see the release signing docs for details. If used without flags (or with the implied --stable flag), it will try to update to the latest stable version. Sometimes the rclone team may recommend you a concrete beta or stable rclone release to troubleshoot your issue or add a bleeding edge feature. The --version VER flag, if given, will update to the concrete version instead of the latest one.

Note: Windows forbids deletion of a currently running executable so this command will rename the old executable to 'rclone.old.exe' upon success. Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate" then you will need to update manually. See the global flags page for global options not listed here.

Serve a remote over a protocol. Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g. `rclone serve http remote:`. Each subcommand has its own options which you can see in their help. See the global flags page for global options not listed here.

Serve remote:path over DLNA. Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs. Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.

This command uses the VFS layer; see the full VFS - Virtual File System description above (under rclone nfsmount) for the directory cache, file caching modes, fingerprinting, chunked reading, case sensitivity and the related flags, which apply here as well.

Flags for filtering directory listings.
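A hedged invocation of the DLNA server (the remote path and server name are placeholders):

```
rclone serve dlna remote:media --name rclone-media -v
```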
Serve any remote on docker's volume plugin API.

This command implements the Docker volume plugin API allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides docker volume plugin based on it.

To create a docker plugin, one must create a Unix or TCP socket that Docker will look for when you use the plugin and then it listens for commands from docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, as in the sketch below.

Running rclone serve docker will create the said socket, listening for commands from Docker to create the necessary volumes. Normally you need not give the --socket-addr flag. If you later decide to change the listening socket, the docker daemon must be restarted to reconnect to it.

The command will create volume mounts under the path given by --base-dir and maintain book-keeping records of created and mounted volumes. All mount and VFS options are submitted by the docker daemon via API, but you can also provide defaults on the command line as well as set path to the config file and cache directory or adjust logging verbosity.

## VFS - Virtual File System

This command uses the VFS layer; see the full VFS - Virtual File System description above (under rclone nfsmount) for the directory cache, file caching modes and the related flags, which apply here as well.

Flags for filtering directory listings.
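For testing, a direct invocation might look like the first block below (flags as in the serve docker help; values illustrative). The volume options in the second block are my assumption of the managed plugin's option keys, with the plugin installed under the name "rclone":

```
# run the plugin by hand, logging verbosely
rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
```

```
# create and use a volume backed by the plugin
docker volume create my_vol -d rclone -o remote=remote:path -o vfs-cache-mode=full
docker run --rm -it -v my_vol:/data alpine ls /data
```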
Serve remote:path over FTP.

Run a basic FTP server to serve a remote over FTP protocol. This can be viewed with a FTP client or you can make a remote of type FTP to read and write it.

Use --addr to specify which IP address and port the server should listen on, e.g. --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port. If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

By default this will serve files without needing a login. You can set a single username and password with the --user and --pass flags.

## VFS - Virtual File System

This command uses the VFS layer; see the full VFS - Virtual File System description above (under rclone nfsmount) for the directory cache, file caching modes and the related flags, which apply here as well.
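A hedged sketch of a minimal authenticated server (the credentials are placeholders):

```
rclone serve ftp remote:path --addr :2121 --user myuser --pass somepass
```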
If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT. PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.

There is an example program bin/test_proxy.py in the rclone source code. The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format.

This config generated must have this extra parameter - _root - root to use for the backend. And it may have this parameter - _obscure - comma separated strings for parameters to obscure.

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: `{"user": "me", "pass": "mypassword"}`. If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: `{"user": "me", "public_key": "AAAA..."}`. And as an example it could return this on STDOUT: `{"type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com"}`

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).

The program can manipulate the supplied user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.

Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.

This can be used to build general purpose proxies to any kind of backend that rclone supports.

Flags for filtering directory listings.
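As a hedged illustration of wiring the proxy into the FTP server above (the program path is a placeholder):

```
rclone serve ftp remote: --addr :2121 --auth-proxy /path/to/proxy-program
```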
See the global flags page for global options not listed here.

Serve the remote over HTTP.

Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it. You can use the filter flags (e.g. --include, --exclude) to control what is served. The server will log errors; use -v to see access logs.

This command uses the VFS layer; see the full VFS - Virtual File System description above (under rclone nfsmount) for the directory cache, file caching modes and the related flags, which apply here as well. The --auth-proxy mechanism described above under serve ftp is also available.
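A hedged sketch of serving over HTTP with basic authentication (the credentials are placeholders):

```
rclone serve http remote: --addr :8080 --user myuser --pass somepass
```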
The The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running WARNING. Contrary to If you supply the parameter PLEASE NOTE: There is an example program bin/test_proxy.py in the rclone source code. The program's job is to take a This config generated must have this extra parameter - And it may have this parameter - If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: And as an example return this on STDOUT This would mean that an SFTP backend would be created on the fly for the The program can manipulate the supplied Note that an internal cache is keyed on This can be used to build general purpose proxies to any kind of backend that rclone supports. Flags for filtering directory listings. See the global flags page for global options not listed here. Serve the remote as an NFS mount Create an NFS server that serves the given remote over the network. The primary purpose for this command is to enable mount command on recent macOS versions where installing FUSE is very cumbersome. Since this is running on NFSv3, no authentication method is available. Any client will be able to access the data. To limit access, you can use serve NFS on loopback address and rely on secure tunnels (such as SSH). For this reason, by default, a random TCP port is chosen and loopback interface is used for the listening address; meaning that it is only available to the local machine. If you want other machines to access the NFS mount over local network, you need to specify the listening address and port using Modifying files through NFS protocol requires VFS caching. Usually you will need to specify To serve NFS over the network use following command: We specify a specific port that we can use in the mount command: To mount the server under Linux/macOS, use the following command: Where This feature is only available on Unix platforms. This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. 
Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. Using the The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". The In the (probably unlikely) event that a directory has multiple duplicate filenames after applying case and unicode normalization, the This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running WARNING. Contrary to If you supply the parameter PLEASE NOTE: There is an example program bin/test_proxy.py in the rclone source code. The program's job is to take a This config generated must have this extra parameter - And it may have this parameter - If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: And as an example return this on STDOUT This would mean that an SFTP backend would be created on the fly for the The program can manipulate the supplied Note that an internal cache is keyed on This can be used to build general purpose proxies to any kind of backend that rclone supports. Serve the remote for restic's REST API. Serve the remote as an NFS mount Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly. Restic is a command-line program for doing backups. The server will log errors. Use -v to see access logs. First set up a remote for your chosen cloud provider. Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions. Now start the rclone restic server Where you can replace "backup" in the above by whatever path in the remote you wish to use. By default this will serve on "localhost:8080" you can change this with use of the You might wish to start this server on boot. 
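As an illustrative sketch of the restic server start described above (the remote name and the backup path are placeholders), the server and a first repository initialization might look like:

rclone serve restic -v remote:backup
restic -r rest:http://localhost:8080/ init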
Adding Now you can follow the restic instructions on setting up restic. Note that you will need restic 0.8.2 or later to interoperate with rclone. For the example above you will want to use "http://localhost:8080/" as the URL for the REST server. For example: Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these must end with /. Eg The Use If you set You can use a unix socket by setting the url to By default this will serve over http. If you want you can serve over https. You will need to supply the --min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0"). By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the If no static users are configured by either of the above methods, and client certificates are required by the Use To create an htpasswd file: The password file can be updated while rclone is running. Use Use See the global flags page for global options not listed here. Serve remote:path over s3. S3 server supports Signature Version 4 authentication. Just use Please note that some clients may require HTTPS endpoints. See the SSL docs for more information. This command uses the VFS directory cache. All the functionality will work with Use Use For a simple set up, to serve This will be compatible with an rclone remote which is defined like this: Note that setting When uploading multipart files Multipart server side copies do not work (see #7454). These take a very long time and eventually fail. The default threshold for multipart server side copies is 5G which is the maximum it can be, so files above this side will fail to be server side copied. For a current list of When using When using Versioning is not currently supported. Metadata will only be saved in memory other than the rclone Other operations will return error Use If you set You can use a unix socket by setting the url to By default this will serve over http. If you want you can serve over https. You will need to supply the --min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0"). ## VFS - Virtual File System Create an NFS server that serves the given remote over the network. The primary purpose for this command is to enable mount command on recent macOS versions where installing FUSE is very cumbersome. Since this is running on NFSv3, no authentication method is available. Any client will be able to access the data. To limit access, you can use serve NFS on loopback address and rely on secure tunnels (such as SSH). For this reason, by default, a random TCP port is chosen and loopback interface is used for the listening address; meaning that it is only available to the local machine. If you want other machines to access the NFS mount over local network, you need to specify the listening address and port using Modifying files through NFS protocol requires VFS caching. Usually you will need to specify To serve NFS over the network use following command: We specify a specific port that we can use in the mount command: To mount the server under Linux/macOS, use the following command: Where This feature is only available on Unix platforms. This command uses the VFS layer. 
This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". The In the (probably unlikely) event that a directory has multiple duplicate filenames after applying case and unicode normalization, the This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running WARNING. Contrary to See the global flags page for global options not listed here. Serve the remote for restic's REST API. Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly. Restic is a command-line program for doing backups. The server will log errors. Use -v to see access logs. First set up a remote for your chosen cloud provider. Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions. Now start the rclone restic server Where you can replace "backup" in the above by whatever path in the remote you wish to use. By default this will serve on "localhost:8080" you can change this with use of the You might wish to start this server on boot. Adding Now you can follow the restic instructions on setting up restic. Note that you will need restic 0.8.2 or later to interoperate with rclone. For the example above you will want to use "http://localhost:8080/" as the URL for the REST server. For example: Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these must end with /. Eg The Use If you set You can use a unix socket by setting the url to By default this will serve over http. If you want you can serve over https. 
You will need to supply the --min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0"). By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the If no static users are configured by either of the above methods, and client certificates are required by the Use To create an htpasswd file: The password file can be updated while rclone is running. Use Use See the global flags page for global options not listed here. Serve the remote over SFTP. Serve remote:path over s3. Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it. You can use the filter flags (e.g. The server will respond to a small number of shell commands, mainly md5sum, sha1sum and df, which enable it to provide support for checksums and the about feature when accessed from an sftp remote. Note that this server uses standard 32 KiB packet payload size, which means you must not configure the client to expect anything else, e.g. with the chunk_size option on an sftp remote. The server will log errors. Use You must provide some means of authentication, either with If you don't supply a host By default the server binds to localhost:2022 - if you want it to be reachable externally then supply Note that the default of If On the client you need to set The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from being used. Omitting "restrict" and using S3 server supports Signature Version 4 authentication. Just use Please note that some clients may require HTTPS endpoints. See the SSL docs for more information. This command uses the VFS directory cache. All the functionality will work with Use Use For a simple set up, to serve This will be compatible with an rclone remote which is defined like this: Note that setting When uploading multipart files Multipart server side copies do not work (see #7454). These take a very long time and eventually fail. The default threshold for multipart server side copies is 5G which is the maximum it can be, so files above this side will fail to be server side copied. For a current list of When using When using Versioning is not currently supported. Metadata will only be saved in memory other than the rclone Other operations will return error Use If you set You can use a unix socket by setting the url to By default this will serve over http. If you want you can serve over https. You will need to supply the --min-tls-version is minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0"). ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. 
If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". The In the (probably unlikely) event that a directory has multiple duplicate filenames after applying case and unicode normalization, the This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running WARNING. Contrary to If you supply the parameter PLEASE NOTE: There is an example program bin/test_proxy.py in the rclone source code. The program's job is to take a This config generated must have this extra parameter - And it may have this parameter - If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: And as an example return this on STDOUT This would mean that an SFTP backend would be created on the fly for the The program can manipulate the supplied Note that an internal cache is keyed on This can be used to build general purpose proxies to any kind of backend that rclone supports. Serve the remote over SFTP. Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it. You can use the filter flags (e.g. The server will respond to a small number of shell commands, mainly md5sum, sha1sum and df, which enable it to provide support for checksums and the about feature when accessed from an sftp remote. Note that this server uses standard 32 KiB packet payload size, which means you must not configure the client to expect anything else, e.g. with the chunk_size option on an sftp remote. The server will log errors. Use You must provide some means of authentication, either with If you don't supply a host By default the server binds to localhost:2022 - if you want it to be reachable externally then supply Note that the default of If On the client you need to set The "restrict" in authorized_keys prevents SHA1SUMs and MD5SUMs from being used. Omitting "restrict" and using This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below. 
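As a quick illustration of the serve sftp command this section documents, a minimal sketch with placeholder credentials and the default listen address might be:

rclone serve sftp remote:path --user myuser --pass mypassword --addr localhost:2022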
The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. Using the However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.

You can send a If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache: Or individual files or directories:

The Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

If run with The cache has 4 different modes selected by Note that files are written back to the remote only when they are closed and if they haven't been accessed for If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well. In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded. So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. This mode should support all normal file system operations and is otherwise identical to

When reading a file rclone will read When using this mode it is recommended that

IMPORTANT: not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected.

Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file.
Fingerprints are made from: where available on an object. On some backends some of these attributes are slow to read (they take an extra API call per object, or extra work per object). For example If you use the If you are running a vfs cache over Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again. When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests. These flags control the chunking: Rclone will start reading a chunk of size With Setting These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature. In particular S3 and Swift benefit hugely from the Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. When using VFS write caching ( Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. The The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". The In the (probably unlikely) event that a directory has multiple duplicate filenames after applying case and unicode normalization, the This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running WARNING. Contrary to If you supply the parameter PLEASE NOTE: There is an example program bin/test_proxy.py in the rclone source code. 
The program's job is to take a This config generated must have this extra parameter - And it may have this parameter - If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: And as an example return this on STDOUT This would mean that an SFTP backend would be created on the fly for the The program can manipulate the supplied Note that an internal cache is keyed on This can be used to build general purpose proxies to any kind of backend that rclone supports.

Flags for filtering directory listings. See the global flags page for global options not listed here.

Serve remote:path over WebDAV.

Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write it.

This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system. Cloud storage objects have lots of properties which aren't like disk files - you can't extend them or write to the middle of them, so the VFS layer has to deal with that. Because there is no one right way of doing this there are various options explained below.

The VFS layer also implements a directory cache - this caches info about files and directories (but not the data) in memory. Using the Or individual files or directories:

The Each open file will try to keep the specified amount of data in memory at all times. The buffered data is bound to one open file and won't be shared. This flag is an upper limit for the used memory per open file. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to

These flags control the VFS file caching options. File caching is necessary to make the VFS layer appear compatible with a normal file system. It can be disabled at the cost of some compatibility. For example you'll need to enable VFS caching if you want to read and write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

If using The You should not run two copies of rclone using the same VFS cache with the same or overlapping remotes if using

In this mode (the default) the cache will read directly from the remote and write directly to the remote without caching anything on disk. This will mean some operations are not possible

This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space. These operations are not possible

In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute.

In this mode all reads and writes are buffered to and from disk. When data is read from the remote this is buffered to disk as well. In this mode the files in the cache will be sparse files and rclone will keep track of which bits of the files it has downloaded.
So if an application only reads the starts of each file, then rclone will only buffer the start of the file. These files will appear to be their full size in the cache, but they will be sparse files with only the data that has been downloaded present in them. When reading a file rclone will read When using this mode it is recommended that IMPORTANT not all file systems support sparse files. In particular FAT/exFAT do not. Rclone will perform very badly if the cache directory is on a filesystem which doesn't support sparse files and it will log an ERROR message if one is detected. Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file. Fingerprints are made from: If you use the If you are running a vfs cache over Note that if you change the value of this flag, the fingerprints of the files in the cache may be invalidated and the files will need to be downloaded again. When rclone reads files from a remote it reads them in chunks. This means that rather than requesting the whole file rclone reads the chunk specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests. These flags control the chunking: Rclone will start reading a chunk of size With Setting These flags may be used to enable/disable features of the VFS for performance or other reasons. See also the chunked reading feature. In particular S3 and Swift benefit hugely from the When using VFS write caching ( Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case. Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default. The user may specify a file name to open/delete/rename/etc with a case different than what is stored on the remote. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by the underlying remote. Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system presented by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target. If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true". The In the (probably unlikely) event that a directory has multiple duplicate filenames after applying case and unicode normalization, the This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. Some backends, most notably S3, do not report the amount of bytes used. 
If you need this information to be available when running WARNING. Contrary to If you supply the parameter PLEASE NOTE: There is an example program bin/test_proxy.py in the rclone source code. The program's job is to take a This config generated must have this extra parameter - And it may have this parameter - Note that an internal cache is keyed on This can be used to build general purpose proxies to any kind of backend that rclone supports.

Flags for filtering directory listings. See the global flags page for global options not listed here.

Changes storage class/tier of objects in remote.

rclone settier changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, Google Cloud Storage - Regional Storage, Nearline, Coldline etc.

Note that certain tier changes make objects unavailable for immediate access. For example, tiering to archive in Azure Blob storage puts objects into a frozen state, from which the user can restore them by setting the tier to Hot/Cool; similarly, moving S3 objects to Glacier makes them inaccessible.

You can use it to tier a single object: Or just provide a remote directory and all files in the directory will be tiered:

See the global flags page for global options not listed here.

Run a test command

Rclone test is used to run test commands. Select which test command you want with the subcommand, eg Each subcommand has its own options which you can see in their help. NB Be careful running these commands, they may do strange things so reading their documentation first is recommended.

See the global flags page for global options not listed here.

Log any change notify requests for the remote passed in.

See the global flags page for global options not listed here.

Makes a histogram of file name characters.

This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified. The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.

See the global flags page for global options not listed here.

Discovers file name or other limitations for paths.

rclone info discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one. NB this can create undeletable files and other hazards - use with care

See the global flags page for global options not listed here.

Make files with random contents of the size given

See the global flags page for global options not listed here.

Make a random file hierarchy in a directory

See the global flags page for global options not listed here.

Load all the objects at remote:path into memory and report memory stats.

See the global flags page for global options not listed here.

Create new file or change file modification time.

Set the modification time on file(s) as specified by remote:path to have the current time.
If remote:path does not exist then a zero sized file will be created, unless If Note that value of

Flags for filtering directory listings. See the global flags page for global options not listed here.

List the contents of the remote in a tree like fashion.

rclone tree lists the contents of a remote in a similar way to the unix tree command. For example The tree command has many options for controlling the listing which are compatible with the tree command, for example you can include file sizes with For a more interactive navigation of the remote see the ncdu command.

Flags for filtering directory listings. See the global flags page for global options not listed here.

Metadata is data about a file which isn't the contents of the file. Normally rclone only preserves the modification time and the content (MIME) type where possible. Rclone supports preserving all the available metadata on files (not directories) when using the

Metadata is data about a file (or directory) which isn't the contents of the file (or directory). Normally rclone only preserves the modification time and the content (MIME) type where possible. Rclone supports preserving all the available metadata on files and directories when using the

Exactly what metadata is supported and what that support means depends on the backend. Backends that support metadata have a metadata section in their docs and are listed in the features table (Eg local, s3) Some backends don't support metadata, some only support metadata on files and some support metadata on both files and directories.

Rclone only supports a one-time sync of metadata. This means that metadata will be synced from the source object to the destination object only when the source object has changed and needs to be re-uploaded. If the metadata subsequently changes on the source object without changing the object itself then it won't be synced to the destination object. This is in line with the way rclone syncs Using Note that arbitrary metadata may be added to objects using the

The --metadata-mapper flag can be used to pass the name of a program which can transform metadata when it is being copied from source to destination.

Rclone supports Metadata is divided into two types: system metadata and user metadata. Metadata which the backend uses itself is called system metadata. For example on the local backend the system metadata The metadata keys Hashes are not included in system metadata as there is a well defined way of reading those already.

Rclone has a number of options to control its behaviour. Options that take parameters can have the values passed in two ways, --option=value or --option value.

rclone(1) User Manual
-Rclone syncs your files to cloud storage
+rclone copy --max-age 24h --no-traverse /path/to/src remote:
If metadata syncing is required then use the --metadata flag. Note: Use the -P/--progress flag to view real-time transfer statistics. Note: Use the --dry-run or the --interactive/-i flag to test without copying anything.
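A cautious variant of the example above, combining it with the flags just mentioned (the paths are illustrative):

rclone copy --max-age 24h --no-traverse /path/to/src remote: --dry-run -P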
@@ -627,7 +629,7 @@ destpath/sourcepath/two.txt
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
- -I, --ignore-times Don't skip files that match size and time - transfer all files
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
@@ -641,6 +643,7 @@ destpath/sourcepath/two.txt
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
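For example, the newly added --no-update-dir-modtime flag above could be used like this (paths are illustrative):

rclone copy source:path dest:path --no-update-dir-modtime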
@@ -697,12 +700,50 @@ destpath/sourcepath/two.txt
rclone copy source:path dest:path [flags]
If metadata syncing is required then use the --metadata flag. Note: Use the -P/--progress flag to view real-time transfer statistics. Note: Use the rclone dedupe command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. See this forum post for more info.
Logger Flags
+The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name (or stdout if it is -) supplied. What they write is described in the help below. For example --differ will write all paths which are present on both the source and destination but different. The --combined flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell you what happened to it. These are reminiscent of diff files.
+
+= path means path was found in source and destination and was identical
! path means there was an error reading or hashing the source or dest.
The --dest-after flag writes a list file using the same format flags as lsf (including customizable options for hash, modtime, etc.). Conceptually it is similar to rsync's --itemize-changes, but not identical -- it should output an accurate list of what will be on the destination after the sync.
+
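A sketch combining the logger flags described above; the output file names are illustrative placeholders:

rclone sync source:path dest:path --combined - --differ differ.txt --error errors.txt --dest-after dest-after.txt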
+- --max-duration / CutoffModeHard
+- --compare-dest / --copy-dest
+- High-level retries, because there would be duplicates (use --retries 1 to disable)
rclone sync source:path dest:path [flags]
Options
-
- --create-empty-src-dirs Create empty source dirs on destination after sync
- -h, --help help for sync
+ --absolute Put a leading / in front of path names
+ --combined string Make a combined report of changes to this file
+ --create-empty-src-dirs Create empty source dirs on destination after sync
+ --csv Output in CSV format
+ --dest-after string Report all files that exist on the dest post-sync
+ --differ string Report all non-matching files to this file
+ -d, --dir-slash Append a slash to directory names (default true)
+ --dirs-only Only list directories
+ --error string Report all files with errors (hashing or reading) to this file
+ --files-only Only list files (default true)
+ -F, --format string Output format - see lsf help for details (default "p")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
+ -h, --help help for sync
+ --match string Report all matching files to this file
+ --missing-on-dst string Report all files missing from the destination to this file
+ --missing-on-src string Report all files missing from the source to this file
+ -s, --separator string Separator for the items in the format (default ";")
+ -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)
Copy Options
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
- -I, --ignore-times Don't skip files that match size and time - transfer all files
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
@@ -728,6 +769,7 @@ destpath/sourcepath/two.txt
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
@@ -742,6 +784,7 @@ destpath/sourcepath/two.txt
--delete-after When synchronizing, delete files on destination after transferring (default)
--delete-before When synchronizing, delete files on destination before transferring
--delete-during When synchronizing, delete files during transfer
+ --fix-case Force rename of case insensitive dest to match source
--ignore-errors Delete even if there are I/O errors
--max-delete int When synchronizing, limit the number of deletes (default -1)
--max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off)
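For example, the new --fix-case flag above could be used to repair casing on a case-insensitive destination (paths are illustrative):

rclone sync source:path dest:path --fix-case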
@@ -796,6 +839,8 @@ destpath/sourcepath/two.txt
--check-first Do all the checks before starting transfers
@@ -714,7 +755,7 @@ destpath/sourcepath/two.txt
Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server-side move will be used, otherwise it will copy it (server-side if possible) into dest:path then delete the original (if no errors on copy) in source:path. If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag. If metadata syncing is required then use the --metadata flag. Important: Since this can cause data loss, test first with the --dry-run or the --interactive/-i flag. Note: Use the -P/--progress flag to view real-time transfer statistics.
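A sketch of a typical move using the flags just mentioned (paths are illustrative):

rclone move source:path dest:path --delete-empty-src-dirs -P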
@@ -814,7 +859,7 @@ destpath/sourcepath/two.txt
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
- -I, --ignore-times Don't skip files that match size and time - transfer all files
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
@@ -828,6 +873,7 @@ destpath/sourcepath/two.txt
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
@@ -1601,23 +1647,35 @@ rclone backend help <backendname>
rclone move source:path dest:path [flags]
Synopsis
New, Newer, Older, and Deleted files. - Propagate changes on Path1 to Path2, and vice-versa.
rclone bisync remote1:path1 remote2:path2 [flags]
Options
-
- --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
- --check-filename string Filename for --check-access (default: RCLONE_TEST)
- --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true")
- --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
- --filters-file string Read filtering patterns from a file
- --force Bypass --max-delete safety check and run the sync. Consider using with --verbose
- -h, --help help for bisync
- --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
- --localtime Use local time in listings (default: UTC)
- --no-cleanup Retain working files (useful for troubleshooting and testing).
- --remove-empty-dirs Remove ALL empty directories at the final cleanup step.
- --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
- -1, --resync Performs the resync run. Path1 files may overwrite Path2 versions. Consider using --verbose or --dry-run first.
- --workdir string Use custom working dir - useful for testing. (default: $HOME/.cache/rclone/bisync)
+ --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote.
+ --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote.
+ --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
+ --check-filename string Filename for --check-access (default: RCLONE_TEST)
+ --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true")
+ --compare string Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')
+ --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num)
+ --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none")
+ --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict')
+ --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
+ --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)
+ --filters-file string Read filtering patterns from a file
+ --force Bypass --max-delete safety check and run the sync. Consider using with --verbose
+ -h, --help help for bisync
+ --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
+ --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s)
+ --no-cleanup Retain working files (useful for troubleshooting and testing).
+ --no-slow-hash Ignore listing checksums only on backends where they are slow
+ --recover Automatically recover from interruptions without requiring --resync.
+ --remove-empty-dirs Remove ALL empty directories at the final cleanup step.
+ --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
+ -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
+ --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none")
+ --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls.
+ --workdir string Use custom working dir - useful for testing. (default: {WORKDIR})
Copy Options
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
- -I, --ignore-times Don't skip files that match size and time - transfer all files
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
@@ -1643,6 +1701,7 @@ rclone backend help <backendname>
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
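Drawing on the new options listed above, a bisync run might be sketched like this; the remotes and the particular choices are illustrative, not recommendations:

rclone bisync remote1:path1 remote2:path2 --compare size,modtime,checksum --conflict-resolve newer --recover --resilient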
@@ -2218,7 +2277,7 @@ if src is directory
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
- -I, --ignore-times Don't skip files that match size and time - transfer all files
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
@@ -2232,6 +2291,7 @@ if src is directory
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
@@ -2279,12 +2339,22 @@ if src is directory
--check-first Do all the checks before starting transfers
@@ -1629,7 +1687,7 @@ rclone backend help <backendname>
rclone copyurl
-Synopsis
--auto-filename will attempt to automatically determine the filename from the URL (after any redirections) and use it in the destination path. With --auto-filename-header in addition, if a specific filename is set in HTTP headers, it will be used instead of the name from the URL. With --print-filename in addition, the resulting file name will be printed.
--no-clobber will prevent overwriting file on the destination if there is one with the same name.
--stdout or making the output file name - will cause the output to be written to standard output.
Troubleshooting
+If you can't get rclone copyurl to work then here are some things you can try:
+
--disable-http2 rclone will use HTTP2 if available - try disabling it
--bind 0.0.0.0 rclone will use IPv6 if available - try disabling it
--bind ::0 to disable IPv4
--user agent curl - some sites have whitelists for curl's user-agent - try that
Make sure the site works with curl directly
rclone copyurl https://example.com dest:path [flags]
Options
-a, --auto-filename Get the file name from the URL and use it for destination file path
@@ -2570,11 +2640,11 @@ rclone link --expire 1d remote:path/to/file
Synopsis
---long flag it lists the types too.
+--long flag it lists the types and the descriptions too.
rclone listremotes [flags]
Options
+ --long Show the type and the description as well as names
-h, --help help for listremotes
- --long Show the type as well as names
SEE ALSO
@@ -2639,6 +2709,14 @@ test.sh,449
+rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path
+The default time format is '2006-01-02 15:04:05'. Other formats can be specified with the --time-format flag. Examples:
+rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
+rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
+rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
+rclone lsf remote:path --format pt --time-format RFC3339
+rclone lsf remote:path --format pt --time-format DateOnly
+rclone lsf remote:path --format pt --time-format max
+--time-format max will automatically truncate '2006-01-02 15:04:05.000000000' to the maximum precision supported by the remote.
@@ -2654,16 +2732,17 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
rclone lsf remote:path [flags]
Options
-
+ --absolute Put a leading / in front of path names
- --csv Output in CSV format
- -d, --dir-slash Append a slash to directory names (default true)
- --dirs-only Only list directories
- --files-only Only list files
- -F, --format string Output format - see help for details (default "p")
- --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
- -h, --help help for lsf
- -R, --recursive Recurse into the listing
- -s, --separator string Separator for the items in the format (default ";")
+ --csv Output in CSV format
+ -d, --dir-slash Append a slash to directory names (default true)
+ --dirs-only Only list directories
+ --files-only Only list files
+ -F, --format string Output format - see help for details (default "p")
+ --hash h Use this hash when h is used in the format MD5|SHA-1|DropboxHash (default "md5")
+ -h, --help help for lsf
+ -R, --recursive Recurse into the listing
+ -s, --separator string Separator for the items in the format (default ";")
+ -t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)
Filter Options
--delete-excluded Delete files on dest excluded from sync
@@ -2857,8 +2936,11 @@ rclone mount remote:path/to/files * --volname \\cloud\remote
Mounting on macOS
NFS mount
+Unicode Normalization
+It is highly recommended to keep the default of --no-unicode-normalization=false for all mount and serve commands on macOS. For details, see vfs-case-sensitivity.
NFS mount
--nfs-cache-handle-limit controls the maximum number of cached file handles stored by the nfsmount caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems.
macFUSE Notes
sudo mkdir /usr/local/lib
@@ -2872,9 +2954,6 @@ sudo ln -s /opt/local/lib/libfuse.2.dylib
Unicode Normalization
-rclone mount command.
--o modules=iconv,from_code=UTF-8,to_code=UTF-8
Read Only mounts
--read-only, attempts to write to files will fail silently as opposed to with a clear warning as in macFUSE.
Limitations
@@ -3042,6 +3121,8 @@ WantedBy=multi-user.target
--no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
VFS Disk Options
@@ -3080,6 +3161,7 @@ WantedBy=multi-user.target
--read-only Only allow read-only access
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -3092,7 +3174,7 @@ WantedBy=multi-user.target
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -3158,7 +3240,7 @@ if src is directory
--ignore-checksum Skip post copy check of checksums
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use modtime or checksum
- -I, --ignore-times Don't skip files that match size and time - transfer all files
+ -I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
@@ -3172,6 +3254,7 @@ if src is directory
--multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki)
--no-check-dest Don't check the destination, copy regardless
--no-traverse Don't traverse destination file system on copy
+ --no-update-dir-modtime Don't update directory modification times
--no-update-modtime Don't update destination modtime if files identical
--order-by string Instructions on how to order the transfers, e.g. 'size,descending'
--partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default ".partial")
@@ -3293,9 +3376,349 @@ if src is directory
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+rclone nfsmount
+Synopsis
+First set up your remote using rclone config. Check it works with rclone ls etc.
+On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode; use the --daemon flag to force background mode. On Windows you can run mount in foreground only, the flag is ignored.
+Start the mount like this, where /path/to/local/mount is an empty existing directory:
+rclone nfsmount remote:path/to/files /path/to/local/mount
+On Windows you can start a mount in different ways. The following examples will mount to an automatically assigned drive, to specific drive letter X:, to path C:\path\parent\mount (where parent directory or drive must exist, and mount must not exist, and is not supported when mounting as a network drive), and the last example will mount as network share \\cloud\remote and map it to an automatically assigned drive:
+rclone nfsmount remote:path/to/files *
+rclone nfsmount remote:path/to/files X:
+rclone nfsmount remote:path/to/files C:\path\parent\mount
+rclone nfsmount remote:path/to/files \\cloud\remote
+# Linux
+fusermount -u /path/to/local/mount
+# OS X
+umount /path/to/local/mount
+Installing on Windows
+Mounting modes on Windows
+* will tell rclone to automatically assign the next available drive letter, starting with Z: and moving backward. Examples:
+rclone nfsmount remote:path/to/files *
+rclone nfsmount remote:path/to/files X:
+rclone nfsmount remote:path/to/files C:\path\parent\mount
+rclone nfsmount remote:path/to/files X:
+Option --volname can be used to set a custom volume name for the mounted file system. The default is to use the remote name and path.
+To mount as network drive, you can add option --network-mode to your nfsmount command. Mounting to a directory path is not supported in this mode, it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter.
+rclone nfsmount remote:path/to/files X: --network-mode
+A volume name specified with --volname will be used to create the network share path. A complete UNC path, such as \\cloud\remote, optionally with path \\cloud\remote\madeup\path, will be used as is. Any other string will be used as the share part, after a default prefix \\server\. If no volume name is specified then \\server\share will be used. You must make sure the volume name is unique when you are mounting more than one drive, or else the mount command will fail. The share name will be treated as the volume label for the mapped drive, shown in Windows Explorer etc, while the complete \\server\share will be reported as the remote UNC path by net use etc, just like a normal network drive mapping.
+If you specify a full network share UNC path with --volname, this will implicitly set the --network-mode option, so the following two examples have the same result:
+rclone nfsmount remote:path/to/files X: --network-mode
+rclone nfsmount remote:path/to/files X: --volname \\server\share
+You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with * and use that as mountpoint, and instead use the UNC path specified as the volume name, as if it were specified with the --volname option. This will also implicitly set the --network-mode option. This means the following two examples have the same result:
+rclone nfsmount remote:path/to/files \\cloud\remote
+rclone nfsmount remote:path/to/files * --volname \\cloud\remote
+There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: --fuse-flag --VolumePrefix=\server\share. Note that the path must be with just a single backslash prefix in this case.
+Windows filesystem permissions
+-o UserName=user123 -o GroupName="Authenticated Users". The permissions on each entry will be set according to options --dir-perms and --file-perms, which take a value in traditional Unix numeric notation.
+The default permissions correspond to --file-perms 0666 --dir-perms 0777, i.e. read and write permissions to everyone. This means you will not be able to start any programs from the mount. To be able to do that you must add execute permissions, e.g. --file-perms 0777 --dir-perms 0777 to add it to everyone. If the program needs to write files, chances are you will have to enable VFS File Caching as well (see also limitations). Note that the default write permission has some restrictions for accounts other than the owner, specifically it lacks the "write extended attributes" permission, as explained next.
+With restrictive permissions such as --file-perms 0600 --dir-perms 0700, the user group and the built-in "Everyone" group will still be given some special permissions, as described above. Some programs may then (incorrectly) interpret this as the file being accessible by everyone, for example an SSH client may warn about "unprotected private key file". You can work around this by specifying -o FileSecurity="D:P(A;;FA;;;OW)", which sets file all access (FA) to the owner (OW), and nothing else.
+To set permissions for everyone instead, use -o FileSecurity="D:P(A;;FRFW;;;WD)", which sets file read (FR) and file write (FW) to everyone (WD). If file execute (FX) is also needed, then change to -o FileSecurity="D:P(A;;FRFWFX;;;WD)", or set file all access (FA) to get full access permissions, including delete, with -o FileSecurity="D:P(A;;FA;;;WD)".
+Windows caveats
+One option is to use the Sysinternals PsExec utility with its -s flag to start processes as the SYSTEM account. Another alternative is to run the mount command from a Windows Scheduled Task, or a Windows Service, configured to run as the SYSTEM account. A third alternative is to use the WinFsp.Launcher infrastructure. Read more in the install documentation. Note that when running rclone as another user, it will not use the configuration file from your profile unless you tell it to with the --config option. Note also that it is now the SYSTEM account that will have the owner permissions, and other accounts will have permissions according to the group or others scopes. As mentioned above, these will then not get the "write extended attributes" permission, and this may prevent writing to files. You can work around this with the FileSecurity option, see example above.
+Mounting on macOS
+Unicode Normalization
+It is highly recommended to keep the default of --no-unicode-normalization=false for all mount and serve commands on macOS. For details, see vfs-case-sensitivity.
+NFS mount
+--nfs-cache-handle-limit controls the maximum number of cached file handles stored by the nfsmount caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems.
+macFUSE Notes
+
+sudo mkdir /usr/local/lib
+cd /usr/local/lib
+sudo ln -s /opt/local/lib/libfuse.2.dylib
+FUSE-T Limitations, Caveats, and Notes
+ModTime update on read
+
+
+Read Only mounts
+--read-only, attempts to write to files will fail silently as opposed to with a clear warning as in macFUSE.
+Limitations
+Without the use of --vfs-cache-mode this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without --vfs-cache-mode writes or --vfs-cache-mode full. See the VFS File Caching section for more info. When using NFS mount on macOS, if you don't specify --vfs-cache-mode the mount point will be read-only.
+When rclone mount is invoked on Unix with the --daemon flag, the main rclone program will wait for the background mount to become ready or until the timeout specified by the --daemon-wait flag. On Linux it can check mount status using ProcFS so the flag in fact sets maximum time to wait, while the real wait can be less. On macOS / BSD the time to wait is constant and the check is performed only at the end. We advise you to set wait time on macOS reasonably.
+rclone nfsmount vs rclone sync/copy
+Attribute caching
+You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time, etc.) for directory entries.
+The default is 1s which caches files just long enough to avoid too many callbacks to rclone from the kernel.
+The kernel can cache the info about a file for the time given by --attr-timeout. You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With --attr-timeout 1s this is very unlikely but not impossible. The higher you set --attr-timeout the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.
+If you set it higher (10s or 1m say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.
+Filters
+systemd
+Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.
+Rclone as Unix mount helper
+/bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way.
+rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.
+mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
+# /etc/systemd/system/mnt-data.mount
+[Unit]
+Description=Mount for /mnt/data
+[Mount]
+Type=rclone
+What=sftp1:subdir
+Where=/mnt/data
+Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
+# /etc/systemd/system/mnt-data.automount
+[Unit]
+Description=AutoMount for /mnt/data
+[Automount]
+Where=/mnt/data
+TimeoutIdleSec=600
+[Install]
+WantedBy=multi-user.target
+or add in /etc/fstab a line like
+sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
+or use classic Automountd. Remember to provide explicit config=...,cache-dir=... as a workaround for mount units being run without HOME.
+rclone in the mount helper mode will split -o argument(s) by comma, replace _ by - and prepend -- to get the command-line flags. Options containing commas or spaces can be wrapped in single or double quotes. Any inner quotes inside outer quotes of the same type should be doubled.
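+As a sketch of this translation rule, the mount helper invocation from the example above:
+mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
+is roughly equivalent to running rclone directly as:
+rclone mount sftp1:subdir /mnt/data --vfs-cache-mode writes --sftp-key-file /path/to/pem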
+
+Mount option syntax includes a few extra options treated specially:
+env.NAME=VALUE will set an environment variable for the mount process. This helps with Automountd and Systemd.mount which don't allow setting custom environment for mount helpers. Typically you will use env.HTTPS_PROXY=proxy.host:3128 or env.HOME=/root
+command=cmount can be used to run cmount or any other rclone command rather than the default mount.
+args2env will pass mount options to the mount helper running in background via environment variables instead of command line arguments. This allows hiding secrets from such commands as ps or pgrep.
+vv... will be transformed into appropriate --verbose=N
+standard mount options like x-systemd.automount, _netdev, nosuid and alike are intended only for Automountd and ignored by rclone.
+VFS - Virtual File System
+VFS Directory Cache
+Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
+kill -SIGHUP $(pidof rclone)
+rclone rc vfs/forget
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+VFS File Buffering
+The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
+The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
+VFS File Caching
+
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.
+The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w.
+You should not run two copies of rclone using the same VFS cache with overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
+--vfs-cache-mode off
+
+
+--vfs-cache-mode minimal
+
+
+--vfs-cache-mode writes
+--vfs-cache-mode full
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
+When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
+When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
+Fingerprinting
+
+
+For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
+If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
+If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
+VFS Chunked Reading
+
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
+With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
+VFS Performance
+In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+--no-checksum Don't compare checksums on up/download.
+--no-modtime Don't read/write the modification time (can speed things up).
+--no-seek Don't allow seeking in files.
+--read-only Only allow read-only access.
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+VFS Case Sensitivity
+The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
+The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
+The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
+VFS Disk Options
+
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+Alternate report of used bytes
+Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
+WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
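+For example, a mount that computes used space itself rather than asking the backend might look like this (an illustrative sketch):
+rclone nfsmount remote:path /path/to/mountpoint --vfs-used-is-size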
+rclone nfsmount remote:path /path/to/mountpoint [flags]
+Options
+
+ --addr string IPaddress:Port or :Port to bind server to
+ --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
+ --allow-other Allow access to other users (not supported on Windows)
+ --allow-root Allow access to root user (not supported on Windows)
+ --async-read Use asynchronous reads (not supported on Windows) (default true)
+ --attr-timeout Duration Time for which file/directory attributes are cached (default 1s)
+ --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
+ --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s)
+ --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
+ --debug-fuse Debug the FUSE internals - needs -v
+ --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
+ --devname string Set the device name - default is remote:path
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for nfsmount
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
+ --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
+ --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
+ --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000)
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
+ --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
+ -o, --option stringArray Option for libfuse/WinFsp (repeat if required)
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --sudo Use sudo to run the mount command as root.
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+ --volname string Set the volume name (supported on Windows and OSX only)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
+Filter Options
+
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+SEE ALSO
+
+
rclone obscure
Synopsis
+Synopsis
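If you want to avoid leaving the password in your shell history, rclone can read it from standard input by passing a hyphen instead (a usage sketch):
echo "secretpassword" | rclone obscure -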
rclone obscure password [flags]
Options
+Options
-h, --help help for obscure
SEE ALSO
+SEE ALSO
rclone rc
Synopsis
+Synopsis
+Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".
+A username and password can be passed in with --user and --pass.
+Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
+Use --loopback to connect to the rclone instance running the rc command itself, which is very useful for testing commands without having to run an rclone rc server, e.g.
+rclone rc --loopback operations/about fs=/
+Use rclone rc to see a list of all possible commands.
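+Arguments can also be supplied as a JSON blob with the --json flag documented below, instead of key=value pairs (a usage sketch):
+rclone rc --loopback operations/about --json '{"fs": "/"}'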
rclone rc commands parameter [flags]
Options
+Options
-a, --arg stringArray Argument placed in the "arg" array
-h, --help help for rc
--json string Input JSON - use instead of key=value args
@@ -3342,13 +3765,13 @@ if src is directory
--url string URL to connect to rclone remote control (default "http://localhost:5572/")
--user string Username to use to rclone remote control
SEE ALSO
+SEE ALSO
rclone rcat
Synopsis
+Synopsis
@@ -3358,7 +3781,7 @@ ffmpeg - | rclone rcat remote:path/to/file
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat remote:path/to/file
--size should be the exact size of the input stream in bytes. If the size of the stream is different in length to the --size passed in then the transfer will likely fail.
Note that the upload cannot be retried because the data is not stored. If you need to transfer a lot of data, you may be better off caching it locally and then using rclone move to send it to the destination, which can use retries.
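For example, when the exact length of the stream is known in advance, passing it as a size hint avoids preallocation surprises (generate_data is a stand-in for any command producing exactly 1048576 bytes):
generate_data | rclone rcat --size 1048576 remote:path/to/file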
rclone rcat remote:path [flags]
Options
+Options
-h, --help help for rcat
--size int File size hint to preallocate (default -1)
Important Options
@@ -3367,13 +3790,13 @@ ffmpeg - | rclone rcat remote:path/to/file
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
SEE ALSO
+SEE ALSO
rclone rcd
Synopsis
+Synopsis
+Use --rc-realm to set the authentication realm and --rc-salt to change the password hashing salt from the default.
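+For example, a remote control daemon protected by basic authentication might be started like this (the credentials and port are illustrative):
+rclone rcd --rc-user me --rc-pass mypassword --rc-addr :5572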
rclone rcd <path to files to serve>* [flags]
Options
+Options
-h, --help help for rcd
RC Options
SEE ALSO
+SEE ALSO
rclone rmdirs
Synopsis
+Synopsis
+If you supply the --leave-root flag, it will not remove the root directory.
+This is useful for tidying up remotes that rclone has previously deleted files from (e.g. with rclone delete, which removes files but leaves the directory structure unless used with --rmdirs).
+This command will delete --checkers directories concurrently so if you have thousands of empty directories consider increasing this number.
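+For example, to remove all empty directories under a path while keeping the root itself (a usage sketch):
+rclone rmdirs remote:path --leave-root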
rclone rmdirs remote:path [flags]
Options
+Options
-h, --help help for rmdirs
--leave-root Do not remove root directory if empty
Important Options
@@ -3569,13 +3992,13 @@ htpasswd -B htpasswd anotherUser
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)
SEE ALSO
+SEE ALSO
rclone selfupdate
Synopsis
+Synopsis
+If used without flags (or with implied --stable flag), this command will install the latest stable release. However, some issues may be fixed (or features added) only in the latest beta release. In such cases you should run the command with the --beta flag, i.e. rclone selfupdate --beta. You can check in advance what version would be installed by adding the --check flag, then repeat the command without it when you are satisfied.
+The --version VER flag, if given, will update to the concrete version instead of the latest one. If you omit the micro version from VER (for example 1.53), the latest matching micro version will be used.
+If your rclone is too old to have this command, e.g. it says unknown command "selfupdate", then you will need to update manually following the install instructions located at https://rclone.org/install/
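+For instance, a cautious update flow might preview first and then apply (a usage sketch):
+rclone selfupdate --check
+rclone selfupdate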
rclone selfupdate [flags]
Options
+Options
--stable Install stable release (this is the default)
--version string Install the given rclone version (default: latest)
--beta Install beta release
--check Check for latest release, do not download
-h, --help help for selfupdate
@@ -3594,21 +4017,21 @@ htpasswd -B htpasswd anotherUser
SEE ALSO
+SEE ALSO
rclone serve
Synopsis
+Synopsis
rclone serve http remote:
rclone serve <protocol> [opts] <remote> [flags]
Options
+Options
-h, --help help for serve
SEE ALSO
+SEE ALSO
rclone serve dlna
Synopsis
+Synopsis
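By default the server binds to port 7879 (see --addr below), so a minimal invocation might be (a usage sketch):
rclone serve dlna remote:path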
Server options
@@ -3633,196 +4056,6 @@ htpasswd -B htpasswd anotherUser
VFS Directory Cache
-Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
---dir-cache-time duration Time to cache directory entries for (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
-You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
-kill -SIGHUP $(pidof rclone)
-rclone rc vfs/forget
-rclone rc vfs/forget file=path/to/file dir=path/to/dir
-VFS File Buffering
-The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
-The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
-VFS File Caching
-
---cache-dir string Directory rclone will use for caching.
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
-If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.
-The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
-The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w.
-You should not run two copies of rclone using the same VFS cache with overlapping remotes if using --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
---vfs-cache-mode off
-
-
---vfs-cache-mode minimal
-
-
---vfs-cache-mode writes
---vfs-cache-mode full
-This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
-Fingerprinting
-
-
-For example, hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
-If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
-If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
-VFS Chunked Reading
-
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
-Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
-With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
-Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
-VFS Performance
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
---no-checksum Don't compare checksums on up/download.
---no-modtime Don't read/write the modification time (can speed things up).
---no-seek Don't allow seeking in files.
---read-only Only allow read-only access.
---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
---transfers int Number of file transfers to run in parallel (default 4)
-VFS Case Sensitivity
-The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
-VFS Disk Options
-
---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
-Alternate report of used bytes
-Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
-WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
-rclone serve dlna remote:path [flags]
-Options
-
- --addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
- --announce-interval Duration The interval between SSDP announcements (default 12m0s)
- --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
- --dir-perms FileMode Directory permissions (default 0777)
- --file-perms FileMode File permissions (default 0666)
- --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
- -h, --help help for dlna
- --interface stringArray The interface to use for SSDP (repeat as necessary)
- --log-trace Enable trace logging of SOAP traffic
- --name string Name of DLNA server
- --no-checksum Don't compare checksums on up/download
- --no-modtime Don't read/write the modification time (can speed things up)
- --no-seek Don't allow seeking in files
- --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
- --read-only Only allow read-only access
- --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
- --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
- --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match
- --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
- --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
- --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
- --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
- --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
-Filter Options
-
- --delete-excluded Delete files on dest excluded from sync
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
- --exclude-if-present stringArray Exclude directories if filename is present
- --files-from stringArray Read list of source-file names from file (use - to read from stdin)
- --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
- -f, --filter stringArray Add a file filtering rule
- --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
- --ignore-case Ignore case in filters (case insensitive)
- --include stringArray Include files matching pattern
- --include-from stringArray Read file include patterns from file (use - to read from stdin)
- --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-depth int If set limits the recursion depth to this (default -1)
- --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
- --metadata-exclude stringArray Exclude metadatas matching pattern
- --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
- --metadata-filter stringArray Add a metadata filtering rule
- --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
- --metadata-include stringArray Include metadatas matching pattern
- --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
- --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-SEE ALSO
-
-
-rclone serve docker
-Synopsis
-
-sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
-Running rclone serve docker will create the said socket, listening for commands from Docker to create the necessary Volumes. Normally you need not give the --socket-addr flag. The API will listen on the unix domain socket at /run/docker/plugins/rclone.sock. In the example above rclone will create a TCP socket and a small file /etc/docker/plugins/rclone.spec containing the socket address. We use sudo because both paths are writeable only by the root user.
-If you later decide to change the listening socket, the docker daemon must be restarted to reconnect to /run/docker/plugins/rclone.sock or parse the new /etc/docker/plugins/rclone.spec. Until you restart, any volume related docker commands will timeout trying to access the old socket. Running directly is supported on Linux only, not on Windows or MacOS. This is not a problem with managed plugin mode described in detail in the full documentation.
-The command will create volume mounts under the path given by --base-dir (by default /var/lib/docker-volumes/rclone available only to root) and maintain the JSON formatted file docker-plugin.state in the rclone cache directory with book-keeping records of created and mounted volumes.
-VFS Directory Cache
Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@@ -3930,49 +4163,34 @@ htpasswd -B htpasswd anotherUser
The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
VFS Disk Options
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+rclone serve docker [flags]
rclone serve dlna remote:path [flags]
Options
-
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
--allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
- --allow-other Allow access to other users (not supported on Windows)
- --allow-root Allow access to root user (not supported on Windows)
- --async-read Use asynchronous reads (not supported on Windows) (default true)
- --attr-timeout Duration Time for which file/directory attributes are cached (default 1s)
- --base-dir string Base directory for volumes (default "/var/lib/docker-volumes/rclone")
- --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
- --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s)
- --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
- --debug-fuse Debug the FUSE internals - needs -v
- --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
- --devname string Set the device name - default is remote:path
+
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
- --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
- --volname string Set the volume name (supported on Windows and OSX only)
- --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
+ --addr string The ip:port or :port to bind the DLNA http server to (default ":7879")
+ --announce-interval Duration The interval between SSDP announcements (default 12m0s)
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
- --forget-state Skip restoring previous state
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
- -h, --help help for docker
- --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
- --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
- --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
+ -h, --help help for dlna
+ --interface stringArray The interface to use for SSDP (repeat as necessary)
+ --log-trace Enable trace logging of SOAP traffic
+ --name string Name of DLNA server
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
- --no-spec Do not write spec file
- --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
- --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
- -o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --socket-addr string Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
- --socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -3985,12 +4203,10 @@ htpasswd -B htpasswd anotherUser
Filter Options
--delete-excluded Delete files on dest excluded from sync
@@ -4020,16 +4236,16 @@ htpasswd -B htpasswd anotherUser
-rclone serve ftp
-rclone serve docker
+Synopsis
-Server options
-Authentication
-
+sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
+rclone serve docker will create the said socket, listening for commands from Docker to create the necessary Volumes. Normally you need not give the --socket-addr flag. The API will listen on the unix domain socket at /run/docker/plugins/rclone.sock. In the example above rclone will create a TCP socket and a small file /etc/docker/plugins/rclone.spec containing the socket address. We use sudo because both paths are writeable only by the root user.
+If you later decide to change the listening socket, the docker daemon must be restarted to reconnect to /run/docker/plugins/rclone.sock or parse the new /etc/docker/plugins/rclone.spec. Until you restart, any volume related docker commands will timeout trying to access the old socket. Running directly is supported on Linux only, not on Windows or MacOS. This is not a problem with managed plugin mode described in details in the full documentation.
+Rclone will create and manage volume mountpoints under the directory given by --base-dir (by default /var/lib/docker-volumes/rclone, available only to root) and maintain the JSON formatted file docker-plugin.state in the rclone cache directory with book-keeping records of created and mounted volumes.
+The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
+The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however, editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
+VFS Disk Options
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Alternate report of used bytes
If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Auth Proxy
-If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
-Note that --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.
-The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
-This config generated must have this extra parameter: _root - root to use for the backend. And it may have this parameter: _obscure - comma separated strings for parameters to obscure.
-{
- "user": "me",
- "pass": "mypassword"
-}
-{
- "user": "me",
- "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
-}
-{
- "type": "sftp",
- "_root": "",
- "_obscure": "pass",
- "user": "me",
- "pass": "mypassword",
- "host": "sftp.example.com"
-}
-This would mean that an sftp backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
-The program can manipulate the supplied user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.
-Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
+rclone serve ftp remote:path [flags]
+rclone serve docker [flags]
+Options
-
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+ --volname string Set the volume name (supported on Windows and OSX only)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows)
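+As a hedged sketch (the volume and remote names are illustrative, not from this patch), once the plugin is running a Docker volume can be created and consumed like this:
+docker volume create my_vol -d rclone -o remote=mydrive:path -o vfs-cache-mode=full
+docker run --rm -v my_vol:/data alpine ls /data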
--addr string IPaddress:Port or :Port to bind server to (default "localhost:2121")
- --auth-proxy string A program to use to create the backend from the auth
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
+
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
- --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+ --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows)
+ --allow-other Allow access to other users (not supported on Windows)
+ --allow-root Allow access to root user (not supported on Windows)
+ --async-read Use asynchronous reads (not supported on Windows) (default true)
+ --attr-timeout Duration Time for which file/directory attributes are cached (default 1s)
+ --base-dir string Base directory for volumes (default "/var/lib/docker-volumes/rclone")
+ --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,... to monitor) (not supported on Windows)
+ --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s)
+ --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s)
+ --debug-fuse Debug the FUSE internals - needs -v
+ --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows)
+ --devname string Set the device name - default is remote:path
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
+ --forget-state Skip restoring previous state
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
- -h, --help help for ftp
- --key string TLS PEM Private key
+ -h, --help help for docker
+ --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki)
+ --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset)
+ --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only)
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
- --pass string Password for authentication (empty value allow every password)
- --passive-port string Passive port range to use (default "30000-32000")
+ --no-spec Do not write spec file
+ --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true)
+ --noapplexattr Ignore all "com.apple.*" extended attributes (supported on OSX only)
+ -o, --option stringArray Option for libfuse/WinFsp (repeat if required)
--poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
- --public-ip string Public IP address to advertise for passive connections
--read-only Only allow read-only access
+ --socket-addr string Address <host:port> or absolute path (default: /run/docker/plugins/rclone.sock)
+ --socket-gid int GID for unix socket (default: current process GID) (default 1000)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
- --user string User name for authentication (default "anonymous")
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -4210,10 +4414,12 @@ htpasswd -B htpasswd anotherUser
Filter Options
--delete-excluded Delete files on dest excluded from sync
@@ -4243,9 +4449,235 @@ htpasswd -B htpasswd anotherUser
+rclone serve ftp
+Synopsis
+Server options
+Authentication
+VFS Directory Cache
+Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
+kill -SIGHUP $(pidof rclone)
+rclone rc vfs/forget
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+VFS File Buffering
+The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
+The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
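+As a rough worked example (the numbers are illustrative, not from the manual): with --buffer-size 16M and 10 files open concurrently, read buffering alone can use up to 16M x 10 = 160 MiB of memory:
+rclone serve ftp remote:path --buffer-size 16M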
+VFS File Caching
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.
+The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w.
+You should not run two copies of rclone using the same cache with --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
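+As a hedged example (the sizes are illustrative), a server with a bounded write cache might be started like this:
+rclone serve ftp remote:path --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-cache-max-age 2h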
+--vfs-cache-mode off
+
+--vfs-cache-mode minimal
+
+
+--vfs-cache-mode writes
+--vfs-cache-mode full
+This mode is otherwise like --vfs-cache-mode writes.
+When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
+When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
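+As a hedged illustration (the sizes are illustrative), full cache mode with a small memory buffer and a large on-disk read-ahead might look like:
+rclone serve ftp remote:path --vfs-cache-mode full --buffer-size 16M --vfs-read-ahead 256M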
+Fingerprinting
+
+For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
+If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
+If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
+VFS Chunked Reading
+
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
+With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
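+As a hedged example (the sizes are illustrative): with the settings below the chunks requested are 64M, 128M, 256M, 512M, then 1G for every subsequent read, capping at the limit:
+rclone serve ftp remote:path --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G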
+VFS Performance
+In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+--no-checksum Don't compare checksums on up/download.
+--no-modtime Don't read/write the modification time (can speed things up).
+--no-seek Don't allow seeking in files.
+--read-only Only allow read-only access.
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+VFS Case Sensitivity
+The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
+The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
+The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however, editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
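+As a hedged example, macOS users serving a remote that may contain both NFC and NFD names could enable the duplicate check like this:
+rclone serve ftp remote:path --vfs-block-norm-dupes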
+VFS Disk Options
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+Alternate report of used bytes
+If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
+Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+Auth Proxy
+If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
+Note that --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.
+The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+This config generated must have this extra parameter: _root - root to use for the backend. And it may have this parameter: _obscure - comma separated strings for parameters to obscure.
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+{
+ "user": "me",
+ "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
+}
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an sftp backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
+The program can manipulate the supplied user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.
+Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
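+As a minimal sketch (assuming jq is installed; the sftp host and script path are illustrative, not part of the manual), an auth proxy program could look like this:
+#!/bin/bash
+# Hypothetical auth proxy: read {"user": ..., "pass": ...} JSON on STDIN
+# and emit a complete sftp backend config as JSON on STDOUT.
+input=$(cat)
+user=$(printf '%s' "$input" | jq -r .user)
+pass=$(printf '%s' "$input" | jq -r .pass)
+jq -n --arg user "$user" --arg pass "$pass" \
+  '{type: "sftp", _root: "", _obscure: "pass", user: $user, pass: $pass, host: "sftp.example.com"}'
+It could then be wired in with, for example: rclone serve ftp remote:path --auth-proxy /path/to/proxy.sh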
+rclone serve ftp remote:path [flags]
+Options
+
+ --addr string IPaddress:Port or :Port to bind server to (default "localhost:2121")
+ --auth-proxy string A program to use to create the backend from the auth
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for ftp
+ --key string TLS PEM Private key
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --pass string Password for authentication (empty value allow every password)
+ --passive-port string Passive port range to use (default "30000-32000")
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --public-ip string Public IP address to advertise for passive connections
+ --read-only Only allow read-only access
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --user string User name for authentication (default "anonymous")
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
+Filter Options
+
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
+SEE ALSO
+
+
rclone serve http
Synopsis
+Synopsis
You can use the filter flags (e.g. --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
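As a hedged example (the credentials are illustrative), a password-protected HTTP server using the flags listed below might be started like this:
rclone serve http remote:path --addr :8080 --user me --pass secret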
VFS Directory Cache
-Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
---dir-cache-time duration Time to cache directory entries for (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
-You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
-kill -SIGHUP $(pidof rclone)
-rclone rc vfs/forget
-rclone rc vfs/forget file=path/to/file dir=path/to/dir
-VFS File Buffering
-The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
-The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
-VFS File Caching
-
---cache-dir string Directory rclone will use for caching.
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
-If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.
-The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
-If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
-The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w.
-You should not run two copies of rclone using the same cache with --vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.
---vfs-cache-mode off
-
-
---vfs-cache-mode minimal
-
-
---vfs-cache-mode writes
---vfs-cache-mode full
-This mode is otherwise like --vfs-cache-mode writes.
-When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
-When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
-Fingerprinting
-
-
-For example hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
-If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
-If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
-VFS Chunked Reading
-
---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
-rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
-With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
-Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
-VFS Performance
-In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
---no-checksum Don't compare checksums on up/download.
---no-modtime Don't read/write the modification time (can speed things up).
---no-seek Don't allow seeking in files.
---read-only Only allow read-only access.
---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
-When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
---transfers int Number of file transfers to run in parallel (default 4)
-VFS Case Sensitivity
-The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
-VFS Disk Options
-
---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
-Alternate report of used bytes
-If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
-Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
-Auth Proxy
-If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
-Note that --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.
-The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
-This config generated must have this extra parameter: _root - root to use for the backend. And it may have this parameter: _obscure - comma separated strings for parameters to obscure.
-{
- "user": "me",
- "pass": "mypassword"
-}
-{
- "user": "me",
- "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
-}
-{
- "type": "sftp",
- "_root": "",
- "_obscure": "pass",
- "user": "me",
- "pass": "mypassword",
- "host": "sftp.example.com"
-}
-This would mean that an sftp backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
-The program can manipulate the supplied user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.
-Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
-rclone serve http remote:path [flags]
-Options
-
- --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
- --allow-origin string Origin which cross-domain request (CORS) can be executed from
- --auth-proxy string A program to use to create the backend from the auth
- --baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
- --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
- --dir-perms FileMode Directory permissions (default 0777)
- --file-perms FileMode File permissions (default 0666)
- --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
- -h, --help help for http
- --htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
- --max-header-bytes int Maximum size of request header (default 4096)
- --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
- --no-checksum Don't compare checksums on up/download
- --no-modtime Don't read/write the modification time (can speed things up)
- --no-seek Don't allow seeking in files
- --pass string Password for authentication
- --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
- --read-only Only allow read-only access
- --realm string Realm for authentication
- --salt string Password hashing salt (default "dlPL2MqE")
- --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
- --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
- --template string User-specified template
- --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
- --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
- --user string User name for authentication
- --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
- --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
- --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
- --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
- --vfs-case-insensitive If a file name not found, find a case insensitive match
- --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
- --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
- --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
- --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
- --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
- --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
- --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
- --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
- --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
-Filter Options
-
- --delete-excluded Delete files on dest excluded from sync
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
- --exclude-if-present stringArray Exclude directories if filename is present
- --files-from stringArray Read list of source-file names from file (use - to read from stdin)
- --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
- -f, --filter stringArray Add a file filtering rule
- --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
- --ignore-case Ignore case in filters (case insensitive)
- --include stringArray Include files matching pattern
- --include-from stringArray Read file include patterns from file (use - to read from stdin)
- --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --max-depth int If set limits the recursion depth to this (default -1)
- --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
- --metadata-exclude stringArray Exclude metadatas matching pattern
- --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
- --metadata-filter stringArray Add a metadata filtering rule
- --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
- --metadata-include stringArray Include metadatas matching pattern
- --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
- --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
- --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-SEE ALSO
-
-
-rclone serve nfs
-Synopsis
-The user can specify the listening address and port with the --addr flag.
-Modifying files through the NFS protocol requires VFS caching. Usually you will need to specify --vfs-cache-mode in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, the mount will be read-only.
-rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
-mount -oport=$PORT,mountport=$PORT $HOSTNAME: path/to/mountpoint
-Where $PORT is the same port number we used in the serve nfs command.
-VFS - Virtual File System
-VFS Directory Cache
Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
@@ -4732,27 +4927,76 @@ htpasswd -B htpasswd anotherUser
The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however, editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
VFS Disk Options
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Alternate report of used bytes
If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+rclone serve nfs remote:path [flags]
+Auth Proxy
+If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
+Note that --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.
+The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+This config generated must have this extra parameter: _root - root to use for the backend. And it may have this parameter: _obscure - comma separated strings for parameters to obscure.
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+{
+ "user": "me",
+ "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
+}
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an sftp backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
+The program can manipulate the supplied user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.
+Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
rclone serve http remote:path [flags]
Options
-
@@ -4798,170 +5042,21 @@ htpasswd -B htpasswd anotherUser
--addr string IPaddress:Port or :Port to bind server to
+
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
--addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --allow-origin string Origin which cross-domain request (CORS) can be executed from
+ --auth-proxy string A program to use to create the backend from the auth
+ --baseurl string Prefix for URLs - leave blank for root
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
--file-perms FileMode File permissions (default 0666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
- -h, --help help for nfs
+ -h, --help help for http
+ --htpasswd string A htpasswd file - if not provided no authentication is done
+ --key string TLS PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
+ --pass string Password for authentication
--poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
+ --realm string Realm for authentication
+ --salt string Password hashing salt (default "dlPL2MqE")
+ --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
+ --template string User-specified template
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --user string User name for authentication
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -4765,7 +5009,7 @@ htpasswd -B htpasswd anotherUser
-rclone serve restic
-rclone serve nfs
+Synopsis
---bwlimit will be respected for file transfers. Use --stats to control the stats printing.
-Setting up rclone for use by restic
-
-rclone serve restic -v remote:backup
-By default this will serve on http://localhost:8080/. You can change this with the --addr flag.
-Adding --cache-objects=false will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory.
-Setting up restic to use rclone
-
-$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
-$ export RESTIC_PASSWORD=yourpassword
-$ restic init
-created restic backend 8b1a4b56ae at rest:http://localhost:8080/
-
-Please note that knowledge of your password is required to access
-the repository. Losing your password means that your data is
-irrecoverably lost.
-$ restic backup /path/to/files/to/backup
-scan [/path/to/files/to/backup]
-scanned 189 directories, 312 files in 0:00
-[0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
-duration: 0:00
-snapshot 45c8fdd8 saved
-Multiple repositories
-
-$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
-# backup user1 stuff
-$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
-# backup user2 stuff
-Private repositories
-The --private-repos flag can be used to limit users to repositories starting with a path of /<username>/.
-Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
---addr may be repeated to listen on multiple IPs/ports/sockets.
---server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
---max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
---baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
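-As a hedged illustration (the socket path here is made up, not from the manual), the server could listen on a unix socket instead of TCP like this:
-rclone serve restic --addr unix:///run/rclone/restic.sock remote:backup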
-TLS (SSL)
-You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
-Authentication
-You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
-If client certificates are required by the --client-ca flag passed to the server, the client certificate common name will be considered as the username.
-Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
-touch htpasswd
-htpasswd -B htpasswd user
-htpasswd -B htpasswd anotherUser
-Use --realm to set the authentication realm.
-Use --salt to change the password hashing salt from the default.
-rclone serve restic remote:path [flags]
-Options
-
- --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
- --allow-origin string Origin which cross-domain request (CORS) can be executed from
- --append-only Disallow deletion of repository data
- --baseurl string Prefix for URLs - leave blank for root
- --cache-objects Cache listed objects (default true)
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
- -h, --help help for restic
- --htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
- --max-header-bytes int Maximum size of request header (default 4096)
- --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
- --pass string Password for authentication
- --private-repos Users can only access their private repo
- --realm string Realm for authentication
- --salt string Password hashing salt (default "dlPL2MqE")
- --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
- --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
- --stdio Run an HTTP2 server on stdin/stdout
- --user string User name for authentication
-SEE ALSO
-
-
-rclone serve s3
-Synopsis
-serve s3 implements a basic s3 server that serves a remote via s3. This can be viewed with an s3 client, or you can make an s3 type remote to read and write to it with rclone.
-serve s3 is considered Experimental so use with care.
-Use --auth-key accessKey,secretKey and set the Authorization header correctly in the request. (See the AWS docs.)
---auth-key can be repeated for multiple auth pairs. If --auth-key is not provided then serve s3 will allow anonymous access.
-All the functionality will work with --vfs-cache-mode off. Using --vfs-cache-mode full (or writes) can be used to cache objects locally to improve performance.
-Use --force-path-style=false if you want to use the bucket name as a part of the hostname (such as mybucket.local)
-Use --etag-hash if you want to change the hash used for the ETag. Note that using anything other than MD5 (the default) is likely to cause problems for S3 clients which rely on the Etag being the MD5.
-Quickstart
-For a simple set up, to serve remote:path over s3, run the server like this:
-rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
-[serves3]
-type = s3
-provider = Rclone
-endpoint = http://127.0.0.1:8080/
-access_key_id = ACCESS_KEY_ID
-secret_access_key = SECRET_ACCESS_KEY
-use_multipart_uploads = false
-Note that setting disable_multipart_uploads = true is to work around a bug which will be fixed in due course.
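-As a hedged usage sketch (the bucket name and credentials are illustrative), a standard S3 client such as the AWS CLI can then talk to the server:
-AWS_ACCESS_KEY_ID=ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=SECRET_ACCESS_KEY \
-  aws --endpoint-url http://127.0.0.1:8080 s3 ls s3://mybucket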
-Bugs
-When uploading multipart files serve s3 holds all the parts in memory (see #7453). This is a limitation of the library rclone uses for serving S3 and will hopefully be fixed at some point.
-For other serve s3 bugs see the serve s3 bug category on GitHub.
-Limitations
-serve s3 will treat all directories in the root as buckets and ignore all files in the root. You can use CreateBucket to create folders under the root, but you can't create empty folders under other folders not in the root.
-When using PutObject or DeleteObject, rclone will automatically create or clean up empty folders. If you don't want to clean up empty folders automatically, use --no-cleanup.
-When using ListObjects, rclone will use / when the delimiter is empty. This reduces backend requests with no effect on most operations, but if the delimiter is something other than / and empty, rclone will do a full recursive search of the backend, which can take some time.
-Uploads can include mtime metadata, which will be set as the modification time of the file.
-Supported operations
-serve s3 currently supports the following operations.
-
-
-
ListBuckets
CreateBucket
DeleteBucket
-
HeadObject
ListObjects
GetObject
PutObject
DeleteObject
DeleteObjects
CreateMultipartUpload
CompleteMultipartUpload
AbortMultipartUpload
CopyObject
UploadPart
Other operations will return error Unimplemented.
Server options
-Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
---addr may be repeated to listen on multiple IPs/ports/sockets.
---server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
---max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
---baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
-TLS (SSL)
-You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
---cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+The user can specify the listening address and port with the --addr flag.
+Modifying files through the NFS protocol requires VFS caching. Usually you will need to specify --vfs-cache-mode in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, the mount will be read-only. Note also that --nfs-cache-handle-limit controls the maximum number of cached file handles stored by the caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems.
+rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+mount -oport=$PORT,mountport=$PORT $HOSTNAME: path/to/mountpoint
+Where $PORT is the same port number we used in the serve nfs command.
+VFS - Virtual File System
The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however, editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
VFS Disk Options
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
-rclone serve s3 remote:path [flags]
-Options
-
@@ -5147,28 +5233,174 @@ use_multipart_uploads = false
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
- --allow-origin string Origin which cross-domain request (CORS) can be executed from
- --auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
- --baseurl string Prefix for URLs - leave blank for root
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+
+rclone serve nfs remote:path [flags]
+Options
+
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
--addr string IPaddress:Port or :Port to bind server to
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
- --etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
--file-perms FileMode File permissions (default 0666)
- --force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
- -h, --help help for s3
- --key string TLS PEM Private key
- --max-header-bytes int Maximum size of request header (default 4096)
- --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
+ -h, --help help for nfs
+ --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000)
--no-checksum Don't compare checksums on up/download
- --no-cleanup Not to cleanup empty folder after object is deleted
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
--poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
- --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -5118,7 +5204,7 @@ use_multipart_uploads = falseSEE ALSO
+
+
+rclone serve restic
+Synopsis
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
+Setting up rclone for use by restic
+
+rclone serve restic -v remote:backup
+By default this will serve on localhost:8080 - this can be changed with the --addr flag.
+Adding --cache-objects=false will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory.
+Setting up restic to use rclone
+
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
+$ export RESTIC_PASSWORD=yourpassword
+$ restic init
+created restic backend 8b1a4b56ae at rest:http://localhost:8080/
+
+Please note that knowledge of your password is required to access
+the repository. Losing your password means that your data is
+irrecoverably lost.
+$ restic backup /path/to/files/to/backup
+scan [/path/to/files/to/backup]
+scanned 189 directories, 312 files in 0:00
+[0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
+duration: 0:00
+snapshot 45c8fdd8 saved
+Multiple repositories
+Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. Note that these must end with /. Eg
+
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+# backup user1 stuff
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+# backup user2 stuff
+Private repositories
+The --private-repos flag can be used to limit users to repositories starting with a path of /<username>/.
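+For example (user names and paths are illustrative):
+rclone serve restic --private-repos --htpasswd ./htpasswd remote:backups
+# user1 is now limited to repositories under /user1/
+export RESTIC_REPOSITORY=rest:http://user1:secret@localhost:8080/user1/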
+Server options
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
+--addr may be repeated to listen on multiple IPs/ports/sockets.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
+TLS (SSL)
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+Authentication
+By default this will serve files without needing a login. You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+If no static users are configured by either of the above methods, and client certificates are required by the --client-ca flag passed to the server, the client certificate common name will be considered as the username.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+touch htpasswd
+htpasswd -B htpasswd user
+htpasswd -B htpasswd anotherUser
+Use --realm to set the authentication realm.
+Use --salt to change the password hashing salt from the default.
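+For example, to require a login from the htpasswd file created above (the user name and password are illustrative):
+rclone serve restic --htpasswd ./htpasswd remote:backup
+export RESTIC_REPOSITORY=rest:http://user:password@localhost:8080/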
+rclone serve restic remote:path [flags]
+Options
+
+ --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --allow-origin string Origin which cross-domain request (CORS) can be executed from
+ --append-only Disallow deletion of repository data
+ --baseurl string Prefix for URLs - leave blank for root
+ --cache-objects Cache listed objects (default true)
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
+ -h, --help help for restic
+ --htpasswd string A htpasswd file - if not provided no authentication is done
+ --key string TLS PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
+ --pass string Password for authentication
+ --private-repos Users can only access their private repo
+ --realm string Realm for authentication
+ --salt string Password hashing salt (default "dlPL2MqE")
+ --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
+ --stdio Run an HTTP2 server on stdin/stdout
+ --user string User name for authentication
+SEE ALSO
-rclone serve sftp
-rclone serve s3
+Synopsis
---include, --exclude) to control what is served.-v to see access logs.--bwlimit will be respected for file transfers. Use --stats to control the stats printing.--user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.--key then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see rclone help flags cache-dir) in the "serve-sftp" directory.--addr :2022 for example.--vfs-cache-mode off is fine for the rclone sftp backend, but it may not be with other SFTP clients.--stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:
-restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...--transfers 1 when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system.--sftp-path-override to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case.VFS - Virtual File System
+serve s3 implements a basic s3 server that serves a remote via s3. This can be viewed with an s3 client, or you can make an s3 type remote to read and write to it with rclone.
+serve s3 is considered Experimental so use with care.
+S3 server supports Signature Version 4 authentication. Just use --auth-key accessKey,secretKey and set the Authorization header correctly in the request. (See the AWS docs).
+--auth-key can be repeated for multiple auth pairs. If --auth-key is not provided then serve s3 will allow anonymous access.
+All the functionality will work with --vfs-cache-mode off. Using --vfs-cache-mode full (or writes) can be used to cache objects locally to improve performance.
+Use --force-path-style=false if you want to use the bucket name as a part of the hostname (such as mybucket.local)
+Use --etag-hash if you want to change the hash used for the ETag. Note that using anything other than MD5 (the default) is likely to cause problems for S3 clients which rely on the Etag being the MD5.
+Quickstart
+To serve remote:path over s3, run the server like this:
+rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
+[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false
+Note that setting disable_multipart_uploads = true is to work around a bug which will be fixed in due course.
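+Once the server is running you can access it with the remote defined above, or with any other S3 client pointed at the endpoint, for example (both commands are illustrative):
+rclone lsd serves3:
+aws --endpoint-url http://127.0.0.1:8080 s3 ls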
+Bugs
+When uploading multipart files serve s3 holds all the parts in memory (see #7453). This is a limitation of the library rclone uses for serving S3 and will hopefully be fixed at some point.
+For serve s3 bugs see the serve s3 bug category on GitHub.
+Limitations
+serve s3 will treat all directories in the root as buckets and ignore all files in the root. You can use CreateBucket to create folders under the root, but you can't create empty folders under other folders not in the root.
+When using PutObject or DeleteObject, rclone will automatically create or clean up empty folders. If you don't want to clean up empty folders automatically, use --no-cleanup.
+When using ListObjects, rclone will use / when the delimiter is empty. This reduces backend requests with no effect on most operations, but if the delimiter is something other than / and empty, rclone will do a full recursive search of the backend, which can take some time.
+Uploaded files will have the mtime metadata which will be set as the modification time of the file.
+Supported operations
+serve s3 currently supports the following operations.
+
+
+
- ListBuckets
- CreateBucket
- DeleteBucket
- HeadObject
- ListObjects
- GetObject
- PutObject
- DeleteObject
- DeleteObjects
- CreateMultipartUpload
- CompleteMultipartUpload
- AbortMultipartUpload
- CopyObject
- UploadPart
Other operations will return error Unimplemented.
Server options
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+You can use a unix socket by setting the url to unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.
+--addr may be repeated to listen on multiple IPs/ports/sockets.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
+TLS (SSL)
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
+The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
+The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however, editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
+VFS Disk Options
--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Auth Proxy
---auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.--auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config._root - root to use for the backend_obscure - comma separated strings for parameters to obscure
-{
- "user": "me",
- "pass": "mypassword"
-}
-{
- "user": "me",
- "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
-}
-{
- "type": "sftp",
- "_root": "",
- "_obscure": "pass",
- "user": "me",
- "pass": "mypassword",
- "host": "sftp.example.com"
-}user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
-rclone serve sftp remote:path [flags]
+rclone serve s3 remote:path [flags]
+Options
-
@@ -5382,9 +5593,243 @@ use_multipart_uploads = false
--addr string IPaddress:Port or :Port to bind server to (default "localhost:2022")
- --auth-proxy string A program to use to create the backend from the auth
- --authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
+
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
--addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
+ --allow-origin string Origin which cross-domain request (CORS) can be executed from
+ --auth-key stringArray Set key pair for v4 authorization: access_key_id,secret_access_key
+ --baseurl string Prefix for URLs - leave blank for root
+ --cert string TLS PEM key (concatenation of certificate and CA certificate)
+ --client-ca string Client certificate authority to verify clients with
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 0777)
+ --etag-hash string Which hash to use for the ETag, or auto or blank for off (default "MD5")
--file-perms FileMode File permissions (default 0666)
+ --force-path-style If true use path style access if false use virtual hosted style (default true)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
- -h, --help help for sftp
- --key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
- --no-auth Allow connections with no authentication if set
+ -h, --help help for s3
+ --key string TLS PEM Private key
+ --max-header-bytes int Maximum size of request header (default 4096)
+ --min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--no-checksum Don't compare checksums on up/download
+ --no-cleanup Not to cleanup empty folder after object is deleted
--no-modtime Don't read/write the modification time (can speed things up)
--no-seek Don't allow seeking in files
- --pass string Password for authentication
--poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
--read-only Only allow read-only access
- --stdio Run an sftp server on stdin/stdout
+ --server-read-timeout Duration Timeout for server reading data (default 1h0m0s)
+ --server-write-timeout Duration Timeout for server writing data (default 1h0m0s)
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
- --user string User name for authentication
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -5349,7 +5560,7 @@ use_multipart_uploads = false
+rclone serve sftp
+Synopsis
+You can use the filter flags (e.g. --include, --exclude) to control what is served.
+The server will log errors. Use -v to see access logs.
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
+You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.
+If you don't supply a host --key then rclone will generate rsa, ecdsa and ed25519 variants, and cache them for later use in rclone's cache directory (see rclone help flags cache-dir) in the "serve-sftp" directory.
+By default the server binds to localhost:2022 - if you want it to be reachable externally then supply --addr :2022 for example.
+Note that the default of --vfs-cache-mode off is fine for the rclone sftp backend, but it may not be with other SFTP clients.
+If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:
+restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
+On the client you need to set --transfers 1 when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. This is the case because the client chooses indiscriminately which server to send commands to while the servers all have different views of the state of the filing system.
+Using --sftp-path-override to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case.
+VFS - Virtual File System
+VFS Directory Cache
+Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
+--dir-cache-time duration Time to cache directory entries for (default 5m0s)
+--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
+kill -SIGHUP $(pidof rclone)
+rclone rc vfs/forget
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+VFS File Buffering
+The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
+The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
+VFS File Caching
+
+--cache-dir string Directory rclone will use for caching.
+--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+--vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
+--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
+--vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.
+The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
+Note that files are written back to the remote only when they are closed and if they haven't been accessed for --vfs-write-back seconds. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone is run with the same flags.
+If using --vfs-cache-max-size or --vfs-cache-min-free-space note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-space is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
+The --vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w.
+--vfs-cache-mode off
+
+
+--vfs-cache-mode minimal
+
+
+--vfs-cache-mode writes
+--vfs-cache-mode full
+This mode should support all normal file system operations and is otherwise identical to --vfs-cache-mode writes.
+When reading a file rclone will read --buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.
+When using this mode it is recommended that --buffer-size is not set too large and --vfs-read-ahead is set large if required.
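+As an illustration, a writable serve with full caching and a bounded cache might look like this (the values are examples, not recommendations):
+rclone serve sftp remote: --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-max-age 24h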
+
+Fingerprinting
+
+hash is slow with the local and sftp backends as they have to read the entire file and hash it, and modtime is slow with the s3, swift, ftp and qingstor backends because they need to do an extra API call to fetch it.
+If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.
+If you are running a vfs cache over local, s3 or swift backends then using this flag is recommended.
+VFS Chunked Reading
+
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.
+With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+Setting --vfs-read-chunk-size to 0 or "off" disables chunked reading.
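+For example, to start reading with 64M chunks and stop the doubling at 1G (illustrative values):
+rclone serve sftp remote: --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G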
+VFS Performance
+In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
+--no-checksum Don't compare checksums on up/download.
+--no-modtime Don't read/write the modification time (can speed things up).
+--no-seek Don't allow seeking in files.
+--read-only Only allow read-only access.
+--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms)
+--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
+When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
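+For example, to allow 8 parallel uploads from the write cache (an illustrative value):
+rclone serve sftp remote: --vfs-cache-mode writes --transfers 8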
+VFS Case Sensitivity
+The --vfs-case-insensitive VFS flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the remote as-is. If the flag is "true" (or appears without a value on the command line), rclone may perform a "fixup" as explained below.
+The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
+The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however, editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
+VFS Disk Options
+
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+Alternate report of used bytes
+Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
+WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
+Auth Proxy
+If you supply the parameter --auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
+PLEASE NOTE: --auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.
+The program's job is to take a user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+This config generated must have this extra parameter - _root - root to use for the backend - and it may have this parameter - _obscure - comma separated strings for parameters to obscure
+{
+ "user": "me",
+ "pass": "mypassword"
+}
+{
+ "user": "me",
+ "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
+}
+{
+ "type": "sftp",
+ "_root": "",
+ "_obscure": "pass",
+ "user": "me",
+ "pass": "mypassword",
+ "host": "sftp.example.com"
+}
+This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since _obscure is set to pass, rclone will obscure the pass parameter before creating the backend (which is required for sftp backends).
+The program can manipulate the supplied user in any way, for example to make proxy to many different sftp backends, you could make the user be user@example.com and then set the host to example.com in the output and the user to user. For security you'd probably want to restrict the host to a limited list.
+Note that an internal cache is keyed on user so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
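+As a minimal sketch of such a proxy program (the sftp host is illustrative; a real proxy should also validate the supplied credentials):
+#!/usr/bin/env python3
+# Minimal rclone auth proxy sketch: read {"user": ..., "pass": ...} on STDIN,
+# write a complete backend config on STDOUT.
+import sys, json
+
+i = json.load(sys.stdin)
+o = {
+    "type": "sftp",             # backend to create
+    "_root": "",                # root to use for the backend
+    "_obscure": "pass",         # rclone will obscure the pass parameter
+    "user": i["user"],
+    "pass": i["pass"],
+    "host": "sftp.example.com", # illustrative host
+}
+json.dump(o, sys.stdout)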
+rclone serve sftp remote:path [flags]
+Options
+
+ --addr string IPaddress:Port or :Port to bind server to (default "localhost:2022")
+ --auth-proxy string A program to use to create the backend from the auth
+ --authorized-keys string Authorized keys file (default "~/.ssh/authorized_keys")
+ --dir-cache-time Duration Time to cache directory entries for (default 5m0s)
+ --dir-perms FileMode Directory permissions (default 0777)
+ --file-perms FileMode File permissions (default 0666)
+ --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
+ -h, --help help for sftp
+ --key stringArray SSH private host key file (Can be multi-valued, leave blank to auto generate)
+ --no-auth Allow connections with no authentication if set
+ --no-checksum Don't compare checksums on up/download
+ --no-modtime Don't read/write the modification time (can speed things up)
+ --no-seek Don't allow seeking in files
+ --pass string Password for authentication
+ --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s)
+ --read-only Only allow read-only access
+ --stdio Run an sftp server on stdin/stdout
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
+ --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --user string User name for authentication
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
+ --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
+ --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
+ --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
+ --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s)
+ --vfs-case-insensitive If a file name is not found, find a case insensitive match
+ --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off)
+ --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection
+ --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full
+ --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
+ --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
+ --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
+ --vfs-used-is-size rclone size Use the rclone size algorithm for Used size
+ --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
+ --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)Filter Options
+
+ --delete-excluded Delete files on dest excluded from sync
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
+ --exclude-if-present stringArray Exclude directories if filename is present
+ --files-from stringArray Read list of source-file names from file (use - to read from stdin)
+ --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
+ -f, --filter stringArray Add a file filtering rule
+ --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin)
+ --ignore-case Ignore case in filters (case insensitive)
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read file include patterns from file (use - to read from stdin)
+ --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --max-depth int If set limits the recursion depth to this (default -1)
+ --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
+ --metadata-exclude stringArray Exclude metadatas matching pattern
+ --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin)
+ --metadata-filter stringArray Add a metadata filtering rule
+ --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin)
+ --metadata-include stringArray Include metadatas matching pattern
+ --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin)
+ --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
+ --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)SEE ALSO
+
+
rclone serve webdav
Synopsis
+Synopsis
WebDAV options
--etag-hash
@@ -5533,7 +5978,7 @@ htpasswd -B htpasswd anotherUser
VFS Directory Cache
+VFS Directory Cache
Using the --dir-cache-time flag, you can control how long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.
@@ -5544,12 +5989,12 @@ htpasswd -B htpasswd anotherUser
--dir-cache-time duration Time to cache directory entries for (default 5m0s)
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
rclone rc vfs/forget
-rclone rc vfs/forget file=path/to/file dir=path/to/dir
-VFS File Buffering
+VFS File Buffering
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
VFS File Caching
+VFS File Caching
--vfs-cache-max-size or --vfs-cache-min-free-size note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.--vfs-cache-max-age will evict files from the cache after the set time since last access has passed. The default value of 1 hour will start evicting files from cache that haven't been accessed for 1 hour. When a cached file is accessed the 1 hour timer is reset to 0 and will wait for 1 more hour before evicting. Specify the time with standard notation, s, m, h, d, w .--vfs-cache-mode > off. This can potentially cause data corruption if you do. You can work around this by giving each rclone its own cache hierarchy with --cache-dir. You don't need to worry about this if the remotes in use don't overlap.--vfs-cache-mode off
+--vfs-cache-mode off
@@ -5578,7 +6023,7 @@ htpasswd -B htpasswd anotherUser
---vfs-cache-mode minimal
+--vfs-cache-mode minimal
@@ -5587,11 +6032,11 @@ htpasswd -B htpasswd anotherUser
---vfs-cache-mode writes
+--vfs-cache-mode writes
--vfs-cache-mode full
+--vfs-cache-mode full
--buffer-size plus --vfs-read-ahead bytes ahead. The --buffer-size is buffered in memory whereas the --vfs-read-ahead is buffered on disk.--buffer-size is not set too large and --vfs-read-ahead is set large if required.Fingerprinting
+Fingerprinting
--vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This makes the fingerprinting less accurate but much faster and will improve the opening time of cached files.local, s3 or swift backends then using this flag is recommended.VFS Chunked Reading
+VFS Chunked Reading
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
@@ -5620,7 +6065,7 @@ htpasswd -B htpasswd anotherUser--vfs-read-chunk-size, and then double the size for each read. When --vfs-read-chunk-size-limit is specified, and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled only until the specified value is reached. If the value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely.--vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.--vfs-read-chunk-size to 0 or "off" disables chunked reading.VFS Performance
+VFS Performance
--no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
--no-checksum Don't compare checksums on up/download.
@@ -5632,7 +6077,7 @@ htpasswd -B htpasswd anotherUser--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
---transfers int Number of file transfers to run in parallel (default 4)VFS Case Sensitivity
+VFS Case Sensitivity
VFS Disk Options
+The --no-unicode-normalization flag controls whether a similar "fixup" is performed for filenames that differ but are canonically equivalent with respect to unicode. Unicode normalization can be particularly helpful for users of macOS, which prefers form NFD instead of the NFC used by most other platforms. It is therefore highly recommended to keep the default of false on macOS, to avoid encoding compatibility issues.
+The --vfs-block-norm-dupes flag allows hiding these duplicates. This comes with a performance tradeoff, as rclone will have to scan the entire directory for duplicates when listing a directory. For this reason, it is recommended to leave this disabled if not needed. However, macOS users may wish to consider using it, as otherwise, if a remote directory contains both NFC and NFD versions of the same filename, an odd situation will occur: both versions of the file will be visible in the mount, and both will appear to be editable, however, editing either version will actually result in only the NFD version getting edited under the hood. --vfs-block-norm-dupes prevents this confusion by detecting this scenario, hiding the duplicates, and logging an error, similar to how this is handled in rclone sync.
+VFS Disk Options
---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)Alternate report of used bytes
+Alternate report of used bytes
Some backends, most notably S3, do not report the amount of bytes used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.
WARNING. Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.
Auth Proxy
--auth-proxy /path/to/program then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.--auth-proxy and --authorized-keys cannot be used together, if --auth-proxy is set the authorized keys option will be ignored.user and pass on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config._root - root to use for the backend_obscure - comma separated strings for parameters to obscureuser so only use that for configuration, don't use pass or public_key. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
-rclone serve webdav remote:path [flags]Options
+Options
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -5721,11 +6169,11 @@ htpasswd -B htpasswd anotherUser
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached ('off' is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
- --addr stringArray IPaddress:Port or :Port to bind server to (default [127.0.0.1:8080])
--allow-origin string Origin which cross-domain request (CORS) can be executed from
--auth-proxy string A program to use to create the backend from the auth
@@ -5709,6 +6156,7 @@ htpasswd -B htpasswd anotherUserFilter Options
+Filter Options
--min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off)
--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -5750,13 +6198,13 @@ htpasswd -B htpasswd anotherUserSEE ALSO
+SEE ALSO
rclone settier
Synopsis
+Synopsis
rclone settier tier remote:path/dir
-rclone settier tier remote:path [flags]Options
+Options
-h, --help help for settierSEE ALSO
+SEE ALSO
rclone test
Synopsis
+Synopsis
rclone test memory remote:Options
+Options
-h, --help help for testSEE ALSO
+SEE ALSO
rclone test changenotify
-rclone test changenotify remote: [flags]Options
+Options
-h, --help help for changenotify
--poll-interval Duration Time to wait between polling for changes (default 10s)SEE ALSO
-
-
-rclone test histogram
-Synopsis
-
-rclone test histogram [remote:path] [flags]Options
-
- -h, --help help for histogramSEE ALSO
+rclone test histogram
+Synopsis
+
+rclone test histogram [remote:path] [flags]Options
+
+ -h, --help help for histogramSEE ALSO
+
+
rclone test info
Synopsis
+Synopsis
-rclone test info [remote:path]+ [flags]Options
+Options
--upload-wait Duration Wait after writing a file (default 0s)
--write-json string Write results to file
--all Run all tests
--check-base32768 Check can store all possible base32768 characters
--check-control Check control characters
@@ -5835,14 +6283,14 @@ htpasswd -B htpasswd anotherUserSEE ALSO
+SEE ALSO
rclone test makefile
-rclone test makefile <size> [<file>]+ [flags]Options
+Options
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00
--ascii Fill files with random ASCII printable bytes only
--chargen Fill files with an ASCII chargen pattern
-h, --help help for makefile
@@ -5851,14 +6299,14 @@ htpasswd -B htpasswd anotherUserSEE ALSO
+SEE ALSO
rclone test makefiles
-rclone test makefiles <dir> [flags]Options
+Options
--sparse Make the files sparse (appear to be filled with ASCII 0x00)
--zero Fill files with ASCII 0x00
--ascii Fill files with random ASCII printable bytes only
--chargen Fill files with an ASCII chargen pattern
--files int Number of files to create (default 1000)
@@ -5874,23 +6322,23 @@ htpasswd -B htpasswd anotherUserSEE ALSO
+SEE ALSO
rclone test memory
-rclone test memory remote:path [flags]Options
+Options
-h, --help help for memorySEE ALSO
+SEE ALSO
rclone touch
Synopsis
+Synopsis
+If remote:path does not exist then a zero sized file will be created, unless --no-create or --recursive is provided.
+If --recursive is used then it recursively sets the modification time on all existing files that are found under the path. Filters are supported, and you can test with the --dry-run or the --interactive/-i flag.
+Note that --timestamp is in UTC. If you want local time then add the --localtime flag.
-rclone touch remote:path [flags]Options
+Options
-h, --help help for touch
--localtime Use localtime for timestamp, not UTC
-C, --no-create Do not create the file if it does not exist (implied with --recursive)
@@ -5913,7 +6361,7 @@ htpasswd -B htpasswd anotherUser
- -n, --dry-run Do a trial run with no permanent changes
-i, --interactive Enable interactive mode
-v, --verbose count Print lots more stuff (repeat for more)Filter Options
+Filter Options
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -5942,13 +6390,13 @@ htpasswd -B htpasswd anotherUser --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactionsSEE ALSO
+SEE ALSO
rclone tree
Synopsis
+Synopsis
$ rclone tree remote:path
@@ -5965,7 +6413,7 @@ htpasswd -B htpasswd anotherUser--size. Note that not all of them have short options as they conflict with rclone's short options.
-rclone tree remote:path [flags]Options
+Options
-r, --sort-reverse Reverse the order of the sort
-U, --unsorted Leave files unsorted
--version Sort files alphanumerically by version
- -a, --all All files are listed (list . files too)
-d, --dirs-only List directories only
--dirsfirst List directories before files (-U disables)
@@ -5985,7 +6433,7 @@ htpasswd -B htpasswd anotherUserFilter Options
+Filter Options
--delete-excluded Delete files on dest excluded from sync
--exclude stringArray Exclude files matching pattern
@@ -6014,7 +6462,7 @@ htpasswd -B htpasswd anotherUser --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
--fast-list Use recursive list if available; uses more memory but fewer transactionsSEE ALSO
+SEE ALSO
@@ -6129,13 +6577,15 @@ rclone copy :sftp,host=example.com:path/to/dir /tmp/dir
rclone sync --interactive remote:current-backup remote:previous-backup
rclone sync --interactive /path/to/files remote:current-backupMetadata support
---metadata or -M flag.
---metadata or -M flag.
-This is in line with the way rclone syncs Content-Type without the --metadata flag.
-Using --metadata when syncing from local to local will preserve file attributes such as file mode, owner, extended attributes (not Windows).
-Note that arbitrary metadata may be added to objects using the --metadata-set key=value flag when the object is first uploaded. This flag can be repeated as many times as necessary.
-rclone supports --metadata-set and --metadata-mapper when doing server side Move and server side Copy, but not when doing server side DirMove (renaming a directory) as this would involve recursing into the directory. Note that you can disable DirMove with --disable DirMove and rclone will revert back to using Move for each individual object where --metadata-set and --metadata-mapper are supported.
-Types of metadata
uid will store the user ID of the file when used on a unix based platform.
mtime and content-type will take precedence if supplied in the metadata over reading the Content-Type or modification time of the source object.
Options
+Options
Rclone has a number of options to control its behaviour. Options that take parameters can have the values passed in two ways, --option=value or --option value. However boolean (true/false) options behave slightly differently to the other options in that --boolean sets the option to true and the absence of the flag sets it to false. It is also possible to specify --boolean=false or --boolean=true. Note that --boolean false is not valid - this is parsed as --boolean and the false is parsed as an extra command line argument for rclone.
Time or duration options
@@ -6441,6 +6891,9 @@ See the dedupe command for more information as to what these options mean.
By default, rclone will exit with return code 0 if there were no errors.
This option allows rclone to return exit code 9 if no files were transferred between the source and destination. This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not.
NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly!
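For example, in a shell script using the --error-on-no-transfer flag this paragraph describes (paths are illustrative):
rclone copy --error-on-no-transfer /path/to/src remote:dst
if [ "$?" = 9 ]; then echo "no files were transferred"; fi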
+Normally, a sync to a case insensitive dest (such as macOS / Windows) will not result in a matching filename if the source and dest filenames have casing differences but are otherwise identical. For example, syncing hello.txt to HELLO.txt will normally result in the dest filename remaining HELLO.txt. If --fix-case is set, then HELLO.txt will be renamed to hello.txt to match the source.
NB:
- directory names with incorrect casing will also be fixed
- --fix-case will be ignored if --immutable is set
- using --local-case-sensitive instead is not advisable; it will cause HELLO.txt to get deleted!
- the old dest filename must not be excluded by filters. Be especially careful with --files-from, which does not respect --ignore-case!
- on remotes that do not support server-side move, --fix-case will require downloading the file and re-uploading it. To avoid this, do not use --fix-case.
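For example (paths are illustrative):
rclone sync --fix-case /path/to/src remote:dst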
When using rclone via the API rclone caches created remotes for 5 minutes by default in the "fs cache". This means that if you do repeated actions on the same remote then rclone won't have to build it again from scratch, which makes it more efficient.
This flag sets the time that the remotes are cached for. If you set it to 0 (or negative) then rclone won't cache the remotes at all.
- SrcFs is the config string for the remote that the object is being copied from
- SrcFsType is the name of the source backend.
- DstFs is the config string for the remote that the object is being copied to
- DstFsType is the name of the destination backend.
- Remote is the path of the object relative to the root.
- Size, MimeType, ModTime are attributes of the object.
- IsDir is true if this is a directory (not yet implemented).
- ID is the source ID of the object if known.
- Metadata is the backend specific metadata as described in the backend docs.
{
- "SrcFs": "gdrive:",
- "SrcFsType": "drive",
- "DstFs": "newdrive:user",
- "DstFsType": "onedrive",
- "Remote": "test.txt",
- "Size": 6,
- "MimeType": "text/plain; charset=utf-8",
- "ModTime": "2022-10-11T17:53:10.286745272+01:00",
- "IsDir": false,
- "ID": "xyz",
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain1.com",
- "permissions": "...",
- "description": "my nice file",
- "starred": "false"
- }
-}{
+ "SrcFs": "gdrive:",
+ "SrcFsType": "drive",
+ "DstFs": "newdrive:user",
+ "DstFsType": "onedrive",
+ "Remote": "test.txt",
+ "Size": 6,
+ "MimeType": "text/plain; charset=utf-8",
+ "ModTime": "2022-10-11T17:53:10.286745272+01:00",
+ "IsDir": false,
+ "ID": "xyz",
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain1.com",
+ "permissions": "...",
+ "description": "my nice file",
+ "starred": "false"
+ }
+}The program should then modify the input as desired and send it to STDOUT. The returned Metadata field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:
{
    "Metadata": {
        "btime": "2022-10-11T16:53:11Z",
        "content-type": "text/plain; charset=utf-8",
        "mtime": "2022-10-11T17:53:10.286745272+01:00",
        "owner": "user1@domain2.com",
        "permissions": "...",
        "description": "my nice file [migrated from domain1]",
        "starred": "false"
    }
}
Metadata can be removed here too.
An example Python program to implement the above transformations might look something like this:
import sys, json

# Read the JSON blob rclone sends on STDIN
i = json.load(sys.stdin)
metadata = i["Metadata"]
# Add tag to description
if "description" in metadata:
    metadata["description"] += " [migrated from domain1]"
else:
    metadata["description"] = "[migrated from domain1]"
# Modify owner
if "owner" in metadata:
    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
# Return only the Metadata field on STDOUT
o = { "Metadata": metadata }
json.dump(o, sys.stdout, indent="\t")
You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.
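The mapper is then wired in with the --metadata-mapper flag alongside --metadata, along these lines (paths and remote names are illustrative):
rclone copy gdrive: newdrive:user -M --metadata-mapper ./bin/test_metadata_mapper.py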
If you want to see the input to the metadata mapper and the output returned from it in the log you can use -vv --dump mapper.
See the metadata section for more info.
In this case the value of this option is used (default 64Mi).
When transferring files above SIZE to capable backends, rclone will use multiple threads to transfer the file (default 256M).
Capable backends are marked in the overview as MultithreadUpload. (They need to implement either the OpenWriterAt or OpenChunkWriter internal interfaces). These include local, s3, azureblob, b2, oracleobjectstorage and smb at the time of writing.
On the local disk, rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE) on unix or NTSetInformationFile on Windows, both of which take no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.
The number of threads used to transfer is controlled by --multi-thread-streams.
Use -vv if you wish to see info about the threads.
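For example, to transfer large files with 8 streams and watch the threads at work (flag values are illustrative):
rclone copy remote:big.iso /mnt/data --multi-thread-cutoff 256M --multi-thread-streams 8 -vv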
When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also (e.g. the Google Drive client).
When using this flag, rclone won't update modification times of remote directories if they are incorrect as it would normally.
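For instance, when the remote is also managed by another sync client, both behaviours can be disabled on a mount (an illustrative sketch):
rclone mount remote: /mnt/remote --no-update-modtime --no-update-dir-modtime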
The --order-by flag controls the order in which files in the backlog are processed in rclone sync, rclone copy and rclone move.
The order by string is constructed like this. The first part describes what aspect is being measured:
@@ -6726,7 +7181,7 @@ y/n/s/!/q> n
--order-by name - send the files sorted alphabetically by path first
If the --order-by flag is not supplied or it is supplied with an empty string then the default ordering will be used, which is as scanned. With --checkers 1 this is mostly alphabetical, however with the default --checkers 8 it is somewhat random.
The --order-by flag does not do a separate pass over the data. This means that it may transfer some files out of the order specified if
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading --, change - to _, make upper case and prepend RCLONE_.
For example, to always set --stats 5s, set the environment variable RCLONE_STATS=5s. If you set stats on the command line this will override the environment variable setting.
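For instance (paths are illustrative):
export RCLONE_STATS=5s
rclone copy /src remote:dst              # prints stats every 5s, from the environment
rclone copy /src remote:dst --stats 1m   # the command line flag wins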
Then on your main desktop machine
rclone authorize "dropbox"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
@@ -7525,7 +7980,7 @@ file2.avi
+ *.png
+ file2.avi
- *
Files file1.jpg, file3.png and file2.avi are listed whilst secret17.jpg and files without the suffix .jpg or .png are excluded.
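A filter file like this is applied with the --filter-from flag, e.g.:
rclone ls remote: --filter-from filter-file.txt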
E.g. for an alternative filter-file.txt:
+ *.jpg
+ *.gif
@@ -8034,6 +8489,21 @@ rclone rc cache/expire remote=/ withData=true
See the config password command for more information on the above.
Authentication is required for this call.
Returns a JSON object with the following keys:
Eg
{
    "cache": "/home/USER/.cache/rclone",
    "config": "/home/USER/.rclone.conf",
    "temp": "/tmp"
}
See the config paths command for more information on the above.
Authentication is required for this call.
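For example:
rclone rc config/paths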
Returns a JSON object:
- providers - array of objects
See the config providers command for more information on the above.
@@ -8602,6 +9072,37 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"Cache
This command does not have a command line equivalent so use this instead:
rclone rc --loopback operations/fsinfo fs=remote:
Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.
This takes the following parameters:
If you supply the download flag, it will download the data from the remote and create the hash on the fly. This can be useful for remotes that don't support the given hash or if you really want to check all the data.
Note that if you wish to supply a checkfile to check hashes against the current files then you should use operations/check instead of operations/hashsum.
Returns:
Example:
$ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true
{
    "hashType": "md5",
    "hashsum": [
        "WTSVLpuiXyJO_kGzJerRLg== backend-versions.sh",
        "v1b_OlWCJO9LtNq3EIKkNQ== bisect-go-rclone.sh",
        "VHbmHzHh4taXzgag8BAIKQ== bisect-rclone.sh",
    ]
}
See the hashsum command for more information on the above.
Authentication is required for this call.
This takes the following parameters:
~/.cache/rclone/bisync)
See bisync command help and full bisync description for more information.
@@ -9100,15 +9603,6 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum flag in syncs and in the check command.
To verify checksums when transferring between cloud storage systems, they must support a common hash type.
Almost all cloud storage systems store some sort of timestamp on objects, but for several of them it is not one appropriate to use for syncing. E.g. some backends will only write a timestamp that represents the time of the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by default, though can be configured to check the file hash (with the --checksum flag). Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it.
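For example, to compare by hash rather than by size and modification time (a sketch):
rclone sync /src remote:dst --checksum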
| Key  | Explanation |
|------|-------------|
| -    | ModTimes not supported - times likely the upload time |
| R    | ModTimes supported on files but can't be changed without re-upload |
| R/W  | Read and Write ModTimes fully supported on files |
| DR   | ModTimes supported on files and directories but can't be changed without re-upload |
| DR/W | Read and Write ModTimes fully supported on files and directories |
For storage systems with a - in the ModTime column, the modification time read on objects is not the modification time of the file when uploaded. It is most likely the time the file was uploaded, or possibly something else (like the time the picture was taken in Google Photos).
For storage systems with an R (for read-only) in the ModTime column, the system keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time (SetModTime operation) without re-uploading, possibly not even without deleting the existing object first. Some operations in rclone, such as the copy and sync commands, will automatically check for SetModTime support and re-upload if necessary to keep the modification times in sync. Other commands will not work without SetModTime support, e.g. the touch command on an existing file will fail, and changes to modification time only on files in a mount will be silently ignored.
Storage systems with R/W (for read/write) in the ModTime column also support modtime-only operations.
A D in the ModTime column means that the following symbols apply to directories as well as files.
If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, e.g. file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.
This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
@@ -9947,6 +10476,10 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Backends may or may not support reading or writing metadata. They may support reading and writing system metadata (metadata intrinsic to that backend) and/or user metadata (general purpose metadata).
The levels of metadata support are
| Key  | Explanation |
|------|-------------|
| R    | Read only System Metadata on files only |
| RW   | Read and write System Metadata on files only |
| RWU  | Read and write System Metadata and read and write User Metadata on files only |
| DR   | Read only System Metadata on files and directories |
| DRW  | Read and write System Metadata on files and directories |
| DRWU | Read and write System Metadata and read and write User Metadata on files and directories |
Flags helpful for increasing performance.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -10891,14 +11438,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--rc-web-gui-update Check and update to latest version of web gui
Backend only flags. These can be set in the config file also.
- --acd-auth-url string Auth server URL
- --acd-client-id string OAuth Client Id
- --acd-client-secret string OAuth Client Secret
- --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
- --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi)
- --acd-token string OAuth Access Token as a JSON blob
- --acd-token-url string Token server url
- --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s)
+ --alias-description string Description of the remote
--alias-remote string Remote or path to alias
--azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
--azureblob-account string Azure Storage Account Name
@@ -10909,6 +11449,8 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
+ --azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
@@ -10939,6 +11481,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azurefiles-client-secret string One of the service principal's client secrets
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
--azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-description string Description of the remote
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
--azurefiles-endpoint string Endpoint for the service
--azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -10958,8 +11501,9 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--b2-account string Account ID or Application Key ID
--b2-chunk-size SizeSuffix Upload chunk size (default 96Mi)
--b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi)
+ --b2-description string Description of the remote
--b2-disable-checksum Disable checksums for large (> upload cutoff) files
- --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
+ --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w)
--b2-download-url string Custom endpoint for downloads
--b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--b2-endpoint string Endpoint for the service
@@ -10978,6 +11522,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
+ --box-description string Description of the remote
--box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
--box-impersonate string Impersonate this user ID when using a service account
--box-list-chunk int Size of listing chunk 1-1000 (default 1000)
@@ -10994,6 +11539,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--cache-db-path string Directory to store file structure metadata DB (default "$HOME/.cache/rclone/cache-backend")
--cache-db-purge Clear all the cached data for this remote on start
--cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s)
+ --cache-description string Description of the remote
--cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s)
--cache-plex-insecure string Skip all certificate verification when connecting to the Plex server
--cache-plex-password string The password of the Plex user (obscured)
@@ -11007,15 +11553,19 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--cache-workers int How many workers should run in parallel to download chunks (default 4)
--cache-writes Cache file data on writes through the FS
--chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi)
+ --chunker-description string Description of the remote
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
+ --compress-description string Description of the remote
--compress-level int GZIP compression level (-2 to 9) (default -1)
--compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
--compress-remote string Remote to compress
-L, --copy-links Follow symlinks and copy the pointed to item
+ --crypt-description string Description of the remote
--crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true)
--crypt-filename-encoding string How to encode the encrypted filename to text string (default "base32")
--crypt-filename-encryption string How to encrypt the filenames (default "standard")
@@ -11026,6 +11576,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--crypt-remote string Remote to encrypt/decrypt
--crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead
--crypt-show-mapping For all files listed show how the names encrypt
+ --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can't be decrypted
--crypt-suffix string If this is set it will override the default suffix of ".bin" (default ".bin")
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
@@ -11035,6 +11586,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
+ --drive-description string Description of the remote
--drive-disable-http2 Disable drive using http2 (default true)
--drive-encoding Encoding The encoding for the backend (default InvalidUtf8)
--drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars)
@@ -11083,6 +11635,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
+ --dropbox-description string Description of the remote
--dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot)
--dropbox-impersonate string Impersonate this user when using a business account
--dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
@@ -11092,10 +11645,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--dropbox-token-url string Token server url
--fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl
--fichier-cdn Set if you wish to use CDN download links
+ --fichier-description string Description of the remote
--fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
--fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured)
--fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured)
--fichier-shared-folder string If you want to download a shared folder, add this parameter
+ --filefabric-description string Description of the remote
--filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--filefabric-permanent-token string Permanent Authentication Token
--filefabric-root-folder-id string ID of the root folder
@@ -11106,6 +11661,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-ask-password Allow asking for FTP password when needed
--ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s)
--ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited
+ --ftp-description string Description of the remote
--ftp-disable-epsv Disable using EPSV even if server advertises support
--ftp-disable-mlsd Disable using MLSD even if server advertises support
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
@@ -11131,6 +11687,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
+ --gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Endpoint for the service
@@ -11151,6 +11708,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
+ --gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
--gphotos-read-only Set to make the Google Photos backend read only
@@ -11159,10 +11717,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gphotos-token string OAuth Access Token as a JSON blob
--gphotos-token-url string Token server url
--hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default)
+ --hasher-description string Description of the remote
--hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1)
--hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off)
--hasher-remote string Remote to cache checksums for (e.g. myRemote:path)
--hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy
+ --hdfs-description string Description of the remote
--hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot)
--hdfs-namenode CommaSepList Hadoop name nodes and ports
--hdfs-service-principal-name string Kerberos service principal name for the namenode
@@ -11171,6 +11731,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
+ --hidrive-description string Description of the remote
--hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary
--hidrive-encoding Encoding The encoding for the backend (default Slash,Dot)
--hidrive-endpoint string Endpoint for the service (default "https://api.hidrive.strato.com/2.1")
@@ -11181,10 +11742,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--hidrive-token-url string Token server url
--hidrive-upload-concurrency int Concurrency for chunked uploads (default 4)
--hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi)
+ --http-description string Description of the remote
--http-headers CommaSepList Set HTTP headers for all transactions
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --imagekit-description string Description of the remote
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
--imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true
@@ -11193,6 +11756,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--imagekit-upload-tags string Tags to add to the uploaded files, e.g. "tag1,tag2"
--imagekit-versions Include old versions in directory listings
--internetarchive-access-key-id string IAS3 Access Key
+ --internetarchive-description string Description of the remote
--internetarchive-disable-checksum Don't ask the server to test against MD5 checksum calculated by rclone (default true)
--internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot)
--internetarchive-endpoint string IAS3 Endpoint (default "https://s3.us.archive.org")
@@ -11202,6 +11766,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--jottacloud-auth-url string Auth server URL
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
+ --jottacloud-description string Description of the remote
--jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot)
--jottacloud-hard-delete Delete files permanently rather than putting them into the trash
--jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi)
@@ -11210,6 +11775,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--jottacloud-token-url string Token server url
--jottacloud-trashed-only Only show files that are in the trash
--jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fails (default 10Mi)
+ --koofr-description string Description of the remote
--koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--koofr-endpoint string The Koofr API endpoint to use
--koofr-mountid string Mount ID of the mount to use
@@ -11217,10 +11783,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--koofr-provider string Choose your storage provider
--koofr-setmtime Does the backend support setting modification time (default true)
--koofr-user string Your user name
+ --linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
+ --local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
--local-no-check-updated Don't check to see if the files change during upload
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -11233,6 +11801,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
+ --mailru-description string Description of the remote
--mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--mailru-pass string Password (obscured)
--mailru-speedup-enable Skip full upload if there is another file with same data hash (default true)
@@ -11243,12 +11812,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--mailru-token-url string Token server url
--mailru-user string User name (usually email)
--mega-debug Output more debug from Mega
+ --mega-description string Description of the remote
--mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--mega-hard-delete Delete files permanently rather than putting them into the trash
--mega-pass string Password (obscured)
--mega-use-https Use HTTPS for transfers
--mega-user string User name
+ --memory-description string Description of the remote
--netstorage-account string Set the NetStorage account name
+ --netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to
--netstorage-protocol string Select between HTTP or HTTPS protocol (default "https")
--netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured)
@@ -11260,6 +11832,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
+ --onedrive-description string Description of the remote
--onedrive-drive-id string The ID of the drive to use
--onedrive-drive-type string The type of the drive (personal | business | documentLibrary)
--onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot)
@@ -11269,6 +11842,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-link-scope string Set the scope of the links created by the link command (default "anonymous")
--onedrive-link-type string Set the type of the links created by the link command (default "view")
--onedrive-list-chunk int Size of listing chunk (default 1000)
+ --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
--onedrive-no-versions Remove all versions on modifying operations
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
@@ -11282,6 +11856,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--oos-copy-timeout Duration Timeout for copy (default 1m0s)
+ --oos-description string Description of the remote
--oos-disable-checksum Don't store MD5 checksum with object metadata
--oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
--oos-endpoint string Endpoint for Object storage API
@@ -11300,12 +11875,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--oos-upload-concurrency int Concurrency for multipart uploads (default 10)
--oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi)
+ --opendrive-description string Description of the remote
--opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
+ --pcloud-description string Description of the remote
--pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--pcloud-hostname string Hostname to connect to (default "api.pcloud.com")
--pcloud-password string Your pcloud password (obscured)
@@ -11316,6 +11893,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--pikpak-auth-url string Auth server URL
--pikpak-client-id string OAuth Client Id
--pikpak-client-secret string OAuth Client Secret
+ --pikpak-description string Description of the remote
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
--pikpak-pass string Pikpak password (obscured)
@@ -11328,11 +11906,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--premiumizeme-auth-url string Auth server URL
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
+ --premiumizeme-description string Description of the remote
--premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--premiumizeme-token string OAuth Access Token as a JSON blob
--premiumizeme-token-url string Token server url
--protondrive-2fa string The 2FA code
--protondrive-app-version string The app version string (default "macos-drive@1.0.0-alpha.1+rclone")
+ --protondrive-description string Description of the remote
--protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true)
--protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
--protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
@@ -11343,12 +11923,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--putio-auth-url string Auth server URL
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
+ --putio-description string Description of the remote
--putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--putio-token string OAuth Access Token as a JSON blob
--putio-token-url string Token server url
--qingstor-access-key-id string QingStor Access Key ID
--qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi)
--qingstor-connection-retries int Number of connection retries (default 3)
+ --qingstor-description string Description of the remote
--qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8)
--qingstor-endpoint string Enter an endpoint URL to connect to the QingStor API
--qingstor-env-auth Get QingStor credentials from runtime
@@ -11357,18 +11939,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--qingstor-zone string Zone to connect to
--quatrix-api-key string API key for accessing Quatrix account
+ --quatrix-description string Description of the remote
--quatrix-effective-upload-time string Wanted upload time for one chunk (default "4s")
--quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
--quatrix-hard-delete Delete files permanently rather than putting them into the trash
--quatrix-host string Host name of Quatrix account
--quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size' (default 95.367Mi)
--quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi)
+ --quatrix-skip-project-folders Skip project folders in operations
--s3-access-key-id string AWS Access Key ID
--s3-acl string Canned ACL used when creating buckets and storing or copying objects
--s3-bucket-acl string Canned ACL used when creating buckets
--s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
+ --s3-description string Description of the remote
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -11403,19 +11988,22 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key
--s3-storage-class string The storage class to use when storing new objects in S3
--s3-sts-endpoint string Endpoint for STS
- --s3-upload-concurrency int Concurrency for multipart uploads (default 4)
+ --s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
--s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
--s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint
--s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
--s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
+ --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support)
--s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
--s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
--s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads
--s3-v2-auth If true use v2 authentication
--s3-version-at Time Show file versions as they were at the specified time (default off)
+ --s3-version-deleted Show deleted file markers when using versions
--s3-versions Include old versions in directory listings
--seafile-2fa Two-factor authentication ('true' if the account has 2FA enabled)
--seafile-create-library Should rclone create a library if it doesn't exist
+ --seafile-description string Description of the remote
--seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8)
--seafile-library string Name of the library
--seafile-library-key string Library password (for encrypted libraries only) (obscured)
@@ -11427,6 +12015,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference
--sftp-concurrency int The maximum number of outstanding requests for one file (default 64)
--sftp-copy-is-hardlink Set to enable server side copies using hardlinks
+ --sftp-description string Description of the remote
--sftp-disable-concurrent-reads If set don't use concurrent reads
--sftp-disable-concurrent-writes If set don't use concurrent writes
--sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available
@@ -11461,6 +12050,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
+ --sharefile-description string Description of the remote
--sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot)
--sharefile-endpoint string Endpoint for API calls
--sharefile-root-folder-id string ID of the root folder
@@ -11469,10 +12059,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi)
--sia-api-password string Sia Daemon API Password (obscured)
--sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default "http://127.0.0.1:9980")
+ --sia-description string Description of the remote
--sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot)
--sia-user-agent string Siad User Agent (default "Sia-Agent")
--skip-links Don't warn about skipped symlinks
--smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
+ --smb-description string Description of the remote
--smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
--smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot)
--smb-hide-special-share Hide special shares (e.g. print$) which users aren't supposed to access (default true)
@@ -11484,6 +12076,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--smb-user string SMB username (default "$USER")
--storj-access-grant string Access grant
--storj-api-key string API key
+ --storj-description string Description of the remote
--storj-passphrase string Encryption passphrase
--storj-provider string Choose an authentication method (default "existing")
--storj-satellite-address string Satellite address (default "us1.storj.io")
@@ -11492,6 +12085,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sugarsync-authorization string Sugarsync authorization
--sugarsync-authorization-expiry string Sugarsync authorization expiry
--sugarsync-deleted-id string Sugarsync deleted folder id
+ --sugarsync-description string Description of the remote
--sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot)
--sugarsync-hard-delete Permanently delete files if true
--sugarsync-private-access-key string Sugarsync Private Access Key
@@ -11505,6 +12099,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
--swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
--swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi)
+ --swift-description string Description of the remote
--swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
--swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8)
--swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default "public")
@@ -11524,17 +12119,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--union-action-policy string Policy to choose upstream on ACTION category (default "epall")
--union-cache-time int Cache time of usage and free space (in seconds) (default 120)
--union-create-policy string Policy to choose upstream on CREATE category (default "epmfs")
+ --union-description string Description of the remote
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
+ --uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
+ --webdav-description string Description of the remote
--webdav-encoding string The encoding for the backend
--webdav-headers CommaSepList Set HTTP headers for all transactions
--webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi)
+ --webdav-owncloud-exclude-shares Exclude ownCloud shares
--webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms)
--webdav-pass string Password (obscured)
--webdav-url string URL of http host to connect to
@@ -11543,6 +12142,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--yandex-auth-url string Auth server URL
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
+ --yandex-description string Description of the remote
--yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
--yandex-hard-delete Delete files permanently rather than putting them into the trash
--yandex-token string OAuth Access Token as a JSON blob
@@ -11550,6 +12150,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--zoho-auth-url string Auth server URL
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
+ --zoho-description string Description of the remote
--zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
@@ -11735,16 +12336,21 @@ docker volume create my_vol -d rclone -o opt1=new_val1 ...
docker volume list
docker volume inspect my_vol
If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first.
Bisync
bisync is in beta and is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the Limitations section) before using, or data loss can result. Questions can be asked in the Rclone Forum.
Getting started
- Install rclone and setup your remotes.
- Bisync will create its working directory at ~/.cache/rclone/bisync on Linux, /Users/yourusername/Library/Caches/rclone/bisync on Mac, or C:\Users\MyLogin\AppData\Local\rclone\bisync on Windows. Make sure that this location is writable.
- Run bisync with the --resync flag, specifying the paths to the local and remote sync directory roots.
- For successive sync runs, leave off the --resync flag. (Important!)
- Consider using a filters file for excluding unnecessary files and directories from the sync.
- Consider setting up the --check-access feature for safety.
- On Linux or Mac, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains.
For example, your first command might look like this:
rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run
If all looks good, run it again without --dry-run. After that, remove --resync as well.
Here is a typical run log (with timestamps removed for clarity):
rclone bisync /testdir/path1/ /testdir/path2/ --verbose
INFO : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
@@ -11797,36 +12403,36 @@ Positional arguments:
Type 'rclone listremotes' for list of configured remotes.
Optional Flags:
- --check-access Ensure expected `RCLONE_TEST` files are found on
- both Path1 and Path2 filesystems, else abort.
- --check-filename FILENAME Filename for `--check-access` (default: `RCLONE_TEST`)
- --check-sync CHOICE Controls comparison of final listings:
- `true | false | only` (default: true)
- If set to `only`, bisync will only compare listings
- from the last run but skip actual sync.
- --filters-file PATH Read filtering patterns from a file
- --max-delete PERCENT Safety check on maximum percentage of deleted files allowed.
- If exceeded, the bisync run will abort. (default: 50%)
- --force Bypass `--max-delete` safety check and run the sync.
- Consider using with `--verbose`
- --create-empty-src-dirs Sync creation and deletion of empty directories.
- (Not compatible with --remove-empty-dirs)
- --remove-empty-dirs Remove empty directories at the final cleanup step.
- -1, --resync Performs the resync run.
- Warning: Path1 files may overwrite Path2 versions.
- Consider using `--verbose` or `--dry-run` first.
- --ignore-listing-checksum Do not use checksums for listings
- (add --ignore-checksum to additionally skip post-copy checksum checks)
- --resilient Allow future runs to retry after certain less-serious errors,
- instead of requiring --resync. Use at your own risk!
- --localtime Use local time in listings (default: UTC)
- --no-cleanup Retain working files (useful for troubleshooting and testing).
- --workdir PATH Use custom working directory (useful for testing).
- (default: `~/.cache/rclone/bisync`)
- -n, --dry-run Go through the motions - No files are copied/deleted.
- -v, --verbose Increases logging verbosity.
- May be specified more than once for more details.
- -h, --help help for bisync
+ --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote.
+ --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote.
+ --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
+ --check-filename string Filename for --check-access (default: RCLONE_TEST)
+ --check-sync string Controls comparison of final listings: true|false|only (default: true) (default "true")
+ --compare string Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')
+ --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num)
+ --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none")
+ --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict')
+ --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
+ --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)
+ --filters-file string Read filtering patterns from a file
+ --force Bypass --max-delete safety check and run the sync. Consider using with --verbose
+ -h, --help help for bisync
+ --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
+ --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s)
+ --no-cleanup Retain working files (useful for troubleshooting and testing).
+ --no-slow-hash Ignore listing checksums only on backends where they are slow
+ --recover Automatically recover from interruptions without requiring --resync.
+ --remove-empty-dirs Remove ALL empty directories at the final cleanup step.
+ --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!
+ -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
+ --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none")
+ --retries int Retry operations this many times if they fail (requires --resilient). (default 3)
+ --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
+ --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls.
+ --workdir string Use custom working dir - useful for testing. (default: {WORKDIR})
+ --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%)
+ -n, --dry-run Go through the motions - No files are copied/deleted.
+ -v, --verbose Increases logging verbosity. May be specified more than once for more details.
Arbitrary rclone flags may be specified on the bisync command line, for example rclone bisync ./testdir/path1/ gdrive:testdir/path2/ --drive-skip-gdocs -v -v --timeout 10s. Note that the interactions of various rclone flags with the bisync process flow have not been fully tested yet.
Path1 and Path2 arguments may be references to any mix of local directory paths (absolute or relative), UNC paths (//server/share/path), Windows drive paths (with a drive letter and :) or configured remotes with optional subdirectory paths. Cloud references are distinguished by having a : in the argument (see Windows support below).
The listings in the bisync working directory (default: ~/.cache/rclone/bisync) are named based on the Path1 and Path2 arguments so that separate syncs to individual directories within the tree may be set up, e.g.: path_to_local_tree..dropbox_subdir.lst.
Any empty directories after the sync on both the Path1 and Path2 filesystems are not deleted by default, unless --create-empty-src-dirs is specified. If the --remove-empty-dirs flag is specified, then both paths will have ALL empty directories purged as the last step in the process.
This will effectively make both Path1 and Path2 filesystems contain a matching superset of all files. By default, Path2 files that do not exist in Path1 will be copied to Path1, and the process will then copy the Path1 tree to Path2.
The --resync sequence is roughly equivalent to the following (but see --resync-mode for other options):
rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
rclone copy Path1 Path2 [--create-empty-src-dirs]
The base directories on both Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety - so that bisync can verify that both paths are valid.
When using --resync, a newer version of a file on the Path2 filesystem will (by default) be overwritten by the Path1 filesystem version. (Note that this is NOT entirely symmetrical, and more symmetrical options can be specified with the --resync-mode flag.) Carefully evaluate deltas using --dry-run.
For a resync run, one of the paths may be empty (no files in the path tree). The resync run should result in files on both paths, else a normal non-resync run will fail.
For a non-resync run, either path being empty (no files in the tree) fails with "Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst". This is a safety check that an unexpected empty path does not result in deleting everything in the other path.
Note that --resync implies --resync-mode path1 unless a different --resync-mode is explicitly specified. It is not necessary to use both the --resync and --resync-mode flags -- either one is sufficient without the other.
Note: --resync (including --resync-mode) should only be used under three specific (rare) circumstances:
1. It is your first bisync run (between these two paths)
2. You've just made changes to your bisync settings (such as editing the contents of your --filters-file)
3. There was an error on the prior run, and as a result, bisync now requires --resync to recover
The rest of the time, you should omit --resync. The reason is because --resync will only copy (not sync) each side to the other. Therefore, if you included --resync for every bisync run, it would never be possible to delete a file -- the deleted file would always keep reappearing at the end of every run (because it's being copied from the other side where it still exists). Similarly, renaming a file would always result in a duplicate copy (both old and new name) on both sides.
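For example, a typical first run between two new paths might look like the following (the paths here are illustrative), with subsequent scheduled runs dropping --resync:
rclone bisync /path/to/local remote:path --resync --dry-run -v
rclone bisync /path/to/local remote:path --resync -v
rclone bisync /path/to/local remote:path -v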
If you find that frequent interruptions from #3 are an issue, rather than automatically running --resync, the recommended alternative is to use the --resilient, --recover, and --conflict-resolve flags, (along with Graceful Shutdown mode, when needed) for a very robust "set-it-and-forget-it" bisync setup that can automatically bounce back from almost any interruption it might encounter. Consider adding something like the following:
--resilient --recover --max-lock 2m --conflict-resolve newer
+In the event that a file differs on both sides during a --resync, --resync-mode controls which version will overwrite the other. The supported options are similar to --conflict-resolve. For all of the following options, the version that is kept is referred to as the "winner", and the version that is overwritten (deleted) is referred to as the "loser". The options are named after the "winner":
- path1 - (the default) - the version from Path1 is unconditionally considered the winner (regardless of modtime and size, if any). This can be useful if one side is more trusted or up-to-date than the other, at the time of the --resync.
- path2 - same as path1, except the path2 version is considered the winner.
- newer - the newer file (by modtime) is considered the winner, regardless of which side it came from. This may result in having a mix of some winners from Path1, and some winners from Path2. (The implementation is analogous to running rclone copy --update in both directions.)
- older - same as newer, except the older file is considered the winner, and the newer file is considered the loser.
- larger - the larger file (by size) is considered the winner (regardless of modtime, if any). This can be a useful option for remotes without modtime support, or with the kinds of files (such as logs) that tend to grow but not shrink, over time.
- smaller - the smaller file (by size) is considered the winner (regardless of modtime, if any).
For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it will be ignored and will fall back to the default of path1. (For example, if --resync-mode newer is set, but one of the paths uses a remote that doesn't support modtime.)
- If a winner can't be determined because the chosen method's attribute is missing or equal, it will be ignored, and bisync will instead try to determine whether the files differ by looking at the other --compare methods in effect. (For example, if --resync-mode newer is set, but the Path1 and Path2 modtimes are identical, bisync will compare the sizes.) If bisync concludes that they differ, preference is given to whichever is the "source" at that moment. (In practice, this gives a slight advantage to Path2, as the 2to1 copy comes before the 1to2 copy.) If the files do not differ, nothing is copied (as both sides are already correct).
- These options apply only to files that exist on both sides (with the same name and relative path). Files that exist only on one side and not the other are always copied to the other side during --resync (this is one of the main differences between resync and non-resync runs).
- --conflict-resolve, --conflict-loser, and --conflict-suffix do not apply during --resync, and unlike these flags, nothing is renamed during --resync. When a file differs on both sides during --resync, one version always overwrites the other (much like in rclone copy.) (Consider using --backup-dir to retain a backup of the losing version.)
- Unlike for --conflict-resolve, --resync-mode none is not a valid option (or rather, it will be interpreted as "no resync", unless --resync has also been specified, in which case it will be ignored.)
- Winners and losers are decided at the individual file level only (there is not currently an option to pick an entire winning directory atomically, although the path1 and path2 options typically produce a similar result.)
- To maintain backward compatibility, the --resync flag implies --resync-mode path1 unless a different --resync-mode is explicitly specified. Similarly, all --resync-mode options (except none) imply --resync, so it is not necessary to use both the --resync and --resync-mode flags simultaneously -- either one is sufficient without the other.
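As an illustration, to prefer the most recently modified version of each differing file during a resync (paths and remote name are illustrative), something like this could be used (recall that --resync-mode implies --resync):
rclone bisync /path/to/local remote:path --resync-mode newer -v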
Access check files are an additional safety measure against data loss. bisync will ensure it can find matching RCLONE_TEST files in the same places in the Path1 and Path2 filesystems. RCLONE_TEST files are not generated automatically. For --check-access to succeed, you must first either: A) Place one or more RCLONE_TEST files in both systems, or B) Set --check-filename to a filename already in use in various locations throughout your sync'd fileset. Recommended methods for A) include:
* rclone touch Path1/RCLONE_TEST (create a new file)
* rclone copyto Path1/RCLONE_TEST Path2/RCLONE_TEST (copy an existing file)
* rclone copy Path1/RCLONE_TEST Path2/RCLONE_TEST --include "RCLONE_TEST" (copy multiple files at once, recursively)
* create the files manually (outside of rclone)
* run bisync once without --check-access to set matching files on both filesystems (this will also work, but is not preferred, due to potential for user error -- you are temporarily disabling the safety feature).
Note that --check-access is still enforced on --resync, so bisync --resync --check-access will not work as a method of initially setting the files (this is to ensure that bisync can't inadvertently circumvent its own safety switch.)
Time stamps and file contents for RCLONE_TEST files are not important, just the names and locations. If you have symbolic links in your sync tree it is recommended to place RCLONE_TEST files in the linked-to directory tree, to protect against bisync assuming a bunch of deleted files if the linked-to tree becomes inaccessible. See also the --check-filename flag.
Name of the file(s) used in access health validation. The default --check-filename is RCLONE_TEST. One or more files having this filename must exist, synchronized between your source and destination filesets, in order for --check-access to succeed. See --check-access for additional details.
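For example, one possible way to seed the check file on both sides and then enforce the check on subsequent runs (paths illustrative):
rclone touch /path/to/local/RCLONE_TEST
rclone copyto /path/to/local/RCLONE_TEST remote:path/RCLONE_TEST
rclone bisync /path/to/local remote:path --check-access -v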
As of v1.66, bisync fully supports comparing based on any combination of size, modtime, and checksum (lifting the prior restriction on backends without modtime support.)
By default (without the --compare flag), bisync inherits the same comparison options as sync (that is: size and modtime by default, unless modified with flags such as --checksum or --size-only.)
If the --compare flag is set, it will override these defaults. This can be useful if you wish to compare based on combinations not currently supported in sync, such as comparing all three of size AND modtime AND checksum simultaneously (or just modtime AND checksum).
--compare takes a comma-separated list, with the currently supported values being size, modtime, and checksum. For example, if you want to compare size and checksum, but not modtime, you would do:
--compare size,checksum
+Or if you want to compare all three:
+--compare size,modtime,checksum
+--compare overrides any conflicting flags. For example, if you set the conflicting flags --compare checksum --size-only, --size-only will be ignored, and bisync will compare checksum and not size. To avoid confusion, it is recommended to use either --compare or the normal sync flags, but not both.
If --compare includes checksum and both remotes support checksums but have no hash types in common with each other, checksums will be considered only for comparisons within the same side (to determine what has changed since the prior sync), but not for comparisons against the opposite side. If one side supports checksums and the other does not, checksums will only be considered on the side that supports them.
When comparing with checksum and/or size without modtime, bisync cannot determine whether a file is newer or older -- only whether it is changed or unchanged. (If it is changed on both sides, bisync still does the standard equality-check to avoid declaring a sync conflict unless it absolutely has to.)
It is recommended to do a --resync when changing --compare settings, as otherwise your prior listing files may not contain the attributes you wish to compare (for example, they will not have stored checksums if you were not previously comparing checksums.)
When --checksum or --compare checksum is set, bisync will retrieve (or generate) checksums (for backends that support them) when creating the listings for both paths, and store the checksums in the listing files. --ignore-listing-checksum will disable this behavior, which may speed things up considerably, especially on backends (such as local) where hashes must be computed on the fly instead of retrieved. Please note the following:
- As of v1.66, --ignore-listing-checksum is now automatically set when neither --checksum nor --compare checksum are in use (as the checksums would not be used for anything.)
- --ignore-listing-checksum is NOT the same as --ignore-checksum, and you may wish to use one or the other, or both. In a nutshell: --ignore-listing-checksum controls whether checksums are considered when scanning for diffs, while --ignore-checksum controls whether checksums are considered during the copy/sync operations that follow, if there ARE diffs.
- Unless --ignore-listing-checksum is passed, bisync currently computes hashes for one path even when there's no common hash with the other path (for example, a crypt remote.) This can still be beneficial, as the hashes will still be used to detect changes within the same side (if --checksum or --compare checksum is set), even if they can't be used to compare against the opposite side.
- If you wish to ignore listing checksums only on remotes where they are slow to compute, consider using --no-slow-hash (or --slow-hash-sync-only) instead of --ignore-listing-checksum.
- If --ignore-listing-checksum is used simultaneously with --compare checksum (or --checksum), checksums will be ignored for bisync deltas, but still considered during the sync operations that follow (if deltas are detected based on modtime and/or size.)
On some remotes (notably local), checksums can dramatically slow down a bisync run, because hashes cannot be stored and need to be computed in real-time when they are requested. On other remotes (such as drive), they add practically no time at all. The --no-slow-hash flag will automatically skip checksums on remotes where they are slow, while still comparing them on others (assuming --compare includes checksum.) This can be useful when one of your bisync paths is slow but you still want to check checksums on the other, for a more robust sync.
Same as --no-slow-hash, except slow hashes are still considered during sync calls. They are still NOT considered for determining deltas, nor are they included in listings. They are also skipped during --resync. The main use case for this flag is when you have a large number of files, but relatively few of them change from run to run -- so you don't want to check your entire tree every time (it would take too long), but you still want to consider checksums for the smaller group of files for which a modtime or size change was detected. Keep in mind that this speed savings comes with a safety trade-off: if a file's content were to change without a change to its modtime or size, bisync would not detect it, and it would not be synced.
--slow-hash-sync-only is only useful if both remotes share a common hash type (if they don't, bisync will automatically fall back to --no-slow-hash.) Both --no-slow-hash and --slow-hash-sync-only have no effect without --compare checksum (or --checksum).
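For example, assuming a local path (where hashes are slow to compute) paired with a drive remote (where they are fast), a run such as the following (paths illustrative) would skip checksums locally while still comparing them on the remote:
rclone bisync /path/to/local remote:path --compare size,modtime,checksum --no-slow-hash -v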
If --download-hash is set, bisync will use best efforts to obtain an MD5 checksum by downloading and computing on-the-fly, when checksums are not otherwise available (for example, a remote that doesn't support them.) Note that since rclone has to download the entire file, this may dramatically slow down your bisync runs, and is also likely to use a lot of data, so it is probably not practical for bisync paths with a large total file size. However, it can be a good option for syncing small-but-important files with maximum accuracy (for example, a source code repo on a crypt remote.) An additional advantage over methods like cryptcheck is that the original file is not required for comparison (for example, --download-hash can be used to bisync two different crypt remotes with different passwords.)
When --download-hash is set, bisync still looks for more efficient checksums first, and falls back to downloading only when none are found. It takes priority over conflicting flags such as --no-slow-hash. --download-hash is not suitable for Google Docs and other files of unknown size, as their checksums would change from run to run (due to small variances in the internals of the generated export file.) Therefore, bisync automatically skips --download-hash for files with a size less than 0.
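For example, something like the following might be used to bisync two different crypt remotes with maximum accuracy (the remote names are hypothetical):
rclone bisync secretA:path secretB:path --compare checksum --download-hash -v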
See also: Hasher backend, cryptcheck command, rclone check --download option, md5sum command
As a safety check, if greater than the --max-delete percent of files were deleted on either the Path1 or Path2 filesystem, then bisync will abort with a warning message, without making any changes. The default --max-delete is 50%. One way to trigger this limit is to rename a directory that contains more than half of your files. This will appear to bisync as a bunch of deleted files and a bunch of new files. This safety check is intended to block bisync from deleting all of the files on both filesystems due to a temporary network access issue, or if the user had inadvertently deleted the files on one side or the other. To force the sync, either set a different delete percentage limit, e.g. --max-delete 75 (allows up to 75% deletion), or use --force to bypass the check.
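For example, either of the following (paths illustrative) would allow a large rename to proceed:
rclone bisync /path/to/local remote:path --max-delete 75
rclone bisync /path/to/local remote:path --force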
Also see the all files changed check.
-By using rclone filter features you can exclude file types or directory sub-trees from the sync. See the bisync filters section and generic --filter-from documentation. An example filters file contains filters for non-allowed files for synching with Dropbox.
If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.
To block this from happening, bisync calculates an MD5 hash of the filters file and stores the hash in a .md5 file in the same place as your filters file. On the next run with --filters-file set, bisync re-calculates the MD5 hash of the current filters file and compares it to the hash stored in the .md5 file. If they don't match, the run aborts with a critical error and thus forces you to do a --resync, likely avoiding a disaster.
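For example, after editing the filters file, a run such as this (filters path illustrative) re-baselines the listings and the stored hash:
rclone bisync /path/to/local remote:path --filters-file /path/to/filters.txt --resync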
In bisync, a "conflict" is a file that is new or changed on both sides (relative to the prior run) AND is not currently identical on both sides. --conflict-resolve controls how bisync handles such a scenario. The currently supported options are:
- none - (the default) - do not attempt to pick a winner, keep and rename both files according to --conflict-loser and --conflict-suffix settings. For example, with the default settings, file.txt on Path1 is renamed file.txt.conflict1 and file.txt on Path2 is renamed file.txt.conflict2. Both are copied to the opposite path during the run, so both sides end up with a copy of both files. (As none is the default, it is not necessary to specify --conflict-resolve none -- you can just omit the flag.)
- newer - the newer file (by modtime) is considered the winner and is copied without renaming. The older file (the "loser") is handled according to --conflict-loser and --conflict-suffix settings (either renamed or deleted.) For example, if file.txt on Path1 is newer than file.txt on Path2, the result on both sides (with other default settings) will be file.txt (winner from Path1) and file.txt.conflict1 (loser from Path2).
- older - same as newer, except the older file is considered the winner, and the newer file is considered the loser.
- larger - the larger file (by size) is considered the winner (regardless of modtime, if any).
- smaller - the smaller file (by size) is considered the winner (regardless of modtime, if any).
- path1 - the version from Path1 is unconditionally considered the winner (regardless of modtime and size, if any). This can be useful if one side is usually more trusted or up-to-date than the other.
- path2 - same as path1, except the path2 version is considered the winner.
For all of the above options, note the following:
- If either of the underlying remotes lacks support for the chosen method, it will be ignored and fall back to none. (For example, if --conflict-resolve newer is set, but one of the paths uses a remote that doesn't support modtime.)
- If a winner can't be determined because the chosen method's attribute is missing or equal, it will be ignored and fall back to none. (For example, if --conflict-resolve newer is set, but the Path1 and Path2 modtimes are identical, even if the sizes may differ.)
- If the file's content is currently identical on both sides, it is not considered a "conflict", even if new or changed on both sides since the prior sync. (For example, if you made a change on one side and then synced it to the other side by other means.) Therefore, none of the conflict resolution flags apply in this scenario.
- The conflict resolution flags do not apply during a --resync, as there is no "prior run" to speak of (but see --resync-mode for similar options.)
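For example, to automatically keep the newer version of each conflicting file and rename the older one (paths illustrative):
rclone bisync /path/to/local remote:path --conflict-resolve newer -v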
--conflict-loser determines what happens to the "loser" of a sync conflict (when --conflict-resolve determines a winner) or to both files (when there is no winner.) The currently supported options are:
- num - (the default) - auto-number the conflicts by automatically appending the next available number to the --conflict-suffix, in chronological order. For example, with the default settings, the first conflict for file.txt will be renamed file.txt.conflict1. If file.txt.conflict1 already exists, file.txt.conflict2 will be used instead (etc., up to a maximum of 9223372036854775807 conflicts.)
- pathname - rename the conflicts according to which side they came from, which was the default behavior prior to v1.66. For example, with --conflict-suffix path, file.txt from Path1 will be renamed file.txt.path1, and file.txt from Path2 will be renamed file.txt.path2. If two non-identical suffixes are provided (ex. --conflict-suffix cloud,local), the trailing digit is omitted. Importantly, note that with pathname, there is no auto-numbering beyond 2, so if file.txt.path2 somehow already exists, it will be overwritten. Using a dynamic date variable in your --conflict-suffix (see below) is one possible way to avoid this. Note also that conflicts-of-conflicts are possible, if the original conflict is not manually resolved -- for example, if for some reason you edited file.txt.path1 on both sides, and those edits were different, the result would be file.txt.path1.path1 and file.txt.path1.path2 (in addition to file.txt.path2.)
- delete - keep the winner only and delete the loser, instead of renaming it. If a winner cannot be determined (see --conflict-resolve for details on how this could happen), delete is ignored and the default num is used instead (i.e. both versions are kept and renamed, and neither is deleted.) delete is inherently the most destructive option, so use it only with care.
For all of the above options, note that if a winner cannot be determined (see --conflict-resolve for details on how this could happen), or if --conflict-resolve is not in use, both files will be renamed.
--conflict-suffix controls the suffix that is appended when bisync renames a --conflict-loser (default: conflict). --conflict-suffix will accept either one string or two comma-separated strings to assign different suffixes to Path1 vs. Path2. This may be helpful later in identifying the source of the conflict. (For example, --conflict-suffix dropboxconflict,laptopconflict)
With --conflict-loser num, a number is always appended to the suffix. With --conflict-loser pathname, a number is appended only when one suffix is specified (or when two identical suffixes are specified.) i.e. with --conflict-loser pathname, all of the following would produce exactly the same result:
--conflict-suffix path
+--conflict-suffix path,path
+--conflict-suffix path1,path2
+Suffixes may be as short as 1 character. By default, the suffix is appended after any other extensions (ex. file.jpg.conflict1); however, this can be changed with the --suffix-keep-extension flag (i.e. to instead result in file.conflict1.jpg).
--conflict-suffix supports several dynamic date variables when enclosed in curly braces as globs. This can be helpful to track the date and/or time that each conflict was handled by bisync. For example:
--conflict-suffix {DateOnly}-conflict
+// result: myfile.txt.2006-01-02-conflict1
+All of the formats described here and here are supported, but take care to ensure that your chosen format does not use any characters that are illegal on your remotes (for example, macOS does not allow colons in filenames, and slashes are also best avoided as they are often interpreted as directory separators.) To address this particular issue, an additional {MacFriendlyTime} (or just {mac}) option is supported, which results in 2006-01-02 0304PM.
Note that --conflict-suffix is entirely separate from rclone's main --suffix flag. This is intentional, as users may wish to use both flags simultaneously, if also using --backup-dir.
Finally, note that the default in bisync prior to v1.66 was to rename conflicts with ..path1 and ..path2 (with two periods, and path instead of conflict.) Bisync now defaults to a single dot instead of a double dot, but additional dots can be added by including them in the specified suffix string. For example, for behavior equivalent to the previous default, use:
[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
+Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This integrity check is performed at the end of the sync run. Any untrapped failing copy/deletes between the two paths might result in differences between the two listings and in untracked file content differences between the two paths. A resync run would correct the error.
Note that the default-enabled integrity check locally executes a load of both the final Path1 and Path2 listings, and thus adds to the run time of a sync. Using --check-sync=false will disable it and may significantly reduce the sync run times for very large numbers of files.
The check may be run manually with --check-sync=only. It runs only the integrity check and terminates without actually synching.
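For example, to run the integrity check by itself, without syncing (paths illustrative):
rclone bisync /path/to/local remote:path --check-sync=only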
See also: Concurrent modifications
-By default, bisync will retrieve (or generate) checksums (for backends that support them) when creating the listings for both paths, and store the checksums in the listing files. --ignore-listing-checksum will disable this behavior, which may speed things up considerably, especially on backends (such as local) where hashes must be computed on the fly instead of retrieved. Please note the following:
- --ignore-listing-checksum is NOT the same as --ignore-checksum, and you may wish to use one or the other, or both. In a nutshell: --ignore-listing-checksum controls whether checksums are considered when scanning for diffs, while --ignore-checksum controls whether checksums are considered during the copy/sync operations that follow, if there ARE diffs.
- Unless --ignore-listing-checksum is passed, bisync currently computes hashes for one path even when there's no common hash with the other path (for example, a crypt remote.)
- If --ignore-listing-checksum was not specified when creating the listings, --check-sync=only can be used to compare Path1 vs. Path2 checksums (as of the time the previous listings were created.) However, --check-sync=only will NOT include checksums if the previous listings were generated on a run using --ignore-listing-checksum. For a more robust integrity check of the current state, consider using check (or cryptcheck, if at least one path is a crypt remote.)
Note that currently, --check-sync only checks listing snapshots and NOT the actual files on the remotes. Note also that the listing snapshots will not know about any changes that happened during or after the latest bisync run, as those will be discovered on the next run. Therefore, while listings should always match each other at the end of a bisync run, it is expected that they will not match the underlying remotes, nor will the remotes match each other, if there were changes during or after the run. This is normal, and any differences will be detected and synced on the next run.
For a robust integrity check of the current state of the remotes (as opposed to just their listing snapshots), consider using check (or cryptcheck, if at least one path is a crypt remote) instead of --check-sync, keeping in mind that differences are expected if files changed during or after your last bisync run.
For example, a possible sequence could look like this:
+rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
+rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
+rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
+(or switch Path1 and Path2 to make Path2 the source-of-truth)
+Or, if neither side is totally up-to-date, you could run a --resync to bring them back into agreement (but remember that this could cause deleted files to re-appear.)
Note also that rclone check does not currently include empty directories, so if you want to know if any empty directories are out of sync, consider alternatively running the above rclone sync command with --dry-run added.
See also: Concurrent modifications, --resilient
Caution: this is an experimental feature. Use at your own risk!
By default, most errors or interruptions will cause bisync to abort and require --resync to recover. This is a safety feature, to prevent bisync from running again until a user checks things out. However, in some cases, bisync can go too far and enforce a lockout when one isn't actually necessary, like for certain less-serious errors that might resolve themselves on the next run. When --resilient is specified, bisync tries its best to recover and self-correct, and only requires --resync as a last resort when a human's involvement is absolutely necessary. The intended use case is for running bisync as a background process (such as via scheduled cron).
When using --resilient mode, bisync will still report the error and abort, however it will not lock out future runs -- allowing the possibility of retrying at the next normally scheduled time, without requiring a --resync first. Examples of such retryable errors include access test failures, missing listing files, and filter change detections. These safety features will still prevent the current run from proceeding -- the difference is that if conditions have improved by the time of the next run, that next run will be allowed to proceed. Certain more serious errors will still enforce a --resync lockout, even in --resilient mode, to prevent data loss.
-Behavior of --resilient may change in a future version.
Behavior of --resilient may change in a future version. (See also: --recover, --max-lock, Graceful Shutdown)
If --recover is set, in the event of a sudden interruption or other un-graceful shutdown, bisync will attempt to automatically recover on the next run, instead of requiring --resync. Bisync is able to recover robustly by keeping one "backup" listing at all times, representing the state of both paths after the last known successful sync. Bisync can then compare the current state with this snapshot to determine which changes it needs to retry. Changes that were synced after this snapshot (during the run that was later interrupted) will appear to bisync as if they are "new or changed on both sides", but in most cases this is not a problem, as bisync will simply do its usual "equality check" and learn that no action needs to be taken on these files, since they are already identical on both sides.
In the rare event that a file is synced successfully during a run that later aborts, and then that same file changes AGAIN before the next run, bisync will think it is a sync conflict, and handle it accordingly. (From bisync's perspective, the file has changed on both sides since the last trusted sync, and the files on either side are not currently identical.) Therefore, --recover carries with it a slightly increased chance of having conflicts -- though in practice this is pretty rare, as the conditions required to cause it are quite specific. This risk can be reduced by using bisync's "Graceful Shutdown" mode (triggered by sending SIGINT or Ctrl+C), when you have the choice, instead of forcing a sudden termination.
--recover and --resilient are similar, but distinct -- the main difference is that --resilient is about retrying, while --recover is about recovering. Most users will probably want both. --resilient allows retrying when bisync has chosen to abort itself due to safety features such as failing --check-access or detecting a filter change. --resilient does not cover external interruptions such as a user shutting down their computer in the middle of a sync -- that is what --recover is for.
Bisync uses lock files as a safety feature to prevent interference from other bisync runs while it is running. Bisync normally removes these lock files at the end of a run, but if bisync is abruptly interrupted, these files will be left behind. By default, they will lock out all future runs, until the user has a chance to manually check things out and remove the lock. As an alternative, --max-lock can be used to make them automatically expire after a certain period of time, so that future runs are not locked out forever, and auto-recovery is possible. --max-lock can be any duration 2m or greater (or 0 to disable). If set, lock files older than this will be considered "expired", and future runs will be allowed to disregard them and proceed. (Note that the --max-lock duration must be set by the process that left the lock file -- not the later one interpreting it.)
If set, bisync will also "renew" these lock files every --max-lock minus one minute throughout a run, for extra safety. (For example, with --max-lock 5m, bisync would renew the lock file (for another 5 minutes) every 4 minutes until the run has completed.) In other words, it should not be possible for a lock file to pass its expiration time while the process that created it is still running -- and you can therefore be reasonably sure that any expired lock file you may find was left there by an interrupted run, not one that is still running and just taking a while.
If --max-lock is 0 or not set, the default is that lock files will never expire, and will block future runs (of these same two bisync paths) indefinitely.
For maximum resilience from disruptions, consider setting a relatively short duration like --max-lock 2m along with --resilient and --recover, and a relatively frequent cron schedule. The result will be a very robust "set-it-and-forget-it" bisync run that can automatically bounce back from almost any interruption it might encounter, without requiring the user to get involved and run a --resync. (See also: Graceful Shutdown mode)
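As a sketch, a crontab entry such as the following (the schedule and paths are illustrative) would implement such a setup:
*/15 * * * * rclone bisync /path/to/local remote:path --resilient --recover --max-lock 2m --conflict-resolve newer -v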
As of v1.66, --backup-dir is supported in bisync. Because --backup-dir must be a non-overlapping path on the same remote, Bisync has introduced new --backup-dir1 and --backup-dir2 flags to support separate backup-dirs for Path1 and Path2 (bisyncing between different remotes with --backup-dir would not otherwise be possible.) --backup-dir1 and --backup-dir2 can use different remotes from each other, but --backup-dir1 must use the same remote as Path1, and --backup-dir2 must use the same remote as Path2. Each backup directory must not overlap its respective bisync Path without being excluded by a filter rule.
The standard --backup-dir will also work, if both paths use the same remote (but note that deleted files from both paths would be mixed together in the same dir). If either --backup-dir1 or --backup-dir2 is set, they will override --backup-dir.
Example:
+rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
+In this example, if the user deletes a file in /Users/someuser/some/local/path/Bisync, bisync will propagate the delete to the other side by moving the corresponding file from gdrive:Bisync to gdrive:BackupDir. If the user deletes a file from gdrive:Bisync, bisync moves it from /Users/someuser/some/local/path/Bisync to /Users/someuser/some/local/path/BackupDir.
In the event of a rename due to a sync conflict, the rename is not considered a delete, unless a previous conflict with the same name already exists and would get overwritten.
+See also: --suffix, --suffix-keep-extension
bisync retains the listings of the Path1 and Path2 filesystems from the prior run. On each successive run it will:
- list files on Path1 and Path2, and check for changes on each side, including new, newer, older, and deleted files
- propagate changes on Path1 to Path2, and vice-versa
Safety measures include:
- handling change conflicts non-destructively, by creating renamed .conflict1, .conflict2, etc. file versions (formerly ..path1 and ..path2 file versions), according to --conflict-resolve, --conflict-loser, and --conflict-suffix settings
- a file system access health check using RCLONE_TEST files (see the --check-access flag)
- protection against excessive deletes, via the --max-delete and --force flags.
[Flattened remnant of the sync-check table: changes on one side only are propagated via rclone copy _Path2 file to Path1 or rclone copy _Path1 file to Path2; files new/changed on both sides (and not identical) are handled per the --conflict-resolve & --conflict-loser settings, e.g. rclone copy renamed Path2.conflict2 file to Path1 and rclone copy renamed Path1.conflict1 file to Path2.]
-As of rclone v1.64, bisync is now better at detecting false positive sync conflicts, which would previously have resulted in unnecessary renames and duplicates. Now, when bisync comes to a file that it wants to rename (because it is new/changed on both sides), it first checks whether the Path1 and Path2 versions are currently identical (using the same underlying function as check.) If bisync concludes that the files are identical, it will skip them and move on. Otherwise, it will create renamed ..Path1 and ..Path2 duplicates, as before. This behavior also improves the experience of renaming directories, as a --resync is no longer required, so long as the same change has been made on both sides.
As of rclone v1.64, bisync is now better at detecting false positive sync conflicts, which would previously have resulted in unnecessary renames and duplicates. Now, when bisync comes to a file that it wants to rename (because it is new/changed on both sides), it first checks whether the Path1 and Path2 versions are currently identical (using the same underlying function as check.) If bisync concludes that the files are identical, it will skip them and move on. Otherwise, it will create renamed duplicates, as before. This behavior also improves the experience of renaming directories, as a --resync is no longer required, so long as the same change has been made on both sides.
If all prior existing files on either of the filesystems have changed (e.g. timestamps have changed due to changing the system's timezone) then bisync will abort without making any changes. Any new files are not considered for this check. You could use --force to force the sync (whichever side has the changed timestamp files wins). Alternately, a --resync may be used (Path1 versions will be pushed to Path2). Consider the situation carefully and perhaps use --dry-run before you commit to the changes.
-Bisync relies on file timestamps to identify changed files and will refuse to operate if backend lacks the modification time support.
-If you or your application should change the content of a file without changing the modification time then bisync will not notice the change, and thus will not copy it to the other side.
-Note that on some cloud storage systems it is not possible to have file timestamps that match precisely between the local and other filesystems.
-Bisync's approach to this problem is by tracking the changes on each side separately over time with a local database of files in that side then applying the resulting changes on the other side.
+By default, bisync compares files by modification time and size. If you or your application should change the content of a file without changing the modification time and size, then bisync will not notice the change, and thus will not copy it to the other side. As an alternative, consider comparing by checksum (if your remotes support it). See --compare for details.
Certain bisync critical errors, such as file copy/move failing, will result in a bisync lockout of following runs. The lockout is asserted because the sync status and history of the Path1 and Path2 filesystems cannot be trusted, so it is safer to block any further changes until someone checks things out. The recovery is to do a --resync again.
It is recommended to use --resync --dry-run --verbose initially and carefully review what changes will be made before running the --resync without --dry-run.
Most of these events come up due to an error status from an internal call. On such a critical error the {...}.path1.lst and {...}.path2.lst listing files are renamed to extension .lst-err, which blocks any future bisync runs (since the normal .lst files are not found). Bisync keeps them under the bisync subdirectory of the rclone cache directory, typically at ${HOME}/.cache/rclone/bisync/ on Linux.
Some errors are considered temporary and re-running the bisync is not blocked. The critical return blocks further bisync runs.
-See also: --resilient
See also: --resilient, --recover, --max-lock, Graceful Shutdown
-When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains PID of the blocking process, which may help in debug.
When bisync is running, a lock file is created in the bisync working directory, typically at ~/.cache/rclone/bisync/PATH1..PATH2.lck on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains PID of the blocking process, which may help in debug. Lock files can be set to automatically expire after a certain amount of time, using the --max-lock flag.
Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synched between concurrent runs, lest there be replicated files, deleted files and general mayhem.
rclone bisync returns the following codes to calling program: - 0 on a successful run, - 1 for a non-critical failing run (a rerun may be successful), - 2 for a critically aborted run (requires a --resync to recover).
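A minimal wrapper-script sketch (paths illustrative) that reacts to these codes might look like:
rclone bisync /path/to/local remote:path --resilient -v
case $? in
  0) echo "bisync succeeded" ;;
  1) echo "non-critical failure: a rerun may succeed" ;;
  2) echo "critical abort: check things out and recover with --resync" ;;
esac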
Bisync has a "Graceful Shutdown" mode which is activated by sending SIGINT or pressing Ctrl+C during a run. Once triggered, bisync will use best efforts to exit cleanly before the timer runs out. If bisync is in the middle of transferring files, it will attempt to cleanly empty its queue by finishing what it has started but not taking more. If it cannot do so within 30 seconds, it will cancel the in-progress transfers at that point and then give itself a maximum of 60 seconds to wrap up, save its state for next time, and exit. With the -vP flags you will see constant status updates and a final confirmation of whether or not the graceful shutdown was successful.
At any point during the "Graceful Shutdown" sequence, a second SIGINT or Ctrl+C will trigger an immediate, un-graceful exit, which will leave things in a messier state. Usually a robust recovery will still be possible if using --recover mode, otherwise you will need to do a --resync.
If you plan to use Graceful Shutdown mode, it is recommended to use --resilient and --recover, and it is important to NOT use --inplace, otherwise you risk leaving partially-written files on one side, which may be confused for real files on the next run. Note also that in the event of an abrupt interruption, a lock file will be left behind to block concurrent runs. You will need to delete it before you can proceed with the next run (or wait for it to expire on its own, if using --max-lock.)
-Bisync is considered BETA and has been tested with the following backends: - Local filesystem - Google Drive - Dropbox - OneDrive - S3 - SFTP - Yandex Disk
+Bisync is considered BETA and has been tested with the following backends: - Local filesystem - Google Drive - Dropbox - OneDrive - S3 - SFTP - Yandex Disk - Crypt
It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we'll update the list. Run the test suite to check for proper operation as described below.
-First release of rclone bisync requires that underlying backend supports the modification time feature and will refuse to run otherwise. This limitation will be lifted in a future rclone bisync release.
The first release of rclone bisync required both underlying backends to support modification times, and refused to run otherwise. This limitation has been lifted as of v1.66, as bisync now supports comparing checksum and/or size instead of (or in addition to) modtime. See --compare for details.
-When using Local, FTP or SFTP remotes rclone does not create temporary files at the destination when copying, and thus if the connection is lost the created file may be corrupt, which will likely propagate back to the original path on the next sync, resulting in data loss. This will be solved in a future release, there is no workaround at the moment.
-Files that change during a bisync run may result in data loss. This has been seen in a highly dynamic environment, where the filesystem is getting hammered by running processes during the sync. The currently recommended solution is to sync at quiet times or filter out unnecessary directories and files.
-As an alternative approach, consider using --check-sync=false (and possibly --resilient) to make bisync more forgiving of filesystems that change during the sync. Be advised that this may cause bisync to miss events that occur during a bisync run, so it is a good idea to supplement this with a periodic independent integrity check, and corrective sync if diffs are found. For example, a possible sequence could look like this:
-rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
-rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
-rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
-(or switch Path1 and Path2 to make Path2 the source-of-truth)
-Or, if neither side is totally up-to-date, you could run a --resync to bring them back into agreement (but remember that this could cause deleted files to re-appear.)
-Note also that rclone check does not currently include empty directories, so if you want to know if any empty directories are out of sync, consider alternatively running the above rclone sync command with --dry-run added.
When using Local, FTP or SFTP remotes with --inplace, rclone does not create temporary files at the destination when copying, and thus if the connection is lost the created file may be corrupt, which will likely propagate back to the original path on the next sync, resulting in data loss. It is therefore recommended to omit --inplace.
Files that change during a bisync run may result in data loss. Prior to rclone v1.66, this was commonly seen in highly dynamic environments, where the filesystem was getting hammered by running processes during the sync. As of rclone v1.66, bisync was redesigned to use a "snapshot" model, greatly reducing the risks from changes during a sync. Changes that are not detected during the current sync will now be detected during the following sync, and will no longer cause the entire run to throw a critical error. There is additionally a mechanism to mark files as needing to be internally rechecked next time, for added safety. It should therefore no longer be necessary to sync only at quiet times -- however, note that an error can still occur if a file happens to change at the exact moment it's being read/written by bisync (same as would happen in rclone sync.) (See also: --ignore-checksum, --local-no-check-updated)
By default, new/deleted empty directories on one path are not propagated to the other side. This is because bisync (and rclone) natively works on files, not directories. However, this can be changed with the --create-empty-src-dirs flag, which works in much the same way as in sync and copy. When used, empty directories created or deleted on one side will also be created or deleted on the other side. The following should be noted:
* --create-empty-src-dirs is not compatible with --remove-empty-dirs. Use only one or the other (or neither).
* It is not recommended to switch back and forth between --create-empty-src-dirs and the default (no --create-empty-src-dirs) without running --resync. This is because it may appear as though all directories (not just the empty ones) were created/deleted, when actually you've just toggled between making them visible/invisible to bisync. It looks scarier than it is, but it's still probably best to stick to one or the other, and use --resync when you need to switch.
-Renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying all files again from Path1 to Path2. Bisync sees this as all files in the old directory name as deleted and all files in the new directory name as new. Currently, the most effective and efficient method of renaming a directory is to rename it to the same name on both sides. (As of rclone v1.64, a --resync is no longer required after doing so, as bisync will automatically detect that Path1 and Path2 are in agreement.)
By default, renaming a folder on the Path1 side results in deleting all files on the Path2 side and then copying all files again from Path1 to Path2. Bisync sees this as all files in the old directory name as deleted and all files in the new directory name as new.
+A recommended solution is to use --track-renames, which is now supported in bisync as of rclone v1.66. Note that --track-renames is not available during --resync, as --resync does not delete anything (--track-renames only supports sync, not copy.)
Otherwise, the most effective and efficient method of renaming a directory is to rename it to the same name on both sides. (As of rclone v1.64, a --resync is no longer required after doing so, as bisync will automatically detect that Path1 and Path2 are in agreement.)
--fast-list used by default
Unlike most other rclone commands, bisync uses --fast-list by default, for backends that support it. In many cases this is desirable, however, there are some scenarios in which bisync could be faster without --fast-list, and there is also a known issue concerning Google Drive users with many empty directories. For now, the recommended way to avoid using --fast-list is to add --disable ListR to all bisync commands. The default behavior may change in a future version.
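For example, to opt out of --fast-list on a given run (paths illustrative):
rclone bisync /path/to/local remote:path --disable ListR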
When rclone detects an overridden config, it adds a suffix like {ABCDE} on the fly to the internal name of the remote. Bisync follows suit by including this suffix in its listing filenames. However, this suffix does not necessarily persist from run to run, especially if different flags are provided. So if next time the suffix assigned is {FGHIJ}, bisync will get confused, because it's looking for a listing file with {FGHIJ}, when the file it wants has {ABCDE}. As a result, it throws Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run and refuses to run again until the user runs a --resync (unless using --resilient). The best workaround at the moment is to set any backend-specific flags in the config file instead of specifying them with command flags. (You can still override them as needed for other rclone commands.)
-Synching with case-insensitive filesystems, such as Windows or Box, can result in file name conflicts. This will be fixed in a future release. The near-term workaround is to make sure that files on both sides don't have spelling case differences (Smile.jpg vs. smile.jpg).
As of v1.66, case and unicode form differences no longer cause critical errors, and normalization (when comparing between filesystems) is handled according to the same flags and defaults as rclone sync. See the following options (all of which are supported by bisync) to control this behavior more granularly:
- --fix-case
- --ignore-case-sync
- --no-unicode-normalization
- --local-unicode-normalization and --local-case-sensitive (caution: these are normally not what you want.)
Note that in the (probably rare) event that --fix-case is used AND a file is new/changed on both sides AND the checksums match AND the filename case does not match, the Path1 filename is considered the winner, for the purposes of --fix-case (Path2 will be renamed to match it).
Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on Windows GitHub runners.
Drive letters are allowed, including drive letters mapped to network drives (rclone bisync J:\localsync GDrive:). If a drive letter is omitted, the shell current drive is the default. Drive letters are a single character followed by :, so cloud names must be more than one character long.
Google Drive has a filter for certain file types (.exe, .apk, et cetera) that by default cannot be copied from Google Drive to the local filesystem. If you are having problems, run with --verbose to see specifically which files are generating complaints. If the error is This file has been identified as malware or spam and cannot be downloaded, consider using the flag --drive-acknowledge-abuse.
Google docs exist as virtual files on Google Drive and cannot be transferred to other filesystems natively. While it is possible to export a Google doc to a normal file (with .xlsx extension, for example), it is not possible to import a normal file back into a Google document.
-Bisync's handling of Google Doc files is to flag them in the run log output for user's attention and ignore them for any file transfers, deletes, or syncs. They will show up with a length of -1 in the listings. This bisync run is otherwise successful:
-2021/05/11 08:23:15 INFO : Synching Path1 "/path/to/local/tree/base/" with Path2 "GDrive:"
-2021/05/11 08:23:15 INFO : ...path2.lst-new: Ignoring incorrect line: "- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx"
-2021/05/11 08:23:15 INFO : Bisync successful
+As of v1.66, Google Docs (including Google Sheets, Slides, etc.) are now supported in bisync, subject to the same options, defaults, and limitations as in rclone sync. When bisyncing drive with non-drive backends, the drive -> non-drive direction is controlled by --drive-export-formats (default "docx,xlsx,pptx,svg") and the non-drive -> drive direction is controlled by --drive-import-formats (default none.)
For example, with the default export/import formats, a Google Sheet on the drive side will be synced to an .xlsx file on the non-drive side. In the reverse direction, .xlsx files with filenames that match an existing Google Sheet will be synced to that Google Sheet, while .xlsx files that do NOT match an existing Google Sheet will be copied to drive as normal .xlsx files (without conversion to Sheets, although the Google Drive web browser UI may still give you the option to open it as one.)
If --drive-import-formats is set (it's not, by default), then all of the specified formats will be converted to Google Docs, if there is no existing Google Doc with a matching name. Caution: such conversion can be quite lossy, and in most cases it's probably not what you want!
To bisync Google Docs as URL shortcut links (in a manner similar to "Drive for Desktop"), use: --drive-export-formats url (or alternatives.)
Note that these link files cannot be edited on the non-drive side -- you will get errors if you try to sync an edited link file back to drive. They CAN be deleted (it will result in deleting the corresponding Google Doc.) If you create a .url file on the non-drive side that does not match an existing Google Doc, bisyncing it will just result in copying the literal .url file over to drive (no Google Doc will be created.) So, as a general rule of thumb, think of them as read-only placeholders on the non-drive side, and make all your changes on the drive side.
Likewise, even with other export-formats, it is best to only move/rename Google Docs on the drive side. This is because otherwise, bisync will interpret this as a file deleted and another created, and accordingly, it will delete the Google Doc and create a new file at the new path. (Whether or not that new file is a Google Doc depends on --drive-import-formats.)
Lastly, take note that all Google Docs on the drive side have a size of -1 and no checksum. Therefore, they cannot be reliably synced with the --checksum or --size-only flags. (To be exact: they will still get created/deleted, and bisync's delta engine will notice changes and queue them for syncing, but the underlying sync function will consider them identical and skip them.) To work around this, use the default (modtime and size) instead of --checksum or --size-only.
To ignore Google Docs entirely, use --drive-skip-gdocs.
Rclone does not yet have a built-in capability to monitor the local file system for changes and must be blindly run periodically. On Windows this can be done using a Task Scheduler, on Linux you can use Cron which is described below.
@@ -12440,6 +13138,29 @@ Options:
Bisync adopts the differential synchronization technique, which is based on keeping a history of the changes performed by both synchronizing sides. See the Dual Shadow Method section in Neil Fraser's article.
Also note a number of academic publications by Benjamin Pierce about Unison and synchronization in general.
v1.66
- --track-renames and --backup-dir are now supported
- Partial uploads known issue on local/ftp/sftp has been resolved (unless using --inplace)
- A few basic terminal colors are now supported, controllable with --color (AUTO|NEVER|ALWAYS)
- Initial listing snapshots of Path1 and Path2 are now generated concurrently, using the same "march" infrastructure as check and sync, for performance improvements and less risk of error.
- Fixed handling of unicode and case insensitivity, support for --fix-case, --ignore-case-sync, --no-unicode-normalization
- --resync is now much more efficient (especially for users of --create-empty-src-dirs)
- Google Docs (and other files of unknown size) are now supported (with the same options as in sync)
- Equality checks before a sync conflict rename now fall back to cryptcheck (when possible) or --download, instead of --size-only, when check is not available.
- Bisync now supports a "Graceful Shutdown" mode to cleanly cancel a run early without requiring --resync.
- New --recover flag allows robust recovery in the event of interruptions, without requiring --resync.
- New --max-lock setting allows lock files to automatically renew and expire, for better automatic recovery when a run is interrupted.
- Bisync now supports auto-resolving sync conflicts and customizing rename behavior with new --conflict-resolve, --conflict-loser, and --conflict-suffix flags.
- New --resync-mode flag allows more control over which version of a file gets kept during a --resync.
- Bisync now supports --retries and --retries-sleep (when --resilient is set.)

v1.64
+Description of the remote
+Properties:
+rclone about is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
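Where a backend does support it, usage is simply (`remote:` is a placeholder):

    rclone about remote:

This prints totals such as used and free space, where the backend can report them.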
See List of backends that do not support rclone about and rclone about
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.
-Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.
-For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.
-If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!
-The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config walks you through it.
The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.
-Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id and client_secret with Amazon Drive, or use a third-party oauth proxy in which case you will need to enter client_id, client_secret, auth_url and token_url.
Note also that if you are not using Amazon's auth_url and token_url (ie you filled in something for those), then if setting up on a remote machine you can only use the copying-the-config method of configuration - rclone authorize will not work.
Here is an example of how to make a remote called remote. First run:
rclone config
-This will guide you through an interactive setup process:
-No remotes found, make a new one?
-n) New remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-n/r/c/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / Amazon Drive
- \ "amazon cloud drive"
-[snip]
-Storage> amazon cloud drive
-Amazon Application Client Id - required.
-client_id> your client ID goes here
-Amazon Application Client Secret - required.
-client_secret> your client secret goes here
-Auth server URL - leave blank to use Amazon's.
-auth_url> Optional auth URL
-Token server url - leave blank to use Amazon's.
-token_url> Optional token URL
-Remote config
-Make sure your Redirect URL is set to "http://127.0.0.1:53682/" in your custom config.
-Use web browser to automatically authenticate rclone with remote?
- * Say Y if the machine running rclone has a web browser you can use
- * Say N if running rclone on a (remote) machine without web browser access
-If not sure try Y. If Y failed, try N.
-y) Yes
-n) No
-y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
-Log in and authorize rclone for access
-Waiting for code...
-Got code
---------------------
-[remote]
-client_id = your client ID goes here
-client_secret = your client secret goes here
-auth_url = Optional auth URL
-token_url = Optional token URL
-token = {"access_token":"xxxxxxxxxxxxxxxxxxxxxxx","token_type":"bearer","refresh_token":"xxxxxxxxxxxxxxxxxx","expiry":"2015-09-06T16:07:39.658438471+01:00"}
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-See the remote setup docs for how to set it up on a machine with no Internet browser available.
-Note that rclone runs a webserver on your local machine to collect the token as returned from Amazon. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
List directories in top level of your Amazon Drive
-rclone lsd remote:
-List all the files in your Amazon Drive
-rclone ls remote:
-To copy a local directory to an Amazon Drive directory called backup
-rclone copy /home/source remote:backup
-Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
-It does support the MD5 hash algorithm, so for a more accurate sync, you can use the --checksum flag.
| Character | Value | Replacement |
|---|---|---|
| NUL | 0x00 | ␀ |
| / | 0x2F | ／ |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
-Using with non .com Amazon accounts
-Let's say you usually use amazon.co.uk. When you authenticate with rclone it will take you to an amazon.com page to log in. Your amazon.co.uk email and password should work here just fine.
Here are the Standard options specific to amazon cloud drive (Amazon Drive).
-OAuth Client Id.
-Leave blank normally.
-Properties:
-OAuth Client Secret.
-Leave blank normally.
-Properties:
-Here are the Advanced options specific to amazon cloud drive (Amazon Drive).
-OAuth Access Token as a JSON blob.
+Here are the Advanced options specific to alias (Alias for an existing remote).
+Description of the remote
Properties:
Auth server URL.
-Leave blank to use the provider defaults.
-Properties:
-Token server url.
-Leave blank to use the provider defaults.
-Properties:
-Checkpoint for internal polling (debug).
-Properties:
-Additional time per GiB to wait after a failed complete upload to see if it appears.
-Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1 GiB in size and nearly every time for files bigger than 10 GiB. This parameter controls the time rclone waits for the file to appear.
-The default value for this parameter is 3 minutes per GiB, so by default it will wait 3 minutes for every GiB uploaded to see if the file appears.
-You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.
-These values were determined empirically by observing lots of uploads of big files for a range of file sizes.
-Upload with the "-v" flag to see more info about what rclone is doing in this situation.
-Properties:
-Files >= this size will be downloaded via their tempLink.
-Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10 GiB. The default for this is 9 GiB which shouldn't need to be changed.
-To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.
-Properties:
-The encoding for the backend.
-See the encoding section in the overview for more info.
-Properties:
-Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
-Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
-At the time of writing (Jan 2016) this is in the area of 50 GiB per file. This means that larger files are likely to fail.
-Unfortunately there is no way for rclone to see that this failure is because of file size, so it will retry the operation, as any other failure. To avoid this problem, use --max-size 50000M option to limit the maximum size of uploaded files. Note that --max-size does not split files into segments, it only ignores files over this size.
rclone about is not supported by the Amazon Drive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
The S3 backend can be used with a number of different providers:
rclone ls remote:bucket
Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
rclone sync --interactive /home/local/directory remote:bucket
-Here is an example of making an s3 configuration for the AWS S3 provider. Most of this applies to the other providers as well; any differences are described below.
First run
rclone config
@@ -13059,7 +13594,7 @@ name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
[snip]
-XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Liara, Minio, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
@@ -13244,7 +13779,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
-The modified time is stored as metadata on the object as X-Amz-Meta-Mtime as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated, rclone will attempt to perform a server side copy to update the modification if the object can be copied in a single part. In the case the object is larger than 5Gb or is in Glacier or Glacier Deep Archive storage the object will be uploaded rather than copied.
@@ -13340,7 +13875,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
If there are real files present with the same names as versions, then behaviour of --s3-versions can be unpredictable.
If you run rclone cleanup s3:bucket then it will remove all pending multipart uploads older than 24 hours. You can use the --interactive/-i or --dry-run flag to see exactly what it will do. If you want more control over the expiry date then run rclone backend cleanup s3:bucket -o max-age=1h to expire all uploads older than one hour. You can use rclone backend list-multipart-uploads s3:bucket to see the pending multipart uploads.
S3 allows any valid UTF-8 string as a key.
Invalid UTF-8 bytes will be replaced, as they can't be used in XML.
The following characters are replaced since these are problematic when dealing with the REST API:
@@ -13434,6 +13969,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
GetObject
PutObject
PutObjectACL
CreateBucket (unless using s3-no-check-bucket)
When using the lsd subcommand, the ListAllMyBuckets permission is required.
Example policy:
@@ -13468,6 +14004,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
USER_NAME has been created.
"arn:aws:s3:::BUCKET_NAME" doesn't have to be included.
For reference, here's an Ansible script that will generate one or more buckets that will work with rclone sync.
If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set --s3-upload-cutoff 0 and force all the files to be uploaded as multipart.
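A hypothetical invocation (the local path and remote name are placeholders):

    rclone copy --s3-upload-cutoff 0 /path/to/src remote:bucket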
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Minio, Netease, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).
Choose your S3 provider.
@@ -14315,8 +14852,8 @@ Windows: "%USERPROFILE%\.aws\credentials"
-Concurrency for multipart uploads.
-This is the number of chunks of the same file that are uploaded concurrently.
+Concurrency for multipart uploads and copies.
+This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies.
If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
Properties:
If true use AWS S3 dual-stack endpoint (IPv6 support).
+See AWS Docs on Dualstack Endpoints
+Properties:
+If true use the AWS S3 accelerated endpoint.
See: AWS S3 Transfer acceleration
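+For example, to enable it for a single transfer (an illustrative sketch; the paths are placeholders):
+
+    rclone sync /path/to/src remote:bucket --s3-use-accelerate-endpoint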
@@ -14546,6 +15093,18 @@ Windows: "%USERPROFILE%\.aws\credentials"
+Show deleted file markers when using versions.
+This shows deleted file markers in the listing when using versions. These will appear as 0 size files. The only operation which can be performed on them is deletion.
+Deleting a delete marker will reveal the previous version.
+Deleted files will always show with a timestamp.
+Properties:
+If set this will decompress gzip encoded objects.
It is possible to upload objects to S3 with "Content-Encoding: gzip" set. Normally rclone will download these files as compressed objects.
@@ -14631,6 +15190,15 @@ Windows: "%USERPROFILE%\.aws\credentials"
+Description of the remote
+Properties:
+User metadata is stored as x-amz-meta- keys. S3 metadata keys are case insensitive and are always returned in lower case.
Here are the possible system metadata items for the s3 backend.
@@ -15067,10 +15635,10 @@ Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
 \ (s3)
[snip]
-Storage> 5
+Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
@@ -15185,18 +15753,11 @@ e/n/d/r/c/s/q> q
Choose a number from below, or type in your own value
- 1 / Alias for an existing remote
- \ "alias"
- 2 / Amazon Drive
- \ "amazon cloud drive"
- 3 / Amazon S3 Compliant Storage Providers (Dreamhost, Ceph, ChinaMobile, Liara, ArvanCloud, Minio, IBM COS)
- \ "s3"
- 4 / Backblaze B2
- \ "b2"
[snip]
- 23 / HTTP
- \ "http"
-Storage> 3
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
+ \ "s3"
+[snip]
+Storage> s3
s3 storage.
Choose a number from below, or type in your own value
- 1 / 1Fichier
- \ (fichier)
- 2 / Akamai NetStorage
- \ (netstorage)
- 3 / Alias for an existing remote
- \ (alias)
- 4 / Amazon Drive
- \ (amazon cloud drive)
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
@@ -15837,7 +16391,7 @@ name> remote
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
-XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
@@ -16064,7 +16618,7 @@ Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[snip]
- 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
@@ -16166,7 +16720,7 @@ Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
...
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
...
Storage> s3
@@ -16414,15 +16968,8 @@ n/s/q> n
s3 storage.
Choose a number from below, or type in your own value
- 1 / 1Fichier
- \ (fichier)
- 2 / Akamai NetStorage
- \ (netstorage)
- 3 / Alias for an existing remote
- \ (alias)
- 4 / Amazon Drive
- \ (amazon cloud drive)
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ (s3)
[snip]
Storage> s3
@@ -16609,7 +17156,7 @@ Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
- X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
+XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others
\ (s3)
[snip]
Storage> s3
@@ -16828,13 +17375,8 @@ n/s/q> n
s3 storage.
Choose a number from below, or type in your own value
-1 / 1Fichier
- \ "fichier"
- 2 / Alias for an existing remote
- \ "alias"
- 3 / Amazon Drive
- \ "amazon cloud drive"
- 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS
+[snip]
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
[snip]
Storage> s3
@@ -16928,7 +17470,7 @@ cos s3
For Netease NOS, configure as per the configurator (rclone config), setting the provider to Netease. This will automatically set force_path_style = false which is necessary for it to run properly.
Here is an example of making a Petabox configuration. First run:
rclone config
This will guide you through an interactive setup process.
No remotes found, make a new one?
n) New remote
@@ -17162,7 +17704,7 @@ Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\ "s3"
Storage> s3
@@ -17729,9 +18271,12 @@ Properties:
#### --b2-download-auth-duration
-Time before the authorization token will expire in s or suffix ms|s|m|h|d.
+Time before the public link authorization token will expire in s or suffix ms|s|m|h|d.
+
+This is used in combination with "rclone link" for making files
+accessible to the public and sets the duration before the download
+authorization token will expire.
-The duration before the download authorization token will expire.
The minimum value is 1 second. The maximum value is one week.
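For example (an illustrative sketch; the bucket and file path are placeholders):

    rclone link b2:bucket/path/to/file --b2-download-auth-duration 1w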
Properties:
@@ -17807,6 +18352,17 @@ Properties:
- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+#### --b2-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_B2_DESCRIPTION
+- Type: string
+- Required: false
+
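+As with other backend options, this can also be set after initial setup with
+`rclone config update` (the remote name `myb2` is a placeholder):
+
+    rclone config update myb2 description "offsite backups"
+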
## Backend commands
Here are the commands specific to the b2 backend.
@@ -18266,6 +18822,17 @@ Properties:
- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot
+#### --box-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_BOX_DESCRIPTION
+- Type: string
+- Required: false
+
## Limitations
@@ -18898,6 +19465,17 @@ Properties:
- Type: Duration
- Default: 1s
+#### --cache-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_CACHE_DESCRIPTION
+- Type: string
+- Required: false
+
## Backend commands
Here are the commands specific to the cache backend.
@@ -19338,6 +19916,17 @@ Properties:
- If meta format is set to "none", rename transactions will always be used.
- This method is EXPERIMENTAL, don't use on production systems.
+#### --chunker-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_CHUNKER_DESCRIPTION
+- Type: string
+- Required: false
+
# Citrix ShareFile
@@ -19583,6 +20172,17 @@ Properties:
- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot
+#### --sharefile-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SHAREFILE_DESCRIPTION
+- Type: string
+- Required: false
+
## Limitations
@@ -20053,6 +20653,22 @@ Properties:
- Type: bool
- Default: false
+#### --crypt-strict-names
+
+If set, this will raise an error when crypt comes across a filename that can't be decrypted.
+
+(By default, rclone will just log a NOTICE and continue as normal.)
+This can happen if encrypted and unencrypted files are stored in the same
+directory (which is not recommended.) It may also indicate a more serious
+problem that should be investigated.
+
+Properties:
+
+- Config: strict_names
+- Env Var: RCLONE_CRYPT_STRICT_NAMES
+- Type: bool
+- Default: false
+
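+For example (a hypothetical sketch; the remote name `secret:` is a placeholder):
+
+    rclone ls secret: --crypt-strict-names
+
+With the flag set, the listing fails with an error instead of logging a NOTICE
+when an undecryptable name is encountered.
+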
#### --crypt-filename-encoding
How to encode the encrypted filename to text string.
@@ -20090,6 +20706,17 @@ Properties:
- Type: string
- Default: ".bin"
+#### --crypt-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_CRYPT_DESCRIPTION
+- Type: string
+- Required: false
+
### Metadata
Any metadata supported by the underlying remote is read and written.
@@ -20258,7 +20885,7 @@ encoding is modified in two ways:
* we strip the padding character `=`
`base32` is used rather than the more efficient `base64` so rclone can be
-used on case insensitive remotes (e.g. Windows, Amazon Drive).
+used on case insensitive remotes (e.g. Windows, Box, Dropbox, Onedrive etc).
### Key derivation
@@ -20391,6 +21018,17 @@ Properties:
- Type: SizeSuffix
- Default: 20Mi
+#### --compress-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_COMPRESS_DESCRIPTION
+- Type: string
+- Required: false
+
### Metadata
Any metadata supported by the underlying remote is read and written.
@@ -20496,6 +21134,21 @@ Properties:
- Type: SpaceSepList
- Default:
+### Advanced options
+
+Here are the Advanced options specific to combine (Combine several remotes into one).
+
+#### --combine-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_COMBINE_DESCRIPTION
+- Type: string
+- Required: false
+
### Metadata
Any metadata supported by the underlying remote is read and written.
@@ -20929,6 +21582,17 @@ Properties:
- Type: Duration
- Default: 10m0s
+#### --dropbox-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_DROPBOX_DESCRIPTION
+- Type: string
+- Required: false
+
## Limitations
@@ -21189,6 +21853,17 @@ Properties:
- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
+#### --filefabric-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FILEFABRIC_DESCRIPTION
+- Type: string
+- Required: false
+
# FTP
@@ -21585,6 +22260,17 @@ Properties:
- "Ctl,LeftPeriod,Slash"
- VsFTPd can't handle file names starting with dot
+#### --ftp-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FTP_DESCRIPTION
+- Type: string
+- Required: false
+
## Limitations
@@ -22227,6 +22913,17 @@ Properties:
- Type: Encoding
- Default: Slash,CrLf,InvalidUtf8,Dot
+#### --gcs-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_GCS_DESCRIPTION
+- Type: string
+- Required: false
+
## Limitations
@@ -23507,10 +24204,23 @@ Properties:
- "true"
- Get GCP IAM credentials from the environment (env vars or IAM).
+#### --drive-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_DRIVE_DESCRIPTION
+- Type: string
+- Required: false
+
### Metadata
User metadata is stored in the properties field of the drive object.
+Metadata is supported on files and directories.
+
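+It is read and written when the global `--metadata`/`-M` flag is in use; for
+example (an illustrative sketch with placeholder paths):
+
+    rclone copy -M /path/to/src gdrive:backup
+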
Here are the possible system metadata items for the drive backend.
| Name | Help | Type | Example | Read Only |
@@ -24247,6 +24957,18 @@ This will guide you through an interactive setup process:
- Config: batch_commit_timeout
- Env Var: RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT
- Type: Duration
- Default: 10m0s
--ignore-checksum --ignore-size
-
-Alternatively, if you have write access to the OneDrive files, it may be possible
-to fix this problem for certain files, by attempting the steps below.
-Open the web interface for [OneDrive](https://onedrive.live.com) and find the
-affected files (which will be in the error messages/log for rclone). Simply click on
-each of these files, causing OneDrive to open them on the web. This will cause each
-file to be converted in place to a format that is functionally equivalent
-but which will no longer trigger the size discrepancy. Once all problematic files
-are converted you will no longer need the ignore options above.
-
-### Replacing/deleting existing files on Sharepoint gets "item not found" ####
-
-It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
-that Sharepoint (not OneDrive or OneDrive for Business) may return "item not
-found" errors when users try to replace or delete uploaded files; this seems to
-mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use
-the `--backup-dir <BACKUP_DIR>` command line argument so rclone moves the
-files to be replaced/deleted into a given backup directory (instead of directly
-replacing/deleting them). For example, to instruct rclone to move the files into
-the directory `rclone-backup-dir` on backend `mysharepoint`, you may use:
-
---backup-dir mysharepoint:rclone-backup-dir
-
-### access\_denied (AADSTS65005) ####
-
-Error: access_denied Code: AADSTS65005 Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
-
-This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.
-
-However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint
-
-### invalid\_grant (AADSTS50076) ####
-
-Error: invalid_grant Code: AADSTS50076 Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
-
-If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
-
-### Invalid request when making public links ####
-
-On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid
-request" error. A possible cause is that the organisation admin didn't allow
-public links to be made for the organisation/sharepoint library. To fix the
-permissions as an admin, take a look at the docs:
-[1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off),
-[2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3).
-
-### Can not access `Shared with me` files
-
-`Shared with me` files are not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround:
-
-1. Visit [https://onedrive.live.com](https://onedrive.live.com/)
-2. Right click an item in `Shared`, then click `Add shortcut to My files` in the context menu.
-3. The shortcut will appear in `My files`; you can access it with rclone, and it behaves like a normal folder/file.
-
-### Live Photos uploaded from iOS (small video clips in .heic files)
-
-The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452)
-of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020.
-The usage and download of these uploaded Live Photos is unfortunately still work-in-progress
-and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.
-
-The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface.
-Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface.
-The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
-
-The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos something like this:
-
- DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
- DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
- INFO : 20230203_123826234_iOS.heic: Copied (replaced existing)
-
-These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still picture, not the movie clip,
-and relies on modification dates being correctly updated on all files in all situations.
-
-The different sizes will also cause `rclone check` to report size errors something like this:
-
- ERROR : 20230203_123826234_iOS.heic: sizes differ
-
-These check errors can be suppressed by adding `--ignore-size`.
-
-The different sizes will also cause `rclone mount` to fail downloading with an error something like this:
-
- ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
-
-or like this when using `--cache-mode=full`:
-
- INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
- ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
-
-# OpenDrive
-
-Paths are specified as `remote:path`
-
-Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-
-## Configuration
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-
-List directories in top level of your OpenDrive
-
- rclone lsd remote:
-
-List all the files in your OpenDrive
-
- rclone ls remote:
-
-To copy a local directory to an OpenDrive directory called backup
-
- rclone copy /home/source remote:backup
-
-### Modification times and hashes
-
-OpenDrive allows modification times to be set on objects accurate to 1
-second. These will be used to detect whether objects need syncing or
-not.
-
-The MD5 hash algorithm is supported.
-
-### Restricted filename characters
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| NUL | 0x00 | ␀ |
-| / | 0x2F | ／ |
-| " | 0x22 | ＂ |
-| * | 0x2A | ＊ |
-| : | 0x3A | ： |
-| < | 0x3C | ＜ |
-| > | 0x3E | ＞ |
-| ? | 0x3F | ？ |
-| \ | 0x5C | ＼ |
-| \| | 0x7C | ｜ |
-
-File names can also not begin or end with the following characters.
-These only get replaced if they are the first or last character in the name:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| SP | 0x20 | ␠ |
-| HT | 0x09 | ␉ |
-| LF | 0x0A | ␊ |
-| VT | 0x0B | ␋ |
-| CR | 0x0D | ␍ |
-
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in JSON strings.
-
-
-### Standard options
-
-Here are the Standard options specific to opendrive (OpenDrive).
-
-#### --opendrive-username
-
-Username.
+Description of the remote
Properties:
-- Config: username
-- Env Var: RCLONE_OPENDRIVE_USERNAME
-- Type: string
-- Required: true
-
-#### --opendrive-password
-
-Password.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_OPENDRIVE_PASSWORD
-- Type: string
-- Required: true
-
-### Advanced options
-
-Here are the Advanced options specific to opendrive (OpenDrive).
-
-#### --opendrive-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_OPENDRIVE_ENCODING
-- Type: Encoding
-- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
-
-#### --opendrive-chunk-size
-
-Files will be uploaded in chunks this size.
-
-Note that these chunks are buffered in memory so increasing them will
-increase memory use.
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 10Mi
-
-
-
-## Limitations
-
-Note that OpenDrive is case insensitive so you can't have a
-file called "Hello.doc" and one called "hello.doc".
-
-There are quite a few characters that can't be in OpenDrive file
-names. These can't occur on Windows platforms, but on non-Windows
-platforms they are common. Rclone will map these names to and from an
-identical looking unicode equivalent. For example if a file has a `?`
-in it, it will be mapped to `？` instead.
-
-`rclone about` is not supported by the OpenDrive backend. Backends without
-this capability cannot determine free space for an rclone mount or
-use policy `mfs` (most free space) as a member of an rclone union
-remote.
-
-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
-
-# Oracle Object Storage
-- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
-- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/)
-- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf)
-
-Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in
-too, e.g. `remote:bucket/path/to/dir`.
-
-Sample command to transfer local artifacts to remote:bucket in oracle object storage:
-
-`rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv`
-
-## Configuration
-
-Here is an example of making an oracle object storage configuration. `rclone config` walks you
-through it.
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-
-Enter name for new remote.
-name> remote
-Option Storage.
-Type of storage to configure.
-Choose a number from below, or type in your own value.
-[snip]
-XX / Oracle Cloud Infrastructure Object Storage
-   \ (oracleobjectstorage)
-Storage> oracleobjectstorage
-Option provider.
-Choose your Auth Provider
-Choose a number from below, or type in your own string value.
-Press Enter for the default (env_auth).
- 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
-   \ (env_auth)
-   / use an OCI user and an API key for authentication.
- 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
-   | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
-   \ (user_principal_auth)
-   / use instance principals to authorize an instance to make API calls.
- 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
-   | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
-   \ (instance_principal_auth)
- 4 / use resource principals to make API calls
-   \ (resource_principal_auth)
- 5 / no credentials needed, this is typically for reading public buckets
-   \ (no_auth)
-provider> 2
-Option namespace.
-Object storage namespace
-Enter a value.
-namespace> idbamagbg734
-Option compartment.
-Object storage compartment OCID
-Enter a value.
-compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
-Option region.
-Object storage Region
-Enter a value.
-region> us-ashburn-1
-Option endpoint.
-Endpoint for Object storage API.
-Leave blank to use the default endpoint for the region.
-Enter a value. Press Enter to leave empty.
-endpoint>
-Option config_file.
-Full Path to OCI config file
-Choose a number from below, or type in your own string value.
-Press Enter for the default (~/.oci/config).
- 1 / oci configuration file location
-   \ (~/.oci/config)
-config_file> /etc/oci/dev.conf
-Option config_profile.
-Profile name inside OCI config file
-Choose a number from below, or type in your own string value.
-Press Enter for the default (Default).
- 1 / Use the default profile
-   \ (Default)
-config_profile> Test
-Edit advanced config?
-y) Yes
-n) No (default)
-y/n> n
-Configuration complete.
-Options:
-- type: oracleobjectstorage
-- namespace: idbamagbg734
-- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
-- region: us-ashburn-1
-- provider: user_principal_auth
-- config_file: /etc/oci/dev.conf
-- config_profile: Test
-Keep this "remote" remote?
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
-See all buckets
-
- rclone lsd remote:
-
-Create a new bucket
-
- rclone mkdir remote:bucket
-
-List the contents of a bucket
-
- rclone ls remote:bucket
- rclone ls remote:bucket --max-depth 1
-
-## Authentication Providers
-
-OCI has various authentication methods. To learn more about authentication methods please refer to [oci authentication
-methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm)
-These choices can be specified in the rclone config file.
-
-Rclone supports the following OCI authentication providers:
-
- User Principal
- Instance Principal
- Resource Principal
- No authentication
-
-### User Principal
-
-Sample rclone config file for Authentication Provider User Principal:
-
- [oos]
- type = oracleobjectstorage
- namespace = id<redacted>34
- compartment = ocid1.compartment.oc1..aa<redacted>ba
- region = us-ashburn-1
- provider = user_principal_auth
- config_file = /home/opc/.oci/config
- config_profile = Default
-
-Advantages:
-- One can use this method from any server within OCI or on-premises or from another cloud provider.
-
-Considerations:
-- You need to configure the user's privileges / policy to allow access to object storage.
-- Overhead of managing users and keys.
-- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
-
-### Instance Principal
-
-An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal.
-With this approach no credentials have to be stored and managed.
-
-Sample rclone configuration file for Authentication Provider Instance Principal:
-
- [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
- [oos]
- type = oracleobjectstorage
- namespace = id<redacted>fn
- compartment = ocid1.compartment.oc1..aa<redacted>k7a
- region = us-ashburn-1
- provider = instance_principal_auth
-
-Advantages:
-
-- With instance principals, you don't need to configure user credentials and transfer/save them to disk in your compute
-  instances or rotate the credentials.
-- You don’t need to deal with users and keys.
-- Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault,
- using kms etc.
-
-Considerations:
-
-- You need to configure a dynamic group having this instance as a member and add a policy to read object storage to that
-  dynamic group.
-- Everyone who has access to this machine can execute the CLI commands.
-- It is applicable for OCI compute instances only. It cannot be used on external instances or resources.
-
-### Resource Principal
-
-Resource principal auth is very similar to instance principal auth but used for resources that are not
-compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
-To use resource principal auth, ensure the rclone process is started with these environment variables set:
-
- export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
- export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
- export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
- export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
-
-Sample rclone configuration file for Authentication Provider Resource Principal:
-
- [oos]
- type = oracleobjectstorage
- namespace = id<redacted>34
- compartment = ocid1.compartment.oc1..aa<redacted>ba
- region = us-ashburn-1
- provider = resource_principal_auth
-
-### No authentication
-
-Public buckets do not require any authentication mechanism to read objects.
-Sample rclone configuration file for No authentication:
-
- [oos]
- type = oracleobjectstorage
- namespace = id<redacted>34
- compartment = ocid1.compartment.oc1..aa<redacted>ba
- region = us-ashburn-1
- provider = no_auth
-
-### Modification times and hashes
-
-The modification time is stored as metadata on the object as
-`opc-meta-mtime` as floating point since the epoch, accurate to 1 ns.
-
-If the modification time needs to be updated, rclone will attempt to perform a server
-side copy to update the modification if the object can be copied in a single part.
-In the case the object is larger than 5Gb, the object will be uploaded rather than copied.
-
-Note that reading this from the object takes an additional `HEAD` request as the metadata
-isn't returned in object listings.
-
-The MD5 hash algorithm is supported.
-
-### Multipart uploads
-
-rclone supports multipart uploads with OOS which means that it can
-upload files bigger than 5 GiB.
-
-Note that files uploaded *both* with multipart upload *and* through
-crypt remotes do not have MD5 sums.
-
-rclone switches from single part uploads to multipart uploads at the
-point specified by `--oos-upload-cutoff`. This can be a maximum of 5 GiB
-and a minimum of 0 (ie always upload multipart files).
-
-The chunk sizes used in the multipart upload are specified by
-`--oos-chunk-size` and the number of chunks uploaded concurrently is
-specified by `--oos-upload-concurrency`.
-
-Multipart uploads will use `--transfers` * `--oos-upload-concurrency` *
-`--oos-chunk-size` extra memory. Single part uploads do not use extra
-memory.
-
-Single part transfers can be faster than multipart transfers or slower
-depending on your latency from oos - the more latency, the more likely
-single part transfers will be faster.
-
-Increasing `--oos-upload-concurrency` will increase throughput (8 would
-be a sensible value) and increasing `--oos-chunk-size` also increases
-throughput (16M would be sensible). Increasing either of these will
-use more memory. The default values are high enough to gain most of
-the possible performance without using too much memory.
-
-
-### Standard options
-
-Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
-
-#### --oos-provider
-
-Choose your Auth Provider
-
-Properties:
-
-- Config: provider
-- Env Var: RCLONE_OOS_PROVIDER
-- Type: string
-- Default: "env_auth"
-- Examples:
- - "env_auth"
- - automatically pickup the credentials from runtime(env), first one to provide auth wins
- - "user_principal_auth"
- - use an OCI user and an API key for authentication.
- - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
- - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
- - "instance_principal_auth"
- - use instance principals to authorize an instance to make API calls.
- - each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
- - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
- - "resource_principal_auth"
- - use resource principals to make API calls
- - "no_auth"
- - no credentials needed, this is typically for reading public buckets
-
-#### --oos-namespace
-
-Object storage namespace
-
-Properties:
-
-- Config: namespace
-- Env Var: RCLONE_OOS_NAMESPACE
-- Type: string
-- Required: true
-
-#### --oos-compartment
-
-Object storage compartment OCID
-
-Properties:
-
-- Config: compartment
-- Env Var: RCLONE_OOS_COMPARTMENT
-- Provider: !no_auth
-- Type: string
-- Required: true
-
-#### --oos-region
-
-Object storage Region
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_OOS_REGION
-- Type: string
-- Required: true
-
-#### --oos-endpoint
-
-Endpoint for Object storage API.
-
-Leave blank to use the default endpoint for the region.
-
-Properties:
-
-- Config: endpoint
-- Env Var: RCLONE_OOS_ENDPOINT
+- Config: description
+- Env Var: RCLONE_ONEDRIVE_DESCRIPTION
- Type: string
- Required: false
-#### --oos-config-file
-
-Path to OCI config file
-
-Properties:
-
-- Config: config_file
-- Env Var: RCLONE_OOS_CONFIG_FILE
-- Provider: user_principal_auth
-- Type: string
-- Default: "~/.oci/config"
-- Examples:
- - "~/.oci/config"
- - oci configuration file location
-
-#### --oos-config-profile
-
-Profile name inside the oci config file
-
-Properties:
-
-- Config: config_profile
-- Env Var: RCLONE_OOS_CONFIG_PROFILE
-- Provider: user_principal_auth
-- Type: string
-- Default: "Default"
-- Examples:
- - "Default"
- - Use the default profile
-
-### Advanced options
-
-Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
-
-#### --oos-storage-tier
-
-The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
-
-Properties:
-
-- Config: storage_tier
-- Env Var: RCLONE_OOS_STORAGE_TIER
-- Type: string
-- Default: "Standard"
-- Examples:
- - "Standard"
- - Standard storage tier, this is the default tier
- - "InfrequentAccess"
- - InfrequentAccess storage tier
- - "Archive"
- - Archive storage tier
-
-#### --oos-upload-cutoff
-
-Cutoff for switching to chunked upload.
-
-Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5 GiB.
-
-Properties:
-
-- Config: upload_cutoff
-- Env Var: RCLONE_OOS_UPLOAD_CUTOFF
-- Type: SizeSuffix
-- Default: 200Mi
-
-#### --oos-chunk-size
-
-Chunk size to use for uploading.
-
-When uploading files larger than upload_cutoff or files with unknown
-size (e.g. from "rclone rcat" or uploaded with "rclone mount"), they will be uploaded
-as multipart uploads using this chunk size.
-
-Note that "upload_concurrency" chunks of this size are buffered
-in memory per transfer.
-
-If you are transferring large files over high-speed links and you have
-enough memory, then increasing this will speed up the transfers.
-
-Rclone will automatically increase the chunk size when uploading a
-large file of known size to stay below the 10,000 chunks limit.
-
-Files of unknown size are uploaded with the configured
-chunk_size. Since the default chunk size is 5 MiB and there can be at
-most 10,000 chunks, this means that by default the maximum size of
-a file you can stream upload is 48 GiB. If you wish to stream upload
-larger files then you will need to increase chunk_size.
-
-Increasing the chunk size decreases the accuracy of the progress
-statistics displayed with "-P" flag.
-
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_OOS_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 5Mi
-
-#### --oos-max-upload-parts
-
-Maximum number of parts in a multipart upload.
-
-This option defines the maximum number of multipart chunks to use
-when doing a multipart upload.
-
-OCI has a max parts limit of 10,000 chunks.
-
-Rclone will automatically increase the chunk size when uploading a
-large file of a known size to stay below this number of chunks limit.
-
-
-Properties:
-
-- Config: max_upload_parts
-- Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS
-- Type: int
-- Default: 10000
-
-#### --oos-upload-concurrency
-
-Concurrency for multipart uploads.
-
-This is the number of chunks of the same file that are uploaded
-concurrently.
-
-If you are uploading small numbers of large files over high-speed links
-and these uploads do not fully utilize your bandwidth, then increasing
-this may help to speed up the transfers.
-
-Properties:
-
-- Config: upload_concurrency
-- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY
-- Type: int
-- Default: 10
-
-#### --oos-copy-cutoff
-
-Cutoff for switching to multipart copy.
-
-Any files larger than this that need to be server-side copied will be
-copied in chunks of this size.
-
-The minimum is 0 and the maximum is 5 GiB.
-
-Properties:
-
-- Config: copy_cutoff
-- Env Var: RCLONE_OOS_COPY_CUTOFF
-- Type: SizeSuffix
-- Default: 4.656Gi
-
-#### --oos-copy-timeout
-
-Timeout for copy.
-
-Copy is an asynchronous operation; specify a timeout to wait for the copy to succeed
-
-
-Properties:
-
-- Config: copy_timeout
-- Env Var: RCLONE_OOS_COPY_TIMEOUT
-- Type: Duration
-- Default: 1m0s
-
-#### --oos-disable-checksum
-
-Don't store MD5 checksum with object metadata.
-
-Normally rclone will calculate the MD5 checksum of the input before
-uploading it so it can add it to metadata on the object. This is great
-for data integrity checking but can cause long delays for large files
-to start uploading.
-
-Properties:
-
-- Config: disable_checksum
-- Env Var: RCLONE_OOS_DISABLE_CHECKSUM
-- Type: bool
-- Default: false
-
-#### --oos-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_OOS_ENCODING
-- Type: Encoding
-- Default: Slash,InvalidUtf8,Dot
-
-#### --oos-leave-parts-on-error
-
-If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery.
-
-It should be set to true for resuming uploads across different sessions.
-
-WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add
-additional costs if not cleaned up.
-
-
-Properties:
-
-- Config: leave_parts_on_error
-- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR
-- Type: bool
-- Default: false
-
-#### --oos-attempt-resume-upload
-
-If true attempt to resume previously started multipart upload for the object.
-This will be helpful to speed up multipart transfers by resuming uploads from a past session.
-
-WARNING: If chunk size differs in resumed session from past incomplete session, then the resumed multipart upload is
-aborted and a new multipart upload is started with the new chunk size.
-
-The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully.
-
-
-Properties:
-
-- Config: attempt_resume_upload
-- Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD
-- Type: bool
-- Default: false
-
-#### --oos-no-check-bucket
-
-If set, don't attempt to check the bucket exists or create it.
-
-This can be useful when trying to minimise the number of transactions
-rclone does if you know the bucket exists already.
-
-It can also be needed if the user you are using does not have bucket
-creation permissions.
-
-
-Properties:
-
-- Config: no_check_bucket
-- Env Var: RCLONE_OOS_NO_CHECK_BUCKET
-- Type: bool
-- Default: false
-
-#### --oos-sse-customer-key-file
-
-To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated
-with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
-
-Properties:
-
-- Config: sse_customer_key_file
-- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
-- Type: string
-- Required: false
-- Examples:
- - ""
- - None
-
-#### --oos-sse-customer-key
-
-To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to
-encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is
-needed. For more information, see Using Your Own Keys for Server-Side Encryption
-(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
-
-Properties:
-
-- Config: sse_customer_key
-- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
-- Type: string
-- Required: false
-- Examples:
- - ""
- - None
-
-#### --oos-sse-customer-key-sha256
-
-If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption
-key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for
-Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
-
-Properties:
-
-- Config: sse_customer_key_sha256
-- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
-- Type: string
-- Required: false
-- Examples:
- - ""
- - None
-
-#### --oos-sse-kms-key-id
-
-If using your own master key in vault, this header specifies the
-OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call
-the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key.
-Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
-
-Properties:
-
-- Config: sse_kms_key_id
-- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
-- Type: string
-- Required: false
-- Examples:
- - ""
- - None
-
-#### --oos-sse-customer-algorithm
-
-If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm.
-Object Storage supports "AES256" as the encryption algorithm. For more information, see
-Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
-
-Properties:
-
-- Config: sse_customer_algorithm
-- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
-- Type: string
-- Required: false
-- Examples:
- - ""
- - None
- - "AES256"
- - AES256
-
-## Backend commands
-
-Here are the commands specific to the oracleobjectstorage backend.
-
-Run them with
-
- rclone backend COMMAND remote:
-
-The help below will explain what arguments each command takes.
-
-See the [backend](https://rclone.org/commands/rclone_backend/) command for more
-info on how to pass options and arguments.
-
-These can be run on a running backend using the rc command
-[backend/command](https://rclone.org/rc/#backend-command).
-
-### rename
-
-change the name of an object
-
- rclone backend rename remote: [options] [<arguments>+]
-
-This command can be used to rename a object.
-
-Usage Examples:
-
- rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
-
-
-### list-multipart-uploads
-
-List the unfinished multipart uploads
-
- rclone backend list-multipart-uploads remote: [options] [<arguments>+]
-
-This command lists the unfinished multipart uploads in JSON format.
-
- rclone backend list-multipart-uploads oos:bucket/path/to/object
-
-It returns a dictionary of buckets with values as lists of unfinished
-multipart uploads.
-
-You can call it with no bucket in which case it lists all bucket, with
-a bucket or with a bucket and path.
-
+### Metadata
+
+OneDrive supports System Metadata (not User Metadata, as of this writing) for
+both files and directories. Much of the metadata is read-only, and there are some
+differences between OneDrive Personal and Business (see table below for
+details).
+
+Permissions are also supported, if `--onedrive-metadata-permissions` is set. The
+accepted values for `--onedrive-metadata-permissions` are `read`, `write`,
+`read,write`, and `off` (the default). `write` supports adding new permissions,
+updating the "role" of existing permissions, and removing permissions. Updating
+and removing require the Permission ID to be known, so it is recommended to use
+`read,write` instead of `write` if you wish to update/remove permissions.
+
+Permissions are read/written in JSON format using the same schema as the
+[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online),
+which differs slightly between OneDrive Personal and Business.
+
+Example for OneDrive Personal:
+```json
+[
{
- "test-bucket": [
- {
- "namespace": "test-namespace",
- "bucket": "test-bucket",
- "object": "600m.bin",
- "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
- "timeCreated": "2022-07-29T06:21:16.595Z",
- "storageTier": "Standard"
- }
- ]
+ "id": "1234567890ABC!123",
+ "grantedTo": {
+ "user": {
+ "id": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ },
+ "invitation": {
+ "email": "ryan@contoso.com"
+ },
+ "link": {
+ "webUrl": "https://1drv.ms/t/s!1234567890ABC"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "s!1234567890ABC"
+ }
+]
+```
+
+Example for OneDrive Business:
+
+```json
+[
+ {
+ "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+ "grantedToIdentities": [
+ {
+ "user": {
+ "displayName": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "link": {
+ "type": "view",
+ "scope": "users",
+ "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "u!LKj1lkdlals90j1nlkascl"
+ },
+ {
+ "id": "5D33DD65C6932946",
+ "grantedTo": {
+ "user": {
+ "displayName": "John Doe",
+ "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
+ },
+ "application": {},
+ "device": {}
+ },
+ "roles": [
+ "owner"
+ ],
+ "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
+ }
+]
+```
+
+To write permissions, pass in a "permissions" metadata key using this same format. The --metadata-mapper tool can be very helpful for this.
When adding permissions, an email address can be provided in the User.ID or DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an ObjectID can be provided in User.ID. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if Link.Scope is set to "anonymous".
Example request to add a "read" permission:
+```json
+[
+ {
+ "id": "",
+ "grantedTo": {
+ "user": {},
+ "application": {},
+ "device": {}
+ },
+ "grantedToIdentities": [
+ {
+ "user": {
+ "id": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "roles": [
+ "read"
+ ]
+ }
+]
+```
+
+Note that adding a permission can fail if a conflicting permission already exists for the file/folder.
+To update an existing permission, include both the Permission ID and the new roles to be assigned. roles is the only property that can be changed.
To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.)
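+For illustration, here is a minimal sketch of an update blob. The ID below is the placeholder Permission ID from the Business example above and the new role is hypothetical; because every other permission is omitted from the blob, all others would be removed:
+```json
+[
+  {
+    "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+    "roles": [
+      "write"
+    ]
+  }
+]
+```
+Passing an empty blob, [], would remove all permissions.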
+Note that both reading and writing permissions require extra API calls, so if you don't need to read or write permissions it is recommended to omit --onedrive-metadata-permissions.
Metadata and permissions are supported for Folders (directories) as well as Files. Note that setting the mtime or btime on a Folder requires one extra API call on OneDrive Business only.
OneDrive does not currently support User Metadata. When writing metadata, only writeable system properties will be written -- any read-only or unrecognized keys passed in will be ignored.
+TIP: to see the metadata and permissions for any file or folder, run:
+rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read
+Here are the possible system metadata items for the onedrive backend.
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| btime | Time of file birth (creation) with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | N |
+| content-type | The MIME type of the file. | string | text/plain | Y |
+| created-by-display-name | Display name of the user that created the item. | string | John Doe | Y |
+| created-by-id | ID of the user that created the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | Y |
+| description | A short description of the file. Max 1024 characters. Only supported for OneDrive Personal. | string | Contract for signing | N |
+| id | The unique identifier of the item within OneDrive. | string | 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K | Y |
+| last-modified-by-display-name | Display name of the user that last modified the item. | string | John Doe | Y |
+| last-modified-by-id | ID of the user that last modified the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | Y |
+| malware-detected | Whether OneDrive has detected that the item contains malware. | boolean | true | Y |
+| mtime | Time of last modification with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | N |
+| package-type | If present, indicates that this item is a package instead of a folder or file. Packages are treated like files in some contexts and folders in others. | string | oneNote | Y |
+| permissions | Permissions in a JSON dump of OneDrive format. Enable with --onedrive-metadata-permissions. Properties: id, grantedTo, grantedToIdentities, invitation, inheritedFrom, link, roles, shareId | JSON | {} | N |
+| shared-by-id | ID of the user that shared the item (if shared). | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | Y |
+| shared-owner-id | ID of the owner of the shared item (if shared). | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | Y |
+| shared-scope | If shared, indicates the scope of how the item is shared: anonymous, organization, or users. | string | users | Y |
+| shared-time | Time when the item was shared, with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | Y |
+| utime | Time of upload with S accuracy (mS for OneDrive Personal). | RFC 3339 | 2006-01-02T15:04:05Z | Y |
See the metadata docs for more info.
+If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get a new token and refresh token.
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.
The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business (Updated 13 Jan 2021).
+The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones.
+OneDrive seems to be OK with at least 50,000 files in a folder, but at 100,000 rclone will get errors listing the directory like couldn’t list files: UnknownError:. See #2707 for more info.
An official document about the limitations for different types of OneDrive can be found here.
+Every change in a file on OneDrive causes the service to create a new version of the file. This counts against a user's quota. For example, changing the modification time of a file creates a second version, so the file apparently uses twice the space.
+The copy command, for instance, is affected by this: rclone copies the file and then afterwards sets the modification time to match the source file, which creates another version.
You can use the rclone cleanup command (see below) to remove all old versions.
Or you can set the no_versions parameter to true and rclone will remove versions after operations which create new versions. This takes extra transactions so only enable it if you need it.
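+For example, a sketch of enabling this, either as a flag or in the config file (the remote name here is illustrative):
+```
+rclone copy /path/to/src remote:dst --onedrive-no-versions
+
+# or, equivalently, in rclone.conf:
+# [remote]
+# type = onedrive
+# no_versions = true
+```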
Note: At the time of writing OneDrive Personal creates versions (but not for setting the modification time) but the API for removing them returns "API not found", so cleanup and no_versions should not be used on OneDrive Personal.
Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:
+
+1. Install-Module -Name Microsoft.Online.SharePoint.PowerShell (in case you haven't installed this already)
+2. Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
+3. Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM (replacing YOURSITE, YOU, YOURSITE.COM with the actual values; this will prompt for your credentials)
+4. Set-SPOTenant -EnableMinimumVersionRequirement $False
+5. Disconnect-SPOService (to disconnect from the server)
+
+Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.
+User Weropol has found a method to disable versioning on OneDrive
+OneDrive supports rclone cleanup which causes rclone to look through every file under the path supplied and delete all versions but the current version. Because this involves traversing all the files, and then querying each file for versions, it can be quite slow. Rclone does --checkers tests in parallel. The command also supports --interactive/-i or --dry-run which is a great way to see what it would do.
+
+rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
+rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir
+
+NB OneDrive Personal can't currently delete versions
+If you experience excessive throttling or are being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: --user-agent "ISV|rclone.org|rclone/v1.55.1"
The specific details can be found in the Microsoft document: Avoid getting throttled or blocked in SharePoint Online
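+As a sketch, a full invocation might look like this (paths and the remote name are illustrative):
+```
+rclone sync /path/to/local sharepoint:backup \
+  --user-agent "ISV|rclone.org|rclone/v1.55.1"
+```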
+It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:
+--ignore-checksum --ignore-size
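+For instance, a complete sync invocation with both checks disabled might look like this (paths and the remote name are illustrative):
+```
+rclone sync /path/to/local sharepoint:library --ignore-checksum --ignore-size
+```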
+Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for OneDrive and find the affected files (which will be in the error messages/log for rclone). Simply click on each of these files, causing OneDrive to open them on the web. This will cause each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above.
+It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR> command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir on backend mysharepoint, you may use:
--backup-dir mysharepoint:rclone-backup-dir
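+A fuller sketch, assuming a remote named mysharepoint and illustrative paths:
+```
+# files that would be replaced or deleted are moved into rclone-backup-dir
+# instead of being overwritten or removed in place
+rclone sync /path/to/local mysharepoint:docs --backup-dir mysharepoint:rclone-backup-dir
+```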
+Error: access_denied
+Code: AADSTS65005
+Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
+This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.
+However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint
+Error: invalid_grant
+Code: AADSTS50076
+Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
+If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?. For this question, answer y and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
On Sharepoint and OneDrive for Business, rclone link may return an "Invalid request" error. A possible cause is that the organisation admin didn't allow public links to be made for the organisation/sharepoint library. To fix the permissions as an admin, take a look at the docs: 1, 2.
+Shared with me files
+
+Shared with me files is not supported by rclone currently, but there is a workaround:
+
+1. Visit https://onedrive.live.com
+2. Right-click the item in Shared, then click Add shortcut to My files in the context menu
+3. The shortcut will appear in My files; you can access it with rclone, it behaves like a normal folder/file

The iOS OneDrive app introduced upload and storage of Live Photos in 2020. The usage and download of these uploaded Live Photos is unfortunately still work-in-progress and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows.
+The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.
+The different sizes will cause rclone copy/sync to repeatedly recopy unmodified photos something like this:
DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
+DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
+INFO : 20230203_123826234_iOS.heic: Copied (replaced existing)
+These recopies can be worked around by adding --ignore-size. Please note that this workaround only syncs the still-picture not the movie clip, and relies on modification dates being correctly updated on all files in all situations.
The different sizes will also cause rclone check to report size errors something like this:
ERROR : 20230203_123826234_iOS.heic: sizes differ
+These check errors can be suppressed by adding --ignore-size.
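+For example, a sketch of a size-insensitive check (paths and the remote name are illustrative):
+```
+rclone check /path/to/local remote:Pictures --ignore-size
+```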
The different sizes will also cause rclone mount to fail downloading with an error something like this:
ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
+or like this when using --cache-mode=full:
INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
+Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+n) New remote
+d) Delete remote
+q) Quit config
+e/n/d/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / OpenDrive
+ \ "opendrive"
+[snip]
+Storage> opendrive
+Username
+username>
+Password
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+--------------------
+[remote]
+username =
+password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+List directories in top level of your OpenDrive
+rclone lsd remote:
+List all the files in your OpenDrive
+rclone ls remote:
+To copy a local directory to an OpenDrive directory called backup
+rclone copy /home/source remote:backup
+OpenDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
+The MD5 hash algorithm is supported.
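+Since both modification times and MD5 hashes are supported, a copy can be verified end to end; a sketch with illustrative paths:
+```
+rclone copy /home/source remote:backup
+rclone check /home/source remote:backup   # compares sizes and MD5 hashes
+```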
+| Character | Value | Replacement |
+|-----------|-------|-------------|
+| NUL       | 0x00  | ␀           |
+| /         | 0x2F  | ／          |
+| "         | 0x22  | ＂          |
+| *         | 0x2A  | ＊          |
+| :         | 0x3A  | ：          |
+| <         | 0x3C  | ＜          |
+| >         | 0x3E  | ＞          |
+| ?         | 0x3F  | ？          |
+| \         | 0x5C  | ＼          |
+| \|        | 0x7C  | ｜          |
File names can also not begin or end with the following characters. These only get replaced if they are the first or last character in the name:
+| Character | Value | Replacement |
+|-----------|-------|-------------|
+| SP        | 0x20  | ␠           |
+| HT        | 0x09  | ␉           |
+| LF        | 0x0A  | ␊           |
+| VT        | 0x0B  | ␋           |
+| CR        | 0x0D  | ␍           |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Here are the Standard options specific to opendrive (OpenDrive).
+Username.
+Properties:
+Password.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+Here are the Advanced options specific to opendrive (OpenDrive).
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Files will be uploaded in chunks this size.
+Note that these chunks are buffered in memory so increasing them will increase memory use.
+Properties:
+Description of the remote
+Properties:
+Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.
rclone about is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
+Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
Sample command to transfer local artifacts to remote:bucket in oracle object storage:
+rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv
Here is an example of making an oracle object storage configuration. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+Enter name for new remote.
+name> remote
-### cleanup
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Oracle Cloud Infrastructure Object Storage
+ \ (oracleobjectstorage)
+Storage> oracleobjectstorage
-Remove unfinished multipart uploads.
+Option provider.
+Choose your Auth Provider
+Choose a number from below, or type in your own string value.
+Press Enter for the default (env_auth).
+ 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
+ \ (env_auth)
+ / use an OCI user and an API key for authentication.
+ 2 | you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+ | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ \ (user_principal_auth)
+ / use instance principals to authorize an instance to make API calls.
+ 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+ | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ \ (instance_principal_auth)
+ / use workload identity to grant Kubernetes pods policy-driven access to Oracle Cloud
+ 4 | Infrastructure (OCI) resources using OCI Identity and Access Management (IAM).
+ | https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm
+ \ (workload_identity_auth)
+ 5 / use resource principals to make API calls
+ \ (resource_principal_auth)
+ 6 / no credentials needed, this is typically for reading public buckets
+ \ (no_auth)
+provider> 2
- rclone backend cleanup remote: [options] [<arguments>+]
+Option namespace.
+Object storage namespace
+Enter a value.
+namespace> idbamagbg734
-This command removes unfinished multipart uploads of age greater than
-max-age which defaults to 24 hours.
+Option compartment.
+Object storage compartment OCID
+Enter a value.
+compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
-Note that you can use --interactive/-i or --dry-run with this command to see what
-it would do.
+Option region.
+Object storage Region
+Enter a value.
+region> us-ashburn-1
- rclone backend cleanup oos:bucket/path/to/object
- rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+Option endpoint.
+Endpoint for Object storage API.
+Leave blank to use the default endpoint for the region.
+Enter a value. Press Enter to leave empty.
+endpoint>
-Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+Option config_file.
+Full Path to OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (~/.oci/config).
+ 1 / oci configuration file location
+ \ (~/.oci/config)
+config_file> /etc/oci/dev.conf
+Option config_profile.
+Profile name inside OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (Default).
+ 1 / Use the default profile
+ \ (Default)
+config_profile> Test
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
Options:
+- type: oracleobjectstorage
+- namespace: idbamagbg734
+- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+- region: us-ashburn-1
+- provider: user_principal_auth
+- config_file: /etc/oci/dev.conf
+- config_profile: Test
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+See all buckets
+rclone lsd remote:
+Create a new bucket
+rclone mkdir remote:bucket
+List the contents of a bucket
+rclone ls remote:bucket
+rclone ls remote:bucket --max-depth 1
+OCI has various authentication methods. To learn more about them, please refer to the OCI documentation on authentication methods. These choices can be specified in the rclone config file.
+Rclone supports the following OCI authentication providers:
+User Principal
+Instance Principal
+Resource Principal
+Workload Identity
+No authentication
+Sample rclone config file for Authentication Provider User Principal:
+[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = user_principal_auth
+config_file = /home/opc/.oci/config
+config_profile = Default
+Advantages:
+- One can use this method from any server within OCI, on-premises, or from another cloud provider.
+
+Considerations:
+- You need to configure the user's privileges / policy to allow access to object storage.
+- Overhead of managing users and keys.
+- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
+An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal. With this approach no credentials have to be stored and managed.
+Sample rclone configuration file for Authentication Provider Instance Principal:
+[opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
+[oos]
+type = oracleobjectstorage
+namespace = id<redacted>fn
+compartment = ocid1.compartment.oc1..aa<redacted>k7a
+region = us-ashburn-1
+provider = instance_principal_auth
+Advantages:
+Considerations:
+Resource principal auth is very similar to instance principal auth, but is used for resources that are not compute instances, such as serverless functions. To use resource principal, ensure the Rclone process is started with these environment variables set:
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+Sample rclone configuration file for Authentication Provider Resource Principal:
+[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = resource_principal_auth
+Workload Identity auth may be used when running Rclone from Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster. For more details on configuring Workload Identity, see Granting Workloads Access to OCI Resources. To use workload identity, ensure Rclone is started with these environment variables set in its process.
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+Public buckets do not require any authentication mechanism to read objects. Sample rclone configuration file for No authentication:
+[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = no_auth
+The modification time is stored as metadata on the object as opc-meta-mtime as floating point since the epoch, accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time, provided the object can be copied in a single part. If the object is larger than 5 GiB, the object will be uploaded rather than copied.
+Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.
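+As a sketch, the stored modification time (along with the rest of the metadata) for a single object can be inspected like this (bucket and object names are illustrative):
+```
+rclone lsjson --stat -M oos:bucket/path/to/object
+```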
The MD5 hash algorithm is supported.
+rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.
+Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
+rclone switches from single part uploads to multipart uploads at the point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).
The chunk sizes used in the multipart upload are specified by --oos-chunk-size and the number of chunks uploaded concurrently is specified by --oos-upload-concurrency.
Multipart uploads will use --transfers * --oos-upload-concurrency * --oos-chunk-size extra memory. Single part uploads do not use extra memory.
Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.
+Increasing --oos-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
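+Putting these together, a sketch of a tuned upload over a high-bandwidth, high-latency link (the values are illustrative, not recommendations):
+```
+rclone copy /data oos:bucket/data \
+  --oos-chunk-size 16Mi \
+  --oos-upload-concurrency 8 \
+  --transfers 4
+```
+With these values each transfer can buffer up to 8 * 16Mi = 128Mi, so four concurrent transfers can use around 512Mi of extra memory.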
Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+Choose your Auth Provider
+Properties:
+Object storage namespace
+Properties:
+Object storage compartment OCID
+Properties:
+Object storage Region
+Properties:
+Endpoint for Object storage API.
+Leave blank to use the default endpoint for the region.
+Properties:
+Path to OCI config file
+Properties:
+Profile name inside the oci config file
+Properties:
+Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
+The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
+Properties:
+Cutoff for switching to chunked upload.
+Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
+Properties:
+Chunk size to use for uploading.
+When uploading files larger than upload_cutoff or files with unknown size (e.g. from "rclone rcat" or uploaded with "rclone mount"), they will be uploaded as multipart uploads using this chunk size.
+Note that "upload_concurrency" chunks of this size are buffered in memory per transfer.
+If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
+Rclone will automatically increase the chunk size when uploading a large file of known size to stay below the 10,000 chunks limit.
+Files of unknown size are uploaded with the configured chunk_size. Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size.
+Increasing the chunk size decreases the accuracy of the progress statistics displayed with "-P" flag.
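+For example, a sketch of streaming a file of unknown size with a larger chunk size (names are illustrative): with 50Mi chunks, the 10,000 chunk limit allows a stream of roughly 488 GiB instead of the default 48 GiB.
+```
+tar czf - /data | rclone rcat --oos-chunk-size 50Mi oos:bucket/backup.tgz
+```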
+Properties:
+Maximum number of parts in a multipart upload.
+This option defines the maximum number of multipart chunks to use when doing a multipart upload.
+OCI has max parts limit of 10,000 chunks.
+Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit.
+Properties:
+Concurrency for multipart uploads.
+This is the number of chunks of the same file that are uploaded concurrently.
+If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+Properties:
+Cutoff for switching to multipart copy.
+Any files larger than this that need to be server-side copied will be copied in chunks of this size.
+The minimum is 0 and the maximum is 5 GiB.
+Properties:
+Timeout for copy.
+Copy is an asynchronous operation; specify a timeout to wait for the copy to succeed.
+Properties:
+Don't store MD5 checksum with object metadata.
+Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery.
+It should be set to true for resuming uploads across different sessions.
+WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add additional costs if not cleaned up.
+Properties:
+If true attempt to resume previously started multipart upload for the object. This will be helpful to speed up multipart transfers by resuming uploads from past session.
+WARNING: If chunk size differs in resumed session from past incomplete session, then the resumed multipart upload is aborted and a new multipart upload is started with the new chunk size.
+The flag leave_parts_on_error must be true to resume and optimize to skip parts that were already uploaded successfully.
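+A sketch of a transfer using both flags together (paths are illustrative):
+```
+rclone copy /data oos:bucket/data \
+  --oos-attempt-resume-upload \
+  --oos-leave-parts-on-error
+```
+If the transfer is interrupted, re-running the same command can then skip the parts that were already uploaded successfully.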
+Properties:
+If set, don't attempt to check the bucket exists or create it.
+This can be useful when trying to minimise the number of transactions rclone does if you know the bucket exists already.
+It can also be needed if the user you are using does not have bucket creation permissions.
+Properties:
+To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+Properties:
+To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
+Properties:
+If using SSE-C, The optional header that specifies the base64-encoded SHA256 hash of the encryption key. This value is used to check the integrity of the encryption key. see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+Properties:
+if using your own master key in vault, this header specifies the OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+Properties:
+If using SSE-C, the optional header that specifies "AES256" as the encryption algorithm. Object Storage supports "AES256" as the encryption algorithm. For more information, see Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+Properties:
+Description of the remote
+Properties:
+Here are the commands specific to the oracleobjectstorage backend.
+Run them with
+rclone backend COMMAND remote:
+The help below will explain what arguments each command takes.
+See the backend command for more info on how to pass options and arguments.
+These can be run on a running backend using the rc command backend/command.
+change the name of an object
+rclone backend rename remote: [options] [<arguments>+]
+This command can be used to rename an object.
+Usage Examples:
+rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+List the unfinished multipart uploads
+rclone backend list-multipart-uploads remote: [options] [<arguments>+]
+This command lists the unfinished multipart uploads in JSON format.
+rclone backend list-multipart-uploads oos:bucket/path/to/object
+It returns a dictionary of buckets with values as lists of unfinished multipart uploads.
+You can call it with no bucket in which case it lists all buckets, with a bucket, or with a bucket and path.
+{
+ "test-bucket": [
+ {
+ "namespace": "test-namespace",
+ "bucket": "test-bucket",
+ "object": "600m.bin",
+ "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+ "timeCreated": "2022-07-29T06:21:16.595Z",
+ "storageTier": "Standard"
+ }
+ ]
+}
+Remove unfinished multipart uploads.
+rclone backend cleanup remote: [options] [<arguments>+]
+This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.
+Note that you can use --interactive/-i or --dry-run with this command to see what it would do.
+rclone backend cleanup oos:bucket/path/to/object
+rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+Options:
+
+- "max-age": Max age of upload to delete
+Restore objects from Archive to Standard storage
+rclone backend restore remote: [options] [<arguments>+]
+This command can be used to restore one or more objects from Archive to Standard storage.
+Usage Examples:
-- "max-age": Max age of upload to delete
+rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
+rclone backend restore oos:bucket -o hours=HOURS
+This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
+rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
+All the objects shown will be marked for restore, then
+rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
+It returns a list of status dictionaries with Object Name and Status
+keys. The Status will be "RESTORED" if it was successful or an error message
+if not.
-
-## Tutorials
-### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/)
-
-# QingStor
-
-Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
-command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
-
-## Configuration
-
-Here is an example of making an QingStor configuration. First run
-
- rclone config
-
-This will guide you through an interactive setup process.
-
-No remotes found, make a new one? n) New remote r) Rename remote c) Copy remote s) Set configuration password q) Quit config n/r/c/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / QingStor Object Storage "qingstor" [snip] Storage> qingstor Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own value 1 / Enter QingStor credentials in the next step "false" 2 / Get QingStor credentials from the environment (env vars or IAM) "true" env_auth> 1 QingStor Access Key ID - leave blank for anonymous access or runtime credentials. access_key_id> access_key QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials. secret_access_key> secret_key Enter an endpoint URL to connection QingStor API. Leave blank will use the default value "https://qingstor.com:443" endpoint> Zone connect to. Default is "pek3a". Choose a number from below, or type in your own value / The Beijing (China) Three Zone 1 | Needs location constraint pek3a. "pek3a" / The Shanghai (China) First Zone 2 | Needs location constraint sh1a. "sh1a" zone> 1 Number of connection retry. Leave blank will use the default value "3". connection_retries> Remote config -------------------- [remote] env_auth = false access_key_id = access_key secret_access_key = secret_key endpoint = zone = pek3a connection_retries = -------------------- y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y
-
-This remote is called `remote` and can now be used like this
-
-See all buckets
-
- rclone lsd remote:
-
-Make a new bucket
-
- rclone mkdir remote:bucket
-
-List the contents of a bucket
-
- rclone ls remote:bucket
-
-Sync `/home/local/directory` to the remote bucket, deleting any excess
-files in the bucket.
-
- rclone sync --interactive /home/local/directory remote:bucket
-
-### --fast-list
-
-This remote supports `--fast-list` which allows you to use fewer
-transactions in exchange for more memory. See the [rclone
-docs](https://rclone.org/docs/#fast-list) for more details.
-
-### Multipart uploads
-
-rclone supports multipart uploads with QingStor which means that it can
-upload files bigger than 5 GiB. Note that files uploaded with multipart
-upload don't have an MD5SUM.
-
-Note that incomplete multipart uploads older than 24 hours can be
-removed with `rclone cleanup remote:bucket` just for one bucket
-`rclone cleanup remote:` for all buckets. QingStor does not ever
-remove incomplete multipart uploads so it may be necessary to run this
-from time to time.
-
-### Buckets and Zone
-
-With QingStor you can list buckets (`rclone lsd`) using any zone,
-but you can only access the content of a bucket from the zone it was
-created in. If you attempt to access a bucket from the wrong zone,
-you will get an error, `incorrect zone, the bucket is not in 'XXX'
-zone`.
-
-### Authentication
-
-There are two ways to supply `rclone` with a set of QingStor
-credentials. In order of precedence:
-
- - Directly in the rclone configuration file (as configured by `rclone config`)
- - set `access_key_id` and `secret_access_key`
- - Runtime configuration:
- - set `env_auth` to `true` in the config file
- - Exporting the following environment variables before running `rclone`
- - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
- - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`
-
-### Restricted filename characters
-
-The control characters 0x00-0x1F and / are replaced as in the [default
-restricted characters set](https://rclone.org/overview/#restricted-characters). Note
-that 0x7F is not replaced.
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in JSON strings.
-
-
-### Standard options
-
-Here are the Standard options specific to qingstor (QingCloud Object Storage).
-
-#### --qingstor-env-auth
-
-Get QingStor credentials from runtime.
-
-Only applies if access_key_id and secret_access_key is blank.
-
-Properties:
-
-- Config: env_auth
-- Env Var: RCLONE_QINGSTOR_ENV_AUTH
-- Type: bool
-- Default: false
-- Examples:
- - "false"
- - Enter QingStor credentials in the next step.
- - "true"
- - Get QingStor credentials from the environment (env vars or IAM).
-
-#### --qingstor-access-key-id
-
-QingStor Access Key ID.
-
-Leave blank for anonymous access or runtime credentials.
-
-Properties:
-
-- Config: access_key_id
-- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID
-- Type: string
-- Required: false
-
-#### --qingstor-secret-access-key
-
-QingStor Secret Access Key (password).
-
-Leave blank for anonymous access or runtime credentials.
-
-Properties:
-
-- Config: secret_access_key
-- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY
-- Type: string
-- Required: false
-
-#### --qingstor-endpoint
-
+[
+  {
+    "Object": "test.txt",
+    "Status": "RESTORED"
+  },
+  {
+    "Object": "test/file4.txt",
+    "Status": "RESTORED"
+  }
+]
+Options:
+
+- "hours": The number of hours for which this object will be restored. Default is 24 hrs.
+Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
Here is an example of making a QingStor configuration. First run
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / QingStor Object Storage
+ \ "qingstor"
+[snip]
+Storage> qingstor
+Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter QingStor credentials in the next step
+ \ "false"
+ 2 / Get QingStor credentials from the environment (env vars or IAM)
+ \ "true"
+env_auth> 1
+QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> access_key
+QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> secret_key
Enter an endpoint URL to connection QingStor API.
+Leave blank will use the default value "https://qingstor.com:443"
+endpoint>
+Zone connect to. Default is "pek3a".
+Choose a number from below, or type in your own value
+ / The Beijing (China) Three Zone
+ 1 | Needs location constraint pek3a.
+ \ "pek3a"
+ / The Shanghai (China) First Zone
+ 2 | Needs location constraint sh1a.
+ \ "sh1a"
+zone> 1
+Number of connection retry.
+Leave blank will use the default value "3".
+connection_retries>
+Remote config
+--------------------
+[remote]
+env_auth = false
+access_key_id = access_key
+secret_access_key = secret_key
+endpoint =
+zone = pek3a
+connection_retries =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This remote is called remote and can now be used like this
See all buckets
+rclone lsd remote:
+Make a new bucket
+rclone mkdir remote:bucket
+List the contents of a bucket
+rclone ls remote:bucket
+Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.
rclone sync --interactive /home/local/directory remote:bucket
+This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5 GiB. Note that files uploaded with multipart upload don't have an MD5SUM.
+Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket for just one bucket, or rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.
With QingStor you can list buckets (rclone lsd) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone.
There are two ways to supply rclone with a set of QingStor credentials. In order of precedence:
+- Directly in the rclone configuration file (as configured by rclone config)
+  - set access_key_id and secret_access_key
+- Runtime configuration:
+  - set env_auth to true in the config file
+  - Exporting the following environment variables before running rclone
+    - Access Key ID: QS_ACCESS_KEY_ID or QS_ACCESS_KEY
+    - Secret Access Key: QS_SECRET_ACCESS_KEY or QS_SECRET_KEY
+
+The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
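+As a sketch, runtime credentials could be supplied like this, assuming env_auth is set to true in the config (the key values are placeholders):
+```
+export QS_ACCESS_KEY_ID=EXAMPLE_KEY_ID
+export QS_SECRET_ACCESS_KEY=EXAMPLE_SECRET_KEY
+rclone lsd remote:
+```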
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Here are the Standard options specific to qingstor (QingCloud Object Storage).
+Get QingStor credentials from runtime.
+Only applies if access_key_id and secret_access_key is blank.
+Properties:
+QingStor Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Properties:
+QingStor Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Properties:
+Enter an endpoint URL to connect to the QingStor API.
+Leave blank to use the default value "https://qingstor.com:443".
+Properties:
+Zone to connect to.
+Default is "pek3a".
+Properties:
+Here are the Advanced options specific to qingstor (QingCloud Object Storage).
+Number of connection retries.
+Properties:
+Cutoff for switching to chunked upload.
+Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5 GiB.
+Properties:
+Chunk size to use for uploading.
+When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.
+Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.
+If you are transferring large files over high-speed links and you have enough memory, then increasing this will speed up the transfers.
+Properties:
+Concurrency for multipart uploads.
+This is the number of chunks of the same file that are uploaded concurrently.
+NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).
+If you are uploading small numbers of large files over high-speed links and these uploads do not fully utilize your bandwidth, then increasing this may help to speed up the transfers.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Description of the remote
+Properties:
+rclone about is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
+Quatrix by Maytech is Quatrix Secure Compliant File Sharing | Maytech (https://www.maytech.net/products/quatrix-business).
+Paths are specified as remote:path
Paths may be as deep as required, e.g., remote:directory/subdirectory.
The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at https://<account>/profile/api-keys or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer
+Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Quatrix by Maytech
+ \ "quatrix"
+[snip]
+Storage> quatrix
+API key for accessing Quatrix account.
+api_key> your_api_key
+Host name of Quatrix account.
+host> example.quatrix.it
-Leave blank will use the default value "https://qingstor.com:443".
+--------------------
+[remote]
+api_key = your_api_key
+host = example.quatrix.it
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Once configured you can then use rclone like this,
List directories in top level of your Quatrix
+rclone lsd remote:
+List all the files in your Quatrix
+rclone ls remote:
+To copy a local directory to a Quatrix directory called backup
+rclone copy /home/source remote:backup
+API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. After disabling, the API Key can be re-enabled. If the API Key was deleted and a new key was created, you can update it in rclone config. The same happens if the hostname was changed.
+$ rclone config
+Current remotes:
-Properties:
+Name Type
+==== ====
+remote quatrix
-- Config: endpoint
-- Env Var: RCLONE_QINGSTOR_ENDPOINT
-- Type: string
-- Required: false
-
-#### --qingstor-zone
-
-Zone to connect to.
-
-Default is "pek3a".
-
-Properties:
-
-- Config: zone
-- Env Var: RCLONE_QINGSTOR_ZONE
-- Type: string
-- Required: false
-- Examples:
- - "pek3a"
- - The Beijing (China) Three Zone.
- - Needs location constraint pek3a.
- - "sh1a"
- - The Shanghai (China) First Zone.
- - Needs location constraint sh1a.
- - "gd2a"
- - The Guangdong (China) Second Zone.
- - Needs location constraint gd2a.
-
-### Advanced options
-
-Here are the Advanced options specific to qingstor (QingCloud Object Storage).
-
-#### --qingstor-connection-retries
-
-Number of connection retries.
-
-Properties:
-
-- Config: connection_retries
-- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
-- Type: int
-- Default: 3
-
-#### --qingstor-upload-cutoff
-
-Cutoff for switching to chunked upload.
-
-Any files larger than this will be uploaded in chunks of chunk_size.
-The minimum is 0 and the maximum is 5 GiB.
-
-Properties:
-
-- Config: upload_cutoff
-- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
-- Type: SizeSuffix
-- Default: 200Mi
-
-#### --qingstor-chunk-size
-
-Chunk size to use for uploading.
-
-When uploading files larger than upload_cutoff they will be uploaded
-as multipart uploads using this chunk size.
-
-Note that "--qingstor-upload-concurrency" chunks of this size are buffered
-in memory per transfer.
-
-If you are transferring large files over high-speed links and you have
-enough memory, then increasing this will speed up the transfers.
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 4Mi
-
-#### --qingstor-upload-concurrency
-
-Concurrency for multipart uploads.
-
-This is the number of chunks of the same file that are uploaded
-concurrently.
-
-NB if you set this to > 1 then the checksums of multipart uploads
-become corrupted (the uploads themselves are not corrupted though).
-
-If you are uploading small numbers of large files over high-speed links
-and these uploads do not fully utilize your bandwidth, then increasing
-this may help to speed up the transfers.
-
-Properties:
-
-- Config: upload_concurrency
-- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
-- Type: int
-- Default: 1
-
-#### --qingstor-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_QINGSTOR_ENCODING
-- Type: Encoding
-- Default: Slash,Ctl,InvalidUtf8
-
-
-
-## Limitations
-
-`rclone about` is not supported by the qingstor backend. Backends without
-this capability cannot determine free space for an rclone mount or
-use policy `mfs` (most free space) as a member of an rclone union
-remote.
-
-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
-
-# Quatrix
-
-Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business).
-
-Paths are specified as `remote:path`
-
-Paths may be as deep as required, e.g., `remote:directory/subdirectory`.
-
-The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at `https://<account>/profile/api-keys`
-or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
-
-See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer
-
-## Configuration
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-No remotes found, make a new one? n) New remote s) Set configuration password q) Quit config n/s/q> n name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] XX / Quatrix by Maytech "quatrix" [snip] Storage> quatrix API key for accessing Quatrix account. api_key> your_api_key Host name of Quatrix account. host> example.quatrix.it
-| [remote] api_key = your_api_key host = example.quatrix.it | -
|---|
| y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y ``` | -
Once configured you can then use rclone like this, |
-
-List directories in top level of your Quatrix
-
-    rclone lsd remote:
-
-List all the files in your Quatrix
-
-    rclone ls remote:
-
-To copy a local directory to a Quatrix directory called backup
-
-    rclone copy /home/source remote:backup
-
-### API key validity
-
-API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. After disabling, the API Key can be enabled back. If the API Key was deleted and a new key was created, you can update it in rclone config. The same happens if the hostname was changed.
-
-$ rclone config
-Current remotes:
-
-Name                 Type
-====                 ====
-remote               quatrix
-
-e) Edit existing remote
-n) New remote
-d) Delete remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-e/n/d/r/c/s/q> e
-Choose a number from below, or type in an existing value
- 1 > remote
-remote> remote
---------------------
-[remote]
-type = quatrix
-host = some_host.quatrix.it
-api_key = your_api_key
---------------------
-Edit remote
-Option api_key.
-API key for accessing Quatrix account
-Enter a string value. Press Enter for the default (your_api_key)
-api_key>
-Option host.
-Host name of Quatrix account
-Enter a string value. Press Enter for the default (some_host.quatrix.it).
---------------------
-[remote]
-type = quatrix
-host = some_host.quatrix.it
-api_key = your_api_key
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
-### Modification times and hashes
-
-Quatrix allows modification times to be set on objects accurate to 1 microsecond. These will be used to detect whether objects need syncing or not.
-
-Quatrix does not support hashes, so you cannot use the --checksum flag.
-
-### Restricted filename characters
-
-File names in Quatrix are case sensitive and have limitations like the maximum length of a filename is 255, and the minimum length is 1. A file name cannot be equal to . or .. nor contain / , \ or non-printable ascii.
-
-### Transfers
-
-For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default, and it can be changed in the advanced configuration, so increasing --transfers will increase the memory use. The chunk size has a maximum size limit, which is set to 100_000_000 bytes by default and can be changed in the advanced configuration. The size of the uploaded chunk will dynamically change depending on the upload speed. The total memory use equals the number of transfers multiplied by the minimal chunk size. In case there's free memory allocated for the upload (which equals the difference of maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may increase in case of high upload speed. As well as it can decrease in case of upload speed problems. If no free memory is available, all chunks will equal minimal_chunk_size.
-
-### Deleting files
-
-Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.
-
-### Standard options
-
-Here are the Standard options specific to quatrix (Quatrix by Maytech).
-
-#### --quatrix-api-key
-
-API key for accessing Quatrix account
-
-Properties:
-
-- Config: api_key
-- Env Var: RCLONE_QUATRIX_API_KEY
-- Type: string
-- Required: true
-
-#### --quatrix-host
-
-Host name of Quatrix account
-
-Properties:
-
-- Config: host
-- Env Var: RCLONE_QUATRIX_HOST
-- Type: string
-- Required: true
-
-### Advanced options
-
-Here are the Advanced options specific to quatrix (Quatrix by Maytech).
-
-#### --quatrix-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_QUATRIX_ENCODING
-- Type: Encoding
-- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-
-#### --quatrix-effective-upload-time
-
-Wanted upload time for one chunk
-
-Properties:
-
-- Config: effective_upload_time
-- Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
-- Type: string
-- Default: "4s"
-
-#### --quatrix-minimal-chunk-size
-
-The minimal size for one chunk
-
-Properties:
-
-- Config: minimal_chunk_size
-- Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 9.537Mi
-
-#### --quatrix-maximal-summary-chunk-size
-
-The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size'
-
-Properties:
-
-- Config: maximal_summary_chunk_size
-- Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 95.367Mi
-
-#### --quatrix-hard-delete
-
-Delete files permanently rather than putting them into the trash.
-
-Properties:
-
-- Config: hard_delete
-- Env Var: RCLONE_QUATRIX_HARD_DELETE
-- Type: bool
-- Default: false
-
-## Storage usage
-
-The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. This can be fixed by freeing up the space or increasing the quota.
-
-## Server-side operations
-
-Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation.
-
-# Sia
-
-Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, you should first familiarize yourself with it using their excellent support documentation.
-
-## Introduction
-
-Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia daemon) locally on your computer or on your local network (e.g. a NAS). Please follow the Get started guide and install one.
-
-rclone interacts with the Sia network by talking to the Sia daemon via its HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer, so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980, making external access impossible).
-
-However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
-- Ensure you have the Sia daemon installed directly or in a docker container because Sia-UI does not support this mode natively.
-- Run it on an externally accessible port, for example by providing --api-addr :9980 and --disable-api-security arguments on the daemon command line.
-- Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
-- Set the rclone backend option api_password taking it from the above locations.
-
-Notes:
-1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
-2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
-3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.
-
-## Configuration
-
-Here is an example of how to make a sia remote called mySia. First, run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
-
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> mySia
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-...
-29 / Sia Decentralized Cloud
-   \ "sia"
-...
-Storage> sia
-Sia daemon API URL, like http://sia.daemon.host:9980.
-Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
-Keep default if Sia daemon runs on localhost.
-Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
-api_url> http://127.0.0.1:9980
-Sia Daemon API Password.
-Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g/n> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Edit advanced config?
-y) Yes
-n) No (default)
-y/n> n
---------------------
-[mySia]
-type = sia
-api_url = http://127.0.0.1:9980
-api_password = *** ENCRYPTED ***
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
-Once configured, you can then use `rclone` like this:
-
-- List directories in top level of your Sia storage
-
-rclone lsd mySia:
-
-- List all the files in your Sia storage
-
-rclone ls mySia:
-
-- Upload a local directory to the Sia directory called _backup_
-
-rclone copy /home/source mySia:backup
-
-
-### Standard options
-
-Here are the Standard options specific to sia (Sia Decentralized Cloud).
-
-#### --sia-api-url
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> e
+Choose a number from below, or type in an existing value
+ 1 > remote
+remote> remote
+--------------------
+[remote]
+type = quatrix
+host = some_host.quatrix.it
+api_key = your_api_key
+--------------------
+Edit remote
+Option api_key.
+API key for accessing Quatrix account
+Enter a string value. Press Enter for the default (your_api_key)
+api_key>
+Option host.
+Host name of Quatrix account
+Enter a string value. Press Enter for the default (some_host.quatrix.it).
+--------------------
+[remote]
+type = quatrix
+host = some_host.quatrix.it
+api_key = your_api_key
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Quatrix allows modification times to be set on objects accurate to 1 microsecond. These will be used to detect whether objects need syncing or not.
+Quatrix does not support hashes, so you cannot use the --checksum flag.
File names in Quatrix are case sensitive and have length limits: the maximum filename length is 255 characters and the minimum is 1. A file name cannot be equal to . or .., nor contain /, \, or non-printable ASCII.
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default; it can be changed in the advanced configuration, so increasing --transfers will increase memory use. The chunk size also has a maximum limit, 100_000_000 bytes by default, which can likewise be changed in the advanced configuration. The size of an uploaded chunk changes dynamically with the upload speed. The guaranteed memory use equals the number of transfers multiplied by the minimal chunk size. If there is free memory allocated for the upload (the difference between maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may grow while the upload is fast and shrink when the upload runs into problems. If no free memory is available, all chunks will equal minimal_chunk_size.
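For example, with the default minimal chunk size of 10_000_000 bytes and maximal summary chunk size of 100_000_000 bytes, running with --transfers 4 guarantees 4 × 10_000_000 = 40_000_000 bytes of chunk buffers and leaves 100_000_000 − 40_000_000 = 60_000_000 bytes of headroom that fast transfers can consume by growing their chunks.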
Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.
+### Standard options
+
+Here are the Standard options specific to quatrix (Quatrix by Maytech).
+
+#### --quatrix-api-key
+
+API key for accessing Quatrix account
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_QUATRIX_API_KEY
+- Type: string
+- Required: true
+
+#### --quatrix-host
+
+Host name of Quatrix account
+
+Properties:
+
+- Config: host
+- Env Var: RCLONE_QUATRIX_HOST
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to quatrix (Quatrix by Maytech).
+
+#### --quatrix-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_QUATRIX_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+#### --quatrix-effective-upload-time
+
+Wanted upload time for one chunk
+
+Properties:
+
+- Config: effective_upload_time
+- Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME
+- Type: string
+- Default: "4s"
+
+#### --quatrix-minimal-chunk-size
+
+The minimal size for one chunk
+
+Properties:
+
+- Config: minimal_chunk_size
+- Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 9.537Mi
+
+#### --quatrix-maximal-summary-chunk-size
+
+The maximal summary for all chunks. It should not be less than 'transfers'*'minimal_chunk_size'
+
+Properties:
+
+- Config: maximal_summary_chunk_size
+- Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 95.367Mi
+
+#### --quatrix-hard-delete
+
+Delete files permanently rather than putting them into the trash
+
+Properties:
+
+- Config: hard_delete
+- Env Var: RCLONE_QUATRIX_HARD_DELETE
+- Type: bool
+- Default: false
+
+#### --quatrix-skip-project-folders
+
+Skip project folders in operations
+
+Properties:
+
+- Config: skip_project_folders
+- Env Var: RCLONE_QUATRIX_SKIP_PROJECT_FOLDERS
+- Type: bool
+- Default: false
+
+#### --quatrix-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_QUATRIX_DESCRIPTION
+- Type: string
+- Required: false
+
+## Storage usage
+
+The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. This can be fixed by freeing up the space or increasing the quota.
+
+## Server-side operations
+
+Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation.
+# Sia
+
+Sia (sia.tech) is a decentralized cloud storage platform based on blockchain technology. With rclone you can use it like any other remote filesystem or mount Sia folders locally. The technology behind it involves a number of new concepts such as Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. If you are new to it, you should first familiarize yourself with it using their excellent support documentation.
+## Introduction
+
+Before you can use rclone with Sia, you will need to have a running copy of Sia-UI or siad (the Sia daemon) locally on your computer or on your local network (e.g. a NAS). Please follow the Get started guide and install one.
rclone interacts with the Sia network by talking to the Sia daemon via its HTTP API, which is usually available on port 9980. By default you will run the daemon locally on the same computer, so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980, making external access impossible).
However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions (a command sketch follows this list):
- Ensure you have the Sia daemon installed directly or in a docker container because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example by providing --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce an API password for the siad daemon via the environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password taking it from the above locations.
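A rough sketch of starting such a shared daemon (the password value is a placeholder; the flags are the ones mentioned above):

    SIA_API_PASSWORD=yoursecret siad --api-addr :9980 --disable-api-security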
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via the command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with the environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with the command line argument --authorize-api=false, but this is insecure and strongly discouraged.
Here is an example of how to make a sia remote called mySia. First, run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> mySia
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+...
+29 / Sia Decentralized Cloud
+ \ "sia"
+...
+Storage> sia
Sia daemon API URL, like http://sia.daemon.host:9980.
-
Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
Keep default if Sia daemon runs on localhost.
-
-Properties:
-
-- Config: api_url
-- Env Var: RCLONE_SIA_API_URL
-- Type: string
-- Default: "http://127.0.0.1:9980"
-
-#### --sia-api-password
-
+Enter a string value. Press Enter for the default ("http://127.0.0.1:9980").
+api_url> http://127.0.0.1:9980
Sia Daemon API Password.
-
Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: api_password
-- Env Var: RCLONE_SIA_API_PASSWORD
-- Type: string
-- Required: false
-
-### Advanced options
-
-Here are the Advanced options specific to sia (Sia Decentralized Cloud).
-
-#### --sia-user-agent
-
-Siad User Agent
-
-Sia daemon requires the 'Sia-Agent' user agent by default for security
-
-Properties:
-
-- Config: user_agent
-- Env Var: RCLONE_SIA_USER_AGENT
-- Type: string
-- Default: "Sia-Agent"
-
-#### --sia-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_SIA_ENCODING
-- Type: Encoding
-- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
-
-
-
-## Limitations
-
-- Modification times not supported
-- Checksums not supported
-- `rclone about` not supported
-- rclone can work only with _Siad_ or _Sia-UI_ at the moment,
- the **SkyNet daemon is not supported yet.**
-- Sia does not allow control characters or symbols like question and pound
-  signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding)
-  them for you, but you should be aware of this.
-
-# Swift
-
-Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/).
-Commercial implementations of that being:
-
- * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/)
- * [Memset Memstore](https://www.memset.com/cloud/storage/)
- * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/)
- * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)
- * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/)
- * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
-
-Paths are specified as `remote:container` (or `remote:` for the `lsd`
-command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
-
-## Configuration
-
-Here is an example of making a swift configuration. First run
-
- rclone config
-
-This will guide you through an interactive setup process.
-
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
-   \ "swift"
-[snip]
-Storage> swift
-Get swift credentials from environment variables in standard OpenStack form.
-Choose a number from below, or type in your own value
- 1 / Enter swift credentials in the next step
-   \ "false"
- 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
-   \ "true"
-env_auth> true
-User name to log in (OS_USERNAME).
-user>
-API key or password (OS_PASSWORD).
-key>
-Authentication URL for server (OS_AUTH_URL).
-Choose a number from below, or type in your own value
- 1 / Rackspace US
-   \ "https://auth.api.rackspacecloud.com/v1.0"
- 2 / Rackspace UK
-   \ "https://lon.auth.api.rackspacecloud.com/v1.0"
- 3 / Rackspace v2
-   \ "https://identity.api.rackspacecloud.com/v2.0"
- 4 / Memset Memstore UK
-   \ "https://auth.storage.memset.com/v1.0"
- 5 / Memset Memstore UK v2
-   \ "https://auth.storage.memset.com/v2.0"
- 6 / OVH
-   \ "https://auth.cloud.ovh.net/v3"
- 7 / Blomp Cloud Storage
-   \ "https://authenticate.ain.net"
-auth>
-User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-user_id>
-User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-domain>
-Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
-tenant>
-Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
-tenant_id>
-Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
-tenant_domain>
-Region name - optional (OS_REGION_NAME)
-region>
-Storage URL - optional (OS_STORAGE_URL)
-storage_url>
-Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
-auth_token>
-AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
-auth_version>
-Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
-Choose a number from below, or type in your own value
- 1 / Public (default, choose this if not sure)
-   \ "public"
- 2 / Internal (use internal service net)
-   \ "internal"
- 3 / Admin
-   \ "admin"
-endpoint_type>
-Remote config
---------------------
-[test]
-env_auth = true
-user =
-key =
-auth =
-user_id =
-domain =
-tenant =
-tenant_id =
-tenant_domain =
-region =
-storage_url =
-auth_token =
-auth_version =
-endpoint_type =
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
-This remote is called `remote` and can now be used like this
-
-See all containers
-
- rclone lsd remote:
-
-Make a new container
-
- rclone mkdir remote:container
-
-List the contents of a container
-
- rclone ls remote:container
-
-Sync `/home/local/directory` to the remote container, deleting any
-excess files in the container.
-
- rclone sync --interactive /home/local/directory remote:container
-
-### Configuration from an OpenStack credentials file
-
-An OpenStack credentials file typically looks something like this
-(without the comments)
-
-export OS_AUTH_URL=https://a.provider.net/v2.0
-export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
-export OS_TENANT_NAME="1234567890123456"
-export OS_USERNAME="123abc567xy"
-echo "Please enter your OpenStack Password: "
-read -sr OS_PASSWORD_INPUT
-export OS_PASSWORD=$OS_PASSWORD_INPUT
-export OS_REGION_NAME="SBG1"
-if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
-
-The config file needs to look something like this where `$OS_USERNAME`
-represents the value of the `OS_USERNAME` variable - `123abc567xy` in
-the example above.
-
-[remote]
-type = swift
-user = $OS_USERNAME
-key = $OS_PASSWORD
-auth = $OS_AUTH_URL
-tenant = $OS_TENANT_NAME
-
-Note that you may (or may not) need to set `region` too - try without first.
-
-### Configuration from the environment
-
-If you prefer you can configure rclone to use swift using a standard
-set of OpenStack environment variables.
-
-When you run through the config, make sure you choose `true` for
-`env_auth` and leave everything else blank.
-
-rclone will then set any empty config parameters from the environment
-using standard OpenStack environment variables. There is [a list of
-the
-variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
-in the docs for the swift library.
-
-### Using an alternate authentication method
-
-If your OpenStack installation uses a non-standard authentication method
-that might not be yet supported by rclone or the underlying swift library,
-you can authenticate externally (e.g. by manually calling the `openstack`
-commands to get a token). Then, you just need to pass the two
-configuration variables ``auth_token`` and ``storage_url``.
-If they are both provided, the other variables are ignored. rclone will
-not try to authenticate but instead assume it is already authenticated
-and use these two variables to access the OpenStack installation.
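
A hypothetical config for this externally-authenticated mode (the token and storage URL are placeholders) might therefore contain only:

    [remote]
    type = swift
    auth_token = XXXXXXXX
    storage_url = https://storage.example.com/v1/AUTH_tenant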
-
-#### Using rclone without a config file
-
-You can use rclone with swift without a config file, if desired, like
-this:
-
-source openstack-credentials-file
-export RCLONE_CONFIG_MYREMOTE_TYPE=swift
-export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
-rclone lsd myremote:
-
-### --fast-list
-
-This remote supports `--fast-list` which allows you to use fewer
-transactions in exchange for more memory. See the [rclone
-docs](https://rclone.org/docs/#fast-list) for more details.
-
-### --update and --use-server-modtime
-
-As noted below, the modified time is stored as metadata on the object. It is
-used by default for all operations that require checking the time a file was
-last updated. It allows rclone to treat the remote more like a true filesystem,
-but it is inefficient because it requires an extra API call to retrieve the
-metadata.
-
-For many operations, the time the object was last uploaded to the remote is
-sufficient to determine if it is "dirty". By using `--update` along with
-`--use-server-modtime`, you can avoid the extra API call and simply upload
-files whose local modtime is newer than the time it was last uploaded.
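
A minimal sketch (paths and remote name are placeholders) that uploads only files whose local modtime is newer than the server upload time, with no extra metadata reads:

    rclone copy --update --use-server-modtime /path/to/src remote:container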
-
-### Modification times and hashes
-
-The modified time is stored as metadata on the object as
-`X-Object-Meta-Mtime` as floating point since the epoch accurate to 1
-ns.
-
-This is a de facto standard (used in the official python-swiftclient
-amongst others) for storing the modification time for an object.
-
-The MD5 hash algorithm is supported.
-
-### Restricted filename characters
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| NUL | 0x00 | ␀ |
-| / | 0x2F | ／ |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in JSON strings.
-
-
-### Standard options
-
-Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
-
-#### --swift-env-auth
-
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+--------------------
+[mySia]
+type = sia
+api_url = http://127.0.0.1:9980
+api_password = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Once configured, you can then use rclone like this:
+
+- List directories in top level of your Sia storage
+
+    rclone lsd mySia:
+
+- List all the files in your Sia storage
+
+    rclone ls mySia:
+
+- Upload a local directory to the Sia directory called backup
+
+    rclone copy /home/source mySia:backup
+### Standard options
+
+Here are the Standard options specific to sia (Sia Decentralized Cloud).
+
+#### --sia-api-url
+
+Sia daemon API URL, like http://sia.daemon.host:9980.
+
+Note that siad must run with --disable-api-security to open API port for other hosts (not recommended).
+Keep default if Sia daemon runs on localhost.
+
+Properties:
+
+- Config: api_url
+- Env Var: RCLONE_SIA_API_URL
+- Type: string
+- Default: "http://127.0.0.1:9980"
+
+#### --sia-api-password
+
+Sia Daemon API Password.
+
+Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory.
+
+NB Input to this must be obscured - see rclone obscure.
+
+Properties:
+
+- Config: api_password
+- Env Var: RCLONE_SIA_API_PASSWORD
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to sia (Sia Decentralized Cloud).
+
+#### --sia-user-agent
+
+Siad User Agent
+
+Sia daemon requires the 'Sia-Agent' user agent by default for security
+
+Properties:
+
+- Config: user_agent
+- Env Var: RCLONE_SIA_USER_AGENT
+- Type: string
+- Default: "Sia-Agent"
+
+#### --sia-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_SIA_ENCODING
+- Type: Encoding
+- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot
+
+#### --sia-description
+
+Description of the remote
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SIA_DESCRIPTION
+- Type: string
+- Required: false
+## Limitations
+
+- Modification times not supported
+- Checksums not supported
+- rclone about not supported
+- rclone can work only with Siad or Sia-UI at the moment, the SkyNet daemon is not supported yet.
+- Sia does not allow control characters or symbols like question and pound signs in file names. rclone will transparently encode them for you, but you should be aware of this.
+
+# Swift
+
+Swift refers to OpenStack Object Storage. Commercial implementations of that being:
+
+ * Rackspace Cloud Files
+ * Memset Memstore
+ * OVH Object Storage
+ * Oracle Cloud Storage
+ * Blomp Cloud Storage
+ * IBM Bluemix Cloud ObjectStorage Swift
+Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.
Here is an example of making a swift configuration. First run
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
+ \ "swift"
+[snip]
+Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
-
-Properties:
-
-- Config: env_auth
-- Env Var: RCLONE_SWIFT_ENV_AUTH
-- Type: bool
-- Default: false
-- Examples:
- - "false"
- - Enter swift credentials in the next step.
- - "true"
- - Get swift credentials from environment vars.
- - Leave other fields blank if using this.
-
-#### --swift-user
-
+Choose a number from below, or type in your own value
+ 1 / Enter swift credentials in the next step
+ \ "false"
+ 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
+ \ "true"
+env_auth> true
User name to log in (OS_USERNAME).
-
-Properties:
-
-- Config: user
-- Env Var: RCLONE_SWIFT_USER
-- Type: string
-- Required: false
-
-#### --swift-key
-
+user>
API key or password (OS_PASSWORD).
-
-Properties:
-
-- Config: key
-- Env Var: RCLONE_SWIFT_KEY
-- Type: string
-- Required: false
-
-#### --swift-auth
-
+key>
Authentication URL for server (OS_AUTH_URL).
-
-Properties:
-
-- Config: auth
-- Env Var: RCLONE_SWIFT_AUTH
-- Type: string
-- Required: false
-- Examples:
- - "https://auth.api.rackspacecloud.com/v1.0"
- - Rackspace US
- - "https://lon.auth.api.rackspacecloud.com/v1.0"
- - Rackspace UK
- - "https://identity.api.rackspacecloud.com/v2.0"
- - Rackspace v2
- - "https://auth.storage.memset.com/v1.0"
- - Memset Memstore UK
- - "https://auth.storage.memset.com/v2.0"
- - Memset Memstore UK v2
- - "https://auth.cloud.ovh.net/v3"
- - OVH
- - "https://authenticate.ain.net"
- - Blomp Cloud Storage
-
-#### --swift-user-id
-
+Choose a number from below, or type in your own value
+ 1 / Rackspace US
+ \ "https://auth.api.rackspacecloud.com/v1.0"
+ 2 / Rackspace UK
+ \ "https://lon.auth.api.rackspacecloud.com/v1.0"
+ 3 / Rackspace v2
+ \ "https://identity.api.rackspacecloud.com/v2.0"
+ 4 / Memset Memstore UK
+ \ "https://auth.storage.memset.com/v1.0"
+ 5 / Memset Memstore UK v2
+ \ "https://auth.storage.memset.com/v2.0"
+ 6 / OVH
+ \ "https://auth.cloud.ovh.net/v3"
+ 7 / Blomp Cloud Storage
+ \ "https://authenticate.ain.net"
+auth>
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
-
-Properties:
-
-- Config: user_id
-- Env Var: RCLONE_SWIFT_USER_ID
-- Type: string
-- Required: false
-
-#### --swift-domain
-
+user_id>
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
-
-Properties:
-
-- Config: domain
-- Env Var: RCLONE_SWIFT_DOMAIN
-- Type: string
-- Required: false
-
-#### --swift-tenant
-
-Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
-
-Properties:
-
-- Config: tenant
-- Env Var: RCLONE_SWIFT_TENANT
-- Type: string
-- Required: false
-
-#### --swift-tenant-id
-
-Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
-
-Properties:
-
-- Config: tenant_id
-- Env Var: RCLONE_SWIFT_TENANT_ID
-- Type: string
-- Required: false
-
-#### --swift-tenant-domain
-
-Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
-
-Properties:
-
-- Config: tenant_domain
-- Env Var: RCLONE_SWIFT_TENANT_DOMAIN
-- Type: string
-- Required: false
-
-#### --swift-region
-
-Region name - optional (OS_REGION_NAME).
-
-Properties:
-
-- Config: region
-- Env Var: RCLONE_SWIFT_REGION
-- Type: string
-- Required: false
-
-#### --swift-storage-url
-
-Storage URL - optional (OS_STORAGE_URL).
-
-Properties:
-
-- Config: storage_url
-- Env Var: RCLONE_SWIFT_STORAGE_URL
-- Type: string
-- Required: false
-
-#### --swift-auth-token
-
-Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
-
-Properties:
-
-- Config: auth_token
-- Env Var: RCLONE_SWIFT_AUTH_TOKEN
-- Type: string
-- Required: false
-
-#### --swift-application-credential-id
-
-Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
-
-Properties:
-
-- Config: application_credential_id
-- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID
-- Type: string
-- Required: false
-
-#### --swift-application-credential-name
-
-Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
-
-Properties:
-
-- Config: application_credential_name
-- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME
-- Type: string
-- Required: false
-
-#### --swift-application-credential-secret
-
-Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
-
-Properties:
-
-- Config: application_credential_secret
-- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET
-- Type: string
-- Required: false
-
-#### --swift-auth-version
-
-AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
-
-Properties:
-
-- Config: auth_version
-- Env Var: RCLONE_SWIFT_AUTH_VERSION
-- Type: int
-- Default: 0
-
-#### --swift-endpoint-type
-
-Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
-
-Properties:
-
-- Config: endpoint_type
-- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE
-- Type: string
-- Default: "public"
-- Examples:
- - "public"
- - Public (default, choose this if not sure)
- - "internal"
- - Internal (use internal service net)
- - "admin"
- - Admin
-
-#### --swift-storage-policy
-
-The storage policy to use when creating a new container.
-
-This applies the specified storage policy when creating a new
-container. The policy cannot be changed afterwards. The allowed
-configuration values and their meaning depend on your Swift storage
-provider.
-
-Properties:
-
-- Config: storage_policy
-- Env Var: RCLONE_SWIFT_STORAGE_POLICY
-- Type: string
-- Required: false
-- Examples:
- - ""
- - Default
- - "pcs"
- - OVH Public Cloud Storage
- - "pca"
- - OVH Public Cloud Archive
-
-### Advanced options
-
-Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
-
-#### --swift-leave-parts-on-error
-
-If true avoid calling abort upload on a failure.
-
-It should be set to true for resuming uploads across different sessions.
-
-Properties:
-
-- Config: leave_parts_on_error
-- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR
-- Type: bool
-- Default: false
-
-#### --swift-chunk-size
-
-Above this size files will be chunked into a _segments container.
-
-Above this size files will be chunked into a _segments container. The
-default for this is 5 GiB which is its maximum value.
-
-Properties:
-
-- Config: chunk_size
-- Env Var: RCLONE_SWIFT_CHUNK_SIZE
-- Type: SizeSuffix
-- Default: 5Gi
-
-#### --swift-no-chunk
-
-Don't chunk files during streaming upload.
-
-When doing streaming uploads (e.g. using rcat or mount) setting this
-flag will cause the swift backend to not upload chunked files.
-
-This will limit the maximum upload size to 5 GiB. However non chunked
-files are easier to deal with and have an MD5SUM.
-
-Rclone will still chunk files bigger than chunk_size when doing normal
-copy operations.
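
For instance, a streaming upload with chunking disabled (file and remote names are hypothetical) might look like:

    cat backup.tar | rclone rcat --swift-no-chunk remote:container/backup.tar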
-
-Properties:
-
-- Config: no_chunk
-- Env Var: RCLONE_SWIFT_NO_CHUNK
-- Type: bool
-- Default: false
-
-#### --swift-no-large-objects
-
-Disable support for static and dynamic large objects
-
-Swift cannot transparently store files bigger than 5 GiB. There are
-two schemes for doing that, static or dynamic large objects, and the
-API does not allow rclone to determine whether a file is a static or
-dynamic large object without doing a HEAD on the object. Since these
-need to be treated differently, this means rclone has to issue HEAD
-requests for objects for example when reading checksums.
-
-When `no_large_objects` is set, rclone will assume that there are no
-static or dynamic large objects stored. This means it can stop doing
-the extra HEAD calls which in turn increases performance greatly
-especially when doing a swift to swift transfer with `--checksum` set.
-
-Setting this option implies `no_chunk` and also that no files will be
-uploaded in chunks, so files bigger than 5 GiB will just fail on
-upload.
-
-If you set this option and there *are* static or dynamic large objects,
-then this will give incorrect hashes for them. Downloads will succeed,
-but other operations such as Remove and Copy will fail.
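
An illustrative swift to swift copy (remote names are hypothetical) that benefits from skipping the extra HEAD requests:

    rclone copy --checksum --swift-no-large-objects source-swift:container dest-swift:container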
-
-
-Properties:
-
-- Config: no_large_objects
-- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS
-- Type: bool
-- Default: false
-
-#### --swift-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_SWIFT_ENCODING
-- Type: Encoding
-- Default: Slash,InvalidUtf8
-
-
-
-## Limitations
-
-The Swift API doesn't return a correct MD5SUM for segmented files
-(Dynamic or Static Large Objects) so rclone won't check or use the
-MD5SUM for these.
-
-## Troubleshooting
-
-### Rclone gives Failed to create file system for "remote:": Bad Request
-
-Due to an oddity of the underlying swift library, it gives a "Bad
-Request" error rather than a more sensible error when the
-authentication fails for Swift.
-
-So this most likely means your username / password is wrong. You can
-investigate further with the `--dump-bodies` flag.
-
-This may also be caused by specifying the region when you shouldn't
-have (e.g. OVH).
-
-### Rclone gives Failed to create file system: Response didn't have storage url and auth token
-
-This is most likely caused by forgetting to specify your tenant when
-setting up a swift remote.
-
-## OVH Cloud Archive
-
-To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`.
-
-### Uploading Objects
-
-Uploading objects to OVH cloud archive is no different to object storage: you simply run the command you like (move, copy or sync) to upload the objects. Once uploaded, the objects will show in a "Frozen" state within the OVH control panel.
-
-### Retrieving Objects
-
-To retrieve objects use `rclone copy` as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:
-
-`2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)`
-
-Rclone will wait for the time specified then retry the copy.
-
-# pCloud
-
-Paths are specified as `remote:path`
-
-Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-
-## Configuration
-
-The initial setup for pCloud involves getting a token from pCloud which you
-need to do in your browser. `rclone config` walks you through it.
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / Pcloud
-   \ "pcloud"
-[snip]
-Storage> pcloud
-Pcloud App Client Id - leave blank normally.
-client_id>
-Pcloud App Client Secret - leave blank normally.
-client_secret>
-Remote config
-Use web browser to automatically authenticate rclone with remote?
- * Say Y if the machine running rclone has a web browser you can use
- * Say N if running rclone on a (remote) machine without web browser access
-If not sure try Y. If Y failed, try N.
-y) Yes
-n) No
-y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
-Log in and authorize rclone for access
-Waiting for code...
-Got code
---------------------
-[remote]
-client_id =
-client_secret =
-token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
-See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
-
-Note that rclone runs a webserver on your local machine to collect the
-token as returned from pCloud. This only runs from the moment it opens
-your browser to the moment you get back the verification code. This
-is on `http://127.0.0.1:53682/` and it may require you to unblock
-it temporarily if you are running a host firewall.
-
-Once configured you can then use `rclone` like this,
-
-List directories in top level of your pCloud
-
- rclone lsd remote:
-
-List all the files in your pCloud
-
- rclone ls remote:
-
-To copy a local directory to a pCloud directory called backup
-
- rclone copy /home/source remote:backup
-
-### Modification times and hashes
-
-pCloud allows modification times to be set on objects accurate to 1
-second. These will be used to detect whether objects need syncing or
-not. In order to set a Modification time pCloud requires the object
-be re-uploaded.
-
-pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256
-hashes in the EU region, so you can use the `--checksum` flag.
-
-### Restricted filename characters
-
-In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
-the following characters are also replaced:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| \ | 0x5C | ＼ |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in JSON strings.
-
-### Deleting files
-
-Deleted files will be moved to the trash. Your subscription level
-will determine how long items stay in the trash. `rclone cleanup` can
-be used to empty the trash.
-
-### Emptying the trash
-
-Due to an API limitation, the `rclone cleanup` command will only work if you
-set your username and password in the advanced options for this backend.
-Since we generally want to avoid storing user passwords in the rclone config
-file, we advise you to only set this up if you need the `rclone cleanup` command to work.
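
For example, once the username and password have been set in the advanced options as described, emptying the trash is simply:

    rclone cleanup remote: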
-
-### Root folder ID
-
-You can set the `root_folder_id` for rclone. This is the directory
-(identified by its `Folder ID`) that rclone considers to be the root
-of your pCloud drive.
-
-Normally you will leave this blank and rclone will determine the
-correct root to use itself.
-
-However you can set this to restrict rclone to a specific folder
-hierarchy.
-
-In order to do this you will have to find the `Folder ID` of the
-directory you wish rclone to display. This will be the `folder` field
-of the URL when you open the relevant folder in the pCloud web
-interface.
-
-So if the folder you want rclone to use has a URL which looks like
-`https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid`
-in the browser, then you use `5xxxxxxxx8` as
-the `root_folder_id` in the config.
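
For instance, to list only that folder hierarchy (reusing the placeholder ID from the URL above):

    rclone lsd --pcloud-root-folder-id 5xxxxxxxx8 remote: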
-
-
-### Standard options
-
-Here are the Standard options specific to pcloud (Pcloud).
-
-#### --pcloud-client-id
-
-OAuth Client Id.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_id
-- Env Var: RCLONE_PCLOUD_CLIENT_ID
-- Type: string
-- Required: false
-
-#### --pcloud-client-secret
-
-OAuth Client Secret.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_secret
-- Env Var: RCLONE_PCLOUD_CLIENT_SECRET
-- Type: string
-- Required: false
-
-### Advanced options
-
-Here are the Advanced options specific to pcloud (Pcloud).
-
-#### --pcloud-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PCLOUD_TOKEN
-- Type: string
-- Required: false
-
-#### --pcloud-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PCLOUD_AUTH_URL
-- Type: string
-- Required: false
-
-#### --pcloud-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PCLOUD_TOKEN_URL
-- Type: string
-- Required: false
-
-#### --pcloud-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_PCLOUD_ENCODING
-- Type: Encoding
-- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-
-#### --pcloud-root-folder-id
-
-Fill in for rclone to use a non root folder as its starting point.
-
-Properties:
-
-- Config: root_folder_id
-- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID
-- Type: string
-- Default: "d0"
-
-#### --pcloud-hostname
-
-Hostname to connect to.
-
-This is normally set when rclone initially does the oauth connection,
-however you will need to set it by hand if you are using remote config
-with rclone authorize.
-
-
-Properties:
-
-- Config: hostname
-- Env Var: RCLONE_PCLOUD_HOSTNAME
-- Type: string
-- Default: "api.pcloud.com"
-- Examples:
- - "api.pcloud.com"
- - Original/US region
- - "eapi.pcloud.com"
- - EU region
-
-#### --pcloud-username
-
-Your pcloud username.
-
-This is only required when you want to use the cleanup command. Due to a bug
-in the pcloud API the required API does not support OAuth authentication so
-we have to rely on user password authentication for it.
-
-Properties:
-
-- Config: username
-- Env Var: RCLONE_PCLOUD_USERNAME
-- Type: string
-- Required: false
-
-#### --pcloud-password
-
-Your pcloud password.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_PCLOUD_PASSWORD
-- Type: string
-- Required: false
-
-
-
-# PikPak
-
-PikPak is [a private cloud drive](https://mypikpak.com/).
-
-Paths are specified as `remote:path`, and may be as deep as required, e.g. `remote:directory/subdirectory`.
-
-## Configuration
-
-Here is an example of making a remote for PikPak.
-
-First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-
-Enter name for new remote.
-name> remote
-
-Option Storage.
-Type of storage to configure.
-Choose a number from below, or type in your own value.
-XX / PikPak
-   \ (pikpak)
-Storage> XX
-
-Option user.
-Pikpak username.
-Enter a value.
-user> USERNAME
-
-Option pass.
-Pikpak password.
-Choose an alternative below.
-y) Yes, type in my own password
-g) Generate random password
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-
-Edit advanced config?
-y) Yes
-n) No (default)
-y/n>
-
-Configuration complete.
-Options:
-- type: pikpak
-- user: USERNAME
-- pass: *** ENCRYPTED ***
-- token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"}
-Keep this "remote" remote?
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-
-### Modification times and hashes
-
-PikPak keeps modification times on objects and updates them when uploading,
-but it does not support changing only the modification time.
-
-The MD5 hash algorithm is supported.
-
-
-### Standard options
-
-Here are the Standard options specific to pikpak (PikPak).
-
-#### --pikpak-user
-
-Pikpak username.
-
-Properties:
-
-- Config: user
-- Env Var: RCLONE_PIKPAK_USER
-- Type: string
-- Required: true
-
-#### --pikpak-pass
-
-Pikpak password.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: pass
-- Env Var: RCLONE_PIKPAK_PASS
-- Type: string
-- Required: true
-
-### Advanced options
-
-Here are the Advanced options specific to pikpak (PikPak).
-
-#### --pikpak-client-id
-
-OAuth Client Id.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_id
-- Env Var: RCLONE_PIKPAK_CLIENT_ID
-- Type: string
-- Required: false
-
-#### --pikpak-client-secret
-
-OAuth Client Secret.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_secret
-- Env Var: RCLONE_PIKPAK_CLIENT_SECRET
-- Type: string
-- Required: false
-
-#### --pikpak-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PIKPAK_TOKEN
-- Type: string
-- Required: false
-
-#### --pikpak-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PIKPAK_AUTH_URL
-- Type: string
-- Required: false
-
-#### --pikpak-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PIKPAK_TOKEN_URL
-- Type: string
-- Required: false
-
-#### --pikpak-root-folder-id
-
-ID of the root folder.
-Leave blank normally.
-
-Fill in for rclone to use a non root folder as its starting point.
-
-
-Properties:
-
-- Config: root_folder_id
-- Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID
-- Type: string
-- Required: false
-
-#### --pikpak-use-trash
-
-Send files to the trash instead of deleting permanently.
-
-Defaults to true, namely sending files to the trash.
-Use `--pikpak-use-trash=false` to delete files permanently instead.
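
For example, a permanent delete (the path is a placeholder) would be:

    rclone delete --pikpak-use-trash=false remote:path/to/dir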
-
-Properties:
-
-- Config: use_trash
-- Env Var: RCLONE_PIKPAK_USE_TRASH
-- Type: bool
-- Default: true
-
-#### --pikpak-trashed-only
-
-Only show files that are in the trash.
-
-This will show trashed files in their original directory structure.
-
-Properties:
-
-- Config: trashed_only
-- Env Var: RCLONE_PIKPAK_TRASHED_ONLY
-- Type: bool
-- Default: false
-
-#### --pikpak-hash-memory-limit
-
-Files bigger than this will be cached on disk to calculate hash if required.
-
-Properties:
-
-- Config: hash_memory_limit
-- Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT
-- Type: SizeSuffix
-- Default: 10Mi
-
-#### --pikpak-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: Encoding
-- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
-
-## Backend commands
-
-Here are the commands specific to the pikpak backend.
-
-Run them with
-
- rclone backend COMMAND remote:
-
-The help below will explain what arguments each command takes.
-
-See the [backend](https://rclone.org/commands/rclone_backend/) command for more
-info on how to pass options and arguments.
-
-These can be run on a running backend using the rc command
-[backend/command](https://rclone.org/rc/#backend-command).
-
-### addurl
-
-Add offline download task for url
-
- rclone backend addurl remote: [options] [<arguments>+]
-
-This command adds an offline download task for the url.
-
-Usage:
-
- rclone backend addurl pikpak:dirpath url
-
-Downloads will be stored in 'dirpath'. If 'dirpath' is invalid,
-the download will fall back to the default 'My Pack' folder.
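
A hypothetical invocation (the directory path and URL are placeholders):

    rclone backend addurl pikpak:mydir https://example.com/file.zip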
-
-
-### decompress
-
-Request decompress of a file/files in a folder
-
- rclone backend decompress remote: [options] [<arguments>+]
-
-This command requests decompression of a file or files in a folder.
-
-Usage:
-
- rclone backend decompress pikpak:dirpath {filename} -o password=password
- rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
-
-An optional argument 'filename' can be specified for a file located in
-'pikpak:dirpath'. You may want to pass '-o password=password' for
-password-protected files. Also, pass '-o delete-src-file' to delete
-source files after decompression finishes.
-
-Result:
-
- {
- "Decompressed": 17,
- "SourceDeleted": 0,
- "Errors": 0
- }
-
-
-
-
-## Limitations
-
-### Hashes may be empty
-
-PikPak supports MD5 hashes, but they are sometimes empty, especially for user-uploaded files.
-
-### Deleted files still visible with trashed-only
-
-Deleted files will still be visible with `--pikpak-trashed-only` even after the
-trash is emptied. This goes away after a few days.
-
-# premiumize.me
-
-Paths are specified as `remote:path`
-
-Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-
-## Configuration
-
-The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
-need to do in your browser. `rclone config` walks you through it.
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / premiumize.me
-   \ "premiumizeme"
-[snip]
-Storage> premiumizeme
-** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
-
-Remote config
-Use web browser to automatically authenticate rclone with remote?
- * Say Y if the machine running rclone has a web browser you can use
- * Say N if running rclone on a (remote) machine without web browser access
-If not sure try Y. If Y failed, try N.
-y) Yes
-n) No
-y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
-Log in and authorize rclone for access
-Waiting for code...
-Got code
---------------------
-[remote]
-type = premiumizeme
-token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d>
-
-See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
-
-Note that rclone runs a webserver on your local machine to collect the
-token as returned from premiumize.me. This only runs from the moment it opens
-your browser to the moment you get back the verification code. This
-is on `http://127.0.0.1:53682/` and it may require you to unblock
-it temporarily if you are running a host firewall.
-
-Once configured you can then use `rclone` like this,
-
-List directories in top level of your premiumize.me
-
- rclone lsd remote:
-
-List all the files in your premiumize.me
-
- rclone ls remote:
-
-To copy a local directory to a premiumize.me directory called backup
-
- rclone copy /home/source remote:backup
-
-### Modification times and hashes
-
-premiumize.me does not support modification times or hashes, therefore
-syncing will default to `--size-only` checking. Note that using
-`--update` will work.
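
For example (paths are placeholders), a copy relying on size and upload-time checks rather than modification times:

    rclone copy --update /home/source remote:backup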
-
-### Restricted filename characters
-
-In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
-the following characters are also replaced:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| \ | 0x5C | ＼ |
-| " | 0x22 | ＂ |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in JSON strings.
-
-
-### Standard options
-
-Here are the Standard options specific to premiumizeme (premiumize.me).
-
-#### --premiumizeme-client-id
-
-OAuth Client Id.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_id
-- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID
-- Type: string
-- Required: false
-
-#### --premiumizeme-client-secret
-
-OAuth Client Secret.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_secret
-- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET
-- Type: string
-- Required: false
-
-#### --premiumizeme-api-key
-
-API Key.
-
-This is not normally used - use oauth instead.
-
-
-Properties:
-
-- Config: api_key
-- Env Var: RCLONE_PREMIUMIZEME_API_KEY
-- Type: string
-- Required: false
-
-### Advanced options
-
-Here are the Advanced options specific to premiumizeme (premiumize.me).
-
-#### --premiumizeme-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PREMIUMIZEME_TOKEN
-- Type: string
-- Required: false
-
-#### --premiumizeme-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL
-- Type: string
-- Required: false
-
-#### --premiumizeme-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL
-- Type: string
-- Required: false
-
-#### --premiumizeme-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_PREMIUMIZEME_ENCODING
-- Type: Encoding
-- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot
-
-
-
-## Limitations
-
-Note that premiumize.me is case insensitive so you can't have a file called
-"Hello.doc" and one called "hello.doc".
-
-premiumize.me file names can't have the `\` or `"` characters in.
-rclone maps these to and from identical looking unicode equivalents
-`＼` and `＂`.
-
-premiumize.me only supports filenames up to 255 characters in length.
-
-# Proton Drive
-
-[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault
- for your files that protects your data.
-
-This is an rclone backend for Proton Drive which supports the file transfer
-features of Proton Drive using the same client-side encryption.
-
-Because Proton Drive doesn't publish its API documentation, this
-backend is implemented on a best-effort basis by reading the open-sourced
-client source code and observing the Proton Drive traffic in the browser.
-
-**NB** This backend is currently in Beta. It is believed to be correct
-and all the integration tests pass. However, as the Proton Drive protocol
-has evolved over time, there may be accounts it is not compatible
-with. Please [post on the rclone forum](https://forum.rclone.org/) if
-you find an incompatibility.
-
-Paths are specified as `remote:path`
-
-Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-
-## Configurations
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-```
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / Proton Drive
-   \ "Proton Drive"
-[snip]
-Storage> protondrive
-User name
-user> you@protonmail.com
-Password.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank
-y/g/n> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Option 2fa.
-2FA code (if the account requires one)
-Enter a value. Press Enter to leave empty.
-2fa> 123456
-Remote config
---------------------
-[remote]
-type = protondrive
-user = you@protonmail.com
-pass = *** ENCRYPTED ***
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-```
-
-**NOTE:** The Proton Drive encryption keys need to have been already generated
-after a regular login via the browser, otherwise attempting to use the
-credentials in `rclone` will fail.
-
-Once configured you can then use `rclone` like this,
-
-List directories in top level of your Proton Drive
-
- rclone lsd remote:
-
-List all the files in your Proton Drive
-
- rclone ls remote:
-
-To copy a local directory to a Proton Drive directory called backup
-
- rclone copy /home/source remote:backup
-
-### Modification times and hashes
-
-Proton Drive Bridge does not support updating modification times yet.
-
-The SHA1 hash algorithm is supported.
-
-### Restricted filename characters
-
-Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
-right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
-
-### Duplicated files
-
-Proton Drive cannot have two files with exactly the same name and path. If
-such a conflict occurs then, depending on the advanced config, the file might
-or might not be overwritten.
-
-### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
-
-Please set your mailbox password in the advanced config section.
-
-### Caching
-
-The cache is currently built for the case where rclone is the only instance
-performing operations on the mount point. The event system, which is the
-Proton API mechanism that provides visibility of what has changed on the
-drive, is yet to be implemented, so updates from other clients won't be
-reflected in the cache. Thus, if there are concurrent clients accessing the
-same mount point, the cache may end up serving stale data.
-
-
-### Standard options
-
-Here are the Standard options specific to protondrive (Proton Drive).
-
-#### --protondrive-username
-
-The username of your proton account
-
-Properties:
-
-- Config: username
-- Env Var: RCLONE_PROTONDRIVE_USERNAME
-- Type: string
-- Required: true
-
-#### --protondrive-password
-
-The password of your proton account.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: password
-- Env Var: RCLONE_PROTONDRIVE_PASSWORD
-- Type: string
-- Required: true
-
-#### --protondrive-2fa
-
-The 2FA code
-
-The value can also be provided with --protondrive-2fa=000000
-
-The 2FA code of your proton drive account if the account is set up with
-two-factor authentication
-
-Properties:
-
-- Config: 2fa
-- Env Var: RCLONE_PROTONDRIVE_2FA
-- Type: string
-- Required: false
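-
-For example, if your account has 2FA enabled you could supply the current
-code on the command line like this (the code value is just a placeholder):
-
-    rclone lsd remote: --protondrive-2fa=123456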
-
-### Advanced options
-
-Here are the Advanced options specific to protondrive (Proton Drive).
-
-#### --protondrive-mailbox-password
-
-The mailbox password of your two-password proton account.
-
-For more information regarding the mailbox password, please check the
-following official knowledge base article:
-https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
-
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: mailbox_password
-- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD
-- Type: string
-- Required: false
-
-#### --protondrive-client-uid
-
-Client uid key (internal use only)
-
-Properties:
-
-- Config: client_uid
-- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID
-- Type: string
-- Required: false
-
-#### --protondrive-client-access-token
-
-Client access token key (internal use only)
-
-Properties:
-
-- Config: client_access_token
-- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN
-- Type: string
-- Required: false
-
-#### --protondrive-client-refresh-token
-
-Client refresh token key (internal use only)
-
-Properties:
-
-- Config: client_refresh_token
-- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN
-- Type: string
-- Required: false
-
-#### --protondrive-client-salted-key-pass
-
-Client salted key pass key (internal use only)
-
-Properties:
-
-- Config: client_salted_key_pass
-- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
-- Type: string
-- Required: false
-
-#### --protondrive-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_PROTONDRIVE_ENCODING
-- Type: Encoding
-- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
-
-#### --protondrive-original-file-size
-
-Return the file size before encryption
-
-The size of the encrypted file will be different from (bigger than) the
-original file size. Unless there is a reason to return the file size
-after encryption is performed, this option should be set to true, as
-features like Open(), which need to be supplied with the original content
-size, will otherwise fail to operate properly.
-
-Properties:
-
-- Config: original_file_size
-- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
-- Type: bool
-- Default: true
-
-#### --protondrive-app-version
-
-The app version string
-
-The app version string indicates the client that is currently performing
-the API request. This information is required and will be sent with every
-API request.
-
-Properties:
-
-- Config: app_version
-- Env Var: RCLONE_PROTONDRIVE_APP_VERSION
-- Type: string
-- Default: "macos-drive@1.0.0-alpha.1+rclone"
-
-#### --protondrive-replace-existing-draft
-
-Create a new revision when filename conflict is detected
-
-When a file upload is cancelled or failed before completion, a draft will be
-created and the subsequent upload of the same file to the same location will be
-reported as a conflict.
-
-The value can also be set by --protondrive-replace-existing-draft=true
-
-If the option is set to true, the draft will be replaced and then the upload
-operation will restart. If there are other clients also uploading to the same
-file location at the same time, the behavior is currently unknown. This
-option needs to be set to true for the integration tests.
-If the option is set to false, an error "a draft exist - usually this means a
-file is being uploaded at another client, or, there was a failed upload attempt"
-will be returned, and no upload will happen.
-
-Properties:
-
-- Config: replace_existing_draft
-- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
-- Type: bool
-- Default: false
-
-#### --protondrive-enable-caching
-
-Caches the files and folders metadata to reduce API calls
-
-Notice: If you are mounting ProtonDrive as a VFS, please disable this feature,
-as the current implementation doesn't update or clear the cache when there are
-external changes.
-
-The files and folders on ProtonDrive are represented as links with keyrings,
-which can be cached to improve performance and be friendly to the API server.
-
-The cache is currently built for the case where rclone is the only instance
-performing operations on the mount point. The event system, which is the
-Proton API mechanism that provides visibility of what has changed on the
-drive, is yet to be implemented, so updates from other clients won't be
-reflected in the cache. Thus, if there are concurrent clients accessing the
-same mount point, the cache may end up serving stale data.
-
-Properties:
-
-- Config: enable_caching
-- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING
-- Type: bool
-- Default: true
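-
-For instance, if you are mounting Proton Drive as a VFS you would likely want
-to disable the cache along these lines (the mount point is illustrative):
-
-    rclone mount remote: /mnt/protondrive --protondrive-enable-caching=false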
-
-
-
-## Limitations
-
-This backend uses the
-[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which
-is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a
-fork of the [official repo](https://github.com/ProtonMail/go-proton-api).
-
-There is no official API documentation available from Proton Drive. But, thanks
-to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api)
-and the web, iOS, and Android client codebases, we don't need to completely
-reverse engineer the APIs by observing the web client traffic!
-
-[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic
-building blocks of API calls and error handling, such as 429 exponential
-back-off, but it is pretty much just a barebone interface to the Proton API.
-For example, the encryption and decryption of the Proton Drive file are not
-provided in this library.
-
-The Proton-API-Bridge attempts to bridge the gap so that rclone can be built
-on top of it quickly. This codebase handles the intricate tasks before and
-after calling Proton APIs, particularly the complex encryption scheme,
-allowing developers to implement features for other software on top of it.
-There are likely quite a few errors in this library, as there isn't official
-documentation available.
-
-# put.io
-
-Paths are specified as `remote:path`
-
-put.io paths may be as deep as required, e.g.
-`remote:directory/subdirectory`.
-
-## Configuration
-
-The initial setup for put.io involves getting a token from put.io
-which you need to do in your browser. `rclone config` walks you
-through it.
-
-Here is an example of how to make a remote called `remote`. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
-```
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> putio
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / Put.io
-   \ "putio"
-[snip]
-Storage> putio
-** See help for putio backend at: https://rclone.org/putio/ **
-
-Remote config
-Use web browser to automatically authenticate rclone with remote?
- * Say Y if the machine running rclone has a web browser you can use
- * Say N if running rclone on a (remote) machine without web browser access
-If not sure try Y. If Y failed, try N.
-y) Yes
-n) No
-y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
-Log in and authorize rclone for access
-Waiting for code...
-Got code
---------------------
-[putio]
-type = putio
-token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-Current remotes:
-
-Name                 Type
-====                 ====
-putio                putio
-```
-
-See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
-
-Note that rclone runs a webserver on your local machine to collect the
-token as returned from put.io if using web browser to automatically
-authenticate. This only
-runs from the moment it opens your browser to the moment you get back
-the verification code. This is on `http://127.0.0.1:53682/` and it
-may require you to unblock it temporarily if you are running a host
-firewall, or use manual mode.
-
-You can then use it like this,
-
-List directories in top level of your put.io
-
- rclone lsd remote:
-
-List all the files in your put.io
-
- rclone ls remote:
-
-To copy a local directory to a put.io directory called backup
-
- rclone copy /home/source remote:backup
-
-### Restricted filename characters
-
-In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
-the following characters are also replaced:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| \ | 0x5C | ＼ |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in JSON strings.
-
-
-### Standard options
-
-Here are the Standard options specific to putio (Put.io).
-
-#### --putio-client-id
-
-OAuth Client Id.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_id
-- Env Var: RCLONE_PUTIO_CLIENT_ID
-- Type: string
-- Required: false
-
-#### --putio-client-secret
-
-OAuth Client Secret.
-
-Leave blank normally.
-
-Properties:
-
-- Config: client_secret
-- Env Var: RCLONE_PUTIO_CLIENT_SECRET
-- Type: string
-- Required: false
-
-### Advanced options
-
-Here are the Advanced options specific to putio (Put.io).
-
-#### --putio-token
-
-OAuth Access Token as a JSON blob.
-
-Properties:
-
-- Config: token
-- Env Var: RCLONE_PUTIO_TOKEN
-- Type: string
-- Required: false
-
-#### --putio-auth-url
-
-Auth server URL.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: auth_url
-- Env Var: RCLONE_PUTIO_AUTH_URL
-- Type: string
-- Required: false
-
-#### --putio-token-url
-
-Token server url.
-
-Leave blank to use the provider defaults.
-
-Properties:
-
-- Config: token_url
-- Env Var: RCLONE_PUTIO_TOKEN_URL
-- Type: string
-- Required: false
-
-#### --putio-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_PUTIO_ENCODING
-- Type: Encoding
-- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
-
-
-
-## Limitations
-
-put.io has rate limiting. When you hit a limit, rclone automatically
-retries after waiting the amount of time requested by the server.
-
-If you want to avoid ever hitting these limits, you may use the
-`--tpslimit` flag with a low number. Note that the imposed limits
-may be different for different operations, and may change over time.
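-
-For example, a conservative copy capped at 2 transactions per second might
-look like this (the exact value is a guess and will depend on your account):
-
-    rclone copy --tpslimit 2 /home/source remote:backup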
-
-# Seafile
-
-This is a backend for the [Seafile](https://www.seafile.com/) storage service:
-- It works with both the free community edition and the professional edition.
-- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
-- Encrypted libraries are also supported.
-- It supports 2FA-enabled users.
-- Using a Library API Token is **not** supported.
-
-## Configuration
-
-There are two distinct modes in which you can set up your remote:
-- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
-Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
-- you point your remote to a specific library during the configuration:
-Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)
-
-### Configuration in root mode
-
-Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run
-
- rclone config
-
-This will guide you through an interactive setup process. To authenticate
-you will need the URL of your server, your email (or username) and your password.
-
-```
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> seafile
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / Seafile
-   \ "seafile"
-[snip]
-Storage> seafile
-** See help for seafile backend at: https://rclone.org/seafile/ **
-
-URL of seafile host to connect to
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
- 1 / Connect to cloud.seafile.com
-   \ "https://cloud.seafile.com/"
-url> http://my.seafile.server/
-User name (usually email address)
-Enter a string value. Press Enter for the default ("").
-user> me@example.com
-Password
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Two-factor authentication ('true' if the account has 2FA enabled)
-Enter a boolean value (true or false). Press Enter for the default ("false").
-2fa> false
-Name of the library.
-Leave blank to access all non-encrypted libraries.
-Enter a string value. Press Enter for the default ("").
-library>
-Library password (for encrypted libraries only).
-Leave blank if you pass it through the command line.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g/n> n
-Edit advanced config? (y/n)
-y) Yes
-n) No (default)
-y/n> n
-Remote config
-Two-factor authentication is not enabled on this account.
---------------------
-[seafile]
-type = seafile
-url = http://my.seafile.server/
-user = me@example.com
-pass = *** ENCRYPTED ***
-2fa = false
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-```
-
-This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this:
-
-See all libraries
-
- rclone lsd seafile:
-
-Create a new library
-
- rclone mkdir seafile:library
-
-List the contents of a library
-
- rclone ls seafile:library
-
-Sync `/home/local/directory` to the remote library, deleting any
-excess files in the library.
-
- rclone sync --interactive /home/local/directory seafile:library
-
-### Configuration in library mode
-
-Here's an example of a configuration in library mode with a user that has
-two-factor authentication enabled. You will be asked for your 2FA code at the
-end of the configuration, and rclone will then attempt to authenticate you:
-
-```
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> seafile
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[snip]
-XX / Seafile
-   \ "seafile"
-[snip]
-Storage> seafile
-** See help for seafile backend at: https://rclone.org/seafile/ **
-
-URL of seafile host to connect to
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
- 1 / Connect to cloud.seafile.com
-   \ "https://cloud.seafile.com/"
-url> http://my.seafile.server/
-User name (usually email address)
-Enter a string value. Press Enter for the default ("").
-user> me@example.com
-Password
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Two-factor authentication ('true' if the account has 2FA enabled)
-Enter a boolean value (true or false). Press Enter for the default ("false").
-2fa> true
-Name of the library.
-Leave blank to access all non-encrypted libraries.
-Enter a string value. Press Enter for the default ("").
-library> My Library
-Library password (for encrypted libraries only).
-Leave blank if you pass it through the command line.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank (default)
-y/g/n> n
-Edit advanced config? (y/n)
-y) Yes
-n) No (default)
-y/n> n
-Remote config
-Two-factor authentication: please enter your 2FA code
-2fa code> 123456
-Authenticating...
-Success!
---------------------
-[seafile]
-type = seafile
-url = http://my.seafile.server/
-user = me@example.com
-pass =
-2fa = true
-library = My Library
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-```
-
-You'll notice your password is blank in the configuration. This is because we only need the password to authenticate you once.
-
-You specified `My Library` during the configuration. The root of the remote is pointing at the
-root of the library `My Library`:
-
-See all files in the library:
-
- rclone lsd seafile:
-
-Create a new directory inside the library
-
- rclone mkdir seafile:directory
-
-List the contents of a directory
-
- rclone ls seafile:directory
-
-Sync `/home/local/directory` to the remote library, deleting any
-excess files in the library.
-
- rclone sync --interactive /home/local/directory seafile:
-
-
-### --fast-list
-
-Seafile version 7+ supports `--fast-list` which allows you to use fewer
-transactions in exchange for more memory. See the [rclone
-docs](https://rclone.org/docs/#fast-list) for more details.
-Please note this is not supported on seafile server version 6.x
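-
-For example, on a seafile 7+ server you could list a library recursively with
-fewer transactions like this:
-
-    rclone ls --fast-list seafile:library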
-
-
-### Restricted filename characters
-
-In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
-the following characters are also replaced:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| / | 0x2F | ／ |
-| " | 0x22 | ＂ |
-| \ | 0x5C | ＼ |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in JSON strings.
-
-### Seafile and rclone link
-
-Rclone supports generating share links for non-encrypted libraries only.
-They can either be for a file or a directory:
-
-```
-rclone link seafile:seafile-tutorial.doc
-http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
-```
-
-or if run on a directory you will get:
-
-```
-rclone link seafile:dir
-http://my.seafile.server/d/9ea2455f6f55478bbb0d/
-```
-
-Please note a share link is unique for each file or directory. If you run a link command on a file/dir
-that has already been shared, you will get the exact same link.
-
-### Compatibility
-
-It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
-- 6.3.4 community edition
-- 7.0.5 community edition
-- 7.1.3 community edition
-- 9.0.10 community edition
-
-Versions below 6.0 are not supported.
-Versions between 6.0 and 6.3 haven't been tested and might not work properly.
-
-Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server.
-
-
-### Standard options
-
-Here are the Standard options specific to seafile (seafile).
-
-#### --seafile-url
-
-URL of seafile host to connect to.
-
-Properties:
-
-- Config: url
-- Env Var: RCLONE_SEAFILE_URL
-- Type: string
-- Required: true
-- Examples:
- - "https://cloud.seafile.com/"
- - Connect to cloud.seafile.com.
-
-#### --seafile-user
-
-User name (usually email address).
-
-Properties:
-
-- Config: user
-- Env Var: RCLONE_SEAFILE_USER
-- Type: string
-- Required: true
-
-#### --seafile-pass
-
-Password.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: pass
-- Env Var: RCLONE_SEAFILE_PASS
-- Type: string
-- Required: false
-
-#### --seafile-2fa
-
-Two-factor authentication ('true' if the account has 2FA enabled).
-
-Properties:
-
-- Config: 2fa
-- Env Var: RCLONE_SEAFILE_2FA
-- Type: bool
-- Default: false
-
-#### --seafile-library
-
-Name of the library.
-
-Leave blank to access all non-encrypted libraries.
-
-Properties:
-
-- Config: library
-- Env Var: RCLONE_SEAFILE_LIBRARY
-- Type: string
-- Required: false
-
-#### --seafile-library-key
-
-Library password (for encrypted libraries only).
-
-Leave blank if you pass it through the command line.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
-Properties:
-
-- Config: library_key
-- Env Var: RCLONE_SEAFILE_LIBRARY_KEY
-- Type: string
-- Required: false
-
-#### --seafile-auth-token
-
-Authentication token.
-
-Properties:
-
-- Config: auth_token
-- Env Var: RCLONE_SEAFILE_AUTH_TOKEN
-- Type: string
-- Required: false
-
-### Advanced options
-
-Here are the Advanced options specific to seafile (seafile).
-
-#### --seafile-create-library
-
-Should rclone create a library if it doesn't exist.
-
-Properties:
-
-- Config: create_library
-- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY
-- Type: bool
-- Default: false
-
-#### --seafile-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_SEAFILE_ENCODING
-- Type: Encoding
-- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
-
-
-
-# SFTP
-
-SFTP is the [Secure (or SSH) File Transfer
-Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
-
-The SFTP backend can be used with a number of different providers:
-
-
-- Hetzner Storage Box
-- rsync.net
-
-
-SFTP runs over SSH v2 and is installed as standard with most modern
-SSH installations.
-
-Paths are specified as `remote:path`. If the path does not begin with
-a `/` it is relative to the home directory of the user. An empty path
-`remote:` refers to the user's home directory. For example, `rclone lsd remote:`
-would list the home directory of the user configured in the rclone remote config
-(i.e. `/home/sftpuser`). However, `rclone lsd remote:/` would list the root
-directory for the remote machine (i.e. `/`).
-
-Note that some SFTP servers will need the leading / - Synology is a
-good example of this. rsync.net and Hetzner, on the other hand, require
-users to OMIT the leading /.
-
-Note that by default rclone will try to execute shell commands on
-the server, see [shell access considerations](#shell-access-considerations).
-
-## Configuration
-
-Here is an example of making an SFTP configuration. First run
-
- rclone config
-
-This will guide you through an interactive setup process.
-
-```
-No remotes found, make a new one?
-n) New remote
-s) Set configuration password
-q) Quit config
-n/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
-[snip]
-XX / SSH/SFTP
-   \ "sftp"
-[snip]
-Storage> sftp
-SSH host to connect to
-Choose a number from below, or type in your own value
- 1 / Connect to example.com
-   \ "example.com"
-host> example.com
-SSH username
-Enter a string value. Press Enter for the default ("$USER").
-user> sftpuser
-SSH port number
-Enter a signed integer. Press Enter for the default (22).
-port>
-SSH password, leave blank to use ssh-agent.
-y) Yes type in my own password
-g) Generate random password
-n) No leave this optional password blank
-y/g/n> n
-Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
-key_file>
-Remote config
---------------------
-[remote]
-host = example.com
-user = sftpuser
-port =
-pass =
-key_file =
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-```
-
-This remote is called `remote` and can now be used like this:
-
-See all directories in the home directory
-
- rclone lsd remote:
-
-See all directories in the root directory
-
- rclone lsd remote:/
-
-Make a new directory
-
- rclone mkdir remote:path/to/directory
-
-List the contents of a directory
-
- rclone ls remote:path/to/directory
-
-Sync `/home/local/directory` to the remote directory, deleting any
-excess files in the directory.
-
- rclone sync --interactive /home/local/directory remote:directory
-
-Mount the remote path `/srv/www-data/` to the local path
-`/mnt/www-data`
-
- rclone mount remote:/srv/www-data/ /mnt/www-data
-
-### SSH Authentication
-
-The SFTP remote supports three authentication methods:
-
- * Password
- * Key file, including certificate signed keys
- * ssh-agent
-
-Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`.
-Only unencrypted OpenSSH or PEM encrypted files are supported.
-
-The key can be specified either in an external file (key_file) or inline in
-the rclone config file (key_pem). If using key_pem in the config file, the
-entry should be on a single line with newline characters ('\n' or '\r\n')
-separating the lines, i.e.
-
- key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
-
-This will generate it correctly for key_pem for use in the config:
-
- awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
-
-If you don't specify `pass`, `key_file`, `key_pem`, or `ask_password` then
-rclone will attempt to contact an ssh-agent. You can also specify `key_use_agent`
-to force the usage of an ssh-agent. In this case `key_file` or `key_pem` can
-also be specified to force the usage of a specific key in the ssh-agent.
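-
-For example, here is a minimal config sketch forcing ssh-agent authentication
-with one specific key (host, user and key path are illustrative):
-
-```
-# illustrative values - adjust host, user and key path to your setup
-[remote]
-type = sftp
-host = example.com
-user = sftpuser
-key_use_agent = true
-key_file = ~/.ssh/id_rsa
-```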
-
-Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.
-
-If you set the `ask_password` option, rclone will prompt for a password when
-needed and no password has been configured.
-
-#### Certificate-signed keys
-
-With traditional key-based authentication, you configure your private key only,
-and the public key built into it will be used during the authentication process.
-
-If you have a certificate you may use it to sign your public key, creating a
-separate SSH user certificate that should be used instead of the plain public key
-extracted from the private key. Then you must provide the path to the
-user certificate public key file in `pubkey_file`.
-
-Note: This is not the traditional public key paired with your private key,
-typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in
-`pubkey_file` will not work.
-
-Example:
-
-```
-[remote]
-type = sftp
-host = example.com
-user = sftpuser
-key_file = ~/id_rsa
-pubkey_file = ~/id_rsa-cert.pub
-```
-
-If you concatenate a cert with a private key then you can specify the
-merged file in both places.
-
-Note: the cert must come first in the file. e.g.
-
-```
-cat id_rsa-cert.pub id_rsa > merged_key
-```
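-
-You could then point both options at the merged file, e.g. (a sketch reusing
-the example names from above):
-
-```
-# sketch - merged_key is the file created by the cat command above
-[remote]
-type = sftp
-host = example.com
-user = sftpuser
-key_file = merged_key
-pubkey_file = merged_key
-```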
-
-### Host key validation
-
-By default rclone will not check the server's host key for validation. This
-can allow an attacker to replace a server with their own and if you use
-password authentication then this can lead to that password being exposed.
-
-Host key matching, using standard `known_hosts` files can be turned on by
-enabling the `known_hosts_file` option. This can point to the file maintained
-by `OpenSSH` or can point to a unique file.
-
-e.g. using the OpenSSH `known_hosts` file:
-
-```
-[remote]
-type = sftp
-host = example.com
-known_hosts_file = ~/.ssh/known_hosts
-```
+domain>
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+tenant>
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+tenant_id>
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+tenant_domain>
+Region name - optional (OS_REGION_NAME)
+region>
+Storage URL - optional (OS_STORAGE_URL)
+storage_url>
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+auth_token>
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+auth_version>
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+Choose a number from below, or type in your own value
+ 1 / Public (default, choose this if not sure)
+ \ "public"
+ 2 / Internal (use internal service net)
+ \ "internal"
+ 3 / Admin
+ \ "admin"
+endpoint_type>
+Remote config
+--------------------
+[test]
+env_auth = true
+user =
+key =
+auth =
+user_id =
+domain =
+tenant =
+tenant_id =
+tenant_domain =
+region =
+storage_url =
+auth_token =
+auth_version =
+endpoint_type =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This remote is called `remote` and can now be used like this:
See all containers
+rclone lsd remote:
+Make a new container
+rclone mkdir remote:container
+List the contents of a container
+rclone ls remote:container
+Sync /home/local/directory to the remote container, deleting any excess files in the container.
rclone sync --interactive /home/local/directory remote:container
+An OpenStack credentials file typically looks something like this (without the comments)
+export OS_AUTH_URL=https://a.provider.net/v2.0
+export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export OS_TENANT_NAME="1234567890123456"
+export OS_USERNAME="123abc567xy"
+echo "Please enter your OpenStack Password: "
+read -sr OS_PASSWORD_INPUT
+export OS_PASSWORD=$OS_PASSWORD_INPUT
+export OS_REGION_NAME="SBG1"
+if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
+The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above.
[remote]
+type = swift
+user = $OS_USERNAME
+key = $OS_PASSWORD
+auth = $OS_AUTH_URL
+tenant = $OS_TENANT_NAME
+Note that you may (or may not) need to set region too - try without first.
If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables.
+When you run through the config, make sure you choose true for env_auth and leave everything else blank.
rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.
+If your OpenStack installation uses a non-standard authentication method that might not be yet supported by rclone or the underlying swift library, you can authenticate externally (e.g. calling manually the openstack commands to get a token). Then, you just need to pass the two configuration variables auth_token and storage_url. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.
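+A config sketch for such an externally authenticated setup might look like the following (the token and storage URL values are placeholders you would obtain from your own openstack commands):
+[myremote]
+type = swift
+auth_token = gAAAAAB-placeholder-token
+storage_url = https://storage.example.com/v1/AUTH_tenant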
You can use rclone with swift without a config file, if desired, like this:
+source openstack-credentials-file
+export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+rclone lsd myremote:
+This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
+For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update along with --use-server-modtime, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
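+For example, a sketch of such an upload, which skips the extra metadata reads by trusting the server-side upload time:
+    rclone copy --update --use-server-modtime /home/local/directory remote:container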
The modified time is stored as metadata on the object as X-Object-Meta-Mtime as floating point since the epoch accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient amongst others) for storing the modification time for an object.
+The MD5 hash algorithm is supported.
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| NUL       | 0x00  | ␀           |
+| /         | 0x2F  | ／          |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
+Get swift credentials from environment variables in standard OpenStack form.
+Properties:
+User name to log in (OS_USERNAME).
+Properties:
+API key or password (OS_PASSWORD).
+Properties:
+Authentication URL for server (OS_AUTH_URL).
+Properties:
+User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+Properties:
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+Properties:
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME).
+Properties:
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID).
+Properties:
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME).
+Properties:
+Region name - optional (OS_REGION_NAME).
+Properties:
+Storage URL - optional (OS_STORAGE_URL).
+Properties:
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN).
+Properties:
+Application Credential ID (OS_APPLICATION_CREDENTIAL_ID).
+Properties:
+Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME).
+Properties:
+Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET).
+Properties:
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION).
+Properties:
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE).
+Properties:
+The storage policy to use when creating a new container.
+This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.
+Properties:
+Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
+If true avoid calling abort upload on a failure.
+It should be set to true for resuming uploads across different sessions.
+Properties:
+Above this size files will be chunked into a _segments container.
+Above this size files will be chunked into a _segments container. The default for this is 5 GiB which is its maximum value.
+Properties:
+Don't chunk files during streaming upload.
+When doing streaming uploads (e.g. using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.
+This will limit the maximum upload size to 5 GiB. However non chunked files are easier to deal with and have an MD5SUM.
+Rclone will still chunk files bigger than chunk_size when doing normal copy operations.
+Properties:
+Disable support for static and dynamic large objects
+Swift cannot transparently store files bigger than 5 GiB. There are two schemes for doing that, static or dynamic large objects, and the API does not allow rclone to determine whether a file is a static or dynamic large object without doing a HEAD on the object. Since these need to be treated differently, this means rclone has to issue HEAD requests for objects for example when reading checksums.
+When no_large_objects is set, rclone will assume that there are no static or dynamic large objects stored. This means it can stop doing the extra HEAD calls which in turn increases performance greatly especially when doing a swift to swift transfer with --checksum set.
Setting this option implies no_chunk and also that no files will be uploaded in chunks, so files bigger than 5 GiB will just fail on upload.
If you set this option and there are static or dynamic large objects, then this will give incorrect hashes for them. Downloads will succeed, but other operations such as Remove and Copy will fail.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Description of the remote
+Properties:
+The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
+Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
+So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies flag.
This may also be caused by specifying the region when you shouldn't have (e.g. OVH).
+This is most likely caused by forgetting to specify your tenant when setting up a swift remote.
+To use rclone with OVH cloud archive, first use rclone config to set up a swift backend with OVH, choosing pca as the storage_policy.
Uploading objects to OVH cloud archive is no different to object storage; you simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.
+To retrieve objects use rclone copy as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:
2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)
Rclone will wait for the time specified then retry the copy.
+# pCloud
+
+Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Pcloud
+ \ "pcloud"
+[snip]
+Storage> pcloud
+Pcloud App Client Id - leave blank normally.
+client_id>
+Pcloud App Client Secret - leave blank normally.
+client_secret>
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
[remote]
+client_id =
+client_secret =
+token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
+Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
List directories in top level of your pCloud
+rclone lsd remote:
+List all the files in your pCloud
+rclone ls remote:
+To copy a local directory to a pCloud directory called backup
+rclone copy /home/source remote:backup
+pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a modification time pCloud requires the object to be re-uploaded.
+pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 hashes in the EU region, so you can use the --checksum flag.
In addition to the default restricted characters set the following characters are also replaced:
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| \         | 0x5C  | ＼          |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup can be used to empty the trash.
Due to an API limitation, the rclone cleanup command will only work if you set your username and password in the advanced options for this backend. Since we generally want to avoid storing user passwords in the rclone config file, we advise you to only set this up if you need the rclone cleanup command to work.
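+Assuming you have set username and password in the advanced options, emptying the trash is then something like:
+    rclone cleanup remote: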
You can set the root_folder_id for rclone. This is the directory (identified by its Folder ID) that rclone considers to be the root of your pCloud drive.
Normally you will leave this blank and rclone will determine the correct root to use itself.
+However you can set this to restrict rclone to a specific folder hierarchy.
+In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.
So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
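+A config sketch using the example folder ID from above as the starting point:
+[remote]
+type = pcloud
+root_folder_id = 5xxxxxxxx8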
Here are the Standard options specific to pcloud (Pcloud).
+OAuth Client Id.
+Leave blank normally.
+Properties:
+OAuth Client Secret.
+Leave blank normally.
+Properties:
+Here are the Advanced options specific to pcloud (Pcloud).
+OAuth Access Token as a JSON blob.
+Properties:
+Auth server URL.
+Leave blank to use the provider defaults.
+Properties:
+Token server url.
+Leave blank to use the provider defaults.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Fill in for rclone to use a non root folder as its starting point.
+Properties:
+Hostname to connect to.
+This is normally set when rclone initially does the oauth connection, however you will need to set it by hand if you are using remote config with rclone authorize.
+Properties:
+Your pcloud username.
+This is only required when you want to use the cleanup command. Due to a bug in the pcloud API, the API endpoint required by cleanup does not support OAuth authentication, so we have to rely on user password authentication for it.
+Properties:
+Your pcloud password.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+Description of the remote
+Properties:
+PikPak is a private cloud drive.
+Paths are specified as remote:path, and may be as deep as required, e.g. remote:directory/subdirectory.
Here is an example of making a remote for PikPak.
+First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / PikPak
+ \ (pikpak)
+Storage> XX
+
+Option user.
+Pikpak username.
+Enter a value.
+user> USERNAME
+
+Option pass.
+Pikpak password.
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n>
+
+Configuration complete.
+Options:
+- type: pikpak
+- user: USERNAME
+- pass: *** ENCRYPTED ***
+- token: {"access_token":"eyJ...","token_type":"Bearer","refresh_token":"os...","expiry":"2023-01-26T18:54:32.170582647+09:00"}
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+PikPak keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time.
+The MD5 hash algorithm is supported.
+Here are the Standard options specific to pikpak (PikPak).
+Pikpak username.
+Properties:
+Pikpak password.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+Here are the Advanced options specific to pikpak (PikPak).
+OAuth Client Id.
+Leave blank normally.
+Properties:
+OAuth Client Secret.
+Leave blank normally.
+Properties:
+OAuth Access Token as a JSON blob.
+Properties:
+Auth server URL.
+Leave blank to use the provider defaults.
+Properties:
+Token server url.
+Leave blank to use the provider defaults.
+Properties:
+ID of the root folder. Leave blank normally.
+Fill in for rclone to use a non root folder as its starting point.
+Properties:
+Send files to the trash instead of deleting permanently.
+Defaults to true, namely sending files to the trash. Use --pikpak-use-trash=false to delete files permanently instead.
Properties:
+Only show files that are in the trash.
+This will show trashed files in their original directory structure.
+Properties:
+Files bigger than this will be cached on disk to calculate hash if required.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Description of the remote
+Properties:
+Here are the commands specific to the pikpak backend.
+Run them with
+rclone backend COMMAND remote:
+The help below will explain what arguments each command takes.
+See the backend command for more info on how to pass options and arguments.
+These can be run on a running backend using the rc command backend/command.
+Add offline download task for url
+rclone backend addurl remote: [options] [<arguments>+]
+This command adds offline download task for url.
+Usage:
+rclone backend addurl pikpak:dirpath url
+Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, the download will fall back to the default 'My Pack' folder.
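For example, a hypothetical invocation (the URL and directory name are placeholders):
rclone backend addurl pikpak:mydownloads https://example.com/file.zip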
+Request decompress of a file/files in a folder
+rclone backend decompress remote: [options] [<arguments>+]
+This command requests decompress of file/files in a folder.
+Usage:
+rclone backend decompress pikpak:dirpath {filename} -o password=password
+rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
+An optional argument 'filename' can be specified for a file located in 'pikpak:dirpath'. You may want to pass '-o password=password' for password-protected files. Also, pass '-o delete-src-file' to delete source files after decompression finishes.
+Result:
+{
+ "Decompressed": 17,
+ "SourceDeleted": 0,
+ "Errors": 0
+}
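For example, to decompress a password-protected archive and remove the source afterwards (the directory, file name and password are placeholders), the two options can be combined:
rclone backend decompress pikpak:backup archive.zip -o password=secret -o delete-src-file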
+PikPak supports the MD5 hash, but it is sometimes empty, especially for user-uploaded files.
+Deleted files will still be visible with --pikpak-trashed-only even after the trash is emptied. This goes away after a few days.
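For example, a sketch listing what is currently in the trash:
rclone ls --pikpak-trashed-only remote: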
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / premiumize.me
+ \ "premiumizeme"
+[snip]
+Storage> premiumizeme
+** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = premiumizeme
+token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXX","expiry":"2029-08-07T18:44:15.548915378+01:00"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
+Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
List directories in top level of your premiumize.me
+rclone lsd remote:
+List all the files in your premiumize.me
+rclone ls remote:
+To copy a local directory to a premiumize.me directory called backup
+rclone copy /home/source remote:backup
+premiumize.me does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work.
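For example, a sketch of a copy that skips files which are newer on the destination:
rclone copy --update /home/source remote:backup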
In addition to the default restricted characters set the following characters are also replaced:
| Character | Value | Replacement |
|---|---|---|
| \ | 0x5C | ＼ |
| " | 0x22 | ＂ |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Here are the Standard options specific to premiumizeme (premiumize.me).
+OAuth Client Id.
+Leave blank normally.
+Properties:
+OAuth Client Secret.
+Leave blank normally.
+Properties:
+API Key.
+This is not normally used - use oauth instead.
+Properties:
+Here are the Advanced options specific to premiumizeme (premiumize.me).
+OAuth Access Token as a JSON blob.
+Properties:
+Auth server URL.
+Leave blank to use the provider defaults.
+Properties:
+Token server url.
+Leave blank to use the provider defaults.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Description of the remote
+Properties:
+Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+premiumize.me file names can't have the \ or " characters in. rclone maps these to and from identical looking unicode equivalents ＼ and ＂.
premiumize.me only supports filenames up to 255 characters in length.
+Proton Drive is an end-to-end encrypted Swiss vault for your files that protects your data.
+This is an rclone backend for Proton Drive which supports the file transfer features of Proton Drive using the same client-side encryption.
+Due to the fact that Proton Drive doesn't publish its API documentation, this backend is implemented with best efforts by reading the open-sourced client source code and observing the Proton Drive traffic in the browser.
+NB This backend is currently in Beta. It is believed to be correct and all the integration tests pass. However, as the Proton Drive protocol has evolved over time, there may be accounts it is not compatible with. Please post on the rclone forum if you find an incompatibility.
+Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Proton Drive
+ \ "Proton Drive"
+[snip]
+Storage> protondrive
+User name
+user> you@protonmail.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Option 2fa.
+2FA code (if the account requires one)
+Enter a value. Press Enter to leave empty.
+2fa> 123456
+Remote config
+--------------------
+[remote]
+type = protondrive
+user = you@protonmail.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+NOTE: The Proton Drive encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail.
Once configured you can then use rclone like this,
List directories in top level of your Proton Drive
+rclone lsd remote:
+List all the files in your Proton Drive
+rclone ls remote:
+To copy a local directory to a Proton Drive directory called backup
+rclone copy /home/source remote:backup
+Proton Drive Bridge does not support updating modification times yet.
+The SHA1 hash algorithm is supported.
+Invalid UTF-8 bytes will be replaced; leading and trailing spaces will also be removed (code reference).
+Proton Drive cannot have two files with exactly the same name and path. If a conflict occurs, depending on the advanced config, the file might or might not be overwritten.
+Please set your mailbox password in the advanced config section.
+The cache is currently built for the case when the rclone is the only instance performing operations to the mount point. The event system, which is the proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won’t be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data.
+Here are the Standard options specific to protondrive (Proton Drive).
+The username of your proton account
+Properties:
+The password of your proton account.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+The 2FA code
+The value can also be provided with --protondrive-2fa=000000
+The 2FA code of your proton drive account if the account is set up with two-factor authentication
+Properties:
+Here are the Advanced options specific to protondrive (Proton Drive).
+The mailbox password of your two-password proton account.
+For more information regarding the mailbox password, please check the following official knowledge base article: https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+Client uid key (internal use only)
+Properties:
+Client access token key (internal use only)
+Properties:
+Client refresh token key (internal use only)
+Properties:
+Client salted key pass key (internal use only)
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Return the file size before encryption
+The size of the encrypted file will be different from (bigger than) the original file size. Unless you have a reason to return the size after encryption, leave this option set to true, as features like Open(), which need to be supplied with the original content size, will otherwise fail to operate properly.
+Properties:
+The app version string
+The app version string indicates the client that is currently performing the API request. This information is required and will be sent with every API request.
+Properties:
+Create a new revision when filename conflict is detected
+When a file upload is cancelled or failed before completion, a draft will be created and the subsequent upload of the same file to the same location will be reported as a conflict.
+The value can also be set by --protondrive-replace-existing-draft=true
+If the option is set to true, the draft will be replaced and then the upload operation will restart. If there are other clients also uploading to the same file location at the same time, the behavior is currently unknown. It needs to be set to true for the integration tests. If the option is set to false, an error "a draft exist - usually this means a file is being uploaded at another client, or, there was a failed upload attempt" will be returned, and no upload will happen.
+Properties:
+Caches the files and folders metadata to reduce API calls
+Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, as the current implementation doesn't update or clear the cache when there are external changes.
+The files and folders on ProtonDrive are represented as links with keyrings, which can be cached to improve performance and be friendly to the API server.
+The cache is currently built for the case when the rclone is the only instance performing operations to the mount point. The event system, which is the proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won’t be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data.
+Properties:
+Description of the remote
+Properties:
+This backend uses the Proton-API-Bridge, which is based on go-proton-api, a fork of the official repo.
+There is no official API documentation available from Proton Drive. But, thanks to Proton open sourcing proton-go-api and the web, iOS, and Android client codebases, we don't need to completely reverse engineer the APIs by observing the web client traffic!
+proton-go-api provides the basic building blocks of API calls and error handling, such as 429 exponential back-off, but it is pretty much just a barebone interface to the Proton API. For example, the encryption and decryption of the Proton Drive file are not provided in this library.
+The Proton-API-Bridge attempts to bridge the gap, so rclone can be built on top of this quickly. This codebase handles the intricate tasks before and after calling Proton APIs, particularly the complex encryption scheme, allowing developers to implement features for other software on top of this codebase. There are likely quite a few errors in this library, as there isn't official documentation available.
+Paths are specified as remote:path
put.io paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> putio
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Put.io
+ \ "putio"
+[snip]
+Storage> putio
+** See help for putio backend at: https://rclone.org/putio/ **
+
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[putio]
+type = putio
+token = {"access_token":"XXXXXXXX","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+putio putio
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
+Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall, or use manual mode.
You can then use it like this,
+List directories in top level of your put.io
+rclone lsd remote:
+List all the files in your put.io
+rclone ls remote:
+To copy a local directory to a put.io directory called backup
+rclone copy /home/source remote:backup
+In addition to the default restricted characters set the following characters are also replaced:
| Character | Value | Replacement |
|---|---|---|
| \ | 0x5C | ＼ |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Here are the Standard options specific to putio (Put.io).
+OAuth Client Id.
+Leave blank normally.
+Properties:
+OAuth Client Secret.
+Leave blank normally.
+Properties:
+Here are the Advanced options specific to putio (Put.io).
+OAuth Access Token as a JSON blob.
+Properties:
+Auth server URL.
+Leave blank to use the provider defaults.
+Properties:
+Token server url.
+Leave blank to use the provider defaults.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Description of the remote
+Properties:
+put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.
+If you want to avoid ever hitting these limits, you may use the --tpslimit flag with a low number. Note that the imposed limits may be different for different operations, and may change over time.
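For example, a sketch limiting rclone to 2 transactions per second during a copy:
rclone copy --tpslimit 2 /home/source putio:backup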
+This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
- Using a Library API Token is not supported.
+There are two distinct modes in which you can set up your remote:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
- You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
+rclone config
+This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> seafile
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Seafile
+ \ "seafile"
+[snip]
+Storage> seafile
+** See help for seafile backend at: https://rclone.org/seafile/ **
+
+URL of seafile host to connect to
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Connect to cloud.seafile.com
+ \ "https://cloud.seafile.com/"
+url> http://my.seafile.server/
+User name (usually email address)
+Enter a string value. Press Enter for the default ("").
+user> me@example.com
+Password
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Two-factor authentication ('true' if the account has 2FA enabled)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+2fa> false
+Name of the library. Leave blank to access all non-encrypted libraries.
+Enter a string value. Press Enter for the default ("").
+library>
+Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> n
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+Two-factor authentication is not enabled on this account.
+--------------------
+[seafile]
+type = seafile
+url = http://my.seafile.server/
+user = me@example.com
+pass = *** ENCRYPTED ***
+2fa = false
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This remote is called seafile. It's pointing to the root of your seafile server and can now be used like this:
See all libraries
+rclone lsd seafile:
+Create a new library
+rclone mkdir seafile:library
+List the contents of a library
+rclone ls seafile:library
+Sync /home/local/directory to the remote library, deleting any excess files in the library.
rclone sync --interactive /home/local/directory seafile:library
+Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will then attempt to authenticate you:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> seafile
+Type of storage to configure.
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+[snip]
+XX / Seafile
+ \ "seafile"
+[snip]
+Storage> seafile
+** See help for seafile backend at: https://rclone.org/seafile/ **
+
+URL of seafile host to connect to
+Enter a string value. Press Enter for the default ("").
+Choose a number from below, or type in your own value
+ 1 / Connect to cloud.seafile.com
+ \ "https://cloud.seafile.com/"
+url> http://my.seafile.server/
+User name (usually email address)
+Enter a string value. Press Enter for the default ("").
+user> me@example.com
+Password
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Two-factor authentication ('true' if the account has 2FA enabled)
+Enter a boolean value (true or false). Press Enter for the default ("false").
+2fa> true
+Name of the library. Leave blank to access all non-encrypted libraries.
+Enter a string value. Press Enter for the default ("").
+library> My Library
+Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank (default)
+y/g/n> n
+Edit advanced config? (y/n)
+y) Yes
+n) No (default)
+y/n> n
+Remote config
+Two-factor authentication: please enter your 2FA code
+2fa code> 123456
+Authenticating...
+Success!
+--------------------
+[seafile]
+type = seafile
+url = http://my.seafile.server/
+user = me@example.com
+pass =
+2fa = true
+library = My Library
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.
+You specified My Library during the configuration. The root of the remote is pointing at the root of the library My Library:
See all files in the library:
+rclone lsd seafile:
+Create a new directory inside the library
+rclone mkdir seafile:directory
+List the contents of a directory
+rclone ls seafile:directory
+Sync /home/local/directory to the remote library, deleting any excess files in the library.
rclone sync --interactive /home/local/directory seafile:
+Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x.
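For example, a sketch of a recursive listing that uses fewer transactions:
rclone ls --fast-list seafile:library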
In addition to the default restricted characters set the following characters are also replaced:
| Character | Value | Replacement |
|---|---|---|
| / | 0x2F | ／ |
| " | 0x22 | ＂ |
| \ | 0x5C | ＼ |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
+Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:
+rclone link seafile:seafile-tutorial.doc
+http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
+
+or if run on a directory you will get:
+rclone link seafile:dir
+http://my.seafile.server/d/9ea2455f6f55478bbb0d/
+Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.
+It has been actively developed using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
- 9.0.10 community edition
+Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
+Each new version of rclone is automatically tested against the latest docker image of the seafile community server.
Here are the Standard options specific to seafile (seafile).
+URL of seafile host to connect to.
+Properties:
+User name (usually email address).
+Properties:
+Password.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+Two-factor authentication ('true' if the account has 2FA enabled).
+Properties:
+Name of the library.
+Leave blank to access all non-encrypted libraries.
+Properties:
+Library password (for encrypted libraries only).
+Leave blank if you pass it through the command line.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+Authentication token.
+Properties:
+Here are the Advanced options specific to seafile (seafile).
+Should rclone create a library if it doesn't exist.
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Description of the remote
+Properties:
+SFTP is the Secure (or SSH) File Transfer Protocol.
+The SFTP backend can be used with a number of different providers:
+SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.
+Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).
Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to omit the leading /.
+Note that by default rclone will try to execute shell commands on the server, see shell access considerations.
+Here is an example of making an SFTP configuration. First run
+rclone config
+This will guide you through an interactive setup process.
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / SSH/SFTP
+ \ "sftp"
+[snip]
+Storage> sftp
+SSH host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "example.com"
+host> example.com
+SSH username
+Enter a string value. Press Enter for the default ("$USER").
+user> sftpuser
+SSH port number
+Enter a signed integer. Press Enter for the default (22).
+port>
+SSH password, leave blank to use ssh-agent.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> n
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+key_file>
+Remote config
+--------------------
+[remote]
+host = example.com
+user = sftpuser
+port =
+pass =
+key_file =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This remote is called remote and can now be used like this:
See all directories in the home directory
+rclone lsd remote:
+See all directories in the root directory
+rclone lsd remote:/
+Make a new directory
+rclone mkdir remote:path/to/directory
+List the contents of a directory
+rclone ls remote:path/to/directory
+Sync /home/local/directory to the remote directory, deleting any excess files in the directory.
rclone sync --interactive /home/local/directory remote:directory
+Mount the remote path /srv/www-data/ to the local path /mnt/www-data
rclone mount remote:/srv/www-data/ /mnt/www-data
+The SFTP remote supports three authentication methods:
- Password
- Key file, including certificate signed keys
- ssh-agent
+Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.
The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e.
+key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
+This will generate it correctly for key_pem for use in the config:
+awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
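A sketch writing the result straight into the config, assuming a bash-like shell and a remote called remote:
rclone config update remote key_pem "$(awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa)"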
+If you don't specify pass, key_file, key_pem, or ask_password then rclone will attempt to contact an ssh-agent. You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent.
Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.
+If you set the ask_password option, rclone will prompt for a password when needed and no password has been configured.
With traditional key-based authentication, you configure your private key only, and the public key built into it will be used during the authentication process.
+If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file.
Note: This is not the traditional public key paired with your private key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.
Example:
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+key_file = ~/id_rsa
+pubkey_file = ~/id_rsa-cert.pub
+If you concatenate a cert with a private key then you can specify the merged file in both places.
+Note: the cert must come first in the file. e.g.
+cat id_rsa-cert.pub id_rsa > merged_key
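A config sketch using such a merged file (host, user and path are placeholders):
[remote]
type = sftp
host = example.com
user = sftpuser
key_file = merged_key
pubkey_file = merged_key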
+By default rclone will not check the server's host key for validation. This can allow an attacker to replace a server with their own and if you use password authentication then this can lead to that password being exposed.
+Host key matching, using standard known_hosts files can be turned on by enabling the known_hosts_file option. This can point to the file maintained by OpenSSH or can point to a unique file.
e.g. using the OpenSSH known_hosts file:
[remote]
type = sftp
host = example.com
user = sftpuser
known_hosts_file = ~/.ssh/known_hosts
The options md5sum_command and sha1_command can be used to customize the command to be executed for calculation of checksums. You can for example set a specific path to where md5sum and sha1sum executables are located, or use them to specify some other tools that print checksums in compatible format. The value can include command-line arguments, or even shell script blocks as with PowerShell. Rclone has subcommands md5sum and sha1sum that use compatible format, which means if you have an rclone executable on the server it can be used. As mentioned above, they will be automatically picked up if found in PATH, but if not you can set something like /path/to/rclone md5sum as the value of option md5sum_command to make sure a specific executable is used.
Remote checksumming is recommended and enabled by default. First time rclone is using a SFTP remote, if options md5sum_command or sha1_command are not set, it will check if any of the default commands for each of them, as described above, can be used. The result will be saved in the remote configuration, so next time it will use the same. Value none will be set if none of the default commands could be used for a specific algorithm, and this algorithm will not be supported by the remote.
Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote shell commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming entirely, or set shell_type to none to disable all functionality based on remote shell command execution.
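For example, a config sketch that disables all functionality based on remote shell commands (host and user are placeholders; alternatively set disable_hashcheck = true to disable only checksumming):
[remote]
type = sftp
host = example.com
user = sftpuser
shell_type = none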
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your RClone backend configuration to disable this behaviour.
The about command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.
SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.
Here are the Standard options specific to sftp (SSH/SFTP).
SSH host to connect to.
Here are the Advanced options specific to sftp (SSH/SFTP).
Optional path to known_hosts file.
Description of the remote
+Properties:
+On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For them using disable_hashcheck is a good idea.
The only ssh agent supported under Windows is Putty's pageant.
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper.
The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in smb.conf (usually in /etc/samba/) file. You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).
You can't access shared printers from rclone, obviously.
You can't use Anonymous access for logging in. You have to use the guest user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, by \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.
Here is an example of making a SMB configuration.
First run
rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> d
Here are the Standard options specific to smb (SMB / CIFS).
SMB server hostname to connect to.
Here are the Advanced options specific to smb (SMB / CIFS).
Max time before closing idle connections.
Description of the remote
+Properties:
+Storj is an encrypted, secure, and cost-effective object storage service that enables you to store, back up, and archive large amounts of data in a decentralized manner.
To make a new Storj configuration you need one of the following:
- Access Grant that someone else shared with you.
- API Key of a Storj project you are a member of.
Here is an example of how to make a remote called remote. First run:
rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).
Choose an authentication method.
Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage).
+Description of the remote
+Properties:
+Paths are specified as remote:bucket (or remote: for the lsf command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
Once configured you can then use rclone like this.
rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
rclone about is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
To fix these, please raise your system limits. You can do this issuing a ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.
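For example, a sketch of the permanent shell-startup variant (assuming bash and $HOME/.bashrc):
# raise the open file limit for every new shell
ulimit -n 65536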
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
NB you can't create files in the top level folder you have to create a folder, which rclone will create as a "Sync Folder" with SugarSync.
SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded.
SugarSync replaces the default restricted characters set except for DEL.
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
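For example, a sketch that bypasses the "Deleted items" folder (the path is a placeholder):
rclone delete --sugarsync-hard-delete remote:somefolder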
Here are the Standard options specific to sugarsync (Sugarsync).
Sugarsync App ID.
Here are the Advanced options specific to sugarsync (Sugarsync).
Sugarsync refresh token.
Description of the remote
+Properties:
rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.
See List of backends that do not support rclone about and rclone about
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
To configure an Uptobox backend you'll need your personal api token. You'll find it in your account settings.
Here is an example of how to make a remote called remote with the default setup. First run:
rclone config
rclone ls remote:
To copy a local directory to an Uptobox directory called backup
rclone copy /home/source remote:backup
Uptobox supports neither modified times nor checksums. All timestamps will read as that set by --default-time.
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Here are the Standard options specific to uptobox (Uptobox).
Your access token.
Here are the Advanced options specific to uptobox (Uptobox).
Set to make uploaded files private
Description of the remote
+Properties:
+Uptobox will delete inactive files that have not been accessed in 60 days.
rclone about is not supported by this backend; an overview of used space can, however, be seen in the Uptobox web interface.
Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.
There is no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.
Here is an example of how to make a union called remote for local folders. First run:
rclone config
This will guide you through an interactive setup process:
To check if your upstream supports the field, run rclone about remote: [flags] and see if the required field exists.
Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below.
When files are written, they will be written to both remote:dir and /local.
As many remotes as desired can be added to upstreams but there should only be one :writeback tag.
Rclone does not manage the :writeback remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself.
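A config sketch of such a union (the remote names are placeholders, matching the upstreams example above):
[backup]
type = union
upstreams = /local:writeback remote:dir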
Here are the Standard options specific to union (Union merges the contents of several upstream fs).
List of space separated upstreams.
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
Minimum viable free space for lfs/eplfs policies.
Description of the remote
+Properties:
+Any metadata supported by the underlying remote is read and written.
See the metadata docs for more info.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote. First run:
rclone config
rclone ls remote:
To copy a local directory to a WebDAV directory called backup
rclone copy /home/source remote:backup
Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
Here are the Standard options specific to webdav (WebDAV).
URL of http host to connect to.
Here are the Advanced options specific to webdav (WebDAV).
Command to run to get a bearer token.
Exclude ownCloud shares
+Properties:
+Description of the remote
+Properties:
+See below for notes on specific providers.
Use https://webdav.fastmail.com/ or a subdirectory as the URL, and your Fastmail email username@domain.tld as the username. Follow this documentation to create an app password with access to Files (WebDAV) and use this as the password.
Fastmail supports modified times using the X-OC-Mtime header.
Yandex Disk is a cloud storage solution created by Yandex.
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
Sync /home/local/directory to the remote path, deleting any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g. remote:directory/subdirectory.
Modified times are supported and are stored accurate to 1 ns in custom metadata called rclone_modified in RFC3339 with nanoseconds format.
The MD5 hash algorithm is natively supported by Yandex Disk.
If you wish to empty your trash you can use the rclone cleanup remote: command which will permanently delete all your trashed files. This command does not take any path arguments.
To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.
The default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to yandex (Yandex Disk).
OAuth Client Id.
Here are the Advanced options specific to yandex (Yandex Disk).
OAuth Access Token as a JSON blob.
Description of the remote
+Properties:
+When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GiB should be enough, so if you want to upload a 30 GiB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
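For example, a sketch for uploading a 30 GiB file (the path is a placeholder):
rclone copy --timeout 60m /path/to/30GiB.file remote:backup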
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho WorkDrive is a cloud storage solution created by Zoho.
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
Sync /home/local/directory to the remote path, deleting any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Zoho paths may be as deep as required, e.g. remote:directory/subdirectory.
Modified times are currently not supported for Zoho WorkDrive.
No hash algorithms are supported.
To view your current quota you can use the rclone about remote: command which will display your current usage.
Only control characters and invalid UTF-8 are replaced. In addition, most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
-Here are the Standard options specific to zoho (Zoho).
OAuth Client Id.
@@ -35671,7 +36494,7 @@ y/e/d> -Here are the Advanced options specific to zoho (Zoho).
OAuth Access Token as a JSON blob.
@@ -35712,6 +36535,15 @@ y/e/d>Description of the remote
+Properties:
+For Zoho we advise you to set up your own client_id. To do so you have to complete the following steps.
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so
rclone sync --interactive /home/source /tmp/destination
Will sync /home/source to /tmp/destination.
For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
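When comparing between filesystems with different timestamp precision, the global --modify-window flag can be used to treat times within a window as equal; for example, to allow for whole-second precision (paths illustrative):
rclone check --modify-window 1s /home/source remote:dest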
@@ -36095,7 +36927,7 @@ $ tree /tmp/b 0 file2NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
-Here are the Advanced options specific to local (Local Disk).
Disable UNC (long path names) conversion on Windows.
@@ -36259,9 +37091,19 @@ $ tree /tmp/bDescription of the remote
+Properties:
+Depending on which OS is in use the local backend may return only some of the system metadata. Setting system metadata is supported on all OSes but setting user metadata is only supported on Linux, FreeBSD, NetBSD, macOS and Solaris. It is not supported on Windows yet (see pkg/attrs#47).
User metadata is stored as extended attributes (which may not be supported by all file systems) under the "user.*" prefix.
+Metadata is supported on files and directories.
Here are the possible system metadata items for the local backend.
See the metadata docs for more info.
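As a sketch of how user metadata round-trips on Linux (the metadata key, file names, and use of --metadata-set are illustrative; getfattr is the standard xattr inspection tool, not part of rclone):
rclone copyto -M --metadata-set project=alpha src.txt /tmp/dst.txt
getfattr -d /tmp/dst.txt    # the user metadata appears as user.project="alpha"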
-Here are the commands specific to the local backend.
Run them with
rclone backend COMMAND remote:
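For example, the local backend provides a noop test command (a null operation for exercising the backend command mechanism); the path, options and arguments here are illustrative:
rclone backend noop /tmp/testdir -o echo=yes -o blue path1 path2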
@@ -36350,6 +37192,362 @@ $ tree /tmp/b
D flags in the ModTime column to see which backends support it. ... -M/--metadata is in use.
+D flags in the Metadata column to see which backends support it.
+[v1.66 changelog, with the lead of each entry lost to extraction; among the recoverable items: a fix for CVE-2024-24786 by upgrading google.golang.org/protobuf (Nick Craig-Wood); new flags such as --fix-case, --time-format, --s3-version-deleted, --s3-use-dual-stack, --azureblob-delete-snapshots and the owncloud_exclude_shares option (Thomas Müller); rc additions (srcFs and dstFs in core/stats, operations/hashsum, config/paths); extensive bisync improvements by nielash (--recover, --max-lock, --conflict-resolve, --conflict-loser, --conflict-suffix, --resync-mode, and --retries/--retries-sleep when --resilient is set); and --vfs-refresh now running in the background (Nick Craig-Wood). See https://rclone.org/changelog/ for the full list.]
Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
+As of v1.66, rclone supports syncing directory modtimes, if the backend supports it. Some backends do not support it -- see overview for a complete list. Additionally, note that empty directories are not synced by default (this can be enabled with --create-empty-src-dirs.)
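For example, to sync directory modification times and include empty directories (source path and remote are illustrative):
rclone sync --create-empty-src-dirs /path/to/src remote:dst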
Currently rclone loads each directory/bucket entirely into memory before using it. Since each rclone object takes 0.5k-1k of memory this can take a very long time and use a large amount of memory.
Millions of files in a directory tend to occur on bucket-based remotes (e.g. S3 buckets) since those remotes do not segregate subdirectories within the bucket.
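As a rough worked example using the figure above: listing a bucket with 10 million objects at ~1 KiB of memory per object needs on the order of 10,000,000 x 1 KiB ≈ 10 GiB of RAM before the sync even starts.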
@@ -43798,7 +44996,7 @@ THE SOFTWARE.
[git binary patch data omitted]
\&...run two copies of rclone using the same cache with \f[C]--vfs-cache-mode > off\f[R].
+This can potentially cause data corruption if you do.
+You can work around this by giving each rclone its own cache hierarchy
+with \f[C]--cache-dir\f[R].
+You don\[aq]t need to worry about this if the remotes in use don\[aq]t +overlap. +.SS --vfs-cache-mode off +.PP +In this mode (the default) the cache will read directly from the remote +and write directly to the remote without caching anything on disk. +.PP +This will mean some operations are not possible +.IP \[bu] 2 +Files can\[aq]t be opened for both read AND write +.IP \[bu] 2 +Files opened for write can\[aq]t be seeked +.IP \[bu] 2 +Existing files opened for write must have O_TRUNC set +.IP \[bu] 2 +Files open for read with O_TRUNC will be opened write only +.IP \[bu] 2 +Files open for write only will behave as if O_TRUNC was supplied +.IP \[bu] 2 +Open modes O_APPEND, O_TRUNC are ignored +.IP \[bu] 2 +If an upload fails it can\[aq]t be retried +.SS --vfs-cache-mode minimal +.PP +This is very similar to \[dq]off\[dq] except that files opened for read +AND write will be buffered to disk. +This means that files opened for write will be a lot more compatible, +but uses the minimal disk space. +.PP +These operations are not possible +.IP \[bu] 2 +Files opened for write only can\[aq]t be seeked +.IP \[bu] 2 +Existing files opened for write must have O_TRUNC set +.IP \[bu] 2 +Files opened for write only will ignore O_APPEND, O_TRUNC +.IP \[bu] 2 +If an upload fails it can\[aq]t be retried +.SS --vfs-cache-mode writes +.PP +In this mode files opened for read only are still read directly from the +remote, write only and read/write files are buffered to disk first. +.PP +This mode should support all normal file system operations. +.PP +If an upload fails it will be retried at exponentially increasing +intervals up to 1 minute. +.SS --vfs-cache-mode full +.PP +In this mode all reads and writes are buffered to and from disk. +When data is read from the remote this is buffered to disk as well. +.PP +In this mode the files in the cache will be sparse files and rclone will +keep track of which bits of the files it has downloaded. +.PP +So if an application only reads the starts of each file, then rclone +will only buffer the start of the file. +These files will appear to be their full size in the cache, but they +will be sparse files with only the data that has been downloaded present +in them. +.PP +This mode should support all normal file system operations and is +otherwise identical to \f[C]--vfs-cache-mode\f[R] writes. +.PP +When reading a file rclone will read \f[C]--buffer-size\f[R] plus +\f[C]--vfs-read-ahead\f[R] bytes ahead. +The \f[C]--buffer-size\f[R] is buffered in memory whereas the +\f[C]--vfs-read-ahead\f[R] is buffered on disk. +.PP +When using this mode it is recommended that \f[C]--buffer-size\f[R] is +not set too large and \f[C]--vfs-read-ahead\f[R] is set large if +required. +.PP +\f[B]IMPORTANT\f[R] not all file systems support sparse files. +In particular FAT/exFAT do not. +Rclone will perform very badly if the cache directory is on a filesystem +which doesn\[aq]t support sparse files and it will log an ERROR message +if one is detected. +.SS Fingerprinting +.PP +Various parts of the VFS use fingerprinting to see if a local file copy +has changed relative to a remote file. +Fingerprints are made from: +.IP \[bu] 2 +size +.IP \[bu] 2 +modification time +.IP \[bu] 2 +hash +.PP +where available on an object. +.PP +On some backends some of these attributes are slow to read (they take an +extra API call per object, or extra work per object). 
+.PP
+For example \f[C]hash\f[R] is slow with the \f[C]local\f[R] and
+\f[C]sftp\f[R] backends as they have to read the entire file and hash
+it, and \f[C]modtime\f[R] is slow with the \f[C]s3\f[R],
+\f[C]swift\f[R], \f[C]ftp\f[R] and \f[C]qingstor\f[R] backends because
+they need to do an extra API call to fetch it.
+.PP
+If you use the \f[C]--vfs-fast-fingerprint\f[R] flag then rclone will
+not include the slow operations in the fingerprint.
+This makes the fingerprinting less accurate but much faster and will
+improve the opening time of cached files.
+.PP
+If you are running a vfs cache over \f[C]local\f[R], \f[C]s3\f[R] or
+\f[C]swift\f[R] backends then using this flag is recommended.
+.PP
+Note that if you change the value of this flag, the fingerprints of the
+files in the cache may be invalidated and the files will need to be
+downloaded again.
+.SS VFS Chunked Reading
+.PP
+When rclone reads files from a remote it reads them in chunks.
+This means that rather than requesting the whole file rclone reads the
+chunk specified.
+This can reduce the used download quota for some remotes by requesting
+only chunks from the remote that are actually read, at the cost of an
+increased number of requests.
+.PP
+These flags control the chunking:
+.IP
+.nf
+\f[C]
+--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M)
+--vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off)
+\f[R]
+.fi
+.PP
+Rclone will start reading a chunk of size
+\f[C]--vfs-read-chunk-size\f[R], and then double the size for each read.
+When \f[C]--vfs-read-chunk-size-limit\f[R] is specified, and greater
+than \f[C]--vfs-read-chunk-size\f[R], the chunk size for each open file
+will get doubled only until the specified value is reached.
+If the value is \[dq]off\[dq], which is the default, the limit is
+disabled and the chunk size will grow indefinitely.
+.PP
+With \f[C]--vfs-read-chunk-size 100M\f[R] and
+\f[C]--vfs-read-chunk-size-limit 0\f[R] the following parts will be
+downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on.
+When \f[C]--vfs-read-chunk-size-limit 500M\f[R] is specified, the result
+would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so
+on.
+.PP
+Setting \f[C]--vfs-read-chunk-size\f[R] to \f[C]0\f[R] or \[dq]off\[dq]
+disables chunked reading.
+.SS VFS Performance
+.PP
+These flags may be used to enable/disable features of the VFS for
+performance or other reasons.
+See also the chunked reading feature.
+.PP
+In particular S3 and Swift benefit hugely from the
+\f[C]--no-modtime\f[R] flag (or use \f[C]--use-server-modtime\f[R] for a
+slightly different effect) as each read of the modification time takes a
+transaction.
+.IP
+.nf
+\f[C]
+--no-checksum Don\[aq]t compare checksums on up/download.
+--no-modtime Don\[aq]t read/write the modification time (can speed things up).
+--no-seek Don\[aq]t allow seeking in files.
+--read-only Only allow read-only access.
+\f[R]
+.fi
+.PP
+Sometimes rclone is delivered reads or writes out of order.
+Rather than seeking rclone will wait a short time for the in sequence
+read or write to come in.
+These flags only come into effect when not using an on disk cache file.
+.IP +.nf +\f[C] +--vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) +--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +\f[R] +.fi +.PP +When using VFS write caching (\f[C]--vfs-cache-mode\f[R] with value +writes or full), the global flag \f[C]--transfers\f[R] can be set to +adjust the number of parallel uploads of modified files from the cache +(the related global flag \f[C]--checkers\f[R] has no effect on the VFS). +.IP +.nf +\f[C] +--transfers int Number of file transfers to run in parallel (default 4) +\f[R] +.fi +.SS VFS Case Sensitivity +.PP +Linux file systems are case-sensitive: two files can differ only by +case, and the exact case must be used when opening a file. +.PP +File systems in modern Windows are case-insensitive but case-preserving: +although existing files can be opened using any case, the exact case +used to create the file is preserved and available for programs to +query. +It is not allowed for two files in the same directory to differ only by +case. +.PP +Usually file systems on macOS are case-insensitive. +It is possible to make macOS file systems case-sensitive but that is not +the default. +.PP +The \f[C]--vfs-case-insensitive\f[R] VFS flag controls how rclone +handles these two cases. +If its value is \[dq]false\[dq], rclone passes file names to the remote +as-is. +If the flag is \[dq]true\[dq] (or appears without a value on the command +line), rclone may perform a \[dq]fixup\[dq] as explained below. +.PP +The user may specify a file name to open/delete/rename/etc with a case +different than what is stored on the remote. +If an argument refers to an existing file with exactly the same name, +then the case of the existing file on the disk will be used. +However, if a file name with exactly the same name is not found but a +name differing only by case exists, rclone will transparently fixup the +name. +This fixup happens only when an existing file is requested. +Case sensitivity of file names created anew by rclone is controlled by +the underlying remote. +.PP +Note that case sensitivity of the operating system running rclone (the +target) may differ from case sensitivity of a file system presented by +rclone (the source). +The flag controls whether \[dq]fixup\[dq] is performed to satisfy the +target. +.PP +If the flag is not provided on the command line, then its default value +depends on the operating system where rclone runs: \[dq]true\[dq] on +Windows and macOS, \[dq]false\[dq] otherwise. +If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. 
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R].
+.SS VFS Disk Options
+.PP
+This flag allows you to manually set the statistics about the filing
+system.
+It can be useful when those statistics cannot be read correctly
+automatically.
+.IP
+.nf
+\f[C]
+--vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1)
+\f[R]
+.fi
+.SS Alternate report of used bytes
+.PP
+Some backends, most notably S3, do not report the amount of bytes used.
+If you need this information to be available when running \f[C]df\f[R]
+on the filesystem, then pass the flag \f[C]--vfs-used-is-size\f[R] to
+rclone.
+With this flag set, instead of relying on the backend to report this
+information, rclone will scan the whole remote similar to
+\f[C]rclone size\f[R] and compute the total used space itself.
+.PP
+\f[I]WARNING.\f[R] Contrary to \f[C]rclone size\f[R], this flag ignores
+filters so that the result is accurate.
+However, this is very inefficient and may cost lots of API calls
+resulting in extra charges.
+Use it as a last resort and only with caching.
+.IP
+.nf
+\f[C]
+rclone nfsmount remote:path /path/to/mountpoint [flags]
+\f[R]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
 --addr string IPaddress:Port or :Port to bind server to --allow-non-empty Allow mounting over a non-empty directory (not supported on Windows) --allow-other Allow access to other users (not supported on Windows) --allow-root Allow access to root user (not supported on Windows) --async-read Use asynchronous reads (not supported on Windows) (default true) --attr-timeout Duration Time for which file/directory attributes are cached (default 1s) --daemon Run mount in background and exit parent process (as background output is suppressed, use --log-file with --log-format=pid,...
to monitor) (not supported on Windows) + --daemon-timeout Duration Time limit for rclone to respond to kernel (not supported on Windows) (default 0s) + --daemon-wait Duration Time to wait for ready mount from daemon (maximum time on Linux, constant sleep time on OSX/BSD) (not supported on Windows) (default 1m0s) + --debug-fuse Debug the FUSE internals - needs -v + --default-permissions Makes kernel enforce access control based on the file mode (not supported on Windows) + --devname string Set the device name - default is remote:path + --dir-cache-time Duration Time to cache directory entries for (default 5m0s) + --dir-perms FileMode Directory permissions (default 0777) + --file-perms FileMode File permissions (default 0666) + --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp (repeat if required) + --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) + -h, --help help for nfsmount + --max-read-ahead SizeSuffix The number of bytes that can be prefetched for sequential reads (not supported on Windows) (default 128Ki) + --mount-case-insensitive Tristate Tell the OS the mount is case insensitive (true) or sensitive (false) regardless of the backend (auto) (default unset) + --network-mode Mount as remote network drive, instead of fixed disk drive (supported on Windows only) + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) + --no-checksum Don\[aq]t compare checksums on up/download + --no-modtime Don\[aq]t read/write the modification time (can speed things up) + --no-seek Don\[aq]t allow seeking in files + --noappledouble Ignore Apple Double (._) and .DS_Store files (supported on OSX only) (default true) + --noapplexattr Ignore all \[dq]com.apple.*\[dq] extended attributes (supported on OSX only) + -o, --option stringArray Option for libfuse/WinFsp (repeat if required) + --poll-interval Duration Time to wait between polling for changes, must be smaller than dir-cache-time and only on supported remotes (set 0 to disable) (default 1m0s) + --read-only Only allow read-only access + --sudo Use sudo to run the mount command as root. 
+ --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) + --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) + --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-poll-interval Duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-case-insensitive If a file name not found, find a case insensitive match + --vfs-disk-space-total-size SizeSuffix Specify the total space of disk (default off) + --vfs-fast-fingerprint Use fast (less accurate) fingerprints for change detection + --vfs-read-ahead SizeSuffix Extra read ahead over --buffer-size when using cache-mode full + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) + --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) + --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-refresh Refreshes the directory cache recursively in the background on start + --vfs-used-is-size rclone size Use the rclone size algorithm for Used size + --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) + --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) + --volname string Set the volume name (supported on Windows and OSX only) + --write-back-cache Makes kernel buffer writes before sending them to rclone (without this, writethrough caching is used) (not supported on Windows) +\f[R] +.fi +.SS Filter Options +.PP +Flags for filtering directory listings. 
+.IP +.nf +\f[C] + --delete-excluded Delete files on dest excluded from sync + --exclude stringArray Exclude files matching pattern + --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) + --exclude-if-present stringArray Exclude directories if filename is present + --files-from stringArray Read list of source-file names from file (use - to read from stdin) + --files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin) + -f, --filter stringArray Add a file filtering rule + --filter-from stringArray Read file filtering patterns from a file (use - to read from stdin) + --ignore-case Ignore case in filters (case insensitive) + --include stringArray Include files matching pattern + --include-from stringArray Read file include patterns from file (use - to read from stdin) + --max-age Duration Only transfer files younger than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --max-depth int If set limits the recursion depth to this (default -1) + --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) + --metadata-exclude stringArray Exclude metadatas matching pattern + --metadata-exclude-from stringArray Read metadata exclude patterns from file (use - to read from stdin) + --metadata-filter stringArray Add a metadata filtering rule + --metadata-filter-from stringArray Read metadata filtering patterns from a file (use - to read from stdin) + --metadata-include stringArray Include metadatas matching pattern + --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) + --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) + --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off) +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SH SEE ALSO +.IP \[bu] 2 +rclone (https://rclone.org/commands/rclone/) - Show help for rclone +commands, flags and backends. .SH rclone obscure .PP Obscure password for use in the rclone config file. @@ -8546,6 +9869,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. 
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -8600,6 +9949,7 @@ rclone serve dlna remote:path [flags] --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -8612,7 +9962,7 @@ rclone serve dlna remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -9091,6 +10441,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -9163,6 +10539,7 @@ rclone serve docker [flags] --socket-gid int GID for unix socket (default: current process GID) (default 1000) --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -9175,7 +10552,7 @@ rclone serve docker [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -9626,6 +11003,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -9667,7 +11070,7 @@ STDOUT. ignored. .PP There is an example program
-bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py)
+bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. .PP The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on @@ -9776,6 +11179,7 @@ rclone serve ftp remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication (default \[dq]anonymous\[dq])
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -9788,7 +11192,7 @@ rclone serve ftp remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -10451,6 +11855,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -10492,7 +11922,7 @@ STDOUT. ignored. .PP There is an example program
-bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py)
+bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py) in the rclone source code. .PP The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on @@ -10610,6 +12040,7 @@ rclone serve http remote:path [flags] --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) --user string User name for authentication
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -10622,7 +12053,7 @@ rclone serve http remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -10691,6 +12122,12 @@ Modifying files through NFS protocol requires VFS caching. Usually you will need to specify \f[C]--vfs-cache-mode\f[R] in order to be able to write to the mountpoint (full is recommended). If you don\[aq]t specify VFS cache mode, the mount will be read-only.
+Note also that \f[C]--nfs-cache-handle-limit\f[R] controls the maximum
+number of cached file handles stored by the caching handler.
+This should not be set too low or you may experience errors when trying
+to access files.
+The default is \f[C]1000000\f[R], but consider lowering this limit if
+the server\[aq]s system resource usage causes problems. .PP To serve NFS over the network use the following command: .IP @@ -11096,6 +12533,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable, however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R].
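.PP
For example, a serve invocation that hides such normalization duplicates
might look like the following sketch (the remote, path and address are
illustrative):
.IP
.nf
\f[C]
rclone serve nfs remote:path --addr :2049 --vfs-cache-mode full --vfs-block-norm-dupes
\f[R]
.fi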
.SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -11139,6 +12602,7 @@ rclone serve nfs remote:path [flags] --file-perms FileMode File permissions (default 0666) --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for nfs + --nfs-cache-handle-limit int max file handles cached simultaneously (min 5) (default 1000000) --no-checksum Don\[aq]t compare checksums on up/download --no-modtime Don\[aq]t read/write the modification time (can speed things up) --no-seek Don\[aq]t allow seeking in files @@ -11146,6 +12610,7 @@ rclone serve nfs remote:path [flags] --read-only Only allow read-only access --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -11158,7 +12623,7 @@ rclone serve nfs remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -12009,6 +13474,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. 
+However, macOS users may wish to consider using it, as otherwise, if a +remote directory contains both NFC and NFD versions of the same +filename, an odd situation will occur: both versions of the file will be +visible in the mount, and both will appear to be editable, however, +editing either version will actually result in only the NFD version +getting edited under the hood. +\f[C]--vfs-block- norm-dupes\f[R] prevents this confusion by detecting +this scenario, hiding the duplicates, and logging an error, similar to +how this is handled in \f[C]rclone sync\f[R]. .SS VFS Disk Options .PP This flag allows you to manually set the statistics about the filing @@ -12072,6 +13563,7 @@ rclone serve s3 remote:path [flags] --server-write-timeout Duration Timeout for server writing data (default 1h0m0s) --uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000) --umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2) + --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost) --vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s) --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) @@ -12084,7 +13576,7 @@ rclone serve s3 remote:path [flags] --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi) --vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off) --vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-refresh Refreshes the directory cache recursively on start + --vfs-refresh Refreshes the directory cache recursively in the background on start --vfs-used-is-size rclone size Use the rclone size algorithm for Used size --vfs-write-back Duration Time to writeback files after last use when using cache (default 5s) --vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s) @@ -12575,6 +14067,32 @@ If the flag is not provided on the command line, then its default value depends on the operating system where rclone runs: \[dq]true\[dq] on Windows and macOS, \[dq]false\[dq] otherwise. If the flag is provided without a value, then it is \[dq]true\[dq]. +.PP +The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar +\[dq]fixup\[dq] is performed for filenames that differ but are +canonically +equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with +respect to unicode. +Unicode normalization can be particularly helpful for users of macOS, +which prefers form NFD instead of the NFC used by most other platforms. +It is therefore highly recommended to keep the default of +\f[C]false\f[R] on macOS, to avoid encoding compatibility issues. +.PP +In the (probably unlikely) event that a directory has multiple duplicate +filenames after applying case and unicode normalization, the +\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates. +This comes with a performance tradeoff, as rclone will have to scan the +entire directory for duplicates when listing a directory. +For this reason, it is recommended to leave this disabled if not needed. 
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable; however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R].
.SS VFS Disk Options
.PP
This flag allows you to manually set the statistics about the filing
@@ -12616,7 +14134,7 @@ STDOUT.
ignored.
.PP
There is an example program
-bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py)
+bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
in the rclone source code.
.PP
The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on
@@ -12725,6 +14243,7 @@ rclone serve sftp remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -12737,7 +14256,7 @@ rclone serve sftp remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -13433,6 +14952,32 @@ If the flag is not provided on the command line, then its default value
depends on the operating system where rclone runs: \[dq]true\[dq] on
Windows and macOS, \[dq]false\[dq] otherwise.
If the flag is provided without a value, then it is \[dq]true\[dq].
+.PP
+The \f[C]--no-unicode-normalization\f[R] flag controls whether a similar
+\[dq]fixup\[dq] is performed for filenames that differ but are
+canonically
+equivalent (https://en.wikipedia.org/wiki/Unicode_equivalence) with
+respect to unicode.
+Unicode normalization can be particularly helpful for users of macOS,
+which prefers form NFD instead of the NFC used by most other platforms.
+It is therefore highly recommended to keep the default of
+\f[C]false\f[R] on macOS, to avoid encoding compatibility issues.
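.PP
For example, to make the recommended default explicit when serving a
remote from macOS (a sketch only; \f[C]remote:path\f[R] and the address
are placeholders, and the same flag applies to the other
\f[C]rclone serve\f[R] commands and \f[C]rclone mount\f[R]):
.IP
.nf
\f[C]
rclone serve webdav remote:path --addr :8080 --no-unicode-normalization=false
\f[R]
.fi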
+.PP
+In the (probably unlikely) event that a directory has multiple duplicate
+filenames after applying case and unicode normalization, the
+\f[C]--vfs-block-norm-dupes\f[R] flag allows hiding these duplicates.
+This comes with a performance tradeoff, as rclone will have to scan the
+entire directory for duplicates when listing a directory.
+For this reason, it is recommended to leave this disabled if not needed.
+However, macOS users may wish to consider using it, as otherwise, if a
+remote directory contains both NFC and NFD versions of the same
+filename, an odd situation will occur: both versions of the file will be
+visible in the mount, and both will appear to be editable; however,
+editing either version will actually result in only the NFD version
+getting edited under the hood.
+\f[C]--vfs-block-norm-dupes\f[R] prevents this confusion by detecting
+this scenario, hiding the duplicates, and logging an error, similar to
+how this is handled in \f[C]rclone sync\f[R].
.SS VFS Disk Options
.PP
This flag allows you to manually set the statistics about the filing
@@ -13474,7 +15019,7 @@ STDOUT.
ignored.
.PP
There is an example program
-bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/test_proxy.py)
+bin/test_proxy.py (https://github.com/rclone/rclone/blob/master/bin/test_proxy.py)
in the rclone source code.
.PP
The program\[aq]s job is to take a \f[C]user\f[R] and \f[C]pass\f[R] on
@@ -13594,6 +15139,7 @@ rclone serve webdav remote:path [flags]
--uid uint32 Override the uid field set by the filesystem (not supported on Windows) (default 1000)
--umask int Override the permission bits set by the filesystem (not supported on Windows) (default 2)
--user string User name for authentication
+ --vfs-block-norm-dupes If duplicate filenames exist in the same directory (after normalization), log an error and hide the duplicates (may have a performance cost)
--vfs-cache-max-age Duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
@@ -13606,7 +15152,7 @@ rclone serve webdav remote:path [flags]
--vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128Mi)
--vfs-read-chunk-size-limit SizeSuffix If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached (\[aq]off\[aq] is unlimited) (default off)
--vfs-read-wait Duration Time to wait for in-sequence read before seeking (default 20ms)
- --vfs-refresh Refreshes the directory cache recursively on start
+ --vfs-refresh Refreshes the directory cache recursively in the background on start
--vfs-used-is-size rclone size Use the rclone size algorithm for Used size
--vfs-write-back Duration Time to writeback files after last use when using cache (default 5s)
--vfs-write-wait Duration Time to wait for in-sequence write before giving error (default 1s)
@@ -14601,12 +16147,13 @@ rclone sync --interactive /path/to/files remote:current-backup
.fi
.SS Metadata support
.PP
-Metadata is data about a file which isn\[aq]t the contents of the file.
+Metadata is data about a file (or directory) which isn\[aq]t the
+contents of the file (or directory).
Normally rclone only preserves the modification time and the content
(MIME) type where possible.
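.PP
As a quick way to inspect the metadata rclone can see for a single
object, you can use the \f[C]lsjson\f[R] command with the
\f[C]--metadata\f[R]/\f[C]-M\f[R] flag described below (a sketch;
\f[C]remote:path/to/file\f[R] is a placeholder and the backend must
support metadata for the \f[C]Metadata\f[R] key to appear):
.IP
.nf
\f[C]
rclone lsjson --stat -M remote:path/to/file
\f[R]
.fi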
.PP
-Rclone supports preserving all the available metadata on files (not
-directories) when using the \f[C]--metadata\f[R] or \f[C]-M\f[R] flag.
+Rclone supports preserving all the available metadata on files and
+directories when using the \f[C]--metadata\f[R] or \f[C]-M\f[R] flag.
.PP
Exactly what metadata is supported and what that support means depends
on the backend.
@@ -14614,6 +16161,9 @@ Backends that support metadata have a metadata section in their docs
and are listed in the features
table (https://rclone.org/overview/#features)
(Eg local (https://rclone.org/local/#metadata), s3)
.PP
+Some backends don\[aq]t support metadata, some only support metadata on
+files and some support metadata on both files and directories.
+.PP
Rclone only supports a one-time sync of metadata.
This means that metadata will be synced from the source object to the
destination object only when the source object has changed and needs to
@@ -14636,6 +16186,15 @@ This flag can be repeated as many times as necessary.
The --metadata-mapper flag can be used to pass the name of a program
which can transform metadata when it is being copied from source to
destination.
+.PP
+Rclone supports \f[C]--metadata-set\f[R] and \f[C]--metadata-mapper\f[R]
+when doing server side \f[C]Move\f[R] and server side \f[C]Copy\f[R], but
+not when doing server side \f[C]DirMove\f[R] (renaming a directory) as
+this would involve recursing into the directory.
+Note that you can disable \f[C]DirMove\f[R] with
+\f[C]--disable DirMove\f[R] and rclone will revert to using
+\f[C]Move\f[R] for each individual object where \f[C]--metadata-set\f[R]
+and \f[C]--metadata-mapper\f[R] are supported.
.SS Types of metadata
.PP
Metadata is divided into two types.
@@ -15485,6 +17044,28 @@ data was copied, or skipping if not.
NB: Enabling this option turns a usually non-fatal error into a
potentially fatal one - please check and adjust your scripts
accordingly!
+.SS --fix-case
+.PP
+Normally, a sync to a case insensitive dest (such as macOS / Windows)
+will not result in a matching filename if the source and dest filenames
+have casing differences but are otherwise identical.
+For example, syncing \f[C]hello.txt\f[R] to \f[C]HELLO.txt\f[R] will
+normally result in the dest filename remaining \f[C]HELLO.txt\f[R].
+If \f[C]--fix-case\f[R] is set, then \f[C]HELLO.txt\f[R] will be renamed
+to \f[C]hello.txt\f[R] to match the source.
+.PP
+NB:
+.IP \[bu] 2
+directory names with incorrect casing will also be fixed
+.IP \[bu] 2
+\f[C]--fix-case\f[R] will be ignored if \f[C]--immutable\f[R] is set
+.IP \[bu] 2
+using \f[C]--local-case-sensitive\f[R] instead is not advisable; it will
+cause \f[C]HELLO.txt\f[R] to get deleted!
+.IP \[bu] 2
+the old dest filename must not be excluded by filters.
+Be especially careful with
+\f[C]--files-from\f[R] (https://rclone.org/filtering/#files-from-read-list-of-source-file-names),
+which does not respect
+\f[C]--ignore-case\f[R] (https://rclone.org/filtering/#ignore-case-make-searches-case-insensitive)!
+.IP \[bu] 2
+on remotes that do not support server-side move, \f[C]--fix-case\f[R]
+will require downloading the file and re-uploading it.
+To avoid this, do not use \f[C]--fix-case\f[R].
.SS --fs-cache-expire-duration=TIME
.PP
When using rclone via the API rclone caches created remotes for 5
@@ -15972,15 +17553,15 @@ being copied to
.IP \[bu] 2
\f[C]DstFsType\f[R] is the name of the destination backend.
.IP \[bu] 2
-\f[C]Remote\f[R] is the path of the file relative to the root.
+\f[C]Remote\f[R] is the path of the object relative to the root.
.IP \[bu] 2
\f[C]Size\f[R], \f[C]MimeType\f[R], \f[C]ModTime\f[R] are attributes of
-the file.
+the object.
.IP \[bu] 2
\f[C]IsDir\f[R] is \f[C]true\f[R] if this is a directory (not yet
implemented).
.IP \[bu] 2
-\f[C]ID\f[R] is the source \f[C]ID\f[R] of the file if known.
+\f[C]ID\f[R] is the source \f[C]ID\f[R] of the object if known.
.IP \[bu] 2
\f[C]Metadata\f[R] is the backend specific metadata as described in the
backend docs.
@@ -16061,7 +17642,7 @@ json.dump(o, sys.stdout, indent=\[dq]\[rs]t\[dq])
.PP
You can find this example (slightly expanded) in the rclone source code
at
-bin/test_metadata_mapper.py (https://github.com/rclone/rclone/blob/master/test_metadata_mapper.py).
+bin/test_metadata_mapper.py (https://github.com/rclone/rclone/blob/master/bin/test_metadata_mapper.py).
.PP
If you want to see the input to the metadata mapper and the output
returned from it in the log you can use \f[C]-vv --dump mapper\f[R].
@@ -16122,7 +17703,7 @@ Capable backends are marked in the
overview (https://rclone.org/overview/#optional-features) as
\f[C]MultithreadUpload\f[R].
(They need to implement either the \f[C]OpenWriterAt\f[R] or
-\f[C]OpenChunkedWriter\f[R] internal interfaces).
+\f[C]OpenChunkWriter\f[R] internal interfaces).
These include \f[C]local\f[R], \f[C]s3\f[R],
\f[C]azureblob\f[R], \f[C]b2\f[R], \f[C]oracleobjectstorage\f[R] and
\f[C]smb\f[R] at the time of writing.
@@ -16245,6 +17826,10 @@ remote files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also
(e.g.
the Google Drive client).
+.SS --no-update-dir-modtime
+.PP
+When using this flag, rclone won\[aq]t update modification times of
+remote directories if they are incorrect as it would normally.
.SS --order-by string
.PP
The \f[C]--order-by\f[R] flag controls the order in which files in the
@@ -17502,7 +19087,7 @@ For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
- rclone authorize \[dq]amazon cloud drive\[dq]
+ rclone authorize \[dq]dropbox\[dq]
Then paste the result below:
result>
@@ -17513,7 +19098,7 @@ Then on your main desktop machine
.IP
.nf
\f[C]
-rclone authorize \[dq]amazon cloud drive\[dq]
+rclone authorize \[dq]dropbox\[dq]
If your browser doesn\[aq]t open automatically go to the following
link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
Waiting for code...
@@ -18446,7 +20031,7 @@ for an alternative \f[C]filter-file.txt\f[R]:
.PP
Files \f[C]file1.jpg\f[R], \f[C]file3.png\f[R] and \f[C]file2.avi\f[R]
are listed whilst \f[C]secret17.jpg\f[R] and files without the suffix
-\&.jpg\f[C]or\f[R].png\[ga] are excluded.
+\f[C].jpg\f[R] or \f[C].png\f[R] are excluded.
.PP
E.g.
for an alternative \f[C]filter-file.txt\f[R]:
@@ -19619,6 +21204,32 @@ password (https://rclone.org/commands/rclone_config_password/) command
for more information on the above.
.PP
\f[B]Authentication is required for this call.\f[R]
+.SS config/paths: Reads the config file path and other important paths.
+.PP +Returns a JSON object with the following keys: +.IP \[bu] 2 +config: path to config file +.IP \[bu] 2 +cache: path to root of cache directory +.IP \[bu] 2 +temp: path to root of temporary directory +.PP +Eg +.IP +.nf +\f[C] +{ + \[dq]cache\[dq]: \[dq]/home/USER/.cache/rclone\[dq], + \[dq]config\[dq]: \[dq]/home/USER/.rclone.conf\[dq], + \[dq]temp\[dq]: \[dq]/tmp\[dq] +} +\f[R] +.fi +.PP +See the config paths (https://rclone.org/commands/rclone_config_paths/) +command for more information on the above. +.PP +\f[B]Authentication is required for this call.\f[R] .SS config/providers: Shows how providers are configured in the config file. .PP Returns a JSON object: - providers - array of objects @@ -20573,6 +22184,63 @@ instead: rclone rc --loopback operations/fsinfo fs=remote: \f[R] .fi +.SS operations/hashsum: Produces a hashsum file for all the objects in the path. +.PP +Produces a hash file for all the objects in the path using the hash +named. +The output is in the same format as the standard md5sum/sha1sum tool. +.PP +This takes the following parameters: +.IP \[bu] 2 +fs - a remote name string e.g. +\[dq]drive:\[dq] for the source, \[dq]/\[dq] for local filesystem +.RS 2 +.IP \[bu] 2 +this can point to a file and just that file will be returned in the +listing. +.RE +.IP \[bu] 2 +hashType - type of hash to be used +.IP \[bu] 2 +download - check by downloading rather than with hash (boolean) +.IP \[bu] 2 +base64 - output the hashes in base64 rather than hex (boolean) +.PP +If you supply the download flag, it will download the data from the +remote and create the hash on the fly. +This can be useful for remotes that don\[aq]t support the given hash or +if you really want to check all the data. +.PP +Note that if you wish to supply a checkfile to check hashes against the +current files then you should use operations/check instead of +operations/hashsum. +.PP +Returns: +.IP \[bu] 2 +hashsum - array of strings of the hashes +.IP \[bu] 2 +hashType - type of hash used +.PP +Example: +.IP +.nf +\f[C] +$ rclone rc --loopback operations/hashsum fs=bin hashType=MD5 download=true base64=true +{ + \[dq]hashType\[dq]: \[dq]md5\[dq], + \[dq]hashsum\[dq]: [ + \[dq]WTSVLpuiXyJO_kGzJerRLg== backend-versions.sh\[dq], + \[dq]v1b_OlWCJO9LtNq3EIKkNQ== bisect-go-rclone.sh\[dq], + \[dq]VHbmHzHh4taXzgag8BAIKQ== bisect-rclone.sh\[dq], + ] +} +\f[R] +.fi +.PP +See the hashsum (https://rclone.org/commands/rclone_hashsum/) command +for more information on the above. +.PP +\f[B]Authentication is required for this call.\f[R] .SS operations/list: List the given remote and path in JSON format .PP This takes the following parameters: @@ -21042,7 +22710,13 @@ errors, instead of requiring resync. Use at your own risk! .IP \[bu] 2 workdir - server directory for history files (default: -/home/ncw/.cache/rclone/bisync) +\f[C]\[ti]/.cache/rclone/bisync\f[R]) +.IP \[bu] 2 +backupdir1 - --backup-dir for Path1. +Must be a non-overlapping path on the same remote. +.IP \[bu] 2 +backupdir2 - --backup-dir for Path2. +Must be a non-overlapping path on the same remote. 
.IP \[bu] 2 noCleanup - retain working files .PP @@ -21567,21 +23241,6 @@ T}@T{ - T} T{ -Amazon Drive -T}@T{ -MD5 -T}@T{ -- -T}@T{ -Yes -T}@T{ -No -T}@T{ -R -T}@T{ -- -T} -T{ Amazon S3 (or S3 compatible) T}@T{ MD5 @@ -21706,7 +23365,7 @@ Google Drive T}@T{ MD5, SHA1, SHA256 T}@T{ -R/W +DR/W T}@T{ No T}@T{ @@ -21714,7 +23373,7 @@ Yes T}@T{ R/W T}@T{ -- +DRWU T} T{ Google Photos @@ -21916,7 +23575,7 @@ Microsoft OneDrive T}@T{ QuickXorHash \[u2075] T}@T{ -R/W +DR/W T}@T{ Yes T}@T{ @@ -21924,7 +23583,7 @@ No T}@T{ R T}@T{ -- +DRW T} T{ OpenDrive @@ -22096,7 +23755,7 @@ SFTP T}@T{ MD5, SHA1 \[S2] T}@T{ -R/W +DR/W T}@T{ Depends T}@T{ @@ -22231,7 +23890,7 @@ The local filesystem T}@T{ All T}@T{ -R/W +DR/W T}@T{ Depends T}@T{ @@ -22239,7 +23898,7 @@ No T}@T{ - T}@T{ -RWU +DRWU T} .TE .PP @@ -22300,8 +23959,8 @@ Almost all cloud storage systems store some sort of timestamp on objects, but several of them not something that is appropriate to use for syncing. E.g. -some backends will only write a timestamp that represent the time of the -upload. +some backends will only write a timestamp that represents the time of +the upload. To be relevant for syncing it should be able to store the modification time of the source object. If this is not the case, rclone will only check the file size by @@ -22310,6 +23969,43 @@ default, though can be configured to check the file hash (with the Ideally it should also be possible to change the timestamp of an existing file without having to re-upload it. .PP +.TS +tab(@); +lw(19.4n) lw(50.6n). +T{ +Key +T}@T{ +Explanation +T} +_ +T{ +\f[C]-\f[R] +T}@T{ +ModTimes not supported - times likely the upload time +T} +T{ +\f[C]R\f[R] +T}@T{ +ModTimes supported on files but can\[aq]t be changed without re-upload +T} +T{ +\f[C]R/W\f[R] +T}@T{ +Read and Write ModTimes fully supported on files +T} +T{ +\f[C]DR\f[R] +T}@T{ +ModTimes supported on files and directories but can\[aq]t be changed +without re-upload +T} +T{ +\f[C]DR/W\f[R] +T}@T{ +Read and Write ModTimes fully supported on files and directories +T} +.TE +.PP Storage systems with a \f[C]-\f[R] in the ModTime column, means the modification read on objects is not the modification time of the file when uploaded. @@ -22331,6 +24027,9 @@ ignored. .PP Storage systems with \f[C]R/W\f[R] (for read/write) in the ModTime column, means they do also support modtime-only operations. +.PP +Storage systems with \f[C]D\f[R] in the ModTime column means that the +following symbols apply to directories as well as files. .SS Case Insensitive .PP If a cloud storage systems is case sensitive then it is possible to have @@ -23107,7 +24806,7 @@ The levels of metadata support are .PP .TS tab(@); -l l. +lw(19.4n) lw(50.6n). 
T{ Key T}@T{ @@ -23117,17 +24816,34 @@ _ T{ \f[C]R\f[R] T}@T{ -Read only System Metadata +Read only System Metadata on files only T} T{ \f[C]RW\f[R] T}@T{ -Read and write System Metadata +Read and write System Metadata on files only T} T{ \f[C]RWU\f[R] T}@T{ -Read and write System Metadata and read and write User Metadata +Read and write System Metadata and read and write User Metadata on files +only +T} +T{ +\f[C]DR\f[R] +T}@T{ +Read only System Metadata on files and directories +T} +T{ +\f[C]DRW\f[R] +T}@T{ +Read and write System Metadata on files and directories +T} +T{ +\f[C]DRWU\f[R] +T}@T{ +Read and write System Metadata and read and write User Metadata on files +and directories T} .TE .PP @@ -23217,31 +24933,6 @@ T}@T{ Yes T} T{ -Amazon Drive -T}@T{ -Yes -T}@T{ -No -T}@T{ -Yes -T}@T{ -Yes -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -No -T}@T{ -Yes -T} -T{ Amazon S3 (or S3 compatible) T}@T{ No @@ -23567,6 +25258,31 @@ T}@T{ Yes T} T{ +ImageKit +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +Yes +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +No +T}@T{ +Yes +T} +T{ Internet Archive T}@T{ No @@ -24294,7 +26010,7 @@ T} T{ The local filesystem T}@T{ -Yes +No T}@T{ No T}@T{ @@ -24435,7 +26151,7 @@ Flags for anything which can Copy a file. --ignore-checksum Skip post copy check of checksums --ignore-existing Skip all files that exist on destination --ignore-size Ignore size when skipping use modtime or checksum - -I, --ignore-times Don\[aq]t skip files that match size and time - transfer all files + -I, --ignore-times Don\[aq]t skip items that match size and time - transfer all unconditionally --immutable Do not modify files, fail if existing files have been modified --inplace Download directly to destination file instead of atomic download to temp/rename --max-backlog int Maximum number of objects in sync or check backlog (default 10000) @@ -24449,6 +26165,7 @@ Flags for anything which can Copy a file. --multi-thread-write-buffer-size SizeSuffix In memory buffer size for writing when in multi-thread mode (default 128Ki) --no-check-dest Don\[aq]t check the destination, copy regardless --no-traverse Don\[aq]t traverse destination file system on copy + --no-update-dir-modtime Don\[aq]t update directory modification times --no-update-modtime Don\[aq]t update destination modtime if files identical --order-by string Instructions on how to order the transfers, e.g. \[aq]size,descending\[aq] --partial-suffix string Add partial-suffix to temporary file name when --inplace is not used (default \[dq].partial\[dq]) @@ -24469,6 +26186,7 @@ Flags just used for \f[C]rclone sync\f[R]. --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring --delete-during When synchronizing, delete files during transfer + --fix-case Force rename of case insensitive dest to match source --ignore-errors Delete even if there are I/O errors --max-delete int When synchronizing, limit the number of deletes (default -1) --max-delete-size SizeSuffix When synchronizing, limit the total size of deletes (default off) @@ -24524,7 +26242,7 @@ General networking and HTTP stuff. 
--tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.65.0\[dq]) + --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.66.0\[dq]) \f[R] .fi .SS Performance @@ -24709,14 +26427,7 @@ These can be set in the config file also. .IP .nf \f[C] - --acd-auth-url string Auth server URL - --acd-client-id string OAuth Client Id - --acd-client-secret string OAuth Client Secret - --acd-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) - --acd-templink-threshold SizeSuffix Files >= this size will be downloaded via their tempLink (default 9Gi) - --acd-token string OAuth Access Token as a JSON blob - --acd-token-url string Token server url - --acd-upload-wait-per-gb Duration Additional time per GiB to wait after a failed complete upload to see if it appears (default 3m0s) + --alias-description string Description of the remote --alias-remote string Remote or path to alias --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name @@ -24727,6 +26438,8 @@ These can be set in the config file also. --azureblob-client-id string The ID of the client in use --azureblob-client-secret string One of the service principal\[aq]s client secrets --azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth + --azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion + --azureblob-description string Description of the remote --azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created --azureblob-disable-checksum Don\[aq]t store MD5 checksum with object metadata --azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8) @@ -24757,6 +26470,7 @@ These can be set in the config file also. --azurefiles-client-secret string One of the service principal\[aq]s client secrets --azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth --azurefiles-connection-string string Azure Files Connection String + --azurefiles-description string Description of the remote --azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot) --azurefiles-endpoint string Endpoint for the service --azurefiles-env-auth Read credentials from runtime (environment variables, CLI or MSI) @@ -24776,8 +26490,9 @@ These can be set in the config file also. 
--b2-account string Account ID or Application Key ID --b2-chunk-size SizeSuffix Upload chunk size (default 96Mi) --b2-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4Gi) + --b2-description string Description of the remote --b2-disable-checksum Disable checksums for large (> upload cutoff) files - --b2-download-auth-duration Duration Time before the authorization token will expire in s or suffix ms|s|m|h|d (default 1w) + --b2-download-auth-duration Duration Time before the public link authorization token will expire in s or suffix ms|s|m|h|d (default 1w) --b2-download-url string Custom endpoint for downloads --b2-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --b2-endpoint string Endpoint for the service @@ -24796,6 +26511,7 @@ These can be set in the config file also. --box-client-id string OAuth Client Id --box-client-secret string OAuth Client Secret --box-commit-retries int Max number of times to try committing a multipart file (default 100) + --box-description string Description of the remote --box-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot) --box-impersonate string Impersonate this user ID when using a service account --box-list-chunk int Size of listing chunk 1-1000 (default 1000) @@ -24812,6 +26528,7 @@ These can be set in the config file also. --cache-db-path string Directory to store file structure metadata DB (default \[dq]$HOME/.cache/rclone/cache-backend\[dq]) --cache-db-purge Clear all the cached data for this remote on start --cache-db-wait-time Duration How long to wait for the DB to be available - 0 is unlimited (default 1s) + --cache-description string Description of the remote --cache-info-age Duration How long to cache file structure information (directory listings, file size, times, etc.) (default 6h0m0s) --cache-plex-insecure string Skip all certificate verification when connecting to the Plex server --cache-plex-password string The password of the Plex user (obscured) @@ -24825,15 +26542,19 @@ These can be set in the config file also. 
--cache-workers int How many workers should run in parallel to download chunks (default 4) --cache-writes Cache file data on writes through the FS --chunker-chunk-size SizeSuffix Files larger than chunk size will be split in chunks (default 2Gi) + --chunker-description string Description of the remote --chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks --chunker-hash-type string Choose how chunker handles hash sums (default \[dq]md5\[dq]) --chunker-remote string Remote to chunk/unchunk + --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining + --compress-description string Description of the remote --compress-level int GZIP compression level (-2 to 9) (default -1) --compress-mode string Compression mode (default \[dq]gzip\[dq]) --compress-ram-cache-limit SizeSuffix Some remotes don\[aq]t allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress -L, --copy-links Follow symlinks and copy the pointed to item + --crypt-description string Description of the remote --crypt-directory-name-encryption Option to either encrypt directory names or leave them intact (default true) --crypt-filename-encoding string How to encode the encrypted filename to text string (default \[dq]base32\[dq]) --crypt-filename-encryption string How to encrypt the filenames (default \[dq]standard\[dq]) @@ -24844,6 +26565,7 @@ These can be set in the config file also. --crypt-remote string Remote to encrypt/decrypt --crypt-server-side-across-configs Deprecated: use --server-side-across-configs instead --crypt-show-mapping For all files listed show how the names encrypt + --crypt-strict-names If set, this will raise an error when crypt comes across a filename that can\[aq]t be decrypted --crypt-suffix string If this is set it will override the default suffix of \[dq].bin\[dq] (default \[dq].bin\[dq]) --drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded --drive-allow-import-name-change Allow the filetype to change when uploading Google docs @@ -24853,6 +26575,7 @@ These can be set in the config file also. --drive-client-id string Google Application Client Id --drive-client-secret string OAuth Client Secret --drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut + --drive-description string Description of the remote --drive-disable-http2 Disable drive using http2 (default true) --drive-encoding Encoding The encoding for the backend (default InvalidUtf8) --drive-env-auth Get IAM credentials from runtime (environment variables or instance meta data if no env vars) @@ -24901,6 +26624,7 @@ These can be set in the config file also. --dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi) --dropbox-client-id string OAuth Client Id --dropbox-client-secret string OAuth Client Secret + --dropbox-description string Description of the remote --dropbox-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,RightSpace,InvalidUtf8,Dot) --dropbox-impersonate string Impersonate this user when using a business account --dropbox-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) @@ -24910,10 +26634,12 @@ These can be set in the config file also. 
--dropbox-token-url string Token server url --fichier-api-key string Your API Key, get it from https://1fichier.com/console/params.pl --fichier-cdn Set if you wish to use CDN download links + --fichier-description string Description of the remote --fichier-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot) --fichier-file-password string If you want to download a shared file that is password protected, add this parameter (obscured) --fichier-folder-password string If you want to list the files in a shared folder that is password protected, add this parameter (obscured) --fichier-shared-folder string If you want to download a shared folder, add this parameter + --filefabric-description string Description of the remote --filefabric-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --filefabric-permanent-token string Permanent Authentication Token --filefabric-root-folder-id string ID of the root folder @@ -24924,6 +26650,7 @@ These can be set in the config file also. --ftp-ask-password Allow asking for FTP password when needed --ftp-close-timeout Duration Maximum time to wait for a response to close (default 1m0s) --ftp-concurrency int Maximum number of FTP simultaneous connections, 0 for unlimited + --ftp-description string Description of the remote --ftp-disable-epsv Disable using EPSV even if server advertises support --ftp-disable-mlsd Disable using MLSD even if server advertises support --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) @@ -24949,6 +26676,7 @@ These can be set in the config file also. --gcs-client-id string OAuth Client Id --gcs-client-secret string OAuth Client Secret --gcs-decompress If set this will decompress gzip encoded objects + --gcs-description string Description of the remote --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gcs-endpoint string Endpoint for the service @@ -24969,6 +26697,7 @@ These can be set in the config file also. --gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s) --gphotos-client-id string OAuth Client Id --gphotos-client-secret string OAuth Client Secret + --gphotos-description string Description of the remote --gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot) --gphotos-include-archived Also view and download archived media --gphotos-read-only Set to make the Google Photos backend read only @@ -24977,10 +26706,12 @@ These can be set in the config file also. --gphotos-token string OAuth Access Token as a JSON blob --gphotos-token-url string Token server url --hasher-auto-size SizeSuffix Auto-update checksum for files smaller than this size (disabled by default) + --hasher-description string Description of the remote --hasher-hashes CommaSepList Comma separated list of supported checksum types (default md5,sha1) --hasher-max-age Duration Maximum time to keep checksums in cache (0 = no cache, off = cache forever) (default off) --hasher-remote string Remote to cache checksums for (e.g. 
myRemote:path) --hdfs-data-transfer-protection string Kerberos data transfer protection: authentication|integrity|privacy + --hdfs-description string Description of the remote --hdfs-encoding Encoding The encoding for the backend (default Slash,Colon,Del,Ctl,InvalidUtf8,Dot) --hdfs-namenode CommaSepList Hadoop name nodes and ports --hdfs-service-principal-name string Kerberos service principal name for the namenode @@ -24989,6 +26720,7 @@ These can be set in the config file also. --hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi) --hidrive-client-id string OAuth Client Id --hidrive-client-secret string OAuth Client Secret + --hidrive-description string Description of the remote --hidrive-disable-fetching-member-count Do not fetch number of objects in directories unless it is absolutely necessary --hidrive-encoding Encoding The encoding for the backend (default Slash,Dot) --hidrive-endpoint string Endpoint for the service (default \[dq]https://api.hidrive.strato.com/2.1\[dq]) @@ -24999,10 +26731,12 @@ These can be set in the config file also. --hidrive-token-url string Token server url --hidrive-upload-concurrency int Concurrency for chunked uploads (default 4) --hidrive-upload-cutoff SizeSuffix Cutoff/Threshold for chunked uploads (default 96Mi) + --http-description string Description of the remote --http-headers CommaSepList Set HTTP headers for all transactions --http-no-head Don\[aq]t use HEAD requests --http-no-slash Set this if the site doesn\[aq]t end directories with / --http-url string URL of HTTP host to connect to + --imagekit-description string Description of the remote --imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket) --imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys) --imagekit-only-signed Restrict unsigned image URLs If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true @@ -25011,6 +26745,7 @@ These can be set in the config file also. --imagekit-upload-tags string Tags to add to the uploaded files, e.g. \[dq]tag1,tag2\[dq] --imagekit-versions Include old versions in directory listings --internetarchive-access-key-id string IAS3 Access Key + --internetarchive-description string Description of the remote --internetarchive-disable-checksum Don\[aq]t ask the server to test against MD5 checksum calculated by rclone (default true) --internetarchive-encoding Encoding The encoding for the backend (default Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot) --internetarchive-endpoint string IAS3 Endpoint (default \[dq]https://s3.us.archive.org\[dq]) @@ -25020,6 +26755,7 @@ These can be set in the config file also. --jottacloud-auth-url string Auth server URL --jottacloud-client-id string OAuth Client Id --jottacloud-client-secret string OAuth Client Secret + --jottacloud-description string Description of the remote --jottacloud-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot) --jottacloud-hard-delete Delete files permanently rather than putting them into the trash --jottacloud-md5-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate the MD5 if required (default 10Mi) @@ -25028,6 +26764,7 @@ These can be set in the config file also. 
--jottacloud-token-url string Token server url --jottacloud-trashed-only Only show files that are in the trash --jottacloud-upload-resume-limit SizeSuffix Files bigger than this can be resumed if the upload fail\[aq]s (default 10Mi) + --koofr-description string Description of the remote --koofr-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --koofr-endpoint string The Koofr API endpoint to use --koofr-mountid string Mount ID of the mount to use @@ -25035,10 +26772,12 @@ These can be set in the config file also. --koofr-provider string Choose your storage provider --koofr-setmtime Does the backend support setting modification time (default true) --koofr-user string Your user name + --linkbox-description string Description of the remote --linkbox-token string Token from https://www.linkbox.to/admin/account -l, --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension --local-case-insensitive Force the filesystem to report itself as case insensitive --local-case-sensitive Force the filesystem to report itself as case sensitive + --local-description string Description of the remote --local-encoding Encoding The encoding for the backend (default Slash,Dot) --local-no-check-updated Don\[aq]t check to see if the files change during upload --local-no-preallocate Disable preallocation of disk space for transferred files @@ -25051,6 +26790,7 @@ These can be set in the config file also. --mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true) --mailru-client-id string OAuth Client Id --mailru-client-secret string OAuth Client Secret + --mailru-description string Description of the remote --mailru-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot) --mailru-pass string Password (obscured) --mailru-speedup-enable Skip full upload if there is another file with same data hash (default true) @@ -25061,12 +26801,15 @@ These can be set in the config file also. --mailru-token-url string Token server url --mailru-user string User name (usually email) --mega-debug Output more debug from Mega + --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-hard-delete Delete files permanently rather than putting them into the trash --mega-pass string Password (obscured) --mega-use-https Use HTTPS for transfers --mega-user string User name + --memory-description string Description of the remote --netstorage-account string Set the NetStorage account name + --netstorage-description string Description of the remote --netstorage-host string Domain+path of NetStorage host to connect to --netstorage-protocol string Select between HTTP or HTTPS protocol (default \[dq]https\[dq]) --netstorage-secret string Set the NetStorage account secret/G2O key for authentication (obscured) @@ -25078,6 +26821,7 @@ These can be set in the config file also. 
--onedrive-client-id string OAuth Client Id --onedrive-client-secret string OAuth Client Secret --onedrive-delta If set rclone will use delta listing to implement recursive listings + --onedrive-description string Description of the remote --onedrive-drive-id string The ID of the drive to use --onedrive-drive-type string The type of the drive (personal | business | documentLibrary) --onedrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot) @@ -25087,6 +26831,7 @@ These can be set in the config file also. --onedrive-link-scope string Set the scope of the links created by the link command (default \[dq]anonymous\[dq]) --onedrive-link-type string Set the type of the links created by the link command (default \[dq]view\[dq]) --onedrive-list-chunk int Size of listing chunk (default 1000) + --onedrive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off) --onedrive-no-versions Remove all versions on modifying operations --onedrive-region string Choose national cloud region for OneDrive (default \[dq]global\[dq]) --onedrive-root-folder-id string ID of the root folder @@ -25100,6 +26845,7 @@ These can be set in the config file also. --oos-config-profile string Profile name inside the oci config file (default \[dq]Default\[dq]) --oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --oos-copy-timeout Duration Timeout for copy (default 1m0s) + --oos-description string Description of the remote --oos-disable-checksum Don\[aq]t store MD5 checksum with object metadata --oos-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --oos-endpoint string Endpoint for Object storage API @@ -25118,12 +26864,14 @@ These can be set in the config file also. --oos-upload-concurrency int Concurrency for multipart uploads (default 10) --oos-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --opendrive-chunk-size SizeSuffix Files will be uploaded in chunks this size (default 10Mi) + --opendrive-description string Description of the remote --opendrive-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot) --opendrive-password string Password (obscured) --opendrive-username string Username --pcloud-auth-url string Auth server URL --pcloud-client-id string OAuth Client Id --pcloud-client-secret string OAuth Client Secret + --pcloud-description string Description of the remote --pcloud-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --pcloud-hostname string Hostname to connect to (default \[dq]api.pcloud.com\[dq]) --pcloud-password string Your pcloud password (obscured) @@ -25134,6 +26882,7 @@ These can be set in the config file also. 
--pikpak-auth-url string Auth server URL --pikpak-client-id string OAuth Client Id --pikpak-client-secret string OAuth Client Secret + --pikpak-description string Description of the remote --pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot) --pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi) --pikpak-pass string Pikpak password (obscured) @@ -25146,11 +26895,13 @@ These can be set in the config file also. --premiumizeme-auth-url string Auth server URL --premiumizeme-client-id string OAuth Client Id --premiumizeme-client-secret string OAuth Client Secret + --premiumizeme-description string Description of the remote --premiumizeme-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot) --premiumizeme-token string OAuth Access Token as a JSON blob --premiumizeme-token-url string Token server url --protondrive-2fa string The 2FA code --protondrive-app-version string The app version string (default \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]) + --protondrive-description string Description of the remote --protondrive-enable-caching Caches the files and folders metadata to reduce API calls (default true) --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) @@ -25161,12 +26912,14 @@ These can be set in the config file also. --putio-auth-url string Auth server URL --putio-client-id string OAuth Client Id --putio-client-secret string OAuth Client Secret + --putio-description string Description of the remote --putio-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --putio-token string OAuth Access Token as a JSON blob --putio-token-url string Token server url --qingstor-access-key-id string QingStor Access Key ID --qingstor-chunk-size SizeSuffix Chunk size to use for uploading (default 4Mi) --qingstor-connection-retries int Number of connection retries (default 3) + --qingstor-description string Description of the remote --qingstor-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8) --qingstor-endpoint string Enter an endpoint URL to connection QingStor API --qingstor-env-auth Get QingStor credentials from runtime @@ -25175,18 +26928,21 @@ These can be set in the config file also. --qingstor-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --qingstor-zone string Zone to connect to --quatrix-api-key string API key for accessing Quatrix account + --quatrix-description string Description of the remote --quatrix-effective-upload-time string Wanted upload time for one chunk (default \[dq]4s\[dq]) --quatrix-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot) --quatrix-hard-delete Delete files permanently rather than putting them into the trash --quatrix-host string Host name of Quatrix account --quatrix-maximal-summary-chunk-size SizeSuffix The maximal summary for all chunks. 
It should not be less than \[aq]transfers\[aq]*\[aq]minimal_chunk_size\[aq] (default 95.367Mi) --quatrix-minimal-chunk-size SizeSuffix The minimal size for one chunk (default 9.537Mi) + --quatrix-skip-project-folders Skip project folders in operations --s3-access-key-id string AWS Access Key ID --s3-acl string Canned ACL used when creating buckets and storing or copying objects --s3-bucket-acl string Canned ACL used when creating buckets --s3-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi) --s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi) --s3-decompress If set this will decompress gzip encoded objects + --s3-description string Description of the remote --s3-directory-markers Upload an empty object with a trailing slash when a new directory is created --s3-disable-checksum Don\[aq]t store MD5 checksum with object metadata --s3-disable-http2 Disable usage of http2 for S3 backends @@ -25221,19 +26977,22 @@ These can be set in the config file also. --s3-sse-kms-key-id string If using KMS ID you must provide the ARN of Key --s3-storage-class string The storage class to use when storing new objects in S3 --s3-sts-endpoint string Endpoint for STS - --s3-upload-concurrency int Concurrency for multipart uploads (default 4) + --s3-upload-concurrency int Concurrency for multipart uploads and copies (default 4) --s3-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --s3-use-accelerate-endpoint If true use the AWS S3 accelerated endpoint --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) + --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) --s3-use-presigned-request Whether to use a presigned request or PutObject for single part uploads --s3-v2-auth If true use v2 authentication --s3-version-at Time Show file versions as they were at the specified time (default off) + --s3-version-deleted Show deleted file markers when using versions --s3-versions Include old versions in directory listings --seafile-2fa Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) --seafile-create-library Should rclone create a library if it doesn\[aq]t exist + --seafile-description string Description of the remote --seafile-encoding Encoding The encoding for the backend (default Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8) --seafile-library string Name of the library --seafile-library-key string Library password (for encrypted libraries only) (obscured) @@ -25245,6 +27004,7 @@ These can be set in the config file also. 
--sftp-ciphers SpaceSepList Space separated list of ciphers to be used for session encryption, ordered by preference --sftp-concurrency int The maximum number of outstanding requests for one file (default 64) --sftp-copy-is-hardlink Set to enable server side copies using hardlinks + --sftp-description string Description of the remote --sftp-disable-concurrent-reads If set don\[aq]t use concurrent reads --sftp-disable-concurrent-writes If set don\[aq]t use concurrent writes --sftp-disable-hashcheck Disable the execution of SSH commands to determine if remote file hashing is available @@ -25279,6 +27039,7 @@ These can be set in the config file also. --sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi) --sharefile-client-id string OAuth Client Id --sharefile-client-secret string OAuth Client Secret + --sharefile-description string Description of the remote --sharefile-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot) --sharefile-endpoint string Endpoint for API calls --sharefile-root-folder-id string ID of the root folder @@ -25287,10 +27048,12 @@ These can be set in the config file also. --sharefile-upload-cutoff SizeSuffix Cutoff for switching to multipart upload (default 128Mi) --sia-api-password string Sia Daemon API Password (obscured) --sia-api-url string Sia daemon API URL, like http://sia.daemon.host:9980 (default \[dq]http://127.0.0.1:9980\[dq]) + --sia-description string Description of the remote --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default \[dq]Sia-Agent\[dq]) --skip-links Don\[aq]t warn about skipped symlinks --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) + --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default \[dq]WORKGROUP\[dq]) --smb-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot) --smb-hide-special-share Hide special shares (e.g. print$) which users aren\[aq]t supposed to access (default true) @@ -25302,6 +27065,7 @@ These can be set in the config file also. --smb-user string SMB username (default \[dq]$USER\[dq]) --storj-access-grant string Access grant --storj-api-key string API key + --storj-description string Description of the remote --storj-passphrase string Encryption passphrase --storj-provider string Choose an authentication method (default \[dq]existing\[dq]) --storj-satellite-address string Satellite address (default \[dq]us1.storj.io\[dq]) @@ -25310,6 +27074,7 @@ These can be set in the config file also. --sugarsync-authorization string Sugarsync authorization --sugarsync-authorization-expiry string Sugarsync authorization expiry --sugarsync-deleted-id string Sugarsync deleted folder id + --sugarsync-description string Description of the remote --sugarsync-encoding Encoding The encoding for the backend (default Slash,Ctl,InvalidUtf8,Dot) --sugarsync-hard-delete Permanently delete files if true --sugarsync-private-access-key string Sugarsync Private Access Key @@ -25323,6 +27088,7 @@ These can be set in the config file also. 
--swift-auth-token string Auth Token from alternate authentication - optional (OS_AUTH_TOKEN) --swift-auth-version int AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION) --swift-chunk-size SizeSuffix Above this size files will be chunked into a _segments container (default 5Gi) + --swift-description string Description of the remote --swift-domain string User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) --swift-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8) --swift-endpoint-type string Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) (default \[dq]public\[dq]) @@ -25342,17 +27108,21 @@ These can be set in the config file also. --union-action-policy string Policy to choose upstream on ACTION category (default \[dq]epall\[dq]) --union-cache-time int Cache time of usage and free space (in seconds) (default 120) --union-create-policy string Policy to choose upstream on CREATE category (default \[dq]epmfs\[dq]) + --union-description string Description of the remote --union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi) --union-search-policy string Policy to choose upstream on SEARCH category (default \[dq]ff\[dq]) --union-upstreams string List of space separated upstreams --uptobox-access-token string Your access token + --uptobox-description string Description of the remote --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot) --uptobox-private Set to make uploaded files private --webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon) --webdav-bearer-token-command string Command to run to get a bearer token + --webdav-description string Description of the remote --webdav-encoding string The encoding for the backend --webdav-headers CommaSepList Set HTTP headers for all transactions --webdav-nextcloud-chunk-size SizeSuffix Nextcloud upload chunk size (default 10Mi) + --webdav-owncloud-exclude-shares Exclude ownCloud shares --webdav-pacer-min-sleep Duration Minimum time to sleep between API calls (default 10ms) --webdav-pass string Password (obscured) --webdav-url string URL of http host to connect to @@ -25361,6 +27131,7 @@ These can be set in the config file also. --yandex-auth-url string Auth server URL --yandex-client-id string OAuth Client Id --yandex-client-secret string OAuth Client Secret + --yandex-description string Description of the remote --yandex-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot) --yandex-hard-delete Delete files permanently rather than putting them into the trash --yandex-token string OAuth Access Token as a JSON blob @@ -25368,6 +27139,7 @@ These can be set in the config file also. --zoho-auth-url string Auth server URL --zoho-client-id string OAuth Client Id --zoho-client-secret string OAuth Client Secret + --zoho-description string Description of the remote --zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8) --zoho-region string Zoho region to connect to --zoho-token string OAuth Access Token as a JSON blob @@ -26098,12 +27870,21 @@ docker volume inspect my_vol .PP If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first. +.SS Bisync +.PP +\f[C]bisync\f[R] is \f[B]in beta\f[R] and is considered an \f[B]advanced +command\f[R], so use with care. 
+Make sure you have read and understood the entire +manual (https://rclone.org/bisync) (especially the Limitations section) +before using, or data loss can result. +Questions can be asked in the Rclone Forum (https://forum.rclone.org/). .SS Getting started .IP \[bu] 2 Install rclone (https://rclone.org/install/) and setup your remotes. .IP \[bu] 2 Bisync will create its working directory at -\f[C]\[ti]/.cache/rclone/bisync\f[R] on Linux or +\f[C]\[ti]/.cache/rclone/bisync\f[R] on Linux, +\f[C]/Users/yourusername/Library/Caches/rclone/bisync\f[R] on Mac, or \f[C]C:\[rs]Users\[rs]MyLogin\[rs]AppData\[rs]Local\[rs]rclone\[rs]bisync\f[R] on Windows. Make sure that this location is writable. @@ -26112,16 +27893,28 @@ Run bisync with the \f[C]--resync\f[R] flag, specifying the paths to the local and remote sync directory roots. .IP \[bu] 2 For successive sync runs, leave off the \f[C]--resync\f[R] flag. +(\f[B]Important!\f[R]) .IP \[bu] 2 Consider using a filters file for excluding unnecessary files and directories from the sync. .IP \[bu] 2 Consider setting up the --check-access feature for safety. .IP \[bu] 2 -On Linux, consider setting up a crontab entry. +On Linux or Mac, consider setting up a crontab entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains. .PP +For example, your first command might look like this: +.IP +.nf +\f[C] +rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run +\f[R] +.fi +.PP +If all looks good, run it again without \f[C]--dry-run\f[R]. +After that, remove \f[C]--resync\f[R] as well. +.PP Here is a typical run log (with timestamps removed for clarity): .IP .nf @@ -26182,36 +27975,36 @@ Positional arguments: Type \[aq]rclone listremotes\[aq] for list of configured remotes. Optional Flags: - --check-access Ensure expected \[ga]RCLONE_TEST\[ga] files are found on - both Path1 and Path2 filesystems, else abort. - --check-filename FILENAME Filename for \[ga]--check-access\[ga] (default: \[ga]RCLONE_TEST\[ga]) - --check-sync CHOICE Controls comparison of final listings: - \[ga]true | false | only\[ga] (default: true) - If set to \[ga]only\[ga], bisync will only compare listings - from the last run but skip actual sync. - --filters-file PATH Read filtering patterns from a file - --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. - If exceeded, the bisync run will abort. (default: 50%) - --force Bypass \[ga]--max-delete\[ga] safety check and run the sync. - Consider using with \[ga]--verbose\[ga] - --create-empty-src-dirs Sync creation and deletion of empty directories. - (Not compatible with --remove-empty-dirs) - --remove-empty-dirs Remove empty directories at the final cleanup step. - -1, --resync Performs the resync run. - Warning: Path1 files may overwrite Path2 versions. - Consider using \[ga]--verbose\[ga] or \[ga]--dry-run\[ga] first. - --ignore-listing-checksum Do not use checksums for listings - (add --ignore-checksum to additionally skip post-copy checksum checks) - --resilient Allow future runs to retry after certain less-serious errors, - instead of requiring --resync. Use at your own risk! - --localtime Use local time in listings (default: UTC) - --no-cleanup Retain working files (useful for troubleshooting and testing). - --workdir PATH Use custom working directory (useful for testing). 
- (default: \[ga]\[ti]/.cache/rclone/bisync\[ga]) - -n, --dry-run Go through the motions - No files are copied/deleted. - -v, --verbose Increases logging verbosity. - May be specified more than once for more details. - -h, --help help for bisync + --backup-dir1 string --backup-dir for Path1. Must be a non-overlapping path on the same remote. + --backup-dir2 string --backup-dir for Path2. Must be a non-overlapping path on the same remote. + --check-access Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort. + --check-filename string Filename for --check-access (default: RCLONE_TEST) + --check-sync string Controls comparison of final listings: true|false|only (default: true) (default \[dq]true\[dq]) + --compare string Comma-separated list of bisync-specific compare options ex. \[aq]size,modtime,checksum\[aq] (default: \[aq]size,modtime\[aq]) + --conflict-loser ConflictLoserAction Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num) + --conflict-resolve string Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default \[dq]none\[dq]) + --conflict-suffix string Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: \[aq]conflict\[aq]) + --create-empty-src-dirs Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs) + --download-hash Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!) + --filters-file string Read filtering patterns from a file + --force Bypass --max-delete safety check and run the sync. Consider using with --verbose + -h, --help help for bisync + --ignore-listing-checksum Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks) + --max-lock Duration Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s) + --no-cleanup Retain working files (useful for troubleshooting and testing). + --no-slow-hash Ignore listing checksums only on backends where they are slow + --recover Automatically recover from interruptions without requiring --resync. + --remove-empty-dirs Remove ALL empty directories at the final cleanup step. + --resilient Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk! + -1, --resync Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first. + --resync-mode string During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default \[dq]none\[dq]) + --retries int Retry operations this many times if they fail (requires --resilient). (default 3) + --retries-sleep Duration Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s) + --slow-hash-sync-only Ignore slow checksums for listings and deltas, but still consider them during sync calls. + --workdir string Use custom working dir - useful for testing. (default: {WORKDIR}) + --max-delete PERCENT Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%) + -n, --dry-run Go through the motions - No files are copied/deleted. 
+ -v, --verbose Increases logging verbosity. May be specified more than once for more details.
\f[R]
.fi
.PP
@@ -26251,25 +28044,16 @@ will have ALL empty directories purged as the last step in the
process.
.PP
This will effectively make both Path1 and Path2 filesystems contain a
matching superset of all files.
-Path2 files that do not exist in Path1 will be copied to Path1, and the
-process will then copy the Path1 tree to Path2.
+By default, Path2 files that do not exist in Path1 will be copied to
+Path1, and the process will then copy the Path1 tree to Path2.
.PP
-The \f[C]--resync\f[R] sequence is roughly equivalent to:
+The \f[C]--resync\f[R] sequence is roughly equivalent to the following
+(but see \f[C]--resync-mode\f[R] for other options):
.IP
.nf
\f[C]
-rclone copy Path2 Path1 --ignore-existing
-rclone copy Path1 Path2
-\f[R]
-.fi
-.PP
-Or, if using \f[C]--create-empty-src-dirs\f[R]:
-.IP
-.nf
-\f[C]
-rclone copy Path2 Path1 --ignore-existing
-rclone copy Path1 Path2 --create-empty-src-dirs
-rclone copy Path2 Path1 --create-empty-src-dirs
+rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
+rclone copy Path1 Path2 [--create-empty-src-dirs]
\f[R]
.fi
.PP
@@ -26279,10 +28063,12 @@ This is required for safety - that bisync can verify that both paths
are valid.
.PP
When using \f[C]--resync\f[R], a newer version of a file on the Path2
-filesystem will be overwritten by the Path1 filesystem version.
+filesystem will (by default) be overwritten by the Path1 filesystem
+version.
(Note that this is NOT entirely
-symmetrical (https://github.com/rclone/rclone/issues/5681#issuecomment-938761815).)
-Carefully evaluate deltas using
+symmetrical (https://github.com/rclone/rclone/issues/5681#issuecomment-938761815),
+and more symmetrical options can be specified with the
+\f[C]--resync-mode\f[R] flag.) Carefully evaluate deltas using
--dry-run (https://rclone.org/flags/#non-backend-flags).
.PP
For a resync run, one of the paths may be empty (no files in the path
@@ -26295,6 +28081,125 @@ fails with \f[C]Empty current PathN listing. Cannot sync to an empty directory: X.pathN.lst\f[R]
This is a safety check that an unexpected empty path does not result in
deleting \f[B]everything\f[R] in the other path.
+.PP
+Note that \f[C]--resync\f[R] implies \f[C]--resync-mode path1\f[R]
+unless a different \f[C]--resync-mode\f[R] is explicitly specified.
+It is not necessary to use both the \f[C]--resync\f[R] and
+\f[C]--resync-mode\f[R] flags -- either one is sufficient without the
+other.
+.PP
+\f[B]Note:\f[R] \f[C]--resync\f[R] (including \f[C]--resync-mode\f[R])
+should only be used under three specific (rare) circumstances: 1.
+It is your \f[I]first\f[R] bisync run (between these two paths) 2.
+You\[aq]ve just made changes to your bisync settings (such as editing
+the contents of your \f[C]--filters-file\f[R]) 3.
+There was an error on the prior run, and as a result, bisync now
+requires \f[C]--resync\f[R] to recover
+.PP
+The rest of the time, you should \f[I]omit\f[R] \f[C]--resync\f[R].
+This is because \f[C]--resync\f[R] will only \f[I]copy\f[R] (not
+\f[I]sync\f[R]) each side to the other.
+Therefore, if you included \f[C]--resync\f[R] for every bisync run, it
+would never be possible to delete a file -- the deleted file would
+always keep reappearing at the end of every run (because it\[aq]s being
+copied from the other side where it still exists).
+Similarly, renaming a file would always result in a duplicate copy (both
+old and new name) on both sides.
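+.PP
+As an illustrative pattern (the remote names and paths here are
+placeholders), the lifecycle of a typical bisync pair therefore looks
+like this:
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path1 remote2:path2 --resync   # first run only
+rclone bisync remote1:path1 remote2:path2            # all subsequent runs
+\f[R]
+.fi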
+.PP +If you find that frequent interruptions from #3 are an issue, rather +than automatically running \f[C]--resync\f[R], the recommended +alternative is to use the \f[C]--resilient\f[R], \f[C]--recover\f[R], +and \f[C]--conflict-resolve\f[R] flags, (along with Graceful Shutdown +mode, when needed) for a very robust \[dq]set-it-and-forget-it\[dq] +bisync setup that can automatically bounce back from almost any +interruption it might encounter. +Consider adding something like the following: +.IP +.nf +\f[C] +--resilient --recover --max-lock 2m --conflict-resolve newer +\f[R] +.fi +.SS --resync-mode CHOICE +.PP +In the event that a file differs on both sides during a +\f[C]--resync\f[R], \f[C]--resync-mode\f[R] controls which version will +overwrite the other. +The supported options are similar to \f[C]--conflict-resolve\f[R]. +For all of the following options, the version that is kept is referred +to as the \[dq]winner\[dq], and the version that is overwritten +(deleted) is referred to as the \[dq]loser\[dq]. +The options are named after the \[dq]winner\[dq]: +.IP \[bu] 2 +\f[C]path1\f[R] - (the default) - the version from Path1 is +unconditionally considered the winner (regardless of \f[C]modtime\f[R] +and \f[C]size\f[R], if any). +This can be useful if one side is more trusted or up-to-date than the +other, at the time of the \f[C]--resync\f[R]. +.IP \[bu] 2 +\f[C]path2\f[R] - same as \f[C]path1\f[R], except the path2 version is +considered the winner. +.IP \[bu] 2 +\f[C]newer\f[R] - the newer file (by \f[C]modtime\f[R]) is considered +the winner, regardless of which side it came from. +This may result in having a mix of some winners from Path1, and some +winners from Path2. +(The implementation is analogous to running +\f[C]rclone copy --update\f[R] in both directions.) +.IP \[bu] 2 +\f[C]older\f[R] - same as \f[C]newer\f[R], except the older file is +considered the winner, and the newer file is considered the loser. +.IP \[bu] 2 +\f[C]larger\f[R] - the larger file (by \f[C]size\f[R]) is considered the +winner (regardless of \f[C]modtime\f[R], if any). +This can be a useful option for remotes without \f[C]modtime\f[R] +support, or with the kinds of files (such as logs) that tend to grow but +not shrink, over time. +.IP \[bu] 2 +\f[C]smaller\f[R] - the smaller file (by \f[C]size\f[R]) is considered +the winner (regardless of \f[C]modtime\f[R], if any). +.PP +For all of the above options, note the following: - If either of the +underlying remotes lacks support for the chosen method, it will be +ignored and will fall back to the default of \f[C]path1\f[R]. +(For example, if \f[C]--resync-mode newer\f[R] is set, but one of the +paths uses a remote that doesn\[aq]t support \f[C]modtime\f[R].) - If a +winner can\[aq]t be determined because the chosen method\[aq]s attribute +is missing or equal, it will be ignored, and bisync will instead try to +determine whether the files differ by looking at the other +\f[C]--compare\f[R] methods in effect. +(For example, if \f[C]--resync-mode newer\f[R] is set, but the Path1 and +Path2 modtimes are identical, bisync will compare the sizes.) If bisync +concludes that they differ, preference is given to whichever is the +\[dq]source\[dq] at that moment. +(In practice, this gives a slight advantage to Path2, as the 2to1 copy +comes before the 1to2 copy.) If the files \f[I]do not\f[R] differ, +nothing is copied (as both sides are already correct). +- These options apply only to files that exist on both sides (with the +same name and relative path). 
+Files that exist \f[I]only\f[R] on one side and not the other are
+\f[I]always\f[R] copied to the other, during \f[C]--resync\f[R] (this is
+one of the main differences between resync and non-resync runs).
+- \f[C]--conflict-resolve\f[R], \f[C]--conflict-loser\f[R], and
+\f[C]--conflict-suffix\f[R] do not apply during \f[C]--resync\f[R], and
+unlike these flags, nothing is renamed during \f[C]--resync\f[R].
+When a file differs on both sides during \f[C]--resync\f[R], one version
+always overwrites the other (much like in \f[C]rclone copy\f[R].)
+(Consider using \f[C]--backup-dir\f[R] to retain a backup of the losing
+version.) - Unlike for \f[C]--conflict-resolve\f[R],
+\f[C]--resync-mode none\f[R] is not a valid option (or rather, it will
+be interpreted as \[dq]no resync\[dq], unless \f[C]--resync\f[R] has
+also been specified, in which case it will be ignored.) - Winners and
+losers are decided at the individual file-level only (there is not
+currently an option to pick an entire winning directory atomically,
+although the \f[C]path1\f[R] and \f[C]path2\f[R] options typically
+produce a similar result.) - To maintain backward-compatibility, the
+\f[C]--resync\f[R] flag implies \f[C]--resync-mode path1\f[R] unless a
+different \f[C]--resync-mode\f[R] is explicitly specified.
+Similarly, all \f[C]--resync-mode\f[R] options (except \f[C]none\f[R])
+imply \f[C]--resync\f[R], so it is not necessary to use both the
+\f[C]--resync\f[R] and \f[C]--resync-mode\f[R] flags simultaneously --
+either one is sufficient without the other.
.SS --check-access
.PP
Access check files are an additional safety measure against data loss.
@@ -26337,6 +28242,185 @@ One or more files having this filename must exist, synchronized between
your source and destination filesets, in order for
\f[C]--check-access\f[R] to succeed.
See --check-access for additional details.
+.SS --compare
+.PP
+As of \f[C]v1.66\f[R], bisync fully supports comparing based on any
+combination of size, modtime, and checksum (lifting the prior
+restriction on backends without modtime support.)
+.PP
+By default (without the \f[C]--compare\f[R] flag), bisync inherits the
+same comparison options as \f[C]sync\f[R] (that is: \f[C]size\f[R] and
+\f[C]modtime\f[R] by default, unless modified with flags such as
+\f[C]--checksum\f[R] (https://rclone.org/docs/#c-checksum) or
+\f[C]--size-only\f[R].)
+.PP
+If the \f[C]--compare\f[R] flag is set, it will override these defaults.
+This can be useful if you wish to compare based on combinations not
+currently supported in \f[C]sync\f[R], such as comparing all three of
+\f[C]size\f[R] AND \f[C]modtime\f[R] AND \f[C]checksum\f[R]
+simultaneously (or just \f[C]modtime\f[R] AND \f[C]checksum\f[R]).
+.PP
+\f[C]--compare\f[R] takes a comma-separated list, with the currently
+supported values being \f[C]size\f[R], \f[C]modtime\f[R], and
+\f[C]checksum\f[R].
+For example, if you want to compare size and checksum, but not modtime,
+you would do:
+.IP
+.nf
+\f[C]
+--compare size,checksum
+\f[R]
+.fi
+.PP
+Or if you want to compare all three:
+.IP
+.nf
+\f[C]
+--compare size,modtime,checksum
+\f[R]
+.fi
+.PP
+\f[C]--compare\f[R] overrides any conflicting flags.
+For example, if you set the conflicting flags
+\f[C]--compare checksum --size-only\f[R], \f[C]--size-only\f[R] will be
+ignored, and bisync will compare checksum and not size.
+To avoid confusion, it is recommended to use \f[I]either\f[R]
+\f[C]--compare\f[R] or the normal \f[C]sync\f[R] flags, but not both.
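+.PP
+For instance, in the following (deliberately conflicting) invocation,
+\f[C]--size-only\f[R] would be ignored and only checksums would be
+compared, per the precedence rule above (paths are placeholders):
+.IP
+.nf
+\f[C]
+rclone bisync remote1:path1 remote2:path2 --compare checksum --size-only
+\f[R]
+.fi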
+.PP +If \f[C]--compare\f[R] includes \f[C]checksum\f[R] and both remotes +support checksums but have no hash types in common with each other, +checksums will be considered \f[I]only\f[R] for comparisons within the +same side (to determine what has changed since the prior sync), but not +for comparisons against the opposite side. +If one side supports checksums and the other does not, checksums will +only be considered on the side that supports them. +.PP +When comparing with \f[C]checksum\f[R] and/or \f[C]size\f[R] without +\f[C]modtime\f[R], bisync cannot determine whether a file is +\f[C]newer\f[R] or \f[C]older\f[R] -- only whether it is +\f[C]changed\f[R] or \f[C]unchanged\f[R]. +(If it is \f[C]changed\f[R] on both sides, bisync still does the +standard equality-check to avoid declaring a sync conflict unless it +absolutely has to.) +.PP +It is recommended to do a \f[C]--resync\f[R] when changing +\f[C]--compare\f[R] settings, as otherwise your prior listing files may +not contain the attributes you wish to compare (for example, they will +not have stored checksums if you were not previously comparing +checksums.) +.SS --ignore-listing-checksum +.PP +When \f[C]--checksum\f[R] or \f[C]--compare checksum\f[R] is set, bisync +will retrieve (or generate) checksums (for backends that support them) +when creating the listings for both paths, and store the checksums in +the listing files. +\f[C]--ignore-listing-checksum\f[R] will disable this behavior, which +may speed things up considerably, especially on backends (such as +local (https://rclone.org/local/)) where hashes must be computed on the +fly instead of retrieved. +Please note the following: +.IP \[bu] 2 +As of \f[C]v1.66\f[R], \f[C]--ignore-listing-checksum\f[R] is now +automatically set when neither \f[C]--checksum\f[R] nor +\f[C]--compare checksum\f[R] are in use (as the checksums would not be +used for anything.) +.IP \[bu] 2 +\f[C]--ignore-listing-checksum\f[R] is NOT the same as +\f[C]--ignore-checksum\f[R] (https://rclone.org/docs/#ignore-checksum), +and you may wish to use one or the other, or both. +In a nutshell: \f[C]--ignore-listing-checksum\f[R] controls whether +checksums are considered when scanning for diffs, while +\f[C]--ignore-checksum\f[R] controls whether checksums are considered +during the copy/sync operations that follow, if there ARE diffs. +.IP \[bu] 2 +Unless \f[C]--ignore-listing-checksum\f[R] is passed, bisync currently +computes hashes for one path \f[I]even when there\[aq]s no common hash +with the other path\f[R] (for example, a +crypt (https://rclone.org/crypt/#modification-times-and-hashes) remote.) +This can still be beneficial, as the hashes will still be used to detect +changes within the same side (if \f[C]--checksum\f[R] or +\f[C]--compare checksum\f[R] is set), even if they can\[aq]t be used to +compare against the opposite side. +.IP \[bu] 2 +If you wish to ignore listing checksums \f[I]only\f[R] on remotes where +they are slow to compute, consider using \f[C]--no-slow-hash\f[R] (or +\f[C]--slow-hash-sync-only\f[R]) instead of +\f[C]--ignore-listing-checksum\f[R]. +.IP \[bu] 2 +If \f[C]--ignore-listing-checksum\f[R] is used simultaneously with +\f[C]--compare checksum\f[R] (or \f[C]--checksum\f[R]), checksums will +be ignored for bisync deltas, but still considered during the sync +operations that follow (if deltas are detected based on modtime and/or +size.) 
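+.PP
+A minimal sketch of the distinction (placeholder paths; either flag may
+also be used alone, or both together):
+.IP
+.nf
+\f[C]
+# checksums ignored for deltas, but still considered during sync operations:
+rclone bisync remote1:path1 remote2:path2 --compare checksum --ignore-listing-checksum
+# checksums considered for deltas, but ignored during sync operations:
+rclone bisync remote1:path1 remote2:path2 --compare checksum --ignore-checksum
+\f[R]
+.fi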
+.SS --no-slow-hash
+.PP
+On some remotes (notably \f[C]local\f[R]), checksums can dramatically
+slow down a bisync run, because hashes cannot be stored and need to be
+computed in real-time when they are requested.
+On other remotes (such as \f[C]drive\f[R]), they add practically no time
+at all.
+The \f[C]--no-slow-hash\f[R] flag will automatically skip checksums on
+remotes where they are slow, while still comparing them on others
+(assuming \f[C]--compare\f[R] includes \f[C]checksum\f[R].) This can be
+useful when one of your bisync paths is slow but you still want to check
+checksums on the other, for a more robust sync.
+.SS --slow-hash-sync-only
+.PP
+Same as \f[C]--no-slow-hash\f[R], except slow hashes are still
+considered during sync calls.
+They are still NOT considered for determining deltas, nor are they
+included in listings.
+They are also skipped during \f[C]--resync\f[R].
+The main use case for this flag is when you have a large number of
+files, but relatively few of them change from run to run -- so you
+don\[aq]t want to check your entire tree every time (it would take too
+long), but you still want to consider checksums for the smaller group of
+files for which a \f[C]modtime\f[R] or \f[C]size\f[R] change was
+detected.
+Keep in mind that this speed savings comes with a safety trade-off: if a
+file\[aq]s content were to change without a change to its
+\f[C]modtime\f[R] or \f[C]size\f[R], bisync would not detect it, and it
+would not be synced.
+.PP
+\f[C]--slow-hash-sync-only\f[R] is only useful if both remotes share a
+common hash type (if they don\[aq]t, bisync will automatically fall back
+to \f[C]--no-slow-hash\f[R].) Both \f[C]--no-slow-hash\f[R] and
+\f[C]--slow-hash-sync-only\f[R] have no effect without
+\f[C]--compare checksum\f[R] (or \f[C]--checksum\f[R]).
+.SS --download-hash
+.PP
+If \f[C]--download-hash\f[R] is set, bisync will use best efforts to
+obtain an MD5 checksum by downloading and computing on-the-fly, when
+checksums are not otherwise available (for example, a remote that
+doesn\[aq]t support them.) Note that since rclone has to download the
+entire file, this may dramatically slow down your bisync runs, and is
+also likely to use a lot of data, so it is probably not practical for
+bisync paths with a large total file size.
+However, it can be a good option for syncing small-but-important files
+with maximum accuracy (for example, a source code repo on a
+\f[C]crypt\f[R] remote.) An additional advantage over methods like
+\f[C]cryptcheck\f[R] (https://rclone.org/commands/rclone_cryptcheck/) is
+that the original file is not required for comparison (for example,
+\f[C]--download-hash\f[R] can be used to bisync two different crypt
+remotes with different passwords.)
+.PP
+When \f[C]--download-hash\f[R] is set, bisync still looks for more
+efficient checksums first, and falls back to downloading only when none
+are found.
+It takes priority over conflicting flags such as
+\f[C]--no-slow-hash\f[R].
+\f[C]--download-hash\f[R] is not suitable for Google Docs and other
+files of unknown size, as their checksums would change from run to run
+(due to small variances in the internals of the generated export file.)
+Therefore, bisync automatically skips \f[C]--download-hash\f[R] for
+files with a size less than 0.
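+.PP
+For example, one could bisync two crypt remotes (which lack native
+checksums) with full checksum comparison, at the cost of downloading
+each file that needs hashing (the remote names here are placeholders):
+.IP
+.nf
+\f[C]
+rclone bisync secret1:path secret2:path --compare size,modtime,checksum --download-hash
+\f[R]
+.fi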
+.PP +See also: \f[C]Hasher\f[R] (https://rclone.org/hasher/) backend, +\f[C]cryptcheck\f[R] (https://rclone.org/commands/rclone_cryptcheck/) +command, +\f[C]rclone check --download\f[R] (https://rclone.org/commands/rclone_check/) +option, \f[C]md5sum\f[R] (https://rclone.org/commands/rclone_md5sum/) +command .SS --max-delete .PP As a safety check, if greater than the \f[C]--max-delete\f[R] percent of @@ -26381,6 +28465,195 @@ the MD5 hash of the current filters file and compares it to the hash stored in the \f[C].md5\f[R] file. If they don\[aq]t match, the run aborts with a critical error and thus forces you to do a \f[C]--resync\f[R], likely avoiding a disaster. +.SS --conflict-resolve CHOICE +.PP +In bisync, a \[dq]conflict\[dq] is a file that is \f[I]new\f[R] or +\f[I]changed\f[R] on \f[I]both sides\f[R] (relative to the prior run) +AND is \f[I]not currently identical\f[R] on both sides. +\f[C]--conflict-resolve\f[R] controls how bisync handles such a +scenario. +The currently supported options are: +.IP \[bu] 2 +\f[C]none\f[R] - (the default) - do not attempt to pick a winner, keep +and rename both files according to \f[C]--conflict-loser\f[R] and +\f[C]--conflict-suffix\f[R] settings. +For example, with the default settings, \f[C]file.txt\f[R] on Path1 is +renamed \f[C]file.txt.conflict1\f[R] and \f[C]file.txt\f[R] on Path2 is +renamed \f[C]file.txt.conflict2\f[R]. +Both are copied to the opposite path during the run, so both sides end +up with a copy of both files. +(As \f[C]none\f[R] is the default, it is not necessary to specify +\f[C]--conflict-resolve none\f[R] -- you can just omit the flag.) +.IP \[bu] 2 +\f[C]newer\f[R] - the newer file (by \f[C]modtime\f[R]) is considered +the winner and is copied without renaming. +The older file (the \[dq]loser\[dq]) is handled according to +\f[C]--conflict-loser\f[R] and \f[C]--conflict-suffix\f[R] settings +(either renamed or deleted.) For example, if \f[C]file.txt\f[R] on Path1 +is newer than \f[C]file.txt\f[R] on Path2, the result on both sides +(with other default settings) will be \f[C]file.txt\f[R] (winner from +Path1) and \f[C]file.txt.conflict1\f[R] (loser from Path2). +.IP \[bu] 2 +\f[C]older\f[R] - same as \f[C]newer\f[R], except the older file is +considered the winner, and the newer file is considered the loser. +.IP \[bu] 2 +\f[C]larger\f[R] - the larger file (by \f[C]size\f[R]) is considered the +winner (regardless of \f[C]modtime\f[R], if any). +.IP \[bu] 2 +\f[C]smaller\f[R] - the smaller file (by \f[C]size\f[R]) is considered +the winner (regardless of \f[C]modtime\f[R], if any). +.IP \[bu] 2 +\f[C]path1\f[R] - the version from Path1 is unconditionally considered +the winner (regardless of \f[C]modtime\f[R] and \f[C]size\f[R], if any). +This can be useful if one side is usually more trusted or up-to-date +than the other. +.IP \[bu] 2 +\f[C]path2\f[R] - same as \f[C]path1\f[R], except the path2 version is +considered the winner. +.PP +For all of the above options, note the following: - If either of the +underlying remotes lacks support for the chosen method, it will be +ignored and fall back to \f[C]none\f[R]. +(For example, if \f[C]--conflict-resolve newer\f[R] is set, but one of +the paths uses a remote that doesn\[aq]t support \f[C]modtime\f[R].) - +If a winner can\[aq]t be determined because the chosen method\[aq]s +attribute is missing or equal, it will be ignored and fall back to +\f[C]none\f[R]. 
+(For example, if \f[C]--conflict-resolve newer\f[R] is set, but the +Path1 and Path2 modtimes are identical, even if the sizes may differ.) - +If the file\[aq]s content is currently identical on both sides, it is +not considered a \[dq]conflict\[dq], even if new or changed on both +sides since the prior sync. +(For example, if you made a change on one side and then synced it to the +other side by other means.) Therefore, none of the conflict resolution +flags apply in this scenario. +- The conflict resolution flags do not apply during a +\f[C]--resync\f[R], as there is no \[dq]prior run\[dq] to speak of (but +see \f[C]--resync-mode\f[R] for similar options.) +.SS --conflict-loser CHOICE +.PP +\f[C]--conflict-loser\f[R] determines what happens to the +\[dq]loser\[dq] of a sync conflict (when \f[C]--conflict-resolve\f[R] +determines a winner) or to both files (when there is no winner.) The +currently supported options are: +.IP \[bu] 2 +\f[C]num\f[R] - (the default) - auto-number the conflicts by +automatically appending the next available number to the +\f[C]--conflict-suffix\f[R], in chronological order. +For example, with the default settings, the first conflict for +\f[C]file.txt\f[R] will be renamed \f[C]file.txt.conflict1\f[R]. +If \f[C]file.txt.conflict1\f[R] already exists, +\f[C]file.txt.conflict2\f[R] will be used instead (etc., up to a maximum +of 9223372036854775807 conflicts.) +.IP \[bu] 2 +\f[C]pathname\f[R] - rename the conflicts according to which side they +came from, which was the default behavior prior to \f[C]v1.66\f[R]. +For example, with \f[C]--conflict-suffix path\f[R], \f[C]file.txt\f[R] +from Path1 will be renamed \f[C]file.txt.path1\f[R], and +\f[C]file.txt\f[R] from Path2 will be renamed \f[C]file.txt.path2\f[R]. +If two non-identical suffixes are provided (ex. +\f[C]--conflict-suffix cloud,local\f[R]), the trailing digit is omitted. +Importantly, note that with \f[C]pathname\f[R], there is no +auto-numbering beyond \f[C]2\f[R], so if \f[C]file.txt.path2\f[R] +somehow already exists, it will be overwritten. +Using a dynamic date variable in your \f[C]--conflict-suffix\f[R] (see +below) is one possible way to avoid this. +Note also that conflicts-of-conflicts are possible, if the original +conflict is not manually resolved -- for example, if for some reason you +edited \f[C]file.txt.path1\f[R] on both sides, and those edits were +different, the result would be \f[C]file.txt.path1.path1\f[R] and +\f[C]file.txt.path1.path2\f[R] (in addition to +\f[C]file.txt.path2\f[R].) +.IP \[bu] 2 +\f[C]delete\f[R] - keep the winner only and delete the loser, instead of +renaming it. +If a winner cannot be determined (see \f[C]--conflict-resolve\f[R] for +details on how this could happen), \f[C]delete\f[R] is ignored and the +default \f[C]num\f[R] is used instead (i.e. +both versions are kept and renamed, and neither is deleted.) +\f[C]delete\f[R] is inherently the most destructive option, so use it +only with care. +.PP +For all of the above options, note that if a winner cannot be determined +(see \f[C]--conflict-resolve\f[R] for details on how this could happen), +or if \f[C]--conflict-resolve\f[R] is not in use, \f[I]both\f[R] files +will be renamed. +.SS --conflict-suffix STRING[,STRING] +.PP +\f[C]--conflict-suffix\f[R] controls the suffix that is appended when +bisync renames a \f[C]--conflict-loser\f[R] (default: +\f[C]conflict\f[R]). +\f[C]--conflict-suffix\f[R] will accept either one string or two +comma-separated strings to assign different suffixes to Path1 vs. +Path2. 
+This may be helpful later in identifying the source of the conflict.
+(For example,
+\f[C]--conflict-suffix dropboxconflict,laptopconflict\f[R])
+.PP
+With \f[C]--conflict-loser num\f[R], a number is always appended to the
+suffix.
+With \f[C]--conflict-loser pathname\f[R], a number is appended only when
+one suffix is specified (or when two identical suffixes are specified.)
+i.e.
+with \f[C]--conflict-loser pathname\f[R], all of the following would
+produce exactly the same result:
+.IP
+.nf
+\f[C]
+--conflict-suffix path
+--conflict-suffix path,path
+--conflict-suffix path1,path2
+\f[R]
+.fi
+.PP
+Suffixes may be as short as 1 character.
+By default, the suffix is appended after any other extensions (ex.
+\f[C]file.jpg.conflict1\f[R]), however, this can be changed with the
+\f[C]--suffix-keep-extension\f[R] (https://rclone.org/docs/#suffix-keep-extension)
+flag (i.e.
+to instead result in \f[C]file.conflict1.jpg\f[R]).
+.PP
+\f[C]--conflict-suffix\f[R] supports several \f[I]dynamic date
+variables\f[R] when enclosed in curly braces as globs.
+This can be helpful to track the date and/or time that each conflict was
+handled by bisync.
+For example:
+.IP
+.nf
+\f[C]
+--conflict-suffix {DateOnly}-conflict
+// result: myfile.txt.2006-01-02-conflict1
+\f[R]
+.fi
+.PP
+All of the formats described
+here (https://pkg.go.dev/time#pkg-constants) and
+here (https://pkg.go.dev/time#example-Time.Format) are supported, but
+take care to ensure that your chosen format does not use any characters
+that are illegal on your remotes (for example, macOS does not allow
+colons in filenames, and slashes are also best avoided as they are often
+interpreted as directory separators.) To address this particular issue,
+an additional \f[C]{MacFriendlyTime}\f[R] (or just \f[C]{mac}\f[R])
+option is supported, which results in \f[C]2006-01-02 0304PM\f[R].
+.PP
+Note that \f[C]--conflict-suffix\f[R] is entirely separate from
+rclone\[aq]s main
+\f[C]--suffix\f[R] (https://rclone.org/docs/#suffix-suffix) flag.
+This is intentional, as users may wish to use both flags simultaneously,
+if also using \f[C]--backup-dir\f[R].
+.PP
+Finally, note that the default in bisync prior to \f[C]v1.66\f[R] was to
+rename conflicts with \f[C]..path1\f[R] and \f[C]..path2\f[R] (with two
+periods, and \f[C]path\f[R] instead of \f[C]conflict\f[R].) Bisync now
+defaults to a single dot instead of a double dot, but additional dots
+can be added by including them in the specified suffix string.
+For example, for behavior equivalent to the previous default, use:
+.IP
+.nf
+\f[C]
+[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
+\f[R]
+.fi
.SS --check-sync
.PP
Enabled by default, the check-sync function checks that all of the same
files exist in both the Path1 and Path2 history listings.
@@ -26402,47 +28675,67 @@ The check may be run manually with \f[C]--check-sync=only\f[R].
It runs only the integrity check and terminates without actually
synching.
.PP
-See also: Concurrent modifications
-.SS --ignore-listing-checksum
+Note that currently, \f[C]--check-sync\f[R] \f[B]only checks listing
+snapshots and NOT the actual files on the remotes.\f[R] Note also that
+the listing snapshots will not know about any changes that happened
+during or after the latest bisync run, as those will be discovered on
+the next run.
+Therefore, while listings should always match \f[I]each other\f[R] at
+the end of a bisync run, it is \f[I]expected\f[R] that they will not
+match the underlying remotes, nor will the remotes match each other, if
+there were changes during or after the run.
+This is normal, and any differences will be detected and synced on the +next run. .PP -By default, bisync will retrieve (or generate) checksums (for backends -that support them) when creating the listings for both paths, and store -the checksums in the listing files. -\f[C]--ignore-listing-checksum\f[R] will disable this behavior, which -may speed things up considerably, especially on backends (such as -local (https://rclone.org/local/)) where hashes must be computed on the -fly instead of retrieved. -Please note the following: -.IP \[bu] 2 -While checksums are (by default) generated and stored in the listing -files, they are NOT currently used for determining diffs (deltas). -It is anticipated that full checksum support will be added in a future -version. -.IP \[bu] 2 -\f[C]--ignore-listing-checksum\f[R] is NOT the same as -\f[C]--ignore-checksum\f[R] (https://rclone.org/docs/#ignore-checksum), -and you may wish to use one or the other, or both. -In a nutshell: \f[C]--ignore-listing-checksum\f[R] controls whether -checksums are considered when scanning for diffs, while -\f[C]--ignore-checksum\f[R] controls whether checksums are considered -during the copy/sync operations that follow, if there ARE diffs. -.IP \[bu] 2 -Unless \f[C]--ignore-listing-checksum\f[R] is passed, bisync currently -computes hashes for one path \f[I]even when there\[aq]s no common hash -with the other path\f[R] (for example, a -crypt (https://rclone.org/crypt/#modification-times-and-hashes) remote.) -.IP \[bu] 2 -If both paths support checksums and have a common hash, AND -\f[C]--ignore-listing-checksum\f[R] was not specified when creating the -listings, \f[C]--check-sync=only\f[R] can be used to compare Path1 vs. -Path2 checksums (as of the time the previous listings were created.) -However, \f[C]--check-sync=only\f[R] will NOT include checksums if the -previous listings were generated on a run using -\f[C]--ignore-listing-checksum\f[R]. -For a more robust integrity check of the current state, consider using -\f[C]check\f[R] (or +For a robust integrity check of the current state of the remotes (as +opposed to just their listing snapshots), consider using \f[C]check\f[R] +(or \f[C]cryptcheck\f[R] (https://rclone.org/commands/rclone_cryptcheck/), -if at least one path is a \f[C]crypt\f[R] remote.) +if at least one path is a \f[C]crypt\f[R] remote) instead of +\f[C]--check-sync\f[R], keeping in mind that differences are expected if +files changed during or after your last bisync run. +.PP +For example, a possible sequence could look like this: +.IP "1." 3 +Normally scheduled bisync run: +.IP +.nf +\f[C] +rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient +\f[R] +.fi +.IP "2." 3 +Periodic independent integrity check (perhaps scheduled nightly or +weekly): +.IP +.nf +\f[C] +rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt +\f[R] +.fi +.IP "3." 3 +If diffs are found, you have some choices to correct them. 
+If one side is more up-to-date and you want to make the other side match
+it, you could run:
+.IP
+.nf
+\f[C]
+rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
+\f[R]
+.fi
+.PP
+(or switch Path1 and Path2 to make Path2 the source-of-truth)
+.PP
+Or, if neither side is totally up-to-date, you could run a
+\f[C]--resync\f[R] to bring them back into agreement (but remember that
+this could cause deleted files to re-appear.)
+.PP
+Note also that \f[C]rclone check\f[R] does not currently include empty
+directories, so if you want to know if any empty directories are out of
+sync, consider alternatively running the above \f[C]rclone sync\f[R]
+command with \f[C]--dry-run\f[R] added.
+.PP
+See also: Concurrent modifications, \f[C]--resilient\f[R]
.SS --resilient
.PP
\f[B]\f[BI]Caution: this is an experimental feature. Use at your own
risk!\f[B]\f[R]
@@ -26475,6 +28768,135 @@ Certain more serious errors will still enforce a \f[C]--resync\f[R]
lockout, even in \f[C]--resilient\f[R] mode, to prevent data loss.
.PP
Behavior of \f[C]--resilient\f[R] may change in a future version.
+(See also: \f[C]--recover\f[R], \f[C]--max-lock\f[R], Graceful Shutdown)
+.SS --recover
+.PP
+If \f[C]--recover\f[R] is set, in the event of a sudden interruption or
+other un-graceful shutdown, bisync will attempt to automatically recover
+on the next run, instead of requiring \f[C]--resync\f[R].
+Bisync is able to recover robustly by keeping one \[dq]backup\[dq]
+listing at all times, representing the state of both paths after the
+last known successful sync.
+Bisync can then compare the current state with this snapshot to
+determine which changes it needs to retry.
+Changes that were synced after this snapshot (during the run that was
+later interrupted) will appear to bisync as if they are \[dq]new or
+changed on both sides\[dq], but in most cases this is not a problem, as
+bisync will simply do its usual \[dq]equality check\[dq] and learn that
+no action needs to be taken on these files, since they are already
+identical on both sides.
+.PP
+In the rare event that a file is synced successfully during a run that
+later aborts, and then that same file changes AGAIN before the next run,
+bisync will think it is a sync conflict, and handle it accordingly.
+(From bisync\[aq]s perspective, the file has changed on both sides since
+the last trusted sync, and the files on either side are not currently
+identical.) Therefore, \f[C]--recover\f[R] carries with it a slightly
+increased chance of having conflicts -- though in practice this is
+pretty rare, as the conditions required to cause it are quite specific.
+This risk can be reduced by using bisync\[aq]s \[dq]Graceful
+Shutdown\[dq] mode (triggered by sending \f[C]SIGINT\f[R] or
+\f[C]Ctrl+C\f[R]), when you have the choice, instead of forcing a sudden
+termination.
+.PP
+\f[C]--recover\f[R] and \f[C]--resilient\f[R] are similar, but distinct
+-- the main difference is that \f[C]--resilient\f[R] is about
+\f[I]retrying\f[R], while \f[C]--recover\f[R] is about
+\f[I]recovering\f[R].
+Most users will probably want both.
+\f[C]--resilient\f[R] allows retrying when bisync has chosen to abort
+itself due to safety features such as failing \f[C]--check-access\f[R]
+or detecting a filter change.
+\f[C]--resilient\f[R] does not cover external interruptions such as a
+user shutting down their computer in the middle of a sync -- that is
+what \f[C]--recover\f[R] is for.
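+.PP
+As a sketch of how these might be combined in practice (illustrative
+only -- adjust the paths, flags, and schedule to your own setup), a
+crontab entry for a robust recurring sync could look like:
+.IP
+.nf
+\f[C]
+*/30 * * * * rclone bisync remote1:path1 remote2:path2 --resilient --recover --max-lock 2m
+\f[R]
+.fi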
+.SS --max-lock
+.PP
+Bisync uses lock files as a safety feature to prevent interference from
+other bisync runs while it is running.
+Bisync normally removes these lock files at the end of a run, but if
+bisync is abruptly interrupted, these files will be left behind.
+By default, they will lock out all future runs, until the user has a
+chance to manually check things out and remove the lock.
+As an alternative, \f[C]--max-lock\f[R] can be used to make them
+automatically expire after a certain period of time, so that future runs
+are not locked out forever, and auto-recovery is possible.
+\f[C]--max-lock\f[R] can be any duration \f[C]2m\f[R] or greater (or
+\f[C]0\f[R] to disable).
+If set, lock files older than this will be considered \[dq]expired\[dq],
+and future runs will be allowed to disregard them and proceed.
+(Note that the \f[C]--max-lock\f[R] duration must be set by the process
+that left the lock file -- not the later one interpreting it.)
+.PP
+If set, bisync will also \[dq]renew\[dq] these lock files every
+\f[C]--max-lock minus one minute\f[R] throughout a run, for extra
+safety.
+(For example, with \f[C]--max-lock 5m\f[R], bisync would renew the lock
+file (for another 5 minutes) every 4 minutes until the run has
+completed.) In other words, it should not be possible for a lock file to
+pass its expiration time while the process that created it is still
+running -- and you can therefore be reasonably sure that any
+\f[I]expired\f[R] lock file you may find was left there by an
+interrupted run, not one that is still running and just taking a while.
+.PP
+If \f[C]--max-lock\f[R] is \f[C]0\f[R] or not set, the default is that
+lock files will never expire, and will block future runs (of these same
+two bisync paths) indefinitely.
+.PP
+For maximum resilience from disruptions, consider setting a relatively
+short duration like \f[C]--max-lock 2m\f[R] along with
+\f[C]--resilient\f[R] and \f[C]--recover\f[R], and a relatively frequent
+cron schedule.
+The result will be a very robust \[dq]set-it-and-forget-it\[dq] bisync
+run that can automatically bounce back from almost any interruption it
+might encounter, without requiring the user to get involved and run a
+\f[C]--resync\f[R].
+(See also: Graceful Shutdown mode)
+.SS --backup-dir1 and --backup-dir2
+.PP
+As of \f[C]v1.66\f[R],
+\f[C]--backup-dir\f[R] (https://rclone.org/docs/#backup-dir-dir) is
+supported in bisync.
+Because \f[C]--backup-dir\f[R] must be a non-overlapping path on the
+same remote, bisync has introduced new \f[C]--backup-dir1\f[R] and
+\f[C]--backup-dir2\f[R] flags to support separate backup-dirs for
+\f[C]Path1\f[R] and \f[C]Path2\f[R] (bisyncing between different remotes
+with \f[C]--backup-dir\f[R] would not otherwise be possible.)
+\f[C]--backup-dir1\f[R] and \f[C]--backup-dir2\f[R] can use different
+remotes from each other, but \f[C]--backup-dir1\f[R] must use the same
+remote as \f[C]Path1\f[R], and \f[C]--backup-dir2\f[R] must use the same
+remote as \f[C]Path2\f[R].
+Each backup directory must not overlap its respective bisync Path
+without being excluded by a filter rule.
+.PP
+The standard \f[C]--backup-dir\f[R] will also work, if both paths use
+the same remote (but note that deleted files from both paths would be
+mixed together in the same dir).
+If either \f[C]--backup-dir1\f[R] or \f[C]--backup-dir2\f[R] is set, it
+will override \f[C]--backup-dir\f[R].
+.PP +Example: +.IP +.nf +\f[C] +rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case +\f[R] +.fi +.PP +In this example, if the user deletes a file in +\f[C]/Users/someuser/some/local/path/Bisync\f[R], bisync will propagate +the delete to the other side by moving the corresponding file from +\f[C]gdrive:Bisync\f[R] to \f[C]gdrive:BackupDir\f[R]. +If the user deletes a file from \f[C]gdrive:Bisync\f[R], bisync moves it +from \f[C]/Users/someuser/some/local/path/Bisync\f[R] to +\f[C]/Users/someuser/some/local/path/BackupDir\f[R]. +.PP +In the event of a rename due to a sync conflict, the rename is not +considered a delete, unless a previous conflict with the same name +already exists and would get overwritten. +.PP +See also: \f[C]--suffix\f[R] (https://rclone.org/docs/#suffix-suffix), +\f[C]--suffix-keep-extension\f[R] (https://rclone.org/docs/#suffix-keep-extension) .SS Operation .SS Runtime flow details .PP @@ -26493,8 +28915,10 @@ Propagate changes on \f[C]path1\f[R] to \f[C]path2\f[R], and vice-versa. Lock file prevents multiple simultaneous runs when taking a while. This can be particularly useful if bisync is run by cron scheduler. .IP \[bu] 2 -Handle change conflicts non-destructively by creating \f[C]..path1\f[R] -and \f[C]..path2\f[R] file versions. +Handle change conflicts non-destructively by creating +\f[C].conflict1\f[R], \f[C].conflict2\f[R], etc. +file versions, according to \f[C]--conflict-resolve\f[R], +\f[C]--conflict-loser\f[R], and \f[C]--conflict-suffix\f[R] settings. .IP \[bu] 2 File system access health check using \f[C]RCLONE_TEST\f[R] files (see the \f[C]--check-access\f[R] flag). @@ -26625,10 +29049,12 @@ T}@T{ File is new on Path1 AND new on Path2 (and Path1 version is NOT identical to Path2) T}@T{ -Files renamed to _Path1 and _Path2 +Conflicts handled according to \f[C]--conflict-resolve\f[R] & +\f[C]--conflict-loser\f[R] settings T}@T{ -\f[C]rclone copy\f[R] _Path2 file to Path1, \f[C]rclone copy\f[R] _Path1 -file to Path2 +default: \f[C]rclone copy\f[R] renamed \f[C]Path2.conflict2\f[R] file to +Path1, \f[C]rclone copy\f[R] renamed \f[C]Path1.conflict1\f[R] file to +Path2 T} T{ Path2 newer AND Path1 changed @@ -26636,10 +29062,12 @@ T}@T{ File is newer on Path2 AND also changed (newer/older/size) on Path1 (and Path1 version is NOT identical to Path2) T}@T{ -Files renamed to _Path1 and _Path2 +Conflicts handled according to \f[C]--conflict-resolve\f[R] & +\f[C]--conflict-loser\f[R] settings T}@T{ -\f[C]rclone copy\f[R] _Path2 file to Path1, \f[C]rclone copy\f[R] _Path1 -file to Path2 +default: \f[C]rclone copy\f[R] renamed \f[C]Path2.conflict2\f[R] file to +Path1, \f[C]rclone copy\f[R] renamed \f[C]Path1.conflict1\f[R] file to +Path2 T} T{ Path2 newer AND Path1 deleted @@ -26678,8 +29106,7 @@ new/changed on both sides), it first checks whether the Path1 and Path2 versions are currently \f[I]identical\f[R] (using the same underlying function as \f[C]check\f[R].) If bisync concludes that the files are identical, it will skip them and move on. -Otherwise, it will create renamed \f[C]..Path1\f[R] and -\f[C]..Path2\f[R] duplicates, as before. 
+Otherwise, it will create renamed duplicates, as before. This behavior also improves the experience of renaming directories (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=Renamed%20directories), as a \f[C]--resync\f[R] is no longer required, so long as the same @@ -26699,21 +29126,13 @@ Consider the situation carefully and perhaps use \f[C]--dry-run\f[R] before you commit to the changes. .SS Modification times .PP -Bisync relies on file timestamps to identify changed files and will -\f[I]refuse\f[R] to operate if backend lacks the modification time -support. -.PP +By default, bisync compares files by modification time and size. If you or your application should change the content of a file without -changing the modification time then bisync will \f[I]not\f[R] notice the -change, and thus will not copy it to the other side. -.PP -Note that on some cloud storage systems it is not possible to have file -timestamps that match \f[I]precisely\f[R] between the local and other -filesystems. -.PP -Bisync\[aq]s approach to this problem is by tracking the changes on each -side \f[I]separately\f[R] over time with a local database of files in -that side then applying the resulting changes on the other side. +changing the modification time and size, then bisync will \f[I]not\f[R] +notice the change, and thus will not copy it to the other side. +As an alternative, consider comparing by checksum (if your remotes +support it). +See \f[C]--compare\f[R] for details. .SS Error handling .PP Certain bisync critical errors, such as file copy/move failing, will @@ -26741,7 +29160,8 @@ Some errors are considered temporary and re-running the bisync is not blocked. The \f[I]critical return\f[R] blocks further bisync runs. .PP -See also: \f[C]--resilient\f[R] +See also: \f[C]--resilient\f[R], \f[C]--recover\f[R], +\f[C]--max-lock\f[R], Graceful Shutdown .SS Lock file .PP When bisync is running, a lock file is created in the bisync working @@ -26754,6 +29174,8 @@ The lock file effectively blocks follow-on (e.g., scheduled by \f[I]cron\f[R]) runs when the prior invocation is taking a long time. The lock file contains \f[I]PID\f[R] of the blocking process, which may help in debug. +Lock files can be set to automatically expire after a certain amount of +time, using the \f[C]--max-lock\f[R] flag. .PP \f[B]Note\f[R] that while concurrent bisync runs are allowed, \f[I]be very cautious\f[R] that there is no overlap in the trees being synched @@ -26765,86 +29187,84 @@ and general mayhem. - \f[C]0\f[R] on a successful run, - \f[C]1\f[R] for a non-critical failing run (a rerun may be successful), - \f[C]2\f[R] for a critically aborted run (requires a \f[C]--resync\f[R] to recover). +.SS Graceful Shutdown +.PP +Bisync has a \[dq]Graceful Shutdown\[dq] mode which is activated by +sending \f[C]SIGINT\f[R] or pressing \f[C]Ctrl+C\f[R] during a run. +Once triggered, bisync will use best efforts to exit cleanly before the +timer runs out. +If bisync is in the middle of transferring files, it will attempt to +cleanly empty its queue by finishing what it has started but not taking +more. +If it cannot do so within 30 seconds, it will cancel the in-progress +transfers at that point and then give itself a maximum of 60 seconds to +wrap up, save its state for next time, and exit. +With the \f[C]-vP\f[R] flags you will see constant status updates and a +final confirmation of whether or not the graceful shutdown was +successful. 
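+.PP
+For example, to trigger a graceful shutdown of a bisync run from another
+terminal, you could send \f[C]SIGINT\f[R] to its process (the PID shown
+here is a placeholder; it is also recorded in bisync\[aq]s lock file):
+.IP
+.nf
+\f[C]
+kill -INT 12345
+\f[R]
+.fi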
+.PP +At any point during the \[dq]Graceful Shutdown\[dq] sequence, a second +\f[C]SIGINT\f[R] or \f[C]Ctrl+C\f[R] will trigger an immediate, +un-graceful exit, which will leave things in a messier state. +Usually a robust recovery will still be possible if using +\f[C]--recover\f[R] mode, otherwise you will need to do a +\f[C]--resync\f[R]. +.PP +If you plan to use Graceful Shutdown mode, it is recommended to use +\f[C]--resilient\f[R] and \f[C]--recover\f[R], and it is important to +NOT use \f[C]--inplace\f[R] (https://rclone.org/docs/#inplace), +otherwise you risk leaving partially-written files on one side, which +may be confused for real files on the next run. +Note also that in the event of an abrupt interruption, a lock file will +be left behind to block concurrent runs. +You will need to delete it before you can proceed with the next run (or +wait for it to expire on its own, if using \f[C]--max-lock\f[R].) .SS Limitations .SS Supported backends .PP Bisync is considered \f[I]BETA\f[R] and has been tested with the following backends: - Local filesystem - Google Drive - Dropbox - -OneDrive - S3 - SFTP - Yandex Disk +OneDrive - S3 - SFTP - Yandex Disk - Crypt .PP It has not been fully tested with other services yet. If it works, or sorta works, please let us know and we\[aq]ll update the list. Run the test suite to check for proper operation as described below. .PP -First release of \f[C]rclone bisync\f[R] requires that underlying -backend supports the modification time feature and will refuse to run -otherwise. -This limitation will be lifted in a future \f[C]rclone bisync\f[R] -release. +The first release of \f[C]rclone bisync\f[R] required both underlying +backends to support modification times, and refused to run otherwise. +This limitation has been lifted as of \f[C]v1.66\f[R], as bisync now +supports comparing checksum and/or size instead of (or in addition to) +modtime. +See \f[C]--compare\f[R] for details. .SS Concurrent modifications .PP -When using \f[B]Local, FTP or SFTP\f[R] remotes rclone does not create -\f[I]temporary\f[R] files at the destination when copying, and thus if -the connection is lost the created file may be corrupt, which will -likely propagate back to the original path on the next sync, resulting -in data loss. -This will be solved in a future release, there is no workaround at the -moment. +When using \f[B]Local, FTP or SFTP\f[R] remotes with +\f[C]--inplace\f[R] (https://rclone.org/docs/#inplace), rclone does not +create \f[I]temporary\f[R] files at the destination when copying, and +thus if the connection is lost the created file may be corrupt, which +will likely propagate back to the original path on the next sync, +resulting in data loss. +It is therefore recommended to \f[I]omit\f[R] \f[C]--inplace\f[R]. .PP Files that \f[B]change during\f[R] a bisync run may result in data loss. -This has been seen in a highly dynamic environment, where the filesystem -is getting hammered by running processes during the sync. -The currently recommended solution is to sync at quiet times or filter -out unnecessary directories and files. -.PP -As an alternative -approach (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=scans%2C%20to%20avoid-,errors%20if%20files%20changed%20during%20sync,-Given%20the%20number), -consider using \f[C]--check-sync=false\f[R] (and possibly -\f[C]--resilient\f[R]) to make bisync more forgiving of filesystems that -change during the sync. 
-Be advised that this may cause bisync to miss events that occur during a -bisync run, so it is a good idea to supplement this with a periodic -independent integrity check, and corrective sync if diffs are found. -For example, a possible sequence could look like this: -.IP "1." 3 -Normally scheduled bisync run: -.IP -.nf -\f[C] -rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --check-sync=false --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -\f[R] -.fi -.IP "2." 3 -Periodic independent integrity check (perhaps scheduled nightly or -weekly): -.IP -.nf -\f[C] -rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt -\f[R] -.fi -.IP "3." 3 -If diffs are found, you have some choices to correct them. -If one side is more up-to-date and you want to make the other side match -it, you could run: -.IP -.nf -\f[C] -rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v -\f[R] -.fi -.PP -(or switch Path1 and Path2 to make Path2 the source-of-truth) -.PP -Or, if neither side is totally up-to-date, you could run a -\f[C]--resync\f[R] to bring them back into agreement (but remember that -this could cause deleted files to re-appear.) -.PP -*Note also that \f[C]rclone check\f[R] does not currently include empty -directories, so if you want to know if any empty directories are out of -sync, consider alternatively running the above \f[C]rclone sync\f[R] -command with \f[C]--dry-run\f[R] added. +Prior to \f[C]rclone v1.66\f[R], this was commonly seen in highly +dynamic environments, where the filesystem was getting hammered by +running processes during the sync. +As of \f[C]rclone v1.66\f[R], bisync was redesigned to use a +\[dq]snapshot\[dq] model, greatly reducing the risks from changes during +a sync. +Changes that are not detected during the current sync will now be +detected during the following sync, and will no longer cause the entire +run to throw a critical error. +There is additionally a mechanism to mark files as needing to be +internally rechecked next time, for added safety. +It should therefore no longer be necessary to sync only at quiet times +-- however, note that an error can still occur if a file happens to +change at the exact moment it\[aq]s being read/written by bisync (same +as would happen in \f[C]rclone sync\f[R].) (See also: +\f[C]--ignore-checksum\f[R] (https://rclone.org/docs/#ignore-checksum), +\f[C]--local-no-check-updated\f[R] (https://rclone.org/local/#local-no-check-updated)) .SS Empty directories .PP By default, new/deleted empty directories on one path are \f[I]not\f[R] @@ -26870,11 +29290,21 @@ It looks scarier than it is, but it\[aq]s still probably best to stick to one or the other, and use \f[C]--resync\f[R] when you need to switch. .SS Renamed directories .PP -Renaming a folder on the Path1 side results in deleting all files on the -Path2 side and then copying all files again from Path1 to Path2. +By default, renaming a folder on the Path1 side results in deleting all +files on the Path2 side and then copying all files again from Path1 to +Path2. Bisync sees this as all files in the old directory name as deleted and all files in the new directory name as new. 
-Currently, the most effective and efficient method of renaming a +.PP +A recommended solution is to use +\f[C]--track-renames\f[R] (https://rclone.org/docs/#track-renames), +which is now supported in bisync as of \f[C]rclone v1.66\f[R]. +Note that \f[C]--track-renames\f[R] is not available during +\f[C]--resync\f[R], as \f[C]--resync\f[R] does not delete anything +(\f[C]--track-renames\f[R] only supports \f[C]sync\f[R], not +\f[C]copy\f[R].) +.PP +Otherwise, the most effective and efficient method of renaming a directory is to rename it to the same name on both sides. (As of \f[C]rclone v1.64\f[R], a \f[C]--resync\f[R] is no longer required after doing so, as bisync will automatically detect that Path1 @@ -26892,32 +29322,29 @@ directories (https://github.com/rclone/rclone/commit/cbf3d4356135814921382dd3285 For now, the recommended way to avoid using \f[C]--fast-list\f[R] is to add \f[C]--disable ListR\f[R] to all bisync commands. The default behavior may change in a future version. -.SS Overridden Configs +.SS Case (and unicode) sensitivity .PP -When rclone detects an overridden config, it adds a suffix like -\f[C]{ABCDE}\f[R] on the fly to the internal name of the remote. -Bisync follows suit by including this suffix in its listing filenames. -However, this suffix does not necessarily persist from run to run, -especially if different flags are provided. -So if next time the suffix assigned is \f[C]{FGHIJ}\f[R], bisync will -get confused, because it\[aq]s looking for a listing file with -\f[C]{FGHIJ}\f[R], when the file it wants has \f[C]{ABCDE}\f[R]. -As a result, it throws -\f[C]Bisync critical error: cannot find prior Path1 or Path2 listings, likely due to critical error on prior run\f[R] -and refuses to run again until the user runs a \f[C]--resync\f[R] -(unless using \f[C]--resilient\f[R]). -The best workaround at the moment is to set any backend-specific flags -in the config file (https://rclone.org/commands/rclone_config/) instead -of specifying them with command flags. -(You can still override them as needed for other rclone commands.) -.SS Case sensitivity +As of \f[C]v1.66\f[R], case and unicode form differences no longer cause +critical errors, and normalization (when comparing between filesystems) +is handled according to the same flags and defaults as +\f[C]rclone sync\f[R]. +See the following options (all of which are supported by bisync) to +control this behavior more granularly: - +\f[C]--fix-case\f[R] (https://rclone.org/docs/#fix-case) - +\f[C]--ignore-case-sync\f[R] (https://rclone.org/docs/#ignore-case-sync) +- +\f[C]--no-unicode-normalization\f[R] (https://rclone.org/docs/#no-unicode-normalization) +- +\f[C]--local-unicode-normalization\f[R] (https://rclone.org/local/#local-unicode-normalization) +and +\f[C]--local-case-sensitive\f[R] (https://rclone.org/local/#local-case-sensitive) +(caution: these are normally not what you want.) .PP -Synching with \f[B]case-insensitive\f[R] filesystems, such as Windows or -\f[C]Box\f[R], can result in file name conflicts. -This will be fixed in a future release. -The near-term workaround is to make sure that files on both sides -don\[aq]t have spelling case differences (\f[C]Smile.jpg\f[R] vs. -\f[C]smile.jpg\f[R]). +Note that in the (probably rare) event that \f[C]--fix-case\f[R] is used +AND a file is new/changed on both sides AND the checksums match AND the +filename case does not match, the Path1 filename is considered the +winner, for the purposes of \f[C]--fix-case\f[R] (Path2 will be renamed +to match it). 
.SS Windows support .PP Bisync has been tested on Windows 8.1, Windows 10 Pro 64-bit and on @@ -27273,27 +29700,72 @@ If the error is \f[C]This file has been identified as malware or spam and cannot be downloaded\f[R], consider using the flag --drive-acknowledge-abuse (https://rclone.org/drive/#drive-acknowledge-abuse). -.SS Google Doc files +.SS Google Docs (and other files of unknown size) .PP -Google docs exist as virtual files on Google Drive and cannot be -transferred to other filesystems natively. -While it is possible to export a Google doc to a normal file (with -\f[C].xlsx\f[R] extension, for example), it is not possible to import a -normal file back into a Google document. +As of \f[C]v1.66\f[R], Google +Docs (https://rclone.org/drive/#import-export-of-google-documents) +(including Google Sheets, Slides, etc.) are now supported in bisync, +subject to the same options, defaults, and limitations as in +\f[C]rclone sync\f[R]. +When bisyncing drive with non-drive backends, the drive -> non-drive +direction is controlled by +\f[C]--drive-export-formats\f[R] (https://rclone.org/drive/#drive-export-formats) +(default \f[C]\[dq]docx,xlsx,pptx,svg\[dq]\f[R]) and the non-drive -> +drive direction is controlled by +\f[C]--drive-import-formats\f[R] (https://rclone.org/drive/#drive-import-formats) +(default none.) .PP -Bisync\[aq]s handling of Google Doc files is to flag them in the run log -output for user\[aq]s attention and ignore them for any file transfers, -deletes, or syncs. -They will show up with a length of \f[C]-1\f[R] in the listings. -This bisync run is otherwise successful: -.IP -.nf -\f[C] -2021/05/11 08:23:15 INFO : Synching Path1 \[dq]/path/to/local/tree/base/\[dq] with Path2 \[dq]GDrive:\[dq] -2021/05/11 08:23:15 INFO : ...path2.lst-new: Ignoring incorrect line: \[dq]- -1 - - 2018-07-29T08:49:30.136000000+0000 GoogleDoc.docx\[dq] -2021/05/11 08:23:15 INFO : Bisync successful -\f[R] -.fi +For example, with the default export/import formats, a Google Sheet on +the drive side will be synced to an \f[C].xlsx\f[R] file on the +non-drive side. +In the reverse direction, \f[C].xlsx\f[R] files with filenames that +match an existing Google Sheet will be synced to that Google Sheet, +while \f[C].xlsx\f[R] files that do NOT match an existing Google Sheet +will be copied to drive as normal \f[C].xlsx\f[R] files (without +conversion to Sheets, although the Google Drive web browser UI may still +give you the option to open it as one.) +.PP +If \f[C]--drive-import-formats\f[R] is set (it\[aq]s not, by default), +then all of the specified formats will be converted to Google Docs, if +there is no existing Google Doc with a matching name. +Caution: such conversion can be quite lossy, and in most cases it\[aq]s +probably not what you want! +.PP +To bisync Google Docs as URL shortcut links (in a manner similar to +\[dq]Drive for Desktop\[dq]), use: \f[C]--drive-export-formats url\f[R] +(or +alternatives (https://rclone.org/drive/#exportformats:~:text=available%20Google%20Documents.-,Extension,macOS,-Standard%20options).) +.PP +Note that these link files cannot be edited on the non-drive side -- you +will get errors if you try to sync an edited link file back to drive. +They CAN be deleted (it will result in deleting the corresponding Google +Doc.) If you create a \f[C].url\f[R] file on the non-drive side that +does not match an existing Google Doc, bisyncing it will just result in +copying the literal \f[C].url\f[R] file over to drive (no Google Doc +will be created.) 
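+For example, to bisync a local folder with a drive folder, exporting
+Google Docs to the non-drive side as \f[C].url\f[R] link files (an
+illustrative sketch; the paths are placeholders):
+.IP
+.nf
+\f[C]
+rclone bisync /path/to/local drive:folder --drive-export-formats url
+\f[R]
+.fi
+.PP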
So, as a general rule of thumb, think of them as
+read-only placeholders on the non-drive side, and make all your changes
+on the drive side.
+.PP
+Likewise, even with other export-formats, it is best to only move/rename
+Google Docs on the drive side.
+This is because otherwise, bisync will interpret this as a file deleted
+and another created, and accordingly, it will delete the Google Doc and
+create a new file at the new path.
+(Whether or not that new file is a Google Doc depends on
+\f[C]--drive-import-formats\f[R].)
+.PP
+Lastly, take note that all Google Docs on the drive side have a size of
+\f[C]-1\f[R] and no checksum.
+Therefore, they cannot be reliably synced with the \f[C]--checksum\f[R]
+or \f[C]--size-only\f[R] flags.
+(To be exact: they will still get created/deleted, and bisync\[aq]s
+delta engine will notice changes and queue them for syncing, but the
+underlying sync function will consider them identical and skip them.) To
+work around this, use the default (modtime and size) instead of
+\f[C]--checksum\f[R] or \f[C]--size-only\f[R].
+.PP
+To ignore Google Docs entirely, use
+\f[C]--drive-skip-gdocs\f[R] (https://rclone.org/drive/#drive-skip-gdocs).
.SS Usage examples
.SS Cron
.PP
@@ -27789,6 +30261,77 @@
Also note a number of academic publications by Benjamin
Pierce (http://www.cis.upenn.edu/%7Ebcpierce/papers/index.shtml#File%20Synchronization)
about \f[I]Unison\f[R] and synchronization in general.
.SS Changelog
+.SS \f[C]v1.66\f[R]
+.IP \[bu] 2
+Copies and deletes are now handled in one operation instead of two
+.IP \[bu] 2
+\f[C]--track-renames\f[R] and \f[C]--backup-dir\f[R] are now supported
+.IP \[bu] 2
+Partial uploads known issue on
+\f[C]local\f[R]/\f[C]ftp\f[R]/\f[C]sftp\f[R] has been resolved (unless
+using \f[C]--inplace\f[R])
+.IP \[bu] 2
+Final listings are now generated from sync results, to avoid needing to
+re-list
+.IP \[bu] 2
+Bisync is now much more resilient to changes that happen during a bisync
+run, and far less prone to critical errors / undetected changes
+.IP \[bu] 2
+Bisync is now capable of rolling a file listing back in cases of
+uncertainty, essentially marking the file as needing to be rechecked
+next time.
+.IP \[bu] 2
+A few basic terminal colors are now supported, controllable with
+\f[C]--color\f[R] (https://rclone.org/docs/#color-when)
+(\f[C]AUTO\f[R]|\f[C]NEVER\f[R]|\f[C]ALWAYS\f[R])
+.IP \[bu] 2
+Initial listing snapshots of Path1 and Path2 are now generated
+concurrently, using the same \[dq]march\[dq] infrastructure as
+\f[C]check\f[R] and \f[C]sync\f[R], for performance improvements and
+less risk of
+error (https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636#:~:text=4.%20Listings%20should%20alternate%20between%20paths%20to%20minimize%20errors).
+.IP \[bu] 2
+Fixed handling of unicode normalization and case insensitivity, with
+support for \f[C]--fix-case\f[R] (https://rclone.org/docs/#fix-case),
+\f[C]--ignore-case-sync\f[R], \f[C]--no-unicode-normalization\f[R]
+.IP \[bu] 2
+\f[C]--resync\f[R] is now much more efficient (especially for users of
+\f[C]--create-empty-src-dirs\f[R])
+.IP \[bu] 2
+Google Docs (and other files of unknown size) are now supported (with
+the same options as in \f[C]sync\f[R])
+.IP \[bu] 2
+Equality checks before a sync conflict rename now fall back to
+\f[C]cryptcheck\f[R] (when possible) or \f[C]--download\f[R], instead
+of \f[C]--size-only\f[R], when \f[C]check\f[R] is not available.
+.IP \[bu] 2 +Bisync no longer fails to find the correct listing file when configs are +overridden with backend-specific flags. +.IP \[bu] 2 +Bisync now fully supports comparing based on any combination of size, +modtime, and checksum, lifting the prior restriction on backends without +modtime support. +.IP \[bu] 2 +Bisync now supports a \[dq]Graceful Shutdown\[dq] mode to cleanly cancel +a run early without requiring \f[C]--resync\f[R]. +.IP \[bu] 2 +New \f[C]--recover\f[R] flag allows robust recovery in the event of +interruptions, without requiring \f[C]--resync\f[R]. +.IP \[bu] 2 +A new \f[C]--max-lock\f[R] setting allows lock files to automatically +renew and expire, for better automatic recovery when a run is +interrupted. +.IP \[bu] 2 +Bisync now supports auto-resolving sync conflicts and customizing rename +behavior with new \f[C]--conflict-resolve\f[R], +\f[C]--conflict-loser\f[R], and \f[C]--conflict-suffix\f[R] flags. +.IP \[bu] 2 +A new \f[C]--resync-mode\f[R] flag allows more control over which +version of a file gets kept during a \f[C]--resync\f[R]. +.IP \[bu] 2 +Bisync now supports +\f[C]--retries\f[R] (https://rclone.org/docs/#retries-int) and +\f[C]--retries-sleep\f[R] (when \f[C]--resilient\f[R] is set.) .SS \f[C]v1.64\f[R] .IP \[bu] 2 Fixed an @@ -28299,6 +30842,19 @@ Type: Encoding .IP \[bu] 2 Default: Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot +.SS --fichier-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_FICHIER_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP \f[C]rclone about\f[R] is not supported by the 1Fichier backend. @@ -28444,404 +31000,23 @@ Env Var: RCLONE_ALIAS_REMOTE Type: string .IP \[bu] 2 Required: true -.SH Amazon Drive -.PP -Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage -service run by Amazon for consumers. -.SS Status -.PP -\f[B]Important:\f[R] rclone supports Amazon Drive only if you have your -own set of API keys. -Unfortunately the Amazon Drive developer -program (https://developer.amazon.com/amazon-drive) is now closed to new -entries so if you don\[aq]t already have your own set of keys you will -not be able to use rclone with Amazon Drive. -.PP -For the history on why rclone no longer has a set of Amazon Drive API -keys see the -forum (https://forum.rclone.org/t/rclone-has-been-banned-from-amazon-drive/2314). -.PP -If you happen to know anyone who works at Amazon then please ask them to -re-instate rclone into the Amazon Drive developer program - thanks! -.SS Configuration -.PP -The initial setup for Amazon Drive involves getting a token from Amazon -which you need to do in your browser. -\f[C]rclone config\f[R] walks you through it. -.PP -The configuration process for Amazon Drive may involve using an oauth -proxy (https://github.com/ncw/oauthproxy). -This is used to keep the Amazon credentials out of the source code. -The proxy runs in Google\[aq]s very secure App Engine environment and -doesn\[aq]t store any credentials which pass through it. -.PP -Since rclone doesn\[aq]t currently have its own Amazon Drive credentials -so you will either need to have your own \f[C]client_id\f[R] and -\f[C]client_secret\f[R] with Amazon Drive, or use a third-party oauth -proxy in which case you will need to enter \f[C]client_id\f[R], -\f[C]client_secret\f[R], \f[C]auth_url\f[R] and \f[C]token_url\f[R]. 
-.PP -Note also if you are not using Amazon\[aq]s \f[C]auth_url\f[R] and -\f[C]token_url\f[R], (ie you filled in something for those) then if -setting up on a remote machine you can only use the copying the config -method of -configuration (https://rclone.org/remote_setup/#configuring-by-copying-the-config-file) -- \f[C]rclone authorize\f[R] will not work. -.PP -Here is an example of how to make a remote called \f[C]remote\f[R]. -First run: -.IP -.nf -\f[C] - rclone config -\f[R] -.fi -.PP -This will guide you through an interactive setup process: -.IP -.nf -\f[C] -No remotes found, make a new one? -n) New remote -r) Rename remote -c) Copy remote -s) Set configuration password -q) Quit config -n/r/c/s/q> n -name> remote -Type of storage to configure. -Choose a number from below, or type in your own value -[snip] -XX / Amazon Drive - \[rs] \[dq]amazon cloud drive\[dq] -[snip] -Storage> amazon cloud drive -Amazon Application Client Id - required. -client_id> your client ID goes here -Amazon Application Client Secret - required. -client_secret> your client secret goes here -Auth server URL - leave blank to use Amazon\[aq]s. -auth_url> Optional auth URL -Token server url - leave blank to use Amazon\[aq]s. -token_url> Optional token URL -Remote config -Make sure your Redirect URL is set to \[dq]http://127.0.0.1:53682/\[dq] in your custom config. -Use web browser to automatically authenticate rclone with remote? - * Say Y if the machine running rclone has a web browser you can use - * Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. If Y failed, try N. -y) Yes -n) No -y/n> y -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... -Got code --------------------- -[remote] -client_id = your client ID goes here -client_secret = your client secret goes here -auth_url = Optional auth URL -token_url = Optional token URL -token = {\[dq]access_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxxxxxxx\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]refresh_token\[dq]:\[dq]xxxxxxxxxxxxxxxxxx\[dq],\[dq]expiry\[dq]:\[dq]2015-09-06T16:07:39.658438471+01:00\[dq]} --------------------- -y) Yes this is OK -e) Edit this remote -d) Delete this remote -y/e/d> y -\f[R] -.fi -.PP -See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. -.PP -Note that rclone runs a webserver on your local machine to collect the -token as returned from Amazon. -This only runs from the moment it opens your browser to the moment you -get back the verification code. -This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you -to unblock it temporarily if you are running a host firewall. -.PP -Once configured you can then use \f[C]rclone\f[R] like this, -.PP -List directories in top level of your Amazon Drive -.IP -.nf -\f[C] -rclone lsd remote: -\f[R] -.fi -.PP -List all the files in your Amazon Drive -.IP -.nf -\f[C] -rclone ls remote: -\f[R] -.fi -.PP -To copy a local directory to an Amazon Drive directory called backup -.IP -.nf -\f[C] -rclone copy /home/source remote:backup -\f[R] -.fi -.SS Modification times and hashes -.PP -Amazon Drive doesn\[aq]t allow modification times to be changed via the -API so these won\[aq]t be accurate or used for syncing. -.PP -It does support the MD5 hash algorithm, so for a more accurate sync, you -can use the \f[C]--checksum\f[R] flag. 
-.SS Restricted filename characters -.PP -.TS -tab(@); -l c c. -T{ -Character -T}@T{ -Value -T}@T{ -Replacement -T} -_ -T{ -NUL -T}@T{ -0x00 -T}@T{ -\[u2400] -T} -T{ -/ -T}@T{ -0x2F -T}@T{ -\[uFF0F] -T} -.TE -.PP -Invalid UTF-8 bytes will also be -replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t -be used in JSON strings. -.SS Deleting files -.PP -Any files you delete with rclone will end up in the trash. -Amazon don\[aq]t provide an API to permanently delete files, nor to -empty the trash, so you will have to do that with one of Amazon\[aq]s -apps or via the Amazon Drive website. -As of November 17, 2016, files are automatically deleted by Amazon from -the trash after 30 days. -.SS Using with non \f[C].com\f[R] Amazon accounts -.PP -Let\[aq]s say you usually use \f[C]amazon.co.uk\f[R]. -When you authenticate with rclone it will take you to an -\f[C]amazon.com\f[R] page to log in. -Your \f[C]amazon.co.uk\f[R] email and password should work here just -fine. -.SS Standard options -.PP -Here are the Standard options specific to amazon cloud drive (Amazon -Drive). -.SS --acd-client-id -.PP -OAuth Client Id. -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: client_id -.IP \[bu] 2 -Env Var: RCLONE_ACD_CLIENT_ID -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-client-secret -.PP -OAuth Client Secret. -.PP -Leave blank normally. -.PP -Properties: -.IP \[bu] 2 -Config: client_secret -.IP \[bu] 2 -Env Var: RCLONE_ACD_CLIENT_SECRET -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false .SS Advanced options .PP -Here are the Advanced options specific to amazon cloud drive (Amazon -Drive). -.SS --acd-token +Here are the Advanced options specific to alias (Alias for an existing +remote). +.SS --alias-description .PP -OAuth Access Token as a JSON blob. +Description of the remote .PP Properties: .IP \[bu] 2 -Config: token +Config: description .IP \[bu] 2 -Env Var: RCLONE_ACD_TOKEN +Env Var: RCLONE_ALIAS_DESCRIPTION .IP \[bu] 2 Type: string .IP \[bu] 2 Required: false -.SS --acd-auth-url -.PP -Auth server URL. -.PP -Leave blank to use the provider defaults. -.PP -Properties: -.IP \[bu] 2 -Config: auth_url -.IP \[bu] 2 -Env Var: RCLONE_ACD_AUTH_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-token-url -.PP -Token server url. -.PP -Leave blank to use the provider defaults. -.PP -Properties: -.IP \[bu] 2 -Config: token_url -.IP \[bu] 2 -Env Var: RCLONE_ACD_TOKEN_URL -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-checkpoint -.PP -Checkpoint for internal polling (debug). -.PP -Properties: -.IP \[bu] 2 -Config: checkpoint -.IP \[bu] 2 -Env Var: RCLONE_ACD_CHECKPOINT -.IP \[bu] 2 -Type: string -.IP \[bu] 2 -Required: false -.SS --acd-upload-wait-per-gb -.PP -Additional time per GiB to wait after a failed complete upload to see if -it appears. -.PP -Sometimes Amazon Drive gives an error when a file has been fully -uploaded but the file appears anyway after a little while. -This happens sometimes for files over 1 GiB in size and nearly every -time for files bigger than 10 GiB. -This parameter controls the time rclone waits for the file to appear. -.PP -The default value for this parameter is 3 minutes per GiB, so by default -it will wait 3 minutes for every GiB uploaded to see if the file -appears. -.PP -You can disable this feature by setting it to 0. -This may cause conflict errors as rclone retries the failed upload but -the file will most likely appear correctly eventually. 
-.PP -These values were determined empirically by observing lots of uploads of -big files for a range of file sizes. -.PP -Upload with the \[dq]-v\[dq] flag to see more info about what rclone is -doing in this situation. -.PP -Properties: -.IP \[bu] 2 -Config: upload_wait_per_gb -.IP \[bu] 2 -Env Var: RCLONE_ACD_UPLOAD_WAIT_PER_GB -.IP \[bu] 2 -Type: Duration -.IP \[bu] 2 -Default: 3m0s -.SS --acd-templink-threshold -.PP -Files >= this size will be downloaded via their tempLink. -.PP -Files this size or more will be downloaded via their \[dq]tempLink\[dq]. -This is to work around a problem with Amazon Drive which blocks -downloads of files bigger than about 10 GiB. -The default for this is 9 GiB which shouldn\[aq]t need to be changed. -.PP -To download files above this threshold, rclone requests a -\[dq]tempLink\[dq] which downloads the file through a temporary URL -directly from the underlying S3 storage. -.PP -Properties: -.IP \[bu] 2 -Config: templink_threshold -.IP \[bu] 2 -Env Var: RCLONE_ACD_TEMPLINK_THRESHOLD -.IP \[bu] 2 -Type: SizeSuffix -.IP \[bu] 2 -Default: 9Gi -.SS --acd-encoding -.PP -The encoding for the backend. -.PP -See the encoding section in the -overview (https://rclone.org/overview/#encoding) for more info. -.PP -Properties: -.IP \[bu] 2 -Config: encoding -.IP \[bu] 2 -Env Var: RCLONE_ACD_ENCODING -.IP \[bu] 2 -Type: Encoding -.IP \[bu] 2 -Default: Slash,InvalidUtf8,Dot -.SS Limitations -.PP -Note that Amazon Drive is case insensitive so you can\[aq]t have a file -called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. -.PP -Amazon Drive has rate limiting so you may notice errors in the sync (429 -errors). -rclone will automatically retry the sync up to 3 times by default (see -\f[C]--retries\f[R] flag) which should hopefully work around this -problem. -.PP -Amazon Drive has an internal limit of file sizes that can be uploaded to -the service. -This limit is not officially published, but all files larger than this -will fail. -.PP -At the time of writing (Jan 2016) is in the area of 50 GiB per file. -This means that larger files are likely to fail. -.PP -Unfortunately there is no way for rclone to see that this failure is -because of file size, so it will retry the operation, as any other -failure. -To avoid this problem, use \f[C]--max-size 50000M\f[R] option to limit -the maximum size of uploaded files. -Note that \f[C]--max-size\f[R] does not split files into segments, it -only ignores files over this size. -.PP -\f[C]rclone about\f[R] is not supported by the Amazon Drive backend. -Backends without this capability cannot determine free space for an -rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member -of an rclone union remote. -.PP -See List of backends that do not support rclone -about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) .SH Amazon S3 Storage Providers .PP The S3 backend can be used with a number of different providers: @@ -28971,7 +31146,7 @@ name> remote Type of storage to configure. Choose a number from below, or type in your own value [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Ceph, ChinaMobile, ArvanCloud, Dreamhost, IBM COS, Liara, Minio, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... 
\[rs] \[dq]s3\[dq]
[snip]
Storage> s3
@@ -29605,6 +31780,8 @@
being written to:
\f[C]PutObject\f[R]
.IP \[bu] 2
\f[C]PutObjectACL\f[R]
+.IP \[bu] 2
+\f[C]CreateBucket\f[R] (unless using s3-no-check-bucket)
.PP
When using the \f[C]lsd\f[R] subcommand, the \f[C]ListAllMyBuckets\f[R]
permission is required.
@@ -29650,6 +31827,10 @@
It assumes that \f[C]USER_NAME\f[R] has been created.
.IP "2." 3
The Resource entry must include both resource ARNs, as one implies the
bucket and the other implies the bucket\[aq]s objects.
+.IP "3." 3
+When using s3-no-check-bucket and the bucket already exists, the
+\f[C]\[dq]arn:aws:s3:::BUCKET_NAME\[dq]\f[R] doesn\[aq]t have to be
+included.
.PP
For reference, here\[aq]s an Ansible
script (https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
@@ -31037,10 +33218,10 @@
Type: string
Required: false
.SS --s3-upload-concurrency
.PP
-Concurrency for multipart uploads.
+Concurrency for multipart uploads and copies.
.PP
This is the number of chunks of the same file that are uploaded
-concurrently.
+concurrently for multipart uploads and copies.
.PP
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
@@ -31097,6 +33278,22 @@
Env Var: RCLONE_S3_V2_AUTH
Type: bool
.IP \[bu] 2
Default: false
+.SS --s3-use-dual-stack
+.PP
+If true use AWS S3 dual-stack endpoint (IPv6 support).
+.PP
+See AWS Docs on Dualstack
+Endpoints (https://docs.aws.amazon.com/AmazonS3/latest/userguide/dual-stack-endpoints.html)
+.PP
+Properties:
+.IP \[bu] 2
+Config: use_dual_stack
+.IP \[bu] 2
+Env Var: RCLONE_S3_USE_DUAL_STACK
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --s3-use-accelerate-endpoint
.PP
If true use the AWS S3 accelerated endpoint.
@@ -31448,6 +33645,27 @@
Env Var: RCLONE_S3_VERSION_AT
Type: Time
.IP \[bu] 2
Default: off
+.SS --s3-version-deleted
+.PP
+Show deleted file markers when using versions.
+.PP
+This shows deleted file markers in the listing when using versions.
+These will appear as 0 size files.
+The only operation which can be performed on them is deletion.
+.PP
+Deleting a delete marker will reveal the previous version.
+.PP
+Deleted files will always show with a timestamp.
+.PP
+Properties:
+.IP \[bu] 2
+Config: version_deleted
+.IP \[bu] 2
+Env Var: RCLONE_S3_VERSION_DELETED
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --s3-decompress
.PP
If set this will decompress gzip encoded objects.
@@ -31620,6 +33838,19 @@
Env Var: RCLONE_S3_USE_MULTIPART_UPLOADS
Type: Tristate
.IP \[bu] 2
Default: unset
+.SS --s3-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_S3_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Metadata
.PP
User metadata is stored as x-amz-meta- keys.
@@ -32424,10 +34655,10 @@
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
- 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi
+XX / Amazon S3 Compliant Storage Providers including AWS, ...
\[rs] (s3)
[snip]
-Storage> 5
+Storage> s3
Option provider.
Choose your S3 provider.
Choose a number from below, or type in your own value.
@@ -32561,18 +34792,11 @@
Select \[dq]s3\[dq] storage.
.nf \f[C] Choose a number from below, or type in your own value - 1 / Alias for an existing remote - \[rs] \[dq]alias\[dq] - 2 / Amazon Drive - \[rs] \[dq]amazon cloud drive\[dq] - 3 / Amazon S3 Complaint Storage Providers (Dreamhost, Ceph, ChinaMobile, Liara, ArvanCloud, Minio, IBM COS) - \[rs] \[dq]s3\[dq] - 4 / Backblaze B2 - \[rs] \[dq]b2\[dq] [snip] - 23 / HTTP - \[rs] \[dq]http\[dq] -Storage> 3 +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \[rs] \[dq]s3\[dq] +[snip] +Storage> s3 \f[R] .fi .IP "4." 3 @@ -32767,7 +34991,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -32884,7 +35108,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -33193,15 +35417,8 @@ Select \f[C]s3\f[R] storage. .nf \f[C] Choose a number from below, or type in your own value - 1 / 1Fichier - \[rs] (fichier) - 2 / Akamai NetStorage - \[rs] (netstorage) - 3 / Alias for an existing remote - \[rs] (alias) - 4 / Amazon Drive - \[rs] (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -33510,7 +35727,7 @@ Choose \f[C]s3\f[R] backend Type of storage to configure. Choose a number from below, or type in your own value. [snip] -XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -33849,7 +36066,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (\[dq]\[dq]). Choose a number from below, or type in your own value [snip] - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] \[dq]s3\[dq] [snip] Storage> s3 @@ -33966,7 +36183,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. ... - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Lyve Cloud, Minio, RackCorp, SeaweedFS, and Tencent COS +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) ... 
Storage> s3 @@ -34230,15 +36447,8 @@ Select \f[C]s3\f[R] storage. .nf \f[C] Choose a number from below, or type in your own value - 1 / 1Fichier - \[rs] (fichier) - 2 / Akamai NetStorage - \[rs] (netstorage) - 3 / Alias for an existing remote - \[rs] (alias) - 4 / Amazon Drive - \[rs] (amazon cloud drive) - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, Liara, Lyve Cloud, Minio, Netease, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] (s3) [snip] Storage> s3 @@ -34475,7 +36685,7 @@ Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. [snip] - X / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others +XX / Amazon S3 Compliant Storage Providers including AWS, ...Linode, ...and others \[rs] (s3) [snip] Storage> s3 @@ -34738,13 +36948,8 @@ Select \f[C]s3\f[R] storage. .nf \f[C] Choose a number from below, or type in your own value -1 / 1Fichier - \[rs] \[dq]fichier\[dq] - 2 / Alias for an existing remote - \[rs] \[dq]alias\[dq] - 3 / Amazon Drive - \[rs] \[dq]amazon cloud drive\[dq] - 4 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, ChinaMobile, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, Liara, Minio, and Tencent COS +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] \[dq]s3\[dq] [snip] Storage> s3 @@ -35192,7 +37397,7 @@ Type of storage to configure. Enter a string value. Press Enter for the default (\[dq]\[dq]). Choose a number from below, or type in your own value - 5 / Amazon S3 Compliant Storage Providers including AWS, Alibaba, Ceph, China Mobile, Cloudflare, GCS, ArvanCloud, DigitalOcean, Dreamhost, Huawei OBS, IBM COS, IDrive e2, IONOS Cloud, Liara, Lyve Cloud, Minio, Netease, Petabox, RackCorp, Scaleway, SeaweedFS, StackPath, Storj, Synology, Tencent COS, Qiniu and Wasabi +XX / Amazon S3 Compliant Storage Providers including AWS, ... \[rs] \[dq]s3\[dq] Storage> s3 @@ -35844,9 +38049,12 @@ Properties: #### --b2-download-auth-duration -Time before the authorization token will expire in s or suffix ms|s|m|h|d. +Time before the public link authorization token will expire in s or suffix ms|s|m|h|d. + +This is used in combination with \[dq]rclone link\[dq] for making files +accessible to the public and sets the duration before the download +authorization token will expire. -The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week. Properties: @@ -35922,6 +38130,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --b2-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_B2_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the b2 backend. 
@@ -36450,6 +38669,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot +#### --box-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_BOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -37119,6 +39349,17 @@ Properties: - Type: Duration - Default: 1s +#### --cache-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CACHE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the cache backend. @@ -37594,6 +39835,17 @@ Properties: - If meta format is set to \[dq]none\[dq], rename transactions will always be used. - This method is EXPERIMENTAL, don\[aq]t use on production systems. +#### --chunker-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CHUNKER_DESCRIPTION +- Type: string +- Required: false + # Citrix ShareFile @@ -37878,6 +40130,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,LeftPeriod,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --sharefile-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_SHAREFILE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -38436,6 +40699,22 @@ Properties: - Type: bool - Default: false +#### --crypt-strict-names + +If set, this will raise an error when crypt comes across a filename that can\[aq]t be decrypted. + +(By default, rclone will just log a NOTICE and continue as normal.) +This can happen if encrypted and unencrypted files are stored in the same +directory (which is not recommended.) It may also indicate a more serious +problem that should be investigated. + +Properties: + +- Config: strict_names +- Env Var: RCLONE_CRYPT_STRICT_NAMES +- Type: bool +- Default: false + #### --crypt-filename-encoding How to encode the encrypted filename to text string. @@ -38473,6 +40752,17 @@ Properties: - Type: string - Default: \[dq].bin\[dq] +#### --crypt-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_CRYPT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -38641,7 +40931,7 @@ encoding is modified in two ways: * we strip the padding character \[ga]=\[ga] \[ga]base32\[ga] is used rather than the more efficient \[ga]base64\[ga] so rclone can be -used on case insensitive remotes (e.g. Windows, Amazon Drive). +used on case insensitive remotes (e.g. Windows, Box, Dropbox, Onedrive etc). ### Key derivation @@ -38800,6 +41090,17 @@ Properties: - Type: SizeSuffix - Default: 20Mi +#### --compress-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_COMPRESS_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -38944,6 +41245,21 @@ Properties: - Type: SpaceSepList - Default: +### Advanced options + +Here are the Advanced options specific to combine (Combine several remotes into one). + +#### --combine-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_COMBINE_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. 
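+For example, a combine remote joining two upstreams might look like this
+in the config file (a minimal sketch; the remote names and paths are
+placeholders):
+
+    [combined]
+    type = combine
+    upstreams = images=s3:imagesbucket files=drive:important/files
+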
@@ -39394,6 +41710,17 @@ Properties: - Type: Duration - Default: 10m0s +#### --dropbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DROPBOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -39698,6 +42025,17 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,InvalidUtf8,Dot +#### --filefabric-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FILEFABRIC_DESCRIPTION +- Type: string +- Required: false + # FTP @@ -40124,6 +42462,17 @@ Properties: - \[dq]Ctl,LeftPeriod,Slash\[dq] - VsFTPd can\[aq]t handle file names starting with dot +#### --ftp-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_FTP_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -40842,6 +43191,17 @@ Properties: - Type: Encoding - Default: Slash,CrLf,InvalidUtf8,Dot +#### --gcs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_GCS_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -42207,10 +44567,23 @@ Properties: - \[dq]true\[dq] - Get GCP IAM credentials from the environment (env vars or IAM). +#### --drive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_DRIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata User metadata is stored in the properties field of the drive object. +Metadata is supported on files and directories. + Here are the possible system metadata items for the drive backend. | Name | Help | Type | Example | Read Only | @@ -43054,6 +45427,19 @@ T{ RCLONE_GPHOTOS_BATCH_COMMIT_TIMEOUT - Type: Duration - Default: 10m0s T} T{ +#### --gphotos-description +T} +T{ +Description of the remote +T} +T{ +Properties: +T} +T{ +- Config: description - Env Var: RCLONE_GPHOTOS_DESCRIPTION - Type: +string - Required: false +T} +T{ ## Limitations T} T{ @@ -43418,6 +45804,17 @@ Properties: - Type: SizeSuffix - Default: 0 +#### --hasher-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HASHER_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -43753,6 +46150,17 @@ Properties: - Type: Encoding - Default: Slash,Colon,Del,Ctl,InvalidUtf8,Dot +#### --hdfs-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HDFS_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -44159,6 +46567,17 @@ Properties: - Type: Encoding - Default: Slash,Dot +#### --hidrive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HIDRIVE_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -44381,6 +46800,17 @@ Properties: - Type: bool - Default: false +#### --http-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_HTTP_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the http backend. 
@@ -44626,6 +47056,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket +#### --imagekit-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_IMAGEKIT_DESCRIPTION +- Type: string +- Required: false + ### Metadata Any metadata supported by the underlying remote is read and written. @@ -44895,6 +47336,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,CrLf,Del,Ctl,InvalidUtf8,Dot +#### --internetarchive-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_INTERNETARCHIVE_DESCRIPTION +- Type: string +- Required: false + ### Metadata Metadata fields provided by Internet Archive. @@ -45339,6 +47791,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot +#### --jottacloud-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_JOTTACLOUD_DESCRIPTION +- Type: string +- Required: false + ### Metadata Jottacloud has limited support for metadata, currently an extended set of timestamps. @@ -45558,6 +48021,17 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --koofr-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_KOOFR_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -45712,6 +48186,21 @@ Properties: - Type: string - Required: true +### Advanced options + +Here are the Advanced options specific to linkbox (Linkbox). + +#### --linkbox-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_LINKBOX_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -46103,6 +48592,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --mailru-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MAILRU_DESCRIPTION +- Type: string +- Required: false + ## Limitations @@ -46372,6 +48872,17 @@ Properties: - Type: Encoding - Default: Slash,InvalidUtf8,Dot +#### --mega-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEGA_DESCRIPTION +- Type: string +- Required: false + ### Process \[ga]killed\[ga] @@ -46450,6 +48961,21 @@ The memory backend replaces the [default restricted characters set](https://rclone.org/overview/#restricted-characters). +### Advanced options + +Here are the Advanced options specific to memory (In memory object storage system.). + +#### --memory-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_MEMORY_DESCRIPTION +- Type: string +- Required: false + # Akamai NetStorage @@ -46698,6 +49224,17 @@ Properties: - \[dq]https\[dq] - HTTPS protocol +#### --netstorage-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_NETSTORAGE_DESCRIPTION +- Type: string +- Required: false + ## Backend commands Here are the commands specific to the netstorage backend. @@ -47545,6 +50082,35 @@ Properties: - Type: bool - Default: false +#### --azureblob-delete-snapshots + +Set to specify how to deal with snapshots on blob deletion. 
+ +Properties: + +- Config: delete_snapshots +- Env Var: RCLONE_AZUREBLOB_DELETE_SNAPSHOTS +- Type: string +- Required: false +- Choices: + - \[dq]\[dq] + - By default, the delete operation fails if a blob has snapshots + - \[dq]include\[dq] + - Specify \[aq]include\[aq] to remove the root blob and all its snapshots + - \[dq]only\[dq] + - Specify \[aq]only\[aq] to remove only the snapshots but keep the root blob. + +#### --azureblob-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREBLOB_DESCRIPTION +- Type: string +- Required: false + ### Custom upload headers @@ -48266,6 +50832,17 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot +#### --azurefiles-description + +Description of the remote + +Properties: + +- Config: description +- Env Var: RCLONE_AZUREFILES_DESCRIPTION +- Type: string +- Required: false + ### Custom upload headers @@ -48899,7 +51476,7 @@ Properties: If set rclone will use delta listing to implement recursive listings. -If this flag is set the the onedrive backend will advertise \[ga]ListR\[ga] +If this flag is set the onedrive backend will advertise \[ga]ListR\[ga] support for recursive listings. Setting this flag speeds up these things greatly: @@ -48932,6 +51509,30 @@ Properties: - Type: bool - Default: false +#### --onedrive-metadata-permissions + +Control whether permissions should be read or written in metadata. + +Reading permissions metadata from files can be done quickly, but it +isn\[aq]t always desirable to set the permissions from the metadata. + + +Properties: + +- Config: metadata_permissions +- Env Var: RCLONE_ONEDRIVE_METADATA_PERMISSIONS +- Type: Bits +- Default: off +- Examples: + - \[dq]off\[dq] + - Do not read or write the value + - \[dq]read\[dq] + - Read the value only + - \[dq]write\[dq] + - Write the value only + - \[dq]read,write\[dq] + - Read and Write the value. + #### --onedrive-encoding The encoding for the backend. @@ -48945,1609 +51546,2748 @@ Properties: - Type: Encoding - Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,LeftTilde,RightSpace,RightPeriod,InvalidUtf8,Dot +#### --onedrive-description - -## Limitations - -If you don\[aq]t use rclone for 90 days the refresh token will -expire. This will result in authorization problems. This is easy to -fix by running the \[ga]rclone config reconnect remote:\[ga] command to get a -new token and refresh token. - -### Naming - -Note that OneDrive is case insensitive so you can\[aq]t have a -file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. - -There are quite a few characters that can\[aq]t be in OneDrive file -names. These can\[aq]t occur on Windows platforms, but on non-Windows -platforms they are common. Rclone will map these names to and from an -identical looking unicode equivalent. For example if a file has a \[ga]?\[ga] -in it will be mapped to \[ga]\[uFF1F]\[ga] instead. - -### File sizes - -The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). - -### Path length - -The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. 
If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. - -### Number of files - -OneDrive seems to be OK with at least 50,000 files in a folder, but at -100,000 rclone will get errors listing the directory like \[ga]couldn\[cq]t -list files: UnknownError:\[ga]. See -[#2707](https://github.com/rclone/rclone/issues/2707) for more info. - -An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). - -## Versions - -Every change in a file OneDrive causes the service to create a new -version of the file. This counts against a users quota. For -example changing the modification time of a file creates a second -version, so the file apparently uses twice the space. - -For example the \[ga]copy\[ga] command is affected by this as rclone copies -the file and then afterwards sets the modification time to match the -source file which uses another version. - -You can use the \[ga]rclone cleanup\[ga] command (see below) to remove all old -versions. - -Or you can set the \[ga]no_versions\[ga] parameter to \[ga]true\[ga] and rclone will -remove versions after operations which create new versions. This takes -extra transactions so only enable it if you need it. - -**Note** At the time of writing Onedrive Personal creates versions -(but not for setting the modification time) but the API for removing -them returns \[dq]API not found\[dq] so cleanup and \[ga]no_versions\[ga] should not -be used on Onedrive Personal. - -### Disabling versioning - -Starting October 2018, users will no longer be able to -disable versioning by default. This is because Microsoft has brought -an -[update](https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) -to the mechanism. To change this new default setting, a PowerShell -command is required to be run by a SharePoint admin. If you are an -admin, you can run these commands in PowerShell to change that -setting: - -1. \[ga]Install-Module -Name Microsoft.Online.SharePoint.PowerShell\[ga] (in case you haven\[aq]t installed this already) -2. \[ga]Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking\[ga] -3. \[ga]Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU\[at]YOURSITE.COM\[ga] (replacing \[ga]YOURSITE\[ga], \[ga]YOU\[ga], \[ga]YOURSITE.COM\[ga] with the actual values; this will prompt for your credentials) -4. \[ga]Set-SPOTenant -EnableMinimumVersionRequirement $False\[ga] -5. \[ga]Disconnect-SPOService\[ga] (to disconnect from the server) - -*Below are the steps for normal users to disable versioning. If you don\[aq]t see the \[dq]No Versioning\[dq] option, make sure the above requirements are met.* - -User [Weropol](https://github.com/Weropol) has found a method to disable -versioning on OneDrive - -1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. -2. Click Site settings. -3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. -4. Click Customize \[dq]Documents\[dq]. -5. Click General Settings > Versioning Settings. -6. Under Document Version History select the option No versioning. 
-Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. -7. Apply the changes by clicking OK. -8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) -9. Restore the versioning settings after using rclone. (Optional) - -## Cleanup - -OneDrive supports \[ga]rclone cleanup\[ga] which causes rclone to look through -every file under the path supplied and delete all version but the -current version. Because this involves traversing all the files, then -querying each file for versions it can be quite slow. Rclone does -\[ga]--checkers\[ga] tests in parallel. The command also supports \[ga]--interactive\[ga]/\[ga]i\[ga] -or \[ga]--dry-run\[ga] which is a great way to see what it would do. - - rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir - rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir - -**NB** Onedrive personal can\[aq]t currently delete versions - -## Troubleshooting ## - -### Excessive throttling or blocked on SharePoint - -If you experience excessive throttling or is being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: \[ga]--user-agent \[dq]ISV|rclone.org|rclone/v1.55.1\[dq]\[ga] - -The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) - -### Unexpected file size/hash differences on Sharepoint #### - -It is a -[known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) -issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies -uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and -hash checks to fail. There are also other situations that will cause OneDrive to -report inconsistent file sizes. To use rclone with such -affected files on Sharepoint, you -may disable these checks with the following command line arguments: -\f[R] -.fi -.PP ---ignore-checksum --ignore-size -.IP -.nf -\f[C] -Alternatively, if you have write access to the OneDrive files, it may be possible -to fix this problem for certain files, by attempting the steps below. -Open the web interface for [OneDrive](https://onedrive.live.com) and find the -affected files (which will be in the error messages/log for rclone). Simply click on -each of these files, causing OneDrive to open them on the web. This will cause each -file to be converted in place to a format that is functionally equivalent -but which will no longer trigger the size discrepancy. Once all problematic files -are converted you will no longer need the ignore options above. - -### Replacing/deleting existing files on Sharepoint gets \[dq]item not found\[dq] #### - -It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue -that Sharepoint (not OneDrive or OneDrive for Business) may return \[dq]item not -found\[dq] errors when users try to replace or delete uploaded files; this seems to -mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use -the \[ga]--backup-dir \[ga] command line argument so rclone moves the -files to be replaced/deleted into a given backup directory (instead of directly -replacing/deleting them). 
For example, to instruct rclone to move the files into -the directory \[ga]rclone-backup-dir\[ga] on backend \[ga]mysharepoint\[ga], you may use: -\f[R] -.fi -.PP ---backup-dir mysharepoint:rclone-backup-dir -.IP -.nf -\f[C] -### access\[rs]_denied (AADSTS65005) #### -\f[R] -.fi -.PP -Error: access_denied Code: AADSTS65005 Description: Using application -\[aq]rclone\[aq] is currently not supported for your organization -[YOUR_ORGANIZATION] because it is in an unmanaged state. -An administrator needs to claim ownership of the company by DNS -validation of [YOUR_ORGANIZATION] before the application rclone can be -provisioned. -.IP -.nf -\f[C] -This means that rclone can\[aq]t use the OneDrive for Business API with your account. You can\[aq]t do much about it, maybe write an email to your admins. - -However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint - -### invalid\[rs]_grant (AADSTS50076) #### -\f[R] -.fi -.PP -Error: invalid_grant Code: AADSTS50076 Description: Due to a -configuration change made by your administrator, or because you moved to -a new location, you must use multi-factor authentication to access -\[aq]...\[aq]. -.IP -.nf -\f[C] -If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run \[ga]rclone config\[ga], and choose to edit your OneDrive backend. Then, you don\[aq]t need to actually make any changes until you reach this question: \[ga]Already have a token - refresh?\[ga]. For this question, answer \[ga]y\[ga] and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. - -### Invalid request when making public links #### - -On Sharepoint and OneDrive for Business, \[ga]rclone link\[ga] may return an \[dq]Invalid -request\[dq] error. A possible cause is that the organisation admin didn\[aq]t allow -public links to be made for the organisation/sharepoint library. To fix the -permissions as an admin, take a look at the docs: -[1](https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off), -[2](https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3). - -### Can not access \[ga]Shared\[ga] with me files - -Shared with me files is not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround: - -1. Visit [https://onedrive.live.com](https://onedrive.live.com/) -2. Right click a item in \[ga]Shared\[ga], then click \[ga]Add shortcut to My files\[ga] in the context - \[dq]) -3. The shortcut will appear in \[ga]My files\[ga], you can access it with rclone, it behaves like a normal folder/file. - \[dq]) - \[dq]) - -### Live Photos uploaded from iOS (small video clips in .heic files) - -The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) -of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. -The usage and download of these uploaded Live Photos is unfortunately still work-in-progress -and this introduces several issues when copying, synchronising and mounting \[en] both in rclone and in the native OneDrive client on Windows. - -The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. 
-Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. -The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive. - -The different sizes will cause \[ga]rclone copy/sync\[ga] to repeatedly recopy unmodified photos something like this: - - DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) - DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK - INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) - -These recopies can be worked around by adding \[ga]--ignore-size\[ga]. Please note that this workaround only syncs the still-picture not the movie clip, -and relies on modification dates being correctly updated on all files in all situations. - -The different sizes will also cause \[ga]rclone check\[ga] to report size errors something like this: - - ERROR : 20230203_123826234_iOS.heic: sizes differ - -These check errors can be suppressed by adding \[ga]--ignore-size\[ga]. - -The different sizes will also cause \[ga]rclone mount\[ga] to fail downloading with an error something like this: - - ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF - -or like this when using \[ga]--cache-mode=full\[ga]: - - INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - -# OpenDrive - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi -.IP "n)" 3 -New remote -.IP "o)" 3 -Delete remote -.IP "p)" 3 -Quit config e/n/d/q> n name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / -OpenDrive \ \[dq]opendrive\[dq] [snip] Storage> opendrive Username -username> Password -.IP "q)" 3 -Yes type in my own password -.IP "r)" 3 -Generate random password y/g> y Enter the password: password: Confirm -the password: password: -------------------- [remote] username = -password = *** ENCRYPTED *** -------------------- -.IP "s)" 3 -Yes this is OK -.IP "t)" 3 -Edit this remote -.IP "u)" 3 -Delete this remote y/e/d> y -.IP -.nf -\f[C] -List directories in top level of your OpenDrive - - rclone lsd remote: - -List all the files in your OpenDrive - - rclone ls remote: - -To copy a local directory to an OpenDrive directory called backup - - rclone copy /home/source remote:backup - -### Modification times and hashes - -OpenDrive allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. - -The MD5 hash algorithm is supported. - -### Restricted filename characters - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| NUL | 0x00 | \[u2400] | -| / | 0x2F | \[uFF0F] | -| \[dq] | 0x22 | \[uFF02] | -| * | 0x2A | \[uFF0A] | -| : | 0x3A | \[uFF1A] | -| < | 0x3C | \[uFF1C] | -| > | 0x3E | \[uFF1E] | -| ? 
| 0x3F | \[uFF1F] | -| \[rs] | 0x5C | \[uFF3C] | -| \[rs]| | 0x7C | \[uFF5C] | - -File names can also not begin or end with the following characters. -These only get replaced if they are the first or last character in the name: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| SP | 0x20 | \[u2420] | -| HT | 0x09 | \[u2409] | -| LF | 0x0A | \[u240A] | -| VT | 0x0B | \[u240B] | -| CR | 0x0D | \[u240D] | - - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to opendrive (OpenDrive). - -#### --opendrive-username - -Username. +Description of the remote Properties: -- Config: username -- Env Var: RCLONE_OPENDRIVE_USERNAME -- Type: string -- Required: true - -#### --opendrive-password - -Password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - -Properties: - -- Config: password -- Env Var: RCLONE_OPENDRIVE_PASSWORD -- Type: string -- Required: true - -### Advanced options - -Here are the Advanced options specific to opendrive (OpenDrive). - -#### --opendrive-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_OPENDRIVE_ENCODING -- Type: Encoding -- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot - -#### --opendrive-chunk-size - -Files will be uploaded in chunks this size. - -Note that these chunks are buffered in memory so increasing them will -increase memory use. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE -- Type: SizeSuffix -- Default: 10Mi - - - -## Limitations - -Note that OpenDrive is case insensitive so you can\[aq]t have a -file called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. - -There are quite a few characters that can\[aq]t be in OpenDrive file -names. These can\[aq]t occur on Windows platforms, but on non-Windows -platforms they are common. Rclone will map these names to and from an -identical looking unicode equivalent. For example if a file has a \[ga]?\[ga] -in it will be mapped to \[ga]\[uFF1F]\[ga] instead. - -\[ga]rclone about\[ga] is not supported by the OpenDrive backend. Backends without -this capability cannot determine free space for an rclone mount or -use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union -remote. - -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - -# Oracle Object Storage -- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) -- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) -- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) - -Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] command.) You may put subdirectories in -too, e.g. \[ga]remote:bucket/path/to/dir\[ga]. 
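Once the remote is configured (see below), these paths can be used like any other rclone path. As a minimal sketch, assuming a remote named \[ga]remote\[ga] and an existing bucket \[ga]bucket\[ga] (both hypothetical names):

    rclone ls remote:bucket/path/to/dir
    rclone copy /home/source remote:bucket/path/to/dir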
- -Sample command to transfer local artifacts to remote:bucket in Oracle Object Storage: - -\[ga]rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket\[ga] - -## Configuration - -Here is an example of making an Oracle Object Storage configuration. \[ga]rclone config\[ga] walks you -through it. - -To make a remote called \[ga]remote\[ga], first run: - - rclone config - -This will guide you through an interactive setup process: - -\f[R] -.fi -.IP "n)" 3 -New remote -.IP "d)" 3 -Delete remote -.IP "r)" 3 -Rename remote -.IP "c)" 3 -Copy remote -.IP "s)" 3 -Set configuration password -.IP "q)" 3 -Quit config e/n/d/r/c/s/q> n -.PP -Enter name for new remote. -name> remote -.PP -Option Storage. -Type of storage to configure. -Choose a number from below, or type in your own value. -[snip] XX / Oracle Cloud Infrastructure Object Storage -\ (oracleobjectstorage) Storage> oracleobjectstorage -.PP -Option provider. -Choose your Auth Provider Choose a number from below, or type in your -own string value. -Press Enter for the default (env_auth). -1 / automatically pickup the credentials from runtime(env), first one to -provide auth wins \ (env_auth) / use an OCI user and an API key for -authentication. -2 | you\[cq]ll need to put in a config file your tenancy OCID, user -OCID, region, the path, fingerprint to an API key. -| https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm -\ (user_principal_auth) / use instance principals to authorize an -instance to make API calls. -3 | each instance has its own identity, and authenticates using the -certificates that are read from instance metadata. -| -https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm -\ (instance_principal_auth) 4 / use resource principals to make API -calls \ (resource_principal_auth) 5 / no credentials needed, this is -typically for reading public buckets \ (no_auth) provider> 2 -.PP -Option namespace. -Object storage namespace Enter a value. -namespace> idbamagbg734 -.PP -Option compartment. -Object storage compartment OCID Enter a value. -compartment> -ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba -.PP -Option region. -Object storage Region Enter a value. -region> us-ashburn-1 -.PP -Option endpoint. -Endpoint for Object storage API. -Leave blank to use the default endpoint for the region. -Enter a value. -Press Enter to leave empty. -endpoint> -.PP -Option config_file. -Full Path to OCI config file Choose a number from below, or type in your -own string value. -Press Enter for the default (\[ti]/.oci/config). -1 / oci configuration file location \ (\[ti]/.oci/config) config_file> -/etc/oci/dev.conf -.PP -Option config_profile. -Profile name inside OCI config file Choose a number from below, or type -in your own string value. -Press Enter for the default (Default). -1 / Use the default profile \ (Default) config_profile> Test -.PP -Edit advanced config? -y) Yes n) No (default) y/n> n -.PP -Configuration complete.
-Options: - type: oracleobjectstorage - namespace: idbamagbg734 - -compartment: -ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba -- region: us-ashburn-1 - provider: user_principal_auth - config_file: -/etc/oci/dev.conf - config_profile: Test Keep this \[dq]remote\[dq] -remote? -y) Yes this is OK (default) e) Edit this remote d) Delete this remote -y/e/d> y -.IP -.nf -\f[C] -See all buckets - - rclone lsd remote: - -Create a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - rclone ls remote:bucket --max-depth 1 - -## Authentication Providers - -OCI has various authentication methods. To learn more about authentication methods please refer to [oci authentication -methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm). -These choices can be specified in the rclone config file. - -Rclone supports the following OCI authentication providers: - - User Principal - Instance Principal - Resource Principal - No authentication - -### User Principal - -Sample rclone config file for Authentication Provider User Principal: - - [oos] - type = oracleobjectstorage - namespace = id 34 - compartment = ocid1.compartment.oc1..aa ba - region = us-ashburn-1 - provider = user_principal_auth - config_file = /home/opc/.oci/config - config_profile = Default - -Advantages: -- One can use this method from any server within OCI or on-premises or from another cloud provider. - -Considerations: -- You need to configure the user\[cq]s privileges/policy to allow access to object storage. -- Overhead of managing users and keys. -- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user\[aq]s credentials. - -### Instance Principal - -An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal. -With this approach no credentials have to be stored and managed. - -Sample rclone configuration file for Authentication Provider Instance Principal: - - [opc\[at]rclone \[ti]]$ cat \[ti]/.config/rclone/rclone.conf - [oos] - type = oracleobjectstorage - namespace = id fn - compartment = ocid1.compartment.oc1..aa k7a - region = us-ashburn-1 - provider = instance_principal_auth - -Advantages: - -- With instance principals, you don\[aq]t need to configure user credentials and transfer/save them to disk in your compute - instances or rotate the credentials. -- You don\[cq]t need to deal with users and keys. -- Greatly helps in automation as you don\[aq]t have to manage access keys, user private keys, storing them in a vault, - using KMS, etc. - -Considerations: - -- You need to configure a dynamic group having this instance as a member and add a policy to read object storage to that - dynamic group. -- Everyone who has access to this machine can execute the CLI commands. -- It is applicable to OCI compute instances only. It cannot be used on external instances or resources. - -### Resource Principal - -Resource principal auth is very similar to instance principal auth but used for resources that are not -compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). -To use resource principals, ensure the rclone process is started with these environment variables set:
- - export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 - export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 - export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem - export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token - -Sample rclone configuration file for Authentication Provider Resource Principal: - - [oos] - type = oracleobjectstorage - namespace = id 34 - compartment = ocid1.compartment.oc1..aa ba - region = us-ashburn-1 - provider = resource_principal_auth - -### No authentication - -Public buckets do not require any authentication mechanism to read objects. -Sample rclone configuration file for No authentication: - - [oos] - type = oracleobjectstorage - namespace = id 34 - compartment = ocid1.compartment.oc1..aa ba - region = us-ashburn-1 - provider = no_auth - -### Modification times and hashes - -The modification time is stored as metadata on the object as -\[ga]opc-meta-mtime\[ga] as floating point since the epoch, accurate to 1 ns. - -If the modification time needs to be updated, rclone will attempt to perform a server -side copy to update the modification time if the object can be copied in a single part. -If the object is larger than 5 GiB, the object will be uploaded rather than copied. - -Note that reading this from the object takes an additional \[ga]HEAD\[ga] request as the metadata -isn\[aq]t returned in object listings. - -The MD5 hash algorithm is supported. - -### Multipart uploads - -rclone supports multipart uploads with OOS which means that it can -upload files bigger than 5 GiB. - -Note that files uploaded *both* with multipart upload *and* through -crypt remotes do not have MD5 sums. - -rclone switches from single part uploads to multipart uploads at the -point specified by \[ga]--oos-upload-cutoff\[ga]. This can be a maximum of 5 GiB -and a minimum of 0 (ie always upload multipart files). - -The chunk sizes used in the multipart upload are specified by -\[ga]--oos-chunk-size\[ga] and the number of chunks uploaded concurrently is -specified by \[ga]--oos-upload-concurrency\[ga]. - -Multipart uploads will use \[ga]--transfers\[ga] * \[ga]--oos-upload-concurrency\[ga] * -\[ga]--oos-chunk-size\[ga] extra memory. For example, with the default \[ga]--transfers 4\[ga], \[ga]--oos-upload-concurrency 10\[ga] -and \[ga]--oos-chunk-size 5Mi\[ga], multipart uploads can use up to 4 * 10 * 5 MiB = 200 MiB of extra memory. Single part uploads do not use extra -memory. - -Single part transfers can be faster than multipart transfers or slower -depending on your latency from OOS - the more latency, the more likely -single part transfers will be faster. - -Increasing \[ga]--oos-upload-concurrency\[ga] will increase throughput (8 would -be a sensible value) and increasing \[ga]--oos-chunk-size\[ga] also increases -throughput (16M would be sensible). Increasing either of these will -use more memory. The default values are high enough to gain most of -the possible performance without using too much memory. - - -### Standard options - -Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). - -#### --oos-provider - -Choose your Auth Provider - -Properties: - -- Config: provider -- Env Var: RCLONE_OOS_PROVIDER -- Type: string -- Default: \[dq]env_auth\[dq] -- Examples: - - \[dq]env_auth\[dq] - - automatically pickup the credentials from runtime(env), first one to provide auth wins - - \[dq]user_principal_auth\[dq] - - use an OCI user and an API key for authentication. - - you\[cq]ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
- - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - - \[dq]instance_principal_auth\[dq] - - use instance principals to authorize an instance to make API calls. - - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - - \[dq]resource_principal_auth\[dq] - - use resource principals to make API calls - - \[dq]no_auth\[dq] - - no credentials needed, this is typically for reading public buckets - -#### --oos-namespace - -Object storage namespace - -Properties: - -- Config: namespace -- Env Var: RCLONE_OOS_NAMESPACE -- Type: string -- Required: true - -#### --oos-compartment - -Object storage compartment OCID - -Properties: - -- Config: compartment -- Env Var: RCLONE_OOS_COMPARTMENT -- Provider: !no_auth -- Type: string -- Required: true - -#### --oos-region - -Object storage Region - -Properties: - -- Config: region -- Env Var: RCLONE_OOS_REGION -- Type: string -- Required: true - -#### --oos-endpoint - -Endpoint for Object storage API. - -Leave blank to use the default endpoint for the region. - -Properties: - -- Config: endpoint -- Env Var: RCLONE_OOS_ENDPOINT +- Config: description +- Env Var: RCLONE_ONEDRIVE_DESCRIPTION - Type: string - Required: false -#### --oos-config-file - -Path to OCI config file - -Properties: - -- Config: config_file -- Env Var: RCLONE_OOS_CONFIG_FILE -- Provider: user_principal_auth -- Type: string -- Default: \[dq]\[ti]/.oci/config\[dq] -- Examples: - - \[dq]\[ti]/.oci/config\[dq] - - oci configuration file location - -#### --oos-config-profile - -Profile name inside the oci config file - -Properties: - -- Config: config_profile -- Env Var: RCLONE_OOS_CONFIG_PROFILE -- Provider: user_principal_auth -- Type: string -- Default: \[dq]Default\[dq] -- Examples: - - \[dq]Default\[dq] - - Use the default profile - -### Advanced options - -Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). - -#### --oos-storage-tier - -The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm - -Properties: - -- Config: storage_tier -- Env Var: RCLONE_OOS_STORAGE_TIER -- Type: string -- Default: \[dq]Standard\[dq] -- Examples: - - \[dq]Standard\[dq] - - Standard storage tier, this is the default tier - - \[dq]InfrequentAccess\[dq] - - InfrequentAccess storage tier - - \[dq]Archive\[dq] - - Archive storage tier - -#### --oos-upload-cutoff - -Cutoff for switching to chunked upload. - -Any files larger than this will be uploaded in chunks of chunk_size. -The minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_OOS_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 200Mi - -#### --oos-chunk-size - -Chunk size to use for uploading. - -When uploading files larger than upload_cutoff or files with unknown -size (e.g. from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] they will be uploaded -as multipart uploads using this chunk size. - -Note that \[dq]upload_concurrency\[dq] chunks of this size are buffered -in memory per transfer. - -If you are transferring large files over high-speed links and you have -enough memory, then increasing this will speed up the transfers. - -Rclone will automatically increase the chunk size when uploading a -large file of known size to stay below the 10,000 chunks limit. 
- -Files of unknown size are uploaded with the configured -chunk_size. Since the default chunk size is 5 MiB and there can be at -most 10,000 chunks, this means that by default the maximum size of -a file you can stream upload is 48 GiB. If you wish to stream upload -larger files then you will need to increase chunk_size. - -Increasing the chunk size decreases the accuracy of the progress -statistics displayed with \[dq]-P\[dq] flag. - - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_OOS_CHUNK_SIZE -- Type: SizeSuffix -- Default: 5Mi - -#### --oos-max-upload-parts - -Maximum number of parts in a multipart upload. - -This option defines the maximum number of multipart chunks to use -when doing a multipart upload. - -OCI has max parts limit of 10,000 chunks. - -Rclone will automatically increase the chunk size when uploading a -large file of a known size to stay below this number of chunks limit. - - -Properties: - -- Config: max_upload_parts -- Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS -- Type: int -- Default: 10000 - -#### --oos-upload-concurrency - -Concurrency for multipart uploads. - -This is the number of chunks of the same file that are uploaded -concurrently. - -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. - -Properties: - -- Config: upload_concurrency -- Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY -- Type: int -- Default: 10 - -#### --oos-copy-cutoff - -Cutoff for switching to multipart copy. - -Any files larger than this that need to be server-side copied will be -copied in chunks of this size. - -The minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: copy_cutoff -- Env Var: RCLONE_OOS_COPY_CUTOFF -- Type: SizeSuffix -- Default: 4.656Gi - -#### --oos-copy-timeout - -Timeout for copy. - -Copy is an asynchronous operation, specify timeout to wait for copy to succeed - - -Properties: - -- Config: copy_timeout -- Env Var: RCLONE_OOS_COPY_TIMEOUT -- Type: Duration -- Default: 1m0s - -#### --oos-disable-checksum - -Don\[aq]t store MD5 checksum with object metadata. - -Normally rclone will calculate the MD5 checksum of the input before -uploading it so it can add it to metadata on the object. This is great -for data integrity checking but can cause long delays for large files -to start uploading. - -Properties: - -- Config: disable_checksum -- Env Var: RCLONE_OOS_DISABLE_CHECKSUM -- Type: bool -- Default: false - -#### --oos-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_OOS_ENCODING -- Type: Encoding -- Default: Slash,InvalidUtf8,Dot - -#### --oos-leave-parts-on-error - -If true avoid calling abort upload on a failure, leaving all successfully uploaded parts for manual recovery. - -It should be set to true for resuming uploads across different sessions. - -WARNING: Storing parts of an incomplete multipart upload counts towards space usage on object storage and will add -additional costs if not cleaned up. - - -Properties: - -- Config: leave_parts_on_error -- Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR -- Type: bool -- Default: false - -#### --oos-attempt-resume-upload - -If true attempt to resume previously started multipart upload for the object. -This will be helpful to speed up multipart transfers by resuming uploads from past session. 
- -WARNING: If the chunk size in the resumed session differs from the past incomplete session, then the resumed multipart upload is -aborted and a new multipart upload is started with the new chunk size. - -The flag leave_parts_on_error must be true to resume, and to optimize by skipping parts that were already uploaded successfully. - - -Properties: - -- Config: attempt_resume_upload -- Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD -- Type: bool -- Default: false - -#### --oos-no-check-bucket - -If set, don\[aq]t attempt to check the bucket exists or create it. - -This can be useful when trying to minimise the number of transactions -rclone does if you know the bucket exists already. - -It can also be needed if the user you are using does not have bucket -creation permissions. - - -Properties: - -- Config: no_check_bucket -- Env Var: RCLONE_OOS_NO_CHECK_BUCKET -- Type: bool -- Default: false - -#### --oos-sse-customer-key-file - -To use SSE-C, a file containing the base64-encoded string of the AES-256 encryption key associated -with the object. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. - -Properties: - -- Config: sse_customer_key_file -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-customer-key - -To use SSE-C, the optional header that specifies the base64-encoded 256-bit encryption key to use to -encrypt or decrypt the data. Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is -needed. For more information, see Using Your Own Keys for Server-Side Encryption -(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm) - -Properties: - -- Config: sse_customer_key -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-customer-key-sha256 - -If using SSE-C, the optional header that specifies the base64-encoded SHA256 hash of the encryption -key. This value is used to check the integrity of the encryption key. See Using Your Own Keys for -Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm). - -Properties: - -- Config: sse_customer_key_sha256 -- Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256 -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-kms-key-id - -If using your own master key in vault, this header specifies the -OCID (https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm) of a master encryption key used to call -the Key Management service to generate a data encryption key or to encrypt or decrypt a data encryption key. -Please note only one of sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed. - -Properties: - -- Config: sse_kms_key_id -- Env Var: RCLONE_OOS_SSE_KMS_KEY_ID -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - -#### --oos-sse-customer-algorithm - -If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as the encryption algorithm. -Object Storage supports \[dq]AES256\[dq] as the encryption algorithm. For more information, see -Using Your Own Keys for Server-Side Encryption (https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
- -Properties: - -- Config: sse_customer_algorithm -- Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - None - - \[dq]AES256\[dq] - - AES256 - -## Backend commands - -Here are the commands specific to the oracleobjectstorage backend. - -Run them with - - rclone backend COMMAND remote: - -The help below will explain what arguments each command takes. - -See the [backend](https://rclone.org/commands/rclone_backend/) command for more -info on how to pass options and arguments. - -These can be run on a running backend using the rc command -[backend/command](https://rclone.org/rc/#backend-command). - -### rename - -change the name of an object - - rclone backend rename remote: [options] [<arguments>+] - -This command can be used to rename an object. - -Usage Examples: - - rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name - - -### list-multipart-uploads - -List the unfinished multipart uploads - - rclone backend list-multipart-uploads remote: [options] [<arguments>+] - -This command lists the unfinished multipart uploads in JSON format. - - rclone backend list-multipart-uploads oos:bucket/path/to/object - -It returns a dictionary of buckets with values as lists of unfinished -multipart uploads. - -You can call it with no bucket in which case it lists all buckets, with -a bucket, or with a bucket and path. - +### Metadata + +OneDrive supports System Metadata (not User Metadata, as of this writing) for +both files and directories. Much of the metadata is read-only, and there are some +differences between OneDrive Personal and Business (see table below for +details). + +Permissions are also supported, if \[ga]--onedrive-metadata-permissions\[ga] is set. The +accepted values for \[ga]--onedrive-metadata-permissions\[ga] are \[ga]read\[ga], \[ga]write\[ga], +\[ga]read,write\[ga], and \[ga]off\[ga] (the default). \[ga]write\[ga] supports adding new permissions, +updating the \[dq]role\[dq] of existing permissions, and removing permissions. Updating +and removing require the Permission ID to be known, so it is recommended to use +\[ga]read,write\[ga] instead of \[ga]write\[ga] if you wish to update/remove permissions. + +Permissions are read/written in JSON format using the same schema as the +[OneDrive API](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online), +which differs slightly between OneDrive Personal and Business. + +Example for OneDrive Personal: +\[ga]\[ga]\[ga]json +[ { - \[dq]test-bucket\[dq]: [ - { - \[dq]namespace\[dq]: \[dq]test-namespace\[dq], - \[dq]bucket\[dq]: \[dq]test-bucket\[dq], - \[dq]object\[dq]: \[dq]600m.bin\[dq], - \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq], - \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq], - \[dq]storageTier\[dq]: \[dq]Standard\[dq] - } - ] - } - -### cleanup - -Remove unfinished multipart uploads. - - rclone backend cleanup remote: [options] [<arguments>+] - -This command removes unfinished multipart uploads of age greater than -max-age which defaults to 24 hours. - -Note that you can use --interactive/-i or --dry-run with this command to see what -it would do. - - rclone backend cleanup oos:bucket/path/to/object - rclone backend cleanup -o max-age=7w oos:bucket/path/to/object - -Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
- - -Options: - -- \[dq]max-age\[dq]: Max age of upload to delete - - - -## Tutorials -### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/) - -# QingStor - -Paths are specified as \[ga]remote:bucket\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] -command). You may put subdirectories in too, e.g. \[ga]remote:bucket/path/to/dir\[ga]. - -## Configuration - -Here is an example of making a QingStor configuration. First run: - - rclone config - -This will guide you through an interactive setup process. + \[dq]id\[dq]: \[dq]1234567890ABC!123\[dq], + \[dq]grantedTo\[dq]: { + \[dq]user\[dq]: { + \[dq]id\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + }, + \[dq]invitation\[dq]: { + \[dq]email\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]link\[dq]: { + \[dq]webUrl\[dq]: \[dq]https://1drv.ms/t/s!1234567890ABC\[dq] + }, + \[dq]roles\[dq]: [ + \[dq]read\[dq] + ], + \[dq]shareId\[dq]: \[dq]s!1234567890ABC\[dq] + } +] \f[R] .fi .PP -No remotes found, make a new one? -n) New remote r) Rename remote c) Copy remote s) Set configuration -password q) Quit config n/r/c/s/q> n name> remote Type of storage to -configure. -Choose a number from below, or type in your own value [snip] XX / -QingStor Object Storage \ \[dq]qingstor\[dq] [snip] Storage> qingstor -Get QingStor credentials from runtime. -Only applies if access_key_id and secret_access_key are blank. -Choose a number from below, or type in your own value 1 / Enter QingStor -credentials in the next step \ \[dq]false\[dq] 2 / Get QingStor -credentials from the environment (env vars or IAM) \ \[dq]true\[dq] -env_auth> 1 QingStor Access Key ID - leave blank for anonymous access or -runtime credentials. -access_key_id> access_key QingStor Secret Access Key (password) - leave -blank for anonymous access or runtime credentials. -secret_access_key> secret_key Enter an endpoint URL to connect to the -QingStor API. -Leave blank to use the default value -\[dq]https://qingstor.com:443\[dq] endpoint> Zone to connect to. -Default is \[dq]pek3a\[dq]. -Choose a number from below, or type in your own value / The Beijing -(China) Three Zone 1 | Needs location constraint pek3a. -\ \[dq]pek3a\[dq] / The Shanghai (China) First Zone 2 | Needs location -constraint sh1a. -\ \[dq]sh1a\[dq] zone> 1 Number of connection retries. -Leave blank to use the default value \[dq]3\[dq]. -connection_retries> Remote config -------------------- [remote] env_auth -= false access_key_id = access_key secret_access_key = secret_key -endpoint = zone = pek3a connection_retries = -------------------- y) Yes -this is OK e) Edit this remote d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -This remote is called \[ga]remote\[ga] and can now be used like this - -See all buckets - - rclone lsd remote: - -Make a new bucket - - rclone mkdir remote:bucket - -List the contents of a bucket - - rclone ls remote:bucket - -Sync \[ga]/home/local/directory\[ga] to the remote bucket, deleting any excess -files in the bucket. - - rclone sync --interactive /home/local/directory remote:bucket - -### --fast-list - -This remote supports \[ga]--fast-list\[ga] which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](https://rclone.org/docs/#fast-list) for more details. - -### Multipart uploads - -rclone supports multipart uploads with QingStor which means that it can -upload files bigger than 5 GiB. Note that files uploaded with multipart -upload don\[aq]t have an MD5SUM.
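Because such files have no MD5SUM, a checksum-based \[ga]rclone check\[ga] cannot verify them; a size-only comparison still can. A minimal sketch (the paths here are hypothetical):

    rclone check --size-only /home/local/directory remote:bucket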
- -Note that incomplete multipart uploads older than 24 hours can be -removed with \[ga]rclone cleanup remote:bucket\[ga] for just one bucket, or -\[ga]rclone cleanup remote:\[ga] for all buckets. QingStor does not ever -remove incomplete multipart uploads so it may be necessary to run this -from time to time. - -### Buckets and Zone - -With QingStor you can list buckets (\[ga]rclone lsd\[ga]) using any zone, -but you can only access the content of a bucket from the zone it was -created in. If you attempt to access a bucket from the wrong zone, -you will get an error, \[ga]incorrect zone, the bucket is not in \[aq]XXX\[aq] -zone\[ga]. - -### Authentication - -There are two ways to supply \[ga]rclone\[ga] with a set of QingStor -credentials. In order of precedence: - - - Directly in the rclone configuration file (as configured by \[ga]rclone config\[ga]) - - set \[ga]access_key_id\[ga] and \[ga]secret_access_key\[ga] - - Runtime configuration: - - set \[ga]env_auth\[ga] to \[ga]true\[ga] in the config file - - Exporting the following environment variables before running \[ga]rclone\[ga] - - Access Key ID: \[ga]QS_ACCESS_KEY_ID\[ga] or \[ga]QS_ACCESS_KEY\[ga] - - Secret Access Key: \[ga]QS_SECRET_ACCESS_KEY\[ga] or \[ga]QS_SECRET_KEY\[ga] - -### Restricted filename characters - -The control characters 0x00-0x1F and / are replaced as in the [default -restricted characters set](https://rclone.org/overview/#restricted-characters). Note -that 0x7F is not replaced. - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to qingstor (QingCloud Object Storage). - -#### --qingstor-env-auth - -Get QingStor credentials from runtime. - -Only applies if access_key_id and secret_access_key are blank. - -Properties: - -- Config: env_auth -- Env Var: RCLONE_QINGSTOR_ENV_AUTH -- Type: bool -- Default: false -- Examples: - - \[dq]false\[dq] - - Enter QingStor credentials in the next step. - - \[dq]true\[dq] - - Get QingStor credentials from the environment (env vars or IAM). - -#### --qingstor-access-key-id - -QingStor Access Key ID. - -Leave blank for anonymous access or runtime credentials. - -Properties: - -- Config: access_key_id -- Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID -- Type: string -- Required: false - -#### --qingstor-secret-access-key - -QingStor Secret Access Key (password). - -Leave blank for anonymous access or runtime credentials. - -Properties: - -- Config: secret_access_key -- Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY -- Type: string -- Required: false - -#### --qingstor-endpoint - -Enter an endpoint URL to connect to the QingStor API. - -Leave blank to use the default value \[dq]https://qingstor.com:443\[dq]. - -Properties: - -- Config: endpoint -- Env Var: RCLONE_QINGSTOR_ENDPOINT -- Type: string -- Required: false - -#### --qingstor-zone - -Zone to connect to. - -Default is \[dq]pek3a\[dq]. - -Properties: - -- Config: zone -- Env Var: RCLONE_QINGSTOR_ZONE -- Type: string -- Required: false -- Examples: - - \[dq]pek3a\[dq] - - The Beijing (China) Three Zone. - - Needs location constraint pek3a. - - \[dq]sh1a\[dq] - - The Shanghai (China) First Zone. - - Needs location constraint sh1a. - - \[dq]gd2a\[dq] - - The Guangdong (China) Second Zone. - - Needs location constraint gd2a. - -### Advanced options - -Here are the Advanced options specific to qingstor (QingCloud Object Storage). - -#### --qingstor-connection-retries - -Number of connection retries.
- -Properties: - -- Config: connection_retries -- Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES -- Type: int -- Default: 3 - -#### --qingstor-upload-cutoff - -Cutoff for switching to chunked upload. - -Any files larger than this will be uploaded in chunks of chunk_size. -The minimum is 0 and the maximum is 5 GiB. - -Properties: - -- Config: upload_cutoff -- Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF -- Type: SizeSuffix -- Default: 200Mi - -#### --qingstor-chunk-size - -Chunk size to use for uploading. - -When uploading files larger than upload_cutoff they will be uploaded -as multipart uploads using this chunk size. - -Note that \[dq]--qingstor-upload-concurrency\[dq] chunks of this size are buffered -in memory per transfer. - -If you are transferring large files over high-speed links and you have -enough memory, then increasing this will speed up the transfers. - -Properties: - -- Config: chunk_size -- Env Var: RCLONE_QINGSTOR_CHUNK_SIZE -- Type: SizeSuffix -- Default: 4Mi - -#### --qingstor-upload-concurrency - -Concurrency for multipart uploads. - -This is the number of chunks of the same file that are uploaded -concurrently. - -NB if you set this to > 1 then the checksums of multipart uploads -become corrupted (the uploads themselves are not corrupted though). - -If you are uploading small numbers of large files over high-speed links -and these uploads do not fully utilize your bandwidth, then increasing -this may help to speed up the transfers. - -Properties: - -- Config: upload_concurrency -- Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY -- Type: int -- Default: 1 - -#### --qingstor-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_QINGSTOR_ENCODING -- Type: Encoding -- Default: Slash,Ctl,InvalidUtf8 - - - -## Limitations - -\[ga]rclone about\[ga] is not supported by the qingstor backend. Backends without -this capability cannot determine free space for an rclone mount or -use policy \[ga]mfs\[ga] (most free space) as a member of an rclone union -remote. - -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - -# Quatrix - -Quatrix by Maytech is [Quatrix Secure Compliant File Sharing | Maytech](https://www.maytech.net/products/quatrix-business). - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g., \[ga]remote:directory/subdirectory\[ga]. - -The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user\[aq]s profile at \[ga]https:// /profile/api-keys\[ga] -or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. - -See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer - -## Configuration - -Here is an example of how to make a remote called \[ga]remote\[ga]. 
First run: - - rclone config - -This will guide you through an interactive setup process: +[ + { + \[dq]id\[dq]: \[dq]48d31887-5fad-4d73-a9f5-3c356e68a038\[dq], + \[dq]grantedToIdentities\[dq]: [ + { + \[dq]user\[dq]: { + \[dq]displayName\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + } + ], + \[dq]link\[dq]: { + \[dq]type\[dq]: \[dq]view\[dq], + \[dq]scope\[dq]: \[dq]users\[dq], + \[dq]webUrl\[dq]: \[dq]https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s\[dq] + }, + \[dq]roles\[dq]: [ + \[dq]read\[dq] + ], + \[dq]shareId\[dq]: \[dq]u!LKj1lkdlals90j1nlkascl\[dq] + }, + { + \[dq]id\[dq]: \[dq]5D33DD65C6932946\[dq], + \[dq]grantedTo\[dq]: { + \[dq]user\[dq]: { + \[dq]displayName\[dq]: \[dq]John Doe\[dq], + \[dq]id\[dq]: \[dq]efee1b77-fb3b-4f65-99d6-274c11914d12\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + }, + \[dq]roles\[dq]: [ + \[dq]owner\[dq] + ], + \[dq]shareId\[dq]: \[dq]FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U\[dq] + } +] \f[R] .fi .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / -Quatrix by Maytech \ \[dq]quatrix\[dq] [snip] Storage> quatrix API key -for accessing Quatrix account. -api_key> your_api_key Host name of Quatrix account. -host> example.quatrix.it +To write permissions, pass in a \[dq]permissions\[dq] metadata key using +this same format. +The +\f[C]--metadata-mapper\f[R] (https://rclone.org/docs/#metadata-mapper) +tool can be very helpful for this. +.PP +When adding permissions, an email address can be provided in the +\f[C]User.ID\f[R] or \f[C]DisplayName\f[R] properties of +\f[C]grantedTo\f[R] or \f[C]grantedToIdentities\f[R]. +Alternatively, an ObjectID can be provided in \f[C]User.ID\f[R]. +At least one valid recipient must be provided in order to add a +permission for a user. +Creating a Public Link is also supported, if \f[C]Link.Scope\f[R] is set +to \f[C]\[dq]anonymous\[dq]\f[R]. +.PP +Example request to add a \[dq]read\[dq] permission: +.IP +.nf +\f[C] +[ + { + \[dq]id\[dq]: \[dq]\[dq], + \[dq]grantedTo\[dq]: { + \[dq]user\[dq]: {}, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + }, + \[dq]grantedToIdentities\[dq]: [ + { + \[dq]user\[dq]: { + \[dq]id\[dq]: \[dq]ryan\[at]contoso.com\[dq] + }, + \[dq]application\[dq]: {}, + \[dq]device\[dq]: {} + } + ], + \[dq]roles\[dq]: [ + \[dq]read\[dq] + ] + } +] +\f[R] +.fi +.PP +Note that adding a permission can fail if a conflicting permission +already exists for the file/folder. +.PP +To update an existing permission, include both the Permission ID and the +new \f[C]roles\f[R] to be assigned. +\f[C]roles\f[R] is the only property that can be changed. +.PP +To remove permissions, pass in a blob containing only the permissions +you wish to keep (which can be empty, to remove all). +.PP +Note that both reading and writing permissions require extra API calls, +so if you don\[aq]t need to read or write permissions it is recommended +to omit \f[C]--onedrive-metadata-permissions\f[R]. +.PP +Metadata and permissions are supported for Folders (directories) as well +as Files. +Note that setting the \f[C]mtime\f[R] or \f[C]btime\f[R] on a Folder +requires one extra API call on OneDrive Business only. +.PP +OneDrive does not currently support User Metadata.
+When writing metadata, only writeable system properties will be written +-- any read-only or unrecognized keys passed in will be ignored. +.PP +TIP: to see the metadata and permissions for any file or folder, run: +.IP +.nf +\f[C] +rclone lsjson remote:path --stat -M --onedrive-metadata-permissions read +\f[R] +.fi +.PP +Here are the possible system metadata items for the onedrive backend. .PP .TS tab(@); -lw(20.4n). +lw(11.1n) lw(11.1n) lw(11.1n) lw(16.6n) lw(20.3n). T{ -[remote] api_key = your_api_key host = example.quatrix.it +Name +T}@T{ +Help +T}@T{ +Type +T}@T{ +Example +T}@T{ +Read Only T} _ T{ -y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y -\[ga]\[ga]\[ga] +btime +T}@T{ +Time of file birth (creation) with S accuracy (mS for OneDrive +Personal). +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05Z +T}@T{ +N T} T{ -Once configured you can then use \f[C]rclone\f[R] like this, +content-type +T}@T{ +The MIME type of the file. +T}@T{ +string +T}@T{ +text/plain +T}@T{ +\f[B]Y\f[R] T} T{ -List directories in top level of your Quatrix +created-by-display-name +T}@T{ +Display name of the user that created the item. +T}@T{ +string +T}@T{ +John Doe +T}@T{ +\f[B]Y\f[R] T} T{ +created-by-id +T}@T{ +ID of the user that created the item. +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +description +T}@T{ +A short description of the file. +Max 1024 characters. +Only supported for OneDrive Personal. +T}@T{ +string +T}@T{ +Contract for signing +T}@T{ +N +T} +T{ +id +T}@T{ +The unique identifier of the item within OneDrive. +T}@T{ +string +T}@T{ +01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K +T}@T{ +\f[B]Y\f[R] +T} +T{ +last-modified-by-display-name +T}@T{ +Display name of the user that last modified the item. +T}@T{ +string +T}@T{ +John Doe +T}@T{ +\f[B]Y\f[R] +T} +T{ +last-modified-by-id +T}@T{ +ID of the user that last modified the item. +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +malware-detected +T}@T{ +Whether OneDrive has detected that the item contains malware. +T}@T{ +boolean +T}@T{ +true +T}@T{ +\f[B]Y\f[R] +T} +T{ +mtime +T}@T{ +Time of last modification with S accuracy (mS for OneDrive Personal). +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05Z +T}@T{ +N +T} +T{ +package-type +T}@T{ +If present, indicates that this item is a package instead of a folder or +file. +Packages are treated like files in some contexts and folders in others. +T}@T{ +string +T}@T{ +oneNote +T}@T{ +\f[B]Y\f[R] +T} +T{ +permissions +T}@T{ +Permissions in a JSON dump of OneDrive format. +Enable with --onedrive-metadata-permissions. +Properties: id, grantedTo, grantedToIdentities, invitation, +inheritedFrom, link, roles, shareId +T}@T{ +JSON +T}@T{ +{} +T}@T{ +N +T} +T{ +shared-by-id +T}@T{ +ID of the user that shared the item (if shared). +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +shared-owner-id +T}@T{ +ID of the owner of the shared item (if shared). +T}@T{ +string +T}@T{ +48d31887-5fad-4d73-a9f5-3c356e68a038 +T}@T{ +\f[B]Y\f[R] +T} +T{ +shared-scope +T}@T{ +If shared, indicates the scope of how the item is shared: anonymous, +organization, or users. +T}@T{ +string +T}@T{ +users +T}@T{ +\f[B]Y\f[R] +T} +T{ +shared-time +T}@T{ +Time when the item was shared, with S accuracy (mS for OneDrive +Personal). +T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05Z +T}@T{ +\f[B]Y\f[R] +T} +T{ +utime +T}@T{ +Time of upload with S accuracy (mS for OneDrive Personal). 
+T}@T{ +RFC 3339 +T}@T{ +2006-01-02T15:04:05Z +T}@T{ +\f[B]Y\f[R] +T} +.TE +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. +.SS Limitations +.PP +If you don\[aq]t use rclone for 90 days the refresh token will expire. +This will result in authorization problems. +This is easy to fix by running the +\f[C]rclone config reconnect remote:\f[R] command to get a new token and +refresh token. +.SS Naming +.PP +Note that OneDrive is case insensitive so you can\[aq]t have a file +called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. +.PP +There are quite a few characters that can\[aq]t be in OneDrive file +names. +These can\[aq]t occur on Windows platforms, but on non-Windows platforms +they are common. +Rclone will map these names to and from an identical looking unicode +equivalent. +For example if a file has a \f[C]?\f[R] in it, it will be mapped to +\f[C]\[uFF1F]\f[R] instead. +.SS File sizes +.PP +The largest allowed file size is 250 GiB for both OneDrive Personal and +OneDrive for Business (Updated 13 Jan +2021) (https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). +.SS Path length +.PP +The entire path, including the file name, must contain fewer than 400 +characters for OneDrive, OneDrive for Business and SharePoint Online. +If you are encrypting file and folder names with rclone, you may want to +pay attention to this limitation because the encrypted names are +typically longer than the original ones. +.SS Number of files +.PP +OneDrive seems to be OK with at least 50,000 files in a folder, but at +100,000 rclone will get errors listing the directory like +\f[C]couldn\[cq]t list files: UnknownError:\f[R]. +See #2707 (https://github.com/rclone/rclone/issues/2707) for more info. +.PP +An official document about the limitations for different types of +OneDrive can be found +here (https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). +.SS Versions +.PP +Every change in a file in OneDrive causes the service to create a new +version of the file. +This counts against a user\[aq]s quota. +For example changing the modification time of a file creates a second +version, so the file apparently uses twice the space. +.PP +For example the \f[C]copy\f[R] command is affected by this as rclone +copies the file and then afterwards sets the modification time to match +the source file, which uses another version. +.PP +You can use the \f[C]rclone cleanup\f[R] command (see below) to remove +all old versions. +.PP +Or you can set the \f[C]no_versions\f[R] parameter to \f[C]true\f[R] and +rclone will remove versions after operations which create new versions. +This takes extra transactions so only enable it if you need it. +.PP +\f[B]Note\f[R] At the time of writing OneDrive Personal creates versions +(but not for setting the modification time) but the API for removing +them returns \[dq]API not found\[dq] so cleanup and +\f[C]no_versions\f[R] should not be used on OneDrive Personal. +.SS Disabling versioning +.PP +Starting October 2018, users will no longer be able to disable +versioning by default. +This is because Microsoft has brought an +update (https://techcommunity.microsoft.com/t5/Microsoft-OneDrive-Blog/New-Updates-to-OneDrive-and-SharePoint-Team-Site-Versioning/ba-p/204390) +to the mechanism.
+To change this new default setting, a PowerShell command is required to +be run by a SharePoint admin. +If you are an admin, you can run these commands in PowerShell to change +that setting: +.IP "1." 3 +\f[C]Install-Module -Name Microsoft.Online.SharePoint.PowerShell\f[R] +(in case you haven\[aq]t installed this already) +.IP "2." 3 +\f[C]Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking\f[R] +.IP "3." 3 +\f[C]Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU\[at]YOURSITE.COM\f[R] +(replacing \f[C]YOURSITE\f[R], \f[C]YOU\f[R], \f[C]YOURSITE.COM\f[R] +with the actual values; this will prompt for your credentials) +.IP "4." 3 +\f[C]Set-SPOTenant -EnableMinimumVersionRequirement $False\f[R] +.IP "5." 3 +\f[C]Disconnect-SPOService\f[R] (to disconnect from the server) +.PP +\f[I]Below are the steps for normal users to disable versioning. If you +don\[aq]t see the \[dq]No Versioning\[dq] option, make sure the above +requirements are met.\f[R] +.PP +User Weropol (https://github.com/Weropol) has found a method to disable +versioning on OneDrive: +.IP "1." 3 +Open the settings menu by clicking on the gear symbol at the top of the +OneDrive Business page. +.IP "2." 3 +Click Site settings. +.IP "3." 3 +Once on the Site settings page, navigate to Site Administration > Site +libraries and lists. +.IP "4." 3 +Click Customize \[dq]Documents\[dq]. +.IP "5." 3 +Click General Settings > Versioning Settings. +.IP "6." 3 +Under Document Version History select the option No versioning. +Note: This will disable the creation of new file versions, but will not +remove any previous versions. +Your documents are safe. +.IP "7." 3 +Apply the changes by clicking OK. +.IP "8." 3 +Use rclone to upload or modify files. +(I also use the --no-update-modtime flag) +.IP "9." 3 +Restore the versioning settings after using rclone. +(Optional) +.SS Cleanup +.PP +OneDrive supports \f[C]rclone cleanup\f[R] which causes rclone to look +through every file under the path supplied and delete all versions but +the current one. +Because this involves traversing all the files, then querying each file +for versions, it can be quite slow. +Rclone does \f[C]--checkers\f[R] tests in parallel. +The command also supports \f[C]--interactive\f[R]/\f[C]-i\f[R] or +\f[C]--dry-run\f[R] which is a great way to see what it would do.
+.IP +.nf +\f[C] +rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir +rclone cleanup remote:path/subdir # unconditionally remove all old versions for path/subdir +\f[R] +.fi +.PP +\f[B]NB\f[R] OneDrive Personal can\[aq]t currently delete versions +.SS Troubleshooting +.SS Excessive throttling or blocked on SharePoint +.PP +If you experience excessive throttling or are being blocked on SharePoint +then it may help to set the user agent explicitly with a flag like this: +\f[C]--user-agent \[dq]ISV|rclone.org|rclone/v1.55.1\[dq]\f[R] +.PP +The specific details can be found in the Microsoft document: Avoid +getting throttled or blocked in SharePoint +Online (https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) +.SS Unexpected file size/hash differences on Sharepoint +.PP +It is a +known (https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) +issue that Sharepoint (not OneDrive or OneDrive for Business) silently +modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), +causing file size and hash checks to fail. +There are also other situations that will cause OneDrive to report +inconsistent file sizes. +To use rclone with such affected files on Sharepoint, you may disable +these checks with the following command line arguments: +.IP +.nf +\f[C] +--ignore-checksum --ignore-size +\f[R] +.fi +.PP +Alternatively, if you have write access to the OneDrive files, it may be +possible to fix this problem for certain files, by attempting the steps +below. +Open the web interface for OneDrive (https://onedrive.live.com) and find +the affected files (which will be in the error messages/log for rclone). +Simply click on each of these files, causing OneDrive to open them on +the web. +This will cause each file to be converted in place to a format that is +functionally equivalent but which will no longer trigger the size +discrepancy. +Once all problematic files are converted you will no longer need the +ignore options above. +.SS Replacing/deleting existing files on Sharepoint gets \[dq]item not found\[dq] +.PP +It is a +known (https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue +that Sharepoint (not OneDrive or OneDrive for Business) may return +\[dq]item not found\[dq] errors when users try to replace or delete +uploaded files; this seems to mainly affect Office files (.docx, .xlsx, +etc.) and web files (.html, .aspx, etc.). +As a workaround, you may use the \f[C]--backup-dir \f[R] +command line argument so rclone moves the files to be replaced/deleted +into a given backup directory (instead of directly replacing/deleting +them). +For example, to instruct rclone to move the files into the directory +\f[C]rclone-backup-dir\f[R] on backend \f[C]mysharepoint\f[R], you may +use: +.IP +.nf +\f[C] +--backup-dir mysharepoint:rclone-backup-dir
\f[R] +.fi +.SS access_denied (AADSTS65005) +.IP +.nf +\f[C] +Error: access_denied +Code: AADSTS65005 +Description: Using application \[aq]rclone\[aq] is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. +\f[R] +.fi +.PP +This means that rclone can\[aq]t use the OneDrive for Business API with +your account.
+You can\[aq]t do much about it, maybe write an email to your admins. +.PP +However, there are other ways to interact with your OneDrive account. +Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint +.SS invalid_grant (AADSTS50076) +.IP +.nf +\f[C] +Error: invalid_grant +Code: AADSTS50076 +Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access \[aq]...\[aq]. +\f[R] +.fi +.PP +If you see the error above after enabling multi-factor authentication +for your account, you can fix it by refreshing your OAuth refresh token. +To do that, run \f[C]rclone config\f[R], and choose to edit your +OneDrive backend. +Then, you don\[aq]t need to actually make any changes until you reach +this question: \f[C]Already have a token - refresh?\f[R]. +For this question, answer \f[C]y\f[R] and go through the process to +refresh your token, just like the first time the backend is configured. +After this, rclone should work again for this backend. +.SS Invalid request when making public links +.PP +On Sharepoint and OneDrive for Business, \f[C]rclone link\f[R] may +return an \[dq]Invalid request\[dq] error. +A possible cause is that the organisation admin didn\[aq]t allow public +links to be made for the organisation/sharepoint library. +To fix the permissions as an admin, take a look at the docs: +1 (https://docs.microsoft.com/en-us/sharepoint/turn-external-sharing-on-or-off), +2 (https://support.microsoft.com/en-us/office/set-up-and-manage-access-requests-94b26e0b-2822-49d4-929a-8455698654b3). +.SS Cannot access \f[C]Shared\f[R] with me files +.PP +Shared with me files are not supported by rclone +currently (https://github.com/rclone/rclone/issues/4062), but there is a +workaround: +.IP "1." 3 +Visit https://onedrive.live.com (https://onedrive.live.com/) +.IP "2." 3 +Right click an item in \f[C]Shared\f[R], then click +\f[C]Add shortcut to My files\f[R] in the context +[IMAGE: make_shortcut (https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png)] +.IP "3." 3 +The shortcut will appear in \f[C]My files\f[R], you can access it with +rclone, it behaves like a normal folder/file. +[IMAGE: in_my_files (https://i.imgur.com/0S8H3li.png)] +[IMAGE: rclone_mount (https://i.imgur.com/2Iq66sW.png)] +.SS Live Photos uploaded from iOS (small video clips in .heic files) +.PP +The iOS OneDrive app introduced upload and +storage (https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) +of Live Photos (https://support.apple.com/en-gb/HT207310) in 2020. +The usage and download of these uploaded Live Photos is unfortunately +still work-in-progress and this introduces several issues when copying, +synchronising and mounting \[en] both in rclone and in the native +OneDrive client on Windows. +.PP +The root cause can easily be seen if you locate one of your Live Photos +in the OneDrive web interface. +Then download the photo from the web interface. +You will then see that the size of the downloaded .heic file is smaller than +the size displayed in the web interface. +The downloaded file is smaller because it only contains a single frame +(still photo) extracted from the Live Photo (movie) stored in OneDrive.
+.PP +The different sizes will cause \f[C]rclone copy/sync\f[R] to repeatedly +recopy unmodified photos something like this: +.IP +.nf +\f[C] +DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) +DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK +INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) +\f[R] +.fi +.PP +These recopies can be worked around by adding \f[C]--ignore-size\f[R]. +Please note that this workaround only syncs the still-picture not the +movie clip, and relies on modification dates being correctly updated on +all files in all situations. +.PP +The different sizes will also cause \f[C]rclone check\f[R] to report +size errors something like this: +.IP +.nf +\f[C] +ERROR : 20230203_123826234_iOS.heic: sizes differ +\f[R] +.fi +.PP +These check errors can be suppressed by adding \f[C]--ignore-size\f[R]. +.PP +The different sizes will also cause \f[C]rclone mount\f[R] to fail +downloading with an error something like this: +.IP +.nf +\f[C] +ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF +\f[R] +.fi +.PP +or like this when using \f[C]--cache-mode=full\f[R]: +.IP +.nf +\f[C] +INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +\f[R] +.fi +.SH OpenDrive +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP +Paths may be as deep as required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration +.PP +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +n) New remote +d) Delete remote +q) Quit config +e/n/d/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / OpenDrive + \[rs] \[dq]opendrive\[dq] +[snip] +Storage> opendrive +Username +username> +Password +y) Yes type in my own password +g) Generate random password +y/g> y +Enter the password: +password: +Confirm the password: +password: +-------------------- +[remote] +username = +password = *** ENCRYPTED *** +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +List directories in top level of your OpenDrive +.IP +.nf +\f[C] rclone lsd remote: -T} -T{ -List all the files in your Quatrix -T} -T{ +\f[R] +.fi +.PP +List all the files in your OpenDrive +.IP +.nf +\f[C] rclone ls remote: -T} -T{ -To copy a local directory to an Quatrix directory called backup -T} -T{ +\f[R] +.fi +.PP +To copy a local directory to an OpenDrive directory called backup +.IP +.nf +\f[C] rclone copy /home/source remote:backup +\f[R] +.fi +.SS Modification times and hashes +.PP +OpenDrive allows modification times to be set on objects accurate to 1 +second. +These will be used to detect whether objects need syncing or not. +.PP +The MD5 hash algorithm is supported. +.SS Restricted filename characters +.PP +.TS +tab(@); +l c c. 
+T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +NUL +T}@T{ +0x00 +T}@T{ +\[u2400] T} T{ -### API key validity +/ +T}@T{ +0x2F +T}@T{ +\[uFF0F] T} T{ +\[dq] +T}@T{ +0x22 +T}@T{ +\[uFF02] +T} +T{ +* +T}@T{ +0x2A +T}@T{ +\[uFF0A] +T} +T{ +: +T}@T{ +0x3A +T}@T{ +\[uFF1A] +T} +T{ +< +T}@T{ +0x3C +T}@T{ +\[uFF1C] +T} +T{ +> +T}@T{ +0x3E +T}@T{ +\[uFF1E] +T} +T{ +? +T}@T{ +0x3F +T}@T{ +\[uFF1F] +T} +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +T{ +| +T}@T{ +0x7C +T}@T{ +\[uFF5C] +T} +.TE +.PP +File names can also not begin or end with the following characters. +These only get replaced if they are the first or last character in the +name: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +SP +T}@T{ +0x20 +T}@T{ +\[u2420] +T} +T{ +HT +T}@T{ +0x09 +T}@T{ +\[u2409] +T} +T{ +LF +T}@T{ +0x0A +T}@T{ +\[u240A] +T} +T{ +VT +T}@T{ +0x0B +T}@T{ +\[u240B] +T} +T{ +CR +T}@T{ +0x0D +T}@T{ +\[u240D] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP +Here are the Standard options specific to opendrive (OpenDrive). +.SS --opendrive-username +.PP +Username. +.PP +Properties: +.IP \[bu] 2 +Config: username +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_USERNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --opendrive-password +.PP +Password. +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: password +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS Advanced options +.PP +Here are the Advanced options specific to opendrive (OpenDrive). +.SS --opendrive-encoding +.PP +The encoding for the backend. +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP +Properties: +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: +Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,LeftSpace,LeftCrLfHtVt,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot +.SS --opendrive-chunk-size +.PP +Files will be uploaded in chunks this size. +.PP +Note that these chunks are buffered in memory so increasing them will +increase memory use. +.PP +Properties: +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 10Mi +.SS --opendrive-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_OPENDRIVE_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +Note that OpenDrive is case insensitive so you can\[aq]t have a file +called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. +.PP +There are quite a few characters that can\[aq]t be in OpenDrive file +names. +These can\[aq]t occur on Windows platforms, but on non-Windows platforms +they are common. +Rclone will map these names to and from an identical looking unicode +equivalent. +For example if a file has a \f[C]?\f[R] in it will be mapped to +\f[C]\[uFF1F]\f[R] instead. +.PP +\f[C]rclone about\f[R] is not supported by the OpenDrive backend. +Backends without this capability cannot determine free space for an +rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member +of an rclone union remote. 
+.PP
+See List of backends that do not support rclone
+about (https://rclone.org/overview/#optional-features) and rclone
+about (https://rclone.org/commands/rclone_about/)
+.SH Oracle Object Storage
+.IP \[bu] 2
+Oracle Object Storage
+Overview (https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
+.IP \[bu] 2
+Oracle Object Storage
+FAQ (https://www.oracle.com/cloud/storage/object-storage/faq/)
+.IP \[bu] 2
+Oracle Object Storage
+Limits (https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf)
+.PP
+Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
+the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
+\f[C]remote:bucket/path/to/dir\f[R].
+.PP
+Sample command to transfer local artifacts to remote:bucket in Oracle
+Object Storage:
+.PP
+\f[C]rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket\f[R]
+.SS Configuration
+.PP
+Here is an example of making an Oracle Object Storage configuration.
+\f[C]rclone config\f[R] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Oracle Cloud Infrastructure Object Storage
+ \[rs] (oracleobjectstorage)
+Storage> oracleobjectstorage
+
+Option provider.
+Choose your Auth Provider
+Choose a number from below, or type in your own string value.
+Press Enter for the default (env_auth).
+ 1 / automatically pickup the credentials from runtime(env), first one to provide auth wins
+ \[rs] (env_auth)
+ / use an OCI user and an API key for authentication.
+ 2 | you\[cq]ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key.
+ | https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+ \[rs] (user_principal_auth)
+ / use instance principals to authorize an instance to make API calls.
+ 3 | each instance has its own identity, and authenticates using the certificates that are read from instance metadata.
+ | https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+ \[rs] (instance_principal_auth)
+ / use workload identity to grant Kubernetes pods policy-driven access to Oracle Cloud
+ 4 | Infrastructure (OCI) resources using OCI Identity and Access Management (IAM).
+ | https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm
+ \[rs] (workload_identity_auth)
+ 5 / use resource principals to make API calls
+ \[rs] (resource_principal_auth)
+ 6 / no credentials needed, this is typically for reading public buckets
+ \[rs] (no_auth)
+provider> 2
+
+Option namespace.
+Object storage namespace
+Enter a value.
+namespace> idbamagbg734
+
+Option compartment.
+Object storage compartment OCID
+Enter a value.
+compartment> ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+
+Option region.
+Object storage Region
+Enter a value.
+region> us-ashburn-1
+
+Option endpoint.
+Endpoint for Object storage API.
+Leave blank to use the default endpoint for the region.
+Enter a value. Press Enter to leave empty.
+endpoint>
+
+Option config_file.
+Full Path to OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (\[ti]/.oci/config).
+ 1 / oci configuration file location
+ \[rs] (\[ti]/.oci/config)
+config_file> /etc/oci/dev.conf
+
+Option config_profile.
+Profile name inside OCI config file
+Choose a number from below, or type in your own string value.
+Press Enter for the default (Default).
+ 1 / Use the default profile
+ \[rs] (Default)
+config_profile> Test
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: oracleobjectstorage
+- namespace: idbamagbg734
+- compartment: ocid1.compartment.oc1..aaaaaaaapufkxc7ame3sthry5i7ujrwfc7ejnthhu6bhanm5oqfjpyasjkba
+- region: us-ashburn-1
+- provider: user_principal_auth
+- config_file: /etc/oci/dev.conf
+- config_profile: Test
+Keep this \[dq]remote\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+See all buckets
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+Create a new bucket
+.IP
+.nf
+\f[C]
+rclone mkdir remote:bucket
+\f[R]
+.fi
+.PP
+List the contents of a bucket
+.IP
+.nf
+\f[C]
+rclone ls remote:bucket
+rclone ls remote:bucket --max-depth 1
+\f[R]
+.fi
+.SS Authentication Providers
+.PP
+OCI has various authentication methods.
+To learn more about authentication methods please refer to OCI
+authentication
+methods (https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm).
+These choices can be specified in the rclone config file.
+.PP
+Rclone supports the following OCI authentication providers:
+.IP
+.nf
+\f[C]
+User Principal
+Instance Principal
+Resource Principal
+Workload Identity
+No authentication
+\f[R]
+.fi
+.SS User Principal
+.PP
+Sample rclone config file for Authentication Provider User Principal:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id 34
+compartment = ocid1.compartment.oc1..aa ba
+region = us-ashburn-1
+provider = user_principal_auth
+config_file = /home/opc/.oci/config
+config_profile = Default
+\f[R]
+.fi
+.PP
+Advantages:
+.IP \[bu] 2
+One can use this method from any server within OCI, on-premises, or at
+another cloud provider.
+.PP
+Considerations:
+.IP \[bu] 2
+You need to configure the user\[cq]s privileges / policy to allow
+access to object storage.
+.IP \[bu] 2
+Overhead of managing users and keys.
+.IP \[bu] 2
+If the user is deleted, the config file will no longer work and may
+cause automation regressions that use the user\[aq]s credentials.
+.SS Instance Principal
+.PP
+An OCI compute instance can be authorized to use rclone by using its
+identity and certificates as an instance principal.
+With this approach no credentials have to be stored and managed.
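+.PP
+For the instance principal approach to work, the instance must belong
+to a dynamic group that has been granted access to Object Storage (see
+the Considerations below).
+A minimal sketch of such an IAM policy, with hypothetical group and
+compartment names:
+.IP
+.nf
+\f[C]
+Allow dynamic-group rclone-instances to manage object-family in compartment my-compartment
+\f[R]
+.fi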
+.PP
+Sample rclone configuration file for Authentication Provider Instance
+Principal:
+.IP
+.nf
+\f[C]
+[opc\[at]rclone \[ti]]$ cat \[ti]/.config/rclone/rclone.conf
+[oos]
+type = oracleobjectstorage
+namespace = id fn
+compartment = ocid1.compartment.oc1..aa k7a
+region = us-ashburn-1
+provider = instance_principal_auth
+\f[R]
+.fi
+.PP
+Advantages:
+.IP \[bu] 2
+With instance principals, you don\[aq]t need to configure user
+credentials, transfer/save them to disk in your compute instances, or
+rotate the credentials.
+.IP \[bu] 2
+You don\[cq]t need to deal with users and keys.
+.IP \[bu] 2
+Greatly helps in automation as you don\[aq]t have to manage access keys,
+user private keys, storing them in a vault, using KMS, etc.
+.PP
+Considerations:
+.IP \[bu] 2
+You need to configure a dynamic group having this instance as a member
+and add a policy allowing that dynamic group to read object storage
+(see the example policy above).
+.IP \[bu] 2
+Everyone who has access to this machine can execute the CLI commands.
+.IP \[bu] 2
+It is applicable to OCI compute instances only.
+It cannot be used on external instances or resources.
+.SS Resource Principal
+.PP
+Resource principal auth is very similar to instance principal auth but
+is used for resources that are not compute instances, such as serverless
+functions (https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
+To use resource principal auth, ensure the Rclone process is started
+with these environment variables set:
+.IP
+.nf
+\f[C]
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+\f[R]
+.fi
+.PP
+Sample rclone configuration file for Authentication Provider Resource
+Principal:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id 34
+compartment = ocid1.compartment.oc1..aa ba
+region = us-ashburn-1
+provider = resource_principal_auth
+\f[R]
+.fi
+.SS Workload Identity
+.PP
+Workload Identity auth may be used when running Rclone from a Kubernetes
+pod on a Container Engine for Kubernetes (OKE) cluster.
+For more details on configuring Workload Identity, see Granting
+Workloads Access to OCI
+Resources (https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm).
+To use workload identity, ensure Rclone is started with these
+environment variables set:
+.IP
+.nf
+\f[C]
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+\f[R]
+.fi
+.SS No authentication
+.PP
+Public buckets do not require any authentication mechanism to read
+objects.
+Sample rclone configuration file for No authentication:
+.IP
+.nf
+\f[C]
+[oos]
+type = oracleobjectstorage
+namespace = id 34
+compartment = ocid1.compartment.oc1..aa ba
+region = us-ashburn-1
+provider = no_auth
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+The modification time is stored as metadata on the object as
+\f[C]opc-meta-mtime\f[R], as a floating point number of seconds since
+the epoch, accurate to 1 ns.
+.PP
+If the modification time needs to be updated, rclone will attempt to
+perform a server side copy to update it, provided the object can be
+copied in a single part.
+If the object is larger than 5 GiB, it will be re-uploaded rather than
+copied.
+.PP
+Note that reading this from the object takes an additional
+\f[C]HEAD\f[R] request as the metadata isn\[aq]t returned in object
+listings.
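+.PP
+Because each modification time read costs an extra \f[C]HEAD\f[R]
+request, large syncs can avoid that overhead by comparing sizes only.
+A minimal sketch, assuming a remote named \f[C]oos\f[R] as in the
+examples above (the paths are hypothetical):
+.IP
+.nf
+\f[C]
+# Compare on size only, so no per-object HEAD requests are
+# needed to read opc-meta-mtime during the sync.
+rclone sync --size-only /data oos:bucket/data
+\f[R]
+.fi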
+.PP
+The MD5 hash algorithm is supported.
+.SS Multipart uploads
+.PP
+rclone supports multipart uploads with OOS which means that it can
+upload files bigger than 5 GiB.
+.PP
+Note that files uploaded \f[I]both\f[R] with multipart upload
+\f[I]and\f[R] through crypt remotes do not have MD5 sums.
+.PP
+rclone switches from single part uploads to multipart uploads at the
+point specified by \f[C]--oos-upload-cutoff\f[R].
+This can be a maximum of 5 GiB and a minimum of 0 (i.e. always upload
+files as multipart uploads).
+.PP
+The chunk sizes used in the multipart upload are specified by
+\f[C]--oos-chunk-size\f[R] and the number of chunks uploaded
+concurrently is specified by \f[C]--oos-upload-concurrency\f[R].
+.PP
+Multipart uploads will use \f[C]--transfers\f[R] *
+\f[C]--oos-upload-concurrency\f[R] * \f[C]--oos-chunk-size\f[R] extra
+memory.
+Single part uploads do not use extra memory.
+.PP
+Single part transfers can be faster than multipart transfers or slower
+depending on your latency from OOS - the more latency, the more likely
+single part transfers will be faster.
+.PP
+Increasing \f[C]--oos-upload-concurrency\f[R] will increase throughput
+(8 would be a sensible value) and increasing \f[C]--oos-chunk-size\f[R]
+also increases throughput (16M would be sensible).
+Increasing either of these will use more memory.
+The default values are high enough to gain most of the possible
+performance without using too much memory.
+.SS Standard options
+.PP
+Here are the Standard options specific to oracleobjectstorage (Oracle
+Cloud Infrastructure Object Storage).
+.SS --oos-provider
+.PP
+Choose your Auth Provider
+.PP
+Properties:
+.IP \[bu] 2
+Config: provider
+.IP \[bu] 2
+Env Var: RCLONE_OOS_PROVIDER
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]env_auth\[dq]
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]env_auth\[dq]
+.RS 2
+.IP \[bu] 2
+automatically pickup the credentials from runtime(env), first one to
+provide auth wins
+.RE
+.IP \[bu] 2
+\[dq]user_principal_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use an OCI user and an API key for authentication.
+.IP \[bu] 2
+you\[cq]ll need to put in a config file your tenancy OCID, user OCID,
+region, the path, fingerprint to an API key.
+.IP \[bu] 2
+https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm
+.RE
+.IP \[bu] 2
+\[dq]instance_principal_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use instance principals to authorize an instance to make API calls.
+.IP \[bu] 2
+each instance has its own identity, and authenticates using the
+certificates that are read from instance metadata.
+.IP \[bu] 2
+https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm
+.RE
+.IP \[bu] 2
+\[dq]workload_identity_auth\[dq]
+.RS 2
+.IP \[bu] 2
+use workload identity to grant OCI Container Engine for Kubernetes
+workloads policy-driven access to OCI resources using OCI Identity and
+Access Management (IAM).
+.IP \[bu] 2 +https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm +.RE +.IP \[bu] 2 +\[dq]resource_principal_auth\[dq] +.RS 2 +.IP \[bu] 2 +use resource principals to make API calls +.RE +.IP \[bu] 2 +\[dq]no_auth\[dq] +.RS 2 +.IP \[bu] 2 +no credentials needed, this is typically for reading public buckets +.RE +.RE +.SS --oos-namespace +.PP +Object storage namespace +.PP +Properties: +.IP \[bu] 2 +Config: namespace +.IP \[bu] 2 +Env Var: RCLONE_OOS_NAMESPACE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --oos-compartment +.PP +Object storage compartment OCID +.PP +Properties: +.IP \[bu] 2 +Config: compartment +.IP \[bu] 2 +Env Var: RCLONE_OOS_COMPARTMENT +.IP \[bu] 2 +Provider: !no_auth +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --oos-region +.PP +Object storage Region +.PP +Properties: +.IP \[bu] 2 +Config: region +.IP \[bu] 2 +Env Var: RCLONE_OOS_REGION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --oos-endpoint +.PP +Endpoint for Object storage API. +.PP +Leave blank to use the default endpoint for the region. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_OOS_ENDPOINT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --oos-config-file +.PP +Path to OCI config file +.PP +Properties: +.IP \[bu] 2 +Config: config_file +.IP \[bu] 2 +Env Var: RCLONE_OOS_CONFIG_FILE +.IP \[bu] 2 +Provider: user_principal_auth +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]\[ti]/.oci/config\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[ti]/.oci/config\[dq] +.RS 2 +.IP \[bu] 2 +oci configuration file location +.RE +.RE +.SS --oos-config-profile +.PP +Profile name inside the oci config file +.PP +Properties: +.IP \[bu] 2 +Config: config_profile +.IP \[bu] 2 +Env Var: RCLONE_OOS_CONFIG_PROFILE +.IP \[bu] 2 +Provider: user_principal_auth +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]Default\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]Default\[dq] +.RS 2 +.IP \[bu] 2 +Use the default profile +.RE +.RE +.SS Advanced options +.PP +Here are the Advanced options specific to oracleobjectstorage (Oracle +Cloud Infrastructure Object Storage). +.SS --oos-storage-tier +.PP +The storage class to use when storing new objects in storage. +https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm +.PP +Properties: +.IP \[bu] 2 +Config: storage_tier +.IP \[bu] 2 +Env Var: RCLONE_OOS_STORAGE_TIER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]Standard\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]Standard\[dq] +.RS 2 +.IP \[bu] 2 +Standard storage tier, this is the default tier +.RE +.IP \[bu] 2 +\[dq]InfrequentAccess\[dq] +.RS 2 +.IP \[bu] 2 +InfrequentAccess storage tier +.RE +.IP \[bu] 2 +\[dq]Archive\[dq] +.RS 2 +.IP \[bu] 2 +Archive storage tier +.RE +.RE +.SS --oos-upload-cutoff +.PP +Cutoff for switching to chunked upload. +.PP +Any files larger than this will be uploaded in chunks of chunk_size. +The minimum is 0 and the maximum is 5 GiB. +.PP +Properties: +.IP \[bu] 2 +Config: upload_cutoff +.IP \[bu] 2 +Env Var: RCLONE_OOS_UPLOAD_CUTOFF +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 200Mi +.SS --oos-chunk-size +.PP +Chunk size to use for uploading. +.PP +When uploading files larger than upload_cutoff or files with unknown +size (e.g. +from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] they +will be uploaded as multipart uploads using this chunk size. 
+.PP +Note that \[dq]upload_concurrency\[dq] chunks of this size are buffered +in memory per transfer. +.PP +If you are transferring large files over high-speed links and you have +enough memory, then increasing this will speed up the transfers. +.PP +Rclone will automatically increase the chunk size when uploading a large +file of known size to stay below the 10,000 chunks limit. +.PP +Files of unknown size are uploaded with the configured chunk_size. +Since the default chunk size is 5 MiB and there can be at most 10,000 +chunks, this means that by default the maximum size of a file you can +stream upload is 48 GiB. +If you wish to stream upload larger files then you will need to increase +chunk_size. +.PP +Increasing the chunk size decreases the accuracy of the progress +statistics displayed with \[dq]-P\[dq] flag. +.PP +Properties: +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_OOS_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 5Mi +.SS --oos-max-upload-parts +.PP +Maximum number of parts in a multipart upload. +.PP +This option defines the maximum number of multipart chunks to use when +doing a multipart upload. +.PP +OCI has max parts limit of 10,000 chunks. +.PP +Rclone will automatically increase the chunk size when uploading a large +file of a known size to stay below this number of chunks limit. +.PP +Properties: +.IP \[bu] 2 +Config: max_upload_parts +.IP \[bu] 2 +Env Var: RCLONE_OOS_MAX_UPLOAD_PARTS +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 10000 +.SS --oos-upload-concurrency +.PP +Concurrency for multipart uploads. +.PP +This is the number of chunks of the same file that are uploaded +concurrently. +.PP +If you are uploading small numbers of large files over high-speed links +and these uploads do not fully utilize your bandwidth, then increasing +this may help to speed up the transfers. +.PP +Properties: +.IP \[bu] 2 +Config: upload_concurrency +.IP \[bu] 2 +Env Var: RCLONE_OOS_UPLOAD_CONCURRENCY +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 10 +.SS --oos-copy-cutoff +.PP +Cutoff for switching to multipart copy. +.PP +Any files larger than this that need to be server-side copied will be +copied in chunks of this size. +.PP +The minimum is 0 and the maximum is 5 GiB. +.PP +Properties: +.IP \[bu] 2 +Config: copy_cutoff +.IP \[bu] 2 +Env Var: RCLONE_OOS_COPY_CUTOFF +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 4.656Gi +.SS --oos-copy-timeout +.PP +Timeout for copy. +.PP +Copy is an asynchronous operation, specify timeout to wait for copy to +succeed +.PP +Properties: +.IP \[bu] 2 +Config: copy_timeout +.IP \[bu] 2 +Env Var: RCLONE_OOS_COPY_TIMEOUT +.IP \[bu] 2 +Type: Duration +.IP \[bu] 2 +Default: 1m0s +.SS --oos-disable-checksum +.PP +Don\[aq]t store MD5 checksum with object metadata. +.PP +Normally rclone will calculate the MD5 checksum of the input before +uploading it so it can add it to metadata on the object. +This is great for data integrity checking but can cause long delays for +large files to start uploading. +.PP +Properties: +.IP \[bu] 2 +Config: disable_checksum +.IP \[bu] 2 +Env Var: RCLONE_OOS_DISABLE_CHECKSUM +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --oos-encoding +.PP +The encoding for the backend. +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. 
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_OOS_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,InvalidUtf8,Dot
+.SS --oos-leave-parts-on-error
+.PP
+If true, avoid calling abort upload on a failure, leaving all
+successfully uploaded parts for manual recovery.
+.PP
+It should be set to true for resuming uploads across different sessions.
+.PP
+WARNING: Storing parts of an incomplete multipart upload counts towards
+space usage on object storage and will add additional costs if not
+cleaned up.
+.PP
+Properties:
+.IP \[bu] 2
+Config: leave_parts_on_error
+.IP \[bu] 2
+Env Var: RCLONE_OOS_LEAVE_PARTS_ON_ERROR
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --oos-attempt-resume-upload
+.PP
+If true, attempt to resume a previously started multipart upload for
+the object.
+This can help speed up multipart transfers by resuming uploads from a
+past session.
+.PP
+WARNING: If the chunk size in the resumed session differs from that of
+the past incomplete session, then the resumed multipart upload is
+aborted and a new multipart upload is started with the new chunk size.
+.PP
+The flag leave_parts_on_error must be true for resuming to work, and to
+optimize by skipping parts that were already uploaded successfully.
+.PP
+Properties:
+.IP \[bu] 2
+Config: attempt_resume_upload
+.IP \[bu] 2
+Env Var: RCLONE_OOS_ATTEMPT_RESUME_UPLOAD
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --oos-no-check-bucket
+.PP
+If set, don\[aq]t attempt to check the bucket exists or create it.
+.PP
+This can be useful when trying to minimise the number of transactions
+rclone does if you know the bucket exists already.
+.PP
+It can also be needed if the user you are using does not have bucket
+creation permissions.
+.PP
+Properties:
+.IP \[bu] 2
+Config: no_check_bucket
+.IP \[bu] 2
+Env Var: RCLONE_OOS_NO_CHECK_BUCKET
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --oos-sse-customer-key-file
+.PP
+To use SSE-C, a file containing the base64-encoded string of the AES-256
+encryption key associated with the object.
+Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key_file
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_FILE
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-customer-key
+.PP
+To use SSE-C, the optional header that specifies the base64-encoded
+256-bit encryption key to use to encrypt or decrypt the data.
+Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+For more information, see Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm)
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-customer-key-sha256
+.PP
+If using SSE-C, the optional header that specifies the base64-encoded
+SHA256 hash of the encryption key.
+This value is used to check the integrity of the encryption key.
+See Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_key_sha256
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_KEY_SHA256
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-kms-key-id
+.PP
+If using your own master key in vault, this header specifies the OCID
+(https://docs.cloud.oracle.com/Content/General/Concepts/identifiers.htm)
+of a master encryption key used to call the Key Management service to
+generate a data encryption key or to encrypt or decrypt a data
+encryption key.
+Please note only one of
+sse_customer_key_file|sse_customer_key|sse_kms_key_id is needed.
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_kms_key_id
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_KMS_KEY_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.RE
+.SS --oos-sse-customer-algorithm
+.PP
+If using SSE-C, the optional header that specifies \[dq]AES256\[dq] as
+the encryption algorithm.
+Object Storage supports \[dq]AES256\[dq] as the encryption algorithm.
+For more information, see Using Your Own Keys for Server-Side Encryption
+(https://docs.cloud.oracle.com/Content/Object/Tasks/usingyourencryptionkeys.htm).
+.PP
+Properties:
+.IP \[bu] 2
+Config: sse_customer_algorithm
+.IP \[bu] 2
+Env Var: RCLONE_OOS_SSE_CUSTOMER_ALGORITHM
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]\[dq]
+.RS 2
+.IP \[bu] 2
+None
+.RE
+.IP \[bu] 2
+\[dq]AES256\[dq]
+.RS 2
+.IP \[bu] 2
+AES256
+.RE
+.RE
+.SS --oos-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_OOS_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Backend commands
+.PP
+Here are the commands specific to the oracleobjectstorage backend.
+.PP
+Run them with
+.IP
+.nf
+\f[C]
+rclone backend COMMAND remote:
+\f[R]
+.fi
+.PP
+The help below will explain what arguments each command takes.
+.PP
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
+.PP
+These can be run on a running backend using the rc command
+backend/command (https://rclone.org/rc/#backend-command).
+.SS rename
+.PP
+change the name of an object
+.IP
+.nf
+\f[C]
+rclone backend rename remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
+This command can be used to rename an object.
+.PP
+Usage Examples:
+.IP
+.nf
+\f[C]
+rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
+\f[R]
+.fi
+.SS list-multipart-uploads
+.PP
+List the unfinished multipart uploads
+.IP
+.nf
+\f[C]
+rclone backend list-multipart-uploads remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
+This command lists the unfinished multipart uploads in JSON format.
+.IP
+.nf
+\f[C]
+rclone backend list-multipart-uploads oos:bucket/path/to/object
+\f[R]
+.fi
+.PP
+It returns a dictionary of buckets with values as lists of unfinished
+multipart uploads.
+.PP
+You can call it with no bucket, in which case it lists all buckets, with
+a bucket, or with a bucket and path.
+.IP
+.nf
+\f[C]
+{
+  \[dq]test-bucket\[dq]: [
+    {
+      \[dq]namespace\[dq]: \[dq]test-namespace\[dq],
+      \[dq]bucket\[dq]: \[dq]test-bucket\[dq],
+      \[dq]object\[dq]: \[dq]600m.bin\[dq],
+      \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq],
+      \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq],
+      \[dq]storageTier\[dq]: \[dq]Standard\[dq]
+    }
+  ]
+}
+\f[R]
+.fi
+.SS cleanup
+.PP
+Remove unfinished multipart uploads.
+.IP
+.nf
+\f[C]
+rclone backend cleanup remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
+This command removes unfinished multipart uploads of age greater than
+max-age, which defaults to 24 hours.
+.PP
+Note that you can use --interactive/-i or --dry-run with this command to
+see what it would do.
+.IP
+.nf
+\f[C]
+rclone backend cleanup oos:bucket/path/to/object
+rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+\f[R]
+.fi
+.PP
+Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
+.PP
+Options:
+.IP \[bu] 2
+\[dq]max-age\[dq]: Max age of upload to delete
+.SS restore
+.PP
+Restore objects from Archive to Standard storage
+.IP
+.nf
+\f[C]
+rclone backend restore remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
+This command can be used to restore one or more objects from Archive to
+Standard storage.
+.IP
+.nf
+\f[C]
+Usage Examples:
+
+rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
+rclone backend restore oos:bucket -o hours=HOURS
+\f[R]
+.fi
+.PP
+This command also obeys the filters.
+Test first with the --interactive/-i or --dry-run flags.
+.IP
+.nf
+\f[C]
+rclone --interactive backend restore --include \[dq]*.txt\[dq] oos:bucket/path -o hours=72
+\f[R]
+.fi
+.PP
+All the objects shown will be marked for restore, then
+.IP
+.nf
+\f[C]
+rclone backend restore --include \[dq]*.txt\[dq] oos:bucket/path -o hours=72
+
+It returns a list of status dictionaries with Object Name and Status
+keys. The Status will be \[dq]RESTORED\[dq] if it was successful or an error message
+if not.
+
+[
+  {
+    \[dq]Object\[dq]: \[dq]test.txt\[dq],
+    \[dq]Status\[dq]: \[dq]RESTORED\[dq]
+  },
+  {
+    \[dq]Object\[dq]: \[dq]test/file4.txt\[dq],
+    \[dq]Status\[dq]: \[dq]RESTORED\[dq]
+  }
+]
+\f[R]
+.fi
+.PP
+Options:
+.IP \[bu] 2
+\[dq]hours\[dq]: The number of hours for which this object will be
+restored.
+Default is 24 hours.
+.SS Tutorials
+.SS Mounting Buckets (https://rclone.org/oracleobjectstorage/tutorial_mount/)
+.SH QingStor
+.PP
+Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for
+the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
+\f[C]remote:bucket/path/to/dir\f[R].
+.SS Configuration
+.PP
+Here is an example of making a QingStor configuration.
+First run:
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / QingStor Object Storage
+ \[rs] \[dq]qingstor\[dq]
+[snip]
+Storage> qingstor
+Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own value
+ 1 / Enter QingStor credentials in the next step
+ \[rs] \[dq]false\[dq]
+ 2 / Get QingStor credentials from the environment (env vars or IAM)
+ \[rs] \[dq]true\[dq]
+env_auth> 1
+QingStor Access Key ID - leave blank for anonymous access or runtime credentials.
+access_key_id> access_key
+QingStor Secret Access Key (password) - leave blank for anonymous access or runtime credentials.
+secret_access_key> secret_key
+Enter an endpoint URL to connection QingStor API.
+Leave blank will use the default value \[dq]https://qingstor.com:443\[dq]
+endpoint>
+Zone connect to. Default is \[dq]pek3a\[dq].
+Choose a number from below, or type in your own value
+ / The Beijing (China) Three Zone
+ 1 | Needs location constraint pek3a.
+ \[rs] \[dq]pek3a\[dq]
+ / The Shanghai (China) First Zone
+ 2 | Needs location constraint sh1a.
+ \[rs] \[dq]sh1a\[dq]
+zone> 1
+Number of connection retry.
+Leave blank will use the default value \[dq]3\[dq].
+connection_retries>
+Remote config
+--------------------
+[remote]
+env_auth = false
+access_key_id = access_key
+secret_access_key = secret_key
+endpoint =
+zone = pek3a
+connection_retries =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+This remote is called \f[C]remote\f[R] and can now be used like this.
+.PP
+See all buckets
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+Make a new bucket
+.IP
+.nf
+\f[C]
+rclone mkdir remote:bucket
+\f[R]
+.fi
+.PP
+List the contents of a bucket
+.IP
+.nf
+\f[C]
+rclone ls remote:bucket
+\f[R]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[R] to the remote bucket, deleting any
+excess files in the bucket.
+.IP
+.nf
+\f[C]
+rclone sync --interactive /home/local/directory remote:bucket
+\f[R]
+.fi
+.SS --fast-list
+.PP
+This remote supports \f[C]--fast-list\f[R] which allows you to use fewer
+transactions in exchange for more memory.
+See the rclone docs (https://rclone.org/docs/#fast-list) for more
+details.
+.SS Multipart uploads
+.PP
+rclone supports multipart uploads with QingStor which means that it can
+upload files bigger than 5 GiB.
+Note that files uploaded with multipart upload don\[aq]t have an MD5SUM.
+.PP
+Note that incomplete multipart uploads older than 24 hours can be
+removed with \f[C]rclone cleanup remote:bucket\f[R] for just one bucket,
+or \f[C]rclone cleanup remote:\f[R] for all buckets.
+QingStor never removes incomplete multipart uploads, so it may be
+necessary to run this from time to time.
+.SS Buckets and Zone
+.PP
+With QingStor you can list buckets (\f[C]rclone lsd\f[R]) using any
+zone, but you can only access the content of a bucket from the zone it
+was created in.
+If you attempt to access a bucket from the wrong zone, you will get an
+error:
+\f[C]incorrect zone, the bucket is not in \[aq]XXX\[aq] zone\f[R].
+.SS Authentication
+.PP
+There are two ways to supply \f[C]rclone\f[R] with a set of QingStor
+credentials.
+In order of precedence: +.IP \[bu] 2 +Directly in the rclone configuration file (as configured by +\f[C]rclone config\f[R]) +.RS 2 +.IP \[bu] 2 +set \f[C]access_key_id\f[R] and \f[C]secret_access_key\f[R] +.RE +.IP \[bu] 2 +Runtime configuration: +.RS 2 +.IP \[bu] 2 +set \f[C]env_auth\f[R] to \f[C]true\f[R] in the config file +.IP \[bu] 2 +Exporting the following environment variables before running +\f[C]rclone\f[R] +.RS 2 +.IP \[bu] 2 +Access Key ID: \f[C]QS_ACCESS_KEY_ID\f[R] or \f[C]QS_ACCESS_KEY\f[R] +.IP \[bu] 2 +Secret Access Key: \f[C]QS_SECRET_ACCESS_KEY\f[R] or +\f[C]QS_SECRET_KEY\f[R] +.RE +.RE +.SS Restricted filename characters +.PP +The control characters 0x00-0x1F and / are replaced as in the default +restricted characters +set (https://rclone.org/overview/#restricted-characters). +Note that 0x7F is not replaced. +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP +Here are the Standard options specific to qingstor (QingCloud Object +Storage). +.SS --qingstor-env-auth +.PP +Get QingStor credentials from runtime. +.PP +Only applies if access_key_id and secret_access_key is blank. +.PP +Properties: +.IP \[bu] 2 +Config: env_auth +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ENV_AUTH +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]false\[dq] +.RS 2 +.IP \[bu] 2 +Enter QingStor credentials in the next step. +.RE +.IP \[bu] 2 +\[dq]true\[dq] +.RS 2 +.IP \[bu] 2 +Get QingStor credentials from the environment (env vars or IAM). +.RE +.RE +.SS --qingstor-access-key-id +.PP +QingStor Access Key ID. +.PP +Leave blank for anonymous access or runtime credentials. +.PP +Properties: +.IP \[bu] 2 +Config: access_key_id +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ACCESS_KEY_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --qingstor-secret-access-key +.PP +QingStor Secret Access Key (password). +.PP +Leave blank for anonymous access or runtime credentials. +.PP +Properties: +.IP \[bu] 2 +Config: secret_access_key +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_SECRET_ACCESS_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --qingstor-endpoint +.PP +Enter an endpoint URL to connection QingStor API. +.PP +Leave blank will use the default value +\[dq]https://qingstor.com:443\[dq]. +.PP +Properties: +.IP \[bu] 2 +Config: endpoint +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ENDPOINT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --qingstor-zone +.PP +Zone to connect to. +.PP +Default is \[dq]pek3a\[dq]. +.PP +Properties: +.IP \[bu] 2 +Config: zone +.IP \[bu] 2 +Env Var: RCLONE_QINGSTOR_ZONE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]pek3a\[dq] +.RS 2 +.IP \[bu] 2 +The Beijing (China) Three Zone. +.IP \[bu] 2 +Needs location constraint pek3a. +.RE +.IP \[bu] 2 +\[dq]sh1a\[dq] +.RS 2 +.IP \[bu] 2 +The Shanghai (China) First Zone. +.IP \[bu] 2 +Needs location constraint sh1a. +.RE +.IP \[bu] 2 +\[dq]gd2a\[dq] +.RS 2 +.IP \[bu] 2 +The Guangdong (China) Second Zone. +.IP \[bu] 2 +Needs location constraint gd2a. +.RE +.RE +.SS Advanced options +.PP +Here are the Advanced options specific to qingstor (QingCloud Object +Storage). +.SS --qingstor-connection-retries +.PP +Number of connection retries. 
+.PP
+Properties:
+.IP \[bu] 2
+Config: connection_retries
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_CONNECTION_RETRIES
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 3
+.SS --qingstor-upload-cutoff
+.PP
+Cutoff for switching to chunked upload.
+.PP
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5 GiB.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 200Mi
+.SS --qingstor-chunk-size
+.PP
+Chunk size to use for uploading.
+.PP
+When uploading files larger than upload_cutoff they will be uploaded as
+multipart uploads using this chunk size.
+.PP
+Note that \[dq]--qingstor-upload-concurrency\[dq] chunks of this size
+are buffered in memory per transfer.
+.PP
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 4Mi
+.SS --qingstor-upload-concurrency
+.PP
+Concurrency for multipart uploads.
+.PP
+This is the number of chunks of the same file that are uploaded
+concurrently.
+.PP
+NB if you set this to > 1 then the checksums of multipart uploads become
+corrupted (the uploads themselves are not corrupted though).
+.PP
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 1
+.SS --qingstor-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,Ctl,InvalidUtf8
+.SS --qingstor-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_QINGSTOR_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.PP
+\f[C]rclone about\f[R] is not supported by the qingstor backend.
+Backends without this capability cannot determine free space for an
+rclone mount or use policy \f[C]mfs\f[R] (most free space) as a member
+of an rclone union remote.
+.PP
+See List of backends that do not support rclone
+about (https://rclone.org/overview/#optional-features) and rclone
+about (https://rclone.org/commands/rclone_about/)
+.SH Quatrix
+.PP
+Quatrix by
+Maytech (https://www.maytech.net/products/quatrix-business) is a secure,
+compliant file sharing service.
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.,
+\f[C]remote:directory/subdirectory\f[R].
+.PP
+The initial setup for Quatrix involves getting an API Key from Quatrix.
+You can get the API key in the user\[aq]s profile at
+\f[C]https://<account>/profile/api-keys\f[R] or with the help of the API
+-
+https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
+.PP
+See complete Swagger documentation for Quatrix -
+https://docs.maytech.net/quatrix/quatrix-api/api-explorer
+.SS Configuration
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / Quatrix by Maytech + \[rs] \[dq]quatrix\[dq] +[snip] +Storage> quatrix +API key for accessing Quatrix account. +api_key> your_api_key +Host name of Quatrix account. +host> example.quatrix.it + +-------------------- +[remote] +api_key = your_api_key +host = example.quatrix.it +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +Once configured you can then use \f[C]rclone\f[R] like this, +.PP +List directories in top level of your Quatrix +.IP +.nf +\f[C] +rclone lsd remote: +\f[R] +.fi +.PP +List all the files in your Quatrix +.IP +.nf +\f[C] +rclone ls remote: +\f[R] +.fi +.PP +To copy a local directory to an Quatrix directory called backup +.IP +.nf +\f[C] +rclone copy /home/source remote:backup +\f[R] +.fi +.SS API key validity +.PP API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. After disabling, the API Key can be enabled back. If the API Key was deleted and a new key was created, you can update it in rclone config. The same happens if the hostname was changed. -T} -T{ -\[ga]\[ga]\[ga] $ rclone config Current remotes: -T} -T{ -Name Type ==== ==== remote quatrix -T} -T{ -e) Edit existing remote n) New remote d) Delete remote r) Rename remote -c) Copy remote s) Set configuration password q) Quit config -e/n/d/r/c/s/q> e Choose a number from below, or type in an existing -value 1 > remote remote> remote -T} -.TE +.IP +.nf +\f[C] +$ rclone config +Current remotes: + +Name Type +==== ==== +remote quatrix + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> e +Choose a number from below, or type in an existing value + 1 > remote +remote> remote +-------------------- +[remote] +type = quatrix +host = some_host.quatrix.it +api_key = your_api_key +-------------------- +Edit remote +Option api_key. +API key for accessing Quatrix account +Enter a string value. Press Enter for the default (your_api_key) +api_key> +Option host. +Host name of Quatrix account +Enter a string value. Press Enter for the default (some_host.quatrix.it). + +-------------------- +[remote] +type = quatrix +host = some_host.quatrix.it +api_key = your_api_key +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.SS Modification times and hashes .PP -[remote] type = quatrix host = some_host.quatrix.it api_key = -your_api_key -------------------- Edit remote Option api_key. -API key for accessing Quatrix account Enter a string value. -Press Enter for the default (your_api_key) api_key> Option host. -Host name of Quatrix account Enter a string value. -Press Enter for the default (some_host.quatrix.it). -.PP -.TS -tab(@); -lw(20.4n). -T{ -[remote] type = quatrix host = some_host.quatrix.it api_key = -your_api_key -T} -_ -T{ -y) Yes this is OK e) Edit this remote d) Delete this remote y/e/d> y -\[ga]\[ga]\[ga] -T} -T{ -### Modification times and hashes -T} -T{ Quatrix allows modification times to be set on objects accurate to 1 microsecond. 
These will be used to detect whether objects need syncing or not. -T} -T{ +.PP Quatrix does not support hashes, so you cannot use the \f[C]--checksum\f[R] flag. -T} -T{ -### Restricted filename characters -T} -T{ +.SS Restricted filename characters +.PP File names in Quatrix are case sensitive and have limitations like the maximum length of a filename is 255, and the minimum length is 1. A file name cannot be equal to \f[C].\f[R] or \f[C]..\f[R] nor contain \f[C]/\f[R] , \f[C]\[rs]\f[R] or non-printable ascii. -T} -T{ -### Transfers -T} -T{ +.SS Transfers +.PP For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to \f[C]--transfers\f[R] chunks at the same time (shared among all multipart uploads). @@ -50567,150 +54307,155 @@ increase in case of high upload speed. As well as it can decrease in case of upload speed problems. If no free memory is available, all chunks will equal \f[C]minimal_chunk_size\f[R]. -T} -T{ -### Deleting files -T} -T{ +.SS Deleting files +.PP Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account. -T} -T{ -### Standard options -T} -T{ +.SS Standard options +.PP Here are the Standard options specific to quatrix (Quatrix by Maytech). -T} -T{ -#### --quatrix-api-key -T} -T{ +.SS --quatrix-api-key +.PP API key for accessing Quatrix account -T} -T{ +.PP Properties: -T} -T{ -- Config: api_key - Env Var: RCLONE_QUATRIX_API_KEY - Type: string - +.IP \[bu] 2 +Config: api_key +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_API_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 Required: true -T} -T{ -#### --quatrix-host -T} -T{ +.SS --quatrix-host +.PP Host name of Quatrix account -T} -T{ +.PP Properties: -T} -T{ -- Config: host - Env Var: RCLONE_QUATRIX_HOST - Type: string - Required: -true -T} -T{ -### Advanced options -T} -T{ +.IP \[bu] 2 +Config: host +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_HOST +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS Advanced options +.PP Here are the Advanced options specific to quatrix (Quatrix by Maytech). -T} -T{ -#### --quatrix-encoding -T} -T{ +.SS --quatrix-encoding +.PP The encoding for the backend. -T} -T{ +.PP See the encoding section in the overview (https://rclone.org/overview/#encoding) for more info. 
-T} -T{ +.PP Properties: -T} -T{ -- Config: encoding - Env Var: RCLONE_QUATRIX_ENCODING - Type: Encoding - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot -T} -T{ -#### --quatrix-effective-upload-time -T} -T{ +.SS --quatrix-effective-upload-time +.PP Wanted upload time for one chunk -T} -T{ +.PP Properties: -T} -T{ -- Config: effective_upload_time - Env Var: -RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME - Type: string - Default: -\[dq]4s\[dq] -T} -T{ -#### --quatrix-minimal-chunk-size -T} -T{ +.IP \[bu] 2 +Config: effective_upload_time +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_EFFECTIVE_UPLOAD_TIME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]4s\[dq] +.SS --quatrix-minimal-chunk-size +.PP The minimal size for one chunk -T} -T{ +.PP Properties: -T} -T{ -- Config: minimal_chunk_size - Env Var: -RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE - Type: SizeSuffix - Default: 9.537Mi -T} -T{ -#### --quatrix-maximal-summary-chunk-size -T} -T{ +.IP \[bu] 2 +Config: minimal_chunk_size +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_MINIMAL_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 9.537Mi +.SS --quatrix-maximal-summary-chunk-size +.PP The maximal summary for all chunks. It should not be less than \[aq]transfers\[aq]*\[aq]minimal_chunk_size\[aq] -T} -T{ +.PP Properties: -T} -T{ -- Config: maximal_summary_chunk_size - Env Var: -RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE - Type: SizeSuffix - Default: -95.367Mi -T} -T{ -#### --quatrix-hard-delete -T} -T{ -Delete files permanently rather than putting them into the trash. -T} -T{ +.IP \[bu] 2 +Config: maximal_summary_chunk_size +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_MAXIMAL_SUMMARY_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 95.367Mi +.SS --quatrix-hard-delete +.PP +Delete files permanently rather than putting them into the trash +.PP Properties: -T} -T{ -- Config: hard_delete - Env Var: RCLONE_QUATRIX_HARD_DELETE - Type: bool -- Default: false -T} -T{ -## Storage usage -T} -T{ +.IP \[bu] 2 +Config: hard_delete +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_HARD_DELETE +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --quatrix-skip-project-folders +.PP +Skip project folders in operations +.PP +Properties: +.IP \[bu] 2 +Config: skip_project_folders +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_SKIP_PROJECT_FOLDERS +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --quatrix-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_QUATRIX_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Storage usage +.PP The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. The account limit is applied if the user has no custom storage limit. Once you\[aq]ve reached the limit, the upload of files will fail. This can be fixed by freeing up the space or increasing the quota. -T} -T{ -## Server-side operations -T} -T{ +.SS Server-side operations +.PP Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation. -T} -T{ -# Sia -T} -T{ +.SH Sia +.PP Sia (sia.tech (https://sia.tech/)) is a decentralized cloud storage platform based on the blockchain (https://wikipedia.org/wiki/Blockchain) technology. @@ -50721,27 +54466,22 @@ Siacoins and Wallet, Blockchain and Consensus, Renting and Hosting, and so on. 
If you are new to it, you\[aq]d better first familiarize yourself using their excellent support documentation (https://support.sia.tech/). -T} -T{ -## Introduction -T} -T{ +.SS Introduction +.PP Before you can use rclone with Sia, you will need to have a running copy of \f[C]Sia-UI\f[R] or \f[C]siad\f[R] (the Sia daemon) locally on your computer or on local network (e.g. a NAS). Please follow the Get started (https://sia.tech/get-started) guide and install one. -T} -T{ +.PP rclone interacts with Sia network by talking to the Sia daemon via HTTP API (https://sia.tech/docs/) which is usually available on port \f[I]9980\f[R]. By default you will run the daemon locally on the same computer so it\[aq]s safe to leave the API password blank (the API URL will be \f[C]http://127.0.0.1:9980\f[R] making external access impossible). -T} -T{ +.PP However, if you want to access Sia daemon running on another node, for example due to memory constraints or because you want to share single daemon between several rclone and Sia-UI instances, you\[aq]ll need to @@ -50757,8 +54497,7 @@ variable \f[C]SIA_API_PASSWORD\f[R] or text file named \f[C]apipassword\f[R] in the daemon directory. - Set rclone backend option \f[C]api_password\f[R] taking it from above locations. -T} -T{ +.PP Notes: 1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command @@ -50780,2853 +54519,3879 @@ The only way to use \f[C]siad\f[R] without API password is to run it \f[B]on localhost\f[R] with command line argument \f[C]--authorize-api=false\f[R], but this is insecure and \f[B]strongly discouraged\f[R]. -T} -T{ -## Configuration -T} -T{ +.SS Configuration +.PP Here is an example of how to make a \f[C]sia\f[R] remote called \f[C]mySia\f[R]. First, run: -T} -T{ -rclone config -T} -T{ +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP This will guide you through an interactive setup process: -T} -T{ -\[ga]\[ga]\[ga] No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> mySia Type of storage to configure. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value ... -29 / Sia Decentralized Cloud \ \[dq]sia\[dq] ... -Storage> sia Sia daemon API URL, like http://sia.daemon.host:9980. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> mySia +Type of storage to configure. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value +\&... +29 / Sia Decentralized Cloud + \[rs] \[dq]sia\[dq] +\&... +Storage> sia +Sia daemon API URL, like http://sia.daemon.host:9980. +Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). +Keep default if Sia daemon runs on localhost. +Enter a string value. Press Enter for the default (\[dq]http://127.0.0.1:9980\[dq]). +api_url> http://127.0.0.1:9980 +Sia Daemon API Password. +Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g/n> y +Enter the password: +password: +Confirm the password: +password: +Edit advanced config? 
+y) Yes +n) No (default) +y/n> n +-------------------- +[mySia] +type = sia +api_url = http://127.0.0.1:9980 +api_password = *** ENCRYPTED *** +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +Once configured, you can then use \f[C]rclone\f[R] like this: +.IP \[bu] 2 +List directories in top level of your Sia storage +.IP +.nf +\f[C] +rclone lsd mySia: +\f[R] +.fi +.IP \[bu] 2 +List all the files in your Sia storage +.IP +.nf +\f[C] +rclone ls mySia: +\f[R] +.fi +.IP \[bu] 2 +Upload a local directory to the Sia directory called \f[I]backup\f[R] +.IP +.nf +\f[C] +rclone copy /home/source mySia:backup +\f[R] +.fi +.SS Standard options +.PP +Here are the Standard options specific to sia (Sia Decentralized Cloud). +.SS --sia-api-url +.PP +Sia daemon API URL, like http://sia.daemon.host:9980. +.PP Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). Keep default if Sia daemon runs on localhost. -Enter a string value. -Press Enter for the default (\[dq]http://127.0.0.1:9980\[dq]). -api_url> http://127.0.0.1:9980 Sia Daemon API Password. +.PP +Properties: +.IP \[bu] 2 +Config: api_url +.IP \[bu] 2 +Env Var: RCLONE_SIA_API_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]http://127.0.0.1:9980\[dq] +.SS --sia-api-password +.PP +Sia Daemon API Password. +.PP Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory. -y) Yes type in my own password g) Generate random password n) No leave -this optional password blank (default) y/g/n> y Enter the password: -password: Confirm the password: password: Edit advanced config? -y) Yes n) No (default) y/n> n +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: api_password +.IP \[bu] 2 +Env Var: RCLONE_SIA_API_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to sia (Sia Decentralized Cloud). +.SS --sia-user-agent +.PP +Siad User Agent +.PP +Sia daemon requires the \[aq]Sia-Agent\[aq] user agent by default for +security +.PP +Properties: +.IP \[bu] 2 +Config: user_agent +.IP \[bu] 2 +Env Var: RCLONE_SIA_USER_AGENT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]Sia-Agent\[dq] +.SS --sia-encoding +.PP +The encoding for the backend. +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP +Properties: +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_SIA_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot +.SS --sia-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_SIA_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.IP \[bu] 2 +Modification times not supported +.IP \[bu] 2 +Checksums not supported +.IP \[bu] 2 +\f[C]rclone about\f[R] not supported +.IP \[bu] 2 +rclone can work only with \f[I]Siad\f[R] or \f[I]Sia-UI\f[R] at the +moment, the \f[B]SkyNet daemon is not supported yet.\f[R] +.IP \[bu] 2 +Sia does not allow control characters or symbols like question and pound +signs in file names. 
+rclone will transparently encode (https://rclone.org/overview/#encoding)
+them for you, but you\[aq]d better be aware of this.
+.SH Swift
+.PP
+Swift refers to OpenStack Object
+Storage (https://docs.openstack.org/swift/latest/).
+Commercial implementations of that being:
+.IP \[bu] 2
+Rackspace Cloud Files (https://www.rackspace.com/cloud/files/)
+.IP \[bu] 2
+Memset Memstore (https://www.memset.com/cloud/storage/)
+.IP \[bu] 2
+OVH Object
+Storage (https://www.ovh.co.uk/public-cloud/storage/object-storage/)
+.IP \[bu] 2
+Oracle Cloud
+Storage (https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)
+.IP \[bu] 2
+Blomp Cloud Storage (https://www.blomp.com/cloud-storage/)
+.IP \[bu] 2
+IBM Bluemix Cloud ObjectStorage
+Swift (https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
+.PP
+Paths are specified as \f[C]remote:container\f[R] (or \f[C]remote:\f[R]
+for the \f[C]lsd\f[R] command.) You may put subdirectories in too, e.g.
+\f[C]remote:container/path/to/dir\f[R].
+.SS Configuration
+.PP
+Here is an example of making a swift configuration.
+First run:
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process.
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)
+ \[rs] \[dq]swift\[dq]
+[snip]
+Storage> swift
+Get swift credentials from environment variables in standard OpenStack form.
+Choose a number from below, or type in your own value
+ 1 / Enter swift credentials in the next step
+ \[rs] \[dq]false\[dq]
+ 2 / Get swift credentials from environment vars. Leave other fields blank if using this.
+ \[rs] \[dq]true\[dq]
+env_auth> true
+User name to log in (OS_USERNAME).
+user>
+API key or password (OS_PASSWORD).
+key>
+Authentication URL for server (OS_AUTH_URL).
+Choose a number from below, or type in your own value
+ 1 / Rackspace US
+ \[rs] \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq]
+ 2 / Rackspace UK
+ \[rs] \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq]
+ 3 / Rackspace v2
+ \[rs] \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq]
+ 4 / Memset Memstore UK
+ \[rs] \[dq]https://auth.storage.memset.com/v1.0\[dq]
+ 5 / Memset Memstore UK v2
+ \[rs] \[dq]https://auth.storage.memset.com/v2.0\[dq]
+ 6 / OVH
+ \[rs] \[dq]https://auth.cloud.ovh.net/v3\[dq]
+ 7 / Blomp Cloud Storage
+ \[rs] \[dq]https://authenticate.ain.net\[dq]
+auth>
+User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+user_id>
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+domain>
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+tenant>
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+tenant_id>
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+tenant_domain>
+Region name - optional (OS_REGION_NAME)
+region>
+Storage URL - optional (OS_STORAGE_URL)
+storage_url>
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+auth_token>
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+auth_version>
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
+Choose a number from below, or type in your own value
+ 1 / Public (default, choose this if not sure)
+ \[rs] \[dq]public\[dq]
+ 2 / Internal (use internal service net)
+ \[rs] \[dq]internal\[dq]
+ 3 / Admin
+ \[rs] \[dq]admin\[dq]
+endpoint_type>
+Remote config
+--------------------
+[test]
+env_auth = true
+user =
+key =
+auth =
+user_id =
+domain =
+tenant =
+tenant_id =
+tenant_domain =
+region =
+storage_url =
+auth_token =
+auth_version =
+endpoint_type =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+This remote is called \f[C]remote\f[R] and can now be used like this
+.PP
+See all containers
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+Make a new container
+.IP
+.nf
+\f[C]
+rclone mkdir remote:container
+\f[R]
+.fi
+.PP
+List the contents of a container
+.IP
+.nf
+\f[C]
+rclone ls remote:container
+\f[R]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[R] to the remote container, deleting
+any excess files in the container.
+.IP
+.nf
+\f[C]
+rclone sync --interactive /home/local/directory remote:container
+\f[R]
+.fi
+.SS Configuration from an OpenStack credentials file
+.PP
+An OpenStack credentials file typically looks something like this
+(without the comments)
+.IP
+.nf
+\f[C]
+export OS_AUTH_URL=https://a.provider.net/v2.0
+export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export OS_TENANT_NAME=\[dq]1234567890123456\[dq]
+export OS_USERNAME=\[dq]123abc567xy\[dq]
+echo \[dq]Please enter your OpenStack Password: \[dq]
+read -sr OS_PASSWORD_INPUT
+export OS_PASSWORD=$OS_PASSWORD_INPUT
+export OS_REGION_NAME=\[dq]SBG1\[dq]
+if [ -z \[dq]$OS_REGION_NAME\[dq] ]; then unset OS_REGION_NAME; fi
+\f[R]
+.fi
+.PP
+The config file needs to look something like this where
+\f[C]$OS_USERNAME\f[R] represents the value of the \f[C]OS_USERNAME\f[R]
+variable - \f[C]123abc567xy\f[R] in the example above.
+.IP
+.nf
+\f[C]
+[remote]
+type = swift
+user = $OS_USERNAME
+key = $OS_PASSWORD
+auth = $OS_AUTH_URL
+tenant = $OS_TENANT_NAME
+\f[R]
+.fi
+.PP
+Note that you may (or may not) need to set \f[C]region\f[R] too - try
+without first.
+.SS Configuration from the environment
+.PP
+If you prefer, you can configure rclone to use swift using a standard
+set of OpenStack environment variables.
+.PP
+When you run through the config, make sure you choose \f[C]true\f[R] for
+\f[C]env_auth\f[R] and leave everything else blank.
+.PP
+rclone will then set any empty config parameters from the environment
+using standard OpenStack environment variables.
+There is a list of the
+variables (https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
+in the docs for the swift library.
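+.PP
+As a minimal sketch of this, assuming a remote of type \f[C]swift\f[R]
+named \f[C]envswift\f[R] with \f[C]env_auth = true\f[R] set and
+everything else left blank (the variable values below are placeholders):
+.IP
+.nf
+\f[C]
+export OS_AUTH_URL=https://auth.example.com/v3
+export OS_USERNAME=myuser
+export OS_PASSWORD=mypassword
+export OS_USER_DOMAIN_NAME=Default
+export OS_PROJECT_NAME=myproject
+rclone lsd envswift:
+\f[R]
+.fi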
+.SS Using an alternate authentication method +.PP +If your OpenStack installation uses a non-standard authentication method +that might not be yet supported by rclone or the underlying swift +library, you can authenticate externally (e.g. +calling manually the \f[C]openstack\f[R] commands to get a token). +Then, you just need to pass the two configuration variables +\f[C]auth_token\f[R] and \f[C]storage_url\f[R]. +If they are both provided, the other variables are ignored. +rclone will not try to authenticate but instead assume it is already +authenticated and use these two variables to access the OpenStack +installation. +.SS Using rclone without a config file +.PP +You can use rclone with swift without a config file, if desired, like +this: +.IP +.nf +\f[C] +source openstack-credentials-file +export RCLONE_CONFIG_MYREMOTE_TYPE=swift +export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true +rclone lsd myremote: +\f[R] +.fi +.SS --fast-list +.PP +This remote supports \f[C]--fast-list\f[R] which allows you to use fewer +transactions in exchange for more memory. +See the rclone docs (https://rclone.org/docs/#fast-list) for more +details. +.SS --update and --use-server-modtime +.PP +As noted below, the modified time is stored on metadata on the object. +It is used by default for all operations that require checking the time +a file was last updated. +It allows rclone to treat the remote more like a true filesystem, but it +is inefficient because it requires an extra API call to retrieve the +metadata. +.PP +For many operations, the time the object was last uploaded to the remote +is sufficient to determine if it is \[dq]dirty\[dq]. +By using \f[C]--update\f[R] along with \f[C]--use-server-modtime\f[R], +you can avoid the extra API call and simply upload files whose local +modtime is newer than the time it was last uploaded. +.SS Modification times and hashes +.PP +The modified time is stored as metadata on the object as +\f[C]X-Object-Meta-Mtime\f[R] as floating point since the epoch accurate +to 1 ns. +.PP +This is a de facto standard (used in the official python-swiftclient +amongst others) for storing the modification time for an object. +.PP +The MD5 hash algorithm is supported. +.SS Restricted filename characters +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +NUL +T}@T{ +0x00 +T}@T{ +\[u2400] +T} +T{ +/ +T}@T{ +0x2F +T}@T{ +\[uFF0F] T} .TE .PP -[mySia] type = sia api_url = http://127.0.0.1:9980 api_password = *** -ENCRYPTED *** -------------------- y) Yes this is OK (default) e) Edit -this remote d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -Once configured, you can then use \[ga]rclone\[ga] like this: - -- List directories in top level of your Sia storage -\f[R] -.fi +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options .PP -rclone lsd mySia: -.IP -.nf -\f[C] -- List all the files in your Sia storage -\f[R] -.fi +Here are the Standard options specific to swift (OpenStack Swift +(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). +.SS --swift-env-auth .PP -rclone ls mySia: -.IP -.nf -\f[C] -- Upload a local directory to the Sia directory called _backup_ -\f[R] -.fi +Get swift credentials from environment variables in standard OpenStack +form. .PP -rclone copy /home/source mySia:backup -.IP -.nf -\f[C] - -### Standard options - -Here are the Standard options specific to sia (Sia Decentralized Cloud). 
- -#### --sia-api-url - -Sia daemon API URL, like http://sia.daemon.host:9980. - -Note that siad must run with --disable-api-security to open API port for other hosts (not recommended). -Keep default if Sia daemon runs on localhost. - Properties: - -- Config: api_url -- Env Var: RCLONE_SIA_API_URL -- Type: string -- Default: \[dq]http://127.0.0.1:9980\[dq] - -#### --sia-api-password - -Sia Daemon API Password. - -Can be found in the apipassword file located in HOME/.sia/ or in the daemon directory. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - -Properties: - -- Config: api_password -- Env Var: RCLONE_SIA_API_PASSWORD -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to sia (Sia Decentralized Cloud). - -#### --sia-user-agent - -Siad User Agent - -Sia daemon requires the \[aq]Sia-Agent\[aq] user agent by default for security - -Properties: - -- Config: user_agent -- Env Var: RCLONE_SIA_USER_AGENT -- Type: string -- Default: \[dq]Sia-Agent\[dq] - -#### --sia-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_SIA_ENCODING -- Type: Encoding -- Default: Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot - - - -## Limitations - -- Modification times not supported -- Checksums not supported -- \[ga]rclone about\[ga] not supported -- rclone can work only with _Siad_ or _Sia-UI_ at the moment, - the **SkyNet daemon is not supported yet.** -- Sia does not allow control characters or symbols like question and pound - signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding) - them for you, but you\[aq]d better be aware - -# Swift - -Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). -Commercial implementations of that being: - - * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) - * [Memset Memstore](https://www.memset.com/cloud/storage/) - * [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/) - * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) - * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/) - * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) - -Paths are specified as \[ga]remote:container\[ga] (or \[ga]remote:\[ga] for the \[ga]lsd\[ga] -command.) You may put subdirectories in too, e.g. \[ga]remote:container/path/to/dir\[ga]. - -## Configuration - -Here is an example of making a swift configuration. First run - - rclone config - -This will guide you through an interactive setup process. -\f[R] -.fi -.PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / -OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset -Memstore, OVH) \ \[dq]swift\[dq] [snip] Storage> swift Get swift -credentials from environment variables in standard OpenStack form. -Choose a number from below, or type in your own value 1 / Enter swift -credentials in the next step \ \[dq]false\[dq] 2 / Get swift credentials -from environment vars. 
+.IP \[bu] 2 +Config: env_auth +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_ENV_AUTH +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]false\[dq] +.RS 2 +.IP \[bu] 2 +Enter swift credentials in the next step. +.RE +.IP \[bu] 2 +\[dq]true\[dq] +.RS 2 +.IP \[bu] 2 +Get swift credentials from environment vars. +.IP \[bu] 2 Leave other fields blank if using this. -\ \[dq]true\[dq] env_auth> true User name to log in (OS_USERNAME). -user> API key or password (OS_PASSWORD). -key> Authentication URL for server (OS_AUTH_URL). -Choose a number from below, or type in your own value 1 / Rackspace US -\ \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] 2 / Rackspace UK -\ \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] 3 / Rackspace -v2 \ \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] 4 / Memset -Memstore UK \ \[dq]https://auth.storage.memset.com/v1.0\[dq] 5 / Memset -Memstore UK v2 \ \[dq]https://auth.storage.memset.com/v2.0\[dq] 6 / OVH -\ \[dq]https://auth.cloud.ovh.net/v3\[dq] 7 / Blomp Cloud Storage -\ \[dq]https://authenticate.ain.net\[dq] auth> User ID to log in - -optional - most swift systems use user and leave this blank (v3 auth) -(OS_USER_ID). -user_id> User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) domain> -Tenant name - optional for v1 auth, this or tenant_id required otherwise -(OS_TENANT_NAME or OS_PROJECT_NAME) tenant> Tenant ID - optional for v1 -auth, this or tenant required otherwise (OS_TENANT_ID) tenant_id> Tenant -domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME) tenant_domain> -Region name - optional (OS_REGION_NAME) region> Storage URL - optional -(OS_STORAGE_URL) storage_url> Auth Token from alternate authentication - -optional (OS_AUTH_TOKEN) auth_token> AuthVersion - optional - set to -(1,2,3) if your auth URL has no version (ST_AUTH_VERSION) auth_version> -Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE) -Choose a number from below, or type in your own value 1 / Public -(default, choose this if not sure) \ \[dq]public\[dq] 2 / Internal (use -internal service net) \ \[dq]internal\[dq] 3 / Admin \ \[dq]admin\[dq] -endpoint_type> Remote config -------------------- [test] env_auth = true -user = key = auth = user_id = domain = tenant = tenant_id = -tenant_domain = region = storage_url = auth_token = auth_version = -endpoint_type = -------------------- y) Yes this is OK e) Edit this -remote d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -This remote is called \[ga]remote\[ga] and can now be used like this - -See all containers - - rclone lsd remote: - -Make a new container - - rclone mkdir remote:container - -List the contents of a container - - rclone ls remote:container - -Sync \[ga]/home/local/directory\[ga] to the remote container, deleting any -excess files in the container. 
- - rclone sync --interactive /home/local/directory remote:container - -### Configuration from an OpenStack credentials file - -An OpenStack credentials file typically looks something something -like this (without the comments) -\f[R] -.fi +.RE +.RE +.SS --swift-user .PP -export OS_AUTH_URL=https://a.provider.net/v2.0 export -OS_TENANT_ID=ffffffffffffffffffffffffffffffff export -OS_TENANT_NAME=\[dq]1234567890123456\[dq] export -OS_USERNAME=\[dq]123abc567xy\[dq] echo \[dq]Please enter your OpenStack -Password: \[dq] read -sr OS_PASSWORD_INPUT export -OS_PASSWORD=$OS_PASSWORD_INPUT export OS_REGION_NAME=\[dq]SBG1\[dq] if [ -z \[dq]$OS_REGION_NAME\[dq] -]; then unset OS_REGION_NAME; fi -.IP -.nf -\f[C] -The config file needs to look something like this where \[ga]$OS_USERNAME\[ga] -represents the value of the \[ga]OS_USERNAME\[ga] variable - \[ga]123abc567xy\[ga] in -the example above. -\f[R] -.fi -.PP -[remote] type = swift user = $OS_USERNAME key = $OS_PASSWORD auth = -$OS_AUTH_URL tenant = $OS_TENANT_NAME -.IP -.nf -\f[C] -Note that you may (or may not) need to set \[ga]region\[ga] too - try without first. - -### Configuration from the environment - -If you prefer you can configure rclone to use swift using a standard -set of OpenStack environment variables. - -When you run through the config, make sure you choose \[ga]true\[ga] for -\[ga]env_auth\[ga] and leave everything else blank. - -rclone will then set any empty config parameters from the environment -using standard OpenStack environment variables. There is [a list of -the -variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment) -in the docs for the swift library. - -### Using an alternate authentication method - -If your OpenStack installation uses a non-standard authentication method -that might not be yet supported by rclone or the underlying swift library, -you can authenticate externally (e.g. calling manually the \[ga]openstack\[ga] -commands to get a token). Then, you just need to pass the two -configuration variables \[ga]\[ga]auth_token\[ga]\[ga] and \[ga]\[ga]storage_url\[ga]\[ga]. -If they are both provided, the other variables are ignored. rclone will -not try to authenticate but instead assume it is already authenticated -and use these two variables to access the OpenStack installation. - -#### Using rclone without a config file - -You can use rclone with swift without a config file, if desired, like -this: -\f[R] -.fi -.PP -source openstack-credentials-file export -RCLONE_CONFIG_MYREMOTE_TYPE=swift export -RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true rclone lsd myremote: -.IP -.nf -\f[C] -### --fast-list - -This remote supports \[ga]--fast-list\[ga] which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](https://rclone.org/docs/#fast-list) for more details. - -### --update and --use-server-modtime - -As noted below, the modified time is stored on metadata on the object. It is -used by default for all operations that require checking the time a file was -last updated. It allows rclone to treat the remote more like a true filesystem, -but it is inefficient because it requires an extra API call to retrieve the -metadata. - -For many operations, the time the object was last uploaded to the remote is -sufficient to determine if it is \[dq]dirty\[dq]. By using \[ga]--update\[ga] along with -\[ga]--use-server-modtime\[ga], you can avoid the extra API call and simply upload -files whose local modtime is newer than the time it was last uploaded. 
- -### Modification times and hashes - -The modified time is stored as metadata on the object as -\[ga]X-Object-Meta-Mtime\[ga] as floating point since the epoch accurate to 1 -ns. - -This is a de facto standard (used in the official python-swiftclient -amongst others) for storing the modification time for an object. - -The MD5 hash algorithm is supported. - -### Restricted filename characters - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| NUL | 0x00 | \[u2400] | -| / | 0x2F | \[uFF0F] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - -#### --swift-env-auth - -Get swift credentials from environment variables in standard OpenStack form. - -Properties: - -- Config: env_auth -- Env Var: RCLONE_SWIFT_ENV_AUTH -- Type: bool -- Default: false -- Examples: - - \[dq]false\[dq] - - Enter swift credentials in the next step. - - \[dq]true\[dq] - - Get swift credentials from environment vars. - - Leave other fields blank if using this. - -#### --swift-user - User name to log in (OS_USERNAME). - +.PP Properties: - -- Config: user -- Env Var: RCLONE_SWIFT_USER -- Type: string -- Required: false - -#### --swift-key - +.IP \[bu] 2 +Config: user +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_USER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-key +.PP API key or password (OS_PASSWORD). - +.PP Properties: - -- Config: key -- Env Var: RCLONE_SWIFT_KEY -- Type: string -- Required: false - -#### --swift-auth - +.IP \[bu] 2 +Config: key +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-auth +.PP Authentication URL for server (OS_AUTH_URL). - +.PP Properties: - -- Config: auth -- Env Var: RCLONE_SWIFT_AUTH -- Type: string -- Required: false -- Examples: - - \[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] - - Rackspace US - - \[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] - - Rackspace UK - - \[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] - - Rackspace v2 - - \[dq]https://auth.storage.memset.com/v1.0\[dq] - - Memset Memstore UK - - \[dq]https://auth.storage.memset.com/v2.0\[dq] - - Memset Memstore UK v2 - - \[dq]https://auth.cloud.ovh.net/v3\[dq] - - OVH - - \[dq]https://authenticate.ain.net\[dq] - - Blomp Cloud Storage - -#### --swift-user-id - -User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID). 
- +.IP \[bu] 2 +Config: auth +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_AUTH +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]https://auth.api.rackspacecloud.com/v1.0\[dq] +.RS 2 +.IP \[bu] 2 +Rackspace US +.RE +.IP \[bu] 2 +\[dq]https://lon.auth.api.rackspacecloud.com/v1.0\[dq] +.RS 2 +.IP \[bu] 2 +Rackspace UK +.RE +.IP \[bu] 2 +\[dq]https://identity.api.rackspacecloud.com/v2.0\[dq] +.RS 2 +.IP \[bu] 2 +Rackspace v2 +.RE +.IP \[bu] 2 +\[dq]https://auth.storage.memset.com/v1.0\[dq] +.RS 2 +.IP \[bu] 2 +Memset Memstore UK +.RE +.IP \[bu] 2 +\[dq]https://auth.storage.memset.com/v2.0\[dq] +.RS 2 +.IP \[bu] 2 +Memset Memstore UK v2 +.RE +.IP \[bu] 2 +\[dq]https://auth.cloud.ovh.net/v3\[dq] +.RS 2 +.IP \[bu] 2 +OVH +.RE +.IP \[bu] 2 +\[dq]https://authenticate.ain.net\[dq] +.RS 2 +.IP \[bu] 2 +Blomp Cloud Storage +.RE +.RE +.SS --swift-user-id +.PP +User ID to log in - optional - most swift systems use user and leave +this blank (v3 auth) (OS_USER_ID). +.PP Properties: - -- Config: user_id -- Env Var: RCLONE_SWIFT_USER_ID -- Type: string -- Required: false - -#### --swift-domain - +.IP \[bu] 2 +Config: user_id +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_USER_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-domain +.PP User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME) - +.PP Properties: - -- Config: domain -- Env Var: RCLONE_SWIFT_DOMAIN -- Type: string -- Required: false - -#### --swift-tenant - -Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME). - +.IP \[bu] 2 +Config: domain +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_DOMAIN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-tenant +.PP +Tenant name - optional for v1 auth, this or tenant_id required otherwise +(OS_TENANT_NAME or OS_PROJECT_NAME). +.PP Properties: - -- Config: tenant -- Env Var: RCLONE_SWIFT_TENANT -- Type: string -- Required: false - -#### --swift-tenant-id - -Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID). - +.IP \[bu] 2 +Config: tenant +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_TENANT +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-tenant-id +.PP +Tenant ID - optional for v1 auth, this or tenant required otherwise +(OS_TENANT_ID). +.PP Properties: - -- Config: tenant_id -- Env Var: RCLONE_SWIFT_TENANT_ID -- Type: string -- Required: false - -#### --swift-tenant-domain - +.IP \[bu] 2 +Config: tenant_id +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_TENANT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-tenant-domain +.PP Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME). - +.PP Properties: - -- Config: tenant_domain -- Env Var: RCLONE_SWIFT_TENANT_DOMAIN -- Type: string -- Required: false - -#### --swift-region - +.IP \[bu] 2 +Config: tenant_domain +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_TENANT_DOMAIN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-region +.PP Region name - optional (OS_REGION_NAME). - +.PP Properties: - -- Config: region -- Env Var: RCLONE_SWIFT_REGION -- Type: string -- Required: false - -#### --swift-storage-url - +.IP \[bu] 2 +Config: region +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_REGION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-storage-url +.PP Storage URL - optional (OS_STORAGE_URL). 
- +.PP Properties: - -- Config: storage_url -- Env Var: RCLONE_SWIFT_STORAGE_URL -- Type: string -- Required: false - -#### --swift-auth-token - +.IP \[bu] 2 +Config: storage_url +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_STORAGE_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-auth-token +.PP Auth Token from alternate authentication - optional (OS_AUTH_TOKEN). - +.PP Properties: - -- Config: auth_token -- Env Var: RCLONE_SWIFT_AUTH_TOKEN -- Type: string -- Required: false - -#### --swift-application-credential-id - +.IP \[bu] 2 +Config: auth_token +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_AUTH_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-application-credential-id +.PP Application Credential ID (OS_APPLICATION_CREDENTIAL_ID). - +.PP Properties: - -- Config: application_credential_id -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID -- Type: string -- Required: false - -#### --swift-application-credential-name - +.IP \[bu] 2 +Config: application_credential_id +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-application-credential-name +.PP Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME). - +.PP Properties: - -- Config: application_credential_name -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME -- Type: string -- Required: false - -#### --swift-application-credential-secret - +.IP \[bu] 2 +Config: application_credential_name +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_NAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-application-credential-secret +.PP Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET). - +.PP Properties: - -- Config: application_credential_secret -- Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET -- Type: string -- Required: false - -#### --swift-auth-version - -AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION). - +.IP \[bu] 2 +Config: application_credential_secret +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_APPLICATION_CREDENTIAL_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --swift-auth-version +.PP +AuthVersion - optional - set to (1,2,3) if your auth URL has no version +(ST_AUTH_VERSION). +.PP Properties: - -- Config: auth_version -- Env Var: RCLONE_SWIFT_AUTH_VERSION -- Type: int -- Default: 0 - -#### --swift-endpoint-type - +.IP \[bu] 2 +Config: auth_version +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_AUTH_VERSION +.IP \[bu] 2 +Type: int +.IP \[bu] 2 +Default: 0 +.SS --swift-endpoint-type +.PP Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE). - +.PP Properties: - -- Config: endpoint_type -- Env Var: RCLONE_SWIFT_ENDPOINT_TYPE -- Type: string -- Default: \[dq]public\[dq] -- Examples: - - \[dq]public\[dq] - - Public (default, choose this if not sure) - - \[dq]internal\[dq] - - Internal (use internal service net) - - \[dq]admin\[dq] - - Admin - -#### --swift-storage-policy - +.IP \[bu] 2 +Config: endpoint_type +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_ENDPOINT_TYPE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]public\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]public\[dq] +.RS 2 +.IP \[bu] 2 +Public (default, choose this if not sure) +.RE +.IP \[bu] 2 +\[dq]internal\[dq] +.RS 2 +.IP \[bu] 2 +Internal (use internal service net) +.RE +.IP \[bu] 2 +\[dq]admin\[dq] +.RS 2 +.IP \[bu] 2 +Admin +.RE +.RE +.SS --swift-storage-policy +.PP The storage policy to use when creating a new container. 
- -This applies the specified storage policy when creating a new -container. The policy cannot be changed afterwards. The allowed -configuration values and their meaning depend on your Swift storage -provider. - +.PP +This applies the specified storage policy when creating a new container. +The policy cannot be changed afterwards. +The allowed configuration values and their meaning depend on your Swift +storage provider. +.PP Properties: - -- Config: storage_policy -- Env Var: RCLONE_SWIFT_STORAGE_POLICY -- Type: string -- Required: false -- Examples: - - \[dq]\[dq] - - Default - - \[dq]pcs\[dq] - - OVH Public Cloud Storage - - \[dq]pca\[dq] - - OVH Public Cloud Archive - -### Advanced options - -Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). - -#### --swift-leave-parts-on-error - +.IP \[bu] 2 +Config: storage_policy +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_STORAGE_POLICY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +Default +.RE +.IP \[bu] 2 +\[dq]pcs\[dq] +.RS 2 +.IP \[bu] 2 +OVH Public Cloud Storage +.RE +.IP \[bu] 2 +\[dq]pca\[dq] +.RS 2 +.IP \[bu] 2 +OVH Public Cloud Archive +.RE +.RE +.SS Advanced options +.PP +Here are the Advanced options specific to swift (OpenStack Swift +(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). +.SS --swift-leave-parts-on-error +.PP If true avoid calling abort upload on a failure. - +.PP It should be set to true for resuming uploads across different sessions. - +.PP Properties: - -- Config: leave_parts_on_error -- Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR -- Type: bool -- Default: false - -#### --swift-chunk-size - +.IP \[bu] 2 +Config: leave_parts_on_error +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_LEAVE_PARTS_ON_ERROR +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --swift-chunk-size +.PP Above this size files will be chunked into a _segments container. - -Above this size files will be chunked into a _segments container. The -default for this is 5 GiB which is its maximum value. - +.PP +Above this size files will be chunked into a _segments container. +The default for this is 5 GiB which is its maximum value. +.PP Properties: - -- Config: chunk_size -- Env Var: RCLONE_SWIFT_CHUNK_SIZE -- Type: SizeSuffix -- Default: 5Gi - -#### --swift-no-chunk - +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 5Gi +.SS --swift-no-chunk +.PP Don\[aq]t chunk files during streaming upload. - -When doing streaming uploads (e.g. using rcat or mount) setting this -flag will cause the swift backend to not upload chunked files. - -This will limit the maximum upload size to 5 GiB. However non chunked -files are easier to deal with and have an MD5SUM. - +.PP +When doing streaming uploads (e.g. +using rcat or mount) setting this flag will cause the swift backend to +not upload chunked files. +.PP +This will limit the maximum upload size to 5 GiB. +However non chunked files are easier to deal with and have an MD5SUM. +.PP Rclone will still chunk files bigger than chunk_size when doing normal copy operations. 
- +.PP Properties: - -- Config: no_chunk -- Env Var: RCLONE_SWIFT_NO_CHUNK -- Type: bool -- Default: false - -#### --swift-no-large-objects - +.IP \[bu] 2 +Config: no_chunk +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_NO_CHUNK +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --swift-no-large-objects +.PP Disable support for static and dynamic large objects - -Swift cannot transparently store files bigger than 5 GiB. There are -two schemes for doing that, static or dynamic large objects, and the -API does not allow rclone to determine whether a file is a static or -dynamic large object without doing a HEAD on the object. Since these -need to be treated differently, this means rclone has to issue HEAD -requests for objects for example when reading checksums. - -When \[ga]no_large_objects\[ga] is set, rclone will assume that there are no -static or dynamic large objects stored. This means it can stop doing -the extra HEAD calls which in turn increases performance greatly -especially when doing a swift to swift transfer with \[ga]--checksum\[ga] set. - -Setting this option implies \[ga]no_chunk\[ga] and also that no files will be -uploaded in chunks, so files bigger than 5 GiB will just fail on +.PP +Swift cannot transparently store files bigger than 5 GiB. +There are two schemes for doing that, static or dynamic large objects, +and the API does not allow rclone to determine whether a file is a +static or dynamic large object without doing a HEAD on the object. +Since these need to be treated differently, this means rclone has to +issue HEAD requests for objects for example when reading checksums. +.PP +When \f[C]no_large_objects\f[R] is set, rclone will assume that there +are no static or dynamic large objects stored. +This means it can stop doing the extra HEAD calls which in turn +increases performance greatly especially when doing a swift to swift +transfer with \f[C]--checksum\f[R] set. +.PP +Setting this option implies \f[C]no_chunk\f[R] and also that no files +will be uploaded in chunks, so files bigger than 5 GiB will just fail on upload. - -If you set this option and there *are* static or dynamic large objects, -then this will give incorrect hashes for them. Downloads will succeed, -but other operations such as Remove and Copy will fail. - - +.PP +If you set this option and there \f[I]are\f[R] static or dynamic large +objects, then this will give incorrect hashes for them. +Downloads will succeed, but other operations such as Remove and Copy +will fail. +.PP Properties: - -- Config: no_large_objects -- Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS -- Type: bool -- Default: false - -#### --swift-encoding - +.IP \[bu] 2 +Config: no_large_objects +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_NO_LARGE_OBJECTS +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --swift-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. 
+.PP Properties: - -- Config: encoding -- Env Var: RCLONE_SWIFT_ENCODING -- Type: Encoding -- Default: Slash,InvalidUtf8 - - - -## Limitations - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,InvalidUtf8 +.SS --swift-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_SWIFT_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP The Swift API doesn\[aq]t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won\[aq]t check or use the MD5SUM for these. - -## Troubleshooting - -### Rclone gives Failed to create file system for \[dq]remote:\[dq]: Bad Request - +.SS Troubleshooting +.SS Rclone gives Failed to create file system for \[dq]remote:\[dq]: Bad Request +.PP Due to an oddity of the underlying swift library, it gives a \[dq]Bad Request\[dq] error rather than a more sensible error when the authentication fails for Swift. - -So this most likely means your username / password is wrong. You can -investigate further with the \[ga]--dump-bodies\[ga] flag. - +.PP +So this most likely means your username / password is wrong. +You can investigate further with the \f[C]--dump-bodies\f[R] flag. +.PP This may also be caused by specifying the region when you shouldn\[aq]t -have (e.g. OVH). - -### Rclone gives Failed to create file system: Response didn\[aq]t have storage url and auth token - +have (e.g. +OVH). +.SS Rclone gives Failed to create file system: Response didn\[aq]t have storage url and auth token +.PP This is most likely caused by forgetting to specify your tenant when setting up a swift remote. - -## OVH Cloud Archive - -To use rclone with OVH cloud archive, first use \[ga]rclone config\[ga] to set up a \[ga]swift\[ga] backend with OVH, choosing \[ga]pca\[ga] as the \[ga]storage_policy\[ga]. - -### Uploading Objects - -Uploading objects to OVH cloud archive is no different to object storage, you just simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a \[dq]Frozen\[dq] state within the OVH control panel. - -### Retrieving Objects - -To retrieve objects use \[ga]rclone copy\[ga] as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following: - -\[ga]2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)\[ga] - -Rclone will wait for the time specified then retry the copy. - -# pCloud - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -The initial setup for pCloud involves getting a token from pCloud which you -need to do in your browser. \[ga]rclone config\[ga] walks you through it. - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +.SS OVH Cloud Archive .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / Pcloud -\ \[dq]pcloud\[dq] [snip] Storage> pcloud Pcloud App Client Id - leave -blank normally. 
-client_id> Pcloud App Client Secret - leave blank normally. -client_secret> Remote config Use web browser to automatically -authenticate rclone with remote? -* Say Y if the machine running rclone has a web browser you can use * -Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. -If Y failed, try N. -y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to -the following link: http://127.0.0.1:53682/auth Log in and authorize -rclone for access Waiting for code... -Got code -------------------- [remote] client_id = client_secret = token -= -{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +To use rclone with OVH cloud archive, first use \f[C]rclone config\f[R] +to set up a \f[C]swift\f[R] backend with OVH, choosing \f[C]pca\f[R] as +the \f[C]storage_policy\f[R]. +.SS Uploading Objects +.PP +Uploading objects to OVH cloud archive is no different to object +storage, you just simply run the command you like (move, copy or sync) +to upload the objects. +Once uploaded the objects will show in a \[dq]Frozen\[dq] state within +the OVH control panel. +.SS Retrieving Objects +.PP +To retrieve objects use \f[C]rclone copy\f[R] as normal. +If the objects are in a frozen state then rclone will ask for them all +to be unfrozen and it will wait at the end of the output with a message +like the following: +.PP +\f[C]2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)\f[R] +.PP +Rclone will wait for the time specified then retry the copy. +.SH pCloud +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP +Paths may be as deep as required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration +.PP +The initial setup for pCloud involves getting a token from pCloud which +you need to do in your browser. +\f[C]rclone config\f[R] walks you through it. +.PP +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: .IP .nf \f[C] -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. - -Note that rclone runs a webserver on your local machine to collect the -token as returned from pCloud. This only runs from the moment it opens -your browser to the moment you get back the verification code. This -is on \[ga]http://127.0.0.1:53682/\[ga] and this it may require you to unblock -it temporarily if you are running a host firewall. - -Once configured you can then use \[ga]rclone\[ga] like this, - -List directories in top level of your pCloud - - rclone lsd remote: - -List all the files in your pCloud - - rclone ls remote: - -To copy a local directory to a pCloud directory called backup - - rclone copy /home/source remote:backup - -### Modification times and hashes - -pCloud allows modification times to be set on objects accurate to 1 -second. These will be used to detect whether objects need syncing or -not. In order to set a Modification time pCloud requires the object -be re-uploaded. - -pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and SHA256 -hashes in the EU region, so you can use the \[ga]--checksum\[ga] flag. 
- -### Restricted filename characters - -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) -the following characters are also replaced: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| \[rs] | 0x5C | \[uFF3C] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - -### Deleting files - -Deleted files will be moved to the trash. Your subscription level -will determine how long items stay in the trash. \[ga]rclone cleanup\[ga] can -be used to empty the trash. - -### Emptying the trash - -Due to an API limitation, the \[ga]rclone cleanup\[ga] command will only work if you -set your username and password in the advanced options for this backend. -Since we generally want to avoid storing user passwords in the rclone config -file, we advise you to only set this up if you need the \[ga]rclone cleanup\[ga] command to work. - -### Root folder ID - -You can set the \[ga]root_folder_id\[ga] for rclone. This is the directory -(identified by its \[ga]Folder ID\[ga]) that rclone considers to be the root -of your pCloud drive. - -Normally you will leave this blank and rclone will determine the -correct root to use itself. - -However you can set this to restrict rclone to a specific folder -hierarchy. - -In order to do this you will have to find the \[ga]Folder ID\[ga] of the -directory you wish rclone to display. This will be the \[ga]folder\[ga] field -of the URL when you open the relevant folder in the pCloud web -interface. - -So if the folder you want rclone to use has a URL which looks like -\[ga]https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid\[ga] -in the browser, then you use \[ga]5xxxxxxxx8\[ga] as -the \[ga]root_folder_id\[ga] in the config. - - -### Standard options - -Here are the Standard options specific to pcloud (Pcloud). - -#### --pcloud-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_PCLOUD_CLIENT_ID -- Type: string -- Required: false - -#### --pcloud-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_PCLOUD_CLIENT_SECRET -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to pcloud (Pcloud). - -#### --pcloud-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_PCLOUD_TOKEN -- Type: string -- Required: false - -#### --pcloud-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_PCLOUD_AUTH_URL -- Type: string -- Required: false - -#### --pcloud-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_PCLOUD_TOKEN_URL -- Type: string -- Required: false - -#### --pcloud-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_PCLOUD_ENCODING -- Type: Encoding -- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - -#### --pcloud-root-folder-id - -Fill in for rclone to use a non root folder as its starting point. - -Properties: - -- Config: root_folder_id -- Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID -- Type: string -- Default: \[dq]d0\[dq] - -#### --pcloud-hostname - -Hostname to connect to. 
- -This is normally set when rclone initially does the oauth connection, -however you will need to set it by hand if you are using remote config -with rclone authorize. - - -Properties: - -- Config: hostname -- Env Var: RCLONE_PCLOUD_HOSTNAME -- Type: string -- Default: \[dq]api.pcloud.com\[dq] -- Examples: - - \[dq]api.pcloud.com\[dq] - - Original/US region - - \[dq]eapi.pcloud.com\[dq] - - EU region - -#### --pcloud-username - -Your pcloud username. - -This is only required when you want to use the cleanup command. Due to a bug -in the pcloud API the required API does not support OAuth authentication so -we have to rely on user password authentication for it. - -Properties: - -- Config: username -- Env Var: RCLONE_PCLOUD_USERNAME -- Type: string -- Required: false - -#### --pcloud-password - -Your pcloud password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - -Properties: - -- Config: password -- Env Var: RCLONE_PCLOUD_PASSWORD -- Type: string -- Required: false - - - -# PikPak - -PikPak is [a private cloud drive](https://mypikpak.com/). - -Paths are specified as \[ga]remote:path\[ga], and may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -Here is an example of making a remote for PikPak. - -First run: - - rclone config - -This will guide you through an interactive setup process: + rclone config \f[R] .fi .PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> remote +Type of storage to configure. +Choose a number from below, or type in your own value +[snip] +XX / Pcloud + \[rs] \[dq]pcloud\[dq] +[snip] +Storage> pcloud +Pcloud App Client Id - leave blank normally. +client_id> +Pcloud App Client Secret - leave blank normally. +client_secret> +Remote config +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. +y) Yes +n) No +y/n> y +If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth +Log in and authorize rclone for access +Waiting for code... +Got code +-------------------- +[remote] +client_id = +client_secret = +token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]bearer\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi .PP +See the remote setup docs (https://rclone.org/remote_setup/) for how to +set it up on a machine with no Internet browser available. +.PP +Note that rclone runs a webserver on your local machine to collect the +token as returned from pCloud. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you +to unblock it temporarily if you are running a host firewall. 
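+.PP
+For the no-browser case mentioned above, the headless flow from the
+remote setup docs looks roughly like this (a sketch only; the exact
+prompts vary between rclone versions):
+.IP
+.nf
+\f[C]
+# On a machine which does have a web browser:
+rclone authorize \[dq]pcloud\[dq]
+# Paste the token it prints into the matching prompt of
+# \[aq]rclone config\[aq] on the browserless machine.
+\f[R]
+.fi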
+.PP +Once configured you can then use \f[C]rclone\f[R] like this, +.PP +List directories in top level of your pCloud +.IP +.nf +\f[C] +rclone lsd remote: +\f[R] +.fi +.PP +List all the files in your pCloud +.IP +.nf +\f[C] +rclone ls remote: +\f[R] +.fi +.PP +To copy a local directory to a pCloud directory called backup +.IP +.nf +\f[C] +rclone copy /home/source remote:backup +\f[R] +.fi +.SS Modification times and hashes +.PP +pCloud allows modification times to be set on objects accurate to 1 +second. +These will be used to detect whether objects need syncing or not. +In order to set a Modification time pCloud requires the object be +re-uploaded. +.PP +pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and +SHA256 hashes in the EU region, so you can use the \f[C]--checksum\f[R] +flag. +.SS Restricted filename characters +.PP +In addition to the default restricted characters +set (https://rclone.org/overview/#restricted-characters) the following +characters are also replaced: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Deleting files +.PP +Deleted files will be moved to the trash. +Your subscription level will determine how long items stay in the trash. +\f[C]rclone cleanup\f[R] can be used to empty the trash. +.SS Emptying the trash +.PP +Due to an API limitation, the \f[C]rclone cleanup\f[R] command will only +work if you set your username and password in the advanced options for +this backend. +Since we generally want to avoid storing user passwords in the rclone +config file, we advise you to only set this up if you need the +\f[C]rclone cleanup\f[R] command to work. +.SS Root folder ID +.PP +You can set the \f[C]root_folder_id\f[R] for rclone. +This is the directory (identified by its \f[C]Folder ID\f[R]) that +rclone considers to be the root of your pCloud drive. +.PP +Normally you will leave this blank and rclone will determine the correct +root to use itself. +.PP +However you can set this to restrict rclone to a specific folder +hierarchy. +.PP +In order to do this you will have to find the \f[C]Folder ID\f[R] of the +directory you wish rclone to display. +This will be the \f[C]folder\f[R] field of the URL when you open the +relevant folder in the pCloud web interface. +.PP +So if the folder you want rclone to use has a URL which looks like +\f[C]https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid\f[R] +in the browser, then you use \f[C]5xxxxxxxx8\f[R] as the +\f[C]root_folder_id\f[R] in the config. +.SS Standard options +.PP +Here are the Standard options specific to pcloud (Pcloud). +.SS --pcloud-client-id +.PP +OAuth Client Id. +.PP +Leave blank normally. +.PP +Properties: +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-client-secret +.PP +OAuth Client Secret. +.PP +Leave blank normally. +.PP +Properties: +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to pcloud (Pcloud). +.SS --pcloud-token +.PP +OAuth Access Token as a JSON blob. 
+.PP +Properties: +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-auth-url +.PP +Auth server URL. +.PP +Leave blank to use the provider defaults. +.PP +Properties: +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-token-url +.PP +Token server url. +.PP +Leave blank to use the provider defaults. +.PP +Properties: +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-encoding +.PP +The encoding for the backend. +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP +Properties: +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS --pcloud-root-folder-id +.PP +Fill in for rclone to use a non root folder as its starting point. +.PP +Properties: +.IP \[bu] 2 +Config: root_folder_id +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_ROOT_FOLDER_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]d0\[dq] +.SS --pcloud-hostname +.PP +Hostname to connect to. +.PP +This is normally set when rclone initially does the oauth connection, +however you will need to set it by hand if you are using remote config +with rclone authorize. +.PP +Properties: +.IP \[bu] 2 +Config: hostname +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_HOSTNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Default: \[dq]api.pcloud.com\[dq] +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]api.pcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Original/US region +.RE +.IP \[bu] 2 +\[dq]eapi.pcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +EU region +.RE +.RE +.SS --pcloud-username +.PP +Your pcloud username. +.PP +This is only required when you want to use the cleanup command. +Due to a bug in the pcloud API the required API does not support OAuth +authentication so we have to rely on user password authentication for +it. +.PP +Properties: +.IP \[bu] 2 +Config: username +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_USERNAME +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-password +.PP +Your pcloud password. +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: password +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pcloud-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PCLOUD_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SH PikPak +.PP +PikPak is a private cloud drive (https://mypikpak.com/). +.PP +Paths are specified as \f[C]remote:path\f[R], and may be as deep as +required, e.g. +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration +.PP +Here is an example of making a remote for PikPak. +.PP +First run: +.IP +.nf +\f[C] + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + Enter name for new remote. name> remote -.PP + Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. 
-XX / PikPak \ (pikpak) Storage> XX -.PP +XX / PikPak + \[rs] (pikpak) +Storage> XX + Option user. Pikpak username. Enter a value. user> USERNAME -.PP + Option pass. Pikpak password. Choose an alternative below. -y) Yes, type in my own password g) Generate random password y/g> y Enter -the password: password: Confirm the password: password: -.PP +y) Yes, type in my own password +g) Generate random password +y/g> y +Enter the password: +password: +Confirm the password: +password: + Edit advanced config? -y) Yes n) No (default) y/n> -.PP +y) Yes +n) No (default) +y/n> + Configuration complete. -Options: - type: pikpak - user: USERNAME - pass: *** ENCRYPTED *** - -token: -{\[dq]access_token\[dq]:\[dq]eyJ...\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]os...\[dq],\[dq]expiry\[dq]:\[dq]2023-01-26T18:54:32.170582647+09:00\[dq]} +Options: +- type: pikpak +- user: USERNAME +- pass: *** ENCRYPTED *** +- token: {\[dq]access_token\[dq]:\[dq]eyJ...\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]os...\[dq],\[dq]expiry\[dq]:\[dq]2023-01-26T18:54:32.170582647+09:00\[dq]} Keep this \[dq]remote\[dq] remote? -y) Yes this is OK (default) e) Edit this remote d) Delete this remote +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -### Modification times and hashes - -PikPak keeps modification times on objects, and updates them when uploading objects, -but it does not support changing only the modification time - +\f[R] +.fi +.SS Modification times and hashes +.PP +PikPak keeps modification times on objects, and updates them when +uploading objects, but it does not support changing only the +modification time +.PP The MD5 hash algorithm is supported. - - -### Standard options - +.SS Standard options +.PP Here are the Standard options specific to pikpak (PikPak). - -#### --pikpak-user - +.SS --pikpak-user +.PP Pikpak username. - +.PP Properties: - -- Config: user -- Env Var: RCLONE_PIKPAK_USER -- Type: string -- Required: true - -#### --pikpak-pass - +.IP \[bu] 2 +Config: user +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_USER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --pikpak-pass +.PP Pikpak password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: pass -- Env Var: RCLONE_PIKPAK_PASS -- Type: string -- Required: true - -### Advanced options - +.IP \[bu] 2 +Config: pass +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_PASS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS Advanced options +.PP Here are the Advanced options specific to pikpak (PikPak). - -#### --pikpak-client-id - +.SS --pikpak-client-id +.PP OAuth Client Id. - +.PP Leave blank normally. - +.PP Properties: - -- Config: client_id -- Env Var: RCLONE_PIKPAK_CLIENT_ID -- Type: string -- Required: false - -#### --pikpak-client-secret - +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-client-secret +.PP OAuth Client Secret. - +.PP Leave blank normally. 
- +.PP Properties: - -- Config: client_secret -- Env Var: RCLONE_PIKPAK_CLIENT_SECRET -- Type: string -- Required: false - -#### --pikpak-token - +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-token +.PP OAuth Access Token as a JSON blob. - +.PP Properties: - -- Config: token -- Env Var: RCLONE_PIKPAK_TOKEN -- Type: string -- Required: false - -#### --pikpak-auth-url - +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-auth-url +.PP Auth server URL. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: auth_url -- Env Var: RCLONE_PIKPAK_AUTH_URL -- Type: string -- Required: false - -#### --pikpak-token-url - +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-token-url +.PP Token server url. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: token_url -- Env Var: RCLONE_PIKPAK_TOKEN_URL -- Type: string -- Required: false - -#### --pikpak-root-folder-id - +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-root-folder-id +.PP ID of the root folder. Leave blank normally. - +.PP Fill in for rclone to use a non root folder as its starting point. - - +.PP Properties: - -- Config: root_folder_id -- Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID -- Type: string -- Required: false - -#### --pikpak-use-trash - +.IP \[bu] 2 +Config: root_folder_id +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_ROOT_FOLDER_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --pikpak-use-trash +.PP Send files to the trash instead of deleting permanently. - +.PP Defaults to true, namely sending files to the trash. -Use \[ga]--pikpak-use-trash=false\[ga] to delete files permanently instead. - +Use \f[C]--pikpak-use-trash=false\f[R] to delete files permanently +instead. +.PP Properties: - -- Config: use_trash -- Env Var: RCLONE_PIKPAK_USE_TRASH -- Type: bool -- Default: true - -#### --pikpak-trashed-only - +.IP \[bu] 2 +Config: use_trash +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_USE_TRASH +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: true +.SS --pikpak-trashed-only +.PP Only show files that are in the trash. - +.PP This will show trashed files in their original directory structure. - +.PP Properties: - -- Config: trashed_only -- Env Var: RCLONE_PIKPAK_TRASHED_ONLY -- Type: bool -- Default: false - -#### --pikpak-hash-memory-limit - -Files bigger than this will be cached on disk to calculate hash if required. - +.IP \[bu] 2 +Config: trashed_only +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_TRASHED_ONLY +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --pikpak-hash-memory-limit +.PP +Files bigger than this will be cached on disk to calculate hash if +required. +.PP Properties: - -- Config: hash_memory_limit -- Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT -- Type: SizeSuffix -- Default: 10Mi - -#### --pikpak-encoding - +.IP \[bu] 2 +Config: hash_memory_limit +.IP \[bu] 2 +Env Var: RCLONE_PIKPAK_HASH_MEMORY_LIMIT +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 10Mi +.SS --pikpak-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. 
-
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
 Properties:
-
-- Config: encoding
-- Env Var: RCLONE_PIKPAK_ENCODING
-- Type: Encoding
-- Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
-
-## Backend commands
-
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_PIKPAK_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default:
+Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot
+.SS --pikpak-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_PIKPAK_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Backend commands
+.PP
 Here are the commands specific to the pikpak backend.
-
+.PP
 Run them with
-
-    rclone backend COMMAND remote:
-
+.IP
+.nf
+\f[C]
+rclone backend COMMAND remote:
+\f[R]
+.fi
+.PP
 The help below will explain what arguments each command takes.
-
-See the [backend](https://rclone.org/commands/rclone_backend/) command for more
-info on how to pass options and arguments.
-
+.PP
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
+.PP
 These can be run on a running backend using the rc command
-[backend/command](https://rclone.org/rc/#backend-command).
-
-### addurl
-
+backend/command (https://rclone.org/rc/#backend-command).
+.SS addurl
+.PP
 Add offline download task for url
-
-    rclone backend addurl remote: [options] [<arguments>+]
-
+.IP
+.nf
+\f[C]
+rclone backend addurl remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
 This command adds offline download task for url.
-
+.PP
 Usage:
-
-    rclone backend addurl pikpak:dirpath url
-
-Downloads will be stored in \[aq]dirpath\[aq]. If \[aq]dirpath\[aq] is invalid,
-download will fallback to default \[aq]My Pack\[aq] folder.
-
-
-### decompress
-
+.IP
+.nf
+\f[C]
+rclone backend addurl pikpak:dirpath url
+\f[R]
+.fi
+.PP
+Downloads will be stored in \[aq]dirpath\[aq].
+If \[aq]dirpath\[aq] is invalid, the download will fall back to the
+default \[aq]My Pack\[aq] folder.
+.SS decompress
+.PP
 Request decompress of a file/files in a folder
-
-    rclone backend decompress remote: [options] [<arguments>+]
-
+.IP
+.nf
+\f[C]
+rclone backend decompress remote: [options] [<arguments>+]
+\f[R]
+.fi
+.PP
 This command requests decompress of file/files in a folder.
-
+.PP
 Usage:
-
-    rclone backend decompress pikpak:dirpath {filename} -o password=password
-    rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
-
-An optional argument \[aq]filename\[aq] can be specified for a file located in
-\[aq]pikpak:dirpath\[aq]. You may want to pass \[aq]-o password=password\[aq] for a
-password-protected files. Also, pass \[aq]-o delete-src-file\[aq] to delete
-source files after decompression finished.
-
+.IP
+.nf
+\f[C]
+rclone backend decompress pikpak:dirpath {filename} -o password=password
+rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
+\f[R]
+.fi
+.PP
+An optional argument \[aq]filename\[aq] can be specified for a file
+located in \[aq]pikpak:dirpath\[aq].
+You may want to pass \[aq]-o password=password\[aq] for
+password-protected files.
+Also, pass \[aq]-o delete-src-file\[aq] to delete source files after
+decompression has finished.
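+.PP
+For example, to decompress a password-protected archive and then remove
+the source archive once it has been unpacked (the directory, file name
+and password here are purely illustrative):
+.IP
+.nf
+\f[C]
+rclone backend decompress pikpak:downloads archive.zip -o password=secret -o delete-src-file
+\f[R]
+.fi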
+.PP
 Result:
-
-    {
-        \[dq]Decompressed\[dq]: 17,
-        \[dq]SourceDeleted\[dq]: 0,
-        \[dq]Errors\[dq]: 0
-    }
-
-
-
-
-## Limitations
-
-### Hashes may be empty
-
-PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files.
-
-### Deleted files still visible with trashed-only
-
-Deleted files will still be visible with \[ga]--pikpak-trashed-only\[ga] even after the
-trash emptied. This goes away after few days.
-
-# premiumize.me
-
-Paths are specified as \[ga]remote:path\[ga]
-
-Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga].
-
-## Configuration
-
-The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
-need to do in your browser. \[ga]rclone config\[ga] walks you through it.
-
-Here is an example of how to make a remote called \[ga]remote\[ga]. First run:
-
-    rclone config
-
-This will guide you through an interactive setup process:
-\f[R]
-.fi
-.PP
-No remotes found, make a new one?
-n) New remote s) Set configuration password q) Quit config n/s/q> n
-name> remote Type of storage to configure.
-Enter a string value.
-Press Enter for the default (\[dq]\[dq]).
-Choose a number from below, or type in your own value [snip] XX /
-premiumize.me \ \[dq]premiumizeme\[dq] [snip] Storage> premiumizeme **
-See help for premiumizeme backend at: https://rclone.org/premiumizeme/
-**
-.PP
-Remote config Use web browser to automatically authenticate rclone with
-remote?
-* Say Y if the machine running rclone has a web browser you can use *
-Say N if running rclone on a (remote) machine without web browser access
-If not sure try Y.
-If Y failed, try N.
-y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to
-the following link: http://127.0.0.1:53682/auth Log in and authorize
-rclone for access Waiting for code...
-Got code
-------------------- [remote] type = premiumizeme token =
-{\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2029-08-07T18:44:15.548915378+01:00\[dq]}
--------------------- y) Yes this is OK e) Edit this remote d) Delete
-this remote y/e/d>
 .IP
 .nf
 \f[C]
-See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
+{
+    \[dq]Decompressed\[dq]: 17,
+    \[dq]SourceDeleted\[dq]: 0,
+    \[dq]Errors\[dq]: 0
+}
+\f[R]
+.fi
+.SS Limitations
+.SS Hashes may be empty
+.PP
+PikPak supports the MD5 hash, but it is sometimes empty, especially for
+user-uploaded files.
+.SS Deleted files still visible with trashed-only
+.PP
+Deleted files will still be visible with \f[C]--pikpak-trashed-only\f[R]
+even after the trash has been emptied.
+This goes away after a few days.
+.SH premiumize.me
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[C]remote:directory/subdirectory\f[R].
+.SS Configuration
+.PP
+The initial setup for premiumize.me (https://premiumize.me/) involves
+getting a token from premiumize.me which you need to do in your browser.
+\f[C]rclone config\f[R] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value
+[snip]
+XX / premiumize.me
+ \[rs] \[dq]premiumizeme\[dq]
+[snip]
+Storage> premiumizeme
+** See help for premiumizeme backend at: https://rclone.org/premiumizeme/ **
+Remote config
+Use web browser to automatically authenticate rclone with remote?
+ * Say Y if the machine running rclone has a web browser you can use
+ * Say N if running rclone on a (remote) machine without web browser access
+If not sure try Y. If Y failed, try N.
+y) Yes
+n) No
+y/n> y
+If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+type = premiumizeme
+token = {\[dq]access_token\[dq]:\[dq]XXX\[dq],\[dq]token_type\[dq]:\[dq]Bearer\[dq],\[dq]refresh_token\[dq]:\[dq]XXX\[dq],\[dq]expiry\[dq]:\[dq]2029-08-07T18:44:15.548915378+01:00\[dq]}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d>
+\f[R]
+.fi
+.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP
 Note that rclone runs a webserver on your local machine to collect the
-token as returned from premiumize.me. This only runs from the moment it opens
-your browser to the moment you get back the verification code. This
-is on \[ga]http://127.0.0.1:53682/\[ga] and this it may require you to unblock
-it temporarily if you are running a host firewall.
-
-Once configured you can then use \[ga]rclone\[ga] like this,
-
+token as returned from premiumize.me.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[R] and it may require you to
+unblock it temporarily if you are running a host firewall.
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
 List directories in top level of your premiumize.me
-
-    rclone lsd remote:
-
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
 List all the files in your premiumize.me
-
-    rclone ls remote:
-
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
 To copy a local directory to an premiumize.me directory called backup
-
-    rclone copy /home/source remote:backup
-
-### Modification times and hashes
-
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
 premiumize.me does not support modification times or hashes, therefore
-syncing will default to \[ga]--size-only\[ga] checking. Note that using
-\[ga]--update\[ga] will work.
-
-### Restricted filename characters
-
-In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
-the following characters are also replaced:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| \[rs] | 0x5C | \[uFF3C] |
-| \[dq] | 0x22 | \[uFF02] |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can\[aq]t be used in JSON strings.
-
-
-### Standard options
-
+syncing will default to \f[C]--size-only\f[R] checking.
+Note that using \f[C]--update\f[R] will work.
+.SS Restricted filename characters
+.PP
+In addition to the default restricted characters
+set (https://rclone.org/overview/#restricted-characters) the following
+characters are also replaced:
+.PP
+.TS
+tab(@);
+l c c.
+T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +T{ +\[dq] +T}@T{ +0x22 +T}@T{ +\[uFF02] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP Here are the Standard options specific to premiumizeme (premiumize.me). - -#### --premiumizeme-client-id - +.SS --premiumizeme-client-id +.PP OAuth Client Id. - +.PP Leave blank normally. - +.PP Properties: - -- Config: client_id -- Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID -- Type: string -- Required: false - -#### --premiumizeme-client-secret - +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-client-secret +.PP OAuth Client Secret. - +.PP Leave blank normally. - +.PP Properties: - -- Config: client_secret -- Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET -- Type: string -- Required: false - -#### --premiumizeme-api-key - +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-api-key +.PP API Key. - +.PP This is not normally used - use oauth instead. - - +.PP Properties: - -- Config: api_key -- Env Var: RCLONE_PREMIUMIZEME_API_KEY -- Type: string -- Required: false - -### Advanced options - +.IP \[bu] 2 +Config: api_key +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_API_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP Here are the Advanced options specific to premiumizeme (premiumize.me). - -#### --premiumizeme-token - +.SS --premiumizeme-token +.PP OAuth Access Token as a JSON blob. - +.PP Properties: - -- Config: token -- Env Var: RCLONE_PREMIUMIZEME_TOKEN -- Type: string -- Required: false - -#### --premiumizeme-auth-url - +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-auth-url +.PP Auth server URL. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: auth_url -- Env Var: RCLONE_PREMIUMIZEME_AUTH_URL -- Type: string -- Required: false - -#### --premiumizeme-token-url - +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-token-url +.PP Token server url. - +.PP Leave blank to use the provider defaults. - +.PP Properties: - -- Config: token_url -- Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL -- Type: string -- Required: false - -#### --premiumizeme-encoding - +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --premiumizeme-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP Properties: - -- Config: encoding -- Env Var: RCLONE_PREMIUMIZEME_ENCODING -- Type: Encoding -- Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot - - - -## Limitations - -Note that premiumize.me is case insensitive so you can\[aq]t have a file called -\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. - -premiumize.me file names can\[aq]t have the \[ga]\[rs]\[ga] or \[ga]\[dq]\[ga] characters in. 
+.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS --premiumizeme-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PREMIUMIZEME_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +Note that premiumize.me is case insensitive so you can\[aq]t have a file +called \[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq]. +.PP +premiumize.me file names can\[aq]t have the \f[C]\[rs]\f[R] or +\f[C]\[dq]\f[R] characters in. rclone maps these to and from an identical looking unicode equivalents -\[ga]\[uFF3C]\[ga] and \[ga]\[uFF02]\[ga] - +\f[C]\[uFF3C]\f[R] and \f[C]\[uFF02]\f[R] +.PP premiumize.me only supports filenames up to 255 characters in length. - -# Proton Drive - -[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault - for your files that protects your data. - -This is an rclone backend for Proton Drive which supports the file transfer -features of Proton Drive using the same client-side encryption. - -Due to the fact that Proton Drive doesn\[aq]t publish its API documentation, this -backend is implemented with best efforts by reading the open-sourced client -source code and observing the Proton Drive traffic in the browser. - -**NB** This backend is currently in Beta. It is believed to be correct -and all the integration tests pass. However the Proton Drive protocol -has evolved over time there may be accounts it is not compatible -with. Please [post on the rclone forum](https://forum.rclone.org/) if -you find an incompatibility. - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configurations - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +.SH Proton Drive .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / Proton -Drive \ \[dq]Proton Drive\[dq] [snip] Storage> protondrive User name -user> you\[at]protonmail.com Password. -y) Yes type in my own password g) Generate random password n) No leave -this optional password blank y/g/n> y Enter the password: password: -Confirm the password: password: Option 2fa. -2FA code (if the account requires one) Enter a value. -Press Enter to leave empty. -2fa> 123456 Remote config -------------------- [remote] type = -protondrive user = you\[at]protonmail.com pass = *** ENCRYPTED *** --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +Proton Drive (https://proton.me/drive) is an end-to-end encrypted Swiss +vault for your files that protects your data. +.PP +This is an rclone backend for Proton Drive which supports the file +transfer features of Proton Drive using the same client-side encryption. +.PP +Due to the fact that Proton Drive doesn\[aq]t publish its API +documentation, this backend is implemented with best efforts by reading +the open-sourced client source code and observing the Proton Drive +traffic in the browser. +.PP +\f[B]NB\f[R] This backend is currently in Beta. +It is believed to be correct and all the integration tests pass. 
+However, as the Proton Drive protocol has evolved over time, there may
+be accounts it is not compatible with.
+Please post on the rclone forum (https://forum.rclone.org/) if you find
+an incompatibility.
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[C]remote:directory/subdirectory\f[R].
+.SS Configurations
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
 .IP
 .nf
 \f[C]
+ rclone config
 \f[R]
 .fi
 .PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Proton Drive
+ \[rs] \[dq]Proton Drive\[dq]
+[snip]
+Storage> protondrive
+User name
+user> you\[at]protonmail.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Option 2fa.
+2FA code (if the account requires one)
+Enter a value. Press Enter to leave empty.
+2fa> 123456
+Remote config
+--------------------
+[remote]
+type = protondrive
+user = you\[at]protonmail.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+\f[B]NOTE:\f[R] The Proton Drive encryption keys need to have been
+already generated after a regular login via the browser, otherwise
+attempting to use the credentials in \f[C]rclone\f[R] will fail.
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
+List directories in top level of your Proton Drive
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+List all the files in your Proton Drive
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
+To copy a local directory to a Proton Drive directory called backup
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+Proton Drive Bridge does not support updating modification times yet.
+.PP
+The SHA1 hash algorithm is supported.
-
-### Restricted filename characters
-
-Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and
-right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
-
-### Duplicated files
-
-Proton Drive can not have two files with exactly the same name and path. If the
-conflict occurs, depending on the advanced config, the file might or might not
-be overwritten.
-
-### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
-
+.SS Restricted filename characters
+.PP
+Invalid UTF-8 bytes will be
+replaced (https://rclone.org/overview/#invalid-utf8), also left and
+right spaces will be removed (code
+reference (https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
+.SS Duplicated files
+.PP
+Proton Drive can not have two files with exactly the same name and path.
+If a conflict occurs, depending on the advanced config, the file might
+or might not be overwritten.
+.SS Mailbox password (https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
+.PP
 Please set your mailbox password in the advanced config section.
-
-### Caching
-
-The cache is currently built for the case when the rclone is the only instance
-performing operations to the mount point. The event system, which is the proton
-API system that provides visibility of what has changed on the drive, is yet
-to be implemented, so updates from other clients won\[cq]t be reflected in the
-cache. Thus, if there are concurrent clients accessing the same mount point,
-then we might have a problem with caching the stale data.
-
-
-### Standard options
-
+.SS Caching
+.PP
+The cache is currently built for the case when rclone is the only
+instance performing operations to the mount point.
+The event system, which is the proton API system that provides
+visibility of what has changed on the drive, is yet to be implemented,
+so updates from other clients won\[cq]t be reflected in the cache.
+Thus, if there are concurrent clients accessing the same mount point,
+then we might have a problem with caching the stale data.
+.SS Standard options
+.PP
 Here are the Standard options specific to protondrive (Proton Drive).
-
-#### --protondrive-username
-
+.SS --protondrive-username
+.PP
 The username of your proton account
-
+.PP
 Properties:
-
-- Config: username
-- Env Var: RCLONE_PROTONDRIVE_USERNAME
-- Type: string
-- Required: true
-
-#### --protondrive-password
-
+.IP \[bu] 2
+Config: username
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_USERNAME
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --protondrive-password
+.PP
 The password of your proton account.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
 Properties:
-
-- Config: password
-- Env Var: RCLONE_PROTONDRIVE_PASSWORD
-- Type: string
-- Required: true
-
-#### --protondrive-2fa
-
+.IP \[bu] 2
+Config: password
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --protondrive-2fa
+.PP
 The 2FA code
-
+.PP
 The value can also be provided with --protondrive-2fa=000000
-
-The 2FA code of your proton drive account if the account is set up with
+.PP
+The 2FA code of your proton drive account if the account is set up with
 two-factor authentication
-
+.PP
 Properties:
-
-- Config: 2fa
-- Env Var: RCLONE_PROTONDRIVE_2FA
-- Type: string
-- Required: false
-
-### Advanced options
-
+.IP \[bu] 2
+Config: 2fa
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_2FA
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Advanced options
+.PP
 Here are the Advanced options specific to protondrive (Proton Drive).
- -#### --protondrive-mailbox-password - +.SS --protondrive-mailbox-password +.PP The mailbox password of your two-password proton account. - -For more information regarding the mailbox password, please check the -following official knowledge base article: +.PP +For more information regarding the mailbox password, please check the +following official knowledge base article: https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password - - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: mailbox_password -- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD -- Type: string -- Required: false - -#### --protondrive-client-uid - +.IP \[bu] 2 +Config: mailbox_password +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-uid +.PP Client uid key (internal use only) - +.PP Properties: - -- Config: client_uid -- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID -- Type: string -- Required: false - -#### --protondrive-client-access-token - +.IP \[bu] 2 +Config: client_uid +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_UID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-access-token +.PP Client access token key (internal use only) - +.PP Properties: - -- Config: client_access_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-refresh-token - +.IP \[bu] 2 +Config: client_access_token +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-refresh-token +.PP Client refresh token key (internal use only) - +.PP Properties: - -- Config: client_refresh_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-salted-key-pass - +.IP \[bu] 2 +Config: client_refresh_token +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-client-salted-key-pass +.PP Client salted key pass key (internal use only) - +.PP Properties: - -- Config: client_salted_key_pass -- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS -- Type: string -- Required: false - -#### --protondrive-encoding - +.IP \[bu] 2 +Config: client_salted_key_pass +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --protondrive-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP Properties: - -- Config: encoding -- Env Var: RCLONE_PROTONDRIVE_ENCODING -- Type: Encoding -- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot - -#### --protondrive-original-file-size - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot +.SS --protondrive-original-file-size +.PP Return the file size before encryption - -The size of the encrypted file will be different from (bigger than) the -original file size. 
Unless there is a reason to return the file size
-after encryption is performed, otherwise, set this option to true, as
-features like Open() which will need to be supplied with original content
-size, will fail to operate properly
-
+.PP
+The size of the encrypted file will be different from (bigger than) the
+original file size.
+Unless there is a reason to return the file size after encryption is
+performed, set this option to true, as features like Open(), which need
+to be supplied with the original content size, will fail to operate
+properly.
+.PP
 Properties:
-
-- Config: original_file_size
-- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
-- Type: bool
-- Default: true
-
-#### --protondrive-app-version
-
-The app version string
-
-The app version string indicates the client that is currently performing
-the API request. This information is required and will be sent with every
-API request.
-
+.IP \[bu] 2
+Config: original_file_size
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --protondrive-app-version
+.PP
+The app version string
+.PP
+The app version string indicates the client that is currently performing
+the API request.
+This information is required and will be sent with every API request.
+.PP
 Properties:
-
-- Config: app_version
-- Env Var: RCLONE_PROTONDRIVE_APP_VERSION
-- Type: string
-- Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]
-
-#### --protondrive-replace-existing-draft
-
+.IP \[bu] 2
+Config: app_version
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_APP_VERSION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]
+.SS --protondrive-replace-existing-draft
+.PP
 Create a new revision when filename conflict is detected
-
-When a file upload is cancelled or failed before completion, a draft will be
-created and the subsequent upload of the same file to the same location will be
-reported as a conflict.
-
+.PP
+When a file upload is cancelled or fails before completion, a draft
+will be created and the subsequent upload of the same file to the same
+location will be reported as a conflict.
+.PP
 The value can also be set by --protondrive-replace-existing-draft=true
-
-If the option is set to true, the draft will be replaced and then the upload
-operation will restart. If there are other clients also uploading at the same
-file location at the same time, the behavior is currently unknown. Need to set
-to true for integration tests.
-If the option is set to false, an error \[dq]a draft exist - usually this means a
-file is being uploaded at another client, or, there was a failed upload attempt\[dq]
-will be returned, and no upload will happen.
-
+.PP
+If the option is set to true, the draft will be replaced and then the
+upload operation will restart.
+If there are other clients also uploading at the same file location at
+the same time, the behavior is currently unknown.
+Need to set to true for integration tests.
+If the option is set to false, an error \[dq]a draft exist - usually
+this means a file is being uploaded at another client, or, there was a
+failed upload attempt\[dq] will be returned, and no upload will happen.
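+.PP
+For example, an upload that previously failed part way through (and so
+left a draft behind) could be retried like this (the paths here are
+purely illustrative):
+.IP
+.nf
+\f[C]
+rclone copy --protondrive-replace-existing-draft=true /home/source remote:backup
+\f[R]
+.fi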
+.PP
 Properties:
-
-- Config: replace_existing_draft
-- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
-- Type: bool
-- Default: false
-
-#### --protondrive-enable-caching
-
+.IP \[bu] 2
+Config: replace_existing_draft
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --protondrive-enable-caching
+.PP
 Caches the files and folders metadata to reduce API calls
-
-Notice: If you are mounting ProtonDrive as a VFS, please disable this feature,
-as the current implementation doesn\[aq]t update or clear the cache when there are
-external changes.
-
-The files and folders on ProtonDrive are represented as links with keyrings,
-which can be cached to improve performance and be friendly to the API server.
-
-The cache is currently built for the case when the rclone is the only instance
-performing operations to the mount point. The event system, which is the proton
-API system that provides visibility of what has changed on the drive, is yet
-to be implemented, so updates from other clients won\[cq]t be reflected in the
-cache. Thus, if there are concurrent clients accessing the same mount point,
+.PP
+Notice: If you are mounting ProtonDrive as a VFS, please disable this
+feature, as the current implementation doesn\[aq]t update or clear the
+cache when there are external changes.
+.PP
+The files and folders on ProtonDrive are represented as links with
+keyrings, which can be cached to improve performance and be friendly to
+the API server.
+.PP
+The cache is currently built for the case when rclone is the only
+instance performing operations to the mount point.
+The event system, which is the proton API system that provides
+visibility of what has changed on the drive, is yet to be implemented,
+so updates from other clients won\[cq]t be reflected in the cache.
+Thus, if there are concurrent clients accessing the same mount point,
 then we might have a problem with caching the stale data.
-
+.PP
 Properties:
-
-- Config: enable_caching
-- Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING
-- Type: bool
-- Default: true
-
-
-
-## Limitations
-
-This backend uses the
-[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which
-is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a
-fork of the [official repo](https://github.com/ProtonMail/go-proton-api).
-
-There is no official API documentation available from Proton Drive. But, thanks
-to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api)
-and the web, iOS, and Android client codebases, we don\[aq]t need to completely
-reverse engineer the APIs by observing the web client traffic!
-
-[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic
-building blocks of API calls and error handling, such as 429 exponential
-back-off, but it is pretty much just a barebone interface to the Proton API.
-For example, the encryption and decryption of the Proton Drive file are not
-provided in this library.
-
-The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on
-top of this quickly. This codebase handles the intricate tasks before and after
-calling Proton APIs, particularly the complex encryption scheme, allowing
-developers to implement features for other software on top of this codebase.
-There are likely quite a few errors in this library, as there isn\[aq]t official
-documentation available.
- -# put.io - -Paths are specified as \[ga]remote:path\[ga] - +.IP \[bu] 2 +Config: enable_caching +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: true +.SS --protondrive-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +This backend uses the +Proton-API-Bridge (https://github.com/henrybear327/Proton-API-Bridge), +which is based on +go-proton-api (https://github.com/henrybear327/go-proton-api), a fork of +the official repo (https://github.com/ProtonMail/go-proton-api). +.PP +There is no official API documentation available from Proton Drive. +But, thanks to Proton open sourcing +proton-go-api (https://github.com/ProtonMail/go-proton-api) and the web, +iOS, and Android client codebases, we don\[aq]t need to completely +reverse engineer the APIs by observing the web client traffic! +.PP +proton-go-api (https://github.com/ProtonMail/go-proton-api) provides the +basic building blocks of API calls and error handling, such as 429 +exponential back-off, but it is pretty much just a barebone interface to +the Proton API. +For example, the encryption and decryption of the Proton Drive file are +not provided in this library. +.PP +The Proton-API-Bridge, attempts to bridge the gap, so rclone can be +built on top of this quickly. +This codebase handles the intricate tasks before and after calling +Proton APIs, particularly the complex encryption scheme, allowing +developers to implement features for other software on top of this +codebase. +There are likely quite a few errors in this library, as there isn\[aq]t +official documentation available. +.SH put.io +.PP +Paths are specified as \f[C]remote:path\f[R] +.PP put.io paths may be as deep as required, e.g. -\[ga]remote:directory/subdirectory\[ga]. - -## Configuration - -The initial setup for put.io involves getting a token from put.io -which you need to do in your browser. \[ga]rclone config\[ga] walks you -through it. - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +\f[C]remote:directory/subdirectory\f[R]. +.SS Configuration .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> putio Type of storage to configure. -Enter a string value. -Press Enter for the default (\[dq]\[dq]). -Choose a number from below, or type in your own value [snip] XX / Put.io -\ \[dq]putio\[dq] [snip] Storage> putio ** See help for putio backend -at: https://rclone.org/putio/ ** +The initial setup for put.io involves getting a token from put.io which +you need to do in your browser. +\f[C]rclone config\f[R] walks you through it. .PP -Remote config Use web browser to automatically authenticate rclone with -remote? -* Say Y if the machine running rclone has a web browser you can use * -Say N if running rclone on a (remote) machine without web browser access -If not sure try Y. -If Y failed, try N. -y) Yes n) No y/n> y If your browser doesn\[aq]t open automatically go to -the following link: http://127.0.0.1:53682/auth Log in and authorize -rclone for access Waiting for code... 
-Got code -------------------- [putio] type = putio token = -{\[dq]access_token\[dq]:\[dq]XXXXXXXX\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y Current remotes: -.PP -Name Type ==== ==== putio putio -.IP "e)" 3 -Edit existing remote -.IP "f)" 3 -New remote -.IP "g)" 3 -Delete remote -.IP "h)" 3 -Rename remote -.IP "i)" 3 -Copy remote -.IP "j)" 3 -Set configuration password -.IP "k)" 3 -Quit config e/n/d/r/c/s/q> q +Here is an example of how to make a remote called \f[C]remote\f[R]. +First run: .IP .nf \f[C] -See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. + rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process: +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> putio +Type of storage to configure. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value +[snip] +XX / Put.io + \[rs] \[dq]putio\[dq] +[snip] +Storage> putio +** See help for putio backend at: https://rclone.org/putio/ ** +Remote config +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. +y) Yes +n) No +y/n> y +If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth +Log in and authorize rclone for access +Waiting for code... +Got code +-------------------- +[putio] +type = putio +token = {\[dq]access_token\[dq]:\[dq]XXXXXXXX\[dq],\[dq]expiry\[dq]:\[dq]0001-01-01T00:00:00Z\[dq]} +-------------------- +y) Yes this is OK +e) Edit this remote +d) Delete this remote +y/e/d> y +Current remotes: + +Name Type +==== ==== +putio putio + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> q +\f[R] +.fi +.PP +See the remote setup docs (https://rclone.org/remote_setup/) for how to +set it up on a machine with no Internet browser available. +.PP Note that rclone runs a webserver on your local machine to collect the -token as returned from put.io if using web browser to automatically -authenticate. This only -runs from the moment it opens your browser to the moment you get back -the verification code. This is on \[ga]http://127.0.0.1:53682/\[ga] and this -it may require you to unblock it temporarily if you are running a host -firewall, or use manual mode. - -You can then use it like this, - -List directories in top level of your put.io - - rclone lsd remote: - -List all the files in your put.io - - rclone ls remote: - -To copy a local directory to a put.io directory called backup - - rclone copy /home/source remote:backup - -### Restricted filename characters - -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) -the following characters are also replaced: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| \[rs] | 0x5C | \[uFF3C] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - - -### Standard options - -Here are the Standard options specific to putio (Put.io). 
- -#### --putio-client-id - -OAuth Client Id. - -Leave blank normally. - -Properties: - -- Config: client_id -- Env Var: RCLONE_PUTIO_CLIENT_ID -- Type: string -- Required: false - -#### --putio-client-secret - -OAuth Client Secret. - -Leave blank normally. - -Properties: - -- Config: client_secret -- Env Var: RCLONE_PUTIO_CLIENT_SECRET -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to putio (Put.io). - -#### --putio-token - -OAuth Access Token as a JSON blob. - -Properties: - -- Config: token -- Env Var: RCLONE_PUTIO_TOKEN -- Type: string -- Required: false - -#### --putio-auth-url - -Auth server URL. - -Leave blank to use the provider defaults. - -Properties: - -- Config: auth_url -- Env Var: RCLONE_PUTIO_AUTH_URL -- Type: string -- Required: false - -#### --putio-token-url - -Token server url. - -Leave blank to use the provider defaults. - -Properties: - -- Config: token_url -- Env Var: RCLONE_PUTIO_TOKEN_URL -- Type: string -- Required: false - -#### --putio-encoding - -The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. - -Properties: - -- Config: encoding -- Env Var: RCLONE_PUTIO_ENCODING -- Type: Encoding -- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - - - -## Limitations - -put.io has rate limiting. When you hit a limit, rclone automatically -retries after waiting the amount of time requested by the server. - -If you want to avoid ever hitting these limits, you may use the -\[ga]--tpslimit\[ga] flag with a low number. Note that the imposed limits -may be different for different operations, and may change over time. - -# Proton Drive - -[Proton Drive](https://proton.me/drive) is an end-to-end encrypted Swiss vault - for your files that protects your data. - -This is an rclone backend for Proton Drive which supports the file transfer -features of Proton Drive using the same client-side encryption. - -Due to the fact that Proton Drive doesn\[aq]t publish its API documentation, this -backend is implemented with best efforts by reading the open-sourced client -source code and observing the Proton Drive traffic in the browser. - -**NB** This backend is currently in Beta. It is believed to be correct -and all the integration tests pass. However the Proton Drive protocol -has evolved over time there may be accounts it is not compatible -with. Please [post on the rclone forum](https://forum.rclone.org/) if -you find an incompatibility. - -Paths are specified as \[ga]remote:path\[ga] - -Paths may be as deep as required, e.g. \[ga]remote:directory/subdirectory\[ga]. - -## Configurations - -Here is an example of how to make a remote called \[ga]remote\[ga]. First run: - - rclone config - -This will guide you through an interactive setup process: -\f[R] -.fi +token as returned from put.io if using web browser to automatically +authenticate. +This only runs from the moment it opens your browser to the moment you +get back the verification code. +This is on \f[C]http://127.0.0.1:53682/\f[R] and this it may require you +to unblock it temporarily if you are running a host firewall, or use +manual mode. .PP -No remotes found, make a new one? -n) New remote s) Set configuration password q) Quit config n/s/q> n -name> remote Type of storage to configure. -Choose a number from below, or type in your own value [snip] XX / Proton -Drive \ \[dq]Proton Drive\[dq] [snip] Storage> protondrive User name -user> you\[at]protonmail.com Password. 
-y) Yes type in my own password g) Generate random password n) No leave -this optional password blank y/g/n> y Enter the password: password: -Confirm the password: password: Option 2fa. -2FA code (if the account requires one) Enter a value. -Press Enter to leave empty. -2fa> 123456 Remote config -------------------- [remote] type = -protondrive user = you\[at]protonmail.com pass = *** ENCRYPTED *** --------------------- y) Yes this is OK e) Edit this remote d) Delete -this remote y/e/d> y +You can then use it like this, +.PP +List directories in top level of your put.io .IP .nf \f[C] -**NOTE:** The Proton Drive encryption keys need to have been already generated -after a regular login via the browser, otherwise attempting to use the -credentials in \[ga]rclone\[ga] will fail. - -Once configured you can then use \[ga]rclone\[ga] like this, - -List directories in top level of your Proton Drive - - rclone lsd remote: - -List all the files in your Proton Drive - - rclone ls remote: - -To copy a local directory to an Proton Drive directory called backup - - rclone copy /home/source remote:backup - -### Modification times and hashes - -Proton Drive Bridge does not support updating modification times yet. - -The SHA1 hash algorithm is supported. - -### Restricted filename characters - -Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and -right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)) - -### Duplicated files - -Proton Drive can not have two files with exactly the same name and path. If the -conflict occurs, depending on the advanced config, the file might or might not -be overwritten. - -### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) - -Please set your mailbox password in the advanced config section. - -### Caching - -The cache is currently built for the case when the rclone is the only instance -performing operations to the mount point. The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won\[cq]t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, -then we might have a problem with caching the stale data. - - -### Standard options - -Here are the Standard options specific to protondrive (Proton Drive). - -#### --protondrive-username - -The username of your proton account - +rclone lsd remote: +\f[R] +.fi +.PP +List all the files in your put.io +.IP +.nf +\f[C] +rclone ls remote: +\f[R] +.fi +.PP +To copy a local directory to a put.io directory called backup +.IP +.nf +\f[C] +rclone copy /home/source remote:backup +\f[R] +.fi +.SS Restricted filename characters +.PP +In addition to the default restricted characters +set (https://rclone.org/overview/#restricted-characters) the following +characters are also replaced: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Standard options +.PP +Here are the Standard options specific to putio (Put.io). +.SS --putio-client-id +.PP +OAuth Client Id. +.PP +Leave blank normally. 
+.PP Properties: - -- Config: username -- Env Var: RCLONE_PROTONDRIVE_USERNAME -- Type: string -- Required: true - -#### --protondrive-password - -The password of your proton account. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.IP \[bu] 2 +Config: client_id +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_CLIENT_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-client-secret +.PP +OAuth Client Secret. +.PP +Leave blank normally. +.PP Properties: - -- Config: password -- Env Var: RCLONE_PROTONDRIVE_PASSWORD -- Type: string -- Required: true - -#### --protondrive-2fa - -The 2FA code - -The value can also be provided with --protondrive-2fa=000000 - -The 2FA code of your proton drive account if the account is set up with -two-factor authentication - +.IP \[bu] 2 +Config: client_secret +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_CLIENT_SECRET +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to putio (Put.io). +.SS --putio-token +.PP +OAuth Access Token as a JSON blob. +.PP Properties: - -- Config: 2fa -- Env Var: RCLONE_PROTONDRIVE_2FA -- Type: string -- Required: false - -### Advanced options - -Here are the Advanced options specific to protondrive (Proton Drive). - -#### --protondrive-mailbox-password - -The mailbox password of your two-password proton account. - -For more information regarding the mailbox password, please check the -following official knowledge base article: -https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password - - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.IP \[bu] 2 +Config: token +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-auth-url +.PP +Auth server URL. +.PP +Leave blank to use the provider defaults. +.PP Properties: - -- Config: mailbox_password -- Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD -- Type: string -- Required: false - -#### --protondrive-client-uid - -Client uid key (internal use only) - +.IP \[bu] 2 +Config: auth_url +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_AUTH_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-token-url +.PP +Token server url. +.PP +Leave blank to use the provider defaults. +.PP Properties: - -- Config: client_uid -- Env Var: RCLONE_PROTONDRIVE_CLIENT_UID -- Type: string -- Required: false - -#### --protondrive-client-access-token - -Client access token key (internal use only) - -Properties: - -- Config: client_access_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-refresh-token - -Client refresh token key (internal use only) - -Properties: - -- Config: client_refresh_token -- Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN -- Type: string -- Required: false - -#### --protondrive-client-salted-key-pass - -Client salted key pass key (internal use only) - -Properties: - -- Config: client_salted_key_pass -- Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS -- Type: string -- Required: false - -#### --protondrive-encoding - +.IP \[bu] 2 +Config: token_url +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_TOKEN_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --putio-encoding +.PP The encoding for the backend. - -See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info. 
- +.PP +See the encoding section in the +overview (https://rclone.org/overview/#encoding) for more info. +.PP Properties: - -- Config: encoding -- Env Var: RCLONE_PROTONDRIVE_ENCODING -- Type: Encoding -- Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot - -#### --protondrive-original-file-size - -Return the file size before encryption - -The size of the encrypted file will be different from (bigger than) the -original file size. Unless there is a reason to return the file size -after encryption is performed, otherwise, set this option to true, as -features like Open() which will need to be supplied with original content -size, will fail to operate properly - +.IP \[bu] 2 +Config: encoding +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_ENCODING +.IP \[bu] 2 +Type: Encoding +.IP \[bu] 2 +Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS --putio-description +.PP +Description of the remote +.PP Properties: - -- Config: original_file_size -- Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE -- Type: bool -- Default: true - -#### --protondrive-app-version - -The app version string - -The app version string indicates the client that is currently performing -the API request. This information is required and will be sent with every -API request. - -Properties: - -- Config: app_version -- Env Var: RCLONE_PROTONDRIVE_APP_VERSION -- Type: string -- Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq] - -#### --protondrive-replace-existing-draft - -Create a new revision when filename conflict is detected - -When a file upload is cancelled or failed before completion, a draft will be -created and the subsequent upload of the same file to the same location will be -reported as a conflict. - -The value can also be set by --protondrive-replace-existing-draft=true - -If the option is set to true, the draft will be replaced and then the upload -operation will restart. If there are other clients also uploading at the same -file location at the same time, the behavior is currently unknown. Need to set -to true for integration tests. -If the option is set to false, an error \[dq]a draft exist - usually this means a -file is being uploaded at another client, or, there was a failed upload attempt\[dq] -will be returned, and no upload will happen. - -Properties: - -- Config: replace_existing_draft -- Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT -- Type: bool -- Default: false - -#### --protondrive-enable-caching - -Caches the files and folders metadata to reduce API calls - -Notice: If you are mounting ProtonDrive as a VFS, please disable this feature, -as the current implementation doesn\[aq]t update or clear the cache when there are -external changes. - -The files and folders on ProtonDrive are represented as links with keyrings, -which can be cached to improve performance and be friendly to the API server. - -The cache is currently built for the case when the rclone is the only instance -performing operations to the mount point. The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won\[cq]t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_PUTIO_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Limitations +.PP +put.io has rate limiting. +When you hit a limit, rclone automatically retries after waiting the +amount of time requested by the server. 
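+.PP
+As a concrete sketch, a copy can be capped to a conservative
+transaction rate with the \f[C]--tpslimit\f[R] flag described in the
+next paragraph (the rate of 2 transactions per second here is an
+arbitrary guess, not a documented put.io limit):
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup --tpslimit 2
+\f[R]
+.fi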
+.PP
+If you want to avoid ever hitting these limits, you may use the
+\f[C]--tpslimit\f[R] flag with a low number.
+Note that the imposed limits may be different for different operations,
+and may change over time.
+.SH Proton Drive
+.PP
+Proton Drive (https://proton.me/drive) is an end-to-end encrypted Swiss
+vault for your files that protects your data.
+.PP
+This is an rclone backend for Proton Drive which supports the file
+transfer features of Proton Drive using the same client-side encryption.
+.PP
+Because Proton Drive doesn\[aq]t publish its API documentation, this
+backend is implemented on a best-effort basis by reading the
+open-sourced client source code and observing the Proton Drive traffic
+in the browser.
+.PP
+\f[B]NB\f[R] This backend is currently in Beta.
+It is believed to be correct and all the integration tests pass.
+However, as the Proton Drive protocol has evolved over time, there may
+be accounts it is not compatible with.
+Please post on the rclone forum (https://forum.rclone.org/) if you find
+an incompatibility.
+.PP
+Paths are specified as \f[C]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[C]remote:directory/subdirectory\f[R].
+.SS Configurations
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Proton Drive
+ \[rs] \[dq]Proton Drive\[dq]
+[snip]
+Storage> protondrive
+User name
+user> you\[at]protonmail.com
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Option 2fa.
+2FA code (if the account requires one)
+Enter a value. Press Enter to leave empty.
+2fa> 123456
+Remote config
+--------------------
+[remote]
+type = protondrive
+user = you\[at]protonmail.com
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+\f[B]NOTE:\f[R] The Proton Drive encryption keys need to have been
+already generated after a regular login via the browser, otherwise
+attempting to use the credentials in \f[C]rclone\f[R] will fail.
+.PP
+Once configured you can then use \f[C]rclone\f[R] like this,
+.PP
+List directories in top level of your Proton Drive
+.IP
+.nf
+\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+List all the files in your Proton Drive
+.IP
+.nf
+\f[C]
+rclone ls remote:
+\f[R]
+.fi
+.PP
+To copy a local directory to a Proton Drive directory called backup
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+Proton Drive Bridge does not support updating modification times yet.
+.PP
+The SHA1 hash algorithm is supported.
+.SS Restricted filename characters
+.PP
+Invalid UTF-8 bytes will be
+replaced (https://rclone.org/overview/#invalid-utf8), and leading and
+trailing spaces will be removed (code
+reference (https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51))
+.SS Duplicated files
+.PP
+Proton Drive cannot have two files with exactly the same name and path.
+If a conflict occurs, depending on the advanced config, the file might
+or might not be overwritten.
+.SS Mailbox password (https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
+.PP
+Please set your mailbox password in the advanced config section.
+.SS Caching
+.PP
+The cache is currently built for the case where rclone is the only
+instance performing operations on the mount point.
+The event system, which is the proton API system that provides
+visibility of what has changed on the drive, is yet to be implemented,
+so updates from other clients won\[cq]t be reflected in the cache.
+Thus, if there are concurrent clients accessing the same mount point,
then we might have a problem with caching stale data.
-
+.SS Standard options
+.PP
+Here are the Standard options specific to protondrive (Proton Drive).
+.SS --protondrive-username
+.PP
+The username of your proton account.
+.PP
Properties:
-
-- Config: username
-- Env Var: RCLONE_PROTONDRIVE_USERNAME
-- Type: string
-- Required: true
-
-#### --protondrive-password
-
-The password of your proton account.
-
-**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
-
+.IP \[bu] 2
+Config: username
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_USERNAME
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --protondrive-password
+.PP
+The password of your proton account.
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: password
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --protondrive-2fa
+.PP
+The 2FA code
+.PP
+The value can also be provided with --protondrive-2fa=000000
+.PP
+The 2FA code of your proton drive account if the account is set up with
+two-factor authentication
+.PP
+Properties:
+.IP \[bu] 2
+Config: 2fa
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_2FA
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Advanced options
+.PP
+Here are the Advanced options specific to protondrive (Proton Drive).
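+.PP
+As with any backend option, these can also be set for a single run via
+their environment variables; a minimal sketch (the variable is real and
+documented below, the command itself is just an illustration):
+.IP
+.nf
+\f[C]
+RCLONE_PROTONDRIVE_ENABLE_CACHING=false rclone lsd remote:
+\f[R]
+.fi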
+.SS --protondrive-mailbox-password
+.PP
+The mailbox password of your two-password proton account.
+.PP
+For more information regarding the mailbox password, please check the
+following official knowledge base article:
+https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: mailbox_password
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_MAILBOX_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-uid
+.PP
+Client uid key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_uid
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_UID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-access-token
+.PP
+Client access token key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_access_token
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_ACCESS_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-refresh-token
+.PP
+Client refresh token key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_refresh_token
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_REFRESH_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-client-salted-key-pass
+.PP
+Client salted key pass key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: client_salted_key_pass
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_CLIENT_SALTED_KEY_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --protondrive-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,LeftSpace,RightSpace,InvalidUtf8,Dot
+.SS --protondrive-original-file-size
+.PP
+Return the file size before encryption
+.PP
+The size of the encrypted file will be different from (bigger than) the
+original file size.
+Unless there is a reason to return the file size after encryption,
+leave this option set to true, as features like Open(), which need to
+be supplied with the original content size, will fail to operate
+properly otherwise.
+.PP
+Properties:
+.IP \[bu] 2
+Config: original_file_size
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ORIGINAL_FILE_SIZE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --protondrive-app-version
+.PP
+The app version string
+.PP
+The app version string indicates the client that is currently performing
+the API request.
+This information is required and will be sent with every API request.
+.PP
+Properties:
+.IP \[bu] 2
+Config: app_version
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_APP_VERSION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Default: \[dq]macos-drive\[at]1.0.0-alpha.1+rclone\[dq]
+.SS --protondrive-replace-existing-draft
+.PP
+Create a new revision when a filename conflict is detected
+.PP
+When a file upload is cancelled or fails before completion, a draft
+will be created and the subsequent upload of the same file to the same
+location will be reported as a conflict.
+.PP
+The value can also be set with --protondrive-replace-existing-draft=true
+.PP
+If the option is set to true, the draft will be replaced and then the
+upload operation will restart.
+If there are other clients also uploading at the same file location at
+the same time, the behavior is currently unknown.
+It needs to be set to true for integration tests.
+If the option is set to false, an error \[dq]a draft exist - usually
+this means a file is being uploaded at another client, or, there was a
+failed upload attempt\[dq] will be returned, and no upload will happen.
+.PP
+Properties:
+.IP \[bu] 2
+Config: replace_existing_draft
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_REPLACE_EXISTING_DRAFT
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --protondrive-enable-caching
+.PP
+Caches the file and folder metadata to reduce API calls
+.PP
+Notice: If you are mounting ProtonDrive as a VFS, please disable this
+feature, as the current implementation doesn\[aq]t update or clear the
+cache when there are external changes.
+.PP
+The files and folders on ProtonDrive are represented as links with
+keyrings, which can be cached to improve performance and be friendly to
+the API server.
+.PP
+The cache is currently built for the case where rclone is the only
+instance performing operations on the mount point.
+The event system, which is the proton API system that provides
+visibility of what has changed on the drive, is yet to be implemented,
+so updates from other clients won\[cq]t be reflected in the cache.
+Thus, if there are concurrent clients accessing the same mount point,
+then we might have a problem with caching stale data.
+.PP
+Properties:
+.IP \[bu] 2
+Config: enable_caching
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_ENABLE_CACHING
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --protondrive-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_PROTONDRIVE_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.PP
+This backend uses the
+Proton-API-Bridge (https://github.com/henrybear327/Proton-API-Bridge),
+which is based on
+go-proton-api (https://github.com/henrybear327/go-proton-api), a fork of
+the official repo (https://github.com/ProtonMail/go-proton-api).
+.PP
+There is no official API documentation available from Proton Drive.
+But, thanks to Proton open sourcing
+proton-go-api (https://github.com/ProtonMail/go-proton-api) and the web,
+iOS, and Android client codebases, we don\[aq]t need to completely
+reverse engineer the APIs by observing the web client traffic!
+.PP
+proton-go-api (https://github.com/ProtonMail/go-proton-api) provides the
+basic building blocks of API calls and error handling, such as 429
+exponential back-off, but it is pretty much just a barebones interface to
+the Proton API.
+For example, the encryption and decryption of Proton Drive files are
+not provided in this library.
+.PP
+The Proton-API-Bridge attempts to bridge the gap so rclone can be
+built on top of it quickly.
+This codebase handles the intricate tasks before and after calling
+Proton APIs, particularly the complex encryption scheme, allowing
+developers to implement features for other software on top of this
+codebase.
+There are likely quite a few errors in this library, as there isn\[aq]t
+official documentation available.
+.SH Seafile
+.PP
+This is a backend for the Seafile (https://www.seafile.com/) storage
service: - It works with both the free community edition and the
+professional edition. - Seafile versions 6.x, 7.x, 8.x and 9.x are all
supported. - Encrypted libraries are also supported.
-- It supports 2FA enabled users
-- Using a Library API Token is **not** supported
-
-## Configuration
-
-There are two distinct modes you can setup your remote:
-- you point your remote to the **root of the server**, meaning you don\[aq]t specify a library during the configuration:
-Paths are specified as \[ga]remote:library\[ga]. You may put subdirectories in too, e.g. \[ga]remote:library/path/to/dir\[ga].
+- It supports 2FA enabled users - Using a Library API Token is
+\f[B]not\f[R] supported
+.SS Configuration
+.PP
+There are two distinct modes you can set up your remote: - you point your
+remote to the \f[B]root of the server\f[R], meaning you don\[aq]t
+specify a library during the configuration: Paths are specified as
+\f[C]remote:library\f[R].
+You may put subdirectories in too, e.g.
+\f[C]remote:library/path/to/dir\f[R].
- you point your remote to a specific library during the configuration:
-Paths are specified as \[ga]remote:path/to/dir\[ga]. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)
+Paths are specified as \f[C]remote:path/to/dir\f[R].
+\f[B]This is the recommended mode when using encrypted libraries\f[R].
+(\f[I]This mode is possibly slightly faster than the root mode\f[R])
+.SS Configuration in root mode
.PP
-Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run
-
-    rclone config
-
-This will guide you through an interactive setup process. To authenticate
-you will need the URL of your server, your email (or username) and your password.
-\f[R]
-.fi
.PP
-No remotes found, make a new one?
-n) New remote s) Set configuration password q) Quit config n/s/q> n
-name> seafile Type of storage to configure.
-Enter a string value.
-Press Enter for the default (\[dq]\[dq]).
-Choose a number from below, or type in your own value [snip] XX /
-Seafile \ \[dq]seafile\[dq] [snip] Storage> seafile ** See help for
-seafile backend at: https://rclone.org/seafile/ **
-.PP
-URL of seafile host to connect to Enter a string value.
-Press Enter for the default (\[dq]\[dq]).
-Choose a number from below, or type in your own value 1 / Connect to
-cloud.seafile.com \ \[dq]https://cloud.seafile.com/\[dq] url>
-http://my.seafile.server/ User name (usually email address) Enter a
-string value.
-Press Enter for the default (\[dq]\[dq]).
-user> me\[at]example.com Password y) Yes type in my own password g)
-Generate random password n) No leave this optional password blank
-(default) y/g> y Enter the password: password: Confirm the password:
-password: Two-factor authentication (\[aq]true\[aq] if the account has
-2FA enabled) Enter a boolean value (true or false).
-Press Enter for the default (\[dq]false\[dq]).
-2fa> false Name of the library.
-Leave blank to access all non-encrypted libraries.
-Enter a string value.
-Press Enter for the default (\[dq]\[dq]).
-library> Library password (for encrypted libraries only).
-Leave blank if you pass it through the command line.
-y) Yes type in my own password g) Generate random password n) No leave
-this optional password blank (default) y/g/n> n Edit advanced config?
-(y/n) y) Yes n) No (default) y/n> n Remote config Two-factor
-authentication is not enabled on this account.
--------------------- [seafile] type = seafile url = -http://my.seafile.server/ user = me\[at]example.com pass = *** ENCRYPTED -*** 2fa = false -------------------- y) Yes this is OK (default) e) Edit -this remote d) Delete this remote y/e/d> y +Here is an example of making a seafile configuration for a user with +\f[B]no\f[R] two-factor authentication. +First run .IP .nf \f[C] -This remote is called \[ga]seafile\[ga]. It\[aq]s pointing to the root of your seafile server and can now be used like this: +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +To authenticate you will need the URL of your server, your email (or +username) and your password. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n +name> seafile +Type of storage to configure. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value +[snip] +XX / Seafile + \[rs] \[dq]seafile\[dq] +[snip] +Storage> seafile +** See help for seafile backend at: https://rclone.org/seafile/ ** +URL of seafile host to connect to +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value + 1 / Connect to cloud.seafile.com + \[rs] \[dq]https://cloud.seafile.com/\[dq] +url> http://my.seafile.server/ +User name (usually email address) +Enter a string value. Press Enter for the default (\[dq]\[dq]). +user> me\[at]example.com +Password +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g> y +Enter the password: +password: +Confirm the password: +password: +Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) +Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). +2fa> false +Name of the library. Leave blank to access all non-encrypted libraries. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +library> +Library password (for encrypted libraries only). Leave blank if you pass it through the command line. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g/n> n +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +Two-factor authentication is not enabled on this account. +-------------------- +[seafile] +type = seafile +url = http://my.seafile.server/ +user = me\[at]example.com +pass = *** ENCRYPTED *** +2fa = false +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +This remote is called \f[C]seafile\f[R]. +It\[aq]s pointing to the root of your seafile server and can now be used +like this: +.PP See all libraries - - rclone lsd seafile: - -Create a new library - - rclone mkdir seafile:library - -List the contents of a library - - rclone ls seafile:library - -Sync \[ga]/home/local/directory\[ga] to the remote library, deleting any -excess files in the library. - - rclone sync --interactive /home/local/directory seafile:library - -### Configuration in library mode - -Here\[aq]s an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you: -\f[R] -.fi -.PP -No remotes found, make a new one? 
-n) New remote s) Set configuration password q) Quit config n/s/q> n
-name> seafile Type of storage to configure.
-Enter a string value.
-Press Enter for the default (\[dq]\[dq]).
-Choose a number from below, or type in your own value [snip] XX /
-Seafile \ \[dq]seafile\[dq] [snip] Storage> seafile ** See help for
-seafile backend at: https://rclone.org/seafile/ **
-.PP
-URL of seafile host to connect to Enter a string value.
-Press Enter for the default (\[dq]\[dq]).
-Choose a number from below, or type in your own value 1 / Connect to
-cloud.seafile.com \ \[dq]https://cloud.seafile.com/\[dq] url>
-http://my.seafile.server/ User name (usually email address) Enter a
-string value.
-Press Enter for the default (\[dq]\[dq]).
-user> me\[at]example.com Password y) Yes type in my own password g)
-Generate random password n) No leave this optional password blank
-(default) y/g> y Enter the password: password: Confirm the password:
-password: Two-factor authentication (\[aq]true\[aq] if the account has
-2FA enabled) Enter a boolean value (true or false).
-Press Enter for the default (\[dq]false\[dq]).
-2fa> true Name of the library.
-Leave blank to access all non-encrypted libraries.
-Enter a string value.
-Press Enter for the default (\[dq]\[dq]).
-library> My Library Library password (for encrypted libraries only).
-Leave blank if you pass it through the command line.
-y) Yes type in my own password g) Generate random password n) No leave
-this optional password blank (default) y/g/n> n Edit advanced config?
-(y/n) y) Yes n) No (default) y/n> n Remote config Two-factor
-authentication: please enter your 2FA code 2fa code> 123456
-Authenticating...
-Success!
-------------------- [seafile] type = seafile url =
-http://my.seafile.server/ user = me\[at]example.com pass = 2fa = true
-library = My Library -------------------- y) Yes this is OK (default) e)
-Edit this remote d) Delete this remote y/e/d> y
.IP
.nf
\f[C]
-You\[aq]ll notice your password is blank in the configuration. It\[aq]s because we only need the password to authenticate you once.
-
-You specified \[ga]My Library\[ga] during the configuration. The root of the remote is pointing at the
-root of the library \[ga]My Library\[ga]:
-
-See all files in the library:
-
-    rclone lsd seafile:
-
-Create a new directory inside the library
-
-    rclone mkdir seafile:directory
-
-List the contents of a directory
-
-    rclone ls seafile:directory
-
-Sync \[ga]/home/local/directory\[ga] to the remote library, deleting any
+rclone lsd seafile:
+\f[R]
+.fi
+.PP
+Create a new library
+.IP
+.nf
+\f[C]
+rclone mkdir seafile:library
+\f[R]
+.fi
+.PP
+List the contents of a library
+.IP
+.nf
+\f[C]
+rclone ls seafile:library
+\f[R]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[R] to the remote library, deleting any
excess files in the library.
+.IP
+.nf
+\f[C]
+rclone sync --interactive /home/local/directory seafile:library
+\f[R]
+.fi
+.SS Configuration in library mode
+.PP
+Here\[aq]s an example of a configuration in library mode with a user
+that has two-factor authentication enabled.
+You will be asked for your 2FA code at the end of the configuration,
+and rclone will then attempt to authenticate you:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> seafile
+Type of storage to configure.
+Enter a string value. Press Enter for the default (\[dq]\[dq]).
+Choose a number from below, or type in your own value +[snip] +XX / Seafile + \[rs] \[dq]seafile\[dq] +[snip] +Storage> seafile +** See help for seafile backend at: https://rclone.org/seafile/ ** - rclone sync --interactive /home/local/directory seafile: - - -### --fast-list - -Seafile version 7+ supports \[ga]--fast-list\[ga] which allows you to use fewer -transactions in exchange for more memory. See the [rclone -docs](https://rclone.org/docs/#fast-list) for more details. +URL of seafile host to connect to +Enter a string value. Press Enter for the default (\[dq]\[dq]). +Choose a number from below, or type in your own value + 1 / Connect to cloud.seafile.com + \[rs] \[dq]https://cloud.seafile.com/\[dq] +url> http://my.seafile.server/ +User name (usually email address) +Enter a string value. Press Enter for the default (\[dq]\[dq]). +user> me\[at]example.com +Password +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g> y +Enter the password: +password: +Confirm the password: +password: +Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled) +Enter a boolean value (true or false). Press Enter for the default (\[dq]false\[dq]). +2fa> true +Name of the library. Leave blank to access all non-encrypted libraries. +Enter a string value. Press Enter for the default (\[dq]\[dq]). +library> My Library +Library password (for encrypted libraries only). Leave blank if you pass it through the command line. +y) Yes type in my own password +g) Generate random password +n) No leave this optional password blank (default) +y/g/n> n +Edit advanced config? (y/n) +y) Yes +n) No (default) +y/n> n +Remote config +Two-factor authentication: please enter your 2FA code +2fa code> 123456 +Authenticating... +Success! +-------------------- +[seafile] +type = seafile +url = http://my.seafile.server/ +user = me\[at]example.com +pass = +2fa = true +library = My Library +-------------------- +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +You\[aq]ll notice your password is blank in the configuration. +It\[aq]s because we only need the password to authenticate you once. +.PP +You specified \f[C]My Library\f[R] during the configuration. +The root of the remote is pointing at the root of the library +\f[C]My Library\f[R]: +.PP +See all files in the library: +.IP +.nf +\f[C] +rclone lsd seafile: +\f[R] +.fi +.PP +Create a new directory inside the library +.IP +.nf +\f[C] +rclone mkdir seafile:directory +\f[R] +.fi +.PP +List the contents of a directory +.IP +.nf +\f[C] +rclone ls seafile:directory +\f[R] +.fi +.PP +Sync \f[C]/home/local/directory\f[R] to the remote library, deleting any +excess files in the library. +.IP +.nf +\f[C] +rclone sync --interactive /home/local/directory seafile: +\f[R] +.fi +.SS --fast-list +.PP +Seafile version 7+ supports \f[C]--fast-list\f[R] which allows you to +use fewer transactions in exchange for more memory. +See the rclone docs (https://rclone.org/docs/#fast-list) for more +details. 
Please note this is not supported on seafile server version 6.x - - -### Restricted filename characters - -In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) -the following characters are also replaced: - -| Character | Value | Replacement | -| --------- |:-----:|:-----------:| -| / | 0x2F | \[uFF0F] | -| \[dq] | 0x22 | \[uFF02] | -| \[rs] | 0x5C | \[uFF3C] | - -Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), -as they can\[aq]t be used in JSON strings. - -### Seafile and rclone link - +.SS Restricted filename characters +.PP +In addition to the default restricted characters +set (https://rclone.org/overview/#restricted-characters) the following +characters are also replaced: +.PP +.TS +tab(@); +l c c. +T{ +Character +T}@T{ +Value +T}@T{ +Replacement +T} +_ +T{ +/ +T}@T{ +0x2F +T}@T{ +\[uFF0F] +T} +T{ +\[dq] +T}@T{ +0x22 +T}@T{ +\[uFF02] +T} +T{ +\[rs] +T}@T{ +0x5C +T}@T{ +\[uFF3C] +T} +.TE +.PP +Invalid UTF-8 bytes will also be +replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t +be used in JSON strings. +.SS Seafile and rclone link +.PP Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory: -\f[R] -.fi -.PP +.IP +.nf +\f[C] rclone link seafile:seafile-tutorial.doc http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ -.IP -.nf -\f[C] +\f[R] +.fi +.PP or if run on a directory you will get: -\f[R] -.fi -.PP -rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ .IP .nf \f[C] -Please note a share link is unique for each file or directory. If you run a link command on a file/dir -that has already been shared, you will get the exact same link. - -### Compatibility - -It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions: -- 6.3.4 community edition -- 7.0.5 community edition -- 7.1.3 community edition -- 9.0.10 community edition - +rclone link seafile:dir +http://my.seafile.server/d/9ea2455f6f55478bbb0d/ +\f[R] +.fi +.PP +Please note a share link is unique for each file or directory. +If you run a link command on a file/dir that has already been shared, +you will get the exact same link. +.SS Compatibility +.PP +It has been actively developed using the seafile docker +image (https://github.com/haiwen/seafile-docker) of these versions: - +6.3.4 community edition - 7.0.5 community edition - 7.1.3 community +edition - 9.0.10 community edition +.PP Versions below 6.0 are not supported. -Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work properly. - -Each new version of \[ga]rclone\[ga] is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server. - - -### Standard options - +Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work +properly. +.PP +Each new version of \f[C]rclone\f[R] is automatically tested against the +latest docker image (https://hub.docker.com/r/seafileltd/seafile-mc/) of +the seafile community server. +.SS Standard options +.PP Here are the Standard options specific to seafile (seafile). - -#### --seafile-url - +.SS --seafile-url +.PP URL of seafile host to connect to. - +.PP Properties: - -- Config: url -- Env Var: RCLONE_SEAFILE_URL -- Type: string -- Required: true -- Examples: - - \[dq]https://cloud.seafile.com/\[dq] - - Connect to cloud.seafile.com. 
- -#### --seafile-user - +.IP \[bu] 2 +Config: url +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_URL +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]https://cloud.seafile.com/\[dq] +.RS 2 +.IP \[bu] 2 +Connect to cloud.seafile.com. +.RE +.RE +.SS --seafile-user +.PP User name (usually email address). - +.PP Properties: - -- Config: user -- Env Var: RCLONE_SEAFILE_USER -- Type: string -- Required: true - -#### --seafile-pass - +.IP \[bu] 2 +Config: user +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_USER +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: true +.SS --seafile-pass +.PP Password. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: pass -- Env Var: RCLONE_SEAFILE_PASS -- Type: string -- Required: false - -#### --seafile-2fa - -Two-factor authentication (\[aq]true\[aq] if the account has 2FA enabled). - +.IP \[bu] 2 +Config: pass +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_PASS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --seafile-2fa +.PP +Two-factor authentication (\[aq]true\[aq] if the account has 2FA +enabled). +.PP Properties: - -- Config: 2fa -- Env Var: RCLONE_SEAFILE_2FA -- Type: bool -- Default: false - -#### --seafile-library - +.IP \[bu] 2 +Config: 2fa +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_2FA +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --seafile-library +.PP Name of the library. - +.PP Leave blank to access all non-encrypted libraries. - +.PP Properties: - -- Config: library -- Env Var: RCLONE_SEAFILE_LIBRARY -- Type: string -- Required: false - -#### --seafile-library-key - +.IP \[bu] 2 +Config: library +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_LIBRARY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --seafile-library-key +.PP Library password (for encrypted libraries only). - +.PP Leave blank if you pass it through the command line. - -**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). - +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP Properties: - -- Config: library_key -- Env Var: RCLONE_SEAFILE_LIBRARY_KEY -- Type: string -- Required: false - -#### --seafile-auth-token - +.IP \[bu] 2 +Config: library_key +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_LIBRARY_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --seafile-auth-token +.PP Authentication token. - +.PP Properties: - -- Config: auth_token -- Env Var: RCLONE_SEAFILE_AUTH_TOKEN -- Type: string -- Required: false - -### Advanced options - +.IP \[bu] 2 +Config: auth_token +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_AUTH_TOKEN +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP Here are the Advanced options specific to seafile (seafile). - -#### --seafile-create-library - +.SS --seafile-create-library +.PP Should rclone create a library if it doesn\[aq]t exist. - +.PP Properties: - -- Config: create_library -- Env Var: RCLONE_SEAFILE_CREATE_LIBRARY -- Type: bool -- Default: false - -#### --seafile-encoding - +.IP \[bu] 2 +Config: create_library +.IP \[bu] 2 +Env Var: RCLONE_SEAFILE_CREATE_LIBRARY +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --seafile-encoding +.PP The encoding for the backend. 
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
Properties:
-
-- Config: encoding
-- Env Var: RCLONE_SEAFILE_ENCODING
-- Type: Encoding
-- Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
-
-
-
-# SFTP
-
-SFTP is the [Secure (or SSH) File Transfer
-Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
-
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_SEAFILE_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,DoubleQuote,BackSlash,Ctl,InvalidUtf8
+.SS --seafile-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_SEAFILE_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SH SFTP
+.PP
+SFTP is the Secure (or SSH) File Transfer
+Protocol (https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
+.PP
The SFTP backend can be used with a number of different providers:
-
-
-- Hetzner Storage Box
-- rsync.net
-
-
-SFTP runs over SSH v2 and is installed as standard with most modern
-SSH installations.
-
-Paths are specified as \[ga]remote:path\[ga]. If the path does not begin with
-a \[ga]/\[ga] it is relative to the home directory of the user. An empty path
-\[ga]remote:\[ga] refers to the user\[aq]s home directory. For example, \[ga]rclone lsd remote:\[ga]
-would list the home directory of the user configured in the rclone remote config
-(\[ga]i.e /home/sftpuser\[ga]). However, \[ga]rclone lsd remote:/\[ga] would list the root
-directory for remote machine (i.e. \[ga]/\[ga])
-
-Note that some SFTP servers will need the leading / - Synology is a
-good example of this. rsync.net and Hetzner, on the other hand, requires users to
-OMIT the leading /.
-
-Note that by default rclone will try to execute shell commands on
-the server, see [shell access considerations](#shell-access-considerations).
-
-## Configuration
-
-Here is an example of making an SFTP configuration. First run
-
-    rclone config
-
+.IP \[bu] 2
+Hetzner Storage Box
+.IP \[bu] 2
+rsync.net
+.PP
+SFTP runs over SSH v2 and is installed as standard with most modern SSH
+installations.
+.PP
+Paths are specified as \f[C]remote:path\f[R].
+If the path does not begin with a \f[C]/\f[R] it is relative to the home
+directory of the user.
+An empty path \f[C]remote:\f[R] refers to the user\[aq]s home directory.
+For example, \f[C]rclone lsd remote:\f[R] would list the home directory
+of the user configured in the rclone remote config
+(\f[C]i.e. /home/sftpuser\f[R]).
+However, \f[C]rclone lsd remote:/\f[R] would list the root directory of
+the remote machine (i.e.
+\f[C]/\f[R]).
+.PP
+Note that some SFTP servers will need the leading / - Synology is a good
+example of this.
+rsync.net and Hetzner, on the other hand, require users to OMIT the
+leading /.
+.PP
+Note that by default rclone will try to execute shell commands on the
+server, see shell access considerations.
+.SS Configuration
+.PP
+Here is an example of making an SFTP configuration.
+First run
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
This will guide you through an interactive setup process.
-\f[R]
-.fi
-.PP
+.IP
+.nf
+\f[C]
No remotes found, make a new one?
-n) New remote s) Set configuration password q) Quit config n/s/q> n
-name> remote Type of storage to configure.
-Choose a number from below, or type in your own value [snip] XX / -SSH/SFTP \ \[dq]sftp\[dq] [snip] Storage> sftp SSH host to connect to -Choose a number from below, or type in your own value 1 / Connect to -example.com \ \[dq]example.com\[dq] host> example.com SSH username Enter -a string value. -Press Enter for the default (\[dq]$USER\[dq]). -user> sftpuser SSH port number Enter a signed integer. -Press Enter for the default (22). -port> SSH password, leave blank to use ssh-agent. -y) Yes type in my own password g) Generate random password n) No leave -this optional password blank y/g/n> n Path to unencrypted PEM-encoded -private key file, leave blank to use ssh-agent. -key_file> Remote config -------------------- [remote] host = example.com -user = sftpuser port = pass = key_file = -------------------- y) Yes -this is OK e) Edit this remote d) Delete this remote y/e/d> y -.IP -.nf -\f[C] -This remote is called \[ga]remote\[ga] and can now be used like this: - -See all directories in the home directory - - rclone lsd remote: - -See all directories in the root directory - - rclone lsd remote:/ - -Make a new directory - - rclone mkdir remote:path/to/directory - -List the contents of a directory - - rclone ls remote:path/to/directory - -Sync \[ga]/home/local/directory\[ga] to the remote directory, deleting any -excess files in the directory. - - rclone sync --interactive /home/local/directory remote:directory - -Mount the remote path \[ga]/srv/www-data/\[ga] to the local path -\[ga]/mnt/www-data\[ga] - - rclone mount remote:/srv/www-data/ /mnt/www-data - -### SSH Authentication - -The SFTP remote supports three authentication methods: - - * Password - * Key file, including certificate signed keys - * ssh-agent - -Key files should be PEM-encoded private key files. For instance \[ga]/home/$USER/.ssh/id_rsa\[ga]. -Only unencrypted OpenSSH or PEM encrypted files are supported. - -The key file can be specified in either an external file (key_file) or contained within the -rclone config file (key_pem). If using key_pem in the config file, the entry should be on a -single line with new line (\[aq]\[rs]n\[aq] or \[aq]\[rs]r\[rs]n\[aq]) separating lines. i.e. - - key_pem = -----BEGIN RSA PRIVATE KEY-----\[rs]nMaMbaIXtE\[rs]n0gAMbMbaSsd\[rs]nMbaass\[rs]n-----END RSA PRIVATE KEY----- - -This will generate it correctly for key_pem for use in the config: - - awk \[aq]{printf \[dq]%s\[rs]\[rs]n\[dq], $0}\[aq] < \[ti]/.ssh/id_rsa - -If you don\[aq]t specify \[ga]pass\[ga], \[ga]key_file\[ga], or \[ga]key_pem\[ga] or \[ga]ask_password\[ga] then -rclone will attempt to contact an ssh-agent. You can also specify \[ga]key_use_agent\[ga] -to force the usage of an ssh-agent. In this case \[ga]key_file\[ga] or \[ga]key_pem\[ga] can -also be specified to force the usage of a specific key in the ssh-agent. - -Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment. - -If you set the \[ga]ask_password\[ga] option, rclone will prompt for a password when -needed and no password has been configured. - -#### Certificate-signed keys - -With traditional key-based authentication, you configure your private key only, -and the public key built into it will be used during the authentication process. - -If you have a certificate you may use it to sign your public key, creating a -separate SSH user certificate that should be used instead of the plain public key -extracted from the private key. Then you must provide the path to the -user certificate public key file in \[ga]pubkey_file\[ga]. 
-
-Note: This is not the traditional public key paired with your private key,
-typically saved as \[ga]/home/$USER/.ssh/id_rsa.pub\[ga]. Setting this path in
-\[ga]pubkey_file\[ga] will not work.
-
-Example:
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / SSH/SFTP
+ \[rs] \[dq]sftp\[dq]
+[snip]
+Storage> sftp
+SSH host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \[rs] \[dq]example.com\[dq]
+host> example.com
+SSH username
+Enter a string value. Press Enter for the default (\[dq]$USER\[dq]).
+user> sftpuser
+SSH port number
+Enter a signed integer. Press Enter for the default (22).
+port>
+SSH password, leave blank to use ssh-agent.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> n
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+key_file>
+Remote config
+--------------------
+[remote]
+host = example.com
+user = sftpuser
+port =
+pass =
+key_file =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
\f[R]
.fi
.PP
-[remote] type = sftp host = example.com user = sftpuser key_file =
-\[ti]/id_rsa pubkey_file = \[ti]/id_rsa-cert.pub
+This remote is called \f[C]remote\f[R] and can now be used like this:
+.PP
+See all directories in the home directory
.IP
.nf
\f[C]
+rclone lsd remote:
+\f[R]
+.fi
+.PP
+See all directories in the root directory
+.IP
+.nf
+\f[C]
+rclone lsd remote:/
+\f[R]
+.fi
+.PP
+Make a new directory
+.IP
+.nf
+\f[C]
+rclone mkdir remote:path/to/directory
+\f[R]
+.fi
+.PP
+List the contents of a directory
+.IP
+.nf
+\f[C]
+rclone ls remote:path/to/directory
+\f[R]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[R] to the remote directory, deleting
+any excess files in the directory.
+.IP
+.nf
+\f[C]
+rclone sync --interactive /home/local/directory remote:directory
+\f[R]
+.fi
+.PP
+Mount the remote path \f[C]/srv/www-data/\f[R] to the local path
+\f[C]/mnt/www-data\f[R]
+.IP
+.nf
+\f[C]
+rclone mount remote:/srv/www-data/ /mnt/www-data
+\f[R]
+.fi
+.SS SSH Authentication
+.PP
+The SFTP remote supports three authentication methods:
+.IP \[bu] 2
+Password
+.IP \[bu] 2
+Key file, including certificate signed keys
+.IP \[bu] 2
+ssh-agent
+.PP
+Key files should be PEM-encoded private key files.
+For instance \f[C]/home/$USER/.ssh/id_rsa\f[R].
+Only unencrypted OpenSSH or PEM encrypted files are supported.
+.PP
+The key file can be specified in either an external file (key_file) or
+contained within the rclone config file (key_pem).
+If using key_pem in the config file, the entry should be on a single
+line with new line (\[aq]\[rs]n\[aq] or \[aq]\[rs]r\[rs]n\[aq]) separating lines.
+i.e.
+.IP
+.nf
+\f[C]
+key_pem = -----BEGIN RSA PRIVATE KEY-----\[rs]nMaMbaIXtE\[rs]n0gAMbMbaSsd\[rs]nMbaass\[rs]n-----END RSA PRIVATE KEY-----
+\f[R]
+.fi
+.PP
+This will generate it correctly for key_pem for use in the config:
+.IP
+.nf
+\f[C]
+awk \[aq]{printf \[dq]%s\[rs]\[rs]n\[dq], $0}\[aq] < \[ti]/.ssh/id_rsa
+\f[R]
+.fi
+.PP
+If you don\[aq]t specify \f[C]pass\f[R], \f[C]key_file\f[R],
+\f[C]key_pem\f[R], or \f[C]ask_password\f[R] then rclone will attempt to
+contact an ssh-agent.
+You can also specify \f[C]key_use_agent\f[R] to force the usage of an
+ssh-agent.
+In this case \f[C]key_file\f[R] or \f[C]key_pem\f[R] can also be
+specified to force the usage of a specific key in the ssh-agent.
+.PP
+Using an ssh-agent is the only way to load encrypted OpenSSH keys at the
+moment.
+.PP
+If you set the \f[C]ask_password\f[R] option, rclone will prompt for a
+password when needed and no password has been configured.
+.SS Certificate-signed keys
+.PP
+With traditional key-based authentication, you configure your private
+key only, and the public key built into it will be used during the
+authentication process.
+.PP
+If you have a certificate you may use it to sign your public key,
+creating a separate SSH user certificate that should be used instead of
+the plain public key extracted from the private key.
+Then you must provide the path to the user certificate public key file
+in \f[C]pubkey_file\f[R].
+.PP
+Note: This is not the traditional public key paired with your private
+key, typically saved as \f[C]/home/$USER/.ssh/id_rsa.pub\f[R].
+Setting this path in \f[C]pubkey_file\f[R] will not work.
+.PP
+Example:
+.IP
+.nf
+\f[C]
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+key_file = \[ti]/id_rsa
+pubkey_file = \[ti]/id_rsa-cert.pub
+\f[R]
+.fi
+.PP
If you concatenate a cert with a private key then you can specify the
merged file in both places.
+.PP
+Note: the cert must come first in the file.
+e.g.
+.IP
+.nf
+\f[C]
cat id_rsa-cert.pub id_rsa > merged_key
+\f[R]
+.fi
+.SS Host key validation
+.PP
+By default rclone will not check the server\[aq]s host key for
+validation.
+This can allow an attacker to replace a server with their own, and if
+you use password authentication this can lead to that password being
+exposed.
+.PP
+Host key matching, using standard \f[C]known_hosts\f[R] files, can be
+turned on by enabling the \f[C]known_hosts_file\f[R] option.
+This can point to the file maintained by \f[C]OpenSSH\f[R] or can point
+to a unique file.
+.PP
+e.g.
+using the OpenSSH \f[C]known_hosts\f[R] file:
+.IP
+.nf
+\f[C]
[remote]
type = sftp
host = example.com
@@ -54716,6 +59481,19 @@ Env Var: RCLONE_SFTP_COPY_IS_HARDLINK
Type: bool
.IP \[bu] 2
Default: false
+.SS --sftp-description
+.PP
+Description of the remote
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_SFTP_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Limitations
.PP
On some SFTP servers (e.g.
@@ -55041,6 +59819,19 @@ Type: Encoding .IP \[bu] 2 Default: Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot +.SS --smb-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_SMB_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SH Storj .PP Storj (https://storj.io) is an encrypted, secure, and cost-effective @@ -55435,6 +60226,23 @@ Provider: new Type: string .IP \[bu] 2 Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to storj (Storj Decentralized +Cloud Storage). +.SS --storj-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_STORJ_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Usage .PP Paths are specified as \f[C]remote:bucket\f[R] (or \f[C]remote:\f[R] for @@ -55958,6 +60766,19 @@ Env Var: RCLONE_SUGARSYNC_ENCODING Type: Encoding .IP \[bu] 2 Default: Slash,Ctl,InvalidUtf8,Dot +.SS --sugarsync-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_SUGARSYNC_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP \f[C]rclone about\f[R] is not supported by the SugarSync backend. @@ -56164,6 +60985,19 @@ Type: Encoding .IP \[bu] 2 Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot +.SS --uptobox-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_UPTOBOX_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP Uptobox will delete inactive files that have not been accessed in 60 @@ -56673,6 +61507,19 @@ Env Var: RCLONE_UNION_MIN_FREE_SPACE Type: SizeSuffix .IP \[bu] 2 Default: 1Gi +.SS --union-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_UNION_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Metadata .PP Any metadata supported by the underlying remote is read and written. @@ -57023,6 +61870,32 @@ Env Var: RCLONE_WEBDAV_NEXTCLOUD_CHUNK_SIZE Type: SizeSuffix .IP \[bu] 2 Default: 10Mi +.SS --webdav-owncloud-exclude-shares +.PP +Exclude ownCloud shares +.PP +Properties: +.IP \[bu] 2 +Config: owncloud_exclude_shares +.IP \[bu] 2 +Env Var: RCLONE_WEBDAV_OWNCLOUD_EXCLUDE_SHARES +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false +.SS --webdav-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_WEBDAV_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Provider notes .PP See below for notes on specific providers. 
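+.PP
+The \f[C]--webdav-owncloud-exclude-shares\f[R] option documented above
+can be supplied like any other backend flag; a minimal sketch
+(\f[C]owncloud:\f[R] is a placeholder remote name):
+.IP
+.nf
+\f[C]
+rclone lsd owncloud: --webdav-owncloud-exclude-shares
+\f[R]
+.fi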
@@ -57484,6 +62357,19 @@ Env Var: RCLONE_YANDEX_ENCODING Type: Encoding .IP \[bu] 2 Default: Slash,Del,Ctl,InvalidUtf8,Dot +.SS --yandex-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_YANDEX_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Limitations .PP When uploading very large files (bigger than about 5 GiB) you will need @@ -57803,6 +62689,19 @@ Env Var: RCLONE_ZOHO_ENCODING Type: Encoding .IP \[bu] 2 Default: Del,Ctl,InvalidUtf8 +.SS --zoho-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_ZOHO_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Setting up your own client_id .PP For Zoho we advise you to set up your own client_id. @@ -58784,6 +63683,19 @@ Env Var: RCLONE_LOCAL_ENCODING Type: Encoding .IP \[bu] 2 Default: Slash,Dot +.SS --local-description +.PP +Description of the remote +.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_LOCAL_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Metadata .PP Depending on which OS is in use the local backend may return only some @@ -58796,6 +63708,8 @@ pkg/attrs#47 (https://github.com/pkg/xattr/issues/47)). User metadata is stored as extended attributes (which may not be supported by all file systems) under the \[dq]user.*\[dq] prefix. .PP +Metadata is supported on files and directories. +.PP Here are the possible system metadata items for the local backend. .PP .TS @@ -58931,6 +63845,678 @@ Options: .IP \[bu] 2 \[dq]error\[dq]: return an error based on option value .SH Changelog +.SS v1.66.0 - 2024-03-10 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.65.0...v1.66.0) +.IP \[bu] 2 +Major features +.RS 2 +.IP \[bu] 2 +Rclone will now sync directory modification times if the backend +supports it. +.RS 2 +.IP \[bu] 2 +This can be disabled with +--no-update-dir-modtime (https://rclone.org/docs/#no-update-dir-modtime) +.IP \[bu] 2 +See the overview (https://rclone.org/overview/#features) and look for +the \f[C]D\f[R] flags in the \f[C]ModTime\f[R] column to see which +backends support it. +.RE +.IP \[bu] 2 +Rclone will now sync directory metadata if the backend supports it when +\f[C]-M\f[R]/\f[C]--metadata\f[R] is in use. +.RS 2 +.IP \[bu] 2 +See the overview (https://rclone.org/overview/#features) and look for +the \f[C]D\f[R] flags in the \f[C]Metadata\f[R] column to see which +backends support it. 
+.RE +.IP \[bu] 2 +Bisync has received many updates see below for more details or +bisync\[aq]s changelog (https://rclone.org/bisync/#changelog) +.RE +.IP \[bu] 2 +Removed backends +.RS 2 +.IP \[bu] 2 +amazonclouddrive: Remove Amazon Drive backend code and docs (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +New Features +.RS 2 +.IP \[bu] 2 +backend +.RS 2 +.IP \[bu] 2 +Add description field for all backends (Paul Stern) +.RE +.IP \[bu] 2 +build +.RS 2 +.IP \[bu] 2 +Update to go1.22 and make go1.20 the minimum required version (Nick +Craig-Wood) +.IP \[bu] 2 +Fix \f[C]CVE-2024-24786\f[R] by upgrading +\f[C]google.golang.org/protobuf\f[R] (Nick Craig-Wood) +.RE +.IP \[bu] 2 +check: Respect \f[C]--no-unicode-normalization\f[R] and +\f[C]--ignore-case-sync\f[R] for \f[C]--checkfile\f[R] (nielash) +.IP \[bu] 2 +cmd: Much improved shell auto completion which reduces the size of the +completion file and works faster (Nick Craig-Wood) +.IP \[bu] 2 +doc updates (albertony, ben-ba, Eli, emyarod, huajin tong, Jack +Provance, kapitainsky, keongalvin, Nick Craig-Wood, nielash, rarspace01, +rzitzer, Tera, Vincent Murphy) +.IP \[bu] 2 +fs: Add more detailed logging for file includes/excludes (Kyle Reynolds) +.IP \[bu] 2 +lsf +.RS 2 +.IP \[bu] 2 +Add \f[C]--time-format\f[R] flag (nielash) +.IP \[bu] 2 +Make metadata appear for directories (Nick Craig-Wood) +.RE +.IP \[bu] 2 +lsjson: Make metadata appear for directories (Nick Craig-Wood) +.IP \[bu] 2 +rc +.RS 2 +.IP \[bu] 2 +Add \f[C]srcFs\f[R] and \f[C]dstFs\f[R] to \f[C]core/stats\f[R] and +\f[C]core/transferred\f[R] stats (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]operations/hashsum\f[R] to the rc as \f[C]rclone hashsum\f[R] +equivalent (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[C]config/paths\f[R] to the rc as \f[C]rclone config paths\f[R] +equivalent (Nick Craig-Wood) +.RE +.IP \[bu] 2 +sync +.RS 2 +.IP \[bu] 2 +Optionally report list of synced paths to file (nielash) +.IP \[bu] 2 +Implement directory sync for mod times and metadata (Nick Craig-Wood) +.IP \[bu] 2 +Don\[aq]t set directory modtimes if already set (nielash) +.IP \[bu] 2 +Don\[aq]t sync directory modtimes from backends which don\[aq]t have +directories (Nick Craig-Wood) +.RE +.RE +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +backend +.RS 2 +.IP \[bu] 2 +Make backends which use oauth implement the \f[C]Shutdown\f[R] and +shutdown the oauth properly (rkonfj) +.RE +.IP \[bu] 2 +bisync +.RS 2 +.IP \[bu] 2 +Handle unicode and case normalization consistently (nielash) +.IP \[bu] 2 +Partial uploads known issue on +\f[C]local\f[R]/\f[C]ftp\f[R]/\f[C]sftp\f[R] has been resolved (unless +using \f[C]--inplace\f[R]) (nielash) +.IP \[bu] 2 +Fixed handling of unicode normalization and case insensitivity, support +for \f[C]--fix-case\f[R] (https://rclone.org/docs/#fix-case), +\f[C]--ignore-case-sync\f[R], \f[C]--no-unicode-normalization\f[R] +(nielash) +.IP \[bu] 2 +Bisync no longer fails to find the correct listing file when configs are +overridden with backend-specific flags. 
+.RE
+.IP \[bu] 2
+nfsmount
+.RS 2
+.IP \[bu] 2
+Fix exit after external unmount (nielash)
+.IP \[bu] 2
+Fix \f[C]--volname\f[R] being ignored (nielash)
+.RE
+.IP \[bu] 2
+operations
+.RS 2
+.IP \[bu] 2
+Fix renaming a file on macOS (nielash)
+.IP \[bu] 2
+Fix case-insensitive moves in operations.Move (nielash)
+.IP \[bu] 2
+Fix TestCaseInsensitiveMoveFileDryRun on chunker integration tests
+(nielash)
+.IP \[bu] 2
+Fix TestMkdirModTime test (Nick Craig-Wood)
+.IP \[bu] 2
+Fix TestSetDirModTime for backends with SetDirModTime but not Metadata
+(Nick Craig-Wood)
+.IP \[bu] 2
+Fix typo in log messages (nielash)
+.RE
+.IP \[bu] 2
+serve nfs: Fix writing files via Finder on macOS (nielash)
+.IP \[bu] 2
+serve restic: Fix error handling (Michael Eischer)
+.IP \[bu] 2
+serve webdav: Fix \f[C]--baseurl\f[R] without leading / (Nick
+Craig-Wood)
+.IP \[bu] 2
+stats: Fix race between ResetCounters and stopAverageLoop called from
+time.AfterFunc (Nick Craig-Wood)
+.IP \[bu] 2
+sync
+.RS 2
+.IP \[bu] 2
+Add \f[C]--fix-case\f[R] flag to rename case insensitive dest (nielash)
+.IP \[bu] 2
+Use operations.DirMove instead of sync.MoveDir for \f[C]--fix-case\f[R]
+(nielash)
+.RE
+.IP \[bu] 2
+systemd: Fix detection and switch to the coreos package everywhere
+rather than having 2 separate libraries (Anagh Kumar Baranwal)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Fix macOS not noticing errors with \f[C]--daemon\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Notice daemon dying much quicker (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix unicode normalization on macOS (nielash)
+.RE
+.IP \[bu] 2
+Bisync
+.RS 2
+.IP \[bu] 2
+Copies and deletes are now handled in one operation instead of two
+(nielash)
+.IP \[bu] 2
+\f[C]--track-renames\f[R] and \f[C]--backup-dir\f[R] are now supported
+(nielash)
+.IP \[bu] 2
+Final listings are now generated from sync results, to avoid needing to
+re-list (nielash)
+.IP \[bu] 2
+Bisync is now much more resilient to changes that happen during a bisync
+run, and far less prone to critical errors / undetected changes
+(nielash)
+.IP \[bu] 2
+Bisync is now capable of rolling a file listing back in cases of
+uncertainty, essentially marking the file as needing to be rechecked
+next time.
+(nielash)
+.IP \[bu] 2
+A few basic terminal colors are now supported, controllable with
+\f[C]--color\f[R] (https://rclone.org/docs/#color-when)
+(\f[C]AUTO\f[R]|\f[C]NEVER\f[R]|\f[C]ALWAYS\f[R]) (nielash)
+.IP \[bu] 2
+Initial listing snapshots of Path1 and Path2 are now generated
+concurrently, using the same \[dq]march\[dq] infrastructure as
+\f[C]check\f[R] and \f[C]sync\f[R], for performance improvements and
+less risk of error.
+(nielash)
+.IP \[bu] 2
+\f[C]--resync\f[R] is now much more efficient (especially for users of
+\f[C]--create-empty-src-dirs\f[R]) (nielash)
+.IP \[bu] 2
+Google Docs (and other files of unknown size) are now supported (with
+the same options as in \f[C]sync\f[R]) (nielash)
+.IP \[bu] 2
+Equality checks before a sync conflict rename now fall back to
+\f[C]cryptcheck\f[R] (when possible) or \f[C]--download\f[R], instead
+of \f[C]--size-only\f[R], when \f[C]check\f[R] is not available.
+(nielash)
+.IP \[bu] 2
+Bisync now fully supports comparing based on any combination of size,
+modtime, and checksum, lifting the prior restriction on backends without
+modtime support.
+(nielash)
+.IP \[bu] 2
+Bisync now supports a \[dq]Graceful Shutdown\[dq] mode to cleanly cancel
+a run early without requiring \f[C]--resync\f[R].
+(nielash)
+.IP \[bu] 2
+New \f[C]--recover\f[R] flag allows robust recovery in the event of
+interruptions, without requiring \f[C]--resync\f[R].
+(nielash)
+.IP \[bu] 2
+A new \f[C]--max-lock\f[R] setting allows lock files to automatically
+renew and expire, for better automatic recovery when a run is
+interrupted.
+(nielash)
+.IP \[bu] 2
+Bisync now supports auto-resolving sync conflicts and customizing rename
+behavior with new \f[C]--conflict-resolve\f[R],
+\f[C]--conflict-loser\f[R], and \f[C]--conflict-suffix\f[R] flags (see
+the sketch at the end of this list).
+(nielash)
+.IP \[bu] 2
+A new \f[C]--resync-mode\f[R] flag allows more control over which
+version of a file gets kept during a \f[C]--resync\f[R].
+(nielash)
+.IP \[bu] 2
+Bisync now supports
+\f[C]--retries\f[R] (https://rclone.org/docs/#retries-int) and
+\f[C]--retries-sleep\f[R] (when \f[C]--resilient\f[R] is set) (nielash)
+.IP \[bu] 2
+Clarify file operation directions in dry-run logs (Kyle Reynolds)
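+.IP \[bu] 2
+As a rough sketch of the conflict flags above (\f[C]path1\f[R] and
+\f[C]remote:path2\f[R] are placeholders for your own paths):
+.RS 2
+.IP
+.nf
+\f[C]
+# Keep the newer file when both sides changed, and keep the losing
+# copy under a numbered \[dq]conflict\[dq] suffix rather than deleting it.
+rclone bisync path1 remote:path2 --conflict-resolve newer \[rs]
+    --conflict-loser num --conflict-suffix conflict
+\f[R]
+.fi
+.RE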
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Fix cleanRootPath on Windows after go1.21.4 stdlib update (nielash)
+.IP \[bu] 2
+Implement setting modification time on directories (nielash)
+.IP \[bu] 2
+Implement modtime and metadata for directories (Nick Craig-Wood)
+.IP \[bu] 2
+Fix setting of btime on directories on Windows (Nick Craig-Wood)
+.IP \[bu] 2
+Delete backend implementation of Purge to speed up and make stats work
+(Nick Craig-Wood)
+.IP \[bu] 2
+Support metadata setting and mapping on server side Move (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Cache
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Crypt
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.IP \[bu] 2
+Improve handling of undecryptable file names (nielash)
+.IP \[bu] 2
+Add missing error check spotted by linter (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Implement \f[C]--azureblob-delete-snapshots\f[R] (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Clarify exactly what \f[C]--b2-download-auth-duration\f[R] does in the
+docs (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Chunker
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Combine
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.IP \[bu] 2
+Fix directory metadata error on upstream root (nielash)
+.IP \[bu] 2
+Fix directory move across upstreams (nielash)
+.RE
+.IP \[bu] 2
+Compress
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (nielash)
+.IP \[bu] 2
+Implement modtime and metadata setting for directories (Nick Craig-Wood)
+.IP \[bu] 2
+Support metadata setting and mapping on server side Move,Copy (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Fix mkdir with rsftp, which returns the wrong code (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Hasher
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.IP \[bu] 2
+Fix error from trying to stop an already-stopped db (nielash)
+.IP \[bu] 2
+Look for cached hash if passed hash unexpectedly blank (nielash)
+.RE
+.IP \[bu] 2
+Imagekit
+.RS 2
+.IP \[bu] 2
+Updated docs and web content (Harshit Budhraja)
+.IP \[bu] 2
+Updated overview - supported operations (Harshit Budhraja)
+.RE
+.IP \[bu] 2
+Mega
+.RS 2
+.IP \[bu] 2
+Fix panic with go1.22 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Netstorage
+.RS 2
+.IP \[bu] 2
+Fix Root to return correct directory when pointing to a file (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Add metadata support (nielash)
+.RE
+.IP \[bu] 2
+Opendrive
+.RS 2
+.IP \[bu] 2
+Fix moving file/folder within the same parent dir (nielash)
+.RE
+.IP \[bu] 2
+Oracle Object Storage
+.RS 2
+.IP \[bu] 2
+Support \f[C]backend restore\f[R] command (Nikhil Ahuja)
+.IP \[bu] 2
+Support workload identity authentication for OKE (Anders Swanson)
+.RE
+.IP \[bu] 2
+Protondrive
+.RS 2
+.IP \[bu] 2
+Fix encoding of Root method (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Quatrix
+.RS 2
+.IP \[bu] 2
+Fix \f[C]Content-Range\f[R] header (Volodymyr)
+.IP \[bu] 2
+Add option to skip project folders (Oksana Zhykina)
+.IP \[bu] 2
+Fix Root to return correct directory when pointing to a file (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Add \f[C]--s3-version-deleted\f[R] to show delete markers in listings
+when using versions.
+(Nick Craig-Wood)
+.IP \[bu] 2
+Add IPv6 support with option \f[C]--s3-use-dual-stack\f[R] (Anthony
+Metzidis)
+.IP \[bu] 2
+Copy parts in parallel when doing chunked server side copy (Nick
+Craig-Wood)
+.IP \[bu] 2
+GCS provider: fix server side copy of files bigger than 5G (Nick
+Craig-Wood)
+.IP \[bu] 2
+Support metadata setting and mapping on server side Copy (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Seafile
+.RS 2
+.IP \[bu] 2
+Fix download/upload error when \f[C]FILE_SERVER_ROOT\f[R] is relative
+(DanielEgbers)
+.IP \[bu] 2
+Fix Root to return correct directory when pointing to a file (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (nielash)
+.IP \[bu] 2
+Set directory modtimes update on write flag (Nick Craig-Wood)
+.IP \[bu] 2
+Shorten wait delay for external ssh binaries now that we are using
+go1.20 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Swift
+.RS 2
+.IP \[bu] 2
+Avoid unnecessary container versioning check (Joe Cai)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Implement setting modification time on directories (if supported by
+wrapped remote) (nielash)
+.IP \[bu] 2
+Implement setting metadata on directories (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+Reduce priority of chunks upload log (Gabriel Ramos)
+.IP \[bu] 2
+owncloud: Add config \f[C]owncloud_exclude_shares\f[R] which allows
+excluding shared files and folders when listing remote resources (Thomas
+M\[:u]ller)
+.RE
+.SS v1.65.2 - 2024-01-24
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.65.1...v1.65.2)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build: bump github.com/cloudflare/circl from 1.3.6 to 1.3.7 (dependabot)
+.IP \[bu] 2
+docs updates (Nick Craig-Wood, kapitainsky, nielash, Tera, Harshit
+Budhraja)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Fix stale data when using \f[C]--vfs-cache-mode\f[R] full (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+\f[B]IMPORTANT\f[R] Fix data corruption bug - see
+#7590 (https://github.com/rclone/rclone/issues/7590) (Nick Craig-Wood)
+.RE
+.SS v1.65.1 - 2024-01-08
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.65.0...v1.65.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build
+.RS 2
+.IP \[bu] 2
+Bump golang.org/x/crypto to fix ssh terrapin CVE-2023-48795 (dependabot)
+.IP \[bu] 2
+Update to go1.21.5 to fix Windows path problems (Nick Craig-Wood)
+.IP \[bu] 2
+Fix docker build on arm/v6 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+install.sh: fix harmless error message on install (Nick Craig-Wood)
+.IP \[bu] 2
+accounting: fix stats to show server side transfers (Nick Craig-Wood)
+.IP \[bu] 2
+doc fixes (albertony, ben-ba, Eli Orzitzer, emyarod, keongalvin,
+rarspace01)
+.IP \[bu] 2
+nfsmount: Compile for all unix OSes, add \f[C]--sudo\f[R] and fix
+error/option handling (Nick Craig-Wood)
+.IP \[bu] 2
+operations: Fix files moved by rclone move not being counted as
+transfers (Nick Craig-Wood)
+.IP \[bu] 2
+oauthutil: Avoid panic when \f[C]*token\f[R] and \f[C]*ts.token\f[R] are
+the same (rkonfj)
+.IP \[bu] 2
+serve s3: Fix listing oddities (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+VFS
+.RS 2
+.IP \[bu] 2
+Note that \f[C]--vfs-refresh\f[R] runs in the background (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+Azurefiles
+.RS 2
+.IP \[bu] 2
+Fix storage base url (Oksana)
+.RE
+.IP \[bu] 2
+Crypt
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Chunker
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Compress
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Dropbox
+.RS 2
+.IP \[bu] 2
+Fix used space on dropbox team accounts (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Fix multi-thread copy (WeidiDeng)
+.RE
+.IP \[bu] 2
+Googlephotos
+.RS 2
+.IP \[bu] 2
+Fix nil pointer exception when batch failed (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Hasher
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.IP \[bu] 2
+Fix invalid memory address error when MaxAge == 0 (nielash)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix error listing: unknown object type \f[C] \f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix \[dq]unauthenticated: Unauthenticated\[dq] errors when uploading
+(Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Oracleobjectstorage
+.RS 2
+.IP \[bu] 2
+Fix object storage endpoint for custom endpoints (Manoj Ghosh)
+.IP \[bu] 2
+Multipart copy: create the bucket if it doesn\[aq]t exist.
+(Manoj Ghosh)
+.RE
+.IP \[bu] 2
+Protondrive
+.RS 2
+.IP \[bu] 2
+Fix CVE-2023-45286 / GHSA-xwh9-gc39-5298 (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix crash if no UploadId in multipart upload (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Smb
+.RS 2
+.IP \[bu] 2
+Fix shares not listed by updating go-smb2 (halms)
+.RE
+.IP \[bu] 2
+Union
+.RS 2
+.IP \[bu] 2
+Fix rclone move a file over itself deleting the file (Nick Craig-Wood)
+.RE
.SS v1.65.0 - 2023-11-26
.PP
See commits (https://github.com/rclone/rclone/compare/v1.64.0...v1.65.0)
@@ -71868,10 +77454,14 @@ Project named rclone
Project started
.SH Bugs and Limitations
.SS Limitations
-.SS Directory timestamps aren\[aq]t preserved
+.SS Directory timestamps aren\[aq]t preserved on some backends
.PP
-Rclone doesn\[aq]t currently preserve the timestamps of directories.
-This is because rclone only really considers objects when syncing.
+As of \f[C]v1.66\f[R], rclone supports syncing directory modtimes, if
+the backend supports it.
+Some backends do not support it -- see
+overview (https://rclone.org/overview/) for a complete list.
+Additionally, note that empty directories are not synced by default
+(this can be enabled with \f[C]--create-empty-src-dirs\f[R]).
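+.PP
+For example, to include empty directories in a sync (a rough sketch;
+\f[C]remote:dst\f[R] is a placeholder for your own remote):
+.IP
+.nf
+\f[C]
+# Create empty directories from the source on the destination too.
+rclone sync --create-empty-src-dirs /path/to/src remote:dst
+\f[R]
+.fi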
.SS Rclone struggles with millions of files in a directory/bucket
.PP
Currently rclone loads each directory/bucket entirely into memory before
@@ -72323,7 +77913,7 @@ Bj\[/o]rn Erik Pedersen
.IP \[bu] 2
Lukas Loesche
.IP \[bu] 2
-emyarod
+emyarod
.IP \[bu] 2
T.C.
Ferguson
@@ -73835,6 +79425,48 @@ Alen \[vS]iljak
\[u4F60]\[u77E5]\[u9053]\[u672A]\[u6765]\[u5417]
.IP \[bu] 2
Abhinav Dhiman <8640877+ahnv@users.noreply.github.com>
+.IP \[bu] 2
+halms <7513146+halms@users.noreply.github.com>
+.IP \[bu] 2
+ben-ba
+.IP \[bu] 2
+Eli Orzitzer
+.IP \[bu] 2
+Anthony Metzidis
+.IP \[bu] 2
+emyarod
+.IP \[bu] 2
+keongalvin
+.IP \[bu] 2
+rarspace01
+.IP \[bu] 2
+Paul Stern
+.IP \[bu] 2
+Nikhil Ahuja
+.IP \[bu] 2
+Harshit Budhraja <52413945+harshit-budhraja@users.noreply.github.com>
+.IP \[bu] 2
+Tera <24725862+teraa@users.noreply.github.com>
+.IP \[bu] 2
+Kyle Reynolds
+.IP \[bu] 2
+Michael Eischer
+.IP \[bu] 2
+Thomas M\[:u]ller <1005065+DeepDiver1975@users.noreply.github.com>
+.IP \[bu] 2
+DanielEgbers <27849724+DanielEgbers@users.noreply.github.com>
+.IP \[bu] 2
+Jack Provance <49460795+njprov@users.noreply.github.com>
+.IP \[bu] 2
+Gabriel Ramos <109390599+gabrielramos02@users.noreply.github.com>
+.IP \[bu] 2
+Dan McArdle
+.IP \[bu] 2
+Joe Cai
+.IP \[bu] 2
+Anders Swanson
+.IP \[bu] 2
+huajin tong <137764712+thirdkeyword@users.noreply.github.com>
.SH Contact the rclone project
.SS Forum
.PP