From cd3b08d8cfbc910d9909b53c014bc87505ff4abe Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Date: Sep 08, 2024 / Jan 12, 2025

The size of the mounted file system will be set according to information retrieved from the remote, the same as returned by the rclone about command. Remotes with unlimited storage may report the used size only, then an additional 1 PiB of free space is assumed. If the remote does not support the about feature at all, then 1 PiB is set as both the total and the free size.

Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.

When running rclone mount or rclone nfsmount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount or nfsmount service specified as a requirement will see all files and folders immediately in this mode.
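A minimal sketch of such a unit, assuming a remote called myremote and a mountpoint of /mnt/myremote (both placeholders - adjust the paths and flags to your setup):

    [Unit]
    Description=rclone mount of myremote
    Wants=network-online.target
    After=network-online.target

    [Service]
    Type=notify
    ExecStart=/usr/bin/rclone mount myremote: /mnt/myremote \
        --config /etc/rclone/rclone.conf \
        --cache-dir /var/cache/rclone \
        --vfs-cache-mode writes
    ExecStop=/bin/fusermount -u /mnt/myremote
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Because systemd provides no PATH or HOME, --config and --cache-dir are given as absolute paths, as discussed further down.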
Modifying files through the NFS protocol requires VFS caching. Usually you will need to specify --vfs-cache-mode in order to be able to write to the mountpoint (full is recommended). To serve NFS over the network, use a command of the following form.
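A sketch of what this looks like, with the port, mountpoint and cache mode chosen purely for illustration:

    # serve myremote: over NFS with full VFS caching so writes work
    rclone serve nfs myremote: --addr :2049 --vfs-cache-mode full

    # on a client, mount it (the exact mount options depend on your NFS client)
    sudo mount -t nfs -o port=2049,mountport=2049,tcp localhost:/ /mnt/myremote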
During rmdirs it will not remove the root directory, even if it's empty.

Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows). If you supply this flag then rclone will copy symbolic links from any supported backend, and store them as text files with a .rclonelink extension. The text file will contain the target of the symbolic link.

Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v flag. If FILE exists then rclone will append to it.

The program should then modify the input as desired and send it to STDOUT. The returned Metadata can be removed here too. An example python program implementing the above transformations can be found (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py. If you want to see the input to the metadata mapper and the output returned from it in the log, increase the verbosity with -vv. See the metadata section for more info.

When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q) showing the message and which file caused the problem.

Verbosity is slightly different: the environment variable equivalent of -v is RCLONE_VERBOSE=1, and of -vv is RCLONE_VERBOSE=2. The same parser is used for the options and the environment variables so they take exactly the same form.
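For example, the following (the remote and paths are placeholders) is equivalent to passing -vv and --transfers 8 on every subsequent rclone invocation in that shell:

    # RCLONE_VERBOSE=2 is the environment equivalent of -vv
    export RCLONE_VERBOSE=2
    # every long flag has an RCLONE_ + UPPER_CASE equivalent, e.g. --transfers
    export RCLONE_TRANSFERS=8

    rclone sync /home/user/data myremote:backup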
The options set by environment variables can be seen with the -vv flag, e.g. rclone version -vv. Options that can appear multiple times (type stringArray) are treated slightly differently, as environment variables can only be defined once.

You can set defaults for values in the config file on an individual remote basis. The names of the config items are documented in the page for each backend. To find the name of the environment variable you need to set, take RCLONE_CONFIG_ + name of remote + _ + name of config file option and make it all uppercase.

This flag can be repeated. See above for the order filter flags are processed in.

Specifies path/file names to an rclone command, based on a single include or exclude rule, in the same format as a line in a filter file. This flag can be repeated. See above for the order filter flags are processed in.

Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules. Include rules start with + and exclude rules start with -. This flag can be repeated. See above for the order filter flags are processed in. Arrange the order of filter rules with the most restrictive first and work down. Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported.

Adds path/files to an rclone command from a list in a named file. Rclone processes the path/file names in the order of the list, and no others. Rclone commands do not error if any names in the list are missing from the source.

If you just want to run a remote control then see the rcd command. These are the flags for the remote control server:
- --rc: Flag to start the http server listening on remote requests.
- --rc-addr: IPaddress:Port or :Port to bind server to (default "localhost:5572").
- --rc-cert: SSL PEM key (concatenation of certificate and CA certificate).
- --rc-client-ca: Client certificate authority to verify clients with.
- --rc-htpasswd: htpasswd file - if not provided no authentication is done.
- --rc-key: TLS PEM private key file.
- --rc-max-header-bytes: Maximum size of request header (default 4096).
- --rc-min-tls-version: The minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").
- --rc-pass: Password for authentication.
- --rc-realm: Realm for authentication (default "rclone").
- --rc-server-read-timeout: Timeout for server reading data (default 1h0m0s).
- --rc-server-write-timeout: Timeout for server writing data (default 1h0m0s).
- --rc-serve: Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using the syntax http://127.0.0.1:5572/[remote:path]/path/to/object. Default Off.
- --rc-template: User-specified template.

Flags helpful for increasing performance. Flags to control the Remote Control API. Flags to control the Metrics HTTP endpoint.

Rclone itself implements the remote control protocol in its rclone rc command; run it without arguments to see the help for the installed remote control commands. If the remote is running on a different URL than the default http://localhost:5572/ you will need to pass it with the --url flag. Or, if the remote is listening on a Unix socket, point rclone rc at the socket instead. This returns an empty result on success, or an error.

This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.
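A few illustrative invocations (the URL and socket path are placeholders; the unix socket form assumes a build recent enough to support sockets for the remote control):

    # list the available remote control commands
    rclone rc

    # call a command on the default http://localhost:5572/
    rclone rc core/version

    # remote running elsewhere
    rclone rc --url http://192.168.1.10:5572/ core/stats

    # remote listening on a unix socket
    rclone rc --unix-socket /run/rclone.sock core/version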
Docker 1.9 has added support for creating named volumes via the command-line interface and mounting them in containers as a way to share data between them. Since Docker 1.10 you can create named volumes with Docker Compose by descriptions in docker-compose.yml files for use by container groups on a single host. As of Docker 1.12 volumes are supported by Docker Swarm included with Docker Engine and created from descriptions in swarm compose v3 files for use with swarm stacks across multiple cluster nodes.

This is equivalent to the combined syntax but is arguably easier to parameterize in scripts. The Mount and VFS options as well as backend parameters are named like their twin command-line flags without the -- prefix. Please note that you can provide parameters only for the backend immediately referenced by the backend type of the mounted remote, though this is rarely needed.

If the plugin fails to work properly, and only as a last resort after you have tried diagnosing with the above methods, you can try clearing the state of the plugin. Note that all existing rclone docker volumes will probably have to be recreated. This might be needed because a reinstall doesn't clean up existing state files, to allow for easy restoration, as stated above.

Finally I'd like to mention a caveat with updating volume settings: the Docker CLI does not have a dedicated command like docker volume update.

When bisync is running, a lock file is created in the bisync working directory (typically under ~/.cache/rclone/bisync). Note that while concurrent bisync runs are allowed, be very cautious that there is no overlap in the trees being synced between concurrent runs, lest there be replicated files, deleted files and general mayhem. See also the section about exit codes in the main docs.

Bisync has a "Graceful Shutdown" mode which is activated by sending SIGINT (Ctrl+C) during a run. At any point during the "Graceful Shutdown" sequence, a second SIGINT or Ctrl+C will abort immediately.

By default, rclone will HEAD every object it uploads. It does this to check the object got uploaded correctly. You can disable this with the --s3-no-head option - see there for more details. Setting this flag increases the chance of undetected upload failures.

If you are copying objects between S3 buckets in the same region, you should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred. For rclone to use server-side copy, you must use the same remote for the source and destination. When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.

You can increase the rate of API requests to S3 by increasing the parallelism using the --transfers and --checkers options. Rclone uses very conservative defaults for these settings, as not all providers support high rates of requests. Depending on your provider, you can significantly increase the number of transfers and checkers. For example, with AWS S3 you can increase the number of checkers to values like 200, and if you are doing a server-side copy you can also increase the number of transfers to 200. You will need to experiment with these values to find the optimal settings for your setup.
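For example, a same-remote, server-side copy with the parallelism turned up might look like this (bucket names are placeholders; how far you can push the numbers depends on the provider):

    rclone copy mys3:source-bucket mys3:dest-bucket --transfers 200 --checkers 200

If the provider starts returning throttling errors, reduce these values.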
Rclone does its best to verify every part of an upload or download to the S3 provider using various hashes. Every HTTP transaction to/from the provider carries an integrity check of some form, and all communications with the provider are done over https for encryption and additional error protection.

Rclone uploads single part uploads with a Content-MD5 header containing the MD5 hash of the data, which the provider checks on receipt. Rclone then does a HEAD request (disable with --s3-no-head) to check the object got uploaded correctly. Note that if the source does not have an MD5 then single part uploads will not have hash protection; in this case it is recommended to verify the transfer afterwards, for example with rclone check.

For files above the cutoff rclone uses multipart uploads. When rclone has finished the upload of all the parts it then completes the upload by sending a list of the parts with their checksums. The provider checks the MD5 for all the parts it has received against what rclone sends and if it is good it returns OK. Rclone then does a HEAD request (disable with --s3-no-head) to check the upload completed as expected.

If the source has an MD5 sum then rclone will attach it to the object as metadata. Rclone checks the MD5 hash of the data downloaded against either the ETag or that metadata. At each stage rclone and the provider are sending and checking hashes of everything. Rclone deliberately HEADs each object after upload to check it arrived safely for extra security (you can disable this with --s3-no-head).

If you require further assurance that your data is intact you can use rclone check. And if you are feeling ultimately paranoid use rclone check --download, which downloads the data and checks it against the source.

When bucket versioning is enabled (this can be done with rclone with the backend versioning command), old versions of files, where available, are visible using the --s3-versions flag.

rclone supports multipart uploads with S3 which means that it can upload files bigger than 5 GiB. Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums. rclone switches from single part uploads to multipart uploads at the point specified by --s3-upload-cutoff.

As mentioned in the Modification times and hashes section, small files that are not uploaded as multipart use a different tag, causing the upload to fail. A simple solution is to set the upload cutoff so that these files are uploaded as multipart uploads too.
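As an illustration, the cutoff and chunk size can be set per run like this (the values are only examples):

    # switch to multipart above 100 MiB, uploading in 64 MiB chunks
    rclone copy /path/to/files mys3:bucket --s3-upload-cutoff 100M --s3-chunk-size 64M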
Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others).

Choose your S3 provider.

Here are the Advanced options specific to s3 (the same list of providers as above).

Canned ACL used when creating buckets. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Set to use AWS Directory Buckets. If you are using an AWS Directory Bucket then set this flag. This will ensure rclone avoids features that Directory Buckets do not support. Note that Directory Buckets do not support a number of standard S3 features, and rclone has some corresponding limitations when using them.

Set to debug the SDK. This can be set to a comma separated list of functions to trace.

This is the provider used as the main example and described in the configuration section above. From rclone v1.69 Directory Buckets are supported. You will need to set the directory_buckets config option (or the --s3-directory-buckets flag). Note that rclone cannot yet do everything with Directory Buckets - see the --s3-directory-buckets flag for more info.

AWS Snowball is a hardware appliance used for transferring bulk data back to AWS. Its main software interface is S3 object storage. To use rclone with AWS Snowball Edge devices, configure as standard for an 'S3 Compatible Service'.

For R2 tokens with the "Object Read & Write" permission, you may also need to add no_check_bucket = true for object uploads to work correctly. Note that Cloudflare decompresses files uploaded with Content-Encoding: gzip by default, which is a deviation from what AWS does and can affect the size and hashes rclone sees for such objects.

Dreamhost DreamObjects is an object storage system based on CEPH. To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config. So once set up, for example, to copy files into a bucket you can use the usual rclone copy syntax.

OUTSCALE Object Storage (OOS) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the official documentation. Here is an example of an OOS configuration that you can paste into your rclone configuration file. You can also run rclone config to go through the interactive setup process.

Qiniu Cloud Object Storage (Kodo) is an S3-compatible object storage service from Qiniu, widely used for mass data management. To configure access to Qiniu Kodo, follow the steps below:

Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway, and it works the same way as on S3 by accepting the "GLACIER" storage class. (It was previously known as C14 Cold Storage.)

Seagate Lyve Cloud is an S3 compatible object storage platform from Seagate intended for enterprise use. Here is a config run through for a remote called remote. So once set up, for example to copy files into a bucket, use the usual rclone copy syntax.

Selectel Cloud Storage is an S3 compatible storage system which features triple redundancy storage, automatic scaling, high availability and a comprehensive IAM system. Selectel have a section on their website for configuring rclone which shows how to make the right API keys. From rclone v1.69 Selectel is a supported operator - please choose the Selectel provider in the config. Note that you should use "vHosted" access for the buckets (which is the recommended default), not "path style". You can use rclone config to make the remote, with Selectel as the provider.

Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi is designed for individuals and organizations that require a high-performance, reliable, and secure data storage infrastructure at minimal cost. Wasabi provides an S3 interface which can be configured for use with rclone.

For Netease NOS, configure as per the configurator (rclone config), setting the provider to Netease.

Here is an example of making a Petabox configuration. First run rclone config. This will guide you through an interactive setup process.
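Each of these provider walk-throughs ends with a config section; as a concrete illustration, a Cloudflare R2 remote (described above) typically ends up looking like this, with the account ID and keys as placeholders:

    [r2]
    type = s3
    provider = Cloudflare
    access_key_id = YOUR_ACCESS_KEY
    secret_access_key = YOUR_SECRET_KEY
    endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
    acl = private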
Use client credentials OAuth flow. This will use the OAUTH2 client Credentials Flow as described in RFC 6749. Fill in for rclone to use a non root folder as its starting point.

This is a backend for the Cloudinary platform. Cloudinary is an image and video API platform, trusted by 1.5 million developers and 10,000 enterprise and hyper-growth companies as a critical part of their tech stack to deliver visually engaging experiences. To use this backend, you need to create a free account on Cloudinary. Start with a free plan with generous usage limits, then, as your requirements grow, upgrade to a plan that best fits your needs; see the pricing details. Please refer to the docs.

Here is an example of making a Cloudinary configuration. First, create a cloudinary.com account and choose a plan. You will need to log in and get the API credentials. Now run rclone config and follow the interactive setup process. The backend supports listing directories in the top level of your Media Library, making a new directory, and listing the contents of a directory. Cloudinary automatically stores an md5 and timestamp for any successful Put; these are read-only.

Here are the Standard options specific to cloudinary (Cloudinary):
- Cloudinary Environment Name
- Cloudinary API Key
- Cloudinary API Secret
- Specify the API endpoint for environments outside the US
- Upload Preset to select asset manipulation on upload

Here are the Advanced options specific to cloudinary (Cloudinary):
- The encoding for the backend. See the encoding section in the overview for more info.
- Wait N seconds for eventual consistency of the databases that support the backend operation
- Description of the remote.

Citrix ShareFile is a secure file sharing and transfer service aimed at business. The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile, which you need to do in your browser. Here is an example of how to make a remote called remote. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Here are the Standard options specific to sharefile (Citrix Sharefile): OAuth Client Id. Here are the Advanced options specific to sharefile (Citrix Sharefile): OAuth Access Token as a JSON blob; Use client credentials OAuth flow (this will use the OAUTH2 client Credentials Flow as described in RFC 6749); Cutoff for switching to multipart upload.

The encryption is a secret-key encryption (also called symmetric key encryption) algorithm, where a password (or pass phrase) is used to generate the real encryption key. The password can be supplied by the user, or you may choose to let rclone generate one. It will be stored in the configuration file, in a lightly obscured form. If you are in an environment where you are not able to keep your configuration secured, you should add configuration encryption as protection. As long as you have this configuration file, you will be able to decrypt your data. Without the configuration file, as long as you remember the password (or keep it in a safe place), you can re-create the configuration and gain access to the existing data. You may also configure a corresponding remote in a different installation to access the same data. See below for guidance on changing the password.

Encryption uses a cryptographic salt to permute the encryption key so that the same string may be encrypted in different ways. When configuring the crypt remote it is optional to enter a salt, or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string. Normally in cryptography the salt is stored together with the encrypted content and does not have to be memorized by the user. This is not the case in rclone, because rclone does not store any additional information on the remotes: use of a custom salt is effectively a second password that must be memorized.
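To make the password and salt discussion concrete, a finished crypt remote ends up looking roughly like this in the config file (the wrapped remote and path are placeholders, and the password/password2 lines hold the obscured password and salt that rclone writes for you):

    [secret]
    type = crypt
    remote = myremote:path/to/encrypted
    filename_encryption = standard
    directory_name_encryption = true
    password = <obscured password>
    password2 = <obscured salt>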
File content encryption is performed using NaCl SecretBox, based on the XSalsa20 cipher and Poly1305 for integrity. Names (file and directory names) are also encrypted by default, but this has some implications and can therefore be turned off.

Here is an example of how to make a remote called secret. Before configuring the crypt remote, check the underlying remote is working; in this example the underlying remote is called remote.

Crypt stores modification times using the underlying remote so support depends on that. Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator. Use the cryptcheck command to verify the integrity of an encrypted remote instead of rclone check.

Here are the Standard options specific to crypt (Encrypt/Decrypt a remote): Remote to encrypt/decrypt. Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote): Deprecated: use --server-side-across-configs instead.

This remote is currently experimental. Things may break and data may be lost. Anything you do with this remote is at your own risk. Please understand the risks associated with using experimental code and don't use this remote in critical applications.

The compress remote wraps another remote and compresses its contents. To use this remote, all you need to do is specify another remote and a compression mode to use. If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to the compression algorithm you chose. These files are standard files that can be opened by various archive programs, but they have some hidden metadata that allows them to be used by rclone. While you may download and decompress these files at will, do not manually delete or rename files: files without correct metadata will not be recognized by rclone. The compressed files are named after the base file, with the compression extension added.

Here are the Standard options specific to compress (Compress a remote): Remote to compress. Here are the Advanced options specific to compress (Compress a remote): GZIP compression level (-2 to 9).

You'd do this by specifying an upstreams parameter in the config. During the initial setup with rclone config you will be asked for the upstreams. Here is an example of how to make a combine called remote. This will guide you through an interactive setup process. If you then add that config to your config file (find it with rclone config file) you can use the combined remote. See the Google Drive docs for full info.

Here are the Standard options specific to combine (Combine several remotes into one): Upstreams for combining. Here are the Advanced options specific to combine (Combine several remotes into one): Description of the remote.

Paths are specified as remote:path. Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory. The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. Here is an example of how to make a remote called remote.

This provides the maximum possible upload speed, especially with lots of small files, however rclone can't check the file got uploaded properly using this mode. If you are using this mode then using "rclone check" after the transfer completes is recommended. Or you could do an initial transfer with the asynchronous batch mode and then a final transfer with the default mode to check everything. Note that there may be a pause when quitting rclone while rclone finishes up the last batch using this mode.

Here are the Standard options specific to dropbox (Dropbox): OAuth Client Id. Here are the Advanced options specific to dropbox (Dropbox): OAuth Access Token as a JSON blob; Use client credentials OAuth flow (this will use the OAUTH2 client Credentials Flow as described in RFC 6749).
Upload chunk size (< 150Mi). Any files larger than this will be uploaded in chunks of this size.

This backend supports Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage, accessible through a global file system. The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. Here is an example of how to make a remote called remote. The ID for "S3 Storage" would be the number shown against it in the web interface.

Here are the Standard options specific to filefabric (Enterprise File Fabric): URL of the Enterprise File Fabric to connect to. Here are the Advanced options specific to filefabric (Enterprise File Fabric): Session Token.

Files.com is a cloud storage service that provides a secure and easy way to store and share files. The initial setup for filescom involves authenticating with your Files.com account. You can do this by providing your site subdomain, username, and password. Alternatively, you can authenticate using an API Key from Files.com. Here is an example of how to make a remote called remote. This will guide you through an interactive setup process.

Here are the Standard options specific to filescom (Files.com): Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com). Here are the Advanced options specific to filescom (Files.com): The API key used to authenticate with Files.com.

FTP is the File Transfer Protocol. Rclone FTP support is provided using the github.com/jlaffaye/ftp package. See below for the limitations of rclone's FTP backend. Paths are specified as remote:path. To create an FTP configuration named remote, run rclone config. Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below. This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.

Here are the Standard options specific to ftp (FTP): FTP host to connect to. Here are the Advanced options specific to ftp (FTP): Maximum number of FTP simultaneous connections, 0 for unlimited. Socks 5 proxy host, supporting the formats user:pass@host:port, user@host:port and host:port.

Don't check the upload is OK. Normally rclone will try to check the upload exists after it has uploaded a file to make sure the size and modification time are as expected. This flag stops rclone doing these checks. This enables uploading to folders which are write only. You will likely need to use the --inplace flag also if uploading to a write only folder.

The encoding for the backend. See the encoding section in the overview for more info.
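For an anonymous server the whole remote can be given on the command line; this example uses a public test server purely as an illustration:

    rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)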
Gofile is a content storage and distribution platform. Its aim is to provide as much service as possible for free or at a very low price. The initial setup for Gofile involves logging in to the web interface and going to the "My Profile" section. Copy the "Account API token" for use in the config file. Note that if you wish to connect rclone to Gofile you will need a premium account. Here is an example of how to make a remote called remote. This will guide you through an interactive setup process. The ID to use is the part of the link that identifies the folder. To restrict rclone to a particular folder, fill in the ID of the root folder.

Here are the Standard options specific to gofile (Gofile): API Access token. Here are the Advanced options specific to gofile (Gofile): ID of the root folder. Leave blank normally, or fill in for rclone to use a non root folder as its starting point.

Paths are specified as remote:bucket (or remote: for the lsd command). The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. Here is an example of how to make a remote called remote.

You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines. To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal user credentials here. To use a Service Account instead of the OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file prompt.

Another option for service account authentication is to use access tokens via gcloud impersonate-service-account. Access tokens protect security by avoiding the use of the JSON key file, which can be breached. They also bypass the oauth login flow, which is simpler on remote VMs that lack a web browser. If you already have a working service account, skip to step 3. You can re-use an existing service account as well (like the one created above). Use the Google Cloud console to identify a limited role; some relevant pre-defined roles are available.

For downloads of objects that permit public access you can configure rclone to use anonymous access by setting the anonymous option to true. Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)): OAuth Client Id. Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)): OAuth Access Token as a JSON blob; Use client credentials OAuth flow (this will use the OAUTH2 client Credentials Flow as described in RFC 6749); Short-lived access token (leave blank normally, needed only if you want to use a short-lived access token instead of interactive login); Upload an empty object with a trailing slash when a new directory is created (empty folders are unsupported for bucket based remotes, so this option creates an empty object ending with "/" to persist the folder).

Paths are specified as drive:path. Drive paths may be as deep as required, e.g. drive:directory/subdirectory. The initial setup for drive involves getting a token from Google drive which you need to do in your browser. Here is an example of how to make a remote called remote.

Here are the Standard options specific to drive (Google Drive): Google Application Client Id. Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance. Here are the Advanced options specific to drive (Google Drive): OAuth Access Token as a JSON blob; Use client credentials OAuth flow (RFC 6749); ID of the root folder (leave blank normally, fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point).

Rescue or delete any orphaned files. This command rescues or deletes any orphaned files or directories. Sometimes files can get orphaned in Google Drive, meaning that they are no longer in any folder in Google Drive. This command finds those files and either rescues them to a directory you specify or deletes them. This can be used in 3 ways.
First, list all orphaned files. Second, rescue all orphaned files to the directory indicated, e.g. to rescue all orphans to a directory called "Orphans" in the top level. Third, delete all orphaned files to the trash.

Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time. Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server-side copies with --disable copy if you prefer to download and re-upload.

Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID". Choose an application type of "Desktop app" and click "Create" (the default name is fine). It will show you a client ID and client secret. Make a note of these.

(If you selected "External" at Step 5 continue to Step 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11, but your destination drive must be part of the same Google Workspace.)

Go to "Oauth consent screen" and then click the "PUBLISH APP" button and confirm. You will also want to add yourself as a test user. Provide the noted client ID and client secret to rclone.

The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos. NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use. The initial setup for google photos involves getting a token from Google Photos which you need to do in your browser. Here is an example of how to make a remote called remote.

Here are the Standard options specific to google photos (Google Photos): OAuth Client Id. Here are the Advanced options specific to google photos (Google Photos): OAuth Access Token as a JSON blob; Use client credentials OAuth flow (RFC 6749).

Set to read the size of media items. Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.

Use the gphotosdl proxy for downloading the full resolution images. The Google API will deliver images and video which aren't full resolution, and/or have EXIF data missing. However if you use the gphotosdl proxy then you can download original, unchanged images. This runs a headless browser in the background. Download the software from gphotosdl. First run it with its login option, then once you have logged into google photos close the browser window and run it again. Then supply the --gphotos-proxy parameter to rclone, pointing at the proxy's local URL.

The encoding for the backend. See the encoding section in the overview for more info.

When images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115. The current google API does not allow photos to be downloaded at original resolution.
This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort. NB you can use the --gphotos-proxy flag to use a headless browser to download images in full resolution.

When videos are downloaded they are downloaded in a really compressed version of the video compared to downloading it via the Google Photos web interface. This is covered by bug #113672044. NB you can use the --gphotos-proxy flag to use a headless browser to download images in full resolution.

If a file name is duplicated in a directory then rclone will add the file ID into its name, so two files with the same name will appear with their IDs appended. If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload, which may confuse rclone. For example, if you upload the same image again into an album, the filename of the image in the album will be what it was uploaded with initially, not what you uploaded it with the second time.

Here are the Standard options specific to hasher (Better checksums for other remotes): Remote to cache checksums for (e.g. myRemote:path). Here are the Advanced options specific to hasher (Better checksums for other remotes): Auto-update checksum for files smaller than this size (disabled by default).

HDFS is a distributed file-system, part of the Apache Hadoop framework. Paths are specified as remote:path. Here is an example of how to make a remote called remote. This will guide you through an interactive setup process. Invalid UTF-8 bytes will also be replaced.

Here are the Standard options specific to hdfs (Hadoop distributed file system): Hadoop name nodes and ports. Here are the Advanced options specific to hdfs (Hadoop distributed file system): Kerberos service principal name for the namenode.

Paths are specified as remote:path and may be as deep as required, e.g. remote:directory/subdirectory. The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. Here is an example of how to make a remote called remote. This will guide you through an interactive setup process.

By default, rclone will know the number of directory members contained in a directory. The acquisition of this information will result in additional time costs for HiDrive's API. When dealing with large directory structures, it may be desirable to circumvent this time cost, especially when this information is not explicitly needed. For this, the disable_fetching_member_count option can be used. See the below section about configuration options for more details.

Here are the Standard options specific to hidrive (HiDrive): OAuth Client Id. Here are the Advanced options specific to hidrive (HiDrive): OAuth Access Token as a JSON blob; Use client credentials OAuth flow (RFC 6749); User-level that rclone should use when requesting access from HiDrive.

The http backend is read-only. If the path following the URL points at a single file, that file is what rclone will operate on; to just download a single file it is easier to use copyurl. Here is an example of how to make a remote called remote. This will guide you through an interactive setup process.

Here are the Standard options specific to http (HTTP): URL of HTTP host to connect to. Here are the Advanced options specific to http (HTTP): Set HTTP headers for all transactions.
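The http backend can also be used without a saved remote by giving the URL on the command line, e.g. (the URL is just an example):

    # list the directories served at the given URL
    rclone lsd --http-url https://beta.rclone.org :http:

    # copy everything served at that URL to a local directory
    rclone copy --http-url https://beta.rclone.org :http: /tmp/mirror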
rclone(1) User Manual

-Rclone syncs your files to cloud storage

When running in background mode the user will have to stop the mount manually:
# Linux
fusermount -u /path/to/local/mount
-# OS X
+#... or on some systems
+fusermount3 -u /path/to/local/mount
+# OS X or Linux when using nfsmount
umount /path/to/local/mount

The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.

systemd
Note that systemd runs mount units without any environment variables including PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount or fusermount3 program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount/fusermount3 is present on this PATH.

Rclone as Unix mount helper
The core Unix program /bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program, passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way. rclone by default expects GNU-style flags --key val. To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.

When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
# Linux
fusermount -u /path/to/local/mount
-# OS X
+#... or on some systems
+fusermount3 -u /path/to/local/mount
+# OS X or Linux when using nfsmount
umount /path/to/local/mount

systemd
-PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount is present on this PATH.
+PATH or HOME. This means that tilde (~) expansion will not work and you should provide --config and --cache-dir explicitly as absolute paths via rclone arguments. Since mounting requires the fusermount or fusermount3 program, rclone will use the fallback PATH of /bin:/usr/bin in this scenario. Please ensure that fusermount/fusermount3 is present on this PATH.

Rclone as Unix mount helper
/bin/mount normally takes the -t FSTYPE argument then runs the /sbin/mount.FSTYPE helper program passing it mount options as -o key=val,... or --opt=.... Automount (classic or systemd) behaves in a similar way.

--key val. To run it as a mount helper you should symlink rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs, e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.

--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
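As a rough illustration of the write-back upload tuning described above (remote: and the mountpoint are placeholders, not part of this change):

rclone nfsmount remote: /mnt/remote --vfs-cache-mode writes --transfers 8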
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
Server options
--rc-addr to specify which IP address and port the server should listen on, eg --rc-addr 1.2.3.4:8000 or --rc-addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

--rc-addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

unix:///path/to/socket or just by using an absolute path name.

--rc-addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

--rc-server-read-timeout and --rc-server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--rc-max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--rc-baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --rc-baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --rc-baseurl, so --rc-baseurl "rclone", --rc-baseurl "/rclone" and --rc-baseurl "/rclone/" are all treated identically.

TLS (SSL)
--rc-cert and --rc-key flags. If you wish to do client side certificate validation then you will need to supply --rc-client-ca also.

--rc-cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --rc-key should be the PEM encoded private key and --rc-client-ca should be the PEM encoded client certificate authority certificate.

--rc-cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --rc-key must be set to the path of a file with the PEM encoded private key. If setting --rc-client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

--rc-min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

Socket activation
---rc-addr).

systemd-socket-activate command
@@ -4056,7 +4108,7 @@ htpasswd -B htpasswd anotherUser
systemd-socket-activate -l 8000 -- rclone serve

RC Options
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -4281,6 +4333,22 @@ htpasswd -B htpasswd anotherUser

--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
Server options
--addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

--addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

unix:///path/to/socket or just by using an absolute path name.

--addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

TLS (SSL)
--cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

--cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

Socket activation
---addr).

systemd-socket-activate command
@@ -5087,6 +5193,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
systemd-socket-activate -l 8000 -- rclone serve

--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
--vfs-cache-mode in order to be able to write to the mountpoint (full is recommended). If you don't specify VFS cache mode, the mount will be read-only.

--nfs-cache-type controls the type of the NFS handle cache. By default this is memory where new handles will be randomly allocated when needed. These are stored in memory. If the server is restarted the handle cache will be lost and connected NFS clients will get stale handle errors.

--nfs-cache-type disk uses an on disk NFS handle cache. Rclone hashes the path of the object and stores it in a file named after the hash. These hashes are stored on disk in the directory controlled by --cache-dir, or the exact directory may be specified with --nfs-cache-dir. Using this means that the NFS server can be restarted at will without affecting the connected clients.

--nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only.

--nfs-cache-type symlink is similar to --nfs-cache-type disk in that it uses an on disk cache, but the cache entries are held as symlinks. Rclone will use the handle of the underlying file as the NFS handle which improves performance. This sort of cache can't be backed up and restored as the underlying handles will change. This is Linux only. It requires running rclone as root or with CAP_DAC_READ_SEARCH. You can run rclone with this extra permission by doing this to the rclone binary: sudo setcap cap_dac_read_search+ep /path/to/rclone.

--nfs-cache-handle-limit controls the maximum number of cached NFS handles stored by the caching handler. This should not be set too low or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. This is only used by the memory type cache.
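As a sketch of the handle cache options described above (the remote name, port and cache directory are placeholders, not part of this change), the server and client sides might look something like:

rclone serve nfs remote: --addr :2049 --vfs-cache-mode full --nfs-cache-type disk --nfs-cache-dir /var/cache/rclone-nfs
# then, on the client (adjust host and port as needed):
mount -o port=2049,mountport=2049,tcp localhost:/ /mnt/rclone-nfs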
@@ -5343,6 +5467,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full

--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
Server options
--addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

--addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

unix:///path/to/socket or just by using an absolute path name.

--addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

TLS (SSL)
--cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

--cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

Socket activation
---addr).

systemd-socket-activate command
@@ -5503,11 +5645,11 @@ htpasswd -B htpasswd anotherUser
--append-only Disallow deletion of repository data
--baseurl string Prefix for URLs - leave blank for root
--cache-objects Cache listed objects (default true)
- --cert string TLS PEM key (concatenation of certificate and CA certificate)
- --client-ca string Client certificate authority to verify clients with
+ --cert string Path to TLS PEM public key certificate file (can also include intermediate/CA certificates)
+ --client-ca string Path to TLS PEM CA file with certificate authorities to verify clients with
-h, --help help for restic
--htpasswd string A htpasswd file - if not provided no authentication is done
- --key string TLS PEM Private key
+ --key string Path to TLS PEM private key file
--max-header-bytes int Maximum size of request header (default 4096)
--min-tls-version string Minimum TLS version that is acceptable (default "tls1.0")
--pass string Password for authentication
@@ -5602,17 +5744,17 @@ htpasswd -B htpasswd anotherUser
systemd-socket-activate -l 8000 -- rclone serve

Server options
--addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

--addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

unix:///path/to/socket or just by using an absolute path name.

--addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

TLS (SSL)
--cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

--cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

Socket activation
---addr).

systemd-socket-activate command
@@ -5729,6 +5871,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
systemd-socket-activate -l 8000 -- rclone serve

--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
Server options
--addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.

--addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.

unix:///path/to/socket or just by using an absolute path name. Note that unix sockets bypass the authentication - this is expected to be done with file system permissions.

unix:///path/to/socket or just by using an absolute path name.

--addr may be repeated to listen on multiple IPs/ports/sockets. Socket activation, described further below, can also be used to accomplish the same.

--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.

--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.

--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

TLS (SSL)
--cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.

--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.

--cert must be set to the path of a file containing either a PEM encoded certificate, or a concatenation of that with the CA certificate. --key must be set to the path of a file with the PEM encoded private key. If setting --client-ca, it should be set to the path of a file with PEM encoded client certificate authority certificates.

--min-tls-version is the minimum TLS version that is acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" (default "tls1.0").

Socket activation
---addr).

systemd-socket-activate command
@@ -6347,6 +6525,22 @@ htpasswd -B htpasswd anotherUser
--vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s)
systemd-socket-activate -l 8000 -- rclone serve

--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).
+--transfers int Number of file transfers to run in parallel (default 4)
+
+Symlinks
+
+--links Translate symlinks to/from regular files with a '.rclonelink' extension.
+--vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
+
+link-to-file.txt would be stored on cloud storage as link-to-file.txt.rclonelink and the contents would be the path to the symlink destination.
+
+--links enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). --vfs-links just enables it for the VFS layer.
+
+--vfs-links flag has been designed for rclone mount, rclone nfsmount and rclone serve nfs.
+
+rclone serve commands yet.
+
+.
+├── dir
+│ └── file.txt
+└── linked-dir -> dir
+
+linked-dir but not linked-dir/file.txt. This is not a problem for the tested commands but may be for other commands.

VFS Case Sensitivity
--leave-root
--links / -l
+.rclonelink suffix in the destination.

--links / -l flag enables this feature for all supported backends and the VFS. There are individual flags for just enabling it for the VFS --vfs-links and the local backend --local-links if required.

--log-file=FILE
-v flag. See the Logging section for more info.

ID is the source ID of the object if known.

Metadata is the backend specific metadata as described in the backend docs.

-{
- "SrcFs": "gdrive:",
- "SrcFsType": "drive",
- "DstFs": "newdrive:user",
- "DstFsType": "onedrive",
- "Remote": "test.txt",
- "Size": 6,
- "MimeType": "text/plain; charset=utf-8",
- "ModTime": "2022-10-11T17:53:10.286745272+01:00",
- "IsDir": false,
- "ID": "xyz",
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain1.com",
- "permissions": "...",
- "description": "my nice file",
- "starred": "false"
- }
-}
+{
+ "SrcFs": "gdrive:",
+ "SrcFsType": "drive",
+ "DstFs": "newdrive:user",
+ "DstFsType": "onedrive",
+ "Remote": "test.txt",
+ "Size": 6,
+ "MimeType": "text/plain; charset=utf-8",
+ "ModTime": "2022-10-11T17:53:10.286745272+01:00",
+ "IsDir": false,
+ "ID": "xyz",
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain1.com",
+ "permissions": "...",
+ "description": "my nice file",
+ "starred": "false"
+ }
+}

Metadata field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:

-{
- "Metadata": {
- "btime": "2022-10-11T16:53:11Z",
- "content-type": "text/plain; charset=utf-8",
- "mtime": "2022-10-11T17:53:10.286745272+01:00",
- "owner": "user1@domain2.com",
- "permissions": "...",
- "description": "my nice file [migrated from domain1]",
- "starred": "false"
- }
-}
+{
+ "Metadata": {
+ "btime": "2022-10-11T16:53:11Z",
+ "content-type": "text/plain; charset=utf-8",
+ "mtime": "2022-10-11T17:53:10.286745272+01:00",
+ "owner": "user1@domain2.com",
+ "permissions": "...",
+ "description": "my nice file [migrated from domain1]",
+ "starred": "false"
+ }
+}

-import sys, json
-
-i = json.load(sys.stdin)
-metadata = i["Metadata"]
-# Add tag to description
-if "description" in metadata:
- metadata["description"] += " [migrated from domain1]"
-else:
- metadata["description"] = "[migrated from domain1]"
-# Modify owner
-if "owner" in metadata:
- metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
-o = { "Metadata": metadata }
-json.dump(o, sys.stdout, indent="\t")
+import sys, json
+
+i = json.load(sys.stdin)
+metadata = i["Metadata"]
+# Add tag to description
+if "description" in metadata:
+ metadata["description"] += " [migrated from domain1]"
+else:
+ metadata["description"] = "[migrated from domain1]"
+# Modify owner
+if "owner" in metadata:
+ metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
+o = { "Metadata": metadata }
+json.dump(o, sys.stdout, indent="\t")

-vv --dump mapper.

-q) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.

List of exit codes
-
-0 - success
-1 - Syntax or usage error
-2 - Error not otherwise categorised
+0 - Success
+1 - Error not otherwise categorised
+2 - Syntax or usage error
3 - Directory not found
4 - File not found
5 - Temporary error (one that more retries might fix) (Retry errors)

--verbose or -v is RCLONE_VERBOSE=1, or for -vv, RCLONE_VERBOSE=2.

-vv flag, e.g. rclone version -vv.

stringArray) are treated slightly differently as environment variables can only be defined once. In order to allow a simple mechanism for adding one or many items, the input is treated as a CSV encoded string. For example
+
+Environment Variable                                    Equivalent options
+------------------------------------------------------ ------------------------------------------------------
+RCLONE_EXCLUDE="*.jpg"                                  --exclude "*.jpg"
+RCLONE_EXCLUDE="*.jpg,*.png"                            --exclude "*.jpg" --exclude "*.png"
+RCLONE_EXCLUDE='"*.jpg","*.png"'                        --exclude "*.jpg" --exclude "*.png"
+RCLONE_EXCLUDE='"/directory with comma , in it /**"'    --exclude "/directory with comma , in it /**"
+
+stringArray options are defined as environment variables and options on the command line then all the values will be used.

Config file
RCLONE_CONFIG_ + name of remote + _ + name of config file option and make it all uppercase. Note one implication here is the remote's name must be convertible into a valid environment variable name, so it can only contain letters, digits, or the _ (underscore) character.

--include-from flag is useful where multiple include filter rules are applied to an rclone command.

--include-from implies --exclude ** at the end of an rclone internal filter list. Therefore if you mix --include and --include-from flags with --exclude, --exclude-from, --filter or --filter-from, you must use include rules for all the files you want in the include statement. For more flexibility use the --filter-from flag.

--exclude-from has no effect when combined with --files-from or --files-from-raw flags.

--exclude-from followed by - reads filter rules from standard input.

--include-from has no effect when combined with --files-from or --files-from-raw flags.

--include-from followed by - reads filter rules from standard input.

--filter - Add a file-filtering rule

+ or - format.

+ and exclude rules with -. ! clears existing rules. Rules are processed in the order they are defined.

-vv --dump filters to see how they appear in the final regexp.

filter-file.txt:

# a sample filter rule file
- secret*.jpg
+ *.jpg
+ *.png
+ file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
- /dir/Trash/**
+ /dir/**
# exclude everything else
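To apply a rule file like the sample above, pass it with --filter-from, for example (remote: is a placeholder and the file name assumes it was saved as filter-file.txt, as mentioned above):

rclone ls remote: --filter-from filter-file.txt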
@@ -8316,6 +8552,7 @@ file2.avi

--include, --include-from, --exclude, --exclude-from, --filter and --filter-from) are ignored when --files-from is used.

--files-from expects a list of files as its input. Leading or trailing whitespace is stripped from the input lines. Lines starting with # or ; are ignored.

--files-from followed by - reads the list of files from standard input.

--files-from flag traverse the remote, treating the names in --files-from as a set of filters.

--no-traverse and --files-from flags are used together an rclone command does not traverse the remote. Instead it addresses each path/file named in the file individually. For each path/file name, that requires typically 1 API call. This can be efficient for a short --files-from list and a remote containing many files.

--files-from file are missing from the source remote.

Supported parameters
--rc
---rc-addr=IP
---rc-cert=KEY
---rc-client-ca=PATH
---rc-htpasswd=PATH
---rc-key=PATH
---rc-max-header-bytes=VALUE
---rc-min-tls-version=VALUE
--rc-user=VALUE
@@ -8526,11 +8763,11 @@ dir1/dir2/dir3/.ignore
--rc-pass=VALUE
--rc-realm=VALUE
---rc-server-read-timeout=DURATION
---rc-server-write-timeout=DURATION
---rc-serve
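A hedged example of enabling the remote control API while a transfer runs, using only the parameters listed above (remote names and credentials are placeholders, not part of this change):

rclone copy source: dest: --rc --rc-addr :5572 --rc-user myuser --rc-pass mypassword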
Accessing the remote control via the rclone rc command
rclone rc command.
-$ rclone rc rc/noop param1=one param2=two
{
"param1": "one",
"param2": "two"
}

rclone rc on its own to see the help for the installed remote control commands.

http://localhost:5572/, use the --url option to specify it:
+$ rclone rc --url http://some.remote:1234/ rc/noop

--unix-socket option instead:
+$ rclone rc --unix-socket /tmp/rclone.sock rc/noop

rclone rc on its own, without any commands, to see the help for the installed remote control commands. Note that this also needs to connect to the remote server.

JSON input
rclone rc also supports a --json flag which can be used to send more complicated input parameters.

$ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
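Since rc/noop simply echoes its parameters back (as in the earlier example), this call should return roughly:

{
    "p1": [1, "2", null, 4],
    "p2": {
        "a": 1,
        "b": 2
    }
}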
@@ -9931,6 +10172,7 @@ rclone rc options/set --json '{"main": {"LogLevel": 8}}&
fs - select the VFS in use (optional)
id - a numeric ID as returned from vfs/queue
expiry - a new expiry time as floating point seconds
relative - if set, expiry is to be treated as relative to the current expiry (optional, boolean)
-
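For illustration, assuming this documents the vfs/queue-set-expiry call (the command name is not shown in this hunk, so treat it as an assumption), a request might look like:

rclone rc vfs/queue-set-expiry fs=remote: id=123 expiry=60 relative=true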
+
+Cloudinary
+MD5
+R
+No
+Yes
+-
+-
+
-Dropbox
DBHASH ¹
R
@@ -10172,7 +10423,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
-
-
+
-Enterprise File Fabric
-
R/W
@@ -10181,7 +10432,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W
-
+
-Files.com
MD5, CRC32
DR/W
@@ -10190,7 +10441,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R
-
+
-FTP
-
R/W ¹⁰
@@ -10199,7 +10450,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
-
-
+
-Gofile
MD5
DR/W
@@ -10208,7 +10459,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R
-
+
-Google Cloud Storage
MD5
R/W
@@ -10217,7 +10468,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W
-
+
-Google Drive
MD5, SHA1, SHA256
DR/W
@@ -10226,7 +10477,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R/W
DRWU
+
-Google Photos
-
-
@@ -10235,7 +10486,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R
-
+
-HDFS
-
R/W
@@ -10244,7 +10495,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
-
-
+
-HiDrive
HiDrive ¹²
R/W
@@ -10253,7 +10504,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
-
-
+
+HTTP
-
R
@@ -10262,6 +10513,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
R
-
+
iCloud Drive
+-
+R
+No
+No
+-
+-
+
Internet Archive
MD5, SHA1, CRC32
@@ -10382,7 +10642,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
pCloud
MD5, SHA1 ⁷
-R
+R/W
No
No
W
@@ -11215,6 +11475,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
+
+Cloudinary
+No
+No
+No
+No
+No
+No
+Yes
+No
+No
+No
+No
+
-Enterprise File Fabric
Yes
Yes
@@ -11228,7 +11502,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
Yes
+
-Files.com
Yes
Yes
@@ -11242,7 +11516,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
Yes
+
-FTP
No
No
@@ -11256,7 +11530,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
Yes
+
-Gofile
Yes
Yes
@@ -11270,21 +11544,21 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
Yes
+
-Google Cloud Storage
Yes
Yes
No
No
No
-Yes
+No
Yes
No
No
No
No
+
-Google Drive
Yes
Yes
@@ -11298,7 +11572,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
Yes
+
-Google Photos
No
No
@@ -11312,7 +11586,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
No
+
-HDFS
Yes
No
@@ -11326,7 +11600,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
Yes
Yes
+
-HiDrive
Yes
Yes
@@ -11340,7 +11614,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
Yes
+
+HTTP
No
No
@@ -11354,6 +11628,20 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
Yes
+
iCloud Drive
+Yes
+Yes
+Yes
+Yes
+No
+No
+No
+No
+No
+No
+Yes
+
ImageKit
Yes
@@ -11505,7 +11793,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
No
No
No
-No
+Yes
Yes
@@ -11868,6 +12156,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
-I, --ignore-times Don't skip items that match size and time - transfer all unconditionally
--immutable Do not modify files, fail if existing files have been modified
--inplace Download directly to destination file instead of atomic download to temp/rename
+ -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--max-backlog int Maximum number of objects in sync or check backlog (default 10000)
--max-duration Duration Maximum duration rclone will transfer data for (default 0s)
--max-transfer SizeSuffix Maximum size of data to transfer (default off)
@@ -11932,7 +12221,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.68.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.69.0")
-Performance
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -12033,7 +12322,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
RC
--rc Enable the remote control server
- --rc-addr stringArray IPaddress:Port or :Port to bind server to (default ["localhost:5572"])
+ --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572)
--rc-allow-origin string Origin which cross-domain request (CORS) can be executed from
--rc-baseurl string Prefix for URLs - leave blank for root
--rc-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -12063,7 +12352,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--rc-web-gui-update Check and update to latest version of web gui

Metrics
-   --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to (default [""])
+   --metrics-addr stringArray IPaddress:Port or :Port to bind metrics server to
--metrics-allow-origin string Origin which cross-domain request (CORS) can be executed from
--metrics-baseurl string Prefix for URLs - leave blank for root
--metrics-cert string TLS PEM key (concatenation of certificate and CA certificate)
@@ -12097,6 +12386,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-description string Description of the remote
--azureblob-directory-markers Upload an empty object with a trailing slash when a new directory is created
--azureblob-disable-checksum Don't store MD5 checksum with object metadata
+ --azureblob-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azureblob-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8)
--azureblob-endpoint string Endpoint for the service
--azureblob-env-auth Read credentials from runtime (environment variables, CLI or MSI)
@@ -12114,6 +12404,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--azureblob-tenant string ID of the service principal's tenant. Also called its directory ID
--azureblob-upload-concurrency int Concurrency for multipart uploads (default 16)
--azureblob-upload-cutoff string Cutoff for switching to chunked upload (<= 256 MiB) (deprecated)
+ --azureblob-use-az Use Azure CLI tool az for authentication
--azureblob-use-emulator Uses local storage emulator if provided as 'true'
--azureblob-use-msi Use a managed service identity to authenticate (only works in Azure)
--azureblob-username string User name (usually an email address)
@@ -12163,6 +12454,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--box-auth-url string Auth server URL
--box-box-config-file string Box App config.json location
--box-box-sub-type string (default "user")
+ --box-client-credentials Use client credentials OAuth flow
--box-client-id string OAuth Client Id
--box-client-secret string OAuth Client Secret
--box-commit-retries int Max number of times to try committing a multipart file (default 100)
@@ -12201,6 +12493,14 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--chunker-fail-hard Choose how chunker should handle files with missing or invalid chunks
--chunker-hash-type string Choose how chunker handles hash sums (default "md5")
--chunker-remote string Remote to chunk/unchunk
+ --cloudinary-api-key string Cloudinary API Key
+ --cloudinary-api-secret string Cloudinary API Secret
+ --cloudinary-cloud-name string Cloudinary Environment Name
+ --cloudinary-description string Description of the remote
+ --cloudinary-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Question,Asterisk,Pipe,Hash,Percent,BackSlash,Del,Ctl,RightSpace,InvalidUtf8,Dot)
+ --cloudinary-eventually-consistent-delay Duration Wait N seconds for eventual consistency of the databases that support the backend operation (default 0s)
+ --cloudinary-upload-prefix string Specify the API endpoint for environments out of the US
+ --cloudinary-upload-preset string Upload Preset to select asset manipulation on upload
--combine-description string Description of the remote
--combine-upstreams SpaceSepList Upstreams for combining
--compress-description string Description of the remote
@@ -12227,6 +12527,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--drive-auth-owner-only Only consider files owned by the authenticated user
--drive-auth-url string Auth server URL
--drive-chunk-size SizeSuffix Upload chunk size (default 8Mi)
+ --drive-client-credentials Use client credentials OAuth flow
--drive-client-id string Google Application Client Id
--drive-client-secret string OAuth Client Secret
--drive-copy-shortcut-content Server side copy contents of shortcuts instead of the shortcut
@@ -12277,6 +12578,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--dropbox-batch-size int Max number of files in upload batch
--dropbox-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
--dropbox-chunk-size SizeSuffix Upload chunk size (< 150Mi) (default 48Mi)
+ --dropbox-client-credentials Use client credentials OAuth flow
--dropbox-client-id string OAuth Client Id
--dropbox-client-secret string OAuth Client Secret
--dropbox-description string Description of the remote
@@ -12323,6 +12625,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-host string FTP host to connect to
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
+ --ftp-no-check-upload Don't check the upload is OK
--ftp-pass string FTP password (obscured)
--ftp-port int FTP port number (default 21)
--ftp-shut-timeout Duration Maximum time to wait for data connection closing status (default 1m0s)
@@ -12331,10 +12634,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
--ftp-user string FTP username (default "$USER")
--ftp-writing-mdtm Use MDTM to set modification time (VsFtpd quirk)
+ --gcs-access-token string Short-lived access token
--gcs-anonymous Access public buckets and objects without credentials
--gcs-auth-url string Auth server URL
--gcs-bucket-acl string Access Control List for new buckets
--gcs-bucket-policy-only Access checks should use bucket-level IAM policies
+ --gcs-client-credentials Use client credentials OAuth flow
--gcs-client-id string OAuth Client Id
--gcs-client-secret string OAuth Client Secret
--gcs-decompress If set this will decompress gzip encoded objects
@@ -12363,11 +12668,13 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--gphotos-batch-mode string Upload file batching sync|async|off (default "sync")
--gphotos-batch-size int Max number of files in upload batch
--gphotos-batch-timeout Duration Max time to allow an idle upload batch before uploading (default 0s)
+ --gphotos-client-credentials Use client credentials OAuth flow
--gphotos-client-id string OAuth Client Id
--gphotos-client-secret string OAuth Client Secret
--gphotos-description string Description of the remote
--gphotos-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gphotos-include-archived Also view and download archived media
+ --gphotos-proxy string Use the gphotosdl proxy for downloading the full resolution images
--gphotos-read-only Set to make the Google Photos backend read only
--gphotos-read-size Set to read the size of media items
--gphotos-start-year int Year limits the photos to be downloaded to those which are uploaded after the given year (default 2000)
@@ -12386,6 +12693,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--hdfs-username string Hadoop user name
--hidrive-auth-url string Auth server URL
--hidrive-chunk-size SizeSuffix Chunksize for chunked uploads (default 48Mi)
+ --hidrive-client-credentials Use client credentials OAuth flow
--hidrive-client-id string OAuth Client Id
--hidrive-client-secret string OAuth Client Secret
--hidrive-description string Description of the remote
@@ -12405,6 +12713,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--http-no-head Don't use HEAD requests
--http-no-slash Set this if the site doesn't end directories with /
--http-url string URL of HTTP host to connect to
+ --iclouddrive-apple-id string Apple ID
+ --iclouddrive-client-id string Client id (default "d39ba9916b7251055b22c7f910e2ea796ee65e98b2ddecea8f5dde8d9d1a815d")
+ --iclouddrive-description string Description of the remote
+ --iclouddrive-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --iclouddrive-password string Password (obscured)
--imagekit-description string Description of the remote
--imagekit-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf8,Dot,SquareBracket)
--imagekit-endpoint string You can find your ImageKit.io URL endpoint in your [dashboard](https://imagekit.io/dashboard/developer/api-keys)
@@ -12422,6 +12735,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
--jottacloud-auth-url string Auth server URL
+ --jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
--jottacloud-client-secret string OAuth Client Secret
--jottacloud-description string Description of the remote
@@ -12443,11 +12757,11 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--koofr-user string Your user name
--linkbox-description string Description of the remote
--linkbox-token string Token from https://www.linkbox.to/admin/account
- -l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
--local-case-insensitive Force the filesystem to report itself as case insensitive
--local-case-sensitive Force the filesystem to report itself as case sensitive
--local-description string Description of the remote
--local-encoding Encoding The encoding for the backend (default Slash,Dot)
+ --local-links Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend
--local-no-check-updated Don't check to see if the files change during upload
--local-no-clone Disable reflink cloning for server-side copies
--local-no-preallocate Disable preallocation of disk space for transferred files
@@ -12459,6 +12773,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated)
--mailru-auth-url string Auth server URL
--mailru-check-hash What should copy do if file checksum is mismatched or invalid (default true)
+ --mailru-client-credentials Use client credentials OAuth flow
--mailru-client-id string OAuth Client Id
--mailru-client-secret string OAuth Client Secret
--mailru-description string Description of the remote
@@ -12489,6 +12804,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-auth-url string Auth server URL
--onedrive-av-override Allows download of files the server thinks has a virus
--onedrive-chunk-size SizeSuffix Chunk size to upload files with - must be multiple of 320k (327,680 bytes) (default 10Mi)
+ --onedrive-client-credentials Use client credentials OAuth flow
--onedrive-client-id string OAuth Client Id
--onedrive-client-secret string OAuth Client Secret
--onedrive-delta If set rclone will use delta listing to implement recursive listings
@@ -12508,11 +12824,12 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--onedrive-region string Choose national cloud region for OneDrive (default "global")
--onedrive-root-folder-id string ID of the root folder
--onedrive-server-side-across-configs Deprecated: use --server-side-across-configs instead
+ --onedrive-tenant string ID of the service principal's tenant. Also called its directory ID
--onedrive-token string OAuth Access Token as a JSON blob
--onedrive-token-url string Token server url
--oos-attempt-resume-upload If true attempt to resume previously started multipart upload for the object
--oos-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
- --oos-compartment string Object storage compartment OCID
+ --oos-compartment string Specify compartment OCID, if you need to list buckets
--oos-config-file string Path to OCI config file (default "~/.oci/config")
--oos-config-profile string Profile name inside the oci config file (default "Default")
--oos-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
@@ -12541,6 +12858,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--opendrive-password string Password (obscured)
--opendrive-username string Username
--pcloud-auth-url string Auth server URL
+ --pcloud-client-credentials Use client credentials OAuth flow
--pcloud-client-id string OAuth Client Id
--pcloud-client-secret string OAuth Client Secret
--pcloud-description string Description of the remote
@@ -12551,26 +12869,25 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--pcloud-token string OAuth Access Token as a JSON blob
--pcloud-token-url string Token server url
--pcloud-username string Your pcloud username
- --pikpak-auth-url string Auth server URL
--pikpak-chunk-size SizeSuffix Chunk size for multipart uploads (default 5Mi)
- --pikpak-client-id string OAuth Client Id
- --pikpak-client-secret string OAuth Client Secret
--pikpak-description string Description of the remote
+ --pikpak-device-id string Device ID used for authorization
--pikpak-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,RightSpace,RightPeriod,InvalidUtf8,Dot)
--pikpak-hash-memory-limit SizeSuffix Files bigger than this will be cached on disk to calculate hash if required (default 10Mi)
+ --pikpak-no-media-link Use original file links instead of media links
--pikpak-pass string Pikpak password (obscured)
--pikpak-root-folder-id string ID of the root folder
- --pikpak-token string OAuth Access Token as a JSON blob
- --pikpak-token-url string Token server url
--pikpak-trashed-only Only show files that are in the trash
--pikpak-upload-concurrency int Concurrency for multipart uploads (default 5)
--pikpak-use-trash Send files to the trash instead of deleting permanently (default true)
--pikpak-user string Pikpak username
+ --pikpak-user-agent string HTTP user agent for pikpak (default "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0")
--pixeldrain-api-key string API key for your pixeldrain account
--pixeldrain-api-url string The API endpoint to connect to. In the vast majority of cases it's fine to leave (default "https://pixeldrain.com/api")
--pixeldrain-description string Description of the remote
--pixeldrain-root-folder-id string Root of the filesystem to use (default "me")
--premiumizeme-auth-url string Auth server URL
+ --premiumizeme-client-credentials Use client credentials OAuth flow
--premiumizeme-client-id string OAuth Client Id
--premiumizeme-client-secret string OAuth Client Secret
--premiumizeme-description string Description of the remote
@@ -12588,6 +12905,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--protondrive-replace-existing-draft Create a new revision when filename conflict is detected
--protondrive-username string The username of your proton account
--putio-auth-url string Auth server URL
+ --putio-client-credentials Use client credentials OAuth flow
--putio-client-id string OAuth Client Id
--putio-client-secret string OAuth Client Secret
--putio-description string Description of the remote
@@ -12621,6 +12939,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--s3-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 4.656Gi)
--s3-decompress If set this will decompress gzip encoded objects
--s3-description string Description of the remote
+ --s3-directory-bucket Set to use AWS Directory Buckets
--s3-directory-markers Upload an empty object with a trailing slash when a new directory is created
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-disable-http2 Disable usage of http2 for S3 backends
@@ -12702,6 +13021,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-pass string SSH password, leave blank to use ssh-agent (obscured)
--sftp-path-override string Override path used by SSH shell commands
--sftp-port int SSH port number (default 22)
+ --sftp-pubkey string SSH public certificate for public certificate based authentication
--sftp-pubkey-file string Optional path to public key file
--sftp-server-command string Specifies the path or command to run a sftp server on the remote host
--sftp-set-env SpaceSepList Environment variables to pass to sftp and commands
@@ -12717,6 +13037,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--sftp-user string SSH username (default "$USER")
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
+ --sharefile-client-credentials Use client credentials OAuth flow
--sharefile-client-id string OAuth Client Id
--sharefile-client-secret string OAuth Client Secret
--sharefile-description string Description of the remote
@@ -12806,6 +13127,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
+ --webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
--webdav-description string Description of the remote
@@ -12821,6 +13143,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--webdav-user string User name
--webdav-vendor string Name of the WebDAV site/service/software you are using
--yandex-auth-url string Auth server URL
+ --yandex-client-credentials Use client credentials OAuth flow
--yandex-client-id string OAuth Client Id
--yandex-client-secret string OAuth Client Secret
--yandex-description string Description of the remote
@@ -12830,13 +13153,15 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--yandex-token string OAuth Access Token as a JSON blob
--yandex-token-url string Token server url
--zoho-auth-url string Auth server URL
+ --zoho-client-credentials Use client credentials OAuth flow
--zoho-client-id string OAuth Client Id
--zoho-client-secret string OAuth Client Secret
--zoho-description string Description of the remote
--zoho-encoding Encoding The encoding for the backend (default Del,Ctl,InvalidUtf8)
--zoho-region string Zoho region to connect to
--zoho-token string OAuth Access Token as a JSON blob
- --zoho-token-url string Token server url
Docker Volume Plugin
Introduction
-o remote=:backend:dir/subdirpath part is optional.-- CLI prefix. Optionally you can use underscores instead of dashes in option names. For example, --vfs-cache-mode full becomes -o vfs-cache-mode=full or -o vfs_cache_mode=full. Boolean CLI flags without value will gain the true value, e.g. --allow-other becomes -o allow-other=true or -o allow_other=true.remote. If this is a wrapping backend like alias, chunker or crypt, you cannot provide options for the referred to remote or backend. This limitation is imposed by the rclone connection string parser. The only workaround is to feed plugin with rclone.conf or configure plugin arguments (see below).Special Volume Options
mount-type determines the mount method and in general can be one of: mount, cmount, or mount2. This can be aliased as mount_type. It should be noted that the managed rclone docker plugin currently does not support the cmount method and mount2 is rarely needed. This option defaults to the first found method, which is usually mount so you generally won't need it.
docker plugin disable rclone # disable the plugin to ensure no interference
+sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state
+docker plugin enable rclone # re-enable the plugin afterward
Caveats
docker volume update. It may be tempting to invoke docker volume create with updated options on an existing volume, but there is a gotcha: the command will do nothing, it won't even return an error. I hope that docker maintainers will fix this some day. In the meantime be aware that you must remove your volume before recreating it with new settings:
docker volume remove my_vol
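After removing it, recreating the volume with the new options might look like this (a minimal sketch; the remote mys3:bucket/path and the option values are illustrative placeholders, not part of the original example):
docker volume create my_vol -d rclone -o remote=mys3:bucket/path -o allow-other=true -o vfs-cache-mode=full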
@@ -13420,8 +13749,9 @@ rclone copy Path1 Path2 [--create-empty-src-dirs]Lock file
~/.cache/rclone/bisync/PATH1..PATH2.lck on Linux. If bisync should crash or hang, the lock file will remain in place and block any further runs of bisync for the same paths. Delete the lock file as part of debugging the situation. The lock file effectively blocks follow-on (e.g., scheduled by cron) runs when the prior invocation is taking a long time. The lock file contains PID of the blocking process, which may help in debug. Lock files can be set to automatically expire after a certain amount of time, using the --max-lock flag.Return codes
-rclone bisync returns the following codes to calling program: - 0 on a successful run, - 1 for a non-critical failing run (a rerun may be successful), - 2 for a critically aborted run (requires a --resync to recover).Exit codes
+rclone bisync returns the following codes to calling program: - 0 on a successful run, - 1 for a non-critical failing run (a rerun may be successful), - 2 on syntax or usage error, - 7 for a critically aborted run (requires a --resync to recover).Graceful Shutdown
SIGINT or pressing Ctrl+C during a run. Once triggered, bisync will use best efforts to exit cleanly before the timer runs out. If bisync is in the middle of transferring files, it will attempt to cleanly empty its queue by finishing what it has started but not taking more. If it cannot do so within 30 seconds, it will cancel the in-progress transfers at that point and then give itself a maximum of 60 seconds to wrap up, save its state for next time, and exit. With the -vP flags you will see constant status updates and a final confirmation of whether or not the graceful shutdown was successful.SIGINT or Ctrl+C will trigger an immediate, un-graceful exit, which will leave things in a messier state. Usually a robust recovery will still be possible if using --recover mode, otherwise you will need to do a --resync.Increasing performance
+Using server-side copy
+
+rclone copy s3:source-bucket s3:destination-bucket
Increasing the rate of API requests
+--transfers and --checkers options.
+rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
Data integrity
+X-Amz-Content-Sha256 or a Content-Md5 header to guard against corruption of the HTTP body. The HTTP Header is protected by the signature passed in the Authorization header.Single part uploads
+
+
+Content-Md5 using the MD5 hash read from the source. The provider checks this is correct on receipt of the data.--s3-no-head) to read the ETag back which is the MD5 of the file and checks that with what it sent.--s3-upload-cutoff 0 so all files are uploaded as multipart uploads.Multipart uploads
+--s3-upload-cutoff rclone splits the file into multiple parts for upload.
+
+X-Amz-Content-Sha256 and a Content-Md5
+
+X-Amz-Content-Sha256--s3-no-head) and checks the ETag is what it expects (in this case it should be the MD5 sum of all the MD5 sums of all the parts with the number of parts on the end).X-Amz-Meta-Md5chksum with it as the ETag for a multipart upload can't easily be checked against the file as the chunk size must be known in order to calculate it.Downloads
+X-Amz-Meta-Md5chksum metadata (if present) which rclone uploads with multipart uploads.Further checking
+--s3-no-head).rclone check to check the hashes locally vs the remote.rclone check --download which will download the files and check them against the local copies. (Note that this doesn't use disk to do this - it streams them in memory).Versions
rclone backend versioning command) when rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available.--s3-versions flag.Multipart uploads
+Multipart uploads
--s3-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).--s3-upload-cutoff 0 and force all the files to be uploaded as multipart.Standard options
---s3-provider
+
+
+
-
@@ -15345,7 +15727,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
Advanced options
---s3-bucket-acl
--s3-directory-bucket
+Content-Md5 headers are sent and ensure ETag headers are not interpreted as MD5 sums. X-Amz-Meta-Md5chksum will be set on all objects whether single or multipart uploaded.no_check_bucket = true.
+
+Content-Encoding: gzip
+
+rclone mkdirrclone rmdir yetrclone lsf at the top level.directory_markers = true but it doesn't.
+
--s3-sdk-log-mode
Providers
AWS S3
AWS Directory Buckets
+directory_buckets = true config parameter or use --s3-directory-buckets.
+
+AWS Snowball Edge
rclone lsf r2: to see your buckets and rclone lsf r2:bucket to look within a bucket.no_check_bucket = true for object uploads to work correctly.Content-Encoding: gzip by default which is a deviation from what AWS does. If this is causing a problem then upload the files with --header-upload "Cache-Control: no-transform"Content-Encoding: gzip will never appear in the metadata on Cloudflare.Dreamhost
+rclone copy /path/to/files minio:bucket
Outscale
+
+[outscale]
+type = s3
+provider = Outscale
+env_auth = false
+access_key_id = ABCDEFGHIJ0123456789
+secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+region = eu-west-2
+endpoint = oos.eu-west-2.outscale.com
+acl = private
+rclone config to go through the interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+Enter name for new remote.
+name> outscale
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+ X / Amazon S3 Compliant Storage Providers including AWS, ...Outscale, ...and others
+ \ (s3)
+[snip]
+Storage> outscale
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / OUTSCALE Object Storage (OOS)
+ \ (Outscale)
+[snip]
+provider> Outscale
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth>
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ABCDEFGHIJ0123456789
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+Option region.
+Region where your bucket will be created and your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Paris, France
+ \ (eu-west-2)
+ 2 / New Jersey, USA
+ \ (us-east-2)
+ 3 / California, USA
+ \ (us-west-1)
+ 4 / SecNumCloud, Paris, France
+ \ (cloudgouv-eu-west-1)
+ 5 / Tokyo, Japan
+ \ (ap-northeast-1)
+region> 1
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Outscale EU West 2 (Paris)
+ \ (oos.eu-west-2.outscale.com)
+ 2 / Outscale US east 2 (New Jersey)
+ \ (oos.us-east-2.outscale.com)
+ 3 / Outscale EU West 1 (California)
+ \ (oos.us-west-1.outscale.com)
+ 4 / Outscale SecNumCloud (Paris)
+ \ (oos.cloudgouv-eu-west-1.outscale.com)
+ 5 / Outscale AP Northeast 1 (Japan)
+ \ (oos.ap-northeast-1.outscale.com)
+endpoint> 1
+Option acl.
+Canned ACL used when creating buckets and storing or copying objects.
+This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
+For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
+Note that this ACL is applied when server-side copying objects as S3
+doesn't copy the ACL from the source but rather writes a fresh one.
+If the acl is an empty string then no X-Amz-Acl: header is added and
+the default (private) will be used.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ / Owner gets FULL_CONTROL.
+ 1 | No one else has access rights (default).
+ \ (private)
+[snip]
+acl> 1
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Configuration complete.
+Options:
+- type: s3
+- provider: Outscale
+- access_key_id: ABCDEFGHIJ0123456789
+- secret_access_key: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+- endpoint: oos.eu-west-2.outscale.com
+Keep this "outscale" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
Qiniu Cloud Object Storage (Kodo)
storage_class. So you can configure your remote with the storage_class = GLACIER option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)storage_class. So you can configure your remote with the storage_class = GLACIER option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)Seagate Lyve Cloud
remote - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.
+rclone copy /path/to/files seaweedfs_s3:foo
Selectel
+Selectel provider type.rclone config to make a new provider like this
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> selectel
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including ..., Selectel, ...
+ \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / Selectel Object Storage
+ \ (Selectel)
+[snip]
+provider> Selectel
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+ \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+ \ (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option region.
+Region where your data stored.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / St. Petersburg
+ \ (ru-1)
+region> 1
+
+Option endpoint.
+Endpoint for Selectel Object Storage.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+ 1 / Saint Petersburg
+ \ (s3.ru-1.storage.selcloud.ru)
+endpoint> 1
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: Selectel
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- region: ru-1
+- endpoint: s3.ru-1.storage.selcloud.ru
+Keep this "selectel" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+[selectel]
+type = s3
+provider = Selectel
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+region = ru-1
+endpoint = s3.ru-1.storage.selcloud.ru
Wasabi
rclone config setting the provider Netease. This will automatically set force_path_style = false which is necessary for it to run properly.Petabox
rclone configrclone config
@@ -19069,6 +19706,7 @@ rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHid
No remotes found, make a new one?
n) New remote
@@ -19054,6 +19690,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
{
"daysFromHidingToDeleting": 1,
"daysFromUploadingToHiding": null,
+ "daysFromStartingToCancelingUnfinishedLargeFiles": null,
"fileNamePrefix": ""
}
]
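As an illustration, and assuming the new field can be set with -o in the same way as the existing lifecycle values (this command is a sketch, not taken from the original text):
rclone backend lifecycle b2:bucket -o daysFromStartingToCancelingUnfinishedLargeFiles=1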
cleanup
@@ -19141,7 +19779,7 @@ If not sure try Y. If Y failed, try N.
y) Yes
n) No
y/n> y
-If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=XXXXXXXXXXXXXXXXXXXXXX
Log in and authorize rclone for access
Waiting for code...
Got code
@@ -19384,6 +20022,16 @@ y/e/d> y
--box-client-credentials
+
+
--box-root-folder-id
Cloudinary
+About Cloudinary
+Accounts & Pricing
+Securing Your Credentials
+Configuration
+API Key and API Secret for your account from the developer section.rclone config
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter the name for the new remote.
+name> cloudinary-media-library
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / cloudinary.com
+\ (cloudinary)
+[snip]
+Storage> cloudinary
+
+Option cloud_name.
+You can find your cloudinary.com cloud_name in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+cloud_name> ****************************
+
+Option api_key.
+You can find your cloudinary.com api key in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+Enter a value.
+api_key> ****************************
+
+Option api_secret.
+You can find your cloudinary.com api secret in your [dashboard](https://console.cloudinary.com/pm/developer-dashboard)
+This value must be a single character, one of the following: y, g.
+y/g> y
+Enter a value.
+api_secret> ****************************
+
+Option upload_prefix.
+[Upload prefix](https://cloudinary.com/documentation/cloudinary_sdks#configuration_parameters) to specify alternative data center
+Enter a value.
+upload_prefix>
+
+Option upload_preset.
+[Upload presets](https://cloudinary.com/documentation/upload_presets) can be defined for different upload profiles
+Enter a value.
+upload_preset>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: cloudinary
+- api_key: ****************************
+- api_secret: ****************************
+- cloud_name: ****************************
+- upload_prefix:
+- upload_preset:
+
+Keep this "cloudinary-media-library" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
rclone lsd cloudinary-media-library:
rclone mkdir cloudinary-media-library:directory
rclone ls cloudinary-media-library:directory
Modified time and hashes
+Standard options
+--cloudinary-cloud-name
+
+
+--cloudinary-api-key
+
+
+--cloudinary-api-secret
+
+
+--cloudinary-upload-prefix
+
+
+--cloudinary-upload-preset
+
+
+Advanced options
+--cloudinary-encoding
+
+
+--cloudinary-eventually-consistent-delay
+
+
+--cloudinary-description
+
+
Citrix ShareFile
Configuration
+Configuration
rclone config walks you through it.remote. First run:
@@ -20360,7 +21172,7 @@ y/e/d> y
rclone configStandard options
+Standard options
--sharefile-client-id
Advanced options
+Advanced options
--sharefile-token
--sharefile-client-credentials
+
+
--sharefile-upload-cutoff
Configuration
+Configuration
secret.crypt, first set up the underlying remote. Follow the rclone config instructions for the specific backend.remote. We will configure a path path within this remote to contain the encrypted content. Anything inside remote:path will be encrypted and anything outside will not.rclone cryptcheck command to check the integrity of an encrypted remote instead of rclone check which can't check the checksums properly.Standard options
+Standard options
--crypt-remote
Advanced options
+Advanced options
--crypt-server-side-across-configs
Warning
Compress remote adds compression to another remote. It is best used with remotes containing many large compressible files.Configuration
+Configuration
Current remotes:
@@ -21039,7 +21861,7 @@ y/e/d> yFile names
*.###########.gz where * is the base file and the # part is base64 encoded size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.Standard options
+Standard options
--compress-remote
Advanced options
+Advanced options
--compress-level
upstreams parameter in the config like thisupstreams = images=s3:imagesbucket files=drive:important/filesrclone config you will specify the upstreams remotes as a space separated list. The upstream remotes can either be a local paths or other remotes.Configuration
+Configuration
remote for the example above. First run: rclone configrclone config file) then you can access all the shared drives in one place with the AllDrives: remote.Standard options
+Standard options
--combine-upstreams
Advanced options
+Advanced options
--combine-description
remote:pathremote:directory/subdirectory.Configuration
+Configuration
rclone config walks you through it.remote. First run:
@@ -21337,7 +22159,7 @@ y/e/d> y
rclone config--dropbox-batch-mode async then do a final transfer with --dropbox-batch-mode sync (the default).Standard options
+Standard options
--dropbox-client-id
Advanced options
+Advanced options
--dropbox-token
--dropbox-client-credentials
+
+
--dropbox-chunk-size
Enterprise File Fabric
Configuration
+Configuration
rclone config walks you through it.remote. First run:
@@ -21648,7 +22480,7 @@ y/e/d> y
120673757,My contacts/
120673761,S3 Storage/
rclone config120673761.Standard options
+Standard options
--filefabric-url
Advanced options
+Advanced options
--filefabric-token
Files.com
rclone config walks you through it.Configuration
+Configuration
remote. First run:rclone configrclone ls remote:/home/local/directory to the remote directory, deleting any excess files in the directory.
-rclone sync --interactive /home/local/directory remote:dirStandard options
+Standard options
--filescom-site
Advanced options
+Advanced options
--filescom-api-key
remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.Configuration
+Configuration
remote, runrclone configStandard options
+Standard options
--ftp-host
Advanced options
+Advanced options
--ftp-concurrency
--ftp-socks-proxy
+ Supports the format user:pass@host:port, user@host:port, host:port.
-
- Example:
-
- myUser:myPass@localhost:9005
- myUser:myPass@localhost:9005
+--ftp-no-check-upload
+
+
--ftp-encoding
Configuration
+Configuration
remote. First run: rclone config; so you could setroot_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0Files directory.Standard options
+Standard options
--gofile-access-token
Advanced options
+Advanced options
--gofile-root-folder-id
rclone dedupe to fix duplicated files.Google Cloud Storage
remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.Configuration
+Configuration
rclone config walks you through it.remote. First run:
@@ -22608,6 +23449,40 @@ y/e/d> y
rclone configUser permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.service_account_file prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials with the actual contents of the file instead, or set the equivalent environment variable.Service Account Authentication with Access Tokens
+1. Create a service account using
+
+gcloud iam service-accounts create gcs-read-only
+2. Attach a Viewer (read-only) or User (read-write) role to the service account
+
+ $ PROJECT_ID=my-project
+ $ gcloud --verbose iam service-accounts add-iam-policy-binding \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+ --role=roles/storage.objectViewer
+
+3. Get a temporary access key for the service account
+
+$ gcloud auth application-default print-access-token \
+ --impersonate-service-account \
+ gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com
+
+ya29.c.c0ASRK0GbAFEewXD [truncated]
+4. Update
+access_token setting
+CTRL-C when you see waiting for code. This will save the config without doing oauth flow
+rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
+5. Run rclone as usual
+
+rclone ls dev-gcs:${MY_BUCKET}/
More Info on Service Accounts
+
Anonymous Access
anonymous to true. With unauthorized access you can't write or create files but only read or list those buckets and objects that have public read access.Application Default Credentials
@@ -22665,7 +23540,7 @@ y/e/d> y
Standard options
+Standard options
--gcs-client-id
Advanced options
+Advanced options
--gcs-token
--gcs-client-credentials
+
+
+--gcs-access-token
+
+
--gcs-directory-markers
Google Drive
drive:pathdrive:directory/subdirectory.Configuration
+Configuration
rclone config walks you through it.remote. First run:
@@ -23538,81 +24433,86 @@ trashed=false and 'c' in parents
rclone configJSON Text Format for Google Apps scripts
+
+md
+text/markdown
+Markdown Text Format
+
-odp
application/vnd.oasis.opendocument.presentation
Openoffice Presentation
+
-ods
application/vnd.oasis.opendocument.spreadsheet
Openoffice Spreadsheet
+
-ods
application/x-vnd.oasis.opendocument.spreadsheet
Openoffice Spreadsheet
+
-odt
application/vnd.oasis.opendocument.text
Openoffice Document
+
-pdf
application/pdf
Adobe PDF Format
+
-pjpeg
image/pjpeg
Progressive JPEG Image
+
-png
image/png
PNG Image Format
+
-pptx
application/vnd.openxmlformats-officedocument.presentationml.presentation
Microsoft Office Powerpoint
+
-rtf
application/rtf
Rich Text Format
+
-svg
image/svg+xml
Scalable Vector Graphics Format
+
-tsv
text/tab-separated-values
Standard TSV format for spreadsheets
+
-txt
text/plain
Plain Text
+
-wmf
application/x-msmetafile
Windows Meta File
+
-xls
application/vnd.ms-excel
Classic Excel file
+
-xlsx
application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
Microsoft Office Spreadsheet
+
-zip
application/zip
A ZIP file of HTML, Images CSS
@@ -23651,7 +24551,7 @@ trashed=false and 'c' in parents
Standard options
+Standard options
--drive-client-id
Advanced options
+Advanced options
--drive-token
--drive-client-credentials
+
+
--drive-root-folder-id
rescue
+
+rclone backend rescue remote: [options] [<arguments>+]
+rclone backend rescue drive:
+rclone backend rescue drive: "relative/path/to/rescue/directory"
+rclone backend rescue drive: Orphans
+rclone backend rescue drive: -o delete
Limitations
--disable copy to download and upload the files if you prefer.Google Photos
Configuration
+Configuration
rclone config walks you through it.remote. First run:
@@ -24733,7 +25659,7 @@ y/e/d> y
rclone configalbum path pretty much like a normal filesystem and it is a good target for repeated syncing.shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.Standard options
+Standard options
--gphotos-client-id
Advanced options
+Advanced options
--gphotos-token
--gphotos-client-credentials
+
+
--gphotos-read-size
--gphotos-proxy
+
+gphotosdl -login
+gphotosdl
+--gphotos-proxy "http://localhost:8282" to make rclone use the proxy.
+
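For example, with the proxy running, a download via it might look like this (a sketch; the remote name gphotos:, the album path and the local path are placeholders):
rclone copy --gphotos-proxy "http://localhost:8282" gphotos:media/by-year/2024 /path/to/local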
--gphotos-encoding
Downloading Images
Downloading Videos
Duplicates
file.jpg would then appear as file {123456}.jpg and file {ABCDEF}.jpg (the actual IDs are a lot longer alas!).upload then uploaded the same image to album/my_album the filename of the image in album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn't cause too many problems.rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1stickyimport is similar to import but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints it creates sticky entries bound to the file name alone ignoring size, modification time etc. Such hash entries can be replaced only by purge, delete, backend drop or by full re-read/re-write of the files.Configuration reference
-Standard options
+Standard options
--hasher-remote
Advanced options
+Advanced options
--hasher-auto-size
HDFS
remote: or remote:path/to/dir.Configuration
+Configuration
remote. First run: rclone configStandard options
+Standard options
--hdfs-namenode
Advanced options
+Advanced options
--hdfs-service-principal-name
remote:pathremote:directory/subdirectory.rclone config walks you through it.Configuration
+Configuration
remote. First run: rclone configrclone lsd uses this information.disable_fetching_member_count option can be used.Standard options
+Standard options
--hidrive-client-id
Advanced options
+Advanced options
--hidrive-token
--hidrive-client-credentials
+
+
--hidrive-scope-role
remote: represents the configured url, and any path following it will be resolved relative to this url, according to the URL standard. This means with remote url https://beta.rclone.org/branch and path fix, the resolved URL will be https://beta.rclone.org/branch/fix, while with path /fix the resolved URL will be https://beta.rclone.org/fix as the absolute path is resolved from the root of the domain.remote: ends with / it will be assumed to point to a directory. If the path does not end with /, then a HEAD request is sent and the response used to decide if it is treated as a file or a directory (run with -vv to see details). When --http-no-head is specified, a path without ending / is always assumed to be a file. If rclone incorrectly assumes the path is a file, the solution is to specify the path with ending /. When you know the path is a directory, ending it with / is always better as it avoids the initial HEAD request.Configuration
+Configuration
remote. First run: rclone config
rclone lsd --http-url https://beta.rclone.org :http:
-rclone lsd :http,url='https://beta.rclone.org':Standard options
+Standard options
--http-url
Advanced options
+Advanced options
--http-headers
ImageKit.io provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
-To use this backend, you need to create an account on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing details.
-Here is an example of making an imagekit configuration.
Firstly create a ImageKit.io account and choose a plan.
You will need to log in and get the publicKey and privateKey for your account from the developer section.
rclone mkdir imagekit-media-library:directory
List the contents of a directory.
rclone ls imagekit-media-library:directory
-ImageKit does not support modification times or hashes yet.
No checksums are supported.
-Here are the Standard options specific to imagekit (ImageKit.io).
You can find your ImageKit.io URL endpoint in your dashboard
@@ -25886,7 +26852,7 @@ y/e/d> yHere are the Advanced options specific to imagekit (ImageKit.io).
If you have configured Restrict unsigned image URLs in your dashboard settings, set this to true.
See the metadata docs for more info.
+The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device.
+IMPORTANT: At the moment an app specific password won't be accepted. Only use your regular password and 2FA.
+rclone config walks you through the token creation. The trust token is valid for 30 days. After which you will have to reauthenticate with rclone reconnect or rclone config.
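When the trust token expires, reauthentication might look like this (a sketch; the remote name matches the example below):
rclone config reconnect iclouddrive: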
Here is an example of how to make a remote called iclouddrive. First run:
rclone config
+This will guide you through an interactive setup process:
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> iclouddrive
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / iCloud Drive
+ \ (iclouddrive)
+[snip]
+Storage> iclouddrive
+Option apple_id.
+Apple ID.
+Enter a value.
+apple_id> APPLEID
+Option password.
+Password.
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+Option config_2fa.
+Two-factor authentication: please enter your 2FA code
+Enter a value.
+config_2fa> 2FACODE
+Remote config
+--------------------
+[iclouddrive]
+- type: iclouddrive
+- apple_id: APPLEID
+- password: *** ENCRYPTED ***
+- cookies: ****************************
+- trust_token: ****************************
+--------------------
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+ADP is currently unsupported and needs to be disabled
+Here are the Standard options specific to iclouddrive (iCloud Drive).
+Apple ID.
+Properties:
+Password.
+NB Input to this must be obscured - see rclone obscure.
+Properties:
+Trust token (internal use)
+Properties:
+cookies (internal use only)
+Properties:
+Here are the Advanced options specific to iclouddrive (iCloud Drive).
+Client id
+Properties:
+The encoding for the backend.
+See the encoding section in the overview for more info.
+Properties:
+Description of the remote.
+Properties:
+The Internet Archive backend utilizes Items on archive.org
Refer to IAS3 API documentation for the API this backend uses.
@@ -26062,7 +27156,7 @@ y/e/d> yThese auto-created files can be excluded from the sync using metadata filtering.
rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"
Which excludes from the sync any files which have the source=metadata or format=Metadata flags which are added to Internet Archive auto-created files.
Here is an example of making an internetarchive configuration. Most applies to the other providers as well, any differences are described below.
First run
rclone config
@@ -26131,7 +27225,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Here are the Standard options specific to internetarchive (Internet Archive).
IAS3 Access Key.
@@ -26153,7 +27247,7 @@ y/e/d> yHere are the Advanced options specific to internetarchive (Internet Archive).
IAS3 Endpoint.
@@ -26359,7 +27453,7 @@ Response: {"error":"invalid_grant","error_description&qOnlime has sold access to Jottacloud proper, while providing localized support to Danish Customers, but have recently set up their own hosting, transferring their customers from Jottacloud servers to their own ones.
This, of course, necessitates using their servers for authentication, but otherwise functionality and architecture seems equivalent to Jottacloud.
To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest of the setup is identical to the default setup.
-Here is an example of how to make a remote called remote with the default setup. First run:
rclone config
This will guide you through an interactive setup process:
@@ -26523,7 +27617,7 @@ y/e/d> yVersioning can be disabled by the --jottacloud-no-versions option. This is achieved by deleting the remote file prior to uploading a new version. If the upload fails no version of the file will be available in the remote.
To view your current quota you can use the rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage.
Here are the Standard options specific to jottacloud (Jottacloud).
OAuth Client Id.
@@ -26545,7 +27639,7 @@ y/e/d> yHere are the Advanced options specific to jottacloud (Jottacloud).
OAuth Access Token as a JSON blob.
@@ -26576,6 +27670,16 @@ y/e/d> yUse client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+Files bigger than this will be cached on disk to calculate the MD5 if required.
Properties:
@@ -26702,7 +27806,7 @@ y/e/d> yPaths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web application, giving the password a nice name like rclone and clicking on generate.
Here is an example of how to make a remote called koofr. First run:
rclone config
@@ -26789,7 +27893,7 @@ y/e/d> y
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
Choose your storage provider.
@@ -26845,7 +27949,7 @@ y/e/d> yHere are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).
Mount ID of the mount to use.
@@ -27016,7 +28120,7 @@ d) Delete this remote y/e/d> yLinkbox is a private cloud drive.
-Here is an example of making a remote for Linkbox.
First run:
rclone config
@@ -27052,7 +28156,7 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
-Here are the Standard options specific to linkbox (Linkbox).
Token from https://www.linkbox.to/admin/account
@@ -27063,7 +28167,7 @@ y/e/d> yHere are the Advanced options specific to linkbox (Linkbox).
Description of the remote.
@@ -27089,7 +28193,7 @@ y/e/d> yHere is an example of making a mailru configuration.
First create a Mail.ru Cloud account and choose a tariff.
You will need to log in and create an app password for rclone. Rclone will not work with your normal username and password - it will give an error like oauth2: server response missing access_token.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the Standard options specific to mailru (Mail.ru Cloud).
OAuth Client Id.
@@ -27293,7 +28397,7 @@ y/e/d> y -Here are the Advanced options specific to mailru (Mail.ru Cloud).
OAuth Access Token as a JSON blob.
@@ -27324,6 +28428,16 @@ y/e/d> yUse client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+Comma separated list of file name patterns eligible for speedup (put by hash).
Patterns are case insensitive and can contain '*' or '?' meta characters.
@@ -27469,7 +28583,7 @@ y/e/d> yThis is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -27570,7 +28684,7 @@ me@example.com:/$Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: following megacmd login times have been observed in succession for blocked remote: 7 minutes, 20 min, 30min, 30 min, 30min. Web access looks unaffected though.
Investigation is continuing in relation to workarounds based on timeouts, pacers, retrials and tpslimits - if you discover something relevant, please post on the forum.
So, if rclone was working nicely and suddenly you are unable to log-in and you are sure the user and the password are correct, likely you have got the remote blocked for a while.
-Here are the Standard options specific to mega (Mega).
User name.
@@ -27591,7 +28705,7 @@ me@example.com:/$Here are the Advanced options specific to mega (Mega).
Output more debug from Mega.
@@ -27650,7 +28764,7 @@ me@example.com:/$The memory backend is an in RAM backend. It does not persist its data - use the local backend for that.
The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory: remote name.
You can configure it as a remote like this with rclone config too if you want to:
No remotes found, make a new one?
n) New remote
@@ -27686,7 +28800,7 @@ rclone serve sftp :memory:
The memory backend supports MD5 hashes and modification times accurate to 1 nS.
The memory backend replaces the default restricted characters set.
-Here are the Advanced options specific to memory (In memory object storage system.).
Description of the remote.
@@ -27701,7 +28815,7 @@ rclone serve sftp :memory:Paths are specified as remote: You may put subdirectories in too, e.g. remote:/path/to/dir. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.
For example, this is commonly configured with or without a CP code: * With a CP code. [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/ * Without a CP code. [your-domain-prefix]-nsu.akamaihd.net
See all buckets rclone lsd remote: The initial setup for Netstorage involves getting an account and secret. Use rclone config to walk you through the setup process.
Here's an example of how to make a remote called ns1.
NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use the quick-delete action for the purge command and if this functionality is disabled then it will fall back to a standard delete method.
Note: Read the NetStorage Usage API for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.
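For example, purging a directory on the ns1 remote from the example above might look like this (a sketch; the CP code and path are placeholders):
rclone purge ns1:/123456/path/to/dir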
-Here are the Standard options specific to netstorage (Akamai NetStorage).
Domain+path of NetStorage host to connect to.
@@ -27841,7 +28955,7 @@ y/e/d> yHere are the Advanced options specific to netstorage (Akamai NetStorage).
Select between HTTP or HTTPS protocol.
@@ -27890,7 +29004,7 @@ y/e/d> yThe desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable. rclone backend symlink <src> <path>
Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.
Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -28028,6 +29142,7 @@ y/e/d> yWhen using Managed Service Identity if the VM(SS) on which this program is running has a system-assigned identity, it will be used by default. If the resource has no system-assigned but exactly one user-assigned identity, the user-assigned identity will be used by default.
If the resource has multiple user-assigned identities you will need to unset env_auth and set use_msi instead. See the use_msi section.
If you are operating in disconnected clouds, or private clouds such as Azure Stack you may want to set disable_instance_discovery = true. This determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/ before authenticating. Setting this to true will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.
Credentials created with the az tool can be picked up using env_auth.
For example if you were to login with a service principal like this:
@@ -28084,10 +29199,14 @@ container/If use_msi is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth needs to be unset to use this.
However if you have multiple user identities to choose from these must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.
If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this is equivalent to using env_auth.
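A minimal config sketch for a user-assigned identity might look like this (the remote name, account name and client ID are placeholders):
[azmsi]
type = azureblob
account = mystorageaccount
use_msi = true
msi_client_id = 00000000-0000-0000-0000-000000000000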
azSet to use the Azure CLI tool az as the sole means of authentication.
Setting this can be useful if you wish to use the az CLI on a host with a System Managed Identity that you do not want to use.
Don't set env_auth at the same time.
If you want to access resources with public anonymous access then set account only. You can do this without making an rclone config:
rclone lsf :azureblob,account=ACCOUNT:CONTAINER
-Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
Azure Storage Account Name.
@@ -28183,7 +29302,7 @@ container/Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).
Send the certificate chain when using certificate auth.
@@ -28233,6 +29352,18 @@ container/Skip requesting Microsoft Entra instance metadata
+This should be set true only by applications authenticating in disconnected clouds, or private clouds such as Azure Stack.
+It determines whether rclone requests Microsoft Entra instance metadata from https://login.microsoft.com/ before authenticating.
Setting this to true will skip this request, making you responsible for ensuring the configured authority is valid and trustworthy.
+Properties:
+Use a managed service identity to authenticate (only works in Azure).
When true, use a managed service identity to authenticate to Azure Storage instead of a SAS token or account key.
@@ -28284,6 +29415,18 @@ container/Use Azure CLI tool az for authentication
+Set to use the Azure CLI tool az as the sole means of authentication.
+Setting this can be useful if you wish to use the az CLI on a host with a System Managed Identity that you do not want to use.
+Don't set env_auth at the same time.
+Properties:
+Endpoint for the service.
Leave blank normally.
@@ -28505,7 +29648,7 @@ container/Also, if you want to access a storage emulator instance running on a different machine, you can override the endpoint parameter in the advanced settings, setting it to http(s)://<host>:<port>/devstoreaccount1 (e.g. http://10.254.2.5:10000/devstoreaccount1).
Paths are specified as remote: You may put subdirectories in too, e.g. remote:path/to/dir.
Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -28750,7 +29893,7 @@ y/e/d>If use_msi is set then managed service identity credentials are used. This authentication only works when running in an Azure service. env_auth needs to be unset to use this.
However if you have multiple user identities to choose from these must be explicitly specified using exactly one of the msi_object_id, msi_client_id, or msi_mi_res_id parameters.
If none of msi_object_id, msi_client_id, or msi_mi_res_id is set, this is equivalent to using env_auth.
Here are the Standard options specific to azurefiles (Microsoft Azure Files).
Azure Storage Account Name.
@@ -28865,7 +30008,7 @@ y/e/d>Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
Send the certificate chain when using certificate auth.
@@ -29040,7 +30183,7 @@ y/e/d>Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
@@ -29152,6 +30295,17 @@ y/e/d> y
token_url to https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token.Note: If you have a special region, you may need a different host in step 4 and 5. Here are some hints.
+OAuth Client Credential flow will allow rclone to use permissions directly associated with the Azure AD Enterprise application, rather than adopting the context of an Azure AD user account.
+This flow can be enabled by following the steps below:
+true for client_credentials and in the tenant section enter the tenant ID.
When it comes to choosing the type of connection, pick one that works with the client credentials flow. In particular the "onedrive" option does not work. You can use the "sharepoint" option or, if that does not find the correct drive ID, type it in manually with the "driveid" option.
+NOTE Assigning permissions directly to the application means that anyone with the Client ID and Client Secret can access your OneDrive files. Take care to safeguard these credentials.
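A minimal config sketch for the client credentials flow might look like this (the remote name, IDs and drive ID are placeholders; client_credentials and tenant are the options described above, and drive_id/drive_type correspond to the "driveid"/"sharepoint" choices mentioned earlier):
[onedrive-app]
type = onedrive
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
client_credentials = true
tenant = YOUR_TENANT_ID
drive_id = YOUR_DRIVE_ID
drive_type = business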
OneDrive allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
OneDrive Personal, OneDrive for Business and Sharepoint Server support QuickXorHash.
@@ -29265,7 +30419,7 @@ y/e/d> yInvalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
-Here are the Standard options specific to onedrive (Microsoft OneDrive).
OAuth Client Id.
@@ -29315,7 +30469,17 @@ y/e/d> y -ID of the service principal's tenant. Also called its directory ID.
+Set this if using the Client Credential flow.
+Properties:
+Here are the Advanced options specific to onedrive (Microsoft OneDrive).
OAuth Access Token as a JSON blob.
@@ -29346,6 +30510,16 @@ y/e/d> yUse client credentials OAuth flow.
+This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Properties:
+Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.
@@ -29660,75 +30834,75 @@ rclone rc vfs/refresh recursive=truePermissions are also supported, if --onedrive-metadata-permissions is set. The accepted values for --onedrive-metadata-permissions are "read", "write", "read,write", and "off" (the default). "write" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "read,write" instead of "write" if you wish to update/remove permissions.
Permissions are read/written in JSON format using the same schema as the OneDrive API, which differs slightly between OneDrive Personal and Business.
Example for OneDrive Personal:
-[
- {
- "id": "1234567890ABC!123",
- "grantedTo": {
- "user": {
- "id": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- },
- "invitation": {
- "email": "ryan@contoso.com"
- },
- "link": {
- "webUrl": "https://1drv.ms/t/s!1234567890ABC"
- },
- "roles": [
- "read"
- ],
- "shareId": "s!1234567890ABC"
- }
-][
+ {
+ "id": "1234567890ABC!123",
+ "grantedTo": {
+ "user": {
+ "id": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ },
+ "invitation": {
+ "email": "ryan@contoso.com"
+ },
+ "link": {
+ "webUrl": "https://1drv.ms/t/s!1234567890ABC"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "s!1234567890ABC"
+ }
+]Example for OneDrive Business:
-[
- {
- "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
- "grantedToIdentities": [
- {
- "user": {
- "displayName": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- }
- ],
- "link": {
- "type": "view",
- "scope": "users",
- "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
- },
- "roles": [
- "read"
- ],
- "shareId": "u!LKj1lkdlals90j1nlkascl"
- },
- {
- "id": "5D33DD65C6932946",
- "grantedTo": {
- "user": {
- "displayName": "John Doe",
- "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
- },
- "application": {},
- "device": {}
- },
- "roles": [
- "owner"
- ],
- "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
- }
-][
+ {
+ "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+ "grantedToIdentities": [
+ {
+ "user": {
+ "displayName": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "link": {
+ "type": "view",
+ "scope": "users",
+ "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "u!LKj1lkdlals90j1nlkascl"
+ },
+ {
+ "id": "5D33DD65C6932946",
+ "grantedTo": {
+ "user": {
+ "displayName": "John Doe",
+ "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
+ },
+ "application": {},
+ "device": {}
+ },
+ "roles": [
+ "owner"
+ ],
+ "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
+ }
+]To write permissions, pass in a "permissions" metadata key using this same format. The --metadata-mapper tool can be very helpful for this.
When adding permissions, an email address can be provided in the User.ID or DisplayName properties of grantedTo or grantedToIdentities. Alternatively, an ObjectID can be provided in User.ID. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if Link.Scope is set to "anonymous".
Example request to add a "read" permission with --metadata-mapper:
{
- "Metadata": {
- "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
- }
-}{
+ "Metadata": {
+ "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
+ }
+}Note that adding a permission can fail if a conflicting permission already exists for the file/folder.
To update an existing permission, include both the Permission ID and the new roles to be assigned. roles is the only property that can be changed.
To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the owner role will be ignored, as it cannot be removed.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
@@ -30118,7 +31292,7 @@ y/e/d> yInvalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Here are the Standard options specific to opendrive (OpenDrive).
Username.
@@ -30139,7 +31313,7 @@ y/e/d> yHere are the Advanced options specific to opendrive (OpenDrive).
The encoding for the backend.
Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
Sample command to transfer local artifacts to remote:bucket in oracle object storage:
rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv
Here is an example of making an oracle object storage configuration. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
If the modification time needs to be updated, rclone will attempt to perform a server-side copy to update the modification time, provided the object can be copied in a single part. If the object is larger than 5 GiB, it will be uploaded rather than copied.
Note that reading this from the object takes an additional HEAD request as the metadata isn't returned in object listings.
The MD5 hash algorithm is supported.
rclone supports multipart uploads with OOS which means that it can upload files bigger than 5 GiB.
Note that files uploaded both with multipart upload and through crypt remotes do not have MD5 sums.
rclone switches from single part uploads to multipart uploads at the point specified by --oos-upload-cutoff. This can be a maximum of 5 GiB and a minimum of 0 (ie always upload multipart files).
Multipart uploads will use --transfers * --oos-upload-concurrency * --oos-chunk-size extra memory. Single part uploads do not use extra memory.
Single part transfers can be faster than multipart transfers or slower depending on your latency from oos - the more latency, the more likely single part transfers will be faster.
Increasing --oos-upload-concurrency will increase throughput (8 would be a sensible value) and increasing --oos-chunk-size also increases throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
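For example, with --transfers 4, --oos-upload-concurrency 8 and --oos-chunk-size 16Mi (the sensible values suggested above), multipart uploads could use roughly 4 * 8 * 16 MiB = 512 MiB of extra memory.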
Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
Choose your Auth Provider
Object storage compartment OCID
Specify compartment OCID, if you need to list buckets.
List objects works without compartment OCID.
Properties:
Object storage Region
Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).
The storage class to use when storing new objects in storage. https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/understandingstoragetiers.htm
Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.
Here is an example of making a QingStor configuration. First run
rclone config
This will guide you through an interactive setup process.
rclone sync --interactive /home/local/directory remote:bucket
This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5 GiB. Note that files uploaded with multipart upload don't have an MD5SUM.
Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket for just one bucket, or rclone cleanup remote: for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.
The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to qingstor (QingCloud Object Storage).
Get QingStor credentials from runtime.
Here are the Advanced options specific to qingstor (QingCloud Object Storage).
Number of connection retries.
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at https://<account>/profile/api-keys or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all multipart uploads). Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default; it can be changed in the advanced configuration, so increasing --transfers will increase the memory use. The chunk size has a maximum limit, which is set to 100_000_000 bytes by default and can also be changed in the advanced configuration. The size of the uploaded chunk will change dynamically depending on the upload speed. The total memory use equals the number of transfers multiplied by the minimal chunk size. If there is free memory allocated for the upload (which equals the difference between maximal_summary_chunk_size and minimal_chunk_size * transfers), the chunk size may increase when the upload speed is high, and it may decrease if there are upload speed problems. If no free memory is available, all chunks will equal minimal_chunk_size.
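As a worked example, with --transfers 4 and the default minimal chunk size of 10_000_000 bytes, at least 4 * 10_000_000 bytes (about 40 MB) will be buffered, and individual chunks may grow towards the 100_000_000 byte default maximum when spare memory and a fast upload allow it.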
Files you delete with rclone will end up in Trash and be stored there for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.
Here are the Standard options specific to quatrix (Quatrix by Maytech).
API key for accessing Quatrix account
Here are the Advanced options specific to quatrix (Quatrix by Maytech).
The encoding for the backend.
rclone interacts with the Sia network by talking to the Sia daemon via HTTP API which is usually available on port 9980. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be http://127.0.0.1:9980 making external access impossible).
However, if you want to access a Sia daemon running on another node, for example due to memory constraints or because you want to share a single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have the Sia daemon installed directly or in a docker container because Sia-UI does not support this mode natively.
- Run it on an externally accessible port, for example provide --api-addr :9980 and --disable-api-security arguments on the daemon command line.
- Enforce API password for the siad daemon via environment variable SIA_API_PASSWORD or a text file named apipassword in the daemon directory.
- Set the rclone backend option api_password taking it from the above locations.
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command line siac wallet unlock. Alternatively you can make siad unlock your wallet automatically upon startup by running it with environment variable SIA_WALLET_PASSWORD.
2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store it in the text file named apipassword under the YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember this when you configure the password in rclone.
3. The only way to use siad without an API password is to run it on localhost with command line argument --authorize-api=false, but this is insecure and strongly discouraged.
Here is an example of how to make a sia remote called mySia. First, run:
rclone config
This will guide you through an interactive setup process:
rclone copy /home/source mySia:backup
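The resulting entry in the rclone config file will look roughly like this sketch (the URL and password are placeholders; a locally running daemon can leave api_password blank):
[mySia]
type = sia
api_url = http://sia.daemon.host:9980
api_password = your-sia-api-password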
Here are the Standard options specific to sia (Sia Decentralized Cloud).
Sia daemon API URL, like http://sia.daemon.host:9980.
Here are the Advanced options specific to sia (Sia Decentralized Cloud).
Siad User Agent
Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.
Here is an example of making a swift configuration. First run
rclone config
This will guide you through an interactive setup process.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
Get swift credentials from environment variables in standard OpenStack form.
Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
If true avoid calling abort upload on a failure.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Pcloud App Client Id - leave blank normally.
client_id>
Pcloud App Client Secret - leave blank normally.
client_secret>
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
Remote config
Use web browser to automatically authenticate rclone with remote?
* Say Y if the machine running rclone has a web browser you can use
e) Edit this remote
d) Delete this remote
y/e/d> y
See the remote setup docs for how to set it up on a machine with no Internet browser available.
Note that if you are using remote config with rclone authorize and your pCloud server is in the EU region, you will need to set the hostname in 'Edit advanced config', otherwise you might get a token error.
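A sketch of what the advanced config might then contain (eapi.pcloud.com is assumed here to be the EU API hostname; use whatever hostname matches your account):
[remote]
type = pcloud
hostname = eapi.pcloud.com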
Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
Once configured you can then use rclone like this,
List directories in top level of your pCloud
However you can set this to restrict rclone to a specific folder hierarchy.
In order to do this you will have to find the Folder ID of the directory you wish rclone to display. This will be the folder field of the URL when you open the relevant folder in the pCloud web interface.
So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the config.
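A sketch of the resulting config section, reusing the folder ID from the example URL above:
[remote]
type = pcloud
root_folder_id = 5xxxxxxxx8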
Here are the Standard options specific to pcloud (Pcloud).
OAuth Client Id.
Here are the Advanced options specific to pcloud (Pcloud).
OAuth Access Token as a JSON blob.
Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Properties:
The encoding for the backend.
See the encoding section in the overview for more info.
PikPak is a private cloud drive.
Paths are specified as remote:path, and may be as deep as required, e.g. remote:directory/subdirectory.
Here is an example of making a remote for PikPak.
First run:
rclone config
PikPak keeps modification times on objects, and updates them when uploading objects, but it does not support changing only the modification time.
The MD5 hash algorithm is supported.
Here are the Standard options specific to pikpak (PikPak).
Pikpak username.
Here are the Advanced options specific to pikpak (PikPak).
Device ID used for authorization.
Properties:
HTTP user agent for pikpak.
Defaults to "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" or "--pikpak-user-agent" provided on command line.
Properties:
ID of the root folder. Leave blank normally.
Use original file links instead of media links.
This avoids issues caused by invalid media links, but may reduce download speeds.
Properties:
Files bigger than this will be cached on disk to calculate hash if required.
Properties:
rclone lsf Pixeldrain: --dirs-only -Fpi
This will print directories in your Pixeldrain home directory and their public IDs.
Enter this directory ID in the rclone config and you will be able to access the directory.
Here are the Standard options specific to pixeldrain (Pixeldrain Filesystem).
API key for your pixeldrain account. Found on https://pixeldrain.com/user/api_keys.
Here are the Advanced options specific to pixeldrain (Pixeldrain Filesystem).
The API endpoint to connect to. In the vast majority of cases it's fine to leave this at default. It is only intended to be changed for testing purposes.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to premiumizeme (premiumize.me).
OAuth Client Id.
Here are the Advanced options specific to premiumizeme (premiumize.me).
OAuth Access Token as a JSON blob.
Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Properties:
The encoding for the backend.
See the encoding section in the overview for more info.
Please set your mailbox password in the advanced config section.
The cache is currently built for the case when the rclone is the only instance performing operations to the mount point. The event system, which is the proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won’t be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data.
Here are the Standard options specific to protondrive (Proton Drive).
The username of your proton account
Here are the Advanced options specific to protondrive (Proton Drive).
The mailbox password of your two-password proton account.
Paths are specified as remote:path
put.io paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to putio (Put.io).
OAuth Client Id.
Here are the Advanced options specific to putio (Put.io).
OAuth Access Token as a JSON blob.
Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Properties:
The encoding for the backend.
See the encoding section in the overview for more info.
Please set your mailbox password in the advanced config section.
The cache is currently built for the case when the rclone is the only instance performing operations to the mount point. The event system, which is the proton API system that provides visibility of what has changed on the drive, is yet to be implemented, so updates from other clients won’t be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data.
Here are the Standard options specific to protondrive (Proton Drive).
The username of your proton account
Here are the Advanced options specific to protondrive (Proton Drive).
The mailbox password of your two-password proton account.
The Proton-API-Bridge attempts to bridge the gap, so rclone can be built on top of it quickly. This codebase handles the intricate tasks before and after calling Proton APIs, particularly the complex encryption scheme, allowing developers to implement features for other software on top of this codebase. There are likely quite a few errors in this library, as there isn't official documentation available.
This is a backend for the Seafile storage service:
- It works with both the free community edition and the professional edition.
- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
- It supports 2FA enabled users.
- Using a Library API Token is not supported.
There are two distinct modes you can set up your remote in:
- You point your remote to the root of the server, meaning you don't specify a library during the configuration. Paths are specified as remote:library. You may put subdirectories in too, e.g. remote:library/path/to/dir.
- You point your remote to a specific library during the configuration. Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode.)
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
It has been actively developed using the seafile docker image of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
- 9.0.10 community edition
Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
Each new version of rclone is automatically tested against the latest docker image of the seafile community server.
Here are the Standard options specific to seafile (seafile).
URL of seafile host to connect to.
Here are the Advanced options specific to seafile (seafile).
Should rclone create a library if it doesn't exist.
Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory. For example, rclone lsd remote: would list the home directory of the user configured in the rclone remote config (i.e. /home/sftpuser). However, rclone lsd remote:/ would list the root directory of the remote machine (i.e. /).
Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net and Hetzner, on the other hand, require users to OMIT the leading /.
Note that by default rclone will try to execute shell commands on the server, see shell access considerations.
Here is an example of making an SFTP configuration. First run
rclone config
This will guide you through an interactive setup process.
If you set the ask_password option, rclone will prompt for a password when needed and no password has been configured.
With traditional key-based authentication, you configure your private key only, and the public key built into it will be used during the authentication process.
If you have a certificate you may use it to sign your public key, creating a separate SSH user certificate that should be used instead of the plain public key extracted from the private key. Then you must provide the path to the user certificate public key file in pubkey_file or the content of the file in pubkey.
Note: This is not the traditional public key paired with your private key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.
Example:
[remote]
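# hypothetical continuation of this example - host, user and key paths are placeholders
type = sftp
host = example.com
user = sftpuser
key_file = ~/id_rsa
pubkey_file = ~/id_rsa-cert.pub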
The about command returns the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote.
SFTP usually supports the about command, but it depends on the server. If the server implements the vendor-specific VFS statistics extension, which is normally the case with OpenSSH instances, it will be used. If not, but the same login has access to a Unix shell, where the df command is available (e.g. in the remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.
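For example, once the remote is configured, you can check which of these methods works for your server by running:
rclone about remote: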
Here are the Standard options specific to sftp (SSH/SFTP).
SSH host to connect to.
SSH public certificate for public certificate based authentication. Set this if you have a signed certificate you want to use for authentication. If specified will override pubkey_file.
Properties:
Optional path to public key file.
Set this if you have a signed certificate you want to use for authentication.
Here are the Advanced options specific to sftp (SSH/SFTP).
Optional path to known_hosts file.
See Hetzner's documentation for details
SMB is a communication protocol to share files over network.
This relies on the go-smb2 library for communication with the SMB protocol.
Paths are specified as remote:sharename (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:item/path/to/dir.
The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in the smb.conf file (usually in /etc/samba/). You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).
You can't access the shared printers from rclone, obviously.
You can't use Anonymous access for logging in. You have to use the guest user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. Alternatively, the local backend on Windows can access SMB servers using UNC paths, by \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.
Here is an example of making a SMB configuration.
First run
rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> d
Here are the Standard options specific to smb (SMB / CIFS).
SMB server hostname to connect to.
Here are the Advanced options specific to smb (SMB / CIFS).
Max time before closing idle connections.
To make a new Storj configuration you need one of the following:
* Access Grant that someone else shared with you.
* API Key of a Storj project you are a member of.
Here is an example of how to make a remote called remote. First run:
rclone config
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).
Choose an authentication method.
Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage).
Description of the remote.
To fix these, please raise your system limits. You can do this by issuing a ulimit -n 65536 just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. $HOME/.bashrc, or change the system-wide configuration, usually /etc/sysctl.conf and/or /etc/security/limits.conf, but please refer to your operating system manual.
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
Here are the Standard options specific to sugarsync (Sugarsync).
Sugarsync App ID.
Here are the Advanced options specific to sugarsync (Sugarsync).
Sugarsync refresh token.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
The initial setup for Uloz.to involves filling in the user credentials. rclone config walks you through it.
Here is an example of how to make a remote called remote. First run:
rclone config
This will guide you through an interactive setup process:
In order to do this you will have to find the Folder slug of the folder you wish to use as root. This will be the last segment of the URL when you open the relevant folder in the Uloz.to web interface.
For example, for exploring a folder with URL https://uloz.to/fm/my-files/foobar, foobar should be used as the root slug.
root_folder_slug can be used alongside a specific path in the remote path. For example, if your remote's root_folder_slug corresponds to /foo/bar, remote:baz/qux will refer to ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux.
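A sketch of the corresponding config section, using the foobar slug from the example above:
[remote]
type = ulozto
root_folder_slug = foobar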
Here are the Standard options specific to ulozto (Uloz.to).
The application token identifying the app. An app API key can be either found in the API doc https://uloz.to/upload-resumable-api-beta or obtained from customer service.
Here are the Advanced options specific to ulozto (Uloz.to).
If set, rclone will use this folder as the root folder for all operations. For example, if the slug identifies 'foo/bar/', 'ulozto:baz' is equivalent to 'ulozto:foo/bar/baz' without any root slug set.
This is a backend for the Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and therefore not suitable for long term storage.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
To configure an Uptobox backend you'll need your personal api token. You'll find it in your account settings
Here is an example of how to make a remote called remote with the default setup. First run:
rclone config
Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Here are the Standard options specific to uptobox (Uptobox).
Your access token.
Here are the Advanced options specific to uptobox (Uptobox).
Set to make uploaded files private
Subfolders can be used in upstream remotes. Assume a union remote named backup with the remotes mydrive:private/backup. Invoking rclone mkdir backup:desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/desktop.
There is no special handling of paths containing .. segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.
Here is an example of how to make a union called remote for local folders. First run:
rclone config
This will guide you through an interactive setup process:
upstreams = /local:writeback remote:dir
When files are written, they will be written to both remote:dir and /local.
As many remotes as desired can be added to upstreams but there should only be one :writeback tag.
Rclone does not manage the :writeback remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself.
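For example, a scheduled job along these lines (the 7d age and the /local path are purely illustrative) could be used to expire old copies from the writeback remote:
rclone delete --min-age 7d /local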
Here are the Standard options specific to union (Union merges the contents of several upstream fs).
List of space separated upstreams.
Here are the Advanced options specific to union (Union merges the contents of several upstream fs).
Minimum viable free space for lfs/eplfs policies.
Paths are specified as remote:path
Paths may be as deep as required, e.g. remote:directory/subdirectory.
To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
Here is an example of how to make a remote called remote. First run:
rclone config
Plain WebDAV does not support modified times. However when used with Fastmail Files, Owncloud or Nextcloud rclone will support modified times.
Likewise plain WebDAV does not support hashes, however when used with Fastmail Files, Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
Here are the Standard options specific to webdav (WebDAV).
URL of http host to connect to.
Here are the Advanced options specific to webdav (WebDAV).
Command to run to get a bearer token.
Preserve authentication on redirect.
If the server redirects rclone to a new domain when it is trying to read a file then normally rclone will drop the Authorization: header from the request.
This is standard security practice to avoid sending your credentials to an unknown webserver.
However this is desirable in some circumstances. If you are getting an error like "401 Unauthorized" when rclone is attempting to read files from the webdav server then you can try this option.
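For example, assuming the option corresponds to the --webdav-auth-redirect flag mentioned in the changelog, it can be tried directly on the command line:
rclone lsl --webdav-auth-redirect remote:path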
Properties:
Description of the remote.
Properties:
Yandex Disk is a cloud storage solution created by Yandex.
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -35960,7 +37171,7 @@ y/e/d> yThe default restricted characters set are replaced.
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the Standard options specific to yandex (Yandex Disk).
OAuth Client Id.
Here are the Advanced options specific to yandex (Yandex Disk).
OAuth Access Token as a JSON blob.
Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Properties:
Delete files permanently rather than putting them into the trash.
Properties:
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
Zoho WorkDrive is a cloud storage solution created by Zoho.
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
To view your current quota you can use the rclone about remote: command which will display your current usage.
Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.
Here are the Standard options specific to zoho (Zoho).
OAuth Client Id.
Here are the Advanced options specific to zoho (Zoho).
OAuth Access Token as a JSON blob.
Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Properties:
Cutoff for switching to large file upload api (>= 10 MiB).
Properties:
The encoding for the backend.
See the encoding section in the overview for more info.
Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so
rclone sync --interactive /home/source /tmp/destination
Will sync /home/source to /tmp/destination.
For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a .rclonelink suffix in the remote storage.
The text file will contain the target of the symbolic link (see example).
This flag applies to all commands.
For example, supposing you have a directory structure like this
└── file2 -> /home/user/file3
Copying the entire directory with '-l'
$ rclone copy -l /tmp/a/ remote:/tmp/a/
The remote files are created with a .rclonelink suffix
$ rclone ls remote:/tmp/a
5 file1.rclonelink
14 file2.rclonelink
$ tree /tmp/c
/tmp/c
└── file1 -> ./file4
Note that --local-links just enables this feature for the local backend. --links and -l enable the feature for all supported backends and the VFS.
Note that this flag is incompatible with --copy-links / -L.
Normally rclone will recurse through filesystems as mounted.
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.
Here are the Advanced options specific to local (Local Disk).
Disable UNC (long path names) conversion on Windows.
Translate symlinks to/from regular files with a '.rclonelink' extension for the local backend.
Properties:
NewLazyDLL with NewLazySystemDLL (albertony)
--links flag global and add new --local-links and --vfs-links flags (Nick Craig-Wood)
relative to vfs/queue-set-expiry (Nick Craig-Wood)
--nfs-cache-type symlink (Nick Craig-Wood)
-P (Nick Craig-Wood)
--flat flag for making directories with many entries (Nick Craig-Wood)
ls -laR (Nick Craig-Wood)
Last-Modified timestamp (Nick Craig-Wood)
. and .. entries (Filipe Azevedo)
--vfs-used-is-size value is calculated and then thrown away (Ilias Ozgur Can Leonard)
--vfs-links flag or the global --links flag
--azureblob-disable-instance-discovery (Nick Craig-Wood)
--azureblob-use-az to force the use of the Azure CLI for auth (Nick Craig-Wood)
daysFromStartingToCancelingUnfinishedLargeFiles to backend lifecycle command (Louis Laureys)
rclone backend rescue to rescue orphaned files (Nick Craig-Wood)
--ftp-no-check-upload to allow upload to write only dirs (Nick Craig-Wood)
--gcs-access-token (Leandro Piccilli)
--gphotos-proxy to allow download of full resolution media (Nick Craig-Wood)
rclone about support to backend (quiescens)
compartmentid optional (Manoj Ghosh)
--s3-directory-bucket to support AWS Directory Buckets (Nick Craig-Wood)
eu-south-1 region (Diego Monti)
--webdav-auth-redirect to fix 401 unauthorized on redirect (Nick Craig-Wood)
--links and --metadata (Nick Craig-Wood)
--metadata and --links and copying files to the local backend
--links and --metadata (Nick Craig-Wood)
--copy-links on macOS when cloning (nielash)
--s3-download-url after migration to SDKv2 (Nick Craig-Wood)
--dump filters not always appearing (Nick Craig-Wood)
stringArray config values from environment variables (Nick Craig-Wood)
--metrics-addr (Nick Craig-Wood)
vfs-read-chunk-streams option in docker volume driver (Divyam)
env_auth=true (Nick Craig-Wood)
SetModTime (Georg Welzel)
OpenWriterAt feature to enable multipart uploads (Georg Welzel)