@@ -49706,7 +49550,7 @@ log-in and you are sure the user and the password are correct, likely
you have got the remote blocked for a while.
-Standard options
+Standard options
Here are the Standard options specific to mega (Mega).
--mega-user
User name.
@@ -49739,7 +49583,7 @@ one
Type: string
Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to mega (Mega).
--mega-session-id
Session (internal use only)
@@ -49825,7 +49669,7 @@ type has sufficient memory/CPU to execute the commands. Use the resource
monitoring tools to inspect after sending the commands. Look at
this issue.
-Limitations
+Limitations
This backend uses the go-mega library, an open-source Go
library implementing the Mega API. There doesn't
@@ -49839,7 +49683,7 @@ there are likely quite a few errors still remaining in this library.
The memory backend behaves like a bucket-based remote (e.g. like s3).
Because it has no parameters you can just use it with the
:memory: remote name.
-Configuration
+Configuration
You can configure it as a remote like this with
rclone config too if you want to:
No remotes found, make a new one?
@@ -49873,20 +49717,37 @@ testing or with an rclone server or rclone mount, e.g.
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:
-Modification times and
+Modification times and
hashes
The memory backend supports MD5 hashes and modification times
accurate to 1 nS.
-Restricted filename
+Restricted filename
characters
The memory backend replaces the default
restricted characters set.
-Advanced options
+Advanced options
Here are the Advanced options specific to memory (In memory object
storage system.).
+--memory-discard
+If set all writes will be discarded and reads will return an
+error
+If set then when files are uploaded the contents will not be saved. The
+files will appear to have been uploaded but will give an error on read.
+Files will have their MD5 sum calculated on upload which takes very
+little CPU time and allows the transfers to be checked.
+This can be useful for testing performance.
+This is probably most easily used via the connection string syntax:
+:memory,discard:bucket
+Properties:
+
+- Config: discard
+- Env Var: RCLONE_MEMORY_DISCARD
+- Type: bool
+- Default: false
+
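As a sketch of the benchmarking use case (the bucket name and source path here are arbitrary), the connection string syntax lets you run a transfer whose writes are hashed, counted, and then thrown away:

```shell
# Hypothetical throughput test: upload into a discard-enabled memory
# remote. Data is checksummed for verification but never stored, so
# only transfer speed is measured. Requires rclone on PATH.
rclone copy -P /path/to/testdata ":memory,discard:bench"
```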
--memory-description
Description of the remote.
Properties:
@@ -49916,7 +49777,7 @@ code:
The initial setup for Netstorage involves getting an account and
secret. Use rclone config to walk you through the setup
process.
-Configuration
+Configuration
Here's an example of how to make a remote called
ns1.
@@ -50002,7 +49863,7 @@ between CP codes
Your credentials must have access to two CP codes on the same remote.
You can't perform operations between different remotes.
rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
-Features
+Features
Symlink Support
The Netstorage backend changes the rclone --links, -l
behavior. When uploading, instead of creating the .rclonelink file, use
@@ -50086,7 +49947,7 @@ using quick-delete method will not delete the tree immediately and
objects targeted for quick-delete may still be accessible.
-Standard options
+Standard options
Here are the Standard options specific to netstorage (Akamai
NetStorage).
--netstorage-host
@@ -50123,7 +49984,7 @@ obscure.
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to netstorage (Akamai
NetStorage).
--netstorage-protocol
@@ -50188,7 +50049,7 @@ applicable.
remote: for the lsd command.) You may put
subdirectories in too, e.g.
remote:container/path/to/dir.
-Configuration
+Configuration
Here is an example of making a Microsoft Azure Blob Storage
configuration. For a remote called remote. First run:
rclone config
@@ -50238,7 +50099,7 @@ deleting any excess files in the container.
fewer transactions in exchange for more memory. See the rclone docs for more
details.
-Modification times and
+Modification times and
hashes
The modification time is stored as metadata on the object with the
mtime key. It is stored using RFC3339 Format time with
@@ -50252,6 +50113,34 @@ syncing is recommended if using --use-server-modtime.
MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5 hashes,
e.g. the local disk.
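One way to see which blobs carry an MD5 (a sketch, assuming a configured remote named remote): print the stored sums and look for blanks.

```shell
# Print the stored MD5 for each object in a container; blobs uploaded
# in chunks from a source without MD5 support show an empty hash.
rclone md5sum remote:container
```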
+
+Rclone can map arbitrary metadata to Azure Blob headers, user
+metadata, and tags when --metadata is enabled (or when
+using --metadata-set / --metadata-mapper).
+
+- Headers: Set these keys in metadata to map to the corresponding blob
+headers:
+
+cache-control, content-disposition,
+content-encoding, content-language,
+content-type.
+
+- User metadata: Any other non-reserved keys are written as user
+metadata (keys are normalized to lowercase). Keys starting with
+x-ms- are reserved and are not stored as user
+metadata.
+- Tags: Provide
+x-ms-tags as a comma-separated list of
+key=value pairs, e.g.
+x-ms-tags=env=dev,team=sync. These are applied as blob tags
+on upload and on server-side copies. Whitespace around keys/values is
+ignored.
+- Modtime override: Provide
+mtime in RFC3339/RFC3339Nano
+format to override the stored modtime persisted in user metadata. If
+mtime cannot be parsed, rclone logs a debug message and
+ignores the override.
+
+Note: Rclone ignores reserved x-ms-* keys (except
+x-ms-tags) for user metadata.
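The mapping above can be driven from the command line with --metadata-set; this is a sketch (the file name, container, and tag values are illustrative):

```shell
# Upload one file with a mapped blob header and blob tags.
# -M (--metadata) enables metadata handling; each --metadata-set
# injects one key before the header/user-metadata/tag mapping runs.
rclone copyto -M \
  --metadata-set "cache-control=no-cache" \
  --metadata-set "x-ms-tags=env=dev,team=sync" \
  report.pdf remote:container/report.pdf
```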
When uploading large files, increasing the value of
--azureblob-upload-concurrency will increase performance at
@@ -50259,7 +50148,7 @@ the cost of using more memory. The default of 16 is set quite
conservatively to use less memory. It may be necessary to raise it to 64
or higher to fully utilize a 1 GBit/s link with a single file
transfer.
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -50521,7 +50410,7 @@ config:
rclone lsf :azureblob,account=ACCOUNT:CONTAINER
-Standard options
+Standard options
Here are the Standard options specific to azureblob (Microsoft Azure
Blob Storage).
--azureblob-account
@@ -50571,6 +50460,17 @@ for full info.
- Type: string
- Required: false
+--azureblob-connection-string
+Storage Connection String.
+Connection string for the storage. Leave blank if using other auth
+methods.
+Properties:
+
+- Config: connection_string
+- Env Var: RCLONE_AZUREBLOB_CONNECTION_STRING
+- Type: string
+- Required: false
+
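For instance (the connection string below is a placeholder, not a real secret), the option can be supplied through its environment variable together with a connection-string remote:

```shell
# Authenticate with a storage connection string instead of
# account/key, then list the containers in the account.
export RCLONE_AZUREBLOB_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myacct;AccountKey=PLACEHOLDER;EndpointSuffix=core.windows.net"
rclone lsd :azureblob:
```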
--azureblob-tenant
ID of the service principal's tenant. Also called its directory
ID.
@@ -50631,7 +50531,7 @@ obscure.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to azureblob (Microsoft Azure
Blob Storage).
- Type: string
- Required: false
+
+User metadata is stored as x-ms-meta- keys. Azure metadata keys are
+case insensitive and are always returned in lower case.
+Here are the possible system metadata items for the azureblob
+backend.
+
+| Name | Help | Type | Example | Read only |
+|------|------|------|---------|-----------|
+| cache-control | Cache-Control header | string | no-cache | N |
+| content-disposition | Content-Disposition header | string | inline | N |
+| content-encoding | Content-Encoding header | string | gzip | N |
+| content-language | Content-Language header | string | en-US | N |
+| content-type | Content-Type header | string | text/plain | N |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| tier | Tier of the object | string | Hot | Y |
+
+See the metadata docs
+for more info.
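To inspect what rclone has stored, both system and user metadata can be read back with lsjson (a sketch, assuming a configured remote named remote):

```shell
# List a blob together with its metadata; system items such as
# "tier" and "content-type" appear alongside user metadata like
# "mtime" in the JSON output.
rclone lsjson --metadata remote:container/report.pdf
```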
You can set custom upload headers with the
@@ -51098,7 +51074,7 @@ blob.
Eg --header-upload "Content-Type: text/potato" or
--header-upload "X-MS-Tags: foo=bar".
-Limitations
+Limitations
MD5 sums are only uploaded with chunked files if the source has an
MD5 sum. This will always be the case for a local to azure copy.
rclone about is not supported by the Microsoft Azure
@@ -51128,7 +51104,7 @@ in the advanced settings, setting it to
Storage
Paths are specified as remote: You may put
subdirectories in too, e.g. remote:path/to/dir.
-Configuration
+Configuration
Here is an example of making a Microsoft Azure Files Storage
configuration. For a remote called remote. First run:
rclone config
@@ -51216,7 +51192,7 @@ at the cost of using more memory. The default of 16 is set quite
conservatively to use less memory. It may be necessary to raise it to 64
or higher to fully utilize a 1 GBit/s link with a single file
transfer.
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -51489,14 +51465,14 @@ a System Managed Identity that you do not want to use. Don't set
env_auth at the same time.
-Standard options
+Standard options
Here are the Standard options specific to azurefiles (Microsoft Azure
Files).
--azurefiles-account
Azure Storage Account Name.
Set this to the Azure Storage Account Name in use.
-Leave blank to use SAS URL or connection string, otherwise it needs
-to be set.
+Leave blank to use SAS URL or Emulator, otherwise it needs to be
+set.
If this is blank and if env_auth is set it will be read from the
environment variable AZURE_STORAGE_ACCOUNT_NAME if
possible.
@@ -51507,20 +51483,10 @@ possible.
- Type: string
- Required: false
---azurefiles-share-name
-Azure Files Share Name.
-This is required and is the name of the share to access.
-Properties:
-
-- Config: share_name
-- Env Var: RCLONE_AZUREFILES_SHARE_NAME
-- Type: string
-- Required: false
-
--azurefiles-env-auth
Read credentials from runtime (environment variables, CLI or
MSI).
-See the authentication docs
+See the authentication docs
for full info.
Properties:
@@ -51531,7 +51497,7 @@ for full info.
--azurefiles-key
Storage Account Shared Key.
-Leave blank to use SAS URL or connection string.
+Leave blank to use SAS URL or Emulator.
Properties:
- Config: key
@@ -51540,8 +51506,8 @@ for full info.
- Required: false
--azurefiles-sas-url
-SAS URL.
-Leave blank if using account/key or connection string.
+SAS URL for container level access only.
+Leave blank if using account/key or Emulator.
Properties:
- Config: sas_url
@@ -51551,7 +51517,9 @@ for full info.
--azurefiles-connection-string
-Azure Files Connection String.
+Storage Connection String.
+Connection string for the storage. Leave blank if using other auth
+methods.
Properties:
- Config: connection_string
@@ -51619,7 +51587,17 @@ obscure.
- Type: string
- Required: false
-Advanced options
+--azurefiles-share-name
+Azure Files Share Name.
+This is required and is the name of the share to access.
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
+Advanced options
Here are the Advanced options specific to azurefiles (Microsoft Azure
Files).
Leave blank normally. Needed only if you want to use a service
principal instead of interactive login.
$ az ad sp create-for-rbac --name "<name>" \
- --role "Storage Files Data Owner" \
+ --role "Storage Blob Data Owner" \
--scopes "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/blobServices/default/containers/<container>" \
> azure-principal.json
See "Create
an Azure service principal" and "Assign
-an Azure role for access to files data" pages for more details.
-NB this section needs updating for Azure Files -
-pull requests appreciated!
+an Azure role for access to blob data" pages for more details.
It may be more convenient to put the credentials directly into the
rclone config file under the client_id, tenant
and client_secret keys instead of setting
@@ -51687,6 +51663,23 @@ and client_secret keys instead of setting
- Type: string
- Required: false
+--azurefiles-disable-instance-discovery
+Skip requesting Microsoft Entra instance metadata
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+It determines whether rclone requests Microsoft Entra instance
+metadata from https://login.microsoft.com/ before
+authenticating.
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
--azurefiles-use-msi
Use a managed service identity to authenticate (only works in
Azure).
@@ -51738,28 +51731,24 @@ msi_client_id, or msi_mi_res_id parameters.
- Type: string
- Required: false
---azurefiles-disable-instance-discovery
-Skip requesting Microsoft Entra instance metadata This should be set
-true only by applications authenticating in disconnected clouds, or
-private clouds such as Azure Stack. It determines whether rclone
-requests Microsoft Entra instance metadata from
-https://login.microsoft.com/ before authenticating. Setting
-this to true will skip this request, making you responsible for ensuring
-the configured authority is valid and trustworthy.
+--azurefiles-use-emulator
+Uses local storage emulator if provided as 'true'.
+Leave blank if using real azure storage endpoint.
Properties:
-- Config: disable_instance_discovery
-- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Config: use_emulator
+- Env Var: RCLONE_AZUREFILES_USE_EMULATOR
- Type: bool
- Default: false
--azurefiles-use-az
-Use Azure CLI tool az for authentication Set to use the Use Azure CLI tool az for authentication
+Set to use the Azure CLI tool
-az as the sole means of authentication. Setting this can be useful
-if you wish to use the az CLI on a host with a System Managed Identity
-that you do not want to use. Don't set env_auth at the same time.
+az as the sole means of authentication.
+Setting this can be useful if you wish to use the az CLI on a host
+with a System Managed Identity that you do not want to use.
+Don't set env_auth at the same time.
Properties:
- Config: use_az
@@ -51861,14 +51850,14 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPerio
- Content-Type
Eg --header-upload "Content-Type: text/potato"
-Limitations
+Limitations
MD5 sums are only uploaded with chunked files if the source has an
MD5 sum. This will always be the case for a local to azure copy.
Microsoft OneDrive
Paths are specified as remote:path
Paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Configuration
+Configuration
The initial setup for OneDrive involves getting a token from
Microsoft which you need to do in your browser.
rclone config walks you through it.
@@ -52089,7 +52078,7 @@ data using the app-only token without requiring their credentials.
application means that anyone with the Client ID and Client
Secret can access your OneDrive files. Take care to safeguard these
credentials.
-Modification times and
+Modification times and
hashes
OneDrive allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -52132,7 +52121,7 @@ using.
Some commands (like rclone lsf -R) will use
ListR by default - you can turn this off with
--disable ListR if you need to.
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -52245,7 +52234,7 @@ trash, so you will have to do that with one of Microsoft's apps or via
the OneDrive website.
-Standard options
+Standard options
Here are the Standard options specific to onedrive (Microsoft
OneDrive).
--onedrive-client-id
@@ -52307,7 +52296,7 @@ ID.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to onedrive (Microsoft
OneDrive).
--onedrive-token
@@ -52674,9 +52663,9 @@ efficient.
This is why this flag is not set as the default.
As a rule of thumb if nearly all of your data is under rclone's root
directory (the root/directory in
-onedrive:root/directory) then using this flag will be be a
-big performance win. If your data is mostly not under the root then
-using this flag will be a big performance loss.
+onedrive:root/directory) then using this flag will be a big
+performance win. If your data is mostly not under the root then using
+this flag will be a big performance loss.
It is recommended if you are mounting your onedrive at the root (or
near the root when using crypt) and using rclone
rc vfs/refresh.
@@ -52744,7 +52733,7 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,
- Type: string
- Required: false
-
+
OneDrive supports System Metadata (not User Metadata, as of this
writing) for both files and directories. Much of the metadata is
read-only, and there are some differences between OneDrive Personal and
@@ -52764,69 +52753,69 @@ href="https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/pe
API, which differs slightly between OneDrive Personal and
Business.
Example for OneDrive Personal:
-[
- {
- "id": "1234567890ABC!123",
- "grantedTo": {
- "user": {
- "id": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- },
- "invitation": {
- "email": "ryan@contoso.com"
- },
- "link": {
- "webUrl": "https://1drv.ms/t/s!1234567890ABC"
- },
- "roles": [
- "read"
- ],
- "shareId": "s!1234567890ABC"
- }
-]
+[
+ {
+ "id": "1234567890ABC!123",
+ "grantedTo": {
+ "user": {
+ "id": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ },
+ "invitation": {
+ "email": "ryan@contoso.com"
+ },
+ "link": {
+ "webUrl": "https://1drv.ms/t/s!1234567890ABC"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "s!1234567890ABC"
+ }
+]
Example for OneDrive Business:
-[
- {
- "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
- "grantedToIdentities": [
- {
- "user": {
- "displayName": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- }
- ],
- "link": {
- "type": "view",
- "scope": "users",
- "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
- },
- "roles": [
- "read"
- ],
- "shareId": "u!LKj1lkdlals90j1nlkascl"
- },
- {
- "id": "5D33DD65C6932946",
- "grantedTo": {
- "user": {
- "displayName": "John Doe",
- "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
- },
- "application": {},
- "device": {}
- },
- "roles": [
- "owner"
- ],
- "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
- }
-]
+[
+ {
+ "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+ "grantedToIdentities": [
+ {
+ "user": {
+ "displayName": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "link": {
+ "type": "view",
+ "scope": "users",
+ "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "u!LKj1lkdlals90j1nlkascl"
+ },
+ {
+ "id": "5D33DD65C6932946",
+ "grantedTo": {
+ "user": {
+ "displayName": "John Doe",
+ "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
+ },
+ "application": {},
+ "device": {}
+ },
+ "roles": [
+ "owner"
+ ],
+ "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
+ }
+]
To write permissions, pass in a "permissions" metadata key using this
same format. The --metadata-mapper
@@ -52840,12 +52829,12 @@ for a user. Creating a Public Link is also supported, if
Link.Scope is set to "anonymous".
Example request to add a "read" permission with
--metadata-mapper:
-{
- "Metadata": {
- "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
- }
-}
+{
+ "Metadata": {
+ "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
+ }
+}
Note that adding a permission can fail if a conflicting permission
already exists for the file/folder.
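The request above can be produced by a small --metadata-mapper program. This is a minimal sketch (the script name and the hard-coded grantee are assumptions): it discards the metadata JSON rclone passes on stdin and always grants ryan@contoso.com read access.

```shell
# Write a hypothetical mapper script that emits a fixed "read"
# permission for every transferred object.
cat > add-read-permission.sh <<'SCRIPT'
#!/bin/sh
cat >/dev/null  # discard the metadata JSON rclone sends on stdin
cat <<'EOF'
{
  "Metadata": {
    "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
  }
}
EOF
SCRIPT
chmod +x add-read-permission.sh
```

It would then be wired in with something like rclone copyto -M --metadata-mapper ./add-read-permission.sh file.txt remote:path/file.txt.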
To update an existing permission, include both the Permission ID and
@@ -52918,8 +52907,8 @@ Personal).
| description |
-A short description of the file. Max 1024 characters. Only supported
-for OneDrive Personal. |
+A short description of the file. Max 1024 characters. No longer
+supported by Microsoft. |
string |
Contract for signing |
N |
@@ -53053,7 +53042,7 @@ hopefully give you a message of
URL of the format
https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents
-Limitations
+Limitations
If you don't use rclone for 90 days the refresh token will expire.
This will result in authorization problems. This is easy to fix by
running the rclone config reconnect remote: command to get
@@ -53296,7 +53285,7 @@ ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader:
Paths are specified as remote:path
Paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Configuration
+Configuration
Here is an example of how to make a remote called
remote. First run:
rclone config
@@ -53339,13 +53328,13 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an OpenDrive directory called backup
rclone copy /home/source remote:backup
-Modification times and
+Modification times and
hashes
OpenDrive allows modification times to be set on objects accurate to
1 second. These will be used to detect whether objects need syncing or
not.
The MD5 hash algorithm is supported.
-Restricted filename
+Restricted filename
characters
@@ -53452,7 +53441,7 @@ href="https://rclone.org/overview/#invalid-utf8">replaced, as they
can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to opendrive (OpenDrive).
--opendrive-username
Username.
@@ -53475,7 +53464,7 @@ obscure.
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to opendrive (OpenDrive).
--opendrive-encoding
The encoding for the backend.
@@ -53540,7 +53529,7 @@ access the contents
- Required: false
-Limitations
+Limitations
Note that OpenDrive is case insensitive so you can't have a file
called "Hello.doc" and one called "hello.doc".
There are quite a few characters that can't be in OpenDrive file
@@ -53572,7 +53561,7 @@ subdirectories in too, e.g. remote:bucket/path/to/dir.
Sample command to transfer local artifacts to remote:bucket in oracle
object storage:
rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv
-Configuration
+Configuration
Here is an example of making an oracle object storage configuration.
rclone config walks you through it.
Here is an example of how to make a remote called
@@ -53700,15 +53689,15 @@ No authentication
User Principal
Sample rclone config file for Authentication Provider User
Principal:
-[oos]
-type = oracleobjectstorage
-namespace = id<redacted>34
-compartment = ocid1.compartment.oc1..aa<redacted>ba
-region = us-ashburn-1
-provider = user_principal_auth
-config_file = /home/opc/.oci/config
-config_profile = Default
+[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = user_principal_auth
+config_file = /home/opc/.oci/config
+config_profile = Default
Advantages:
- One can use this method from any server within OCI or on-premises or
@@ -53765,13 +53754,13 @@ export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
Sample rclone configuration file for Authentication Provider Resource
Principal:
-[oos]
-type = oracleobjectstorage
-namespace = id<redacted>34
-compartment = ocid1.compartment.oc1..aa<redacted>ba
-region = us-ashburn-1
-provider = resource_principal_auth
+[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = resource_principal_auth
Workload Identity
Workload Identity auth may be used when running Rclone from
Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster. For
@@ -53785,14 +53774,14 @@ export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
No authentication
Public buckets do not require any authentication mechanism to read
objects. Sample rclone configuration file for No authentication:
-[oos]
-type = oracleobjectstorage
-namespace = id<redacted>34
-compartment = ocid1.compartment.oc1..aa<redacted>ba
-region = us-ashburn-1
-provider = no_auth
-Modification times and
+[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = no_auth
+Modification times and
hashes
The modification time is stored as metadata on the object as
opc-meta-mtime as floating point since the epoch, accurate
@@ -53831,7 +53820,7 @@ values are high enough to gain most of the possible performance without
using too much memory.
-Standard options
+Standard options
Here are the Standard options specific to oracleobjectstorage (Oracle
Cloud Infrastructure Object Storage).
--oos-provider
@@ -53955,7 +53944,7 @@ buckets
-Advanced options
+Advanced options
Here are the Advanced options specific to oracleobjectstorage (Oracle
Cloud Infrastructure Object Storage).
--oos-storage-tier
@@ -54253,7 +54242,7 @@ Encryption
- Type: string
- Required: false
-
+
User metadata is stored as opc-meta- keys.
Here are the possible system metadata items for the
oracleobjectstorage backend.
@@ -54349,26 +54338,26 @@ format.
multipart uploads.
You can call it with no bucket in which case it lists all buckets,
with a bucket or with a bucket and path.
-{
- "test-bucket": [
- {
- "namespace": "test-namespace",
- "bucket": "test-bucket",
- "object": "600m.bin",
- "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
- "timeCreated": "2022-07-29T06:21:16.595Z",
- "storageTier": "Standard"
- }
- ]
-}
-
-### cleanup
-
-Remove unfinished multipart uploads.
-
-```console
-rclone backend cleanup remote: [options] [<arguments>+]
+{
+ "test-bucket": [
+ {
+ "namespace": "test-namespace",
+ "bucket": "test-bucket",
+ "object": "600m.bin",
+ "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+ "timeCreated": "2022-07-29T06:21:16.595Z",
+ "storageTier": "Standard"
+ }
+ ]
+}
+
+cleanup
+Remove unfinished multipart uploads.
+rclone backend cleanup remote: [options] [<arguments>+]
This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command
@@ -54397,17 +54386,17 @@ rclone backend restore oos:bucket -o hours=HOURS
It returns a list of status dictionaries with Object Name and Status
keys. The Status will be "RESTORED"" if it was successful or an error
message if not.
-[
- {
- "Object": "test.txt"
- "Status": "RESTORED",
- },
- {
- "Object": "test/file4.txt"
- "Status": "RESTORED",
- }
-]
+[
+    {
+        "Object": "test.txt",
+        "Status": "RESTORED"
+    },
+    {
+        "Object": "test/file4.txt",
+        "Status": "RESTORED"
+    }
+]
Options:
- "hours": The number of hours for which this object will be restored.
@@ -54422,7 +54411,7 @@ Buckets
Paths are specified as remote:bucket (or
remote: for the lsd command.) You may put
subdirectories in too, e.g. remote:bucket/path/to/dir.
-Configuration
+Configuration
Here is an example of making an QingStor configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -54537,7 +54526,7 @@ file
-Restricted filename
+Restricted filename
characters
The control characters 0x00-0x1F and / are replaced as in the default
@@ -54547,7 +54536,7 @@ href="https://rclone.org/overview/#invalid-utf8">replaced, as they
can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to qingstor (QingCloud Object
Storage).
--qingstor-env-auth
@@ -54631,7 +54620,7 @@ IAM).
-Advanced options
+Advanced options
Here are the Advanced options specific to qingstor (QingCloud Object
Storage).
--qingstor-connection-retries
@@ -54707,7 +54696,7 @@ section in the overview for more info.
- Required: false
-Limitations
+Limitations
rclone about is not supported by the qingstor backend.
Backends without this capability cannot determine free space for an
rclone mount or use policy mfs (most free space) as a
@@ -54731,7 +54720,7 @@ class="uri">https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/p
See complete Swagger
documentation for Quatrix.
-Configuration
+Configuration
Here is an example of how to make a remote called
remote. First run:
rclone config
@@ -54821,14 +54810,14 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-Modification times and
+Modification times and
hashes
Quatrix allows modification times to be set on objects accurate to 1
microsecond. These will be used to detect whether objects need syncing
or not.
Quatrix does not support hashes, so you cannot use the
--checksum flag.
-Restricted filename
+Restricted filename
characters
File names in Quatrix are case sensitive and have limits: the
maximum length of a filename is 255 and the minimum is 1. A
@@ -54858,7 +54847,7 @@ and an API to empty the Trash so that you can remove files permanently
from your account.
-Standard options
+Standard options
Here are the Standard options specific to quatrix (Quatrix by
Maytech).
--quatrix-api-key
@@ -54879,7 +54868,7 @@ Maytech).
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to quatrix (Quatrix by
Maytech).
--quatrix-encoding
@@ -55020,7 +55009,7 @@ it on localhost with command line argument
--authorize-api=false, but this is insecure and
strongly discouraged.
-Configuration
+Configuration
Here is an example of how to make a sia remote called
mySia. First, run:
rclone config
@@ -55080,7 +55069,7 @@ y/e/d> y
-Standard options
+Standard options
Here are the Standard options specific to sia (Sia Decentralized
Cloud).
--sia-api-url
@@ -55109,7 +55098,7 @@ obscure.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to sia (Sia Decentralized
Cloud).
--sia-user-agent
@@ -55144,7 +55133,7 @@ section in the overview for more info.
- Required: false
-Limitations
+Limitations
rclone config
This will guide you through an interactive setup process.
@@ -55290,27 +55279,27 @@ deleting any excess files in the container.
from an OpenStack credentials file
An OpenStack credentials file typically looks something
like this (without the comments)
-export OS_AUTH_URL=https://a.provider.net/v2.0
-export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
-export OS_TENANT_NAME="1234567890123456"
-export OS_USERNAME="123abc567xy"
-echo "Please enter your OpenStack Password: "
-read -sr OS_PASSWORD_INPUT
-export OS_PASSWORD=$OS_PASSWORD_INPUT
-export OS_REGION_NAME="SBG1"
-if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
+export OS_AUTH_URL=https://a.provider.net/v2.0
+export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export OS_TENANT_NAME="1234567890123456"
+export OS_USERNAME="123abc567xy"
+echo "Please enter your OpenStack Password: "
+read -sr OS_PASSWORD_INPUT
+export OS_PASSWORD=$OS_PASSWORD_INPUT
+export OS_REGION_NAME="SBG1"
+if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
The config file needs to look something like this where
$OS_USERNAME represents the value of the
OS_USERNAME variable - 123abc567xy in the
example above.
-[remote]
-type = swift
-user = $OS_USERNAME
-key = $OS_PASSWORD
-auth = $OS_AUTH_URL
-tenant = $OS_TENANT_NAME
+[remote]
+type = swift
+user = $OS_USERNAME
+key = $OS_PASSWORD
+auth = $OS_AUTH_URL
+tenant = $OS_TENANT_NAME
Note that you may (or may not) need to set region too -
try without first.
Configuration from the
@@ -55339,11 +55328,11 @@ OpenStack installation.
config file
You can use rclone with swift without a config file, if desired, like
this:
-source openstack-credentials-file
-export RCLONE_CONFIG_MYREMOTE_TYPE=swift
-export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
-rclone lsd myremote:
+source openstack-credentials-file
+export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+rclone lsd myremote:
--fast-list
This remote supports --fast-list which allows you to use
fewer transactions in exchange for more memory. See the rclone docs for
more details. If you use --update along with --use-server-modtime, you
can avoid the extra API call and simply upload files whose local modtime
is newer than the time it was last uploaded.
-Modification times and
+Modification times and
hashes
The modified time is stored as metadata on the object as
X-Object-Meta-Mtime as floating point since the epoch
@@ -55369,7 +55358,7 @@ accurate to 1 ns.
This is a de facto standard (used in the official python-swiftclient
amongst others) for storing the modification time for an object.
The MD5 hash algorithm is supported.
-Restricted filename
+Restricted filename
characters
@@ -55397,7 +55386,7 @@ href="https://rclone.org/overview/#invalid-utf8">replaced, as they
can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to swift (OpenStack Swift
(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
--swift-env-auth
@@ -55649,7 +55638,7 @@ provider.
-Advanced options
+Advanced options
Here are the Advanced options specific to swift (OpenStack Swift
(Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).
--swift-leave-parts-on-error
@@ -55700,7 +55689,7 @@ Swift API that do not implement pagination as expected. See also
--swift-chunk-size
Above this size files will be chunked.
-Above this size files will be chunked into a a _segments
+
Above this size files will be chunked into a _segments
container or a .file-segments directory. (See the
use_segments_container option for more info). Default for
this is 5 GiB which is its maximum value, which means only files above
@@ -55809,7 +55798,7 @@ section in the overview for more info.
- Required: false
-Limitations
+Limitations
The Swift API doesn't return a correct MD5SUM for segmented files
(Dynamic or Static Large Objects) so rclone won't check or use the
MD5SUM for these.
@@ -55850,7 +55839,7 @@ the following:
Paths are specified as remote:path
Paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Configuration
+Configuration
The initial setup for pCloud involves getting a token from pCloud
which you need to do in your browser. rclone config walks
you through it.
@@ -55921,7 +55910,7 @@ you to unblock it temporarily if you are running a host firewall.
rclone ls remote:
To copy a local directory to a pCloud directory called backup
rclone copy /home/source remote:backup
-Modification times and
+Modification times and
hashes
pCloud allows modification times to be set on objects accurate to 1
second. These will be used to detect whether objects need syncing or
@@ -55930,7 +55919,7 @@ re-uploaded.
pCloud supports MD5 and SHA1 hashes in the US region, and SHA1 and
SHA256 hashes in the EU region, so you can use the
--checksum flag.
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -55965,7 +55954,7 @@ will only work if you set your username and password in the advanced
options for this backend. Since we generally want to avoid storing user
passwords in the rclone config file, we advise you to only set this up
if you need the rclone cleanup command to work.
-Root folder ID
+Root folder ID
You can set the root_folder_id for rclone. This is the
directory (identified by its Folder ID) that rclone
considers to be the root of your pCloud drive.
@@ -55989,9 +55978,31 @@ dxxxxxxxx4,My Videos/
the returned id from rclone lsf command (ex.
dxxxxxxxx2) as the root_folder_id variable
value in the config file.
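As a sketch, the lookup and the config change can be done from the command
line (the folder id shown is the placeholder from the listing above):
```shell
# List top-level directories together with their pCloud folder IDs
rclone lsf --dirs-only -Fip --csv remote:
# Suppose "My Videos" came back as dxxxxxxxx4; write it into the config
rclone config update remote root_folder_id dxxxxxxxx4
```
After changing root_folder_id, restart any long-running mounts or serves so
they pick up the new root.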
+Change notifications and
+mounts
+The pCloud backend supports real-time updates for rclone mounts via
+change notifications. rclone uses pCloud's diff long-polling API to
+detect changes and will automatically refresh directory listings in the
+mounted filesystem when changes occur.
+Notes and behavior:
+
+- Works automatically when using rclone mount and
+requires no additional configuration.
+- Notifications are directory-scoped: when rclone detects a change, it
+refreshes the affected directory so new/removed/renamed files become
+visible promptly.
+- Updates are near real-time. The backend uses a long poll with short
+fallback polling intervals, so you should see changes appear quickly
+without manual refreshes.
+
+If you want to debug or verify notifications, you can use the helper
+command:
+rclone test changenotify remote:
+This will log incoming change notifications for the given remote.
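A mount that benefits from these notifications might look like this (a
sketch; the mount point is illustrative):
```shell
# Change notifications keep the directory cache fresh, so a long
# --dir-cache-time is safe here
rclone mount remote: /mnt/pcloud --dir-cache-time 720h --vfs-cache-mode writes
```
Without change notification support you would normally keep
--dir-cache-time short so that remote changes are picked up.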
-Standard options
+Standard options
Here are the Standard options specific to pcloud (Pcloud).
--pcloud-client-id
OAuth Client Id.
@@ -56013,7 +56024,7 @@ value in the config file.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to pcloud (Pcloud).
--pcloud-token
OAuth Access Token as a JSON blob.
@@ -56140,7 +56151,7 @@ obscure.
drive.
Paths are specified as remote:path, and may be as deep
as required, e.g. remote:directory/subdirectory.
-Configuration
+Configuration
Here is an example of making a remote for PikPak.
First run:
rclone config
@@ -56193,7 +56204,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Modification times and
+Modification times and
hashes
PikPak keeps modification times on objects, and updates them when
uploading objects, but it does not support changing only the
@@ -56201,7 +56212,7 @@ modification time
The MD5 hash algorithm is supported.
-Standard options
+Standard options
Here are the Standard options specific to pikpak (PikPak).
--pikpak-user
Pikpak username.
@@ -56224,7 +56235,7 @@ obscure.
- Type: string
- Required: true
-Advanced options
+Advanced options
Here are the Advanced options specific to pikpak (PikPak).
--pikpak-device-id
Device ID used for authorization.
@@ -56402,14 +56413,14 @@ in 'pikpak:dirpath'. You may want to pass '-o password=password' for a
password-protected files. Also, pass '-o delete-src-file' to delete
source files after decompression finished.
Result:
-{
- "Decompressed": 17,
- "SourceDeleted": 0,
- "Errors": 0
-}
+{
+ "Decompressed": 17,
+ "SourceDeleted": 0,
+ "Errors": 0
+}
-Limitations
+Limitations
Hashes may be empty
PikPak supports MD5 hashes, but they are sometimes empty, especially
for user-uploaded files.
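If empty hashes cause verification failures, one workaround is to skip
checksum verification and rely on size/modtime checking instead (a sketch,
not upstream PikPak advice):
```shell
# Skip post-transfer checksum verification for files with missing hashes
rclone copy pikpak:path /local/path --ignore-checksum
```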
@@ -56514,7 +56525,7 @@ directory and their public IDs.
access the directory.
-Standard options
+Standard options
Here are the Standard options specific to pixeldrain (Pixeldrain
Filesystem).
--pixeldrain-api-key
@@ -56538,7 +56549,7 @@ directory ID to use a shared directory.
- Type: string
- Default: "me"
-Advanced options
+Advanced options
Here are the Advanced options specific to pixeldrain (Pixeldrain
Filesystem).
--pixeldrain-api-url
@@ -56561,7 +56572,7 @@ testing purposes.
- Type: string
- Required: false
-
+
Pixeldrain supports file modes and creation times.
Here are the possible system metadata items for the pixeldrain
backend.
@@ -56613,7 +56624,7 @@ for more info.
Paths are specified as remote:path
Paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Configuration
+Configuration
The initial setup for premiumize.me involves getting a token
from premiumize.me which you need to do in your browser.
@@ -56676,12 +56687,12 @@ you to unblock it temporarily if you are running a host firewall.
To copy a local directory to a premiumize.me directory called
backup
rclone copy /home/source remote:backup
-Modification times and
+Modification times and
hashes
premiumize.me does not support modification times or hashes,
therefore syncing will default to --size-only checking.
Note that using --update will work.
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -56713,7 +56724,7 @@ href="https://rclone.org/overview/#invalid-utf8">replaced, as they
can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to premiumizeme
(premiumize.me).
--premiumizeme-client-id
@@ -56746,7 +56757,7 @@ can't be used in JSON strings.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to premiumizeme
(premiumize.me).
--premiumizeme-token
@@ -56812,7 +56823,7 @@ section in the overview for more info.
- Required: false
-Limitations
+Limitations
Note that premiumize.me is case insensitive so you can't have a file
called "Hello.doc" and one called "hello.doc".
premiumize.me file names can't have the \ or
@@ -56894,12 +56905,12 @@ attempting to use the credentials in rclone will fail.
To copy a local directory to a Proton Drive directory called
backup
rclone copy /home/source remote:backup
-Modification times and
+Modification times and
hashes
Proton Drive Bridge does not support updating modification times
yet.
The SHA1 hash algorithm is supported.
-Restricted filename
+Restricted filename
characters
Invalid UTF-8 bytes will be replaced, also left
@@ -56924,7 +56935,7 @@ clients accessing the same mount point, then we might have a problem
with caching the stale data.
-Standard options
+Standard options
Here are the Standard options specific to protondrive (Proton
Drive).
--protondrive-username
@@ -56976,7 +56987,7 @@ obscure.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to protondrive (Proton
Drive).
- Required: false
-Limitations
+Limitations
This backend uses the Proton-API-Bridge,
which is based on
Paths are specified as remote:path
put.io paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Configuration
+Configuration
The initial setup for put.io involves getting a token from put.io
which you need to do in your browser. rclone config walks
you through it.
@@ -57231,7 +57242,7 @@ mode.
rclone ls remote:
To copy a local directory to a put.io directory called backup
rclone copy /home/source remote:backup
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -57258,7 +57269,7 @@ href="https://rclone.org/overview/#invalid-utf8">replaced, as they
can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to putio (Put.io).
--putio-client-id
OAuth Client Id.
@@ -57280,7 +57291,7 @@ can't be used in JSON strings.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to putio (Put.io).
--putio-token
OAuth Access Token as a JSON blob.
@@ -57344,7 +57355,7 @@ section in the overview for more info.
- Required: false
-Limitations
+Limitations
put.io has rate limiting. When you hit a limit, rclone automatically
retries after waiting the amount of time requested by the server.
If you want to avoid ever hitting these limits, you may use the
@@ -57425,12 +57436,12 @@ attempting to use the credentials in rclone will fail.
To copy a local directory to a Proton Drive directory called
backup
rclone copy /home/source remote:backup
-Modification times and
+Modification times and
hashes
Proton Drive Bridge does not support updating modification times
yet.
The SHA1 hash algorithm is supported.
-Restricted filename
+Restricted filename
characters
Invalid UTF-8 bytes will be replaced, also left
@@ -57455,7 +57466,7 @@ clients accessing the same mount point, then we might have a problem
with caching the stale data.
-Standard options
+Standard options
Here are the Standard options specific to protondrive (Proton
Drive).
--protondrive-username
@@ -57507,7 +57518,7 @@ obscure.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to protondrive (Proton
Drive).
- Required: false
-Limitations
+Limitations
This backend uses the Proton-API-Bridge,
which is based on
- It supports 2FA enabled users
- Using a Library API Token is not supported
-Configuration
+Configuration
There are two distinct modes you can setup your remote:
- you point your remote to the root of the server,
@@ -57880,7 +57891,7 @@ to use fewer transactions in exchange for more memory. See the rclone docs for more
details. Please note this is not supported on seafile server version
6.x
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -57945,7 +57956,7 @@ href="https://hub.docker.com/r/seafileltd/seafile-mc/">latest docker
image of the seafile community server.
-Standard options
+Standard options
Here are the Standard options specific to seafile (seafile).
--seafile-url
URL of seafile host to connect to.
@@ -58026,7 +58037,7 @@ obscure.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to seafile (seafile).
--seafile-create-library
Should rclone create a library if it doesn't exist.
@@ -58087,7 +58098,7 @@ users to OMIT the leading /.
Note that by default rclone will try to execute shell commands on the
server, see shell access
considerations.
-Configuration
+Configuration
Here is an example of making an SFTP configuration. First run
rclone config
This will guide you through an interactive setup process.
@@ -58194,13 +58205,13 @@ provide the path to the user certificate public key file in
key, typically saved as /home/$USER/.ssh/id_rsa.pub.
Setting this path in pubkey_file will not work.
Example:
-[remote]
-type = sftp
-host = example.com
-user = sftpuser
-key_file = ~/id_rsa
-pubkey_file = ~/id_rsa-cert.pub
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+key_file = ~/id_rsa
+pubkey_file = ~/id_rsa-cert.pub
If you concatenate a cert with a private key then you can specify the
merged file in both places.
Note: the cert must come first in the file. e.g.
@@ -58215,13 +58226,13 @@ be turned on by enabling the known_hosts_file option. This
can point to the file maintained by OpenSSH or can point to
a unique file.
e.g. using the OpenSSH known_hosts file:
-[remote]
-type = sftp
-host = example.com
-user = sftpuser
-pass =
-known_hosts_file = ~/.ssh/known_hosts
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+pass =
+known_hosts_file = ~/.ssh/known_hosts
Alternatively you can create your own known hosts file like this:
ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts
There are some limitations:
@@ -58368,7 +58379,7 @@ option hashes to none or options
none to not only disable checksumming, but also disable all
other functionality that are based on remote shell command
execution.
-Modification times and
+Modification times and
hashes
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
@@ -58393,7 +58404,7 @@ built-in shell command (see shell access).
If none of the above is applicable, about will fail.
-Standard options
+Standard options
Here are the Standard options specific to sftp (SSH/SFTP).
--sftp-host
SSH host to connect to.
@@ -58588,7 +58599,7 @@ connection for every hash it calculates.
- Type: SpaceSepList
- Default:
-Advanced options
+Advanced options
Here are the Advanced options specific to sftp (SSH/SFTP).
--sftp-known-hosts-file
Optional path to known_hosts file.
@@ -59012,6 +59023,10 @@ host:port.
URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
verb.
+Supports the format http://user:pass@host:port, http://host:port,
+http://host.
+Example:
+http://myUser:myPass@proxyhostname.example.com:8000
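For instance, the proxy can also be supplied per invocation via the
corresponding flag (hostname and credentials here are placeholders):
```shell
rclone lsd mysftp: --sftp-http-proxy http://myUser:myPass@proxyhostname.example.com:8000
```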
Properties:
- Config: http_proxy
@@ -59048,7 +59063,7 @@ as the source and the destination will be the same file.
- Required: false
-Limitations
+Limitations
On some SFTP servers (e.g. Synology) the paths are different for SSH
and SFTP so the hashes can't be calculated properly. You can either use
--sftp-path-override or
@@ -59080,6 +59095,224 @@ documentation of rclone examples.
See Hetzner's
documentation for details
+Shade
+This is a backend for the Shade
+platform
+About Shade
+Shade is an AI-powered cloud NAS
+that makes your cloud files behave like a local drive, optimized for
+media and creative workflows. It provides fast, secure access with
+natural-language search, easy sharing, and scalable cloud storage.
+Accounts & Pricing
+To use this backend, you need to create an account on Shade. You
+can start with a free account, which includes 20GB of storage.
+Usage
+Paths are specified as remote:path
+Paths may be as deep as required, e.g.
+remote:directory/subdirectory.
+Configuration
+Here is an example of making a Shade configuration.
+First, create a free
+account and choose a plan.
+You will need to log in and get the API Key and
+Drive ID for your account from the settings section of your
+account and created drive respectively.
+Now run
+rclone config
+Follow this interactive process:
+$ rclone config
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> Shade
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[OTHER OPTIONS]
+xx / Shade FS
+ \ (shade)
+[OTHER OPTIONS]
+Storage> xx
+
+Option drive_id.
+The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
+Enter a value.
+drive_id> [YOUR_ID]
+
+Option api_key.
+An API key for your account.
+Enter a value.
+api_key> [YOUR_API_KEY]
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: shade
+- drive_id: [YOUR_ID]
+- api_key: [YOUR_API_KEY]
+Keep this "Shade" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Modification times and
+hashes
+Shade does not support hashes or setting modification times.
+Transfers
+Shade uses multipart uploads by default. This means that files will
+be chunked and sent up to Shade concurrently. To configure how
+many simultaneous chunk uploads you want, set the 'upload_concurrency'
+option in the advanced config section. Note that higher concurrency uses
+more memory and initiates more HTTP requests.
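As an illustration, chunking and concurrency can be tuned per transfer
with the backend flags documented below (the values are examples only):
```shell
# More, larger chunks in flight: higher throughput, more memory used
rclone copy /media/project shade:backups \
  --shade-chunk-size 128Mi --shade-upload-concurrency 8
```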
+Deleting files
+Please note that when deleting files in Shade via rclone they are
+deleted instantly rather than being sent to the trash. This
+means that they will not be recoverable.
+
+
+Standard options
+Here are the Standard options specific to shade (Shade FS).
+--shade-drive-id
+The ID of your drive, see this in the drive settings. Individual
+rclone configs must be made per drive.
+Properties:
+
+- Config: drive_id
+- Env Var: RCLONE_SHADE_DRIVE_ID
+- Type: string
+- Required: true
+
+--shade-api-key
+An API key for your account.
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_SHADE_API_KEY
+- Type: string
+- Required: true
+
+Advanced options
+Here are the Advanced options specific to shade (Shade FS).
+--shade-endpoint
+Endpoint for the service.
+Leave blank normally.
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_SHADE_ENDPOINT
+- Type: string
+- Required: false
+
+--shade-chunk-size
+Chunk size to use for uploading.
+Any files larger than this will be uploaded in chunks of this
+size.
+Note that this is stored in memory per transfer, so increasing it
+will increase memory usage.
+Minimum is 5MB, maximum is 5GB.
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_SHADE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 64Mi
+
+--shade-upload-concurrency
+Concurrency for multipart uploads and copies. This is the number of
+chunks of the same file that are uploaded concurrently for multipart
+uploads and copies.
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_SHADE_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+--shade-max-upload-parts
+Maximum amount of parts in a multipart upload.
+Properties:
+
+- Config: max_upload_parts
+- Env Var: RCLONE_SHADE_MAX_UPLOAD_PARTS
+- Type: int
+- Default: 10000
+
+--shade-token
+JWT Token for performing Shade FS operations. Don't set this value -
+rclone will set it automatically.
+Properties:
+
+- Config: token
+- Env Var: RCLONE_SHADE_TOKEN
+- Type: string
+- Required: false
+
+--shade-token-expiry
+JWT Token Expiration time. Don't set this value - rclone will set it
+automatically.
+Properties:
+
+- Config: token_expiry
+- Env Var: RCLONE_SHADE_TOKEN_EXPIRY
+- Type: string
+- Required: false
+
+--shade-encoding
+The encoding for the backend.
+See the encoding
+section in the overview for more info.
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_SHADE_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+--shade-description
+Description of the remote.
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SHADE_DESCRIPTION
+- Type: string
+- Required: false
+
+
+Limitations
+Note that Shade is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
+Shade only supports filenames up to 255 characters in length.
+rclone about is not supported by the Shade backend.
+Backends without this capability cannot determine free space for an
+rclone mount or use policy mfs (most free space) as a
+member of an rclone union remote.
+See List of
+backends that do not support rclone about and rclone about.
+Backend commands
+Here are the commands specific to the shade backend.
+Run them with
+rclone backend COMMAND remote:
+The help below will explain what arguments each command takes.
+See the backend command
+for more info on how to pass options and arguments.
+These can be run on a running backend using the rc command backend/command.
SMB
SMB is a
communication protocol to share files over network.
@@ -59104,7 +59337,7 @@ href="https://rclone.org/local/#paths-on-windows">the local backend
on Windows can access SMB servers using UNC paths, by
\\server\share. This doesn't apply to non-Windows OSes,
such as Linux and macOS.
-Configuration
+Configuration
Here is an example of making a SMB configuration.
First run
rclone config
@@ -59181,7 +59414,7 @@ d) Delete this remote
y/e/d> d
-Standard options
+Standard options
Here are the Standard options specific to smb (SMB / CIFS).
--smb-host
SMB server hostname to connect to.
@@ -59259,7 +59492,7 @@ KRB5_CONFIG and KRB5CCNAME environment variables.
- Type: bool
- Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to smb (SMB / CIFS).
--smb-idle-timeout
Max time before closing idle connections.
@@ -59451,7 +59684,7 @@ upload
gateway
-Configuration
+Configuration
To make a new Storj configuration you need one of the following:
-Advanced options
+Advanced options
Here are the Advanced options specific to storj (Storj Decentralized
Cloud Storage).
--storj-description
@@ -59653,7 +59886,7 @@ Cloud Storage).
- Required: false
-Usage
+Usage
Paths are specified as remote:bucket (or
remote: for the lsf command.) You may put
subdirectories in too, e.g. remote:bucket/path/to/dir.
@@ -59727,7 +59960,7 @@ deleted.
rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
Or even between another cloud storage and Storj.
rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
-Limitations
+Limitations
rclone about is not supported by the rclone Storj
backend. Backends without this capability cannot determine free space
for an rclone mount or use policy mfs (most free space) as
@@ -59756,7 +59989,7 @@ operating system manual.
SugarSync is a cloud service that
enables active synchronization of files across computers and other
devices for file backup, access, syncing, and sharing.
-Configuration
+Configuration
The initial setup for SugarSync involves getting a token from
SugarSync which you can do with rclone. rclone config walks
you through it.
@@ -59829,13 +60062,13 @@ store them, it only uses them to get the initial token.
NB you can't create files in the top level folder
you have to create a folder, which rclone will create as a "Sync Folder"
with SugarSync.
-Modification times and
+Modification times and
hashes
SugarSync does not support modification times or hashes, therefore
syncing will default to --size-only checking. Note that
using --update will work as rclone can read the time files
were uploaded.
-Restricted filename
+Restricted filename
characters
SugarSync replaces the default
@@ -59843,7 +60076,7 @@ restricted characters set except for DEL.
Invalid UTF-8 bytes will also be replaced, as they
can't be used in XML strings.
-Deleting files
+Deleting files
Deleted files will be moved to the "Deleted items" folder by
default.
However you can supply the flag --sugarsync-hard-delete
@@ -59851,7 +60084,7 @@ or set the config parameter hard_delete = true if you would
like files to be deleted straight away.
-Standard options
+Standard options
Here are the Standard options specific to sugarsync (Sugarsync).
--sugarsync-app-id
Sugarsync App ID.
@@ -59894,7 +60127,7 @@ files.
- Type: bool
- Default: false
-Advanced options
+Advanced options
Here are the Advanced options specific to sugarsync (Sugarsync).
--sugarsync-refresh-token
Sugarsync refresh token.
@@ -59978,7 +60211,7 @@ section in the overview for more info.
- Required: false
-Limitations
+Limitations
rclone about is not supported by the SugarSync backend.
Backends without this capability cannot determine free space for an
rclone mount or use policy mfs (most free space) as a
@@ -59992,7 +60225,7 @@ href="https://rclone.org/commands/rclone_about/">rclone about.
remote:directory/subdirectory.
The initial setup for Uloz.to involves filling in the user
credentials. rclone config walks you through it.
-Configuration
+Configuration
Here is an example of how to make a remote called
remote. First run:
rclone config
@@ -60057,7 +60290,7 @@ y/e/d> y
and password. Uloz.to offers an API key as well, but it's reserved for
the use of Uloz.to's in-house application and using it in different
circumstances is unreliable.
-Modification times and
+Modification times and
hashes
Uloz.to doesn't allow the user to set a custom modification time, or
retrieve the hashes after upload. As a result, the integration uses a
@@ -60067,7 +60300,7 @@ and hashes. Timestamps are stored with microsecond precision.
Afterwards, the backend only serves the client-side calculated hashes.
Hashes can also be retrieved upon creating a file download link, but
it's impractical for list-like use cases.
-Restricted filename
+Restricted filename
characters
In addition to the default
@@ -60092,17 +60325,17 @@ replaced:
Invalid UTF-8 bytes will also be replaced, as they
can't be used in JSON strings.
-Transfers
+Transfers
All files are currently uploaded using a single HTTP request, so for
uploading large files a stable connection is necessary. Rclone will
upload up to --transfers chunks at the same time (shared
among all uploads).
-Deleting files
+Deleting files
By default, files are moved to the recycle bin whereas folders are
deleted immediately. Trashed files are permanently deleted after 30 days
in the recycle bin.
Emptying the trash is currently not implemented in rclone.
-Root folder ID
+Root folder ID
You can set the root_folder_slug for rclone. This is the
folder (identified by its Folder slug) that rclone
considers to be the root of your Uloz.to drive.
@@ -60123,7 +60356,7 @@ in the remote path. For example, if your remote's
ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux.
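A sketch of setting this for an existing remote from the command line
(the slug value is a placeholder):
```shell
# Pin the remote's root to a specific folder slug
rclone config update remote root_folder_slug XXXXXXXX
```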
-Standard options
+Standard options
Here are the Standard options specific to ulozto (Uloz.to).
--ulozto-app-token
The application token identifying the app. An app API key can be
@@ -60157,7 +60390,7 @@ obscure.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to ulozto (Uloz.to).
--ulozto-root-folder-slug
If set, rclone will use this folder as the root folder for all
@@ -60200,7 +60433,7 @@ section in the overview for more info.
- Required: false
-Limitations
+Limitations
Uloz.to file names can't have the \ character in. rclone
maps this to and from an identical looking unicode equivalent
\ (U+FF3C Fullwidth Reverse Solidus).
@@ -60217,158 +60450,6 @@ determine free space for an rclone mount or use policy mfs
See List of
backends that do not support rclone about and rclone about.
-Uptobox
-This is a Backend for Uptobox file storage service. Uptobox is closer
-to a one-click hoster than a traditional cloud storage provider and
-therefore not suitable for long term storage.
-Paths are specified as remote:path
-Paths may be as deep as required, e.g.
-remote:directory/subdirectory.
-Configuration
-To configure an Uptobox backend you'll need your personal api token.
-You'll find it in your account
-settings.
-Here is an example of how to make a remote called remote
-with the default setup. First run:
-rclone config
-This will guide you through an interactive setup process:
-Current remotes:
-
-Name Type
-==== ====
-TestUptobox uptobox
-
-e) Edit existing remote
-n) New remote
-d) Delete remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-e/n/d/r/c/s/q> n
-name> uptobox
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[...]
-37 / Uptobox
- \ "uptobox"
-[...]
-Storage> uptobox
-** See help for uptobox backend at: https://rclone.org/uptobox/ **
-
-Your API Key, get it from https://uptobox.com/my_account
-Enter a string value. Press Enter for the default ("").
-api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-Edit advanced config? (y/n)
-y) Yes
-n) No (default)
-y/n> n
-Remote config
---------------------
-[uptobox]
-type = uptobox
-api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d>
-Once configured you can then use rclone like this
-(replace remote with the name you gave your remote):
-List directories in top level of your Uptobox
-rclone lsd remote:
-List all the files in your Uptobox
-rclone ls remote:
-To copy a local directory to an Uptobox directory called backup
-rclone copy /home/source remote:backup
-Modification times and
-hashes
-Uptobox supports neither modified times nor checksums. All timestamps
-will read as that set by --default-time.
-Restricted filename
-characters
-In addition to the default
-restricted characters set the following characters are also
-replaced:
-
-
-
-
-
-
-| " |
-0x22 |
-" |
-
-
-| ` |
-0x41 |
-` |
-
-
-
-Invalid UTF-8 bytes will also be replaced, as they
-can't be used in XML strings.
-
-
-Standard options
-Here are the Standard options specific to uptobox (Uptobox).
---uptobox-access-token
-Your access token.
-Get it from https://uptobox.com/my_account.
-Properties:
-
-- Config: access_token
-- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
-- Type: string
-- Required: false
-
-Advanced options
-Here are the Advanced options specific to uptobox (Uptobox).
---uptobox-private
-Set to make uploaded files private
-Properties:
-
-- Config: private
-- Env Var: RCLONE_UPTOBOX_PRIVATE
-- Type: bool
-- Default: false
-
---uptobox-encoding
-The encoding for the backend.
-See the encoding
-section in the overview for more info.
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: Encoding
-- Default:
-Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
-
---uptobox-description
-Description of the remote.
-Properties:
-
-- Config: description
-- Env Var: RCLONE_UPTOBOX_DESCRIPTION
-- Type: string
-- Required: false
-
-
-Limitations
-Uptobox will delete inactive files that have not been accessed in 60
-days.
-rclone about is not supported by this backend an
-overview of used space can however been seen in the uptobox web
-interface.
Union
The union backend joins several remotes together to make
a single unified view of them.
@@ -60399,7 +60480,7 @@ named backup with the remotes
segments. Invoking rclone mkdir backup:../desktop is
exactly the same as invoking
rclone mkdir mydrive:private/backup/../desktop.
-Configuration
+Configuration
Here is an example of how to make a union called remote
for local folders. First run:
rclone config
@@ -60678,13 +60759,13 @@ upstream.
Writeback
The tag :writeback on an upstream remote can be used to
make a simple cache system like this:
-[union]
-type = union
-action_policy = all
-create_policy = all
-search_policy = ff
-upstreams = /local:writeback remote:dir
+[union]
+type = union
+action_policy = all
+create_policy = all
+search_policy = ff
+upstreams = /local:writeback remote:dir
When files are opened for read, if the file is in
remote:dir but not /local then rclone will
copy the file entirely into /local before returning a
@@ -60701,7 +60782,7 @@ other than writing files back to it. So if you need to expire old files
or manage the size then you will have to do this yourself.
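As a sketch of doing that yourself: the writeback upstream is just an ordinary directory, so standard tools can expire stale copies. The example below runs in a scratch directory standing in for the /local upstream from the config above (the file names and the 30-day cutoff are illustrative, not part of rclone):

```shell
# Demonstration in a scratch directory standing in for the /local
# writeback upstream: files not read for over 30 days are deleted,
# then directories left empty are tidied away.
CACHE=$(mktemp -d)                              # stand-in for /local
mkdir -p "$CACHE/sub"
touch "$CACHE/fresh.txt"                        # recently read file
touch "$CACHE/sub/stale.txt"
touch -a -d 2020-01-01 "$CACHE/sub/stale.txt"   # fake an old read time
find "$CACHE" -type f -atime +30 -delete
find "$CACHE" -mindepth 1 -type d -empty -delete
ls "$CACHE"                                     # only fresh.txt remains
```

On a real cache you would point the find commands at /local itself, ideally from cron or a systemd timer.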
-Standard options
+Standard options
Here are the Standard options specific to union (Union merges the
contents of several upstream fs).
--union-upstreams
@@ -60752,7 +60833,7 @@ dir" upstreamb:', etc.
- Type: int
- Default: 120
-Advanced options
+Advanced options
Here are the Advanced options specific to union (Union merges the
contents of several upstream fs).
--union-min-free-space
@@ -60775,7 +60856,7 @@ considered for use in lfs or eplfs policies.
- Type: string
- Required: false
-Metadata
+Metadata
Any metadata supported by the underlying remote is read and
written.
See the metadata docs
@@ -60785,7 +60866,7 @@ for more info.
Paths are specified as remote:path
Paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Configuration
+Configuration
To configure the WebDAV remote you will need to have a URL for it,
and a username and password. If you know what kind of system you are
connecting to then rclone can enable extra features.
@@ -60863,7 +60944,7 @@ y/e/d> y
rclone ls remote:
To copy a local directory to an WebDAV directory called backup
rclone copy /home/source remote:backup
-Modification times and
+Modification times and
hashes
Plain WebDAV does not support modified times. However when used with
Fastmail Files, ownCloud or Nextcloud rclone will support modified
@@ -60875,7 +60956,7 @@ may appear on all objects, or only on objects which had a hash uploaded
with them.
-Standard options
+Standard options
Here are the Standard options specific to webdav (WebDAV).
--webdav-url
URL of http host to connect to.
@@ -60965,7 +61046,7 @@ obscure.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to webdav (WebDAV).
--webdav-bearer-token-command
Command to run to get a bearer token.
@@ -61144,13 +61225,13 @@ and use your normal account email and password for user and
pass. If you have 2FA enabled, you have to generate an app
password. Set the vendor to sharepoint.
Your config file should look like this:
-[sharepoint]
-type = webdav
-url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
-vendor = sharepoint
-user = YourEmailAddress
-pass = encryptedpassword
+[sharepoint]
+type = webdav
+url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
+vendor = sharepoint
+user = YourEmailAddress
+pass = encryptedpassword
Sharepoint with NTLM
Authentication
Use this option in case your (hosted) Sharepoint is not tied to
@@ -61168,13 +61249,13 @@ class="uri">https://example.sharepoint.com/sites/12345/Documents
NTLM uses a domain and user name combination for authentication; set
user to DOMAIN\username.
Your config file should look like this:
-[sharepoint]
-type = webdav
-url = https://[YOUR-DOMAIN]/some-path-to/Documents
-vendor = sharepoint-ntlm
-user = DOMAIN\user
-pass = encryptedpassword
+[sharepoint]
+type = webdav
+url = https://[YOUR-DOMAIN]/some-path-to/Documents
+vendor = sharepoint-ntlm
+user = DOMAIN\user
+pass = encryptedpassword
Required Flags for
SharePoint
As SharePoint does some special things with uploaded documents, you
@@ -61204,14 +61285,14 @@ access tokens.
username or password, instead enter your Macaroon as the
bearer_token.
The config will end up looking something like this.
-[dcache]
-type = webdav
-url = https://dcache...
-vendor = other
-user =
-pass =
-bearer_token = your-macaroon
+[dcache]
+type = webdav
+url = https://dcache...
+vendor = other
+user =
+pass =
+bearer_token = your-macaroon
There is a script
that obtains a Macaroon from a dCache WebDAV endpoint, and creates an
@@ -61251,16 +61332,16 @@ the advanced config and enter the command to get a bearer token (e.g.,
The following example config shows a WebDAV endpoint that uses
oidc-agent to supply an access token from the XDC OIDC
Provider.
-[dcache]
-type = webdav
-url = https://dcache.example.org/
-vendor = other
-bearer_token_command = oidc-token XDC
+[dcache]
+type = webdav
+url = https://dcache.example.org/
+vendor = other
+bearer_token_command = oidc-token XDC
Yandex Disk
Yandex Disk is a cloud storage
solution created by Yandex.
-Configuration
+Configuration
Here is an example of making a yandex configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -61324,7 +61405,7 @@ any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Yandex paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Modification times and
+Modification times and
hashes
Modified times are supported and are stored accurate to 1 ns in
custom metadata called rclone_modified in RFC3339 with
@@ -61339,7 +61420,7 @@ arguments.
To view your current quota you can use the
rclone about remote: command which will display your usage
limit (quota) and the current usage.
-Restricted filename
+Restricted filename
characters
The default
@@ -61349,7 +61430,7 @@ href="https://rclone.org/overview/#invalid-utf8">replaced, as they
can't be used in JSON strings.
-Standard options
+Standard options
Here are the Standard options specific to yandex (Yandex Disk).
--yandex-client-id
OAuth Client Id.
@@ -61371,7 +61452,7 @@ can't be used in JSON strings.
- Type: string
- Required: false
-Advanced options
+Advanced options
Here are the Advanced options specific to yandex (Yandex Disk).
--yandex-token
OAuth Access Token as a JSON blob.
@@ -61454,7 +61535,7 @@ client. May help with upload performance.
- Required: false
-Limitations
+Limitations
When uploading very large files (bigger than about 5 GiB) you will
need to increase the --timeout parameter. This is because
Yandex pauses (perhaps to calculate the MD5SUM for the entire file)
@@ -61474,7 +61555,7 @@ Rclone won't be able to complete any actions.
Zoho WorkDrive is a
cloud storage solution created by Zoho.
-Configuration
+Configuration
Here is an example of making a zoho configuration. First run
rclone config
This will guide you through an interactive setup process:
@@ -61557,7 +61638,7 @@ any excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
Zoho paths may be as deep as required, e.g.
remote:directory/subdirectory.
-Modification times and
+Modification times and
hashes
Modified times are currently not supported for Zoho WorkDrive.
No hash algorithms are supported.
@@ -61565,14 +61646,14 @@ hashes
To view your current quota you can use the
rclone about remote: command which will display your
current usage.
-Restricted filename
+Restricted filename
characters
Only control characters and invalid UTF-8 are replaced. In addition
most Unicode full-width characters are not supported at all and will be
removed from filenames during upload.
-Standard options
+Standard options
Here are the Standard options specific to zoho (Zoho).
--zoho-client-id
OAuth Client Id.
@@ -61633,7 +61714,7 @@ browser.
-Advanced options
+Advanced options
Here are the Advanced options specific to zoho (Zoho).
--zoho-token
OAuth Access Token as a JSON blob.
@@ -61726,7 +61807,7 @@ enable it in other regions.
rclone sync --interactive /home/source /tmp/destination
Will sync /home/source to
/tmp/destination.
-Configuration
+Configuration
For consistency's sake one can also configure a remote of type
local in the config file, and access the local filesystem
using rclone remote paths, e.g. remote:path/to/wherever,
@@ -62057,15 +62138,15 @@ drivers like EncFS. To disable
UNC conversion globally, add this to your .rclone.conf
file:
-
+
If you want to selectively disable UNC, you can add it to a separate
entry like this:
-[nounc]
-type = local
-nounc = true
+[nounc]
+type = local
+nounc = true
And use rclone like this:
rclone copy c:\src nounc:z:\dst
This will use UNC paths on c:\src but not on
@@ -62185,7 +62266,7 @@ systems. On systems where it isn't supported (e.g. Windows) it will be
ignored.
-Advanced options
+Advanced options
Here are the Advanced options specific to local (Local Disk).
--local-nounc
Disable UNC (long path names) conversion on Windows.
@@ -62487,7 +62568,7 @@ section in the overview for more info.
- Type: string
- Required: false
-Metadata
+Metadata
Depending on which OS is in use the local backend may return only
some of the system metadata. Setting system metadata is supported on all
OSes but setting user metadata is only supported on linux, freebsd,
@@ -62570,7 +62651,7 @@ backend.
See the metadata docs
for more info.
-Backend commands
+Backend commands
Here are the commands specific to the local backend.
Run them with:
rclone backend COMMAND remote:
@@ -62593,6 +62674,145 @@ the output.
Changelog
+v1.73.0 - 2026-01-30
+See
+commits
+
+- New backends
+
+- New Features
+
+- docs: Add Support Tiers to
+the documentation (Nick Craig-Wood)
+- rc: Add operations/hashsumfile
+to sum a single file only (Nick Craig-Wood)
+- serve webdav: Implement download directory as Zip (Leo)
+
+- Bug Fixes
+
+- fs: fix bwlimit: correct reporting (Mikel Olasagasti Uranga)
+- log: fix systemd adding extra newline (dougal)
+- docs: fixes (albertony, darkdragon-001, Duncan Smart, hyusap,
+Marc-Philip, Nick Craig-Wood, vicerace, vyv03354, yuval-cloudinary,
+yy)
+- serve s3: Make errors in
+--s3-auth-key fatal (Nick
+Craig-Wood)
+
+- Mount
+
+- Fix OpenBSD mount support. (Nick Owens)
+
+- Azure Blob
+
+- Add metadata and tags support across upload and copy paths (Cliff
+Frey)
+- Factor the common auth into a library (Nick Craig-Wood)
+
+- Azurefiles
+
+- Factor the common auth into a library (Nick Craig-Wood)
+
+- B2
+
+- Support authentication with new bucket restricted application keys
+(DianaNites)
+
+- Drive
+
+- Add
+--drive-metadata-force-expansive-access flag (Nick
+Craig-Wood)
+- Fix crash when trying to create a shortcut to a Google doc (Nick
+Craig-Wood)
+
+- FTP
+
+- Add http proxy authentication support (Nicolas Dessart)
+
+- Mega
+
+- Reverts TLS workaround (necaran)
+
+- Memory
+
+- Add
+--memory-discard flag for speed testing (Nick
+Craig-Wood)
+
+- OneDrive
+
+- Fix cancelling multipart upload (Nick Craig-Wood)
+- Fix setting modification time on directories for OneDrive Personal
+(Nick Craig-Wood)
+- Fix OneDrive Personal no longer supports description (Nick
+Craig-Wood)
+- Fix require sign in for OneDrive Personal (Nick Craig-Wood)
+- Fix permissions on OneDrive Personal (Nick Craig-Wood)
+
+- Oracle Object Storage
+
+- Eliminate unnecessary heap allocation (Qingwei Li)
+
+- Pcloud
+
+- Add support for
+ChangeNotify to enable real-time
+updates in mount (masrlinu)
+
+- Protondrive
+
+- Update to use forks of upstream modules to unblock development (Nick
+Craig-Wood)
+
+- S3
+
+- Add ability to specify an IAM role for cross-account interaction
+(Vladislav Tropnikov)
+- Linode: updated endpoints to use ISO 3166-1 alpha-2 standard
+(jbagwell-akamai)
+- Fix Copy ignoring storage class (vupn0712)
+
+- SFTP
+
+- Add http proxy authentication support (Nicolas Dessart)
+- Eliminate unnecessary heap allocation (Qingwei Li)
+
+
+v1.72.1 - 2025-12-10
+See
+commits
+
+- Bug Fixes
+
+- build: update to go1.25.5 to fix CVE-2025-61729
+- doc fixes (Duncan Smart, Nick Craig-Wood)
+- configfile: Fix piped config support (Jonas Tingeborn)
+- log
+
+- Fix PID not included in JSON log output (Tingsong Xu)
+- Fix backtrace not going to the --log-file (Nick Craig-Wood)
+
+
+- Google Cloud Storage
+
+- Improve endpoint parameter docs (Johannes Rothe)
+
+- S3
+
+- Add missing regions for Selectel provider (Nick Craig-Wood)
+
+
v1.72.0 - 2025-11-21
See
@@ -74214,7 +74434,7 @@ installations
- Project started
Bugs and Limitations
-Limitations
+Limitations
Directory
timestamps aren't preserved on some backends
As of v1.66, rclone supports syncing directory modtimes,
@@ -74424,10 +74644,10 @@ formats
some.domain.com no such host
This happens when rclone cannot resolve a domain. Please check that
your DNS setup is generally working, e.g.
-# both should print a long list of possible IP addresses
-dig www.googleapis.com # resolve using your default DNS
-dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
+# both should print a long list of possible IP addresses
+dig www.googleapis.com # resolve using your default DNS
+dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
If you are using systemd-resolved (default on Arch
Linux), ensure it is at version 233 or higher. Previous releases contain
a bug which causes not all domains to be resolved properly.
@@ -74450,8 +74670,8 @@ yyyy/mm/dd hh:mm:ss Fatal error: config failed to refresh token: failed to start
with opening the port on the host.
A simple solution may be restarting the Host Network Service with e.g.
PowerShell
-
+
The
total size reported in the stats for a sync is wrong and keeps
@@ -74506,6 +74726,38 @@ same Unicode characters are intentionally used in file names, this
replacement strategy leads to unwanted renames. Read more under section
caveats.
+Why
+does rclone fail to connect over TLS but another client works?
+
+If you see TLS handshake failures (or packet captures show the server
+rejecting all offered ciphers), the server/proxy may only support legacy
+TLS cipher suites (for example RSA key-exchange ciphers such as
+RSA_WITH_AES_256_CBC_SHA256, or old 3DES ciphers). Recent
+Go versions (which rclone is built with) have removed insecure
+ciphers from the default list, so rclone may refuse to
+negotiate them even if other tools still do.
+If you can't update/reconfigure the server/proxy to support modern
+TLS (TLS 1.2/1.3) and ECDHE-based cipher suites, you can re-enable legacy
+ciphers via GODEBUG:
+
+Windows (cmd.exe):
+set GODEBUG=tlsrsakex=1
+rclone copy ...
+Windows (PowerShell):
+$env:GODEBUG="tlsrsakex=1"
+rclone copy ...
+Linux/macOS:
+GODEBUG=tlsrsakex=1 rclone copy ...
+
+If the server only supports 3DES, try:
+GODEBUG=tls3des=1 rclone ...
+This applies to any rclone feature using TLS (HTTPS,
+FTPS, WebDAV over TLS, proxies with TLS interception, etc.). Use these
+workarounds only long enough to get the server/proxy updated.
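If a server needs both legacy settings, they can be combined: GODEBUG takes a comma-separated list of key=value pairs, so a single exported variable covers both for every rclone command run in that shell.

```shell
# GODEBUG accepts multiple comma-separated settings; export it once
# and every rclone invocation in this shell inherits it.
export GODEBUG=tlsrsakex=1,tls3des=1
echo "$GODEBUG"   # prints tlsrsakex=1,tls3des=1
```

Then run rclone as usual in the same shell session.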
License
This is free software under the terms of the MIT license (check the
COPYING file included with the source code).
@@ -76757,6 +77009,62 @@ class="email">30904953+jijamik@users.noreply.github.com
class="email">git@dsander.de
- Nikolay Kiryanov nikolay@kiryanov.ru
+- Diana 5275194+DianaNites@users.noreply.github.com
+- Duncan Smart duncan.smart@gmail.com
+- vicerace vicerace@sohu.com
+- Cliff Frey cliff@openai.com
+- Vladislav Tropnikov vtr.name@gmail.com
+- Leo i@hardrain980.com
+- Johannes Rothe mail@johannes-rothe.de
+- Tingsong Xu tingsong.xu@rightcapital.com
+- Jonas Tingeborn 134889+jojje@users.noreply.github.com
+- jhasse-shade jacob@shade.inc
+- vyv03354 VYV03354@nifty.ne.jp
+- masrlinu masrlinu@users.noreply.github.com 5259918+masrlinu@users.noreply.github.com
+- vupn0712 126212736+vupn0712@users.noreply.github.com
+- darkdragon-001 darkdragon-001@users.noreply.github.com
+- sys6101 csvmen@gmail.com
+- Nicolas Dessart nds@outsight.tech
+- Qingwei Li 332664203@qq.com
+- yy yhymmt37@gmail.com
+- Marc-Philip marc-philip.werner@sap.com
+- Mikel Olasagasti Uranga mikel@olasagasti.info
+- Nick Owens mischief@offblast.org
+- hyusap paulayush@gmail.com
+- jzunigax2 125698953+jzunigax2@users.noreply.github.com
+- lullius lullius@users.noreply.github.com
+- StarHack StarHack@users.noreply.github.com
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 06bc59a49..d586ea5de 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Nov 21, 2025
+% Jan 30, 2026
# NAME
@@ -186,6 +186,7 @@ WebDAV or S3, that work out of the box.)
- Akamai Netstorage
- Alibaba Cloud (Aliyun) Object Storage System (OSS)
- Amazon S3
+- Bizfly Cloud Simple Storage
- Backblaze B2
- Box
- Ceph
@@ -198,12 +199,14 @@ WebDAV or S3, that work out of the box.)
- DigitalOcean Spaces
- Digi Storage
- Dreamhost
+- Drime
- Dropbox
- Enterprise File Fabric
- Exaba
- Fastmail Files
- FileLu Cloud Storage
- FileLu S5 (S3-Compatible Object Storage)
+- Filen
- Files.com
- FlashBlade
- FTP
@@ -220,6 +223,7 @@ WebDAV or S3, that work out of the box.)
- iCloud Drive
- ImageKit
- Internet Archive
+- Internxt
- Jottacloud
- IBM COS S3
- IDrive e2
@@ -272,6 +276,7 @@ WebDAV or S3, that work out of the box.)
- Selectel
- Servercore Object Storage
- SFTP
+- Shade
- Sia
- SMB / CIFS
- Spectra Logic
@@ -281,7 +286,6 @@ WebDAV or S3, that work out of the box.)
- SugarSync
- Tencent Cloud Object Storage (COS)
- Uloz.to
-- Uptobox
- Wasabi
- WebDAV
- Yandex Disk
@@ -1039,9 +1043,11 @@ See the following for detailed instructions for
- [Crypt](https://rclone.org/crypt/) - to encrypt other remotes
- [DigitalOcean Spaces](https://rclone.org/s3/#digitalocean-spaces)
- [Digi Storage](https://rclone.org/koofr/#digi-storage)
+- [Drime](https://rclone.org/drime/)
- [Dropbox](https://rclone.org/dropbox/)
- [Enterprise File Fabric](https://rclone.org/filefabric/)
- [FileLu Cloud Storage](https://rclone.org/filelu/)
+- [Filen](https://rclone.org/filen/)
- [Files.com](https://rclone.org/filescom/)
- [FTP](https://rclone.org/ftp/)
- [Gofile](https://rclone.org/gofile/)
@@ -1055,6 +1061,7 @@ See the following for detailed instructions for
- [HTTP](https://rclone.org/http/)
- [iCloud Drive](https://rclone.org/iclouddrive/)
- [Internet Archive](https://rclone.org/internetarchive/)
+- [Internxt](https://rclone.org/internxt/)
- [Jottacloud](https://rclone.org/jottacloud/)
- [Koofr](https://rclone.org/koofr/)
- [Linkbox](https://rclone.org/linkbox/)
@@ -1078,13 +1085,13 @@ See the following for detailed instructions for
- [rsync.net](https://rclone.org/sftp/#rsync-net)
- [Seafile](https://rclone.org/seafile/)
- [SFTP](https://rclone.org/sftp/)
+- [Shade](https://rclone.org/shade/)
- [Sia](https://rclone.org/sia/)
- [SMB](https://rclone.org/smb/)
- [Storj](https://rclone.org/storj/)
- [SugarSync](https://rclone.org/sugarsync/)
- [Union](https://rclone.org/union/)
- [Uloz.to](https://rclone.org/ulozto/)
-- [Uptobox](https://rclone.org/uptobox/)
- [WebDAV](https://rclone.org/webdav/)
- [Yandex Disk](https://rclone.org/yandex/)
- [Zoho WorkDrive](https://rclone.org/zoho/)
@@ -2312,6 +2319,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](https://rclone.org/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -2428,6 +2439,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](https://rclone.org/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -2533,6 +2548,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](https://rclone.org/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -5369,12 +5388,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20251121
+// Output: stories/The Quick Brown Fox!-20260130
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
+// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
```
```console
@@ -6596,6 +6615,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](https://rclone.org/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -6736,7 +6759,7 @@ with the following options:
- If `--files-only` is specified then files will be returned only,
no directories.
-If `--stat` is set then the the output is not an array of items,
+If `--stat` is set then the output is not an array of items,
but instead a single JSON blob will be returned about the item pointed to.
This will return an error if the item isn't found, however on bucket based
backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will
@@ -6779,6 +6802,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](https://rclone.org/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
@@ -6930,7 +6957,7 @@ at all, then 1 PiB is set as both the total and the free size.
## Installing on Windows
To run `rclone mount on Windows`, you will need to
-download and install [WinFsp](http://www.secfs.net/winfsp/).
+download and install [WinFsp](https://winfsp.dev).
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
@@ -8372,7 +8399,7 @@ at all, then 1 PiB is set as both the total and the free size.
## Installing on Windows
To run `rclone nfsmount on Windows`, you will need to
-download and install [WinFsp](http://www.secfs.net/winfsp/).
+download and install [WinFsp](https://winfsp.dev).
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
@@ -9422,7 +9449,7 @@ argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password not including the trailing newline.
```console
-echo "secretpassword" | rclone obscure -
+echo 'secretpassword' | rclone obscure -
```
If there is no data on STDIN to read, rclone obscure will default to
@@ -13738,6 +13765,26 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
`--auth-key` is not provided then `serve s3` will allow anonymous
access.
+Like all rclone flags `--auth-key` can be set via environment
+variables, in this case `RCLONE_AUTH_KEY`. Since this flag can be
+repeated, the input to `RCLONE_AUTH_KEY` is CSV encoded. Because the
+`accessKey,secretKey` has a comma in, this means it needs to be in
+quotes.
+
+```console
+export RCLONE_AUTH_KEY='"user,pass"'
+rclone serve s3 ...
+```
+
+Or to supply multiple identities:
+
+```console
+export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
+rclone serve s3 ...
+```
+
+Setting this variable without quotes will produce an error.
+
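To illustrate why the quoting works (user1/pass1 etc. are placeholder credentials): the shell's outer single quotes deliver the inner double quotes to rclone untouched, and it is rclone's CSV decoding, not the shell, that then splits the value into identities.

```shell
# Outer single quotes are consumed by the shell; the inner double
# quotes survive into the variable's value, where CSV parsing keeps
# each accessKey,secretKey pair together.
export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
echo "$RCLONE_AUTH_KEY"   # prints "user1,pass1","user2,pass2"
```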
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#tls-ssl) for more information.
@@ -16093,6 +16140,7 @@ rclone serve webdav remote:path [flags]
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
+ --disable-zip Disable zip download of directories
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
@@ -22714,7 +22762,7 @@ rclone rc mount/mount fs=mydrive: mountPoint=/home//mountPoint mountType=m
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
```
-The vfsOpt are as described in options/get and can be seen in the the
+The vfsOpt are as described in options/get and can be seen in the
"vfs" section when running and the mountOpt can be seen in the "mount" section:
```console
@@ -23045,6 +23093,40 @@ See the [hashsum](https://rclone.org/commands/rclone_hashsum/) command for more
**Authentication is required for this call.**
+### operations/hashsumfile: Produces a hash for a single file. {#operations-hashsumfile}
+
+Produces a hash for a single file using the hash named.
+
+This takes the following parameters:
+
+- fs - a remote name string e.g. "drive:"
+- remote - a path within that remote e.g. "file.txt"
+- hashType - type of hash to be used
+- download - check by downloading rather than with hash (boolean)
+- base64 - output the hashes in base64 rather than hex (boolean)
+
+If you supply the download flag, it will download the data from the
+remote and create the hash on the fly. This can be useful for remotes
+that don't support the given hash or if you really want to read all
+the data.
+
+Returns:
+
+- hash - hash for the file
+- hashType - type of hash used
+
+Example:
+
+ $ rclone rc --loopback operations/hashsumfile fs=/ remote=/bin/bash hashType=MD5 download=true base64=true
+ {
+ "hashType": "md5",
+ "hash": "MDMw-fG2YXs7Uz5Nz-H68A=="
+ }
+
+See the [hashsum](https://rclone.org/commands/rclone_hashsum/) command for more information on the above.
+
+**Authentication is required for this call.**
+
### operations/list: List the given remote and path in JSON format {#operations-list}
This takes the following parameters:
@@ -24041,103 +24123,7 @@ show through.
Here is an overview of the major features of each cloud storage system.
-| Name | Hash | ModTime | Case Insensitive | Duplicate Files | MIME Type | Metadata |
-| ---------------------------- |:-----------------:|:-------:|:----------------:|:---------------:|:---------:|:--------:|
-| 1Fichier | Whirlpool | - | No | Yes | R | - |
-| Akamai Netstorage | MD5, SHA256 | R/W | No | No | R | - |
-| Amazon S3 (or S3 compatible) | MD5 | R/W | No | No | R/W | RWU |
-| Backblaze B2 | SHA1 | R/W | No | No | R/W | - |
-| Box | SHA1 | R/W | Yes | No | - | - |
-| Citrix ShareFile | MD5 | R/W | Yes | No | - | - |
-| Cloudinary | MD5 | R | No | Yes | - | - |
-| Dropbox | DBHASH ¹ | R | Yes | No | - | - |
-| Enterprise File Fabric | - | R/W | Yes | No | R/W | - |
-| FileLu Cloud Storage | MD5 | R/W | No | Yes | R | - |
-| Files.com | MD5, CRC32 | DR/W | Yes | No | R | - |
-| FTP | - | R/W ¹⁰ | No | No | - | - |
-| Gofile | MD5 | DR/W | No | Yes | R | - |
-| Google Cloud Storage | MD5 | R/W | No | No | R/W | - |
-| Google Drive | MD5, SHA1, SHA256 | DR/W | No | Yes | R/W | DRWU |
-| Google Photos | - | - | No | Yes | R | - |
-| HDFS | - | R/W | No | No | - | - |
-| HiDrive | HiDrive ¹² | R/W | No | No | - | - |
-| HTTP | - | R | No | No | R | R |
-| iCloud Drive | - | R | No | No | - | - |
-| Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU |
-| Jottacloud | MD5 | R/W | Yes | No | R | RW |
-| Koofr | MD5 | - | Yes | No | - | - |
-| Linkbox | - | R | No | No | - | - |
-| Mail.ru Cloud | Mailru ⁶ | R/W | Yes | No | - | - |
-| Mega | - | - | No | Yes | - | - |
-| Memory | MD5 | R/W | No | No | - | - |
-| Microsoft Azure Blob Storage | MD5 | R/W | No | No | R/W | - |
-| Microsoft Azure Files Storage | MD5 | R/W | Yes | No | R/W | - |
-| Microsoft OneDrive | QuickXorHash ⁵ | DR/W | Yes | No | R | DRW |
-| OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
-| OpenStack Swift | MD5 | R/W | No | No | R/W | - |
-| Oracle Object Storage | MD5 | R/W | No | No | R/W | RU |
-| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
-| PikPak | MD5 | R | No | No | R | - |
-| Pixeldrain | SHA256 | R/W | No | No | R | RW |
-| premiumize.me | - | - | Yes | No | R | - |
-| put.io | CRC-32 | R/W | No | Yes | R | - |
-| Proton Drive | SHA1 | R/W | No | No | R | - |
-| QingStor | MD5 | - ⁹ | No | No | R/W | - |
-| Quatrix by Maytech | - | R/W | No | No | - | - |
-| Seafile | - | - | No | No | - | - |
-| SFTP | MD5, SHA1 ² | DR/W | Depends | No | - | - |
-| Sia | - | - | No | No | - | - |
-| SMB | - | R/W | Yes | No | - | - |
-| SugarSync | - | - | No | No | - | - |
-| Storj | - | R | No | No | - | - |
-| Uloz.to | MD5, SHA256 ¹³ | - | No | Yes | - | - |
-| Uptobox | - | - | No | Yes | - | - |
-| WebDAV | MD5, SHA1 ³ | R ⁴ | Depends | No | - | - |
-| Yandex Disk | MD5 | R/W | No | No | R | - |
-| Zoho WorkDrive | - | - | No | No | - | - |
-| The local filesystem | All | DR/W | Depends | No | - | DRWU |
-¹ Dropbox supports [its own custom
-hash](https://www.dropbox.com/developers/reference/content-hash).
-This is an SHA256 sum of all the 4 MiB block SHA256s.
-
-² SFTP supports checksums if the same login has shell access and
-`md5sum` or `sha1sum` as well as `echo` are in the remote's PATH.
-
-³ WebDAV supports hashes when used with Fastmail Files, Owncloud and Nextcloud only.
-
-⁴ WebDAV supports modtimes when used with Fastmail Files, Owncloud and Nextcloud
-only.
-
-⁵ [QuickXorHash](https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash)
-is Microsoft's own hash.
-
-⁶ Mail.ru uses its own modified SHA1 hash
-
-⁷ pCloud only supports SHA1 (not MD5) in its EU region
-
-⁸ Opendrive does not support creation of duplicate files using
-their web client interface or other stock clients, but the underlying
-storage platform has been determined to allow duplicate files, and it
-is possible to create them with `rclone`. It may be that this is a
-mistake or an unsupported feature.
-
-⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
-
-¹⁰ FTP supports modtimes for the major FTP servers, and also others
-if they advertised required protocol extensions. See [this](https://rclone.org/ftp/#modification-times)
-for more details.
-
-¹¹ Internet Archive requires option `wait_archive` to be set to a non-zero value
-for full modtime support.
-
-¹² HiDrive supports [its own custom
-hash](https://static.hidrive.com/dev/0001).
-It combines SHA1 sums for each 4 KiB block hierarchically to a single
-top-level sum.
-
-¹³ Uloz.to provides server-calculated MD5 hash upon file upload. MD5 and SHA256
-hashes are client-calculated and stored as metadata fields.
### Hash
@@ -24533,73 +24519,7 @@ See [the metadata docs](https://rclone.org/docs/#metadata) for more info.
All rclone remotes support a base command set. Other features depend
upon backend-specific capabilities.
-| Name | Purge | Copy | Move | DirMove | CleanUp | ListR | StreamUpload | MultithreadUpload | LinkSharing | About | EmptyDir |
-| ---------------------------- |:-----:|:----:|:----:|:-------:|:-------:|:-----:|:------------:|:------------------|:------------:|:-----:|:--------:|
-| 1Fichier | No | Yes | Yes | No | No | No | No | No | Yes | No | Yes |
-| Akamai Netstorage | Yes | No | No | No | No | Yes | Yes | No | No | No | Yes |
-| Amazon S3 (or S3 compatible) | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
-| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
-| Box | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
-| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
-| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
-| Cloudinary | No | No | No | No | No | No | Yes | No | No | No | No |
-| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
-| Files.com | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
-| FTP | No | No | Yes | Yes | No | No | Yes | No | No | No | Yes |
-| Gofile | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
-| Google Cloud Storage | Yes | Yes | No | No | No | No | Yes | No | No | No | No |
-| Google Drive | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
-| Google Photos | No | No | No | No | No | No | No | No | No | No | No |
-| HDFS | Yes | No | Yes | Yes | No | No | Yes | No | No | Yes | Yes |
-| HiDrive | Yes | Yes | Yes | Yes | No | No | Yes | No | No | No | Yes |
-| HTTP | No | No | No | No | No | No | No | No | No | No | Yes |
-| iCloud Drive | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
-| ImageKit | Yes | No | Yes | No | No | No | No | No | No | No | Yes |
-| Internet Archive | No | Yes | No | No | Yes | Yes | No | No | Yes | Yes | No |
-| Jottacloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
-| Koofr | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
-| Mail.ru Cloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| Mega | Yes | No | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| Memory | No | Yes | No | No | No | Yes | Yes | No | No | No | No |
-| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | Yes | Yes | No | No | No |
-| Microsoft Azure Files Storage | No | Yes | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
-| Microsoft OneDrive | Yes | Yes | Yes | Yes | Yes | Yes ⁵ | No | No | Yes | Yes | Yes |
-| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
-| OpenStack Swift | Yes ¹ | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
-| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No |
-| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
-| Pixeldrain | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
-| premiumize.me | Yes | No | Yes | Yes | No | No | No | No | Yes | Yes | Yes |
-| put.io | Yes | No | Yes | Yes | Yes | No | Yes | No | No | Yes | Yes |
-| Proton Drive | Yes | No | Yes | Yes | Yes | No | No | No | No | Yes | Yes |
-| QingStor | No | Yes | No | No | Yes | Yes | No | No | No | No | No |
-| Quatrix by Maytech | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
-| Seafile | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes | Yes |
-| SFTP | No | Yes ⁴| Yes | Yes | No | No | Yes | No | No | Yes | Yes |
-| Sia | No | No | No | No | No | No | Yes | No | No | No | Yes |
-| SMB | No | No | Yes | Yes | No | No | Yes | Yes | No | No | Yes |
-| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
-| Storj | Yes ² | Yes | Yes | No | No | Yes | Yes | No | Yes | No | No |
-| Uloz.to | No | No | Yes | Yes | No | No | No | No | No | No | Yes |
-| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | No |
-| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ³ | No | No | Yes | Yes |
-| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
-| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
-| The local filesystem | No | No | Yes | Yes | No | No | Yes | Yes | No | Yes | Yes |
-¹ Note Swift implements this in order to delete directory markers but
-it doesn't actually have a quicker way of deleting files other than
-deleting them individually.
-
-² Storj implements this efficiently only for entire buckets. If
-purging a directory inside a bucket, files are deleted individually.
-
-³ StreamUpload is not supported with Nextcloud
-
-⁴ Use the `--sftp-copy-is-hardlink` flag to enable.
-
-⁵ Use the `--onedrive-delta` flag to enable.
### Purge
@@ -24685,6 +24605,60 @@ See [rclone about command](https://rclone.org/commands/rclone_about/)
The remote supports empty directories. See [Limitations](https://rclone.org/bugs/#limitations)
for details. Most Object/Bucket-based remotes do not support this.
+# Tiers
+
+Rclone backends are divided into tiers to give users an idea of the stability of each backend.
+
+| Tier | Label | Intended meaning |
+|--------|---------------|------------------|
+| 1 | Core | Production-grade, first-class |
+| 2 | Stable | Well-supported, minor gaps |
+| 3 | Supported | Works for many uses; known caveats |
+| 4 | Experimental | Use with care; expect gaps/changes |
+| 5 | Deprecated | No longer maintained or supported |
+
+## Overview
+
+Here is a summary of all backends:
+
+
+
+## Scoring
+
+Here is how the backends are scored.
+
+### Features
+
+These are useful optional features a backend should have, in rough
+order of importance. Each feature a backend supports scores one point
+in the Features column.
+
+- F1: Hash(es)
+- F2: Modtime
+- F3: Stream upload
+- F4: Copy/Move
+- F5: DirMove
+- F6: Metadata
+- F7: MultipartUpload
+
+
+### Tier
+
+The tier is decided after determining these attributes. Some
+discretion is allowed in tiering as some of these attributes are more
+important than others.
+
+| Attr | T1: Core | T2: Stable | T3: Supported | T4: Experimental | T5: Incubator |
+|------|----------|------------|---------------|------------------|---------------|
+| Maintainers | >=2 | >=1 | >=1 | >=0 | >=0 |
+| API source | Official | Official | Either | Either | Either |
+| Features (F1-F7) | >=5/7 | >=4/7 | >=3/7 | >=2/7 | N/A |
+| Integration tests | All green | All green | Nearly all green | Some flaky | N/A |
+| Error handling | Pacer | Pacer | Retries | Retries | N/A |
+| Data integrity | Hashes, alt, modtime | Hashes or alt | Hash or modtime | Best-effort | N/A |
+| Perf baseline | Bench within 2x S3 | Bench doc | Anecdotal OK | Optional | N/A |
+| Adoption | Widely used | Often used | Some use | N/A | N/A |
+| Docs completeness | Full | Full | Basic | Minimal | Minimal |
+| Security | Principle-of-least-privilege | Reasonable scopes | Basic auth | Works | Works |
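As an illustration only (not part of rclone), the "Features (F1-F7)" row of the table above can be sketched as a small scoring helper; the function names and feature labels here are assumptions, and the other attributes (maintainers, tests, and so on) also constrain the final tier:

```python
# Illustrative sketch: map a backend's supported features (F1-F7) to the
# highest tier its feature count alone would permit, following the
# "Features (F1-F7)" row of the scoring table.

FEATURES = ["Hash", "Modtime", "StreamUpload", "CopyMove",
            "DirMove", "Metadata", "MultipartUpload"]

def feature_score(supported: set[str]) -> int:
    """Count how many of the F1-F7 features a backend supports."""
    return sum(1 for f in FEATURES if f in supported)

def max_tier_by_features(supported: set[str]) -> str:
    """Best tier permitted by the feature count alone."""
    score = feature_score(supported)
    if score >= 5:
        return "T1: Core"
    if score >= 4:
        return "T2: Stable"
    if score >= 3:
        return "T3: Supported"
    if score >= 2:
        return "T4: Experimental"
    return "T5: Incubator"

print(max_tier_by_features({"Hash", "Modtime", "StreamUpload",
                            "CopyMove", "DirMove"}))  # T1: Core (5/7)
```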
+
# Global Flags
This describes the global flags available to every rclone command
@@ -24802,7 +24776,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
```
@@ -25033,6 +25007,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-connection-string string Storage Connection String
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
@@ -25069,7 +25044,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-id string The ID of the client in use
--azurefiles-client-secret string One of the service principal's client secrets
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
- --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-connection-string string Storage Connection String
--azurefiles-description string Description of the remote
--azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
@@ -25081,12 +25056,13 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azurefiles-password string The user's password (obscured)
- --azurefiles-sas-url string SAS URL
+ --azurefiles-sas-url string SAS URL for container level access only
--azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
--azurefiles-use-az Use Azure CLI tool az for authentication
+ --azurefiles-use-emulator Uses local storage emulator if provided as 'true'
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -25188,6 +25164,16 @@ Backend-only flags (these can be set in the config file also).
--doi-doi string The DOI or the doi.org URL
--doi-doi-resolver-api-url string The URL of the DOI resolver API to use
--doi-provider string DOI provider
+ --drime-access-token string API Access token
+ --drime-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --drime-description string Description of the remote
+ --drime-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --drime-hard-delete Delete files permanently rather than putting them into the trash
+ --drime-list-chunk int Number of items to list in each call (default 1000)
+ --drime-root-folder-id string ID of the root folder
+ --drime-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
+ --drime-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --drime-workspace-id string Account ID
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -25208,6 +25194,7 @@ Backend-only flags (these can be set in the config file also).
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-enforce-expansive-access Whether the request should enforce expansive access rules
--drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
--drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
--drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
@@ -25276,6 +25263,17 @@ Backend-only flags (these can be set in the config file also).
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
+ --filen-api-key string API Key for your Filen account (obscured)
+ --filen-auth-version string Authentication Version (internal use only)
+ --filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
+ --filen-description string Description of the remote
+ --filen-email string Email of your Filen account
+ --filen-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filen-master-keys string Master Keys (internal use only)
+ --filen-password string Password of your Filen account (obscured)
+ --filen-private-key string Private RSA Key (internal use only)
+ --filen-public-key string Public RSA Key (internal use only)
+ --filen-upload-concurrency int Concurrency for chunked uploads (default 16)
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -25319,7 +25317,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
- --gcs-endpoint string Endpoint for the service
+ --gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -25408,6 +25406,11 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
+ --internxt-description string Description of the remote
+ --internxt-email string Email of your Internxt account
+ --internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
+ --internxt-pass string Password (obscured)
+ --internxt-skip-hash-validation Skip hash validation when downloading files (default true)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
@@ -25470,6 +25473,7 @@ Backend-only flags (these can be set in the config file also).
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--memory-description string Description of the remote
+ --memory-discard If set all writes will be discarded and reads will return an error
--netstorage-account string Set the NetStorage account name
--netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -25645,6 +25649,10 @@ Backend-only flags (these can be set in the config file also).
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
+ --s3-role-arn string ARN of the IAM role to assume
+ --s3-role-external-id string External ID for assumed role
+ --s3-role-session-duration string Session duration for assumed role
+ --s3-role-session-name string Session name for assumed role
--s3-sdk-log-mode Bits Set to debug the SDK (default Off)
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
@@ -25728,6 +25736,16 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default "$USER")
--sftp-xxh128sum-command string The command used to read XXH128 hashes
--sftp-xxh3sum-command string The command used to read XXH3 hashes
+ --shade-api-key string An API key for your account
+ --shade-chunk-size SizeSuffix Chunk size to use for uploading (default 64Mi)
+ --shade-description string Description of the remote
+ --shade-drive-id string The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive
+ --shade-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --shade-endpoint string Endpoint for the service
+ --shade-max-upload-parts int Maximum amount of parts in a multipart upload (default 10000)
+ --shade-token string JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically
+ --shade-token-expiry string JWT Token Expiration time. Don't set this value - rclone will set it automatically
+ --shade-upload-concurrency int Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies (default 4)
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-credentials Use client credentials OAuth flow
@@ -25819,10 +25837,6 @@ Backend-only flags (these can be set in the config file also).
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
- --uptobox-access-token string Your access token
- --uptobox-description string Description of the remote
- --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
- --uptobox-private Set to make uploaded files private
--webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
@@ -27514,7 +27528,11 @@ The following backends have known issues that need more investigation:
- `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
-- Updated: 2025-11-21-010037
+- `TestSeafile` (`seafile`)
+ - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
+- `TestSeafileV6` (`seafile`)
+ - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
+- Updated: 2026-01-30-010015
The following backends either have not been tested recently or have known issues
@@ -27523,6 +27541,7 @@ that are deemed unfixable for the time being:
- `TestArchive` (`archive`)
- `TestCache` (`cache`)
+- `TestDrime` (`drime`)
- `TestFileLu` (`filelu`)
- `TestFilesCom` (`filescom`)
- `TestImageKit` (`imagekit`)
@@ -28978,6 +28997,7 @@ The S3 backend can be used with a number of different providers:
- China Mobile Ecloud Elastic Object Storage (EOS)
- Cloudflare R2
- Arvan Cloud Object Storage (AOS)
+- Bizfly Cloud Simple Storage
- Cubbit DS3
- DigitalOcean Spaces
- Dreamhost
If none of these options actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see the
[anonymous access](#anonymous-access) section for more info).
+#### Assume Role (Cross-Account Access)
+
+If you need to access S3 resources in a different AWS account, you can use IAM role assumption.
+This is useful when you hold credentials in one account but need to
+access resources in another.
+
+To use assume role, configure the following parameters:
+
+- `role_arn` - The ARN (Amazon Resource Name) of the IAM role to assume in the target account.
+ Format: `arn:aws:iam::ACCOUNT-ID:role/ROLE-NAME`
+- `role_session_name` (optional) - A name for the assumed role session. If not specified,
+ rclone will generate one automatically.
+- `role_session_duration` (optional) - Duration for which the assumed role credentials are valid.
+  If not specified, the AWS default duration is used (typically 1 hour).
+- `role_external_id` (optional) - An external ID required by the role's trust policy for additional security.
+ This is typically used when the role is accessed by a third party.
+
+The assume role feature works with both direct credentials (`env_auth = false`) and environment-based
+authentication (`env_auth = true`). Rclone will first authenticate using the base credentials, then
+use those credentials to assume the specified role.
+
+Example configuration for cross-account access:
+
+```
+[s3-cross-account]
+type = s3
+provider = AWS
+env_auth = true
+region = us-east-1
+role_arn = arn:aws:iam::123456789012:role/CrossAccountS3Role
+role_session_name = rclone-session
+role_external_id = unique-role-external-id-12345
+```
+
+In this example:
+- Base credentials are obtained from the environment (IAM role, credentials file, or environment variables)
+- These credentials are then used to assume the role `CrossAccountS3Role` in account `123456789012`
+- An external ID is provided for additional security as required by the role's trust policy
+
+The target role's trust policy in the destination account must allow the source account or user to assume it.
+Example trust policy:
+
+```json
+{
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:root"
+ },
+ "Action": "sts:AssumeRole",
+ "Condition": {
+ "StringEquals": {
+ "sts:ExternalID": "unique-role-external-id-12345"
+ }
+ }
+ }
+ ]
+}
+```
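A quick sanity check of the `role_arn` format described above can be sketched as follows; this is a hypothetical helper for illustration, not something rclone provides:

```python
import re

# Matches the role ARN format described above:
#   arn:aws:iam::ACCOUNT-ID:role/ROLE-NAME
# AWS account IDs are 12 digits; role names may include a path prefix.
ROLE_ARN_RE = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@/-]+$")

def is_valid_role_arn(arn: str) -> bool:
    """Return True if the string looks like a well-formed IAM role ARN."""
    return ROLE_ARN_RE.match(arn) is not None

print(is_valid_role_arn("arn:aws:iam::123456789012:role/CrossAccountS3Role"))  # True
print(is_valid_role_arn("arn:aws:iam::1234:role/TooShortAccount"))             # False
```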
+
### S3 Permissions
When using the `sync` subcommand of `rclone` the following minimum
@@ -29803,7 +29885,7 @@ all the files to be uploaded as multipart.
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
#### --s3-provider
@@ -29822,6 +29904,8 @@ Properties:
- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "ArvanCloud"
- Arvan Cloud Object Storage (AOS)
+ - "BizflyCloud"
+ - Bizfly Cloud Simple Storage
- "Ceph"
- Ceph Object Storage
- "ChinaMobile"
@@ -29963,7 +30047,7 @@ Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
-- Provider: AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
+- Provider: AWS,BizflyCloud,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -30072,6 +30156,12 @@ Properties:
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
- Provider: AWS
+ - "hn"
+ - Ha Noi
+ - Provider: BizflyCloud
+ - "hcm"
+ - Ho Chi Minh
+ - Provider: BizflyCloud
- ""
- Use this if unsure.
- Will use v4 signatures and an empty region.
@@ -30343,12 +30433,21 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- - "gis-1"
- - Moscow
- - Provider: Servercore
+ - "ru-3"
+ - St. Petersburg
+ - Provider: Selectel
- "ru-7"
- Moscow
- - Provider: Servercore
+ - Provider: Selectel,Servercore
+ - "gis-1"
+ - Moscow
+ - Provider: Selectel,Servercore
+ - "kz-1"
+ - Kazakhstan
+ - Provider: Selectel
+ - "uz-2"
+ - Uzbekistan
+ - Provider: Selectel
- "uz-2"
- Tashkent, Uzbekistan
- Provider: Servercore
@@ -30384,7 +30483,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
-- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -30470,6 +30569,12 @@ Properties:
- "s3.ir-tbz-sh1.arvanstorage.ir"
- Tabriz Iran (Shahriar)
- Provider: ArvanCloud
+ - "hn.ss.bfcplatform.vn"
+ - Hanoi endpoint
+ - Provider: BizflyCloud
+ - "hcm.ss.bfcplatform.vn"
+ - Ho Chi Minh endpoint
+ - Provider: BizflyCloud
- "eos-wuxi-1.cmecloud.cn"
- The default endpoint - a good choice if you are unsure.
- East China (Suzhou)
@@ -30876,67 +30981,67 @@ Properties:
- Iran
- Provider: Liara
- "nl-ams-1.linodeobjects.com"
- - Amsterdam (Netherlands), nl-ams-1
+ - Amsterdam, NL (nl-ams-1)
- Provider: Linode
- "us-southeast-1.linodeobjects.com"
- - Atlanta, GA (USA), us-southeast-1
+ - Atlanta, GA, US (us-southeast-1)
- Provider: Linode
- "in-maa-1.linodeobjects.com"
- - Chennai (India), in-maa-1
+ - Chennai, IN (in-maa-1)
- Provider: Linode
- "us-ord-1.linodeobjects.com"
- - Chicago, IL (USA), us-ord-1
+ - Chicago, IL, US (us-ord-1)
- Provider: Linode
- "eu-central-1.linodeobjects.com"
- - Frankfurt (Germany), eu-central-1
+ - Frankfurt, DE (eu-central-1)
- Provider: Linode
- "id-cgk-1.linodeobjects.com"
- - Jakarta (Indonesia), id-cgk-1
+ - Jakarta, ID (id-cgk-1)
- Provider: Linode
- "gb-lon-1.linodeobjects.com"
- - London 2 (Great Britain), gb-lon-1
+ - London 2, UK (gb-lon-1)
- Provider: Linode
- "us-lax-1.linodeobjects.com"
- - Los Angeles, CA (USA), us-lax-1
+ - Los Angeles, CA, US (us-lax-1)
- Provider: Linode
- "es-mad-1.linodeobjects.com"
- - Madrid (Spain), es-mad-1
- - Provider: Linode
- - "au-mel-1.linodeobjects.com"
- - Melbourne (Australia), au-mel-1
+ - Madrid, ES (es-mad-1)
- Provider: Linode
- "us-mia-1.linodeobjects.com"
- - Miami, FL (USA), us-mia-1
+ - Miami, FL, US (us-mia-1)
- Provider: Linode
- "it-mil-1.linodeobjects.com"
- - Milan (Italy), it-mil-1
+ - Milan, IT (it-mil-1)
- Provider: Linode
- "us-east-1.linodeobjects.com"
- - Newark, NJ (USA), us-east-1
+ - Newark, NJ, US (us-east-1)
- Provider: Linode
- "jp-osa-1.linodeobjects.com"
- - Osaka (Japan), jp-osa-1
+ - Osaka, JP (jp-osa-1)
- Provider: Linode
- "fr-par-1.linodeobjects.com"
- - Paris (France), fr-par-1
+ - Paris, FR (fr-par-1)
- Provider: Linode
- "br-gru-1.linodeobjects.com"
- - São Paulo (Brazil), br-gru-1
+ - Sao Paulo, BR (br-gru-1)
- Provider: Linode
- "us-sea-1.linodeobjects.com"
- - Seattle, WA (USA), us-sea-1
+ - Seattle, WA, US (us-sea-1)
- Provider: Linode
- "ap-south-1.linodeobjects.com"
- - Singapore, ap-south-1
+ - Singapore, SG (ap-south-1)
- Provider: Linode
- "sg-sin-1.linodeobjects.com"
- - Singapore 2, sg-sin-1
+ - Singapore 2, SG (sg-sin-1)
- Provider: Linode
- "se-sto-1.linodeobjects.com"
- - Stockholm (Sweden), se-sto-1
+ - Stockholm, SE (se-sto-1)
- Provider: Linode
- - "us-iad-1.linodeobjects.com"
- - Washington, DC, (USA), us-iad-1
+ - "jp-tyo-1.linodeobjects.com"
+ - Tokyo 3, JP (jp-tyo-1)
+ - Provider: Linode
+ - "us-iad-10.linodeobjects.com"
+ - Washington, DC, US (us-iad-10)
- Provider: Linode
- "s3.us-west-1.{account_name}.lyve.seagate.com"
- US West 1 - California
@@ -31140,13 +31245,25 @@ Properties:
- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- - Saint Petersburg
+ - St. Petersburg
+ - Provider: Selectel
+ - "s3.ru-3.storage.selcloud.ru"
+ - St. Petersburg
+ - Provider: Selectel
+ - "s3.ru-7.storage.selcloud.ru"
+ - Moscow
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- - Provider: Servercore
- - "s3.ru-7.storage.selcloud.ru"
- - Moscow
+ - Provider: Selectel,Servercore
+ - "s3.kz-1.storage.selcloud.ru"
+ - Kazakhstan
+ - Provider: Selectel
+ - "s3.uz-2.storage.selcloud.ru"
+ - Uzbekistan
+ - Provider: Selectel
+ - "s3.ru-1.storage.selcloud.ru"
+ - Saint Petersburg
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan
@@ -31672,36 +31789,36 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
- "public-read"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "public-read-write"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "authenticated-read"
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-read"
- Object owner gets FULL_CONTROL.
- Bucket owner gets READ access.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-full-control"
- Both the object owner and the bucket owner get FULL_CONTROL over the object.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
@@ -31866,7 +31983,7 @@ Properties:
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
#### --s3-bucket-acl
@@ -31885,7 +32002,7 @@ Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
-- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -32139,6 +32256,58 @@ Properties:
- Type: string
- Required: false
+#### --s3-role-arn
+
+ARN of the IAM role to assume.
+
+Leave blank if not using assume role.
+
+Properties:
+
+- Config: role_arn
+- Env Var: RCLONE_S3_ROLE_ARN
+- Type: string
+- Required: false
+
+#### --s3-role-session-name
+
+Session name for assumed role.
+
+If empty, a session name will be generated automatically.
+
+Properties:
+
+- Config: role_session_name
+- Env Var: RCLONE_S3_ROLE_SESSION_NAME
+- Type: string
+- Required: false
+
+#### --s3-role-session-duration
+
+Session duration for assumed role.
+
+If empty, the default session duration will be used.
+
+Properties:
+
+- Config: role_session_duration
+- Env Var: RCLONE_S3_ROLE_SESSION_DURATION
+- Type: string
+- Required: false
+
+#### --s3-role-external-id
+
+External ID for assumed role.
+
+Leave blank if not using an external ID.
+
+Properties:
+
+- Config: role_external_id
+- Env Var: RCLONE_S3_ROLE_EXTERNAL_ID
+- Type: string
+- Required: false
+
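+As a sketch, a remote configured to assume a role might look like this
+(the ARN, session name and external ID below are placeholders, not real
+values):
+
+```ini
+[s3-assume]
+type = s3
+provider = AWS
+region = us-east-1
+role_arn = arn:aws:iam::123456789012:role/rclone-role
+role_session_name = rclone-session
+role_external_id = example-external-id
+```
+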
#### --s3-upload-concurrency
Concurrency for multipart uploads and copies.
@@ -33434,6 +33603,36 @@ server_side_encryption =
storage_class =
```
+### BizflyCloud {#bizflycloud}
+
+[Bizfly Cloud Simple Storage](https://bizflycloud.vn/simple-storage) is an
+S3-compatible service with regions in Hanoi (HN) and Ho Chi Minh City (HCM).
+
+Use the endpoint for your region:
+
+- HN: `hn.ss.bfcplatform.vn`
+- HCM: `hcm.ss.bfcplatform.vn`
+
+A minimal configuration looks like this:
+
+```ini
+[bizfly]
+type = s3
+provider = BizflyCloud
+env_auth = false
+access_key_id = YOUR_ACCESS_KEY
+secret_access_key = YOUR_SECRET_KEY
+region = HN
+endpoint = hn.ss.bfcplatform.vn
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
+```
+
+Switch `region` and `endpoint` to `HCM` and `hcm.ss.bfcplatform.vn` for Ho Chi
+Minh City.
+
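+Once configured, the remote behaves like any other S3 remote, e.g.
+(replace `my-bucket` with your own bucket name):
+
+```console
+rclone mkdir bizfly:my-bucket
+rclone copy /home/source bizfly:my-bucket/backup
+```
+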
### Ceph
[Ceph](https://ceph.com/) is an open-source, unified, distributed
@@ -38626,7 +38825,7 @@ It is useful to know how many requests are sent to the server in different scena
All copy commands send the following 4 requests:
```text
-/b2api/v1/b2_authorize_account
+/b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
@@ -39792,6 +39991,9 @@ Leave others unchecked. Click `Save Changes` at the top right.
The `cache` remote wraps another existing remote and stores file structure
and its data for long running tasks like `rclone mount`.
+It is **deprecated**, so it is not recommended for new installations,
+and may be removed at some point.
+
## Status
The cache backend code is working but it currently doesn't
@@ -42929,7 +43131,7 @@ Properties:
The URL of the DOI resolver API to use.
-The DOI resolver can be set for testing or for cases when the the canonical DOI resolver API cannot be used.
+The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
Defaults to "https://doi.org/api".
@@ -43016,6 +43218,319 @@ It doesn't return anything.
+# Drime
+
+[Drime](https://drime.cloud/) is a cloud storage and transfer service focused
+on fast, resilient file delivery. It offers both free and paid tiers with
+emphasis on high-speed uploads and link sharing.
+
+To set up Drime you need to log in, navigate to Settings > Developer, and
+create a token to use as an API access key. Give it a sensible name and copy
+the token for use in the config.
+
+## Configuration
+
+Here is a run through of `rclone config` to make a remote called `remote`.
+
+Firstly run:
+
+
+```console
+rclone config
+```
+
+Then follow through the interactive setup:
+
+
+```text
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / Drime
+ \ (drime)
+Storage> drime
+
+Option access_token.
+API Access token
+You can get this from the web control panel.
+Enter a value. Press Enter to leave empty.
+access_token> YOUR_API_ACCESS_TOKEN
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: drime
+- access_token: YOUR_API_ACCESS_TOKEN
+Keep this "remote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+Once configured you can then use `rclone` like this (replace `remote` with the
+name you gave your remote):
+
+List directories and files in the top level of your Drime
+
+```console
+rclone lsf remote:
+```
+
+To copy a local directory to a Drime directory called backup
+
+```console
+rclone copy /home/source remote:backup
+```
+
+
+### Modification times and hashes
+
+Drime does not support modification times or hashes.
+
+This means that by default syncs will only use the size of the file to determine
+if it needs updating.
+
+You can use the `--update` flag, which will use the time the object was
+uploaded. For many operations this is sufficient to determine if it has
+changed. However, files created with timestamps in the past will be missed
+by the sync if using `--update`.
+
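+For example, to sync relying on upload times rather than size alone:
+
+```console
+rclone sync --update /home/source remote:backup
+```
+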
+
+### Restricted filename characters
+
+In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
+the following characters are also replaced:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| \ | 0x5C | ＼ |
+
+File names also cannot start or end with the following characters. These are
+only replaced if they are the first or last character in the name:
+
+| Character | Value | Replacement |
+| --------- |:-----:|:-----------:|
+| SP | 0x20 | ␠ |
+
+Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
+as they can't be used in JSON strings.
+
+### Root folder ID
+
+You can set the `root_folder_id` for rclone. This is the directory
+(identified by its `Folder ID`) that rclone considers to be the root
+of your Drime drive.
+
+Normally you will leave this blank and rclone will determine the
+correct root to use itself and fill in the value in the config file.
+
+However you can set this to restrict rclone to a specific folder
+hierarchy.
+
+In order to do this you will have to find the `Folder ID` of the
+directory you wish rclone to display.
+
+You can do this with rclone
+
+```console
+$ rclone lsf -Fip --dirs-only remote:
+d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/
+f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/
+d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
+```
+
+The ID to use is the part before the `;` so you could set
+
+```text
+root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
+```
+
+To restrict rclone to the `Files` directory.
+
+
+### Standard options
+
+Here are the Standard options specific to drime (Drime).
+
+#### --drime-access-token
+
+API Access token
+
+You can get this from the web control panel.
+
+Properties:
+
+- Config: access_token
+- Env Var: RCLONE_DRIME_ACCESS_TOKEN
+- Type: string
+- Required: false
+
+### Advanced options
+
+Here are the Advanced options specific to drime (Drime).
+
+#### --drime-root-folder-id
+
+ID of the root folder
+
+Leave this blank normally, rclone will fill it in automatically.
+
+If you want rclone to be restricted to a particular folder you can
+fill it in - see the docs for more info.
+
+
+Properties:
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIME_ROOT_FOLDER_ID
+- Type: string
+- Required: false
+
+#### --drime-workspace-id
+
+Account ID
+
+Leave this blank normally unless you wish to specify a Workspace ID.
+
+
+Properties:
+
+- Config: workspace_id
+- Env Var: RCLONE_DRIME_WORKSPACE_ID
+- Type: string
+- Required: false
+
+#### --drime-list-chunk
+
+Number of items to list in each call
+
+Properties:
+
+- Config: list_chunk
+- Env Var: RCLONE_DRIME_LIST_CHUNK
+- Type: int
+- Default: 1000
+
+#### --drime-hard-delete
+
+Delete files permanently rather than putting them into the trash.
+
+Properties:
+
+- Config: hard_delete
+- Env Var: RCLONE_DRIME_HARD_DELETE
+- Type: bool
+- Default: false
+
+#### --drime-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5 GiB.
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_DRIME_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200Mi
+
+#### --drime-chunk-size
+
+Chunk size to use for uploading.
+
+When uploading files larger than upload_cutoff or files with unknown
+size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
+photos or google docs) they will be uploaded as multipart uploads
+using this chunk size.
+
+Note that "--drime-upload-concurrency" chunks of this size are buffered
+in memory per transfer.
+
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+
+Rclone will automatically increase the chunk size when uploading a
+large file of known size to stay below the 10,000 chunks limit.
+
+Files of unknown size are uploaded with the configured
+chunk_size. Since the default chunk size is 5 MiB and there can be at
+most 10,000 chunks, this means that by default the maximum size of
+a file you can stream upload is 48 GiB. If you wish to stream upload
+larger files then you will need to increase chunk_size.
+
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_DRIME_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5Mi
+
+#### --drime-upload-concurrency
+
+Concurrency for multipart uploads and copies.
+
+This is the number of chunks of the same file that are uploaded
+concurrently for multipart uploads and copies.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_DRIME_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+#### --drime-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_DRIME_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
+
+#### --drime-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_DRIME_DESCRIPTION
+- Type: string
+- Required: false
+
+
+
+## Limitations
+
+Drime only supports filenames up to 255 bytes in length, where filenames are
+encoded in UTF8.
+
# Dropbox
Paths are specified as `remote:path`
@@ -43973,6 +44488,9 @@ managing files in the cloud easy. Its cross-platform file backup
services let you upload and back up files from any internet-connected
device.
+**Note** FileLu now has a fully featured S3 backend [FileLu S5](/s3#filelu-s5),
+an industry standard S3 compatible object store.
+
## Configuration
Here is an example of how to make a remote called `filelu`. First, run:
@@ -44210,6 +44728,241 @@ for troubleshooting and updates.
For further information, visit [FileLu's website](https://filelu.com/).
+# Filen
+## Configuration
+The initial setup for Filen requires that you get an API key for your account;
+currently this is only possible using the [Filen CLI](https://github.com/FilenCloudDienste/filen-cli).
+This means you must first download the CLI, log in, and then run the `export-api-key` command.
+
+Here is an example of how to make a remote called `FilenRemote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+name> FilenRemote
+Option Storage.
+
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Filen
+ \ "filen"
+[snip]
+Storage> filen
+
+Option Email.
+The email of your Filen account
+Enter a value.
+Email> youremail@provider.com
+
+Option Password.
+The password of your Filen account
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+
+Option API Key.
+An API Key for your Filen account
+Get this using the Filen CLI export-api-key command
+You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: filen
+- Email: youremail@provider.com
+- Password: *** ENCRYPTED ***
+- API Key: *** ENCRYPTED ***
+Keep this "FilenRemote" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### Modification times and hashes
+Modification times are fully supported for files; for directories, only the creation time matters.
+
+Filen supports Blake3 hashes.
+
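+Assuming the hash is exposed under the name `blake3` (check with
+`rclone hashsum` on your install), you can list file hashes with:
+
+```console
+rclone hashsum blake3 FilenRemote:
+```
+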
+### Restricted filename characters
+Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8)
+
+
+
+### Standard options
+
+Here are the Standard options specific to filen (Filen).
+
+#### --filen-email
+
+Email of your Filen account
+
+Properties:
+
+- Config: email
+- Env Var: RCLONE_FILEN_EMAIL
+- Type: string
+- Required: true
+
+#### --filen-password
+
+Password of your Filen account
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_FILEN_PASSWORD
+- Type: string
+- Required: true
+
+#### --filen-api-key
+
+API Key for your Filen account
+
+Get this using the Filen CLI export-api-key command
+You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_FILEN_API_KEY
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to filen (Filen).
+
+#### --filen-upload-concurrency
+
+Concurrency for chunked uploads.
+
+This is the upper limit for how many transfers for the same file are running concurrently.
+Setting this to a value smaller than 1 will cause uploads to deadlock.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
+#### --filen-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_FILEN_ENCODING
+- Type: Encoding
+- Default: Slash,Del,Ctl,InvalidUtf8,Dot
+
+#### --filen-master-keys
+
+Master Keys (internal use only)
+
+Properties:
+
+- Config: master_keys
+- Env Var: RCLONE_FILEN_MASTER_KEYS
+- Type: string
+- Required: false
+
+#### --filen-private-key
+
+Private RSA Key (internal use only)
+
+Properties:
+
+- Config: private_key
+- Env Var: RCLONE_FILEN_PRIVATE_KEY
+- Type: string
+- Required: false
+
+#### --filen-public-key
+
+Public RSA Key (internal use only)
+
+Properties:
+
+- Config: public_key
+- Env Var: RCLONE_FILEN_PUBLIC_KEY
+- Type: string
+- Required: false
+
+#### --filen-auth-version
+
+Authentication Version (internal use only)
+
+Properties:
+
+- Config: auth_version
+- Env Var: RCLONE_FILEN_AUTH_VERSION
+- Type: string
+- Required: false
+
+#### --filen-base-folder-uuid
+
+UUID of Account Root Directory (internal use only)
+
+Properties:
+
+- Config: base_folder_uuid
+- Env Var: RCLONE_FILEN_BASE_FOLDER_UUID
+- Type: string
+- Required: false
+
+#### --filen-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FILEN_DESCRIPTION
+- Type: string
+- Required: false
+
+
+
# Files.com
[Files.com](https://www.files.com/) is a cloud storage service that provides a
@@ -44907,6 +45660,12 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
+Supports the format http://user:pass@host:port, http://host:port, http://host.
+
+Example:
+
+ http://myUser:myPass@proxyhostname.example.com:8000
+
Properties:
@@ -46071,9 +46830,14 @@ Properties:
#### --gcs-endpoint
-Endpoint for the service.
+Custom endpoint for the storage API. Leave blank to use the provider default.
-Leave blank normally.
+When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
+the subpath will be ignored during upload operations due to a limitation in the
+underlying Google API Go client library.
+Download and listing operations will work correctly with the full endpoint path.
+If you require subpath support for uploads, avoid using subpaths in your custom
+endpoint configuration.
Properties:
@@ -46081,6 +46845,13 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
+- Examples:
+ - "storage.example.org"
+ - Specify a custom endpoint
+ - "storage.example.org:4443"
+ - Specifying a custom endpoint with port
+ - "storage.example.org:4443/gcs/api"
+ - Specifying a subpath, see the note, uploads won't use the custom path!
#### --gcs-encoding
@@ -46379,7 +47150,7 @@ account key" button.
`https://www.googleapis.com/auth/drive`
to grant read/write access to Google Drive specifically.
You can also use `https://www.googleapis.com/auth/drive.readonly` for read
- only access.
+ only access with `--drive-scope=drive.readonly`.
- Click "Authorise"
##### 3. Configure rclone, assuming a new install
@@ -47534,6 +48305,23 @@ Properties:
- "read,write"
- Read and Write the value.
+#### --drive-metadata-enforce-expansive-access
+
+Whether the request should enforce expansive access rules.
+
+From Feb 2026 this flag will be set by default so this flag can be used for
+testing before then.
+
+See: https://developers.google.com/workspace/drive/api/guides/limited-expansive-access
+
+
+Properties:
+
+- Config: metadata_enforce_expansive_access
+- Env Var: RCLONE_DRIVE_METADATA_ENFORCE_EXPANSIVE_ACCESS
+- Type: bool
+- Default: false
+
#### --drive-encoding
The encoding for the backend.
@@ -48782,8 +49570,14 @@ second that each client_id can do set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
-Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id).
-You will need these scopes instead of the drive ones detailed:
+Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id)
+with the following differences:
+
+- At step 3, instead of enabling the "Google Drive API", search for and
+ enable the "Photos Library API".
+
+- At step 5, you will need to add different scopes. Use these scopes
+ instead of the drive ones:
```text
https://www.googleapis.com/auth/photoslibrary.appendonly
@@ -50995,6 +51789,189 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+# Internxt Drive
+
+[Internxt Drive](https://internxt.com) is a zero-knowledge encrypted cloud storage service.
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+
+## Limitations
+
+**Note:** The Internxt backend may not work with all account types. Please refer to [Internxt plan details](https://internxt.com/pricing) or contact [Internxt support](https://help.internxt.com) to verify rclone compatibility with your subscription.
+
+## Configuration
+
+Here is an example of how to make a remote called `internxt`. Run `rclone config` and follow the prompts:
+
+```
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> internxt
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Internxt Drive
+ \ "internxt"
+[snip]
+Storage> internxt
+
+Option email.
+Email of your Internxt account.
+Enter a value.
+email> user@example.com
+
+Option pass.
+Password.
+Enter a value.
+password>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: internxt
+- email: user@example.com
+- pass: *** ENCRYPTED ***
+Keep this "internxt" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+If you have two-factor authentication enabled on your Internxt account, you will be prompted to enter the code during login.
+
+### Security Considerations
+
+The authentication process stores your password and mnemonic in the rclone configuration file. It is **strongly recommended** to encrypt your rclone config to protect these sensitive credentials:
+
+```
+rclone config password
+```
+
+This will prompt you to set a password that encrypts your entire configuration file.
+
+### Usage Examples
+
+```
+# List files
+rclone ls internxt:
+
+# Copy files to Internxt
+rclone copy /local/path internxt:remote/path
+
+# Sync local directory to Internxt
+rclone sync /local/path internxt:remote/path
+
+# Mount Internxt Drive as a local filesystem
+rclone mount internxt: /path/to/mountpoint
+
+# Check storage usage
+rclone about internxt:
+```
+
+### Modification times and hashes
+
+The Internxt backend does not support hashes.
+
+Modification times are read from the server but cannot be set. The backend reports `ModTimeNotSupported` precision, so modification times will not be used for sync comparisons.
+
+### Restricted filename characters
+
+The Internxt backend replaces the [default restricted characters
+set](https://rclone.org/overview/#restricted-characters).
+
+
+### Standard options
+
+Here are the Standard options specific to internxt (Internxt Drive).
+
+#### --internxt-email
+
+Email of your Internxt account.
+
+Properties:
+
+- Config: email
+- Env Var: RCLONE_INTERNXT_EMAIL
+- Type: string
+- Required: true
+
+#### --internxt-pass
+
+Password.
+
+**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/).
+
+Properties:
+
+- Config: pass
+- Env Var: RCLONE_INTERNXT_PASS
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to internxt (Internxt Drive).
+
+#### --internxt-mnemonic
+
+Mnemonic (internal use only)
+
+Properties:
+
+- Config: mnemonic
+- Env Var: RCLONE_INTERNXT_MNEMONIC
+- Type: string
+- Required: false
+
+#### --internxt-skip-hash-validation
+
+Skip hash validation when downloading files.
+
+By default, hash validation is disabled. Set this to false to enable validation.
+
+Properties:
+
+- Config: skip_hash_validation
+- Env Var: RCLONE_INTERNXT_SKIP_HASH_VALIDATION
+- Type: bool
+- Default: true
+
+#### --internxt-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_INTERNXT_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot
+
+#### --internxt-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_INTERNXT_DESCRIPTION
+- Type: string
+- Required: false
+
+
+
# Jottacloud
Jottacloud is a cloud storage service provider from a Norwegian company, using
@@ -52954,6 +53931,30 @@ set](https://rclone.org/overview/#restricted-characters).
Here are the Advanced options specific to memory (In memory object storage system.).
+#### --memory-discard
+
+If set all writes will be discarded and reads will return an error
+
+If set then when files are uploaded the contents will not be saved. The
+files will appear to have been uploaded but will give an error on
+read. Files will have their MD5 sum calculated on upload, which takes
+very little CPU time and allows the transfers to be checked.
+
+This can be useful for testing performance.
+
+Probably most easily used by using the connection string syntax:
+
+ :memory,discard:bucket
+
+
+
+Properties:
+
+- Config: discard
+- Env Var: RCLONE_MEMORY_DISCARD
+- Type: bool
+- Default: false
+
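+For example, to benchmark upload throughput without storing any data:
+
+```console
+rclone copy --progress /path/to/testfiles ":memory,discard:bucket"
+```
+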
#### --memory-description
Description of the remote.
@@ -53431,6 +54432,26 @@ MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
hashes, e.g. the local disk.
+### Metadata and tags
+
+Rclone can map arbitrary metadata to Azure Blob headers, user metadata, and tags
+when `--metadata` is enabled (or when using `--metadata-set` / `--metadata-mapper`).
+
+- Headers: Set these keys in metadata to map to the corresponding blob headers:
+ - `cache-control`, `content-disposition`, `content-encoding`, `content-language`, `content-type`.
+- User metadata: Any other non-reserved keys are written as user metadata
+ (keys are normalized to lowercase). Keys starting with `x-ms-` are reserved and
+ are not stored as user metadata.
+- Tags: Provide `x-ms-tags` as a comma-separated list of `key=value` pairs, e.g.
+ `x-ms-tags=env=dev,team=sync`. These are applied as blob tags on upload and on
+ server-side copies. Whitespace around keys/values is ignored.
+- Modtime override: Provide `mtime` in RFC3339/RFC3339Nano format to override the
+ stored modtime persisted in user metadata. If `mtime` cannot be parsed, rclone
+ logs a debug message and ignores the override.
+
+Notes:
+- Rclone ignores reserved `x-ms-*` keys (except `x-ms-tags`) for user metadata.
+
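+For example, to upload a file with tags and an explicit Content-Type (the
+container name and tag values here are illustrative):
+
+```console
+rclone copyto report.csv azureblob:container/report.csv \
+  --metadata-set "x-ms-tags=env=dev,team=sync" \
+  --metadata-set "content-type=text/csv"
+```
+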
### Performance
When uploading large files, increasing the value of
@@ -53759,6 +54780,20 @@ Properties:
- Type: string
- Required: false
+#### --azureblob-connection-string
+
+Storage Connection String.
+
+Connection string for the storage. Leave blank if using other auth methods.
+
+
+Properties:
+
+- Config: connection_string
+- Env Var: RCLONE_AZUREBLOB_CONNECTION_STRING
+- Type: string
+- Required: false
+
#### --azureblob-tenant
ID of the service principal's tenant. Also called its directory ID.
@@ -54368,6 +55403,24 @@ Properties:
- Type: string
- Required: false
+### Metadata
+
+User metadata is stored as x-ms-meta- keys. Azure metadata keys are case insensitive and are always returned in lower case.
+
+Here are the possible system metadata items for the azureblob backend.
+
+| Name | Help | Type | Example | Read Only |
+|------|------|------|---------|-----------|
+| cache-control | Cache-Control header | string | no-cache | N |
+| content-disposition | Content-Disposition header | string | inline | N |
+| content-encoding | Content-Encoding header | string | gzip | N |
+| content-language | Content-Language header | string | en-US | N |
+| content-type | Content-Type header | string | text/plain | N |
+| mtime | Time of last modification, read from rclone metadata | RFC 3339 | 2006-01-02T15:04:05.999999999Z07:00 | N |
+| tier | Tier of the object | string | Hot | **Y** |
+
+See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
+
### Custom upload headers
@@ -54769,7 +55822,7 @@ Azure Storage Account Name.
Set this to the Azure Storage Account Name in use.
-Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
If this is blank and if env_auth is set it will be read from the
environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
@@ -54782,25 +55835,11 @@ Properties:
- Type: string
- Required: false
-#### --azurefiles-share-name
-
-Azure Files Share Name.
-
-This is required and is the name of the share to access.
-
-
-Properties:
-
-- Config: share_name
-- Env Var: RCLONE_AZUREFILES_SHARE_NAME
-- Type: string
-- Required: false
-
#### --azurefiles-env-auth
Read credentials from runtime (environment variables, CLI or MSI).
-See the [authentication docs](/azurefiles#authentication) for full info.
+See the [authentication docs](/azureblob#authentication) for full info.
Properties:
@@ -54813,7 +55852,7 @@ Properties:
Storage Account Shared Key.
-Leave blank to use SAS URL or connection string.
+Leave blank to use SAS URL or Emulator.
Properties:
@@ -54824,9 +55863,9 @@ Properties:
#### --azurefiles-sas-url
-SAS URL.
+SAS URL for container level access only.
-Leave blank if using account/key or connection string.
+Leave blank if using account/key or Emulator.
Properties:
@@ -54837,7 +55876,10 @@ Properties:
#### --azurefiles-connection-string
-Azure Files Connection String.
+Storage Connection String.
+
+Connection string for the storage. Leave blank if using other auth methods.
+
Properties:
@@ -54929,6 +55971,20 @@ Properties:
- Type: string
- Required: false
+#### --azurefiles-share-name
+
+Azure Files Share Name.
+
+This is required and is the name of the share to access.
+
+
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
### Advanced options
Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
@@ -54991,13 +56047,11 @@ Path to file containing credentials for use with a service principal.
Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
$ az ad sp create-for-rbac --name "" \
- --role "Storage Files Data Owner" \
+ --role "Storage Blob Data Owner" \
--scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \
> azure-principal.json
-See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to files data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
-
-**NB** this section needs updating for Azure Files - pull requests appreciated!
+See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
It may be more convenient to put the credentials directly into the
rclone config file under the `client_id`, `tenant` and `client_secret`
@@ -55011,6 +56065,28 @@ Properties:
- Type: string
- Required: false
+#### --azurefiles-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+
+It determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before
+authenticating.
+
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
#### --azurefiles-use-msi
Use a managed service identity to authenticate (only works in Azure).
@@ -55070,32 +56146,29 @@ Properties:
- Type: string
- Required: false
-#### --azurefiles-disable-instance-discovery
+#### --azurefiles-use-emulator
-Skip requesting Microsoft Entra instance metadata
-This should be set true only by applications authenticating in
-disconnected clouds, or private clouds such as Azure Stack.
-It determines whether rclone requests Microsoft Entra instance
-metadata from `https://login.microsoft.com/` before
-authenticating.
-Setting this to true will skip this request, making you responsible
-for ensuring the configured authority is valid and trustworthy.
+Uses local storage emulator if provided as 'true'.
+Leave blank if using real azure storage endpoint.
Properties:
-- Config: disable_instance_discovery
-- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Config: use_emulator
+- Env Var: RCLONE_AZUREFILES_USE_EMULATOR
- Type: bool
- Default: false
#### --azurefiles-use-az
Use Azure CLI tool az for authentication
+
Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
+
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.
+
Don't set env_auth at the same time.
@@ -56006,7 +57079,7 @@ This is why this flag is not set as the default.
As a rule of thumb if nearly all of your data is under rclone's root
directory (the `root/directory` in `onedrive:root/directory`) then
-using this flag will be be a big performance win. If your data is
+using this flag will be a big performance win. If your data is
mostly not under the root then using this flag will be a big
performance loss.
@@ -56213,7 +57286,7 @@ Here are the possible system metadata items for the onedrive backend.
| content-type | The MIME type of the file. | string | text/plain | **Y** |
| created-by-display-name | Display name of the user that created the item. | string | John Doe | **Y** |
| created-by-id | ID of the user that created the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
-| description | A short description of the file. Max 1024 characters. Only supported for OneDrive Personal. | string | Contract for signing | N |
+| description | A short description of the file. Max 1024 characters. No longer supported by Microsoft. | string | Contract for signing | N |
| id | The unique identifier of the item within OneDrive. | string | 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K | **Y** |
| last-modified-by-display-name | Display name of the user that last modified the item. | string | John Doe | **Y** |
| last-modified-by-id | ID of the user that last modified the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
@@ -59084,7 +60157,7 @@ Properties:
Above this size files will be chunked.
-Above this size files will be chunked into a a `_segments` container
+Above this size files will be chunked into a `_segments` container
or a `.file-segments` directory. (See the `use_segments_container` option
for more info). Default for this is 5 GiB which is its maximum value, which
means only files above this size will be chunked.
@@ -59431,6 +60504,31 @@ So if the folder you want rclone to use is "My Music/", then use the returned
id from the ```rclone lsf``` command (ex. `dxxxxxxxx2`) as the `root_folder_id` variable
value in the config file.
+### Change notifications and mounts
+
+The pCloud backend supports real-time updates for rclone mounts via change
+notifications. rclone uses pCloud's diff long-polling API to detect changes and
+will automatically refresh directory listings in the mounted filesystem when
+changes occur.
+
+Notes and behavior:
+
+- Works automatically when using `rclone mount` and requires no additional
+  configuration.
+- Notifications are directory-scoped: when rclone detects a change, it refreshes
+  the affected directory so new/removed/renamed files become visible promptly.
+- Updates are near real-time. The backend uses a long-poll with short fallback
+  polling intervals, so you should see changes appear quickly without manual
+  refreshes.
+
+If you want to debug or verify notifications, you can use the helper command:
+
+```bash
+rclone test changenotify remote:
+```
+
+This will log incoming change notifications for the given remote.
+
### Standard options
@@ -63024,6 +64122,12 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
+Supports the format http://user:pass@host:port, http://host:port, http://host.
+
+Example:
+
+ http://myUser:myPass@proxyhostname.example.com:8000
+
Properties:
@@ -63107,6 +64211,267 @@ Hetzner Storage Boxes are supported through the SFTP backend on port 23.
See [Hetzner's documentation for details](https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg#rclone)
+# Shade
+
+This is a backend for the [Shade](https://shade.inc/) platform.
+
+## About Shade
+
+[Shade](https://shade.inc/) is an AI-powered cloud NAS that makes your cloud files behave like a local drive, optimized for media and creative workflows. It provides fast, secure access with natural-language search, easy sharing, and scalable cloud storage.
+
+
+## Accounts & Pricing
+
+To use this backend, you need to [create a free account](https://app.shade.inc/) on Shade. A free account includes 20GB of storage.
+
+
+## Usage
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
+
+
+## Configuration
+
+Here is an example of making a Shade configuration.
+
+First, [create a free account](https://app.shade.inc/) and choose a plan.
+
+You will need to log in and get the `API Key` from your account settings and the `Drive ID` from the settings of the drive you created.
+
+Now run
+
+`rclone config`
+
+Follow this interactive process:
+
+```sh
+$ rclone config
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> Shade
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[OTHER OPTIONS]
+xx / Shade FS
+ \ (shade)
+[OTHER OPTIONS]
+Storage> xx
+
+Option drive_id.
+The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
+Enter a value.
+drive_id> [YOUR_ID]
+
+Option api_key.
+An API key for your account.
+Enter a value.
+api_key> [YOUR_API_KEY]
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: shade
+- drive_id: [YOUR_ID]
+- api_key: [YOUR_API_KEY]
+Keep this "Shade" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+### Modification times and hashes
+
+Shade supports neither hashes nor setting modification times.
+
+
+### Transfers
+
+Shade uses multipart uploads by default. This means that files will be chunked and sent up to Shade concurrently. To configure how many simultaneous uploads are used, set the `upload_concurrency` option in the advanced config section. Note that higher concurrency uses more memory and initiates more HTTP requests.
+
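+As a sketch (assuming a remote named `shade:` configured as above), the upload
+concurrency and chunk size can be raised for large transfers using the options
+documented below:
+
+```sh
+rclone copy /home/source shade:backup \
+  --shade-upload-concurrency 8 \
+  --shade-chunk-size 128Mi
+```
+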
+### Deleting files
+
+Please note that when deleting files in Shade via rclone it will delete the file instantly, instead of sending it to the trash. This means that it will not be recoverable.
+
+
+
+### Standard options
+
+Here are the Standard options specific to shade (Shade FS).
+
+#### --shade-drive-id
+
+The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
+
+Properties:
+
+- Config: drive_id
+- Env Var: RCLONE_SHADE_DRIVE_ID
+- Type: string
+- Required: true
+
+#### --shade-api-key
+
+An API key for your account.
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_SHADE_API_KEY
+- Type: string
+- Required: true
+
+### Advanced options
+
+Here are the Advanced options specific to shade (Shade FS).
+
+#### --shade-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_SHADE_ENDPOINT
+- Type: string
+- Required: false
+
+#### --shade-chunk-size
+
+Chunk size to use for uploading.
+
+Any files larger than this will be uploaded in chunks of this size.
+
+Note that this is stored in memory per transfer, so increasing it will
+increase memory usage.
+
+Minimum is 5MB, maximum is 5GB.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_SHADE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 64Mi
+
+#### --shade-upload-concurrency
+
+Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_SHADE_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+#### --shade-max-upload-parts
+
+Maximum amount of parts in a multipart upload.
+
+Properties:
+
+- Config: max_upload_parts
+- Env Var: RCLONE_SHADE_MAX_UPLOAD_PARTS
+- Type: int
+- Default: 10000
+
+#### --shade-token
+
+JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_SHADE_TOKEN
+- Type: string
+- Required: false
+
+#### --shade-token-expiry
+
+JWT Token Expiration time. Don't set this value - rclone will set it automatically
+
+Properties:
+
+- Config: token_expiry
+- Env Var: RCLONE_SHADE_TOKEN_EXPIRY
+- Type: string
+- Required: false
+
+#### --shade-encoding
+
+The encoding for the backend.
+
+See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_SHADE_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+#### --shade-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SHADE_DESCRIPTION
+- Type: string
+- Required: false
+
+
+
+## Limitations
+
+Note that Shade is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
+
+Shade only supports filenames up to 255 characters in length.
+
+`rclone about` is not supported by the Shade backend. Backends without
+this capability cannot determine free space for an rclone mount or
+use policy `mfs` (most free space) as a member of an rclone union
+remote.
+
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+
+## Backend commands
+
+Here are the commands specific to the shade backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the [backend](https://rclone.org/commands/rclone_backend/) command for more
+info on how to pass options and arguments.
+
+These can be run on a running backend using the rc command
+[backend/command](https://rclone.org/rc/#backend-command).
+
# SMB
SMB is [a communication protocol to share files over network](https://en.wikipedia.org/wiki/Server_Message_Block).
@@ -64481,180 +65846,6 @@ as a member of an rclone union remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
-# Uptobox
-
-This is a Backend for Uptobox file storage service. Uptobox is closer to a
-one-click hoster than a traditional cloud storage provider and therefore not
-suitable for long term storage.
-
-Paths are specified as `remote:path`
-
-Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
-
-## Configuration
-
-To configure an Uptobox backend you'll need your personal api token. You'll find
-it in your [account settings](https://uptobox.com/my_account).
-
-Here is an example of how to make a remote called `remote` with the default setup.
-First run:
-
-```console
-rclone config
-```
-
-This will guide you through an interactive setup process:
-
-```text
-Current remotes:
-
-Name Type
-==== ====
-TestUptobox uptobox
-
-e) Edit existing remote
-n) New remote
-d) Delete remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-e/n/d/r/c/s/q> n
-name> uptobox
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-[...]
-37 / Uptobox
- \ "uptobox"
-[...]
-Storage> uptobox
-** See help for uptobox backend at: https://rclone.org/uptobox/ **
-
-Your API Key, get it from https://uptobox.com/my_account
-Enter a string value. Press Enter for the default ("").
-api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-Edit advanced config? (y/n)
-y) Yes
-n) No (default)
-y/n> n
-Remote config
---------------------
-[uptobox]
-type = uptobox
-api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d>
-```
-
-Once configured you can then use `rclone` like this (replace `remote` with the
-name you gave your remote):
-
-List directories in top level of your Uptobox
-
-```console
-rclone lsd remote:
-```
-
-List all the files in your Uptobox
-
-```console
-rclone ls remote:
-```
-
-To copy a local directory to an Uptobox directory called backup
-
-```console
-rclone copy /home/source remote:backup
-```
-
-### Modification times and hashes
-
-Uptobox supports neither modified times nor checksums. All timestamps
-will read as that set by `--default-time`.
-
-### Restricted filename characters
-
-In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters)
-the following characters are also replaced:
-
-| Character | Value | Replacement |
-| --------- |:-----:|:-----------:|
-| " | 0x22 | " |
-| ` | 0x41 | ` |
-
-Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
-as they can't be used in XML strings.
-
-
-### Standard options
-
-Here are the Standard options specific to uptobox (Uptobox).
-
-#### --uptobox-access-token
-
-Your access token.
-
-Get it from https://uptobox.com/my_account.
-
-Properties:
-
-- Config: access_token
-- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
-- Type: string
-- Required: false
-
-### Advanced options
-
-Here are the Advanced options specific to uptobox (Uptobox).
-
-#### --uptobox-private
-
-Set to make uploaded files private
-
-Properties:
-
-- Config: private
-- Env Var: RCLONE_UPTOBOX_PRIVATE
-- Type: bool
-- Default: false
-
-#### --uptobox-encoding
-
-The encoding for the backend.
-
-See the [encoding section in the overview](https://rclone.org/overview/#encoding) for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: Encoding
-- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
-
-#### --uptobox-description
-
-Description of the remote.
-
-Properties:
-
-- Config: description
-- Env Var: RCLONE_UPTOBOX_DESCRIPTION
-- Type: string
-- Required: false
-
-
-
-## Limitations
-
-Uptobox will delete inactive files that have not been accessed in 60 days.
-
-`rclone about` is not supported by this backend an overview of used space can however
-been seen in the uptobox web interface.
-
# Union
The `union` backend joins several remotes together to make a single unified view
@@ -66867,6 +68058,80 @@ Options:
# Changelog
+## v1.73.0 - 2026-01-30
+
+[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)
+
+- New backends
+ - [Shade](https://rclone.org/shade/) (jhasse-shade)
+ - [Drime](https://rclone.org/drime/) (dougal)
+ - [Filen](https://rclone.org/filen/) (Enduriel)
+ - [Internxt](https://rclone.org/internxt/) (jzunigax2)
+ - New S3 providers
+ - [Bizfly Cloud Simple Storage](https://rclone.org/s3/#bizflycloud) (vupn0712)
+- New Features
+ - docs: Add [Support Tiers](https://rclone.org/tiers/) to the documentation (Nick Craig-Wood)
+ - rc: Add [operations/hashsumfile](https://rclone.org/rc/#operations-hashsumfile) to sum a single file only (Nick Craig-Wood)
+ - serve webdav: Implement download directory as Zip (Leo)
+- Bug Fixes
+ - fs: fix bwlimit: correct reporting (Mikel Olasagasti Uranga)
+ - log: fix systemd adding extra newline (dougal)
+ - docs: fixes (albertony, darkdragon-001, Duncan Smart, hyusap, Marc-Philip, Nick Craig-Wood, vicerace, vyv03354, yuval-cloudinary, yy)
+ - serve s3: Make errors in `--s3-auth-key` fatal (Nick Craig-Wood)
+- Mount
+ - Fix OpenBSD mount support. (Nick Owens)
+- Azure Blob
+ - Add metadata and tags support across upload and copy paths (Cliff Frey)
+ - Factor the common auth into a library (Nick Craig-Wood)
+- Azurefiles
+ - Factor the common auth into a library (Nick Craig-Wood)
+- B2
+ - Support authentication with new bucket restricted application keys (DianaNites)
+- Drive
+ - Add `--drive-metadata-force-expansive-access` flag (Nick Craig-Wood)
+ - Fix crash when trying to create a shortcut to a Google doc (Nick Craig-Wood)
+- FTP
+ - Add http proxy authentication support (Nicolas Dessart)
+- Mega
+ - Reverts TLS workaround (necaran)
+- Memory
+ - Add `--memory-discard` flag for speed testing (Nick Craig-Wood)
+- OneDrive
+ - Fix cancelling multipart upload (Nick Craig-Wood)
+ - Fix setting modification time on directories for OneDrive Personal (Nick Craig-Wood)
+ - Fix OneDrive Personal no longer supports description (Nick Craig-Wood)
+ - Fix require sign in for OneDrive Personal (Nick Craig-Wood)
+ - Fix permissions on OneDrive Personal (Nick Craig-Wood)
+- Oracle Object Storage
+ - Eliminate unnecessary heap allocation (Qingwei Li)
+- Pcloud
+ - Add support for `ChangeNotify` to enable real-time updates in mount (masrlinu)
+- Protondrive
+ - Update to use forks of upstream modules to unblock development (Nick Craig-Wood)
+- S3
+ - Add ability to specify an IAM role for cross-account interaction (Vladislav Tropnikov)
+ - Linode: updated endpoints to use ISO 3166-1 alpha-2 standard (jbagwell-akamai)
+ - Fix Copy ignoring storage class (vupn0712)
+- SFTP
+ - Add http proxy authentication support (Nicolas Dessart)
+ - Eliminate unnecessary heap allocation (Qingwei Li)
+
+## v1.72.1 - 2025-12-10
+
+[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
+
+- Bug Fixes
+ - build: update to go1.25.5 to fix [CVE-2025-61729](https://pkg.go.dev/vuln/GO-2025-4155)
+ - doc fixes (Duncan Smart, Nick Craig-Wood)
+ - configfile: Fix piped config support (Jonas Tingeborn)
+ - log
+ - Fix PID not included in JSON log output (Tingsong Xu)
+ - Fix backtrace not going to the --log-file (Nick Craig-Wood)
+- Google Cloud Storage
+ - Improve endpoint parameter docs (Johannes Rothe)
+- S3
+ - Add missing regions for Selectel provider (Nick Craig-Wood)
+
## v1.72.0 - 2025-11-21
[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)
@@ -66887,7 +68152,7 @@ Options:
- [rclone test speed](https://rclone.org/commands/rclone_test_speed/): Add command to test a specified remotes speed (dougal)
- New Features
 - backends: many backends have had a paged listing (`ListP`) interface added
- - this enables progress when listing large directories and reduced memory usage
+ - this enables progress when listing large directories and reduced memory usage
- build
- Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 (dependabot[bot])
- Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko)
@@ -73337,6 +74602,50 @@ original characters are supported. When the same Unicode characters
are intentionally used in file names, this replacement strategy leads
to unwanted renames. Read more under section [caveats](https://rclone.org/overview/#restricted-filenames-caveats).
+### Why does rclone fail to connect over TLS but another client works?
+
+If you see TLS handshake failures (or packet captures show the server
+rejecting all offered ciphers), the server/proxy may only support
+legacy TLS cipher suites (for example RSA key-exchange ciphers
+such as `RSA_WITH_AES_256_CBC_SHA256`, or old 3DES ciphers). Recent Go
+versions (which rclone is built with) have **removed insecure ciphers
+from the default list**, so rclone may refuse to negotiate them even
+if other tools still do.
+
+If you can't update/reconfigure the server/proxy to support modern TLS
+(TLS 1.2/1.3) and ECDHE-based cipher suites you can re-enable legacy
+ciphers via `GODEBUG`:
+
+- Windows (cmd.exe):
+
+ ```bat
+ set GODEBUG=tlsrsakex=1
+ rclone copy ...
+ ```
+
+- Windows (PowerShell):
+
+ ```powershell
+ $env:GODEBUG="tlsrsakex=1"
+ rclone copy ...
+ ```
+
+- Linux/macOS:
+
+ ```sh
+ GODEBUG=tlsrsakex=1 rclone copy ...
+ ```
+
+If the server only supports 3DES, try:
+
+```sh
+GODEBUG=tls3des=1 rclone ...
+```
+
+This applies to **any rclone feature using TLS** (HTTPS, FTPS, WebDAV
+over TLS, proxies with TLS interception, etc.). Use these workarounds
+only long enough to get the server/proxy updated.
+
# License
This is free software under the terms of the MIT license (check the
@@ -74410,6 +75719,31 @@ put them back in again. -->
- jijamik <30904953+jijamik@users.noreply.github.com>
- Dominik Sander
- Nikolay Kiryanov
+- Diana <5275194+DianaNites@users.noreply.github.com>
+- Duncan Smart
+- vicerace
+- Cliff Frey
+- Vladislav Tropnikov
+- Leo
+- Johannes Rothe
+- Tingsong Xu
+- Jonas Tingeborn <134889+jojje@users.noreply.github.com>
+- jhasse-shade
+- vyv03354
+- masrlinu <5259918+masrlinu@users.noreply.github.com>
+- vupn0712 <126212736+vupn0712@users.noreply.github.com>
+- darkdragon-001
+- sys6101
+- Nicolas Dessart
+- Qingwei Li <332664203@qq.com>
+- yy
+- Marc-Philip
+- Mikel Olasagasti Uranga
+- Nick Owens
+- hyusap
+- jzunigax2 <125698953+jzunigax2@users.noreply.github.com>
+- lullius
+- StarHack
# Contact the rclone project
diff --git a/MANUAL.txt b/MANUAL.txt
index efb8fd646..668631e8b 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Nov 21, 2025
+Jan 30, 2026
NAME
@@ -166,6 +166,7 @@ S3, that work out of the box.)
- Akamai Netstorage
- Alibaba Cloud (Aliyun) Object Storage System (OSS)
- Amazon S3
+- Bizfly Cloud Simple Storage
- Backblaze B2
- Box
- Ceph
@@ -178,12 +179,14 @@ S3, that work out of the box.)
- DigitalOcean Spaces
- Digi Storage
- Dreamhost
+- Drime
- Dropbox
- Enterprise File Fabric
- Exaba
- Fastmail Files
- FileLu Cloud Storage
- FileLu S5 (S3-Compatible Object Storage)
+- Filen
- Files.com
- FlashBlade
- FTP
@@ -200,6 +203,7 @@ S3, that work out of the box.)
- iCloud Drive
- ImageKit
- Internet Archive
+- Internxt
- Jottacloud
- IBM COS S3
- IDrive e2
@@ -252,6 +256,7 @@ S3, that work out of the box.)
- Selectel
- Servercore Object Storage
- SFTP
+- Shade
- Sia
- SMB / CIFS
- Spectra Logic
@@ -261,7 +266,6 @@ S3, that work out of the box.)
- SugarSync
- Tencent Cloud Object Storage (COS)
- Uloz.to
-- Uptobox
- Wasabi
- WebDAV
- Yandex Disk
@@ -946,9 +950,11 @@ See the following for detailed instructions for
- Crypt - to encrypt other remotes
- DigitalOcean Spaces
- Digi Storage
+- Drime
- Dropbox
- Enterprise File Fabric
- FileLu Cloud Storage
+- Filen
- Files.com
- FTP
- Gofile
@@ -962,6 +968,7 @@ See the following for detailed instructions for
- HTTP
- iCloud Drive
- Internet Archive
+- Internxt
- Jottacloud
- Koofr
- Linkbox
@@ -986,13 +993,13 @@ See the following for detailed instructions for
- rsync.net
- Seafile
- SFTP
+- Shade
- Sia
- SMB
- Storj
- SugarSync
- Union
- Uloz.to
-- Uptobox
- WebDAV
- Yandex Disk
- Zoho WorkDrive
@@ -2075,6 +2082,10 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use --disable ListR to suppress the behavior.
+See --fast-list for more details.
+
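+For example, to list without the recursive method (a sketch; substitute your
+remote name):
+
+    rclone ls remote: --disable ListR
+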
Listing a nonexistent directory will produce an error except for remotes
which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
@@ -2173,6 +2184,10 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use --disable ListR to suppress the behavior.
+See --fast-list for more details.
+
Listing a nonexistent directory will produce an error except for remotes
which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
@@ -2263,6 +2278,10 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use --disable ListR to suppress the behavior.
+See --fast-list for more details.
+
Listing a nonexistent directory will produce an error except for remotes
which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
@@ -4588,10 +4607,10 @@ Examples:
// Output: stories/The Quick Brown Fox!.txt
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
- // Output: stories/The Quick Brown Fox!-20251121
+ // Output: stories/The Quick Brown Fox!-20260130
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
- // Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
+ // Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
@@ -5646,6 +5665,10 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use --disable ListR to suppress the behavior.
+See --fast-list for more details.
+
Listing a nonexistent directory will produce an error except for remotes
which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
@@ -5772,11 +5795,11 @@ changed with the following options:
- If --files-only is specified then files will be returned only, no
directories.
-If --stat is set then the the output is not an array of items, but
-instead a single JSON blob will be returned about the item pointed to.
-This will return an error if the item isn't found, however on bucket
-based backends (like s3, gcs, b2, azureblob etc) if the item isn't found
-it will return an empty directory, as it isn't possible to tell empty
+If --stat is set then the output is not an array of items, but instead a
+single JSON blob will be returned about the item pointed to. This will
+return an error if the item isn't found, however on bucket based
+backends (like s3, gcs, b2, azureblob etc) if the item isn't found it
+will return an empty directory, as it isn't possible to tell empty
directories from missing directories there.
The Path field will only show folders below the remote path being
@@ -5816,6 +5839,10 @@ recursion.
The other list commands lsd,lsf,lsjson do not recurse by default - use
-R to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use --disable ListR to suppress the behavior.
+See --fast-list for more details.
+
Listing a nonexistent directory will produce an error except for remotes
which can't have empty directories (e.g. s3, swift, or gcs - the
bucket-based remotes).
@@ -8295,7 +8322,7 @@ This command can also accept a password through STDIN instead of an
argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password not including the trailing newline.
- echo "secretpassword" | rclone obscure -
+ echo 'secretpassword' | rclone obscure -
If there is no data on STDIN to read, rclone obscure will default to
obfuscating the hyphen itself.
@@ -12366,6 +12393,21 @@ correctly in the request. (See the AWS docs).
--auth-key can be repeated for multiple auth pairs. If --auth-key is not
provided then serve s3 will allow anonymous access.
+Like all rclone flags, --auth-key can be set via an environment
+variable, in this case RCLONE_AUTH_KEY. Since this flag can be repeated,
+the input to RCLONE_AUTH_KEY is CSV encoded. Because each
+accessKey,secretKey pair contains a comma, it needs to be in quotes.
+
+ export RCLONE_AUTH_KEY='"user,pass"'
+ rclone serve s3 ...
+
+Or to supply multiple identities:
+
+ export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
+ rclone serve s3 ...
+
+Setting this variable without quotes will produce an error.
+
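The CSV quoting rule above can be illustrated with a short sketch (a
standalone illustration, not part of rclone; the split_auth_pairs helper
is hypothetical):

```python
import csv
import io

# Sketch of how a CSV-encoded multi-value variable like RCLONE_AUTH_KEY
# is split. Each quoted field holds one accessKey,secretKey pair; without
# the inner quotes the comma would split the pair itself.
def split_auth_pairs(value: str) -> list[str]:
    return next(csv.reader(io.StringIO(value)))

print(split_auth_pairs('"user1,pass1","user2,pass2"'))  # ['user1,pass1', 'user2,pass2']
print(split_auth_pairs('"user,pass"'))                  # ['user,pass']
```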
Please note that some clients may require HTTPS endpoints. See the SSL
docs for more information.
@@ -14619,6 +14661,7 @@ Options
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
+ --disable-zip Disable zip download of directories
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
@@ -20880,8 +20923,8 @@ Example:
rclone rc mount/mount fs=mydrive: mountPoint=/home//mountPoint mountType=mount
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
-The vfsOpt are as described in options/get and can be seen in the the
-"vfs" section when running and the mountOpt can be seen in the "mount"
+The vfsOpt are as described in options/get and can be seen in the "vfs"
+section when running and the mountOpt can be seen in the "mount"
section:
rclone rc options/get
@@ -21224,6 +21267,40 @@ See the hashsum command for more information on the above.
Authentication is required for this call.
+operations/hashsumfile: Produces a hash for a single file.
+
+Produces a hash for a single file using the hash named.
+
+This takes the following parameters:
+
+- fs - a remote name string e.g. "drive:"
+- remote - a path within that remote e.g. "file.txt"
+- hashType - type of hash to be used
+- download - check by downloading rather than with hash (boolean)
+- base64 - output the hashes in base64 rather than hex (boolean)
+
+If you supply the download flag, it will download the data from the
+remote and create the hash on the fly. This can be useful for remotes
+that don't support the given hash or if you really want to read all the
+data.
+
+Returns:
+
+- hash - hash for the file
+- hashType - type of hash used
+
+Example:
+
+ $ rclone rc --loopback operations/hashsumfile fs=/ remote=/bin/bash hashType=MD5 download=true base64=true
+ {
+ "hashType": "md5",
+ "hash": "MDMw-fG2YXs7Uz5Nz-H68A=="
+ }
+
+See the hashsum command for more information on the above.
+
+Authentication is required for this call.
+
operations/list: List the given remote and path in JSON format
This takes the following parameters:
@@ -22191,167 +22268,6 @@ Features
Here is an overview of the major features of each cloud storage system.
- --------------------------------------------------------------------------------------
- Name Hash ModTime Case Duplicate MIME Metadata
- Insensitive Files Type
- ----------------- -------------- --------- ------------- ----------- ------ ----------
- 1Fichier Whirlpool - No Yes R -
-
- Akamai Netstorage MD5, SHA256 R/W No No R -
-
- Amazon S3 (or S3 MD5 R/W No No R/W RWU
- compatible)
-
- Backblaze B2 SHA1 R/W No No R/W -
-
- Box SHA1 R/W Yes No - -
-
- Citrix ShareFile MD5 R/W Yes No - -
-
- Cloudinary MD5 R No Yes - -
-
- Dropbox DBHASH ¹ R Yes No - -
-
- Enterprise File - R/W Yes No R/W -
- Fabric
-
- FileLu Cloud MD5 R/W No Yes R -
- Storage
-
- Files.com MD5, CRC32 DR/W Yes No R -
-
- FTP - R/W ¹⁰ No No - -
-
- Gofile MD5 DR/W No Yes R -
-
- Google Cloud MD5 R/W No No R/W -
- Storage
-
- Google Drive MD5, SHA1, DR/W No Yes R/W DRWU
- SHA256
-
- Google Photos - - No Yes R -
-
- HDFS - R/W No No - -
-
- HiDrive HiDrive ¹² R/W No No - -
-
- HTTP - R No No R R
-
- iCloud Drive - R No No - -
-
- Internet Archive MD5, SHA1, R/W ¹¹ No No - RWU
- CRC32
-
- Jottacloud MD5 R/W Yes No R RW
-
- Koofr MD5 - Yes No - -
-
- Linkbox - R No No - -
-
- Mail.ru Cloud Mailru ⁶ R/W Yes No - -
-
- Mega - - No Yes - -
-
- Memory MD5 R/W No No - -
-
- Microsoft Azure MD5 R/W No No R/W -
- Blob Storage
-
- Microsoft Azure MD5 R/W Yes No R/W -
- Files Storage
-
- Microsoft QuickXorHash ⁵ DR/W Yes No R DRW
- OneDrive
-
- OpenDrive MD5 R/W Yes Partial ⁸ - -
-
- OpenStack Swift MD5 R/W No No R/W -
-
- Oracle Object MD5 R/W No No R/W RU
- Storage
-
- pCloud MD5, SHA1 ⁷ R/W No No W -
-
- PikPak MD5 R No No R -
-
- Pixeldrain SHA256 R/W No No R RW
-
- premiumize.me - - Yes No R -
-
- put.io CRC-32 R/W No Yes R -
-
- Proton Drive SHA1 R/W No No R -
-
- QingStor MD5 - ⁹ No No R/W -
-
- Quatrix by - R/W No No - -
- Maytech
-
- Seafile - - No No - -
-
- SFTP MD5, SHA1 ² DR/W Depends No - -
-
- Sia - - No No - -
-
- SMB - R/W Yes No - -
-
- SugarSync - - No No - -
-
- Storj - R No No - -
-
- Uloz.to MD5, SHA256 ¹³ - No Yes - -
-
- Uptobox - - No Yes - -
-
- WebDAV MD5, SHA1 ³ R ⁴ Depends No - -
-
- Yandex Disk MD5 R/W No No R -
-
- Zoho WorkDrive - - No No - -
-
- The local All DR/W Depends No - DRWU
- filesystem
- --------------------------------------------------------------------------------------
-
-¹ Dropbox supports its own custom hash. This is an SHA256 sum of all the
-4 MiB block SHA256s.
-
-² SFTP supports checksums if the same login has shell access and md5sum
-or sha1sum as well as echo are in the remote's PATH.
-
-³ WebDAV supports hashes when used with Fastmail Files, Owncloud and
-Nextcloud only.
-
-⁴ WebDAV supports modtimes when used with Fastmail Files, Owncloud and
-Nextcloud only.
-
-⁵ QuickXorHash is Microsoft's own hash.
-
-⁶ Mail.ru uses its own modified SHA1 hash
-
-⁷ pCloud only supports SHA1 (not MD5) in its EU region
-
-⁸ Opendrive does not support creation of duplicate files using their web
-client interface or other stock clients, but the underlying storage
-platform has been determined to allow duplicate files, and it is
-possible to create them with rclone. It may be that this is a mistake or
-an unsupported feature.
-
-⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
-
-¹⁰ FTP supports modtimes for the major FTP servers, and also others if
-they advertised required protocol extensions. See this for more details.
-
-¹¹ Internet Archive requires option wait_archive to be set to a non-zero
-value for full modtime support.
-
-¹² HiDrive supports its own custom hash. It combines SHA1 sums for each
-4 KiB block hierarchically to a single top-level sum.
-
-¹³ Uloz.to provides server-calculated MD5 hash upon file upload. MD5 and
-SHA256 hashes are client-calculated and stored as metadata fields.
-
Hash
The cloud storage system supports various hash types of the objects. The
@@ -22792,139 +22708,6 @@ Optional Features
All rclone remotes support a base command set. Other features depend
upon backend-specific capabilities.
- -------------------------------------------------------------------------------------------------------------------------------------
- Name Purge Copy Move DirMove CleanUp ListR StreamUpload MultithreadUpload LinkSharing About EmptyDir
- --------------- ------- ------ ------ --------- --------- ------- -------------- ------------------- ------------- ------- ----------
- 1Fichier No Yes Yes No No No No No Yes No Yes
-
- Akamai Yes No No No No Yes Yes No No No Yes
- Netstorage
-
- Amazon S3 (or No Yes No No Yes Yes Yes Yes Yes No No
- S3 compatible)
-
- Backblaze B2 No Yes No No Yes Yes Yes Yes Yes No No
-
- Box Yes Yes Yes Yes Yes No Yes No Yes Yes Yes
-
- Citrix Yes Yes Yes Yes No No No No No No Yes
- ShareFile
-
- Dropbox Yes Yes Yes Yes No No Yes No Yes Yes Yes
-
- Cloudinary No No No No No No Yes No No No No
-
- Enterprise File Yes Yes Yes Yes Yes No No No No No Yes
- Fabric
-
- Files.com Yes Yes Yes Yes No No Yes No Yes No Yes
-
- FTP No No Yes Yes No No Yes No No No Yes
-
- Gofile Yes Yes Yes Yes No No Yes No Yes Yes Yes
-
- Google Cloud Yes Yes No No No No Yes No No No No
- Storage
-
- Google Drive Yes Yes Yes Yes Yes Yes Yes No Yes Yes Yes
-
- Google Photos No No No No No No No No No No No
-
- HDFS Yes No Yes Yes No No Yes No No Yes Yes
-
- HiDrive Yes Yes Yes Yes No No Yes No No No Yes
-
- HTTP No No No No No No No No No No Yes
-
- iCloud Drive Yes Yes Yes Yes No No No No No No Yes
-
- ImageKit Yes No Yes No No No No No No No Yes
-
- Internet No Yes No No Yes Yes No No Yes Yes No
- Archive
-
- Jottacloud Yes Yes Yes Yes Yes Yes No No Yes Yes Yes
-
- Koofr Yes Yes Yes Yes No No Yes No Yes Yes Yes
-
- Mail.ru Cloud Yes Yes Yes Yes Yes No No No Yes Yes Yes
-
- Mega Yes No Yes Yes Yes No No No Yes Yes Yes
-
- Memory No Yes No No No Yes Yes No No No No
-
- Microsoft Azure Yes Yes No No No Yes Yes Yes No No No
- Blob Storage
-
- Microsoft Azure No Yes Yes Yes No No Yes Yes No Yes Yes
- Files Storage
-
- Microsoft Yes Yes Yes Yes Yes Yes ⁵ No No Yes Yes Yes
- OneDrive
-
- OpenDrive Yes Yes Yes Yes No No No No No Yes Yes
-
- OpenStack Swift Yes ¹ Yes No No No Yes Yes No No Yes No
-
- Oracle Object No Yes No No Yes Yes Yes Yes No No No
- Storage
-
- pCloud Yes Yes Yes Yes Yes No No No Yes Yes Yes
-
- PikPak Yes Yes Yes Yes Yes No No No Yes Yes Yes
-
- Pixeldrain Yes No Yes Yes No No Yes No Yes Yes Yes
-
- premiumize.me Yes No Yes Yes No No No No Yes Yes Yes
-
- put.io Yes No Yes Yes Yes No Yes No No Yes Yes
-
- Proton Drive Yes No Yes Yes Yes No No No No Yes Yes
-
- QingStor No Yes No No Yes Yes No No No No No
-
- Quatrix by Yes Yes Yes Yes No No No No No Yes Yes
- Maytech
-
- Seafile Yes Yes Yes Yes Yes Yes Yes No Yes Yes Yes
-
- SFTP No Yes ⁴ Yes Yes No No Yes No No Yes Yes
-
- Sia No No No No No No Yes No No No Yes
-
- SMB No No Yes Yes No No Yes Yes No No Yes
-
- SugarSync Yes Yes Yes Yes No No Yes No Yes No Yes
-
- Storj Yes ² Yes Yes No No Yes Yes No Yes No No
-
- Uloz.to No No Yes Yes No No No No No No Yes
-
- Uptobox No Yes Yes Yes No No No No No No No
-
- WebDAV Yes Yes Yes Yes No No Yes ³ No No Yes Yes
-
- Yandex Disk Yes Yes Yes Yes Yes No Yes No Yes Yes Yes
-
- Zoho WorkDrive Yes Yes Yes Yes No No No No No Yes Yes
-
- The local No No Yes Yes No No Yes Yes No Yes Yes
- filesystem
- -------------------------------------------------------------------------------------------------------------------------------------
-
-¹ Note Swift implements this in order to delete directory markers but it
-doesn't actually have a quicker way of deleting files other than
-deleting them individually.
-
-² Storj implements this efficiently only for entire buckets. If purging
-a directory inside a bucket, files are deleted individually.
-
-³ StreamUpload is not supported with Nextcloud
-
-⁴ Use the --sftp-copy-is-hardlink flag to enable.
-
-⁵ Use the --onedrive-delta flag to enable.
-
Purge
This deletes a directory quicker than just deleting all the files in the
@@ -23007,6 +22790,75 @@ EmptyDir
The remote supports empty directories. See Limitations for details. Most
Object/Bucket-based remotes do not support this.
+Tiers
+
+Rclone backends are divided into tiers to give users an idea of the
+stability of each backend.
+
+ Tier Label Intended meaning
+ ------ -------------- ------------------------------------
+ 1 Core Production-grade, first-class
+ 2 Stable Well-supported, minor gaps
+ 3 Supported Works for many uses; known caveats
+ 4 Experimental Use with care; expect gaps/changes
+ 5 Deprecated No longer maintained or supported
+
+Overview
+
+Here is a summary of all backends:
+
+Scoring
+
+Here is how the backends are scored.
+
+Features
+
+These are useful optional features a backend should have in rough order
+of importance. Each one of these scores a point for the Features column.
+
+- F1: Hash(es)
+- F2: Modtime
+- F3: Stream upload
+- F4: Copy/Move
+- F5: DirMove
+- F6: Metadata
+- F7: MultipartUpload
+
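The one-point-per-feature rule above can be sketched as follows (an
illustration only, not rclone code; the feature names and the
feature_score helper are hypothetical labels for F1-F7):

```python
# Feature labels standing in for F1-F7 from the list above.
FEATURES = ["Hash", "Modtime", "StreamUpload", "CopyMove",
            "DirMove", "Metadata", "MultipartUpload"]

def feature_score(supported: set[str]) -> str:
    # One point per supported feature, reported as "n/7" for the
    # Features column of the tier table.
    points = sum(1 for f in FEATURES if f in supported)
    return f"{points}/7"

print(feature_score({"Hash", "Modtime", "CopyMove", "DirMove", "Metadata"}))  # 5/7
```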
+Tier
+
+The tier is decided after determining these attributes. Some discretion
+is allowed in tiering as some of these attributes are more important
+than others.
+
+ ---------------------------------------------------------------------------------------------------------
+ Attr T1: Core T2: Stable T3: Supported T4: Experimental T5: Incubator
+ -------------- ------------------------------ ------------ -------------- ---------------- --------------
+ Maintainers >=2 >=1 >=1 >=0 >=0
+
+ API source Official Official Either Either Either
+
+ Features >=5/7 >=4/7 >=3/7 >=2/7 N/A
+ (F1-F7)
+
+ Integration All Green All green Nearly all Some Flaky N/A
+ tests green
+
+ Error handling Pacer Pacer Retries Retries N/A
+
+ Data integrity Hashes, alt, modtime Hashes or Hash OR Best-effort N/A
+ alt modtime
+
+ Perf baseline Bench within 2x S3 Bench doc Anecdotal OK Optional N/A
+
+ Adoption widely used often used some use N/A N/A
+
+ Docs Full Full Basic Minimal Minimal
+ completeness
+
+ Security Principle-of-least-privilege Reasonable Basic auth Works Works
+ scopes
+ ---------------------------------------------------------------------------------------------------------
+
Global Flags
This describes the global flags available to every rclone command split
@@ -23110,7 +22962,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
Performance
@@ -23311,6 +23163,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-connection-string string Storage Connection String
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
@@ -23347,7 +23200,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-id string The ID of the client in use
--azurefiles-client-secret string One of the service principal's client secrets
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
- --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-connection-string string Storage Connection String
--azurefiles-description string Description of the remote
--azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
@@ -23359,12 +23212,13 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azurefiles-password string The user's password (obscured)
- --azurefiles-sas-url string SAS URL
+ --azurefiles-sas-url string SAS URL for container level access only
--azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
--azurefiles-use-az Use Azure CLI tool az for authentication
+ --azurefiles-use-emulator Uses local storage emulator if provided as 'true'
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -23466,6 +23320,16 @@ Backend-only flags (these can be set in the config file also).
--doi-doi string The DOI or the doi.org URL
--doi-doi-resolver-api-url string The URL of the DOI resolver API to use
--doi-provider string DOI provider
+ --drime-access-token string API Access token
+ --drime-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --drime-description string Description of the remote
+ --drime-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --drime-hard-delete Delete files permanently rather than putting them into the trash
+ --drime-list-chunk int Number of items to list in each call (default 1000)
+ --drime-root-folder-id string ID of the root folder
+ --drime-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
+ --drime-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --drime-workspace-id string Account ID
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -23486,6 +23350,7 @@ Backend-only flags (these can be set in the config file also).
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-enforce-expansive-access Whether the request should enforce expansive access rules
--drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
--drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
--drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
@@ -23554,6 +23419,17 @@ Backend-only flags (these can be set in the config file also).
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
+ --filen-api-key string API Key for your Filen account (obscured)
+ --filen-auth-version string Authentication Version (internal use only)
+ --filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
+ --filen-description string Description of the remote
+ --filen-email string Email of your Filen account
+ --filen-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filen-master-keys string Master Keys (internal use only)
+ --filen-password string Password of your Filen account (obscured)
+ --filen-private-key string Private RSA Key (internal use only)
+ --filen-public-key string Public RSA Key (internal use only)
+ --filen-upload-concurrency int Concurrency for chunked uploads (default 16)
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -23597,7 +23473,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
- --gcs-endpoint string Endpoint for the service
+ --gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -23686,6 +23562,11 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
+ --internxt-description string Description of the remote
+ --internxt-email string Email of your Internxt account
+ --internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
+ --internxt-pass string Password (obscured)
+ --internxt-skip-hash-validation Skip hash validation when downloading files (default true)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
@@ -23748,6 +23629,7 @@ Backend-only flags (these can be set in the config file also).
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--memory-description string Description of the remote
+ --memory-discard If set all writes will be discarded and reads will return an error
--netstorage-account string Set the NetStorage account name
--netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -23923,6 +23805,10 @@ Backend-only flags (these can be set in the config file also).
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
+ --s3-role-arn string ARN of the IAM role to assume
+ --s3-role-external-id string External ID for assumed role
+ --s3-role-session-duration string Session duration for assumed role
+ --s3-role-session-name string Session name for assumed role
--s3-sdk-log-mode Bits Set to debug the SDK (default Off)
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
@@ -24006,6 +23892,16 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default "$USER")
--sftp-xxh128sum-command string The command used to read XXH128 hashes
--sftp-xxh3sum-command string The command used to read XXH3 hashes
+ --shade-api-key string An API key for your account
+ --shade-chunk-size SizeSuffix Chunk size to use for uploading (default 64Mi)
+ --shade-description string Description of the remote
+ --shade-drive-id string The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive
+ --shade-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --shade-endpoint string Endpoint for the service
+ --shade-max-upload-parts int Maximum amount of parts in a multipart upload (default 10000)
+ --shade-token string JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically
+ --shade-token-expiry string JWT Token Expiration time. Don't set this value - rclone will set it automatically
+ --shade-upload-concurrency int Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies (default 4)
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-credentials Use client credentials OAuth flow
@@ -24097,10 +23993,6 @@ Backend-only flags (these can be set in the config file also).
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
- --uptobox-access-token string Your access token
- --uptobox-description string Description of the remote
- --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
- --uptobox-private Set to make uploaded files private
--webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
@@ -25734,13 +25626,18 @@ The following backends have known issues that need more investigation:
- TestDropbox (dropbox)
- TestBisyncRemoteRemote/normalization
-- Updated: 2025-11-21-010037
+- TestSeafile (seafile)
+ - TestBisyncLocalRemote/volatile
+- TestSeafileV6 (seafile)
+ - TestBisyncLocalRemote/volatile
+- Updated: 2026-01-30-010015
The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:
- TestArchive (archive)
- TestCache (cache)
+- TestDrime (drime)
- TestFileLu (filelu)
- TestFilesCom (filescom)
- TestImageKit (imagekit)
@@ -27180,6 +27077,7 @@ The S3 backend can be used with a number of different providers:
- China Mobile Ecloud Elastic Object Storage (EOS)
- Cloudflare R2
- Arvan Cloud Object Storage (AOS)
+- Bizfly Cloud Simple Storage
- Cubbit DS3
- DigitalOcean Spaces
- Dreamhost
+If none of these options actually end up providing rclone with AWS
credentials then S3 interaction will be non-authenticated (see the
anonymous access section for more info).
+Assume Role (Cross-Account Access)
+
+If you need to access S3 resources in a different AWS account, you can
+use IAM role assumption. This is useful for cross-account access
+scenarios where you have credentials in one account but need to access
+resources in another account.
+
+To use assume role, configure the following parameters:
+
+- role_arn - The ARN (Amazon Resource Name) of the IAM role to assume
+ in the target account. Format:
+ arn:aws:iam::ACCOUNT-ID:role/ROLE-NAME
+- role_session_name (optional) - A name for the assumed role session.
+ If not specified, rclone will generate one automatically.
+- role_session_duration (optional) - Duration for which the assumed
+ role credentials are valid. If not specified, AWS default duration
+ will be used (typically 1 hour).
+- role_external_id (optional) - An external ID required by the role's
+ trust policy for additional security. This is typically used when
+ the role is accessed by a third party.
+
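The role_arn format given above can be sanity-checked before use; here
is a minimal sketch (not part of rclone; the valid_role_arn helper and
the regular expression are assumptions based on the documented
arn:aws:iam::ACCOUNT-ID:role/ROLE-NAME shape):

```python
import re

# Account IDs are 12 digits; role names allow word characters and a few
# punctuation characters, per the documented ARN format.
ROLE_ARN_RE = re.compile(r"^arn:aws:iam::\d{12}:role/[\w+=,.@/-]+$")

def valid_role_arn(arn: str) -> bool:
    return ROLE_ARN_RE.match(arn) is not None

print(valid_role_arn("arn:aws:iam::123456789012:role/CrossAccountS3Role"))  # True
print(valid_role_arn("arn:aws:iam::12345:role/TooShortAccount"))            # False
```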
+The assume role feature works with both direct credentials
+(env_auth = false) and environment-based authentication
+(env_auth = true). Rclone will first authenticate using the base
+credentials, then use those credentials to assume the specified role.
+
+Example configuration for cross-account access:
+
+ [s3-cross-account]
+ type = s3
+ provider = AWS
+ env_auth = true
+ region = us-east-1
+ role_arn = arn:aws:iam::123456789012:role/CrossAccountS3Role
+ role_session_name = rclone-session
+ role_external_id = unique-role-external-id-12345
+
+In this example:
+
+- Base credentials are obtained from the environment (IAM role,
+  credentials file, or environment variables)
+- These credentials are then used to assume the role CrossAccountS3Role
+  in account 123456789012
+- An external ID is provided for additional security as required by the
+  role's trust policy
+
+The target role's trust policy in the destination account must allow the
+source account or user to assume it. Example trust policy:
+
+ {
+ "Version": "2012-10-17",
+ "Statement": [
+ {
+ "Effect": "Allow",
+ "Principal": {
+ "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:root"
+ },
+ "Action": "sts:AssumeRole",
+ "Condition": {
+ "StringEquals": {
+ "sts:ExternalID": "unique-role-external-id-12345"
+ }
+ }
+ }
+ ]
+ }
+
S3 Permissions
When using the sync subcommand of rclone the following minimum
@@ -27966,13 +27928,13 @@ force all the files to be uploaded as multipart.
Standard options
Here are the Standard options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade,
-GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia,
-Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale,
-OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS,
-Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology,
-TencentCOS, Wasabi, Zata, Other).
+Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph,
+ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu,
+FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS,
+Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease,
+Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj,
+Synology, TencentCOS, Wasabi, Zata, Other).
--s3-provider
@@ -27991,6 +27953,8 @@ Properties:
- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "ArvanCloud"
- Arvan Cloud Object Storage (AOS)
+ - "BizflyCloud"
+ - Bizfly Cloud Simple Storage
- "Ceph"
- Ceph Object Storage
- "ChinaMobile"
@@ -28134,7 +28098,7 @@ Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
- Provider:
- AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
+ AWS,BizflyCloud,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -28243,6 +28207,12 @@ Properties:
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
- Provider: AWS
+ - "hn"
+ - Ha Noi
+ - Provider: BizflyCloud
+ - "hcm"
+ - Ho Chi Minh
+ - Provider: BizflyCloud
- ""
- Use this if unsure.
- Will use v4 signatures and an empty region.
@@ -28517,12 +28487,21 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- - "gis-1"
- - Moscow
- - Provider: Servercore
+ - "ru-3"
+ - St. Petersburg
+ - Provider: Selectel
- "ru-7"
- Moscow
- - Provider: Servercore
+ - Provider: Selectel,Servercore
+ - "gis-1"
+ - Moscow
+ - Provider: Selectel,Servercore
+ - "kz-1"
+ - Kazakhstan
+ - Provider: Selectel
+ - "uz-2"
+ - Uzbekistan
+ - Provider: Selectel
- "uz-2"
- Tashkent, Uzbekistan
- Provider: Servercore
@@ -28559,7 +28538,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -28645,6 +28624,12 @@ Properties:
- "s3.ir-tbz-sh1.arvanstorage.ir"
- Tabriz Iran (Shahriar)
- Provider: ArvanCloud
+ - "hn.ss.bfcplatform.vn"
+ - Hanoi endpoint
+ - Provider: BizflyCloud
+ - "hcm.ss.bfcplatform.vn"
+ - Ho Chi Minh endpoint
+ - Provider: BizflyCloud
- "eos-wuxi-1.cmecloud.cn"
- The default endpoint - a good choice if you are unsure.
- East China (Suzhou)
@@ -29051,67 +29036,67 @@ Properties:
- Iran
- Provider: Liara
- "nl-ams-1.linodeobjects.com"
- - Amsterdam (Netherlands), nl-ams-1
+ - Amsterdam, NL (nl-ams-1)
- Provider: Linode
- "us-southeast-1.linodeobjects.com"
- - Atlanta, GA (USA), us-southeast-1
+ - Atlanta, GA, US (us-southeast-1)
- Provider: Linode
- "in-maa-1.linodeobjects.com"
- - Chennai (India), in-maa-1
+ - Chennai, IN (in-maa-1)
- Provider: Linode
- "us-ord-1.linodeobjects.com"
- - Chicago, IL (USA), us-ord-1
+ - Chicago, IL, US (us-ord-1)
- Provider: Linode
- "eu-central-1.linodeobjects.com"
- - Frankfurt (Germany), eu-central-1
+ - Frankfurt, DE (eu-central-1)
- Provider: Linode
- "id-cgk-1.linodeobjects.com"
- - Jakarta (Indonesia), id-cgk-1
+ - Jakarta, ID (id-cgk-1)
- Provider: Linode
- "gb-lon-1.linodeobjects.com"
- - London 2 (Great Britain), gb-lon-1
+ - London 2, UK (gb-lon-1)
- Provider: Linode
- "us-lax-1.linodeobjects.com"
- - Los Angeles, CA (USA), us-lax-1
+ - Los Angeles, CA, US (us-lax-1)
- Provider: Linode
- "es-mad-1.linodeobjects.com"
- - Madrid (Spain), es-mad-1
- - Provider: Linode
- - "au-mel-1.linodeobjects.com"
- - Melbourne (Australia), au-mel-1
+ - Madrid, ES (es-mad-1)
- Provider: Linode
- "us-mia-1.linodeobjects.com"
- - Miami, FL (USA), us-mia-1
+ - Miami, FL, US (us-mia-1)
- Provider: Linode
- "it-mil-1.linodeobjects.com"
- - Milan (Italy), it-mil-1
+ - Milan, IT (it-mil-1)
- Provider: Linode
- "us-east-1.linodeobjects.com"
- - Newark, NJ (USA), us-east-1
+ - Newark, NJ, US (us-east-1)
- Provider: Linode
- "jp-osa-1.linodeobjects.com"
- - Osaka (Japan), jp-osa-1
+ - Osaka, JP (jp-osa-1)
- Provider: Linode
- "fr-par-1.linodeobjects.com"
- - Paris (France), fr-par-1
+ - Paris, FR (fr-par-1)
- Provider: Linode
- "br-gru-1.linodeobjects.com"
- - São Paulo (Brazil), br-gru-1
+ - Sao Paulo, BR (br-gru-1)
- Provider: Linode
- "us-sea-1.linodeobjects.com"
- - Seattle, WA (USA), us-sea-1
+ - Seattle, WA, US (us-sea-1)
- Provider: Linode
- "ap-south-1.linodeobjects.com"
- - Singapore, ap-south-1
+ - Singapore, SG (ap-south-1)
- Provider: Linode
- "sg-sin-1.linodeobjects.com"
- - Singapore 2, sg-sin-1
+ - Singapore 2, SG (sg-sin-1)
- Provider: Linode
- "se-sto-1.linodeobjects.com"
- - Stockholm (Sweden), se-sto-1
+ - Stockholm, SE (se-sto-1)
- Provider: Linode
- - "us-iad-1.linodeobjects.com"
- - Washington, DC, (USA), us-iad-1
+ - "jp-tyo-1.linodeobjects.com"
+ - Tokyo 3, JP (jp-tyo-1)
+ - Provider: Linode
+ - "us-iad-10.linodeobjects.com"
+ - Washington, DC, US (us-iad-10)
- Provider: Linode
- "s3.us-west-1.{account_name}.lyve.seagate.com"
- US West 1 - California
@@ -29315,13 +29300,25 @@ Properties:
- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- - Saint Petersburg
+ - St. Petersburg
+ - Provider: Selectel
+ - "s3.ru-3.storage.selcloud.ru"
+ - St. Petersburg
+ - Provider: Selectel
+ - "s3.ru-7.storage.selcloud.ru"
+ - Moscow
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- - Provider: Servercore
- - "s3.ru-7.storage.selcloud.ru"
- - Moscow
+ - Provider: Selectel,Servercore
+ - "s3.kz-1.storage.selcloud.ru"
+ - Kazakhstan
+ - Provider: Selectel
+ - "s3.uz-2.storage.selcloud.ru"
+ - Uzbekistan
+ - Provider: Selectel
+ - "s3.ru-1.storage.selcloud.ru"
+ - Saint Petersburg
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan
@@ -29850,7 +29847,7 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -29858,37 +29855,37 @@ Properties:
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
- "public-read"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "public-read-write"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "authenticated-read"
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-read"
- Object owner gets FULL_CONTROL.
- Bucket owner gets READ access.
- If you specify this canned ACL when creating a bucket,
Amazon S3 ignores it.
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-full-control"
- Both the object owner and the bucket owner get FULL_CONTROL
over the object.
- If you specify this canned ACL when creating a bucket,
Amazon S3 ignores it.
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
@@ -30063,13 +30060,13 @@ Properties:
Advanced options
Here are the Advanced options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade,
-GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia,
-Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale,
-OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS,
-Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology,
-TencentCOS, Wasabi, Zata, Other).
+Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph,
+ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu,
+FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS,
+Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease,
+Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj,
+Synology, TencentCOS, Wasabi, Zata, Other).
--s3-bucket-acl
@@ -30089,7 +30086,7 @@ Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
- Provider:
- AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
+ AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -30343,6 +30340,58 @@ Properties:
- Type: string
- Required: false
+--s3-role-arn
+
+ARN of the IAM role to assume.
+
+Leave blank if not using assume role.
+
+Properties:
+
+- Config: role_arn
+- Env Var: RCLONE_S3_ROLE_ARN
+- Type: string
+- Required: false
+
+--s3-role-session-name
+
+Session name for assumed role.
+
+If empty, a session name will be generated automatically.
+
+Properties:
+
+- Config: role_session_name
+- Env Var: RCLONE_S3_ROLE_SESSION_NAME
+- Type: string
+- Required: false
+
+--s3-role-session-duration
+
+Session duration for assumed role.
+
+If empty, the default session duration will be used.
+
+Properties:
+
+- Config: role_session_duration
+- Env Var: RCLONE_S3_ROLE_SESSION_DURATION
+- Type: string
+- Required: false
+
+--s3-role-external-id
+
+External ID for assumed role.
+
+Leave blank if not using an external ID.
+
+Properties:
+
+- Config: role_external_id
+- Env Var: RCLONE_S3_ROLE_EXTERNAL_ID
+- Type: string
+- Required: false
+
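+As a sketch, the four role options above might be combined in a config
+file entry like this (the ARN, session name, external ID, and duration
+values are purely illustrative):
+
+    [s3-assumed]
+    type = s3
+    provider = AWS
+    env_auth = true
+    role_arn = arn:aws:iam::123456789012:role/example-role
+    role_session_name = rclone-session
+    role_session_duration = 1h
+    role_external_id = example-external-id
+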
--s3-upload-concurrency
Concurrency for multipart uploads and copies.
@@ -31584,6 +31633,34 @@ This will leave the config file looking like this.
server_side_encryption =
storage_class =
+BizflyCloud
+
+Bizfly Cloud Simple Storage is an S3-compatible service with regions in
+Hanoi (HN) and Ho Chi Minh City (HCM).
+
+Use the endpoint for your region:
+
+- HN: hn.ss.bfcplatform.vn
+- HCM: hcm.ss.bfcplatform.vn
+
+A minimal configuration looks like this.
+
+ [bizfly]
+ type = s3
+ provider = BizflyCloud
+ env_auth = false
+ access_key_id = YOUR_ACCESS_KEY
+ secret_access_key = YOUR_SECRET_KEY
+ region = HN
+ endpoint = hn.ss.bfcplatform.vn
+ location_constraint =
+ acl =
+ server_side_encryption =
+ storage_class =
+
+Switch region and endpoint to HCM and hcm.ss.bfcplatform.vn for Ho Chi
+Minh City.
+
Ceph
Ceph is an open-source, unified, distributed storage system designed for
@@ -36551,7 +36628,7 @@ different scenarios.
All copy commands send the following 4 requests:
- /b2api/v1/b2_authorize_account
+ /b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
@@ -37658,6 +37735,9 @@ Cache
The cache remote wraps another existing remote and stores file structure
and its data for long running tasks like rclone mount.
+It is deprecated, so it is not recommended for new installations, and
+may be removed at some point.
+
Status
The cache backend code is working but it currently doesn't have a
@@ -40727,8 +40807,8 @@ Properties:
The URL of the DOI resolver API to use.
-The DOI resolver can be set for testing or for cases when the the
-canonical DOI resolver API cannot be used.
+The DOI resolver can be set for testing or for cases when the canonical
+DOI resolver API cannot be used.
Defaults to "https://doi.org/api".
@@ -40803,6 +40883,299 @@ will default to those currently in use.
It doesn't return anything.
+Drime
+
+Drime is a cloud storage and transfer service focused on fast, resilient
+file delivery. It offers both free and paid tiers with emphasis on
+high-speed uploads and link sharing.
+
+To setup Drime you need to log in, navigate to Settings, Developer, and
+create a token to use as an API access key. Give it a sensible name and
+copy the token for use in the config.
+
+Configuration
+
+Here is a run through of rclone config to make a remote called remote.
+
+Firstly run:
+
+ rclone config
+
+Then follow through the interactive setup:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ Enter name for new remote.
+ name> remote
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ XX / Drime
+ \ (drime)
+ Storage> drime
+
+ Option access_token.
+ API Access token
+ You can get this from the web control panel.
+ Enter a value. Press Enter to leave empty.
+ access_token> YOUR_API_ACCESS_TOKEN
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: drime
+ - access_token: YOUR_API_ACCESS_TOKEN
+ Keep this "remote" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+Once configured you can then use rclone like this (replace remote with
+the name you gave your remote):
+
+List directories and files in the top level of your Drime
+
+ rclone lsf remote:
+
+To copy a local directory to a Drime directory called backup
+
+ rclone copy /home/source remote:backup
+
+Modification times and hashes
+
+Drime does not support modification times or hashes.
+
+This means that by default syncs will only use the size of the file to
+determine if it needs updating.
+
+You can use the --update flag which will use the time the object was
+uploaded. For many operations this is sufficient to determine if it has
+changed. However files created with timestamps in the past will be
+missed by the sync if using --update.
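+For example, a copy of the backup directory from earlier that uses
+upload times for the comparison would look like:
+
+    rclone copy --update /home/source remote:backup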
+
+Restricted filename characters
+
+In addition to the default restricted characters set the following
+characters are also replaced:
+
+ Character Value Replacement
+ ----------- ------- -------------
+ \ 0x5C ＼
+
+File names can also not start or end with the following characters.
+These only get replaced if they are the first or last character in the
+name:
+
+ Character Value Replacement
+ ----------- ------- -------------
+ SP 0x20 ␠
+
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON
+strings.
+
+Root folder ID
+
+You can set the root_folder_id for rclone. This is the directory
+(identified by its Folder ID) that rclone considers to be the root of
+your Drime drive.
+
+Normally you will leave this blank and rclone will determine the correct
+root to use itself and fill in the value in the config file.
+
+However you can set this to restrict rclone to a specific folder
+hierarchy.
+
+In order to do this you will have to find the Folder ID of the directory
+you wish rclone to display.
+
+You can do this with rclone
+
+ $ rclone lsf -Fip --dirs-only remote:
+ d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/
+ f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/
+ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
+
+The ID to use is the part before the ; so you could set
+
+ root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
+
+To restrict rclone to the Files directory.
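+With that set, the config file entry from the initial setup would look
+something like this (token value illustrative):
+
+    [remote]
+    type = drime
+    access_token = YOUR_API_ACCESS_TOKEN
+    root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0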
+
+Standard options
+
+Here are the Standard options specific to drime (Drime).
+
+--drime-access-token
+
+API Access token
+
+You can get this from the web control panel.
+
+Properties:
+
+- Config: access_token
+- Env Var: RCLONE_DRIME_ACCESS_TOKEN
+- Type: string
+- Required: false
+
+Advanced options
+
+Here are the Advanced options specific to drime (Drime).
+
+--drime-root-folder-id
+
+ID of the root folder
+
+Leave this blank normally, rclone will fill it in automatically.
+
+If you want rclone to be restricted to a particular folder you can fill
+it in - see the docs for more info.
+
+Properties:
+
+- Config: root_folder_id
+- Env Var: RCLONE_DRIME_ROOT_FOLDER_ID
+- Type: string
+- Required: false
+
+--drime-workspace-id
+
+Account ID
+
+Leave this blank normally unless you wish to specify a Workspace ID.
+
+Properties:
+
+- Config: workspace_id
+- Env Var: RCLONE_DRIME_WORKSPACE_ID
+- Type: string
+- Required: false
+
+--drime-list-chunk
+
+Number of items to list in each call
+
+Properties:
+
+- Config: list_chunk
+- Env Var: RCLONE_DRIME_LIST_CHUNK
+- Type: int
+- Default: 1000
+
+--drime-hard-delete
+
+Delete files permanently rather than putting them into the trash.
+
+Properties:
+
+- Config: hard_delete
+- Env Var: RCLONE_DRIME_HARD_DELETE
+- Type: bool
+- Default: false
+
+--drime-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size. The
+minimum is 0 and the maximum is 5 GiB.
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_DRIME_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200Mi
+
+--drime-chunk-size
+
+Chunk size to use for uploading.
+
+When uploading files larger than upload_cutoff or files with unknown
+size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
+photos or google docs) they will be uploaded as multipart uploads using
+this chunk size.
+
+Note that "--drime-upload-concurrency" chunks of this size are buffered
+in memory per transfer.
+
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+
+Rclone will automatically increase the chunk size when uploading a large
+file of known size to stay below the 10,000 chunks limit.
+
+Files of unknown size are uploaded with the configured chunk_size. Since
+the default chunk size is 5 MiB and there can be at most 10,000 chunks,
+this means that by default the maximum size of a file you can stream
+upload is 48 GiB. If you wish to stream upload larger files then you
+will need to increase chunk_size.
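+For example, to stream upload a file of unknown size that may be up to
+roughly 150 GiB, a 16 MiB chunk size would suffice (16 MiB x 10,000 is
+about 156 GiB):
+
+    rclone rcat --drime-chunk-size 16Mi remote:path/to/big.iso < big.iso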
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_DRIME_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5Mi
+
+--drime-upload-concurrency
+
+Concurrency for multipart uploads and copies.
+
+This is the number of chunks of the same file that are uploaded
+concurrently for multipart uploads and copies.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_DRIME_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+--drime-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_DRIME_ENCODING
+- Type: Encoding
+- Default:
+ Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
+
+--drime-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_DRIME_DESCRIPTION
+- Type: string
+- Required: false
+
+Limitations
+
+Drime only supports filenames up to 255 bytes in length, where filenames
+are encoded in UTF8.
+
Dropbox
Paths are specified as remote:path
@@ -41720,6 +42093,9 @@ integration with rclone, FileLu makes managing files in the cloud easy.
Its cross-platform file backup services let you upload and back up files
from any internet-connected device.
+Note FileLu now has a fully featured S3 backend FileLu S5, an industry
+standard S3 compatible object store.
+
Configuration
Here is an example of how to make a remote called filelu. First, run:
@@ -41919,6 +42295,244 @@ troubleshooting and updates.
For further information, visit FileLu's website.
+Filen
+
+Configuration
+
+The initial setup for Filen requires that you get an API key for your
+account; currently this is only possible using the Filen CLI. This means
+you must first download the CLI, log in, and then run the export-api-key
+command.
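+Assuming the CLI has been installed as filen, exporting the key looks
+like this:
+
+    filen export-api-key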
+
+Here is an example of how to make a remote called FilenRemote. First
+run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+
+ name> FilenRemote
+ Option Storage.
+
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [snip]
+ XX / Filen
+ \ "filen"
+ [snip]
+ Storage> filen
+
+ Option Email.
+ The email of your Filen account
+ Enter a value.
+ Email> youremail@provider.com
+
+ Option Password.
+ The password of your Filen account
+ Choose an alternative below.
+ y) Yes, type in my own password
+ g) Generate random password
+ y/g> y
+ Enter the password:
+ password:
+ Confirm the password:
+ password:
+
+ Option API Key.
+ An API Key for your Filen account
+ Get this using the Filen CLI export-api-key command
+ You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
+ Choose an alternative below.
+ y) Yes, type in my own password
+ g) Generate random password
+ y/g> y
+ Enter the password:
+ password:
+ Confirm the password:
+ password:
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: filen
+ - Email: youremail@provider.com
+ - Password: *** ENCRYPTED ***
+ - API Key: *** ENCRYPTED ***
+ Keep this "FilenRemote" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+Modification times and hashes
+
+Modification times are fully supported for files; for directories, only
+the creation time matters.
+
+Filen supports Blake3 hashes.
+
+Restricted filename characters
+
+Invalid UTF-8 bytes will be replaced.
+
+Standard options
+
+Here are the Standard options specific to filen (Filen).
+
+--filen-email
+
+Email of your Filen account
+
+Properties:
+
+- Config: email
+- Env Var: RCLONE_FILEN_EMAIL
+- Type: string
+- Required: true
+
+--filen-password
+
+Password of your Filen account
+
+NB Input to this must be obscured - see rclone obscure.
+
+Properties:
+
+- Config: password
+- Env Var: RCLONE_FILEN_PASSWORD
+- Type: string
+- Required: true
+
+--filen-api-key
+
+API Key for your Filen account
+
+Get this using the Filen CLI export-api-key command. You can download
+the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
+
+NB Input to this must be obscured - see rclone obscure.
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_FILEN_API_KEY
+- Type: string
+- Required: true
+
+Advanced options
+
+Here are the Advanced options specific to filen (Filen).
+
+--filen-upload-concurrency
+
+Concurrency for chunked uploads.
+
+This is the upper limit for how many transfers for the same file are
+running concurrently. Setting this to a value smaller than 1 will cause
+uploads to deadlock.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
+--filen-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_FILEN_ENCODING
+- Type: Encoding
+- Default: Slash,Del,Ctl,InvalidUtf8,Dot
+
+--filen-master-keys
+
+Master Keys (internal use only)
+
+Properties:
+
+- Config: master_keys
+- Env Var: RCLONE_FILEN_MASTER_KEYS
+- Type: string
+- Required: false
+
+--filen-private-key
+
+Private RSA Key (internal use only)
+
+Properties:
+
+- Config: private_key
+- Env Var: RCLONE_FILEN_PRIVATE_KEY
+- Type: string
+- Required: false
+
+--filen-public-key
+
+Public RSA Key (internal use only)
+
+Properties:
+
+- Config: public_key
+- Env Var: RCLONE_FILEN_PUBLIC_KEY
+- Type: string
+- Required: false
+
+--filen-auth-version
+
+Authentication Version (internal use only)
+
+Properties:
+
+- Config: auth_version
+- Env Var: RCLONE_FILEN_AUTH_VERSION
+- Type: string
+- Required: false
+
+--filen-base-folder-uuid
+
+UUID of Account Root Directory (internal use only)
+
+Properties:
+
+- Config: base_folder_uuid
+- Env Var: RCLONE_FILEN_BASE_FOLDER_UUID
+- Type: string
+- Required: false
+
+--filen-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_FILEN_DESCRIPTION
+- Type: string
+- Required: false
+
Files.com
Files.com is a cloud storage service that provides a secure and easy way
@@ -42573,6 +43187,13 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
verb.
+Supports the format http://user:pass@host:port, http://host:port,
+http://host.
+
+Example:
+
+ http://myUser:myPass@proxyhostname.example.com:8000
+
Properties:
- Config: http_proxy
@@ -43696,9 +44317,15 @@ Properties:
--gcs-endpoint
-Endpoint for the service.
+Custom endpoint for the storage API. Leave blank to use the provider
+default.
-Leave blank normally.
+When using a custom endpoint that includes a subpath (e.g.
+example.org/custom/endpoint), the subpath will be ignored during upload
+operations due to a limitation in the underlying Google API Go client
+library. Download and listing operations will work correctly with the
+full endpoint path. If you require subpath support for uploads, avoid
+using subpaths in your custom endpoint configuration.
Properties:
@@ -43706,6 +44333,14 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
+- Examples:
+ - "storage.example.org"
+ - Specify a custom endpoint
+ - "storage.example.org:4443"
+ - Specifying a custom endpoint with port
+ - "storage.example.org:4443/gcs/api"
+ - Specifying a subpath, see the note, uploads won't use the
+ custom path!
--gcs-encoding
@@ -43989,7 +44624,8 @@ key" button.
- In the next field, "OAuth Scopes", enter
https://www.googleapis.com/auth/drive to grant read/write access to
Google Drive specifically. You can also use
- https://www.googleapis.com/auth/drive.readonly for read only access.
+ https://www.googleapis.com/auth/drive.readonly for read only access
+ with --drive-scope=drive.readonly.
- Click "Authorise"
3. Configure rclone, assuming a new install
@@ -45166,6 +45802,23 @@ Properties:
- "read,write"
- Read and Write the value.
+--drive-metadata-enforce-expansive-access
+
+Whether the request should enforce expansive access rules.
+
+From Feb 2026 this flag will be set by default so this flag can be used
+for testing before then.
+
+See:
+https://developers.google.com/workspace/drive/api/guides/limited-expansive-access
+
+Properties:
+
+- Config: metadata_enforce_expansive_access
+- Env Var: RCLONE_DRIVE_METADATA_ENFORCE_EXPANSIVE_ACCESS
+- Type: bool
+- Default: false
+
--drive-encoding
The encoding for the backend.
@@ -46378,8 +47031,14 @@ that each client_id can do set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
-Please follow the steps in the google drive docs. You will need these
-scopes instead of the drive ones detailed:
+Please follow the steps in the google drive docs with the following
+differences:
+
+- At step 3, instead of enabling the "Google Drive API", search for
+ and enable the "Photos Library API".
+
+- At step 5, you will need to add different scopes. Use these scopes
+ instead of the drive ones:
https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
@@ -48595,6 +49254,189 @@ backend.
See the metadata docs for more info.
+Internxt Drive
+
+Internxt Drive is a zero-knowledge encrypted cloud storage service.
+
+Paths are specified as remote:path
+
+Paths may be as deep as required, e.g. remote:directory/subdirectory.
+
+Limitations
+
+Note: The Internxt backend may not work with all account types. Please
+refer to Internxt plan details or contact Internxt support to verify
+rclone compatibility with your subscription.
+
+Configuration
+
+Here is an example of how to make a remote called internxt. Run
+rclone config and follow the prompts:
+
+ No remotes found, make a new one?
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> internxt
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ [snip]
+ XX / Internxt Drive
+ \ "internxt"
+ [snip]
+ Storage> internxt
+
+ Option email.
+ Email of your Internxt account.
+ Enter a value.
+ email> user@example.com
+
+ Option pass.
+ Password.
+ Enter a value.
+ password>
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: internxt
+ - email: user@example.com
+ - pass: *** ENCRYPTED ***
+ Keep this "internxt" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+If you have two-factor authentication enabled on your Internxt account,
+you will be prompted to enter the code during login.
+
+Security Considerations
+
+The authentication process stores your password and mnemonic in the
+rclone configuration file. It is strongly recommended to encrypt your
+rclone config to protect these sensitive credentials:
+
+ rclone config password
+
+This will prompt you to set a password that encrypts your entire
+configuration file.
+
+Usage Examples
+
+ # List files
+ rclone ls internxt:
+
+ # Copy files to Internxt
+ rclone copy /local/path internxt:remote/path
+
+ # Sync local directory to Internxt
+ rclone sync /local/path internxt:remote/path
+
+ # Mount Internxt Drive as a local filesystem
+ rclone mount internxt: /path/to/mountpoint
+
+ # Check storage usage
+ rclone about internxt:
+
+Modification times and hashes
+
+The Internxt backend does not support hashes.
+
+Modification times are read from the server but cannot be set. The
+backend reports ModTimeNotSupported precision, so modification times
+will not be used for sync comparisons.
+
+Restricted filename characters
+
+The Internxt backend replaces the default restricted characters set.
+
+Standard options
+
+Here are the Standard options specific to internxt (Internxt Drive).
+
+--internxt-email
+
+Email of your Internxt account.
+
+Properties:
+
+- Config: email
+- Env Var: RCLONE_INTERNXT_EMAIL
+- Type: string
+- Required: true
+
+--internxt-pass
+
+Password.
+
+NB Input to this must be obscured - see rclone obscure.
+
+Properties:
+
+- Config: pass
+- Env Var: RCLONE_INTERNXT_PASS
+- Type: string
+- Required: true
+
+Advanced options
+
+Here are the Advanced options specific to internxt (Internxt Drive).
+
+--internxt-mnemonic
+
+Mnemonic (internal use only)
+
+Properties:
+
+- Config: mnemonic
+- Env Var: RCLONE_INTERNXT_MNEMONIC
+- Type: string
+- Required: false
+
+--internxt-skip-hash-validation
+
+Skip hash validation when downloading files.
+
+By default, hash validation is disabled. Set this to false to enable
+validation.
+
+Properties:
+
+- Config: skip_hash_validation
+- Env Var: RCLONE_INTERNXT_SKIP_HASH_VALIDATION
+- Type: bool
+- Default: true
+
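+For example, to enable hash validation on downloads (the remote name
+internxt: and the paths are illustrative), you could run:
+
+    rclone copy --internxt-skip-hash-validation=false internxt:docs /local/docs
+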
+--internxt-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_INTERNXT_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot
+
+--internxt-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_INTERNXT_DESCRIPTION
+- Type: string
+- Required: false
+
Jottacloud
Jottacloud is a cloud storage service provider from a Norwegian company,
@@ -50521,6 +51363,28 @@ Advanced options
Here are the Advanced options specific to memory (In memory object
storage system.).
+--memory-discard
+
+If set all writes will be discarded and reads will return an error
+
+If set then when files are uploaded the contents will not be saved. The
+files will appear to have been uploaded but will give an error on read.
+Files will have their MD5 sum calculated on upload which takes very
+little CPU time and allows the transfers to be checked.
+will have their MD5 sum calculated on upload which takes very little CPU
+time and allows the transfers to be checked.
+
+This can be useful for testing performance.
+
+This is probably most easily used via the connection string syntax:
+
+ :memory,discard:bucket
+
+Properties:
+
+- Config: discard
+- Env Var: RCLONE_MEMORY_DISCARD
+- Type: bool
+- Default: false
+
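+For example, to benchmark download speed from a remote without storing
+anything (remote:path is a placeholder for a real remote), something
+like this could be used:
+
+    rclone copy -P remote:path :memory,discard:bench
+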
--memory-description
Description of the remote.
@@ -50957,6 +51821,31 @@ MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5 hashes,
e.g. the local disk.
+Metadata and tags
+
+Rclone can map arbitrary metadata to Azure Blob headers, user metadata,
+and tags when --metadata is enabled (or when using --metadata-set /
+--metadata-mapper).
+
+- Headers: Set these keys in metadata to map to the corresponding blob
+ headers:
+ - cache-control, content-disposition, content-encoding,
+ content-language, content-type.
+- User metadata: Any other non-reserved keys are written as user
+ metadata (keys are normalized to lowercase). Keys starting with
+ x-ms- are reserved and are not stored as user metadata.
+- Tags: Provide x-ms-tags as a comma-separated list of key=value
+ pairs, e.g. x-ms-tags=env=dev,team=sync. These are applied as blob
+ tags on upload and on server-side copies. Whitespace around
+ keys/values is ignored.
+- Modtime override: Provide mtime in RFC3339/RFC3339Nano format to
+ override the stored modtime persisted in user metadata. If mtime
+ cannot be parsed, rclone logs a debug message and ignores the
+ override.
+
+Note: rclone ignores reserved x-ms-* keys (except x-ms-tags) for user
+metadata.
+
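+As a sketch, tags and headers could be set on upload like this (the
+remote name azureblob: and the tag values are illustrative):
+
+    rclone copy --metadata \
+        --metadata-set "x-ms-tags=env=dev,team=sync" \
+        --metadata-set "cache-control=no-cache" \
+        /local/path azureblob:container/path
+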
Performance
When uploading large files, increasing the value of
@@ -51275,6 +52164,20 @@ Properties:
- Type: string
- Required: false
+--azureblob-connection-string
+
+Storage Connection String.
+
+Connection string for the storage. Leave blank if using other auth
+methods.
+
+Properties:
+
+- Config: connection_string
+- Env Var: RCLONE_AZUREBLOB_CONNECTION_STRING
+- Type: string
+- Required: false
+
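+As an illustration, the connection string can also be supplied via the
+environment variable together with an on-the-fly remote (the account
+name is a placeholder and the key is elided):
+
+    RCLONE_AZUREBLOB_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=myacct;AccountKey=...;EndpointSuffix=core.windows.net" rclone lsd :azureblob:
+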
--azureblob-tenant
ID of the service principal's tenant. Also called its directory ID.
@@ -51865,6 +52768,38 @@ Properties:
- Type: string
- Required: false
+Metadata
+
+User metadata is stored as x-ms-meta- keys. Azure metadata keys are case
+insensitive and are always returned in lower case.
+
+Here are the possible system metadata items for the azureblob backend.
+
+ ------------------------------------------------------------------------------------------------------------------
+ Name Help Type Example Read Only
+ --------------------- --------------------- ----------- ------------------------------------- --------------------
+ cache-control Cache-Control header string no-cache N
+
+ content-disposition Content-Disposition string inline N
+ header
+
+ content-encoding Content-Encoding string gzip N
+ header
+
+ content-language Content-Language string en-US N
+ header
+
+ content-type Content-Type header string text/plain N
+
+ mtime Time of last RFC 3339 2006-01-02T15:04:05.999999999Z07:00 N
+ modification, read
+ from rclone metadata
+
+ tier Tier of the object string Hot Y
+ ------------------------------------------------------------------------------------------------------------------
+
+See the metadata docs for more info.
+
Custom upload headers
You can set custom upload headers with the --header-upload flag.
@@ -52254,8 +53189,7 @@ Azure Storage Account Name.
Set this to the Azure Storage Account Name in use.
-Leave blank to use SAS URL or connection string, otherwise it needs to
-be set.
+Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
If this is blank and if env_auth is set it will be read from the
environment variable AZURE_STORAGE_ACCOUNT_NAME if possible.
@@ -52267,19 +53201,6 @@ Properties:
- Type: string
- Required: false
---azurefiles-share-name
-
-Azure Files Share Name.
-
-This is required and is the name of the share to access.
-
-Properties:
-
-- Config: share_name
-- Env Var: RCLONE_AZUREFILES_SHARE_NAME
-- Type: string
-- Required: false
-
--azurefiles-env-auth
Read credentials from runtime (environment variables, CLI or MSI).
@@ -52297,7 +53218,7 @@ Properties:
Storage Account Shared Key.
-Leave blank to use SAS URL or connection string.
+Leave blank to use SAS URL or Emulator.
Properties:
@@ -52308,9 +53229,9 @@ Properties:
--azurefiles-sas-url
-SAS URL.
+SAS URL for container level access only.
-Leave blank if using account/key or connection string.
+Leave blank if using account/key or Emulator.
Properties:
@@ -52321,7 +53242,10 @@ Properties:
--azurefiles-connection-string
-Azure Files Connection String.
+Storage Connection String.
+
+Connection string for the storage. Leave blank if using other auth
+methods.
Properties:
@@ -52401,6 +53325,19 @@ Properties:
- Type: string
- Required: false
+--azurefiles-share-name
+
+Azure Files Share Name.
+
+This is required and is the name of the share to access.
+
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
Advanced options
Here are the Advanced options specific to azurefiles (Microsoft Azure
@@ -52459,15 +53396,12 @@ Leave blank normally. Needed only if you want to use a service principal
instead of interactive login.
$ az ad sp create-for-rbac --name "" \
- --role "Storage Files Data Owner" \
+ --role "Storage Blob Data Owner" \
--scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \
> azure-principal.json
See "Create an Azure service principal" and "Assign an Azure role for
-access to files data" pages for more details.
-
-NB this section needs updating for Azure Files - pull requests
-appreciated!
+access to blob data" pages for more details.
It may be more convenient to put the credentials directly into the
rclone config file under the client_id, tenant and client_secret keys
@@ -52480,6 +53414,26 @@ Properties:
- Type: string
- Required: false
+--azurefiles-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+
+It determines whether rclone requests Microsoft Entra instance metadata
+from https://login.microsoft.com/ before authenticating.
+
+Setting this to true will skip this request, making you responsible for
+ensuring the configured authority is valid and trustworthy.
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
--azurefiles-use-msi
Use a managed service identity to authenticate (only works in Azure).
@@ -52541,29 +53495,29 @@ Properties:
- Type: string
- Required: false
---azurefiles-disable-instance-discovery
+--azurefiles-use-emulator
-Skip requesting Microsoft Entra instance metadata This should be set
-true only by applications authenticating in disconnected clouds, or
-private clouds such as Azure Stack. It determines whether rclone
-requests Microsoft Entra instance metadata from
-https://login.microsoft.com/ before authenticating. Setting this to true
-will skip this request, making you responsible for ensuring the
-configured authority is valid and trustworthy.
+Uses local storage emulator if provided as 'true'.
+
+Leave blank if using real azure storage endpoint.
Properties:
-- Config: disable_instance_discovery
-- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Config: use_emulator
+- Env Var: RCLONE_AZUREFILES_USE_EMULATOR
- Type: bool
- Default: false
--azurefiles-use-az
-Use Azure CLI tool az for authentication Set to use the Azure CLI tool
-az as the sole means of authentication. Setting this can be useful if
-you wish to use the az CLI on a host with a System Managed Identity that
-you do not want to use. Don't set env_auth at the same time.
+Use Azure CLI tool az for authentication
+
+Set to use the Azure CLI tool az as the sole means of authentication.
+
+Setting this can be useful if you wish to use the az CLI on a host with
+a System Managed Identity that you do not want to use.
+
+Don't set env_auth at the same time.
Properties:
@@ -53467,7 +54421,7 @@ This is why this flag is not set as the default.
As a rule of thumb if nearly all of your data is under rclone's root
directory (the root/directory in onedrive:root/directory) then using
-this flag will be be a big performance win. If your data is mostly not
+this flag will be a big performance win. If your data is mostly not
under the root then using this flag will be a big performance loss.
It is recommended if you are mounting your onedrive at the root (or near
@@ -53678,8 +54632,8 @@ Here are the possible system metadata items for the onedrive backend.
item.
description A short description of the file. string Contract for signing N
- Max 1024 characters. Only
- supported for OneDrive Personal.
+ Max 1024 characters. No longer
+ supported by Microsoft.
id The unique identifier of the item string 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K Y
within OneDrive.
@@ -56502,7 +57456,7 @@ Properties:
Above this size files will be chunked.
-Above this size files will be chunked into a a _segments container or a
+Above this size files will be chunked into a _segments container or a
.file-segments directory. (See the use_segments_container option for
more info). Default for this is 5 GiB which is its maximum value, which
means only files above this size will be chunked.
@@ -56829,6 +57783,31 @@ So if the folder you want rclone to use your is "My Music/", then use
the returned id from rclone lsf command (ex. dxxxxxxxx2) as the
root_folder_id variable value in the config file.
+Change notifications and mounts
+
+The pCloud backend supports real-time updates for rclone mounts via
+change notifications. rclone uses pCloud's diff long-polling API to
+detect changes and will automatically refresh directory listings in the
+mounted filesystem when changes occur.
+
+Notes and behavior:
+
+- Works automatically when using rclone mount and requires no
+ additional configuration.
+- Notifications are directory-scoped: when rclone detects a change, it
+  refreshes the affected directory so new/removed/renamed files become
+  visible promptly.
+- Updates are near real-time. The backend uses a long-poll with short
+  fallback polling intervals, so you should see changes appear quickly
+  without manual refreshes.
+
+If you want to debug or verify notifications, you can use the helper
+command:
+
+ rclone test changenotify remote:
+
+This will log incoming change notifications for the given remote.
+
Standard options
Here are the Standard options specific to pcloud (Pcloud).
@@ -60298,6 +61277,13 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
verb.
+Supports the format http://user:pass@host:port, http://host:port,
+http://host.
+
+Example:
+
+ http://myUser:myPass@proxyhostname.example.com:8000
+
Properties:
- Config: http_proxy
@@ -60375,6 +61361,273 @@ Hetzner Storage Boxes are supported through the SFTP backend on port 23.
See Hetzner's documentation for details
+Shade
+
+This is a backend for the Shade platform
+
+About Shade
+
+Shade is an AI-powered cloud NAS that makes your cloud files behave like
+a local drive, optimized for media and creative workflows. It provides
+fast, secure access with natural-language search, easy sharing, and
+scalable cloud storage.
+
+Accounts & Pricing
+
+To use this backend, you need to create a free account on Shade. You can
+start with a free account and get 20GB of storage for free.
+
+Usage
+
+Paths are specified as remote:path
+
+Paths may be as deep as required, e.g. remote:directory/subdirectory.
+
+Configuration
+
+Here is an example of making a Shade configuration.
+
+First, create a free account and choose a plan.
+
+You will need to log in and get the API Key from the settings section
+of your account, and the Drive ID from the settings of the drive you
+created.
+
+Now run
+
+rclone config
+
+Follow this interactive process:
+
+ $ rclone config
+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> n
+
+ Enter name for new remote.
+ name> Shade
+
+ Option Storage.
+ Type of storage to configure.
+ Choose a number from below, or type in your own value.
+ [OTHER OPTIONS]
+ xx / Shade FS
+ \ (shade)
+ [OTHER OPTIONS]
+ Storage> xx
+
+ Option drive_id.
+ The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
+ Enter a value.
+ drive_id> [YOUR_ID]
+
+ Option api_key.
+ An API key for your account.
+ Enter a value.
+ api_key> [YOUR_API_KEY]
+
+ Edit advanced config?
+ y) Yes
+ n) No (default)
+ y/n> n
+
+ Configuration complete.
+ Options:
+ - type: shade
+ - drive_id: [YOUR_ID]
+ - api_key: [YOUR_API_KEY]
+ Keep this "Shade" remote?
+ y) Yes this is OK (default)
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+Modification times and hashes
+
+Shade does not support hashes, and modification times cannot be set.
+
+Transfers
+
+Shade uses multipart uploads by default. This means that files will be
+chunked and sent up to Shade concurrently. To configure how many
+simultaneous uploads are used, set the upload_concurrency option in the
+advanced config section. Note that raising it uses more memory and
+initiates more HTTP requests.
+
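+As an illustration, the concurrency and chunk size could be raised for
+a large transfer (the remote name Shade: is from the example config
+above; the values are illustrative):
+
+    rclone copy --shade-upload-concurrency 8 --shade-chunk-size 128M /local/media Shade:projects
+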
+Deleting files
+
+Please note that when deleting files in Shade via rclone it will delete
+the file instantly, instead of sending it to the trash. This means that
+it will not be recoverable.
+
+Standard options
+
+Here are the Standard options specific to shade (Shade FS).
+
+--shade-drive-id
+
+The ID of your drive, see this in the drive settings. Individual rclone
+configs must be made per drive.
+
+Properties:
+
+- Config: drive_id
+- Env Var: RCLONE_SHADE_DRIVE_ID
+- Type: string
+- Required: true
+
+--shade-api-key
+
+An API key for your account.
+
+Properties:
+
+- Config: api_key
+- Env Var: RCLONE_SHADE_API_KEY
+- Type: string
+- Required: true
+
+Advanced options
+
+Here are the Advanced options specific to shade (Shade FS).
+
+--shade-endpoint
+
+Endpoint for the service.
+
+Leave blank normally.
+
+Properties:
+
+- Config: endpoint
+- Env Var: RCLONE_SHADE_ENDPOINT
+- Type: string
+- Required: false
+
+--shade-chunk-size
+
+Chunk size to use for uploading.
+
+Any files larger than this will be uploaded in chunks of this size.
+
+Note that this is stored in memory per transfer, so increasing it will
+increase memory usage.
+
+Minimum is 5MB, maximum is 5GB.
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_SHADE_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 64Mi
+
+--shade-upload-concurrency
+
+Concurrency for multipart uploads and copies. This is the number of
+chunks of the same file that are uploaded concurrently for multipart
+uploads and copies.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_SHADE_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+--shade-max-upload-parts
+
+Maximum amount of parts in a multipart upload.
+
+Properties:
+
+- Config: max_upload_parts
+- Env Var: RCLONE_SHADE_MAX_UPLOAD_PARTS
+- Type: int
+- Default: 10000
+
+--shade-token
+
+JWT Token for performing Shade FS operations. Don't set this value -
+rclone will set it automatically
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_SHADE_TOKEN
+- Type: string
+- Required: false
+
+--shade-token-expiry
+
+JWT Token Expiration time. Don't set this value - rclone will set it
+automatically
+
+Properties:
+
+- Config: token_expiry
+- Env Var: RCLONE_SHADE_TOKEN_EXPIRY
+- Type: string
+- Required: false
+
+--shade-encoding
+
+The encoding for the backend.
+
+See the encoding section in the overview for more info.
+
+Properties:
+
+- Config: encoding
+- Env Var: RCLONE_SHADE_ENCODING
+- Type: Encoding
+- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+
+--shade-description
+
+Description of the remote.
+
+Properties:
+
+- Config: description
+- Env Var: RCLONE_SHADE_DESCRIPTION
+- Type: string
+- Required: false
+
+Limitations
+
+Note that Shade is case insensitive so you can't have a file called
+"Hello.doc" and one called "hello.doc".
+
+Shade only supports filenames up to 255 characters in length.
+
+rclone about is not supported by the Shade backend. Backends without
+this capability cannot determine free space for an rclone mount or use
+policy mfs (most free space) as a member of an rclone union remote.
+
+See List of backends that do not support rclone about and rclone about
+
+Backend commands
+
+Here are the commands specific to the shade backend.
+
+Run them with
+
+ rclone backend COMMAND remote:
+
+The help below will explain what arguments each command takes.
+
+See the backend command for more info on how to pass options and
+arguments.
+
+These can be run on a running backend using the rc command
+backend/command.
+
SMB
SMB is a communication protocol to share files over network.
@@ -61668,169 +62921,6 @@ an rclone union remote.
See List of backends that do not support rclone about and rclone about.
-Uptobox
-
-This is a Backend for Uptobox file storage service. Uptobox is closer to
-a one-click hoster than a traditional cloud storage provider and
-therefore not suitable for long term storage.
-
-Paths are specified as remote:path
-
-Paths may be as deep as required, e.g. remote:directory/subdirectory.
-
-Configuration
-
-To configure an Uptobox backend you'll need your personal api token.
-You'll find it in your account settings.
-
-Here is an example of how to make a remote called remote with the
-default setup. First run:
-
- rclone config
-
-This will guide you through an interactive setup process:
-
- Current remotes:
-
- Name Type
- ==== ====
- TestUptobox uptobox
-
- e) Edit existing remote
- n) New remote
- d) Delete remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- e/n/d/r/c/s/q> n
- name> uptobox
- Type of storage to configure.
- Enter a string value. Press Enter for the default ("").
- Choose a number from below, or type in your own value
- [...]
- 37 / Uptobox
- \ "uptobox"
- [...]
- Storage> uptobox
- ** See help for uptobox backend at: https://rclone.org/uptobox/ **
-
- Your API Key, get it from https://uptobox.com/my_account
- Enter a string value. Press Enter for the default ("").
- api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- Edit advanced config? (y/n)
- y) Yes
- n) No (default)
- y/n> n
- Remote config
- --------------------
- [uptobox]
- type = uptobox
- api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
- --------------------
- y) Yes this is OK (default)
- e) Edit this remote
- d) Delete this remote
- y/e/d>
-
-Once configured you can then use rclone like this (replace remote with
-the name you gave your remote):
-
-List directories in top level of your Uptobox
-
- rclone lsd remote:
-
-List all the files in your Uptobox
-
- rclone ls remote:
-
-To copy a local directory to an Uptobox directory called backup
-
- rclone copy /home/source remote:backup
-
-Modification times and hashes
-
-Uptobox supports neither modified times nor checksums. All timestamps
-will read as that set by --default-time.
-
-Restricted filename characters
-
-In addition to the default restricted characters set the following
-characters are also replaced:
-
- Character Value Replacement
- ----------- ------- -------------
- " 0x22 "
- ` 0x41 `
-
-Invalid UTF-8 bytes will also be replaced, as they can't be used in XML
-strings.
-
-Standard options
-
-Here are the Standard options specific to uptobox (Uptobox).
-
---uptobox-access-token
-
-Your access token.
-
-Get it from https://uptobox.com/my_account.
-
-Properties:
-
-- Config: access_token
-- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
-- Type: string
-- Required: false
-
-Advanced options
-
-Here are the Advanced options specific to uptobox (Uptobox).
-
---uptobox-private
-
-Set to make uploaded files private
-
-Properties:
-
-- Config: private
-- Env Var: RCLONE_UPTOBOX_PRIVATE
-- Type: bool
-- Default: false
-
---uptobox-encoding
-
-The encoding for the backend.
-
-See the encoding section in the overview for more info.
-
-Properties:
-
-- Config: encoding
-- Env Var: RCLONE_UPTOBOX_ENCODING
-- Type: Encoding
-- Default:
- Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
-
---uptobox-description
-
-Description of the remote.
-
-Properties:
-
-- Config: description
-- Env Var: RCLONE_UPTOBOX_DESCRIPTION
-- Type: string
-- Required: false
-
-Limitations
-
-Uptobox will delete inactive files that have not been accessed in 60
-days.
-
-rclone about is not supported by this backend an overview of used space
-can however been seen in the uptobox web interface.
-
Union
The union backend joins several remotes together to make a single
@@ -64010,6 +65100,93 @@ Options:
Changelog
+v1.73.0 - 2026-01-30
+
+See commits
+
+- New backends
+ - Shade (jhasse-shade)
+ - Drime (dougal)
+ - Filen (Enduriel)
+ - Internxt (jzunigax2)
+ - New S3 providers
+ - Bizfly Cloud Simple Storage (vupn0712)
+- New Features
+ - docs: Add Support Tiers to the documentation (Nick Craig-Wood)
+ - rc: Add operations/hashsumfile to sum a single file only (Nick
+ Craig-Wood)
+ - serve webdav: Implement download directory as Zip (Leo)
+- Bug Fixes
+ - fs: fix bwlimit: correct reporting (Mikel Olasagasti Uranga)
+ - log: fix systemd adding extra newline (dougal)
+ - docs: fixes (albertony, darkdragon-001, Duncan Smart, hyusap,
+ Marc-Philip, Nick Craig-Wood, vicerace, vyv03354,
+ yuval-cloudinary, yy)
+ - serve s3: Make errors in --s3-auth-key fatal (Nick Craig-Wood)
+- Mount
+    - Fix OpenBSD mount support (Nick Owens)
+- Azure Blob
+ - Add metadata and tags support across upload and copy paths
+ (Cliff Frey)
+ - Factor the common auth into a library (Nick Craig-Wood)
+- Azurefiles
+ - Factor the common auth into a library (Nick Craig-Wood)
+- B2
+ - Support authentication with new bucket restricted application
+ keys (DianaNites)
+- Drive
+ - Add --drive-metadata-force-expansive-access flag (Nick
+ Craig-Wood)
+    - Fix crash when trying to create a shortcut to a Google doc (Nick
+ Craig-Wood)
+- FTP
+ - Add http proxy authentication support (Nicolas Dessart)
+- Mega
+    - Revert TLS workaround (necaran)
+- Memory
+ - Add --memory-discard flag for speed testing (Nick Craig-Wood)
+- OneDrive
+ - Fix cancelling multipart upload (Nick Craig-Wood)
+ - Fix setting modification time on directories for OneDrive
+ Personal (Nick Craig-Wood)
+ - Fix OneDrive Personal no longer supports description (Nick
+ Craig-Wood)
+ - Fix require sign in for OneDrive Personal (Nick Craig-Wood)
+ - Fix permissions on OneDrive Personal (Nick Craig-Wood)
+- Oracle Object Storage
+ - Eliminate unnecessary heap allocation (Qingwei Li)
+- Pcloud
+ - Add support for ChangeNotify to enable real-time updates in
+ mount (masrlinu)
+- Protondrive
+ - Update to use forks of upstream modules to unblock development
+ (Nick Craig-Wood)
+- S3
+ - Add ability to specify an IAM role for cross-account interaction
+ (Vladislav Tropnikov)
+ - Linode: updated endpoints to use ISO 3166-1 alpha-2 standard
+ (jbagwell-akamai)
+ - Fix Copy ignoring storage class (vupn0712)
+- SFTP
+ - Add http proxy authentication support (Nicolas Dessart)
+ - Eliminate unnecessary heap allocation (Qingwei Li)
+
+v1.72.1 - 2025-12-10
+
+See commits
+
+- Bug Fixes
+ - build: update to go1.25.5 to fix CVE-2025-61729
+ - doc fixes (Duncan Smart, Nick Craig-Wood)
+ - configfile: Fix piped config support (Jonas Tingeborn)
+ - log
+ - Fix PID not included in JSON log output (Tingsong Xu)
+ - Fix backtrace not going to the --log-file (Nick Craig-Wood)
+- Google Cloud Storage
+ - Improve endpoint parameter docs (Johannes Rothe)
+- S3
+ - Add missing regions for Selectel provider (Nick Craig-Wood)
+
v1.72.0 - 2025-11-21
See commits
@@ -72427,6 +73604,42 @@ same Unicode characters are intentionally used in file names, this
replacement strategy leads to unwanted renames. Read more under section
caveats.
+Why does rclone fail to connect over TLS but another client works?
+
+If you see TLS handshake failures (or packet captures show the server
+rejecting all offered ciphers), the server/proxy may only support legacy
+TLS cipher suites (for example RSA key-exchange ciphers such as
+RSA_WITH_AES_256_CBC_SHA256, or old 3DES ciphers). Recent Go versions
+(which rclone is built with) have removed insecure ciphers from the
+default list, so rclone may refuse to negotiate them even if other tools
+still do.
+
+If you can't update/reconfigure the server/proxy to support modern TLS
+(TLS 1.2/1.3) and ECDHE-based cipher suites, you can re-enable legacy
+ciphers via GODEBUG:
+
+- Windows (cmd.exe):
+
+ set GODEBUG=tlsrsakex=1
+ rclone copy ...
+
+- Windows (PowerShell):
+
+ $env:GODEBUG="tlsrsakex=1"
+ rclone copy ...
+
+- Linux/macOS:
+
+ GODEBUG=tlsrsakex=1 rclone copy ...
+
+If the server only supports 3DES, try:
+
+ GODEBUG=tls3des=1 rclone ...
+
+This applies to any rclone feature using TLS (HTTPS, FTPS, WebDAV over
+TLS, proxies with TLS interception, etc.). Use these workarounds only
+long enough to get the server/proxy updated.
+
License
This is free software under the terms of the MIT license (check the
@@ -73506,6 +74719,32 @@ Contributors
- jijamik 30904953+jijamik@users.noreply.github.com
- Dominik Sander git@dsander.de
- Nikolay Kiryanov nikolay@kiryanov.ru
+- Diana 5275194+DianaNites@users.noreply.github.com
+- Duncan Smart duncan.smart@gmail.com
+- vicerace vicerace@sohu.com
+- Cliff Frey cliff@openai.com
+- Vladislav Tropnikov vtr.name@gmail.com
+- Leo i@hardrain980.com
+- Johannes Rothe mail@johannes-rothe.de
+- Tingsong Xu tingsong.xu@rightcapital.com
+- Jonas Tingeborn 134889+jojje@users.noreply.github.com
+- jhasse-shade jacob@shade.inc
+- vyv03354 VYV03354@nifty.ne.jp
+- masrlinu masrlinu@users.noreply.github.com
+ 5259918+masrlinu@users.noreply.github.com
+- vupn0712 126212736+vupn0712@users.noreply.github.com
+- darkdragon-001 darkdragon-001@users.noreply.github.com
+- sys6101 csvmen@gmail.com
+- Nicolas Dessart nds@outsight.tech
+- Qingwei Li 332664203@qq.com
+- yy yhymmt37@gmail.com
+- Marc-Philip marc-philip.werner@sap.com
+- Mikel Olasagasti Uranga mikel@olasagasti.info
+- Nick Owens mischief@offblast.org
+- hyusap paulayush@gmail.com
+- jzunigax2 125698953+jzunigax2@users.noreply.github.com
+- lullius lullius@users.noreply.github.com
+- StarHack StarHack@users.noreply.github.com
Contact the rclone project
diff --git a/bin/make_manual.py b/bin/make_manual.py
index 68d0164fc..70d949d25 100755
--- a/bin/make_manual.py
+++ b/bin/make_manual.py
@@ -23,6 +23,7 @@ docs = [
"gui.md",
"rc.md",
"overview.md",
+ "tiers.md",
"flags.md",
"docker.md",
"bisync.md",
@@ -43,7 +44,7 @@ docs = [
"compress.md",
"combine.md",
"doi.md",
- "drime.md"
+ "drime.md",
"dropbox.md",
"filefabric.md",
"filelu.md",
@@ -143,7 +144,7 @@ def read_doc(doc):
contents = fd.read()
parts = contents.split("---\n", 2)
if len(parts) != 3:
- raise ValueError("Couldn't find --- markers: found %d parts" % len(parts))
+ raise ValueError(f"{doc}: Couldn't find --- markers: found {len(parts)} parts")
contents = parts[2].strip()+"\n\n"
# Remove icons
contents = re.sub(r'
### Custom upload headers
diff --git a/docs/content/azurefiles.md b/docs/content/azurefiles.md
index 2c4680f7a..8b858f672 100644
--- a/docs/content/azurefiles.md
+++ b/docs/content/azurefiles.md
@@ -359,7 +359,7 @@ Azure Storage Account Name.
Set this to the Azure Storage Account Name in use.
-Leave blank to use SAS URL or connection string, otherwise it needs to be set.
+Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
If this is blank and if env_auth is set it will be read from the
environment variable `AZURE_STORAGE_ACCOUNT_NAME` if possible.
@@ -372,25 +372,11 @@ Properties:
- Type: string
- Required: false
-#### --azurefiles-share-name
-
-Azure Files Share Name.
-
-This is required and is the name of the share to access.
-
-
-Properties:
-
-- Config: share_name
-- Env Var: RCLONE_AZUREFILES_SHARE_NAME
-- Type: string
-- Required: false
-
#### --azurefiles-env-auth
Read credentials from runtime (environment variables, CLI or MSI).
-See the [authentication docs](/azurefiles#authentication) for full info.
+See the [authentication docs](/azureblob#authentication) for full info.
Properties:
@@ -403,7 +389,7 @@ Properties:
Storage Account Shared Key.
-Leave blank to use SAS URL or connection string.
+Leave blank to use SAS URL or Emulator.
Properties:
@@ -414,9 +400,9 @@ Properties:
#### --azurefiles-sas-url
-SAS URL.
+SAS URL for container level access only.
-Leave blank if using account/key or connection string.
+Leave blank if using account/key or Emulator.
Properties:
@@ -427,7 +413,10 @@ Properties:
#### --azurefiles-connection-string
-Azure Files Connection String.
+Storage Connection String.
+
+Connection string for the storage. Leave blank if using other auth methods.
+
Properties:
@@ -519,6 +508,20 @@ Properties:
- Type: string
- Required: false
+#### --azurefiles-share-name
+
+Azure Files Share Name.
+
+This is required and is the name of the share to access.
+
+
+Properties:
+
+- Config: share_name
+- Env Var: RCLONE_AZUREFILES_SHARE_NAME
+- Type: string
+- Required: false
+
### Advanced options
Here are the Advanced options specific to azurefiles (Microsoft Azure Files).
@@ -581,13 +584,11 @@ Path to file containing credentials for use with a service principal.
Leave blank normally. Needed only if you want to use a service principal instead of interactive login.
$ az ad sp create-for-rbac --name "" \
- --role "Storage Files Data Owner" \
+ --role "Storage Blob Data Owner" \
--scopes "/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/" \
> azure-principal.json
-See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to files data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
-
-**NB** this section needs updating for Azure Files - pull requests appreciated!
+See ["Create an Azure service principal"](https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli) and ["Assign an Azure role for access to blob data"](https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli) pages for more details.
It may be more convenient to put the credentials directly into the
rclone config file under the `client_id`, `tenant` and `client_secret`
@@ -601,6 +602,28 @@ Properties:
- Type: string
- Required: false
+#### --azurefiles-disable-instance-discovery
+
+Skip requesting Microsoft Entra instance metadata
+
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+
+It determines whether rclone requests Microsoft Entra instance
+metadata from `https://login.microsoft.com/` before
+authenticating.
+
+Setting this to true will skip this request, making you responsible
+for ensuring the configured authority is valid and trustworthy.
+
+
+Properties:
+
+- Config: disable_instance_discovery
+- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Type: bool
+- Default: false
+
#### --azurefiles-use-msi
Use a managed service identity to authenticate (only works in Azure).
@@ -660,32 +683,29 @@ Properties:
- Type: string
- Required: false
-#### --azurefiles-disable-instance-discovery
+#### --azurefiles-use-emulator
-Skip requesting Microsoft Entra instance metadata
-This should be set true only by applications authenticating in
-disconnected clouds, or private clouds such as Azure Stack.
-It determines whether rclone requests Microsoft Entra instance
-metadata from `https://login.microsoft.com/` before
-authenticating.
-Setting this to true will skip this request, making you responsible
-for ensuring the configured authority is valid and trustworthy.
+Uses local storage emulator if provided as 'true'.
+Leave blank if using real azure storage endpoint.
Properties:
-- Config: disable_instance_discovery
-- Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+- Config: use_emulator
+- Env Var: RCLONE_AZUREFILES_USE_EMULATOR
- Type: bool
- Default: false
#### --azurefiles-use-az
Use Azure CLI tool az for authentication
+
Set to use the [Azure CLI tool az](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
+
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.
+
Don't set env_auth at the same time.
diff --git a/docs/content/bisync.md b/docs/content/bisync.md
index b3405f9b8..f6e99c7fd 100644
--- a/docs/content/bisync.md
+++ b/docs/content/bisync.md
@@ -1049,7 +1049,11 @@ The following backends have known issues that need more investigation:
- `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
-- Updated: 2025-11-21-010037
+- `TestSeafile` (`seafile`)
+ - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
+- `TestSeafileV6` (`seafile`)
+ - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
+- Updated: 2026-01-30-010015
The following backends either have not been tested recently or have known issues
@@ -1058,6 +1062,7 @@ that are deemed unfixable for the time being:
- `TestArchive` (`archive`)
- `TestCache` (`cache`)
+- `TestDrime` (`drime`)
- `TestFileLu` (`filelu`)
- `TestFilesCom` (`filescom`)
- `TestImageKit` (`imagekit`)
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index e90b70c55..12210a62e 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -6,6 +6,64 @@ description: "Rclone Changelog"
# Changelog
+## v1.73.0 - 2026-01-30
+
+[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)
+
+- New backends
+ - [Shade](/shade/) (jhasse-shade)
+ - [Drime](/drime/) (dougal)
+ - [Filen](/filen/) (Enduriel)
+ - [Internxt](/internxt/) (jzunigax2)
+ - New S3 providers
+ - [Bizfly Cloud Simple Storage](/s3/#bizflycloud) (vupn0712)
+- New Features
+ - docs: Add [Support Tiers](/tiers/) to the documentation (Nick Craig-Wood)
+ - rc: Add [operations/hashsumfile](/rc/#operations-hashsumfile) to sum a single file only (Nick Craig-Wood)
+ - serve webdav: Implement download directory as Zip (Leo)
+- Bug Fixes
+ - fs: fix bwlimit: correct reporting (Mikel Olasagasti Uranga)
+ - log: fix systemd adding extra newline (dougal)
+ - docs: fixes (albertony, darkdragon-001, Duncan Smart, hyusap, Marc-Philip, Nick Craig-Wood, vicerace, vyv03354, yuval-cloudinary, yy)
+ - serve s3: Make errors in `--s3-auth-key` fatal (Nick Craig-Wood)
+- Mount
+  - Fix OpenBSD mount support (Nick Owens)
+- Azure Blob
+ - Add metadata and tags support across upload and copy paths (Cliff Frey)
+ - Factor the common auth into a library (Nick Craig-Wood)
+- Azurefiles
+ - Factor the common auth into a library (Nick Craig-Wood)
+- B2
+ - Support authentication with new bucket restricted application keys (DianaNites)
+- Drive
+  - Add `--drive-metadata-enforce-expansive-access` flag (Nick Craig-Wood)
+  - Fix crash when trying to create a shortcut to a Google doc (Nick Craig-Wood)
+- FTP
+ - Add http proxy authentication support (Nicolas Dessart)
+- Mega
+ - Reverts TLS workaround (necaran)
+- Memory
+ - Add `--memory-discard` flag for speed testing (Nick Craig-Wood)
+- OneDrive
+ - Fix cancelling multipart upload (Nick Craig-Wood)
+ - Fix setting modification time on directories for OneDrive Personal (Nick Craig-Wood)
+ - Fix OneDrive Personal no longer supports description (Nick Craig-Wood)
+ - Fix require sign in for OneDrive Personal (Nick Craig-Wood)
+ - Fix permissions on OneDrive Personal (Nick Craig-Wood)
+- Oracle Object Storage
+ - Eliminate unnecessary heap allocation (Qingwei Li)
+- Pcloud
+ - Add support for `ChangeNotify` to enable real-time updates in mount (masrlinu)
+- Protondrive
+ - Update to use forks of upstream modules to unblock development (Nick Craig-Wood)
+- S3
+ - Add ability to specify an IAM role for cross-account interaction (Vladislav Tropnikov)
+ - Linode: updated endpoints to use ISO 3166-1 alpha-2 standard (jbagwell-akamai)
+ - Fix Copy ignoring storage class (vupn0712)
+- SFTP
+ - Add http proxy authentication support (Nicolas Dessart)
+ - Eliminate unnecessary heap allocation (Qingwei Li)
+
## v1.72.1 - 2025-12-10
[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 8d670c739..23788d2b8 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -37,6 +37,7 @@ rclone [flags]
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-connection-string string Storage Connection String
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
@@ -73,7 +74,7 @@ rclone [flags]
--azurefiles-client-id string The ID of the client in use
--azurefiles-client-secret string One of the service principal's client secrets
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
- --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-connection-string string Storage Connection String
--azurefiles-description string Description of the remote
--azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
@@ -85,12 +86,13 @@ rclone [flags]
--azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azurefiles-password string The user's password (obscured)
- --azurefiles-sas-url string SAS URL
+ --azurefiles-sas-url string SAS URL for container level access only
--azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
--azurefiles-use-az Use Azure CLI tool az for authentication
+ --azurefiles-use-emulator Uses local storage emulator if provided as 'true'
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -220,6 +222,16 @@ rclone [flags]
--doi-doi string The DOI or the doi.org URL
--doi-doi-resolver-api-url string The URL of the DOI resolver API to use
--doi-provider string DOI provider
+ --drime-access-token string API Access token
+ --drime-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --drime-description string Description of the remote
+ --drime-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --drime-hard-delete Delete files permanently rather than putting them into the trash
+ --drime-list-chunk int Number of items to list in each call (default 1000)
+ --drime-root-folder-id string ID of the root folder
+ --drime-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
+ --drime-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --drime-workspace-id string Account ID
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -240,6 +252,7 @@ rclone [flags]
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-enforce-expansive-access Whether the request should enforce expansive access rules
--drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
--drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
--drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
@@ -319,6 +332,17 @@ rclone [flags]
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
+ --filen-api-key string API Key for your Filen account (obscured)
+ --filen-auth-version string Authentication Version (internal use only)
+ --filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
+ --filen-description string Description of the remote
+ --filen-email string Email of your Filen account
+ --filen-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filen-master-keys string Master Keys (internal use only)
+ --filen-password string Password of your Filen account (obscured)
+ --filen-private-key string Private RSA Key (internal use only)
+ --filen-public-key string Public RSA Key (internal use only)
+ --filen-upload-concurrency int Concurrency for chunked uploads (default 16)
--files-from stringArray Read list of source-file names from file (use - to read from stdin)
--files-from-raw stringArray Read list of source-file names from file without any processing of lines (use - to read from stdin)
--filescom-api-key string The API key used to authenticate with Files.com
@@ -369,7 +393,7 @@ rclone [flags]
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
- --gcs-endpoint string Endpoint for the service
+ --gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -477,6 +501,11 @@ rclone [flags]
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
+ --internxt-description string Description of the remote
+ --internxt-email string Email of your Internxt account
+ --internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
+ --internxt-pass string Password (obscured)
+ --internxt-skip-hash-validation Skip hash validation when downloading files (default true)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
@@ -562,6 +591,7 @@ rclone [flags]
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--memory-description string Description of the remote
+ --memory-discard If set all writes will be discarded and reads will return an error
--memprofile string Write memory profile to file
-M, --metadata If set, preserve metadata when copying objects
--metadata-exclude stringArray Exclude metadatas matching pattern
@@ -819,6 +849,10 @@ rclone [flags]
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
+ --s3-role-arn string ARN of the IAM role to assume
+ --s3-role-external-id string External ID for assumed role
+ --s3-role-session-duration string Session duration for assumed role
+ --s3-role-session-name string Session name for assumed role
--s3-sdk-log-mode Bits Set to debug the SDK (default Off)
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
@@ -903,6 +937,16 @@ rclone [flags]
--sftp-user string SSH username (default "$USER")
--sftp-xxh128sum-command string The command used to read XXH128 hashes
--sftp-xxh3sum-command string The command used to read XXH3 hashes
+ --shade-api-key string An API key for your account
+ --shade-chunk-size SizeSuffix Chunk size to use for uploading (default 64Mi)
+ --shade-description string Description of the remote
+ --shade-drive-id string The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive
+ --shade-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --shade-endpoint string Endpoint for the service
+ --shade-max-upload-parts int Maximum amount of parts in a multipart upload (default 10000)
+ --shade-token string JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically
+ --shade-token-expiry string JWT Token Expiration time. Don't set this value - rclone will set it automatically
+ --shade-upload-concurrency int Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies (default 4)
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-credentials Use client credentials OAuth flow
@@ -1019,7 +1063,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
diff --git a/docs/content/commands/rclone_convmv.md b/docs/content/commands/rclone_convmv.md
index 04f9026ce..83840905a 100644
--- a/docs/content/commands/rclone_convmv.md
+++ b/docs/content/commands/rclone_convmv.md
@@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20251121
+// Output: stories/The Quick Brown Fox!-20260130
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
+// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
```
```console
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index 122a9ab0f..7fdf8f495 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -41,6 +41,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index b30f0c851..bb79eb48a 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -53,6 +53,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md
index 3eb2bd8e6..a7b40a580 100644
--- a/docs/content/commands/rclone_lsf.md
+++ b/docs/content/commands/rclone_lsf.md
@@ -158,6 +158,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
index 7f19807bf..dc090a530 100644
--- a/docs/content/commands/rclone_lsjson.md
+++ b/docs/content/commands/rclone_lsjson.md
@@ -68,7 +68,7 @@ with the following options:
- If `--files-only` is specified then files will be returned only,
no directories.
-If `--stat` is set then the the output is not an array of items,
+If `--stat` is set then the output is not an array of items,
but instead a single JSON blob will be returned about the item pointed to.
This will return an error if the item isn't found, however on bucket based
backends (like s3, gcs, b2, azureblob etc) if the item isn't found it will
@@ -111,6 +111,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index 4c02b1d54..862774dd4 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -42,6 +42,10 @@ Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the re
The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default -
use `-R` to make them recurse.
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default. Use `--disable ListR` to suppress the behavior.
+See [`--fast-list`](/docs/#fast-list) for more details.
+
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 9cba7200a..9b294500c 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -78,7 +78,7 @@ at all, then 1 PiB is set as both the total and the free size.
## Installing on Windows
To run `rclone mount on Windows`, you will need to
-download and install [WinFsp](http://www.secfs.net/winfsp/).
+download and install [WinFsp](https://winfsp.dev).
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
@@ -336,7 +336,7 @@ full new copy of the file.
When mounting with `--read-only`, attempts to write to files will fail *silently*
as opposed to with a clear warning as in macFUSE.
-## Mounting on Linux
+# Mounting on Linux
On newer versions of Ubuntu, you may encounter the following error when running
`rclone mount`:
diff --git a/docs/content/commands/rclone_nfsmount.md b/docs/content/commands/rclone_nfsmount.md
index 446eb613b..83b53ed49 100644
--- a/docs/content/commands/rclone_nfsmount.md
+++ b/docs/content/commands/rclone_nfsmount.md
@@ -79,7 +79,7 @@ at all, then 1 PiB is set as both the total and the free size.
## Installing on Windows
To run `rclone nfsmount on Windows`, you will need to
-download and install [WinFsp](http://www.secfs.net/winfsp/).
+download and install [WinFsp](https://winfsp.dev).
[WinFsp](https://github.com/winfsp/winfsp) is an open-source
Windows File System Proxy which makes it easy to write user space file
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index 81a3ed99e..738c3ff6d 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -25,7 +25,7 @@ argument by passing a hyphen as an argument. This will use the first
line of STDIN as the password not including the trailing newline.
```console
-echo "secretpassword" | rclone obscure -
+echo 'secretpassword' | rclone obscure -
```
If there is no data on STDIN to read, rclone obscure will default to
diff --git a/docs/content/commands/rclone_serve_s3.md b/docs/content/commands/rclone_serve_s3.md
index e3ae8618a..1e8722114 100644
--- a/docs/content/commands/rclone_serve_s3.md
+++ b/docs/content/commands/rclone_serve_s3.md
@@ -26,6 +26,26 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
`--auth-key` is not provided then `serve s3` will allow anonymous
access.
+Like all rclone flags, `--auth-key` can be set via environment
+variables, in this case `RCLONE_AUTH_KEY`. Since this flag can be
+repeated, the input to `RCLONE_AUTH_KEY` is CSV encoded. Because the
+`accessKey,secretKey` pair contains a comma, it needs to be wrapped in
+quotes.
+
+```console
+export RCLONE_AUTH_KEY='"user,pass"'
+rclone serve s3 ...
+```
+
+Or to supply multiple identities:
+
+```console
+export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
+rclone serve s3 ...
+```
+
+Setting this variable without quotes will produce an error.
+
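The quoting rule can be sketched with Python's `csv` module, which follows the same CSV conventions (an illustration of the encoding, not rclone's actual parser):

```python
import csv
import io

# With CSV quotes, each "accessKey,secretKey" pair stays one field.
quoted = '"user1,pass1","user2,pass2"'
identities = next(csv.reader(io.StringIO(quoted)))
print(identities)
# ['user1,pass1', 'user2,pass2']

# Without quotes the commas split everything apart,
# which is why the unquoted form is rejected.
unquoted = 'user1,pass1,user2,pass2'
fields = next(csv.reader(io.StringIO(unquoted)))
print(fields)
# ['user1', 'pass1', 'user2', 'pass2']
```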
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#tls-ssl) for more information.
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
index 5821df0da..c16133434 100644
--- a/docs/content/commands/rclone_serve_webdav.md
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -803,6 +803,7 @@ rclone serve webdav remote:path [flags]
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
+ --disable-zip Disable zip download of directories
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
diff --git a/docs/content/doi.md b/docs/content/doi.md
index 9fe179c06..f20d3ed50 100644
--- a/docs/content/doi.md
+++ b/docs/content/doi.md
@@ -113,7 +113,7 @@ Properties:
The URL of the DOI resolver API to use.
-The DOI resolver can be set for testing or for cases when the the canonical DOI resolver API cannot be used.
+The DOI resolver can be set for testing or for cases when the canonical DOI resolver API cannot be used.
Defaults to "https://doi.org/api".
diff --git a/docs/content/drime.md b/docs/content/drime.md
index 7b8055e2a..a3e6e8c90 100644
--- a/docs/content/drime.md
+++ b/docs/content/drime.md
@@ -190,7 +190,7 @@ Properties:
Account ID
-Leave this blank normally, rclone will fill it in automatically.
+Leave this blank normally unless you wish to specify a Workspace ID.
Properties:
@@ -211,6 +211,81 @@ Properties:
- Type: int
- Default: 1000
+#### --drime-hard-delete
+
+Delete files permanently rather than putting them into the trash.
+
+Properties:
+
+- Config: hard_delete
+- Env Var: RCLONE_DRIME_HARD_DELETE
+- Type: bool
+- Default: false
+
+#### --drime-upload-cutoff
+
+Cutoff for switching to chunked upload.
+
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5 GiB.
+
+Properties:
+
+- Config: upload_cutoff
+- Env Var: RCLONE_DRIME_UPLOAD_CUTOFF
+- Type: SizeSuffix
+- Default: 200Mi
+
+#### --drime-chunk-size
+
+Chunk size to use for uploading.
+
+When uploading files larger than upload_cutoff or files with unknown
+size (e.g. from "rclone rcat" or uploaded with "rclone mount" or google
+photos or google docs) they will be uploaded as multipart uploads
+using this chunk size.
+
+Note that "--drime-upload-concurrency" chunks of this size are buffered
+in memory per transfer.
+
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+
+Rclone will automatically increase the chunk size when uploading a
+large file of known size to stay below the 10,000 chunks limit.
+
+Files of unknown size are uploaded with the configured
+chunk_size. Since the default chunk size is 5 MiB and there can be at
+most 10,000 chunks, this means that by default the maximum size of
+a file you can stream upload is 48 GiB. If you wish to stream upload
+larger files then you will need to increase chunk_size.
+
+
+Properties:
+
+- Config: chunk_size
+- Env Var: RCLONE_DRIME_CHUNK_SIZE
+- Type: SizeSuffix
+- Default: 5Mi
+
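+As a rough check of the sizing arithmetic above (the 10,000-chunk limit and
+per-transfer buffering are taken from the description; the numbers are just
+the defaults):

```python
MIB = 1024 ** 2

chunk_size = 5 * MIB   # default --drime-chunk-size
max_chunks = 10_000    # per-upload chunk limit
concurrency = 4        # default --drime-upload-concurrency

# Largest file that can be streamed with the default chunk size.
max_stream = chunk_size * max_chunks
print(round(max_stream / 1024 ** 3, 1))  # ~48.8 GiB, documented as "48 GiB"

# Memory buffered per transfer for multipart uploads.
print(concurrency * chunk_size // MIB)   # 20 MiB
```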
+#### --drime-upload-concurrency
+
+Concurrency for multipart uploads and copies.
+
+This is the number of chunks of the same file that are uploaded
+concurrently for multipart uploads and copies.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_DRIME_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
#### --drime-encoding
The encoding for the backend.
diff --git a/docs/content/drive.md b/docs/content/drive.md
index 92c4bb17a..7dd25145d 100644
--- a/docs/content/drive.md
+++ b/docs/content/drive.md
@@ -1420,6 +1420,23 @@ Properties:
- "read,write"
- Read and Write the value.
+#### --drive-metadata-enforce-expansive-access
+
+Whether the request should enforce expansive access rules.
+
+From Feb 2026 this flag will be set by default, so it can be used for
+testing before then.
+
+See: https://developers.google.com/workspace/drive/api/guides/limited-expansive-access
+
+
+Properties:
+
+- Config: metadata_enforce_expansive_access
+- Env Var: RCLONE_DRIVE_METADATA_ENFORCE_EXPANSIVE_ACCESS
+- Type: bool
+- Default: false
+
#### --drive-encoding
The encoding for the backend.
diff --git a/docs/content/filen.md b/docs/content/filen.md
index af2fe5dcb..38577b163 100644
--- a/docs/content/filen.md
+++ b/docs/content/filen.md
@@ -140,6 +140,24 @@ Properties:
Here are the Advanced options specific to filen (Filen).
+#### --filen-upload-concurrency
+
+Concurrency for chunked uploads.
+
+This is the upper limit for how many transfers for the same file are running concurrently.
+Setting this to a value smaller than 1 will cause uploads to deadlock.
+
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 16
+
#### --filen-encoding
The encoding for the backend.
@@ -153,28 +171,6 @@ Properties:
- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
-#### --filen-upload-concurrency
-
-Concurrency for multipart uploads.
-
-This is the number of chunks of the same file that are uploaded
-concurrently for multipart uploads.
-
-Note that chunks are stored in memory and there may be up to
-"--transfers" * "--filen-upload-concurrency" chunks stored at once
-in memory.
-
-If you are uploading small numbers of large files over high-speed links
-and these uploads do not fully utilize your bandwidth, then increasing
-this may help to speed up the transfers.
-
-Properties:
-
-- Config: upload_concurrency
-- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
-- Type: int
-- Default: 16
-
#### --filen-master-keys
Master Keys (internal use only)
diff --git a/docs/content/flags.md b/docs/content/flags.md
index 24706dbf1..d7bb0af9b 100644
--- a/docs/content/flags.md
+++ b/docs/content/flags.md
@@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
```
@@ -352,6 +352,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal's client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-connection-string string Storage Connection String
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
@@ -388,7 +389,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-id string The ID of the client in use
--azurefiles-client-secret string One of the service principal's client secrets
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
- --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-connection-string string Storage Connection String
--azurefiles-description string Description of the remote
--azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
@@ -400,12 +401,13 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azurefiles-password string The user's password (obscured)
- --azurefiles-sas-url string SAS URL
+ --azurefiles-sas-url string SAS URL for container level access only
--azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal's tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
--azurefiles-use-az Use Azure CLI tool az for authentication
+ --azurefiles-use-emulator Uses local storage emulator if provided as 'true'
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -507,6 +509,16 @@ Backend-only flags (these can be set in the config file also).
--doi-doi string The DOI or the doi.org URL
--doi-doi-resolver-api-url string The URL of the DOI resolver API to use
--doi-provider string DOI provider
+ --drime-access-token string API Access token
+ --drime-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --drime-description string Description of the remote
+ --drime-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --drime-hard-delete Delete files permanently rather than putting them into the trash
+ --drime-list-chunk int Number of items to list in each call (default 1000)
+ --drime-root-folder-id string ID of the root folder
+ --drime-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
+ --drime-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --drime-workspace-id string Account ID
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -527,6 +539,7 @@ Backend-only flags (these can be set in the config file also).
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-enforce-expansive-access Whether the request should enforce expansive access rules
--drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
--drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
--drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
@@ -595,6 +608,17 @@ Backend-only flags (these can be set in the config file also).
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
+ --filen-api-key string API Key for your Filen account (obscured)
+ --filen-auth-version string Authentication Version (internal use only)
+ --filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
+ --filen-description string Description of the remote
+ --filen-email string Email of your Filen account
+ --filen-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filen-master-keys string Master Keys (internal use only)
+ --filen-password string Password of your Filen account (obscured)
+ --filen-private-key string Private RSA Key (internal use only)
+ --filen-public-key string Public RSA Key (internal use only)
+ --filen-upload-concurrency int Concurrency for chunked uploads (default 16)
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -638,7 +662,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
- --gcs-endpoint string Endpoint for the service
+ --gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -727,6 +751,11 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server's processing tasks (specifically archive and book_op) to finish (default 0s)
+ --internxt-description string Description of the remote
+ --internxt-email string Email of your Internxt account
+ --internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
+ --internxt-pass string Password (obscured)
+ --internxt-skip-hash-validation Skip hash validation when downloading files (default true)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
@@ -789,6 +818,7 @@ Backend-only flags (these can be set in the config file also).
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--memory-description string Description of the remote
+ --memory-discard If set all writes will be discarded and reads will return an error
--netstorage-account string Set the NetStorage account name
--netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -964,6 +994,10 @@ Backend-only flags (these can be set in the config file also).
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
+ --s3-role-arn string ARN of the IAM role to assume
+ --s3-role-external-id string External ID for assumed role
+ --s3-role-session-duration string Session duration for assumed role
+ --s3-role-session-name string Session name for assumed role
--s3-sdk-log-mode Bits Set to debug the SDK (default Off)
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
@@ -1047,6 +1081,16 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default "$USER")
--sftp-xxh128sum-command string The command used to read XXH128 hashes
--sftp-xxh3sum-command string The command used to read XXH3 hashes
+ --shade-api-key string An API key for your account
+ --shade-chunk-size SizeSuffix Chunk size to use for uploading (default 64Mi)
+ --shade-description string Description of the remote
+ --shade-drive-id string The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive
+ --shade-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --shade-endpoint string Endpoint for the service
+ --shade-max-upload-parts int Maximum number of parts in a multipart upload (default 10000)
+ --shade-token string JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically
+ --shade-token-expiry string JWT Token Expiration time. Don't set this value - rclone will set it automatically
+ --shade-upload-concurrency int Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies (default 4)
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-credentials Use client credentials OAuth flow
diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md
index 902ff1f31..3f7c59349 100644
--- a/docs/content/googlecloudstorage.md
+++ b/docs/content/googlecloudstorage.md
@@ -785,9 +785,14 @@ Properties:
#### --gcs-endpoint
-Endpoint for the service.
+Custom endpoint for the storage API. Leave blank to use the provider default.
-Leave blank normally.
+When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
+the subpath will be ignored during upload operations due to a limitation in the
+underlying Google API Go client library.
+Download and listing operations will work correctly with the full endpoint path.
+If you require subpath support for uploads, avoid using subpaths in your custom
+endpoint configuration.
Properties:
@@ -795,6 +800,13 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
+- Examples:
+ - "storage.example.org"
+ - Specify a custom endpoint
+ - "storage.example.org:4443"
+ - Specifying a custom endpoint with port
+ - "storage.example.org:4443/gcs/api"
+ - Specifying a subpath, see the note, uploads won't use the custom path!
#### --gcs-encoding
diff --git a/docs/content/memory.md b/docs/content/memory.md
index 059848796..760a58509 100644
--- a/docs/content/memory.md
+++ b/docs/content/memory.md
@@ -70,6 +70,30 @@ set](/overview/#restricted-characters).
Here are the Advanced options specific to memory (In memory object storage system.).
+#### --memory-discard
+
+If set all writes will be discarded and reads will return an error
+
+If set then when files are uploaded the contents will not be saved. The
+files will appear to have been uploaded but will give an error on
+read. Files will have their MD5 sum calculated on upload which takes
+very little CPU time and allows the transfers to be checked.
+
+This can be useful for testing performance.
+
+Probably most easily used via the connection string syntax:
+
+ :memory,discard:bucket
+
+
+
+Properties:
+
+- Config: discard
+- Env Var: RCLONE_MEMORY_DISCARD
+- Type: bool
+- Default: false
+
#### --memory-description
Description of the remote.
diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md
index 5437a0080..7e2041551 100644
--- a/docs/content/onedrive.md
+++ b/docs/content/onedrive.md
@@ -788,7 +788,7 @@ This is why this flag is not set as the default.
As a rule of thumb if nearly all of your data is under rclone's root
directory (the `root/directory` in `onedrive:root/directory`) then
-using this flag will be be a big performance win. If your data is
+using this flag will be a big performance win. If your data is
mostly not under the root then using this flag will be a big
performance loss.
@@ -995,7 +995,7 @@ Here are the possible system metadata items for the onedrive backend.
| content-type | The MIME type of the file. | string | text/plain | **Y** |
| created-by-display-name | Display name of the user that created the item. | string | John Doe | **Y** |
| created-by-id | ID of the user that created the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
-| description | A short description of the file. Max 1024 characters. Only supported for OneDrive Personal. | string | Contract for signing | N |
+| description | A short description of the file. Max 1024 characters. No longer supported by Microsoft. | string | Contract for signing | N |
| id | The unique identifier of the item within OneDrive. | string | 01BYE5RZ6QN3ZWBTUFOFD3GSPGOHDJD36K | **Y** |
| last-modified-by-display-name | Display name of the user that last modified the item. | string | John Doe | **Y** |
| last-modified-by-id | ID of the user that last modified the item. | string | 48d31887-5fad-4d73-a9f5-3c356e68a038 | **Y** |
diff --git a/docs/content/rc.md b/docs/content/rc.md
index acf4fbb77..2e47b13ec 100644
--- a/docs/content/rc.md
+++ b/docs/content/rc.md
@@ -1372,7 +1372,7 @@ rclone rc mount/mount fs=mydrive: mountPoint=/home//mountPoint mountType=m
rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
```
-The vfsOpt are as described in options/get and can be seen in the the
+The vfsOpt are as described in options/get and can be seen in the
"vfs" section when running and the mountOpt can be seen in the "mount" section:
```console
@@ -1703,6 +1703,40 @@ See the [hashsum](/commands/rclone_hashsum/) command for more information on the
**Authentication is required for this call.**
+### operations/hashsumfile: Produces a hash for a single file. {#operations-hashsumfile}
+
+Produces a hash for a single file using the hash named.
+
+This takes the following parameters:
+
+- fs - a remote name string e.g. "drive:"
+- remote - a path within that remote e.g. "file.txt"
+- hashType - type of hash to be used
+- download - check by downloading rather than with hash (boolean)
+- base64 - output the hashes in base64 rather than hex (boolean)
+
+If you supply the download flag, it will download the data from the
+remote and create the hash on the fly. This can be useful for remotes
+that don't support the given hash or if you really want to read all
+the data.
+
+Returns:
+
+- hash - hash for the file
+- hashType - type of hash used
+
+Example:
+
+ $ rclone rc --loopback operations/hashsumfile fs=/ remote=/bin/bash hashType=MD5 download=true base64=true
+ {
+ "hashType": "md5",
+ "hash": "MDMw-fG2YXs7Uz5Nz-H68A=="
+ }
+
+See the [hashsum](/commands/rclone_hashsum/) command for more information on the above.
+
+**Authentication is required for this call.**
+
### operations/list: List the given remote and path in JSON format {#operations-list}
This takes the following parameters:
diff --git a/docs/content/s3.md b/docs/content/s3.md
index 7c9882884..96d6664ce 100644
--- a/docs/content/s3.md
+++ b/docs/content/s3.md
@@ -906,7 +906,7 @@ all the files to be uploaded as multipart.
### Standard options
-Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
+Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
#### --s3-provider
@@ -925,6 +925,8 @@ Properties:
- Alibaba Cloud Object Storage System (OSS) formerly Aliyun
- "ArvanCloud"
- Arvan Cloud Object Storage (AOS)
+ - "BizflyCloud"
+ - Bizfly Cloud Simple Storage
- "Ceph"
- Ceph Object Storage
- "ChinaMobile"
@@ -1066,7 +1068,7 @@ Properties:
- Config: region
- Env Var: RCLONE_S3_REGION
-- Provider: AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
+- Provider: AWS,BizflyCloud,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -1175,6 +1177,12 @@ Properties:
- AWS GovCloud (US) Region.
- Needs location constraint us-gov-west-1.
- Provider: AWS
+ - "hn"
+ - Ha Noi
+ - Provider: BizflyCloud
+ - "hcm"
+ - Ho Chi Minh
+ - Provider: BizflyCloud
- ""
- Use this if unsure.
- Will use v4 signatures and an empty region.
@@ -1446,12 +1454,21 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- - "gis-1"
- - Moscow
- - Provider: Servercore
+ - "ru-3"
+ - St. Petersburg
+ - Provider: Selectel
- "ru-7"
- Moscow
- - Provider: Servercore
+ - Provider: Selectel,Servercore
+ - "gis-1"
+ - Moscow
+ - Provider: Selectel,Servercore
+ - "kz-1"
+ - Kazakhstan
+ - Provider: Selectel
+ - "uz-2"
+ - Uzbekistan
+ - Provider: Selectel
- "uz-2"
- Tashkent, Uzbekistan
- Provider: Servercore
@@ -1487,7 +1504,7 @@ Properties:
- Config: endpoint
- Env Var: RCLONE_S3_ENDPOINT
-- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -1573,6 +1590,12 @@ Properties:
- "s3.ir-tbz-sh1.arvanstorage.ir"
- Tabriz Iran (Shahriar)
- Provider: ArvanCloud
+ - "hn.ss.bfcplatform.vn"
+ - Hanoi endpoint
+ - Provider: BizflyCloud
+ - "hcm.ss.bfcplatform.vn"
+ - Ho Chi Minh endpoint
+ - Provider: BizflyCloud
- "eos-wuxi-1.cmecloud.cn"
- The default endpoint - a good choice if you are unsure.
- East China (Suzhou)
@@ -1979,67 +2002,67 @@ Properties:
- Iran
- Provider: Liara
- "nl-ams-1.linodeobjects.com"
- - Amsterdam (Netherlands), nl-ams-1
+ - Amsterdam, NL (nl-ams-1)
- Provider: Linode
- "us-southeast-1.linodeobjects.com"
- - Atlanta, GA (USA), us-southeast-1
+ - Atlanta, GA, US (us-southeast-1)
- Provider: Linode
- "in-maa-1.linodeobjects.com"
- - Chennai (India), in-maa-1
+ - Chennai, IN (in-maa-1)
- Provider: Linode
- "us-ord-1.linodeobjects.com"
- - Chicago, IL (USA), us-ord-1
+ - Chicago, IL, US (us-ord-1)
- Provider: Linode
- "eu-central-1.linodeobjects.com"
- - Frankfurt (Germany), eu-central-1
+ - Frankfurt, DE (eu-central-1)
- Provider: Linode
- "id-cgk-1.linodeobjects.com"
- - Jakarta (Indonesia), id-cgk-1
+ - Jakarta, ID (id-cgk-1)
- Provider: Linode
- "gb-lon-1.linodeobjects.com"
- - London 2 (Great Britain), gb-lon-1
+ - London 2, UK (gb-lon-1)
- Provider: Linode
- "us-lax-1.linodeobjects.com"
- - Los Angeles, CA (USA), us-lax-1
+ - Los Angeles, CA, US (us-lax-1)
- Provider: Linode
- "es-mad-1.linodeobjects.com"
- - Madrid (Spain), es-mad-1
- - Provider: Linode
- - "au-mel-1.linodeobjects.com"
- - Melbourne (Australia), au-mel-1
+ - Madrid, ES (es-mad-1)
- Provider: Linode
- "us-mia-1.linodeobjects.com"
- - Miami, FL (USA), us-mia-1
+ - Miami, FL, US (us-mia-1)
- Provider: Linode
- "it-mil-1.linodeobjects.com"
- - Milan (Italy), it-mil-1
+ - Milan, IT (it-mil-1)
- Provider: Linode
- "us-east-1.linodeobjects.com"
- - Newark, NJ (USA), us-east-1
+ - Newark, NJ, US (us-east-1)
- Provider: Linode
- "jp-osa-1.linodeobjects.com"
- - Osaka (Japan), jp-osa-1
+ - Osaka, JP (jp-osa-1)
- Provider: Linode
- "fr-par-1.linodeobjects.com"
- - Paris (France), fr-par-1
+ - Paris, FR (fr-par-1)
- Provider: Linode
- "br-gru-1.linodeobjects.com"
- - São Paulo (Brazil), br-gru-1
+ - Sao Paulo, BR (br-gru-1)
- Provider: Linode
- "us-sea-1.linodeobjects.com"
- - Seattle, WA (USA), us-sea-1
+ - Seattle, WA, US (us-sea-1)
- Provider: Linode
- "ap-south-1.linodeobjects.com"
- - Singapore, ap-south-1
+ - Singapore, SG (ap-south-1)
- Provider: Linode
- "sg-sin-1.linodeobjects.com"
- - Singapore 2, sg-sin-1
+ - Singapore 2, SG (sg-sin-1)
- Provider: Linode
- "se-sto-1.linodeobjects.com"
- - Stockholm (Sweden), se-sto-1
+ - Stockholm, SE (se-sto-1)
- Provider: Linode
- - "us-iad-1.linodeobjects.com"
- - Washington, DC, (USA), us-iad-1
+ - "jp-tyo-1.linodeobjects.com"
+ - Tokyo 3, JP (jp-tyo-1)
+ - Provider: Linode
+ - "us-iad-10.linodeobjects.com"
+ - Washington, DC, US (us-iad-10)
- Provider: Linode
- "s3.us-west-1.{account_name}.lyve.seagate.com"
- US West 1 - California
@@ -2243,13 +2266,25 @@ Properties:
- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- - Saint Petersburg
+ - St. Petersburg
+ - Provider: Selectel
+ - "s3.ru-3.storage.selcloud.ru"
+ - St. Petersburg
+ - Provider: Selectel
+ - "s3.ru-7.storage.selcloud.ru"
+ - Moscow
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- - Provider: Servercore
- - "s3.ru-7.storage.selcloud.ru"
- - Moscow
+ - Provider: Selectel,Servercore
+ - "s3.kz-1.storage.selcloud.ru"
+ - Kazakhstan
+ - Provider: Selectel
+ - "s3.uz-2.storage.selcloud.ru"
+ - Uzbekistan
+ - Provider: Selectel
+ - "s3.ru-1.storage.selcloud.ru"
+ - Saint Petersburg
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan
@@ -2775,36 +2810,36 @@ Properties:
- Config: acl
- Env Var: RCLONE_S3_ACL
-- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
- "public-read"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ access.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "public-read-write"
- Owner gets FULL_CONTROL.
- The AllUsers group gets READ and WRITE access.
- Granting this on a bucket is generally not recommended.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "authenticated-read"
- Owner gets FULL_CONTROL.
- The AuthenticatedUsers group gets READ access.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-read"
- Object owner gets FULL_CONTROL.
- Bucket owner gets READ access.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "bucket-owner-full-control"
- Both the object owner and the bucket owner get FULL_CONTROL over the object.
- If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
- - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+ - Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
- "private"
- Owner gets FULL_CONTROL.
- No one else has access rights (default).
@@ -2969,7 +3004,7 @@ Properties:
### Advanced options
-Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
+Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other).
#### --s3-bucket-acl
@@ -2988,7 +3023,7 @@ Properties:
- Config: bucket_acl
- Env Var: RCLONE_S3_BUCKET_ACL
-- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
+- Provider: AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
- Type: string
- Required: false
- Examples:
@@ -3242,6 +3277,58 @@ Properties:
- Type: string
- Required: false
+#### --s3-role-arn
+
+ARN of the IAM role to assume.
+
+Leave blank if not using assume role.
+
+Properties:
+
+- Config: role_arn
+- Env Var: RCLONE_S3_ROLE_ARN
+- Type: string
+- Required: false
+
+#### --s3-role-session-name
+
+Session name for assumed role.
+
+If empty, a session name will be generated automatically.
+
+Properties:
+
+- Config: role_session_name
+- Env Var: RCLONE_S3_ROLE_SESSION_NAME
+- Type: string
+- Required: false
+
+#### --s3-role-session-duration
+
+Session duration for assumed role.
+
+If empty, the default session duration will be used.
+
+Properties:
+
+- Config: role_session_duration
+- Env Var: RCLONE_S3_ROLE_SESSION_DURATION
+- Type: string
+- Required: false
+
+#### --s3-role-external-id
+
+External ID for assumed role.
+
+Leave blank if not using an external ID.
+
+Properties:
+
+- Config: role_external_id
+- Env Var: RCLONE_S3_ROLE_EXTERNAL_ID
+- Type: string
+- Required: false
+
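Taken together, these assume-role options can also be supplied through their environment variables. A minimal sketch (the role ARN, session name, and external ID below are placeholder values, not real credentials):

```shell
# Placeholder values - substitute your own role ARN and IDs.
export RCLONE_S3_ROLE_ARN='arn:aws:iam::123456789012:role/backup-role'
export RCLONE_S3_ROLE_SESSION_NAME='rclone-backup'
export RCLONE_S3_ROLE_SESSION_DURATION='1h'
export RCLONE_S3_ROLE_EXTERNAL_ID='example-external-id'
echo "$RCLONE_S3_ROLE_ARN"
```

With these set, an ordinary `rclone` invocation against the s3 remote would assume the role first, per the option descriptions above.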
#### --s3-upload-concurrency
Concurrency for multipart uploads and copies.
diff --git a/docs/content/shade.md b/docs/content/shade.md
index 7735d7eea..8243d94e5 100644
--- a/docs/content/shade.md
+++ b/docs/content/shade.md
@@ -1,3 +1,9 @@
+---
+title: "Shade"
+description: "Shade Backend Docs"
+versionIntroduced: "v1.73"
+---
+
# {{< icon "fa fa-moon" >}} Shade
This is a backend for the [Shade](https://shade.inc/) platform
@@ -115,7 +121,7 @@ Properties:
#### --shade-api-key
-An API key for your account. You can find this under Settings > API Keys
+An API key for your account.
Properties:
@@ -159,6 +165,50 @@ Properties:
- Type: SizeSuffix
- Default: 64Mi
+#### --shade-upload-concurrency
+
+Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies.
+
+Properties:
+
+- Config: upload_concurrency
+- Env Var: RCLONE_SHADE_UPLOAD_CONCURRENCY
+- Type: int
+- Default: 4
+
+#### --shade-max-upload-parts
+
+Maximum number of parts in a multipart upload.
+
+Properties:
+
+- Config: max_upload_parts
+- Env Var: RCLONE_SHADE_MAX_UPLOAD_PARTS
+- Type: int
+- Default: 10000
+
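Like other rclone options, the two tuning knobs above can be set via their environment variables rather than the config file; a small sketch with illustrative values:

```shell
# Illustrative values - tune to your bandwidth and memory budget.
export RCLONE_SHADE_UPLOAD_CONCURRENCY=8
export RCLONE_SHADE_MAX_UPLOAD_PARTS=5000
echo "concurrency=$RCLONE_SHADE_UPLOAD_CONCURRENCY max_parts=$RCLONE_SHADE_MAX_UPLOAD_PARTS"
```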
+#### --shade-token
+
+JWT Token for performing Shade FS operations. Don't set this value - rclone will set it automatically.
+
+Properties:
+
+- Config: token
+- Env Var: RCLONE_SHADE_TOKEN
+- Type: string
+- Required: false
+
+#### --shade-token-expiry
+
+JWT Token Expiration time. Don't set this value - rclone will set it automatically.
+
+Properties:
+
+- Config: token_expiry
+- Env Var: RCLONE_SHADE_TOKEN_EXPIRY
+- Type: string
+- Required: false
+
#### --shade-encoding
The encoding for the backend.
diff --git a/docs/content/swift.md b/docs/content/swift.md
index 6c41809bc..7bd65f498 100644
--- a/docs/content/swift.md
+++ b/docs/content/swift.md
@@ -564,7 +564,7 @@ Properties:
Above this size files will be chunked.
-Above this size files will be chunked into a a `_segments` container
+Above this size files will be chunked into a `_segments` container
or a `.file-segments` directory. (See the `use_segments_container` option
for more info). Default for this is 5 GiB which is its maximum value, which
means only files above this size will be chunked.
diff --git a/lib/transform/transform.md b/lib/transform/transform.md
index bb8d08e0e..609e7ac25 100644
--- a/lib/transform/transform.md
+++ b/lib/transform/transform.md
@@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20251121
+// Output: stories/The Quick Brown Fox!-20260130
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-11-21 0508PM
+// Output: stories/The Quick Brown Fox!-2026-01-30 0852PM
```
```console
diff --git a/rclone.1 b/rclone.1
index 2c3a0298b..42cf2d4fc 100644
--- a/rclone.1
+++ b/rclone.1
@@ -15,7 +15,7 @@
. ftr VB CB
. ftr VBI CBI
.\}
-.TH "rclone" "1" "Nov 21, 2025" "User Manual" ""
+.TH "rclone" "1" "Jan 30, 2026" "User Manual" ""
.hy
.SH NAME
.PP
@@ -239,6 +239,8 @@ Alibaba Cloud (Aliyun) Object Storage System (OSS)
.IP \[bu] 2
Amazon S3
.IP \[bu] 2
+Bizfly Cloud Simple Storage
+.IP \[bu] 2
Backblaze B2
.IP \[bu] 2
Box
@@ -263,6 +265,8 @@ Digi Storage
.IP \[bu] 2
Dreamhost
.IP \[bu] 2
+Drime
+.IP \[bu] 2
Dropbox
.IP \[bu] 2
Enterprise File Fabric
@@ -275,6 +279,8 @@ FileLu Cloud Storage
.IP \[bu] 2
FileLu S5 (S3-Compatible Object Storage)
.IP \[bu] 2
+Filen
+.IP \[bu] 2
Files.com
.IP \[bu] 2
FlashBlade
@@ -307,6 +313,8 @@ ImageKit
.IP \[bu] 2
Internet Archive
.IP \[bu] 2
+Internxt
+.IP \[bu] 2
Jottacloud
.IP \[bu] 2
IBM COS S3
@@ -411,6 +419,8 @@ Servercore Object Storage
.IP \[bu] 2
SFTP
.IP \[bu] 2
+Shade
+.IP \[bu] 2
Sia
.IP \[bu] 2
SMB / CIFS
@@ -429,8 +439,6 @@ Tencent Cloud Object Storage (COS)
.IP \[bu] 2
Uloz.to
.IP \[bu] 2
-Uptobox
-.IP \[bu] 2
Wasabi
.IP \[bu] 2
WebDAV
@@ -1376,12 +1384,16 @@ DigitalOcean Spaces (https://rclone.org/s3/#digitalocean-spaces)
.IP \[bu] 2
Digi Storage (https://rclone.org/koofr/#digi-storage)
.IP \[bu] 2
+Drime (https://rclone.org/drime/)
+.IP \[bu] 2
Dropbox (https://rclone.org/dropbox/)
.IP \[bu] 2
Enterprise File Fabric (https://rclone.org/filefabric/)
.IP \[bu] 2
FileLu Cloud Storage (https://rclone.org/filelu/)
.IP \[bu] 2
+Filen (https://rclone.org/filen/)
+.IP \[bu] 2
Files.com (https://rclone.org/filescom/)
.IP \[bu] 2
FTP (https://rclone.org/ftp/)
@@ -1409,6 +1421,8 @@ iCloud Drive (https://rclone.org/iclouddrive/)
.IP \[bu] 2
Internet Archive (https://rclone.org/internetarchive/)
.IP \[bu] 2
+Internxt (https://rclone.org/internxt/)
+.IP \[bu] 2
Jottacloud (https://rclone.org/jottacloud/)
.IP \[bu] 2
Koofr (https://rclone.org/koofr/)
@@ -1456,6 +1470,8 @@ Seafile (https://rclone.org/seafile/)
.IP \[bu] 2
SFTP (https://rclone.org/sftp/)
.IP \[bu] 2
+Shade (https://rclone.org/shade/)
+.IP \[bu] 2
Sia (https://rclone.org/sia/)
.IP \[bu] 2
SMB (https://rclone.org/smb/)
@@ -1468,8 +1484,6 @@ Union (https://rclone.org/union/)
.IP \[bu] 2
Uloz.to (https://rclone.org/ulozto/)
.IP \[bu] 2
-Uptobox (https://rclone.org/uptobox/)
-.IP \[bu] 2
WebDAV (https://rclone.org/webdav/)
.IP \[bu] 2
Yandex Disk (https://rclone.org/yandex/)
@@ -2862,6 +2876,12 @@ Note that \f[V]ls\f[R] and \f[V]lsl\f[R] recurse by default - use
The other list commands \f[V]lsd\f[R],\f[V]lsf\f[R],\f[V]lsjson\f[R] do
not recurse by default - use \f[V]-R\f[R] to make them recurse.
.PP
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default.
+Use \f[V]--disable ListR\f[R] to suppress the behavior.
+See \f[V]--fast-list\f[R] (https://rclone.org/docs/#fast-list) for more
+details.
+.PP
Listing a nonexistent directory will produce an error except for remotes
which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
@@ -2988,6 +3008,12 @@ Note that \f[V]ls\f[R] and \f[V]lsl\f[R] recurse by default - use
The other list commands \f[V]lsd\f[R],\f[V]lsf\f[R],\f[V]lsjson\f[R] do
not recurse by default - use \f[V]-R\f[R] to make them recurse.
.PP
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default.
+Use \f[V]--disable ListR\f[R] to suppress the behavior.
+See \f[V]--fast-list\f[R] (https://rclone.org/docs/#fast-list) for more
+details.
+.PP
Listing a nonexistent directory will produce an error except for remotes
which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
@@ -3100,6 +3126,12 @@ Note that \f[V]ls\f[R] and \f[V]lsl\f[R] recurse by default - use
The other list commands \f[V]lsd\f[R],\f[V]lsf\f[R],\f[V]lsjson\f[R] do
not recurse by default - use \f[V]-R\f[R] to make them recurse.
.PP
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default.
+Use \f[V]--disable ListR\f[R] to suppress the behavior.
+See \f[V]--fast-list\f[R] (https://rclone.org/docs/#fast-list) for more
+details.
+.PP
Listing a nonexistent directory will produce an error except for remotes
which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
@@ -6260,14 +6292,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq]
-// Output: stories/The Quick Brown Fox!-20251121
+// Output: stories/The Quick Brown Fox!-20260130
\f[R]
.fi
.IP
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq]
-// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
+// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
\f[R]
.fi
.IP
@@ -7643,6 +7675,12 @@ Note that \f[V]ls\f[R] and \f[V]lsl\f[R] recurse by default - use
The other list commands \f[V]lsd\f[R],\f[V]lsf\f[R],\f[V]lsjson\f[R] do
not recurse by default - use \f[V]-R\f[R] to make them recurse.
.PP
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default.
+Use \f[V]--disable ListR\f[R] to suppress the behavior.
+See \f[V]--fast-list\f[R] (https://rclone.org/docs/#fast-list) for more
+details.
+.PP
Listing a nonexistent directory will produce an error except for remotes
which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
@@ -7796,9 +7834,8 @@ only, no files/objects.
If \f[V]--files-only\f[R] is specified then files will be returned only,
no directories.
.PP
-If \f[V]--stat\f[R] is set then the the output is not an array of items,
-but instead a single JSON blob will be returned about the item pointed
-to.
+If \f[V]--stat\f[R] is set then the output is not an array of items, but
+instead a single JSON blob will be returned about the item pointed to.
This will return an error if the item isn\[aq]t found, however on bucket
based backends (like s3, gcs, b2, azureblob etc) if the item isn\[aq]t
found it will return an empty directory, as it isn\[aq]t possible to
@@ -7850,6 +7887,12 @@ Note that \f[V]ls\f[R] and \f[V]lsl\f[R] recurse by default - use
The other list commands \f[V]lsd\f[R],\f[V]lsf\f[R],\f[V]lsjson\f[R] do
not recurse by default - use \f[V]-R\f[R] to make them recurse.
.PP
+List commands prefer a recursive method that uses more memory but fewer
+transactions by default.
+Use \f[V]--disable ListR\f[R] to suppress the behavior.
+See \f[V]--fast-list\f[R] (https://rclone.org/docs/#fast-list) for more
+details.
+.PP
Listing a nonexistent directory will produce an error except for remotes
which can\[aq]t have empty directories (e.g.
s3, swift, or gcs - the bucket-based remotes).
@@ -8015,7 +8058,7 @@ feature at all, then 1 PiB is set as both the total and the free size.
.SS Installing on Windows
.PP
To run \f[V]rclone mount on Windows\f[R], you will need to download and
-install WinFsp (http://www.secfs.net/winfsp/).
+install WinFsp (https://winfsp.dev).
.PP
WinFsp (https://github.com/winfsp/winfsp) is an open-source Windows File
System Proxy which makes it easy to write user space file systems for
@@ -9669,7 +9712,7 @@ feature at all, then 1 PiB is set as both the total and the free size.
.SS Installing on Windows
.PP
To run \f[V]rclone nfsmount on Windows\f[R], you will need to download
-and install WinFsp (http://www.secfs.net/winfsp/).
+and install WinFsp (https://winfsp.dev).
.PP
WinFsp (https://github.com/winfsp/winfsp) is an open-source Windows File
System Proxy which makes it easy to write user space file systems for
@@ -10889,7 +10932,7 @@ trailing newline.
.IP
.nf
\f[C]
-echo \[dq]secretpassword\[dq] | rclone obscure -
+echo \[aq]secretpassword\[aq] | rclone obscure -
\f[R]
.fi
.PP
@@ -15929,6 +15972,31 @@ docs (https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
If \f[V]--auth-key\f[R] is not provided then \f[V]serve s3\f[R] will
allow anonymous access.
.PP
+Like all rclone flags \f[V]--auth-key\f[R] can be set via environment
+variables, in this case \f[V]RCLONE_AUTH_KEY\f[R].
+Since this flag can be repeated, the input to \f[V]RCLONE_AUTH_KEY\f[R]
+is CSV encoded.
+Because the \f[V]accessKey,secretKey\f[R] pair contains a comma, it
+needs to be in quotes.
+.IP
+.nf
+\f[C]
+export RCLONE_AUTH_KEY=\[aq]\[dq]user,pass\[dq]\[aq]
+rclone serve s3 ...
+\f[R]
+.fi
+.PP
+Or to supply multiple identities:
+.IP
+.nf
+\f[C]
+export RCLONE_AUTH_KEY=\[aq]\[dq]user1,pass1\[dq],\[dq]user2,pass2\[dq]\[aq]
+rclone serve s3 ...
+\f[R]
+.fi
+.PP
+Setting this variable without quotes will produce an error.
+.PP
Please note that some clients may require HTTPS endpoints.
See the SSL docs for more information.
.PP
@@ -18721,6 +18789,7 @@ rclone serve webdav remote:path [flags]
--dir-cache-time Duration Time to cache directory entries for (default 5m0s)
--dir-perms FileMode Directory permissions (default 777)
--disable-dir-list Disable HTML directory list on GET request for a directory
+ --disable-zip Disable zip download of directories
--etag-hash string Which hash to use for the ETag, or auto or blank for off
--file-perms FileMode File permissions (default 666)
--gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000)
@@ -26787,7 +26856,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt=\[aq]{\[dq]CacheM
\f[R]
.fi
.PP
-The vfsOpt are as described in options/get and can be seen in the the
+The vfsOpt are as described in options/get and can be seen in the
\[dq]vfs\[dq] section when running and the mountOpt can be seen in the
\[dq]mount\[dq] section:
.IP
@@ -27195,6 +27264,51 @@ See the hashsum (https://rclone.org/commands/rclone_hashsum/) command
for more information on the above.
.PP
\f[B]Authentication is required for this call.\f[R]
+.SS operations/hashsumfile: Produces a hash for a single file.
+.PP
+Produces a hash for a single file using the hash named.
+.PP
+This takes the following parameters:
+.IP \[bu] 2
+fs - a remote name string e.g.
+\[dq]drive:\[dq]
+.IP \[bu] 2
+remote - a path within that remote e.g.
+\[dq]file.txt\[dq]
+.IP \[bu] 2
+hashType - type of hash to be used
+.IP \[bu] 2
+download - check by downloading rather than with hash (boolean)
+.IP \[bu] 2
+base64 - output the hashes in base64 rather than hex (boolean)
+.PP
+If you supply the download flag, it will download the data from the
+remote and create the hash on the fly.
+This can be useful for remotes that don\[aq]t support the given hash or
+if you really want to read all the data.
+.PP
+Returns:
+.IP \[bu] 2
+hash - hash for the file
+.IP \[bu] 2
+hashType - type of hash used
+.PP
+Example:
+.IP
+.nf
+\f[C]
+$ rclone rc --loopback operations/hashsumfile fs=/ remote=/bin/bash hashType=MD5 download=true base64=true
+{
+ \[dq]hashType\[dq]: \[dq]md5\[dq],
+ \[dq]hash\[dq]: \[dq]MDMw-fG2YXs7Uz5Nz-H68A==\[dq]
+}
+\f[R]
+.fi
+.PP
+See the hashsum (https://rclone.org/commands/rclone_hashsum/) command
+for more information on the above.
+.PP
+\f[B]Authentication is required for this call.\f[R]
.SS operations/list: List the given remote and path in JSON format
.PP
This takes the following parameters:
@@ -28420,868 +28534,6 @@ underlying differences show through.
.SS Features
.PP
Here is an overview of the major features of each cloud storage system.
-.PP
-.TS
-tab(@);
-lw(17.5n) cw(11.9n) cw(5.6n) cw(11.2n) cw(10.6n) cw(6.9n) cw(6.2n).
-T{
-Name
-T}@T{
-Hash
-T}@T{
-ModTime
-T}@T{
-Case Insensitive
-T}@T{
-Duplicate Files
-T}@T{
-MIME Type
-T}@T{
-Metadata
-T}
-_
-T{
-1Fichier
-T}@T{
-Whirlpool
-T}@T{
--
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-R
-T}@T{
--
-T}
-T{
-Akamai Netstorage
-T}@T{
-MD5, SHA256
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R
-T}@T{
--
-T}
-T{
-Amazon S3 (or S3 compatible)
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
-RWU
-T}
-T{
-Backblaze B2
-T}@T{
-SHA1
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
-Box
-T}@T{
-SHA1
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Citrix ShareFile
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Cloudinary
-T}@T{
-MD5
-T}@T{
-R
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
--
-T}@T{
--
-T}
-T{
-Dropbox
-T}@T{
-DBHASH ¹
-T}@T{
-R
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Enterprise File Fabric
-T}@T{
--
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
-FileLu Cloud Storage
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-R
-T}@T{
--
-T}
-T{
-Files.com
-T}@T{
-MD5, CRC32
-T}@T{
-DR/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-R
-T}@T{
--
-T}
-T{
-FTP
-T}@T{
--
-T}@T{
-R/W ¹⁰
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Gofile
-T}@T{
-MD5
-T}@T{
-DR/W
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-R
-T}@T{
--
-T}
-T{
-Google Cloud Storage
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
-Google Drive
-T}@T{
-MD5, SHA1, SHA256
-T}@T{
-DR/W
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-R/W
-T}@T{
-DRWU
-T}
-T{
-Google Photos
-T}@T{
--
-T}@T{
--
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-R
-T}@T{
--
-T}
-T{
-HDFS
-T}@T{
--
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-HiDrive
-T}@T{
-HiDrive ¹²
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-HTTP
-T}@T{
--
-T}@T{
-R
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R
-T}@T{
-R
-T}
-T{
-iCloud Drive
-T}@T{
--
-T}@T{
-R
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Internet Archive
-T}@T{
-MD5, SHA1, CRC32
-T}@T{
-R/W ¹¹
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
-RWU
-T}
-T{
-Jottacloud
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-R
-T}@T{
-RW
-T}
-T{
-Koofr
-T}@T{
-MD5
-T}@T{
--
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Linkbox
-T}@T{
--
-T}@T{
-R
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Mail.ru Cloud
-T}@T{
-Mailru ⁶
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Mega
-T}@T{
--
-T}@T{
--
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
--
-T}@T{
--
-T}
-T{
-Memory
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Microsoft Azure Blob Storage
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
-Microsoft Azure Files Storage
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
-Microsoft OneDrive
-T}@T{
-QuickXorHash ⁵
-T}@T{
-DR/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-R
-T}@T{
-DRW
-T}
-T{
-OpenDrive
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-Partial ⁸
-T}@T{
--
-T}@T{
--
-T}
-T{
-OpenStack Swift
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
-Oracle Object Storage
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
-RU
-T}
-T{
-pCloud
-T}@T{
-MD5, SHA1 ⁷
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-W
-T}@T{
--
-T}
-T{
-PikPak
-T}@T{
-MD5
-T}@T{
-R
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R
-T}@T{
--
-T}
-T{
-Pixeldrain
-T}@T{
-SHA256
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R
-T}@T{
-RW
-T}
-T{
-premiumize.me
-T}@T{
--
-T}@T{
--
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-R
-T}@T{
--
-T}
-T{
-put.io
-T}@T{
-CRC-32
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-R
-T}@T{
--
-T}
-T{
-Proton Drive
-T}@T{
-SHA1
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R
-T}@T{
--
-T}
-T{
-QingStor
-T}@T{
-MD5
-T}@T{
-- ⁹
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R/W
-T}@T{
--
-T}
-T{
-Quatrix by Maytech
-T}@T{
--
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Seafile
-T}@T{
--
-T}@T{
--
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-SFTP
-T}@T{
-MD5, SHA1 ²
-T}@T{
-DR/W
-T}@T{
-Depends
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Sia
-T}@T{
--
-T}@T{
--
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-SMB
-T}@T{
--
-T}@T{
-R/W
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-SugarSync
-T}@T{
--
-T}@T{
--
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Storj
-T}@T{
--
-T}@T{
-R
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Uloz.to
-T}@T{
-MD5, SHA256 ¹³
-T}@T{
--
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
--
-T}@T{
--
-T}
-T{
-Uptobox
-T}@T{
--
-T}@T{
--
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
--
-T}@T{
--
-T}
-T{
-WebDAV
-T}@T{
-MD5, SHA1 ³
-T}@T{
-R ⁴
-T}@T{
-Depends
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-Yandex Disk
-T}@T{
-MD5
-T}@T{
-R/W
-T}@T{
-No
-T}@T{
-No
-T}@T{
-R
-T}@T{
--
-T}
-T{
-Zoho WorkDrive
-T}@T{
--
-T}@T{
--
-T}@T{
-No
-T}@T{
-No
-T}@T{
--
-T}@T{
--
-T}
-T{
-The local filesystem
-T}@T{
-All
-T}@T{
-DR/W
-T}@T{
-Depends
-T}@T{
-No
-T}@T{
--
-T}@T{
-DRWU
-T}
-.TE
-.PP
-¹ Dropbox supports its own custom
-hash (https://www.dropbox.com/developers/reference/content-hash).
-This is an SHA256 sum of all the 4 MiB block SHA256s.
-.PP
-² SFTP supports checksums if the same login has shell access and
-\f[V]md5sum\f[R] or \f[V]sha1sum\f[R] as well as \f[V]echo\f[R] are in
-the remote\[aq]s PATH.
-.PP
-³ WebDAV supports hashes when used with Fastmail Files, Owncloud and
-Nextcloud only.
-.PP
-⁴ WebDAV supports modtimes when used with Fastmail Files, Owncloud and
-Nextcloud only.
-.PP
-⁵
-QuickXorHash (https://docs.microsoft.com/en-us/onedrive/developer/code-snippets/quickxorhash)
-is Microsoft\[aq]s own hash.
-.PP
-⁶ Mail.ru uses its own modified SHA1 hash
-.PP
-⁷ pCloud only supports SHA1 (not MD5) in its EU region
-.PP
-⁸ Opendrive does not support creation of duplicate files using their web
-client interface or other stock clients, but the underlying storage
-platform has been determined to allow duplicate files, and it is
-possible to create them with \f[V]rclone\f[R].
-It may be that this is a mistake or an unsupported feature.
-.PP
-⁹ QingStor does not support SetModTime for objects bigger than 5 GiB.
-.PP
-¹⁰ FTP supports modtimes for the major FTP servers, and also others if
-they advertised required protocol extensions.
-See this (https://rclone.org/ftp/#modification-times) for more details.
-.PP
-¹¹ Internet Archive requires option \f[V]wait_archive\f[R] to be set to
-a non-zero value for full modtime support.
-.PP
-¹² HiDrive supports its own custom
-hash (https://static.hidrive.com/dev/0001).
-It combines SHA1 sums for each 4 KiB block hierarchically to a single
-top-level sum.
-.PP
-¹³ Uloz.to provides server-calculated MD5 hash upon file upload.
-MD5 and SHA256 hashes are client-calculated and stored as metadata
-fields.
.SS Hash
.PP
The cloud storage system supports various hash types of the objects.
@@ -30196,1350 +29448,6 @@ See the metadata docs (https://rclone.org/docs/#metadata) for more info.
.PP
All rclone remotes support a base command set.
Other features depend upon backend-specific capabilities.
-.PP
-.TS
-tab(@);
-lw(14.4n) cw(3.6n) cw(3.1n) cw(3.1n) cw(4.6n) cw(4.6n) cw(3.6n) cw(7.2n) lw(9.8n) cw(7.2n) cw(3.6n) cw(5.1n).
-T{
-Name
-T}@T{
-Purge
-T}@T{
-Copy
-T}@T{
-Move
-T}@T{
-DirMove
-T}@T{
-CleanUp
-T}@T{
-ListR
-T}@T{
-StreamUpload
-T}@T{
-MultithreadUpload
-T}@T{
-LinkSharing
-T}@T{
-About
-T}@T{
-EmptyDir
-T}
-_
-T{
-1Fichier
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Akamai Netstorage
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Amazon S3 (or S3 compatible)
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Backblaze B2
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Box
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Citrix ShareFile
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Dropbox
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Cloudinary
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Enterprise File Fabric
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Files.com
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-FTP
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Gofile
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Google Cloud Storage
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Google Drive
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Google Photos
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-HDFS
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-HiDrive
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-HTTP
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-iCloud Drive
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-ImageKit
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Internet Archive
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}
-T{
-Jottacloud
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Koofr
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Mail.ru Cloud
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Mega
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Memory
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Microsoft Azure Blob Storage
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Microsoft Azure Files Storage
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Microsoft OneDrive
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes ⁵
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-OpenDrive
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-OpenStack Swift
-T}@T{
-Yes ¹
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}
-T{
-Oracle Object Storage
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-pCloud
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-PikPak
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Pixeldrain
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-premiumize.me
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-put.io
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Proton Drive
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-QingStor
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Quatrix by Maytech
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Seafile
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-SFTP
-T}@T{
-No
-T}@T{
-Yes ⁴
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Sia
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-SMB
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-SugarSync
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Storj
-T}@T{
-Yes ²
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-Uloz.to
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}
-T{
-Uptobox
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}
-T{
-WebDAV
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes ³
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Yandex Disk
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-Zoho WorkDrive
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-T{
-The local filesystem
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}@T{
-No
-T}@T{
-Yes
-T}@T{
-Yes
-T}
-.TE
-.PP
-¹ Note Swift implements this in order to delete directory markers but it
-doesn\[aq]t actually have a quicker way of deleting files other than
-deleting them individually.
-.PP
-² Storj implements this efficiently only for entire buckets.
-If purging a directory inside a bucket, files are deleted individually.
-.PP
-³ StreamUpload is not supported with Nextcloud
-.PP
-⁴ Use the \f[V]--sftp-copy-is-hardlink\f[R] flag to enable.
-.PP
-⁵ Use the \f[V]--onedrive-delta\f[R] flag to enable.
.SS Purge
.PP
This deletes a directory quicker than just deleting all the files in the
@@ -31625,6 +29533,235 @@ See rclone about command (https://rclone.org/commands/rclone_about/)
The remote supports empty directories.
See Limitations (https://rclone.org/bugs/#limitations) for details.
Most Object/Bucket-based remotes do not support this.
+.SH Tiers
+.PP
+Rclone backends are divided into tiers to give users an idea of the
+stability of each backend.
+.PP
+.TS
+tab(@);
+l l l.
+T{
+Tier
+T}@T{
+Label
+T}@T{
+Intended meaning
+T}
+_
+T{
+T}@T{
+Core
+T}@T{
+Production-grade, first-class
+T}
+T{
+T}@T{
+Stable
+T}@T{
+Well-supported, minor gaps
+T}
+T{
+T}@T{
+Supported
+T}@T{
+Works for many uses; known caveats
+T}
+T{
+T}@T{
+Experimental
+T}@T{
+Use with care; expect gaps/changes
+T}
+T{
+T}@T{
+Deprecated
+T}@T{
+No longer maintained or supported
+T}
+.TE
+.SS Overview
+.PP
+Here is a summary of all backends:
+.SS Scoring
+.PP
+Here is how the backends are scored.
+.SS Features
+.PP
+These are useful optional features a backend should have in rough order
+of importance.
+Each one of these scores a point for the Features column.
+.IP \[bu] 2
+F1: Hash(es)
+.IP \[bu] 2
+F2: Modtime
+.IP \[bu] 2
+F3: Stream upload
+.IP \[bu] 2
+F4: Copy/Move
+.IP \[bu] 2
+F5: DirMove
+.PD 0
+.P
+.PD
+.IP \[bu] 2
+F6: Metadata
+.IP \[bu] 2
+F7: MultipartUpload
+.SS Tier
+.PP
+The tier is decided after determining these attributes.
+Some discretion is allowed in tiering as some of these attributes are
+more important than others.
+.PP
+.TS
+tab(@);
+lw(5.5n) lw(9.2n) lw(11.1n) lw(13.8n) lw(16.6n) lw(13.8n).
+T{
+Attr
+T}@T{
+T1: Core
+T}@T{
+T2: Stable
+T}@T{
+T3: Supported
+T}@T{
+T4: Experimental
+T}@T{
+T5: Incubator
+T}
+_
+T{
+Maintainers
+T}@T{
+>=2
+T}@T{
+>=1
+T}@T{
+>=1
+T}@T{
+>=0
+T}@T{
+>=0
+T}
+T{
+API source
+T}@T{
+Official
+T}@T{
+Official
+T}@T{
+Either
+T}@T{
+Either
+T}@T{
+Either
+T}
+T{
+Features (F1-F7)
+T}@T{
+>=5/7
+T}@T{
+>=4/7
+T}@T{
+>=3/7
+T}@T{
+>=2/7
+T}@T{
+N/A
+T}
+T{
+Integration tests
+T}@T{
+All green
+T}@T{
+All green
+T}@T{
+Nearly all green
+T}@T{
+Some flaky
+T}@T{
+N/A
+T}
+T{
+Error handling
+T}@T{
+Pacer
+T}@T{
+Pacer
+T}@T{
+Retries
+T}@T{
+Retries
+T}@T{
+N/A
+T}
+T{
+Data integrity
+T}@T{
+Hashes, alt, modtime
+T}@T{
+Hashes or alt
+T}@T{
+Hash or modtime
+T}@T{
+Best-effort
+T}@T{
+N/A
+T}
+T{
+Perf baseline
+T}@T{
+Bench within 2x S3
+T}@T{
+Bench doc
+T}@T{
+Anecdotal OK
+T}@T{
+Optional
+T}@T{
+N/A
+T}
+T{
+Adoption
+T}@T{
+Widely used
+T}@T{
+Often used
+T}@T{
+Some use
+T}@T{
+N/A
+T}@T{
+N/A
+T}
+T{
+Docs completeness
+T}@T{
+Full
+T}@T{
+Full
+T}@T{
+Basic
+T}@T{
+Minimal
+T}@T{
+Minimal
+T}
+T{
+Security
+T}@T{
+Principle-of-least-privilege
+T}@T{
+Reasonable scopes
+T}@T{
+Basic auth
+T}@T{
+Works
+T}@T{
+Works
+T}
+.TE
.SH Global Flags
.PP
This describes the global flags available to every rclone command split
@@ -31741,7 +29878,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.72.0\[dq])
+ --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.73.0\[dq])
\f[R]
.fi
.SS Performance
@@ -31972,6 +30109,7 @@ Backend-only flags (these can be set in the config file also).
--azureblob-client-id string The ID of the client in use
--azureblob-client-secret string One of the service principal\[aq]s client secrets
--azureblob-client-send-certificate-chain Send the certificate chain when using certificate auth
+ --azureblob-connection-string string Storage Connection String
--azureblob-copy-concurrency int Concurrency for multipart copy (default 512)
--azureblob-copy-cutoff SizeSuffix Cutoff for switching to multipart copy (default 8Mi)
--azureblob-delete-snapshots string Set to specify how to deal with snapshots on blob deletion
@@ -32008,7 +30146,7 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-client-id string The ID of the client in use
--azurefiles-client-secret string One of the service principal\[aq]s client secrets
--azurefiles-client-send-certificate-chain Send the certificate chain when using certificate auth
- --azurefiles-connection-string string Azure Files Connection String
+ --azurefiles-connection-string string Storage Connection String
--azurefiles-description string Description of the remote
--azurefiles-disable-instance-discovery Skip requesting Microsoft Entra instance metadata
--azurefiles-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPeriod,InvalidUtf8,Dot)
@@ -32020,12 +30158,13 @@ Backend-only flags (these can be set in the config file also).
--azurefiles-msi-mi-res-id string Azure resource ID of the user-assigned MSI to use, if any
--azurefiles-msi-object-id string Object ID of the user-assigned MSI to use, if any
--azurefiles-password string The user\[aq]s password (obscured)
- --azurefiles-sas-url string SAS URL
+ --azurefiles-sas-url string SAS URL for container level access only
--azurefiles-service-principal-file string Path to file containing credentials for use with a service principal
--azurefiles-share-name string Azure Files Share Name
--azurefiles-tenant string ID of the service principal\[aq]s tenant. Also called its directory ID
--azurefiles-upload-concurrency int Concurrency for multipart uploads (default 16)
--azurefiles-use-az Use Azure CLI tool az for authentication
+ --azurefiles-use-emulator Uses local storage emulator if provided as \[aq]true\[aq]
--azurefiles-use-msi Use a managed service identity to authenticate (only works in Azure)
--azurefiles-username string User name (usually an email address)
--b2-account string Account ID or Application Key ID
@@ -32127,6 +30266,16 @@ Backend-only flags (these can be set in the config file also).
--doi-doi string The DOI or the doi.org URL
--doi-doi-resolver-api-url string The URL of the DOI resolver API to use
--doi-provider string DOI provider
+ --drime-access-token string API Access token
+ --drime-chunk-size SizeSuffix Chunk size to use for uploading (default 5Mi)
+ --drime-description string Description of the remote
+ --drime-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot)
+ --drime-hard-delete Delete files permanently rather than putting them into the trash
+ --drime-list-chunk int Number of items to list in each call (default 1000)
+ --drime-root-folder-id string ID of the root folder
+ --drime-upload-concurrency int Concurrency for multipart uploads and copies (default 4)
+ --drime-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
+ --drime-workspace-id string Account ID
--drive-acknowledge-abuse Set to allow files which return cannotDownloadAbusiveFile to be downloaded
--drive-allow-import-name-change Allow the filetype to change when uploading Google docs
--drive-auth-owner-only Only consider files owned by the authenticated user
@@ -32147,6 +30296,7 @@ Backend-only flags (these can be set in the config file also).
--drive-import-formats string Comma separated list of preferred formats for uploading Google docs
--drive-keep-revision-forever Keep new head revision of each file forever
--drive-list-chunk int Size of listing chunk 100-1000, 0 to disable (default 1000)
+ --drive-metadata-enforce-expansive-access Whether the request should enforce expansive access rules
--drive-metadata-labels Bits Control whether labels should be read or written in metadata (default off)
--drive-metadata-owner Bits Control whether owner should be read or written in metadata (default read)
--drive-metadata-permissions Bits Control whether permissions should be read or written in metadata (default off)
@@ -32215,6 +30365,17 @@ Backend-only flags (these can be set in the config file also).
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
+ --filen-api-key string API Key for your Filen account (obscured)
+ --filen-auth-version string Authentication Version (internal use only)
+ --filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
+ --filen-description string Description of the remote
+ --filen-email string Email of your Filen account
+ --filen-encoding Encoding The encoding for the backend (default Slash,Del,Ctl,InvalidUtf8,Dot)
+ --filen-master-keys string Master Keys (internal use only)
+ --filen-password string Password of your Filen account (obscured)
+ --filen-private-key string Private RSA Key (internal use only)
+ --filen-public-key string Public RSA Key (internal use only)
+ --filen-upload-concurrency int Concurrency for chunked uploads (default 16)
--filescom-api-key string The API key used to authenticate with Files.com
--filescom-description string Description of the remote
--filescom-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot)
@@ -32258,7 +30419,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
- --gcs-endpoint string Endpoint for the service
+ --gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it
@@ -32347,6 +30508,11 @@ Backend-only flags (these can be set in the config file also).
--internetarchive-item-metadata stringArray Metadata to be set on the IA item, this is different from file-level metadata that can be set using --metadata-set
--internetarchive-secret-access-key string IAS3 Secret Key (password)
--internetarchive-wait-archive Duration Timeout for waiting the server\[aq]s processing tasks (specifically archive and book_op) to finish (default 0s)
+ --internxt-description string Description of the remote
+ --internxt-email string Email of your Internxt account
+ --internxt-encoding Encoding The encoding for the backend (default Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot)
+ --internxt-pass string Password (obscured)
+ --internxt-skip-hash-validation Skip hash validation when downloading files (default true)
--jottacloud-auth-url string Auth server URL
--jottacloud-client-credentials Use client credentials OAuth flow
--jottacloud-client-id string OAuth Client Id
@@ -32409,6 +30575,7 @@ Backend-only flags (these can be set in the config file also).
--mega-use-https Use HTTPS for transfers
--mega-user string User name
--memory-description string Description of the remote
+ --memory-discard If set all writes will be discarded and reads will return an error
--netstorage-account string Set the NetStorage account name
--netstorage-description string Description of the remote
--netstorage-host string Domain+path of NetStorage host to connect to
@@ -32584,6 +30751,10 @@ Backend-only flags (these can be set in the config file also).
--s3-provider string Choose your S3 provider
--s3-region string Region to connect to
--s3-requester-pays Enables requester pays option when interacting with S3 bucket
+ --s3-role-arn string ARN of the IAM role to assume
+ --s3-role-external-id string External ID for assumed role
+ --s3-role-session-duration string Session duration for assumed role
+ --s3-role-session-name string Session name for assumed role
--s3-sdk-log-mode Bits Set to debug the SDK (default Off)
--s3-secret-access-key string AWS Secret Access Key (password)
--s3-server-side-encryption string The server-side encryption algorithm used when storing this object in S3
@@ -32667,6 +30838,16 @@ Backend-only flags (these can be set in the config file also).
--sftp-user string SSH username (default \[dq]$USER\[dq])
--sftp-xxh128sum-command string The command used to read XXH128 hashes
--sftp-xxh3sum-command string The command used to read XXH3 hashes
+ --shade-api-key string An API key for your account
+ --shade-chunk-size SizeSuffix Chunk size to use for uploading (default 64Mi)
+ --shade-description string Description of the remote
+ --shade-drive-id string The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive
+ --shade-encoding Encoding The encoding for the backend (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+ --shade-endpoint string Endpoint for the service
+ --shade-max-upload-parts int Maximum amount of parts in a multipart upload (default 10000)
+ --shade-token string JWT Token for performing Shade FS operations. Don\[aq]t set this value - rclone will set it automatically
+ --shade-token-expiry string JWT Token Expiration time. Don\[aq]t set this value - rclone will set it automatically
+ --shade-upload-concurrency int Concurrency for multipart uploads and copies. This is the number of chunks of the same file that are uploaded concurrently for multipart uploads and copies (default 4)
--sharefile-auth-url string Auth server URL
--sharefile-chunk-size SizeSuffix Upload chunk size (default 64Mi)
--sharefile-client-credentials Use client credentials OAuth flow
@@ -32758,10 +30939,6 @@ Backend-only flags (these can be set in the config file also).
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default \[dq]ff\[dq])
--union-upstreams string List of space separated upstreams
- --uptobox-access-token string Your access token
- --uptobox-description string Description of the remote
- --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
- --uptobox-private Set to make uploaded files private
--webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token
@@ -34968,7 +33145,19 @@ The following backends have known issues that need more investigation:
\f[V]TestBisyncRemoteRemote/normalization\f[R] (https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
.RE
.IP \[bu] 2
-Updated: 2025-11-21-010037
+\f[V]TestSeafile\f[R] (\f[V]seafile\f[R])
+.RS 2
+.IP \[bu] 2
+\f[V]TestBisyncLocalRemote/volatile\f[R] (https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt)
+.RE
+.IP \[bu] 2
+\f[V]TestSeafileV6\f[R] (\f[V]seafile\f[R])
+.RS 2
+.IP \[bu] 2
+\f[V]TestBisyncLocalRemote/volatile\f[R] (https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt)
+.RE
+.IP \[bu] 2
+Updated: 2026-01-30-010015
.PP
The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:
@@ -34977,6 +33166,8 @@ known issues that are deemed unfixable for the time being:
.IP \[bu] 2
\f[V]TestCache\f[R] (\f[V]cache\f[R])
.IP \[bu] 2
+\f[V]TestDrime\f[R] (\f[V]drime\f[R])
+.IP \[bu] 2
\f[V]TestFileLu\f[R] (\f[V]filelu\f[R])
.IP \[bu] 2
\f[V]TestFilesCom\f[R] (\f[V]filescom\f[R])
@@ -36882,6 +35073,8 @@ Cloudflare R2
.IP \[bu] 2
Arvan Cloud Object Storage (AOS)
.IP \[bu] 2
+Bizfly Cloud Simple Storage
+.IP \[bu] 2
Cubbit DS3
.IP \[bu] 2
DigitalOcean Spaces
@@ -37777,6 +35970,85 @@ that the \f[V]aws\f[R] CLI tool does and the other AWS SDKs.
If none of these options actually ends up providing \f[V]rclone\f[R] with
AWS credentials then S3 interaction will be non-authenticated (see the
anonymous access section for more info).
+.SS Assume Role (Cross-Account Access)
+.PP
+If you need to access S3 resources in a different AWS account, you can
+use IAM role assumption.
+This is useful for cross-account access scenarios where you have
+credentials in one account but need to access resources in another
+account.
+.PP
+To use assume role, configure the following parameters:
+.IP \[bu] 2
+\f[V]role_arn\f[R] - The ARN (Amazon Resource Name) of the IAM role to
+assume in the target account.
+Format: \f[V]arn:aws:iam::ACCOUNT-ID:role/ROLE-NAME\f[R]
+.IP \[bu] 2
+\f[V]role_session_name\f[R] (optional) - A name for the assumed role
+session.
+If not specified, rclone will generate one automatically.
+.IP \[bu] 2
+\f[V]role_session_duration\f[R] (optional) - Duration for which the
+assumed role credentials are valid.
+If not specified, the AWS default duration will be used (typically 1
+hour).
+.IP \[bu] 2
+\f[V]role_external_id\f[R] (optional) - An external ID required by the
+role\[aq]s trust policy for additional security.
+This is typically used when the role is accessed by a third party.
+.PP
+The assume role feature works with both direct credentials
+(\f[V]env_auth = false\f[R]) and environment-based authentication
+(\f[V]env_auth = true\f[R]).
+Rclone will first authenticate using the base credentials, then use
+those credentials to assume the specified role.
+.PP
+Example configuration for cross-account access:
+.IP
+.nf
+\f[C]
+[s3-cross-account]
+type = s3
+provider = AWS
+env_auth = true
+region = us-east-1
+role_arn = arn:aws:iam::123456789012:role/CrossAccountS3Role
+role_session_name = rclone-session
+role_external_id = unique-role-external-id-12345
+\f[R]
+.fi
+.PP
+In this example:
+.IP \[bu] 2
+Base credentials are obtained from the environment (IAM role,
+credentials file, or environment variables)
+.IP \[bu] 2
+These credentials are then used to assume the role
+\f[V]CrossAccountS3Role\f[R] in account \f[V]123456789012\f[R]
+.IP \[bu] 2
+An external ID is provided for additional security as required by the
+role\[aq]s trust policy
+.PP
+The target role\[aq]s trust policy in the destination account must allow
+the source account or user to assume it.
+Example trust policy:
+.IP
+.nf
+\f[C]
+{
+ \[dq]Version\[dq]: \[dq]2012-10-17\[dq],
+ \[dq]Statement\[dq]: [
+ {
+ \[dq]Effect\[dq]: \[dq]Allow\[dq],
+ \[dq]Principal\[dq]: {
+ \[dq]AWS\[dq]: \[dq]arn:aws:iam::SOURCE-ACCOUNT-ID:root\[dq]
+ },
+ \[dq]Action\[dq]: \[dq]sts:AssumeRole\[dq],
+ \[dq]Condition\[dq]: {
+ \[dq]StringEquals\[dq]: {
+ \[dq]sts:ExternalID\[dq]: \[dq]unique-role-external-id-12345\[dq]
+ }
+ }
+ }
+ ]
+}
+\f[R]
+.fi
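+.PP
+As a quick check that role assumption is working, you can list the
+buckets visible to the assumed role using the remote configured above
+(the remote name \f[V]s3-cross-account\f[R] is just the example name
+used here):
+.IP
+.nf
+\f[C]
+rclone lsd s3-cross-account:
+\f[R]
+.fi
+.PP
+If the trust policy or external ID does not match, the
+\f[V]sts:AssumeRole\f[R] call will fail with an access denied error
+before any S3 request is made.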
.SS S3 Permissions
.PP
When using the \f[V]sync\f[R] subcommand of \f[V]rclone\f[R] the
@@ -37894,13 +36166,13 @@ all the files to be uploaded as multipart.
.SS Standard options
.PP
Here are the Standard options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade,
-GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia,
-Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale,
-OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS,
-Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology,
-TencentCOS, Wasabi, Zata, Other).
+Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph,
+ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu,
+FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS,
+Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease,
+Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj,
+Synology, TencentCOS, Wasabi, Zata, Other).
.SS --s3-provider
.PP
Choose your S3 provider.
@@ -37936,6 +36208,12 @@ Alibaba Cloud Object Storage System (OSS) formerly Aliyun
Arvan Cloud Object Storage (AOS)
.RE
.IP \[bu] 2
+\[dq]BizflyCloud\[dq]
+.RS 2
+.IP \[bu] 2
+Bizfly Cloud Simple Storage
+.RE
+.IP \[bu] 2
\[dq]Ceph\[dq]
.RS 2
.IP \[bu] 2
@@ -38270,7 +36548,7 @@ Config: region
Env Var: RCLONE_S3_REGION
.IP \[bu] 2
Provider:
-AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
+AWS,BizflyCloud,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -38541,6 +36819,22 @@ Needs location constraint us-gov-west-1.
Provider: AWS
.RE
.IP \[bu] 2
+\[dq]hn\[dq]
+.RS 2
+.IP \[bu] 2
+Ha Noi
+.IP \[bu] 2
+Provider: BizflyCloud
+.RE
+.IP \[bu] 2
+\[dq]hcm\[dq]
+.RS 2
+.IP \[bu] 2
+Ho Chi Minh
+.IP \[bu] 2
+Provider: BizflyCloud
+.RE
+.IP \[bu] 2
\[dq]\[dq]
.RS 2
.IP \[bu] 2
@@ -39263,12 +37557,13 @@ Petersburg
Provider: Selectel,Servercore
.RE
.IP \[bu] 2
-\[dq]gis-1\[dq]
+\[dq]ru-3\[dq]
.RS 2
.IP \[bu] 2
-Moscow
+St. Petersburg
.IP \[bu] 2
-Provider: Servercore
+Provider: Selectel
.RE
.IP \[bu] 2
\[dq]ru-7\[dq]
@@ -39276,7 +37571,31 @@ Provider: Servercore
.IP \[bu] 2
Moscow
.IP \[bu] 2
-Provider: Servercore
+Provider: Selectel,Servercore
+.RE
+.IP \[bu] 2
+\[dq]gis-1\[dq]
+.RS 2
+.IP \[bu] 2
+Moscow
+.IP \[bu] 2
+Provider: Selectel,Servercore
+.RE
+.IP \[bu] 2
+\[dq]kz-1\[dq]
+.RS 2
+.IP \[bu] 2
+Kazakhstan
+.IP \[bu] 2
+Provider: Selectel
+.RE
+.IP \[bu] 2
+\[dq]uz-2\[dq]
+.RS 2
+.IP \[bu] 2
+Uzbekistan
+.IP \[bu] 2
+Provider: Selectel
.RE
.IP \[bu] 2
\[dq]uz-2\[dq]
@@ -39356,7 +37675,7 @@ Config: endpoint
Env Var: RCLONE_S3_ENDPOINT
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -39583,6 +37902,22 @@ Tabriz Iran (Shahriar)
Provider: ArvanCloud
.RE
.IP \[bu] 2
+\[dq]hn.ss.bfcplatform.vn\[dq]
+.RS 2
+.IP \[bu] 2
+Hanoi endpoint
+.IP \[bu] 2
+Provider: BizflyCloud
+.RE
+.IP \[bu] 2
+\[dq]hcm.ss.bfcplatform.vn\[dq]
+.RS 2
+.IP \[bu] 2
+Ho Chi Minh endpoint
+.IP \[bu] 2
+Provider: BizflyCloud
+.RE
+.IP \[bu] 2
\[dq]eos-wuxi-1.cmecloud.cn\[dq]
.RS 2
.IP \[bu] 2
@@ -40664,7 +38999,7 @@ Provider: Liara
\[dq]nl-ams-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Amsterdam (Netherlands), nl-ams-1
+Amsterdam, NL (nl-ams-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40672,7 +39007,7 @@ Provider: Linode
\[dq]us-southeast-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Atlanta, GA (USA), us-southeast-1
+Atlanta, GA, US (us-southeast-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40680,7 +39015,7 @@ Provider: Linode
\[dq]in-maa-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Chennai (India), in-maa-1
+Chennai, IN (in-maa-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40688,7 +39023,7 @@ Provider: Linode
\[dq]us-ord-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Chicago, IL (USA), us-ord-1
+Chicago, IL, US (us-ord-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40696,7 +39031,7 @@ Provider: Linode
\[dq]eu-central-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Frankfurt (Germany), eu-central-1
+Frankfurt, DE (eu-central-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40704,7 +39039,7 @@ Provider: Linode
\[dq]id-cgk-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Jakarta (Indonesia), id-cgk-1
+Jakarta, ID (id-cgk-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40712,7 +39047,7 @@ Provider: Linode
\[dq]gb-lon-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-London 2 (Great Britain), gb-lon-1
+London 2, UK (gb-lon-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40720,7 +39055,7 @@ Provider: Linode
\[dq]us-lax-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Los Angeles, CA (USA), us-lax-1
+Los Angeles, CA, US (us-lax-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40728,15 +39063,7 @@ Provider: Linode
\[dq]es-mad-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Madrid (Spain), es-mad-1
-.IP \[bu] 2
-Provider: Linode
-.RE
-.IP \[bu] 2
-\[dq]au-mel-1.linodeobjects.com\[dq]
-.RS 2
-.IP \[bu] 2
-Melbourne (Australia), au-mel-1
+Madrid, ES (es-mad-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40744,7 +39071,7 @@ Provider: Linode
\[dq]us-mia-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Miami, FL (USA), us-mia-1
+Miami, FL, US (us-mia-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40752,7 +39079,7 @@ Provider: Linode
\[dq]it-mil-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Milan (Italy), it-mil-1
+Milan, IT (it-mil-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40760,7 +39087,7 @@ Provider: Linode
\[dq]us-east-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Newark, NJ (USA), us-east-1
+Newark, NJ, US (us-east-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40768,7 +39095,7 @@ Provider: Linode
\[dq]jp-osa-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Osaka (Japan), jp-osa-1
+Osaka, JP (jp-osa-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40776,7 +39103,7 @@ Provider: Linode
\[dq]fr-par-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Paris (France), fr-par-1
+Paris, FR (fr-par-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40784,7 +39111,7 @@ Provider: Linode
\[dq]br-gru-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-São Paulo (Brazil), br-gru-1
+Sao Paulo, BR (br-gru-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40792,7 +39119,7 @@ Provider: Linode
\[dq]us-sea-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Seattle, WA (USA), us-sea-1
+Seattle, WA, US (us-sea-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40800,7 +39127,7 @@ Provider: Linode
\[dq]ap-south-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Singapore, ap-south-1
+Singapore, SG (ap-south-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40808,7 +39135,7 @@ Provider: Linode
\[dq]sg-sin-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Singapore 2, sg-sin-1
+Singapore 2, SG (sg-sin-1)
.IP \[bu] 2
Provider: Linode
.RE
@@ -40816,15 +39143,23 @@ Provider: Linode
\[dq]se-sto-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Stockholm (Sweden), se-sto-1
+Stockholm, SE (se-sto-1)
.IP \[bu] 2
Provider: Linode
.RE
.IP \[bu] 2
-\[dq]us-iad-1.linodeobjects.com\[dq]
+\[dq]jp-tyo-1.linodeobjects.com\[dq]
.RS 2
.IP \[bu] 2
-Washington, DC, (USA), us-iad-1
+Tokyo 3, JP (jp-tyo-1)
+.IP \[bu] 2
+Provider: Linode
+.RE
+.IP \[bu] 2
+\[dq]us-iad-10.linodeobjects.com\[dq]
+.RS 2
+.IP \[bu] 2
+Washington, DC, US (us-iad-10)
.IP \[bu] 2
Provider: Linode
.RE
@@ -41371,7 +39706,25 @@ Provider: SeaweedFS
\[dq]s3.ru-1.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
-Saint Petersburg
+St. Petersburg
+.IP \[bu] 2
+Provider: Selectel
+.RE
+.IP \[bu] 2
+\[dq]s3.ru-3.storage.selcloud.ru\[dq]
+.RS 2
+.IP \[bu] 2
+St. Petersburg
+.IP \[bu] 2
+Provider: Selectel
+.RE
+.IP \[bu] 2
+\[dq]s3.ru-7.storage.selcloud.ru\[dq]
+.RS 2
+.IP \[bu] 2
+Moscow
.IP \[bu] 2
Provider: Selectel,Servercore
.RE
@@ -41381,13 +39734,29 @@ Provider: Selectel,Servercore
.IP \[bu] 2
Moscow
.IP \[bu] 2
-Provider: Servercore
+Provider: Selectel,Servercore
.RE
.IP \[bu] 2
-\[dq]s3.ru-7.storage.selcloud.ru\[dq]
+\[dq]s3.kz-1.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
-Moscow
+Kazakhstan
+.IP \[bu] 2
+Provider: Selectel
+.RE
+.IP \[bu] 2
+\[dq]s3.uz-2.storage.selcloud.ru\[dq]
+.RS 2
+.IP \[bu] 2
+Uzbekistan
+.IP \[bu] 2
+Provider: Selectel
+.RE
+.IP \[bu] 2
+\[dq]s3.ru-1.storage.selcloud.ru\[dq]
+.RS 2
+.IP \[bu] 2
+Saint Petersburg
.IP \[bu] 2
Provider: Servercore
.RE
@@ -42745,7 +41114,7 @@ Config: acl
Env Var: RCLONE_S3_ACL
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -42762,7 +41131,7 @@ Owner gets FULL_CONTROL.
No one else has access rights (default).
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other
.RE
.IP \[bu] 2
\[dq]public-read\[dq]
@@ -42773,7 +41142,7 @@ Owner gets FULL_CONTROL.
The AllUsers group gets READ access.
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
.RE
.IP \[bu] 2
\[dq]public-read-write\[dq]
@@ -42786,7 +41155,7 @@ The AllUsers group gets READ and WRITE access.
Granting this on a bucket is generally not recommended.
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
.RE
.IP \[bu] 2
\[dq]authenticated-read\[dq]
@@ -42797,7 +41166,7 @@ Owner gets FULL_CONTROL.
The AuthenticatedUsers group gets READ access.
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
.RE
.IP \[bu] 2
\[dq]bucket-owner-read\[dq]
@@ -42811,7 +41180,7 @@ If you specify this canned ACL when creating a bucket, Amazon S3 ignores
it.
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
.RE
.IP \[bu] 2
\[dq]bucket-owner-full-control\[dq]
@@ -42824,7 +41193,7 @@ If you specify this canned ACL when creating a bucket, Amazon S3 ignores
it.
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other
.RE
.IP \[bu] 2
\[dq]private\[dq]
@@ -43175,13 +41544,13 @@ Required: false
.SS Advanced options
.PP
Here are the Advanced options specific to s3 (Amazon S3 Compliant
-Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile,
-Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade,
-GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia,
-Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale,
-OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS,
-Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology,
-TencentCOS, Wasabi, Zata, Other).
+Storage Providers including AWS, Alibaba, ArvanCloud, BizflyCloud, Ceph,
+ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu,
+FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS,
+Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease,
+Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway,
+SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj,
+Synology, TencentCOS, Wasabi, Zata, Other).
.SS --s3-bucket-acl
.PP
Canned ACL used when creating buckets.
@@ -43202,7 +41571,7 @@ Config: bucket_acl
Env Var: RCLONE_S3_BUCKET_ACL
.IP \[bu] 2
Provider:
-AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
+AWS,Alibaba,ArvanCloud,BizflyCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other
.IP \[bu] 2
Type: string
.IP \[bu] 2
@@ -43553,6 +41922,66 @@ Env Var: RCLONE_S3_SESSION_TOKEN
Type: string
.IP \[bu] 2
Required: false
+.SS --s3-role-arn
+.PP
+ARN of the IAM role to assume.
+.PP
+Leave blank if not using assume role.
+.PP
+Properties:
+.IP \[bu] 2
+Config: role_arn
+.IP \[bu] 2
+Env Var: RCLONE_S3_ROLE_ARN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --s3-role-session-name
+.PP
+Session name for assumed role.
+.PP
+If empty, a session name will be generated automatically.
+.PP
+Properties:
+.IP \[bu] 2
+Config: role_session_name
+.IP \[bu] 2
+Env Var: RCLONE_S3_ROLE_SESSION_NAME
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --s3-role-session-duration
+.PP
+Session duration for assumed role.
+.PP
+If empty, the default session duration will be used.
+.PP
+Properties:
+.IP \[bu] 2
+Config: role_session_duration
+.IP \[bu] 2
+Env Var: RCLONE_S3_ROLE_SESSION_DURATION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --s3-role-external-id
+.PP
+External ID for assumed role.
+.PP
+Leave blank if not using an external ID.
+.PP
+Properties:
+.IP \[bu] 2
+Config: role_external_id
+.IP \[bu] 2
+Env Var: RCLONE_S3_ROLE_EXTERNAL_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
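+.PP
+For illustration, the assume role options above might be combined in a
+remote configuration like this (the ARN, session name and external ID
+are placeholder values):
+.IP
+.nf
+\f[C]
+[s3-assumed]
+type = s3
+provider = AWS
+region = us-east-1
+role_arn = arn:aws:iam::123456789012:role/rclone-role
+role_session_name = rclone-session
+role_external_id = my-external-id
+\f[R]
+.fi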
.SS --s3-upload-concurrency
.PP
Concurrency for multipart uploads and copies.
@@ -45106,6 +43535,39 @@ server_side_encryption =
storage_class =
\f[R]
.fi
+.SS BizflyCloud
+.PP
+Bizfly Cloud Simple Storage (https://bizflycloud.vn/simple-storage) is
+an S3-compatible service with regions in Hanoi (HN) and Ho Chi Minh City
+(HCM).
+.PP
+Use the endpoint for your region:
+.IP \[bu] 2
+HN: \f[V]hn.ss.bfcplatform.vn\f[R]
+.IP \[bu] 2
+HCM: \f[V]hcm.ss.bfcplatform.vn\f[R]
+.PP
+A minimal configuration looks like this.
+.IP
+.nf
+\f[C]
+[bizfly]
+type = s3
+provider = BizflyCloud
+env_auth = false
+access_key_id = YOUR_ACCESS_KEY
+secret_access_key = YOUR_SECRET_KEY
+region = HN
+endpoint = hn.ss.bfcplatform.vn
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
+\f[R]
+.fi
+.PP
+Switch \f[V]region\f[R] and \f[V]endpoint\f[R] to \f[V]HCM\f[R] and
+\f[V]hcm.ss.bfcplatform.vn\f[R] for Ho Chi Minh City.
.SS Ceph
.PP
Ceph (https://ceph.com/) is an open-source, unified, distributed storage
@@ -50948,7 +49410,7 @@ All copy commands send the following 4 requests:
.IP
.nf
\f[C]
-/b2api/v1/b2_authorize_account
+/b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
@@ -52343,6 +50805,9 @@ Click \f[V]Save Changes\f[R] at the top right.
The \f[V]cache\f[R] remote wraps another existing remote and stores file
structure and its data for long running tasks like
\f[V]rclone mount\f[R].
+.PP
+It is \f[B]deprecated\f[R], so it is not recommended for new
+installations, and may be removed at some point.
.SS Status
.PP
The cache backend code is working but it currently doesn\[aq]t have a
@@ -56170,8 +54635,8 @@ Invenio
.PP
The URL of the DOI resolver API to use.
.PP
-The DOI resolver can be set for testing or for cases when the the
-canonical DOI resolver API cannot be used.
+The DOI resolver can be set for testing or for cases when the canonical
+DOI resolver API cannot be used.
.PP
Defaults to \[dq]https://doi.org/api\[dq].
.PP
@@ -56268,6 +54733,374 @@ Only new parameters need be passed as the values will default to those
currently in use.
.PP
It doesn\[aq]t return anything.
+.SH Drime
+.PP
+Drime (https://drime.cloud/) is a cloud storage and transfer service
+focused on fast, resilient file delivery.
+It offers both free and paid tiers with emphasis on high-speed uploads
+and link sharing.
+.PP
+To set up Drime you need to log in, navigate to Settings, Developer, and
+create a token to use as an API access key.
+Give it a sensible name and copy the token for use in the config.
+.SS Configuration
+.PP
+Here is a run through of \f[V]rclone config\f[R] to make a remote called
+\f[V]remote\f[R].
+.PP
+Firstly run:
+.IP
+.nf
+\f[C]
+rclone config
+\f[R]
+.fi
+.PP
+Then follow through the interactive setup:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> remote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+XX / Drime
+ \[rs] (drime)
+Storage> drime
+
+Option access_token.
+API Access token
+You can get this from the web control panel.
+Enter a value. Press Enter to leave empty.
+access_token> YOUR_API_ACCESS_TOKEN
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: drime
+- access_token: YOUR_API_ACCESS_TOKEN
+Keep this \[dq]remote\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+Once configured you can then use \f[V]rclone\f[R] like this (replace
+\f[V]remote\f[R] with the name you gave your remote):
+.PP
+List directories and files in the top level of your Drime
+.IP
+.nf
+\f[C]
+rclone lsf remote:
+\f[R]
+.fi
+.PP
+To copy a local directory to a Drime directory called backup
+.IP
+.nf
+\f[C]
+rclone copy /home/source remote:backup
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+Drime does not support modification times or hashes.
+.PP
+This means that by default syncs will only use the size of the file to
+determine if it needs updating.
+.PP
+You can use the \f[V]--update\f[R] flag which will use the time the
+object was uploaded.
+For many operations this is sufficient to determine if it has changed.
+However files created with timestamps in the past will be missed by the
+sync if using \f[V]--update\f[R].
+.SS Restricted filename characters
+.PP
+In addition to the default restricted characters
+set (https://rclone.org/overview/#restricted-characters) the following
+characters are also replaced:
+.PP
+.TS
+tab(@);
+l c c.
+T{
+Character
+T}@T{
+Value
+T}@T{
+Replacement
+T}
+_
+T{
+\[rs]
+T}@T{
+0x5C
+T}@T{
+＼
+T}
+.TE
+.PP
+File names can also not start or end with the following characters.
+These only get replaced if they are the first or last character in the
+name:
+.PP
+.TS
+tab(@);
+l c c.
+T{
+Character
+T}@T{
+Value
+T}@T{
+Replacement
+T}
+_
+T{
+SP
+T}@T{
+0x20
+T}@T{
+␠
+T}
+.TE
+.PP
+Invalid UTF-8 bytes will also be
+replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
+be used in JSON strings.
+.SS Root folder ID
+.PP
+You can set the \f[V]root_folder_id\f[R] for rclone.
+This is the directory (identified by its \f[V]Folder ID\f[R]) that
+rclone considers to be the root of your Drime drive.
+.PP
+Normally you will leave this blank and rclone will determine the correct
+root to use itself and fill in the value in the config file.
+.PP
+However you can set this to restrict rclone to a specific folder
+hierarchy.
+.PP
+In order to do this you will have to find the \f[V]Folder ID\f[R] of the
+directory you wish rclone to display.
+.PP
+You can do this with rclone
+.IP
+.nf
+\f[C]
+$ rclone lsf -Fip --dirs-only remote:
+d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/
+f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/
+d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
+\f[R]
+.fi
+.PP
+The ID to use is the part before the \f[V];\f[R] so you could set
+.IP
+.nf
+\f[C]
+root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
+\f[R]
+.fi
+.PP
+To restrict rclone to the \f[V]Files\f[R] directory.
+.SS Standard options
+.PP
+Here are the Standard options specific to drime (Drime).
+.SS --drime-access-token
+.PP
+API Access token
+.PP
+You can get this from the web control panel.
+.PP
+Properties:
+.IP \[bu] 2
+Config: access_token
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_ACCESS_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Advanced options
+.PP
+Here are the Advanced options specific to drime (Drime).
+.SS --drime-root-folder-id
+.PP
+ID of the root folder
+.PP
+Leave this blank normally, rclone will fill it in automatically.
+.PP
+If you want rclone to be restricted to a particular folder you can fill
+it in - see the docs for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: root_folder_id
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_ROOT_FOLDER_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --drime-workspace-id
+.PP
+Account ID
+.PP
+Leave this blank normally unless you wish to specify a Workspace ID.
+.PP
+Properties:
+.IP \[bu] 2
+Config: workspace_id
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_WORKSPACE_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --drime-list-chunk
+.PP
+Number of items to list in each call
+.PP
+Properties:
+.IP \[bu] 2
+Config: list_chunk
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_LIST_CHUNK
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 1000
+.SS --drime-hard-delete
+.PP
+Delete files permanently rather than putting them into the trash.
+.PP
+Properties:
+.IP \[bu] 2
+Config: hard_delete
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_HARD_DELETE
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
+.SS --drime-upload-cutoff
+.PP
+Cutoff for switching to chunked upload.
+.PP
+Any files larger than this will be uploaded in chunks of chunk_size.
+The minimum is 0 and the maximum is 5 GiB.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_cutoff
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_UPLOAD_CUTOFF
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 200Mi
+.SS --drime-chunk-size
+.PP
+Chunk size to use for uploading.
+.PP
+When uploading files larger than upload_cutoff or files with unknown
+size (e.g.
+from \[dq]rclone rcat\[dq] or uploaded with \[dq]rclone mount\[dq] or
+google photos or google docs) they will be uploaded as multipart uploads
+using this chunk size.
+.PP
+Note that \[dq]--drime-upload-concurrency\[dq] chunks of this size are
+buffered in memory per transfer.
+.PP
+If you are transferring large files over high-speed links and you have
+enough memory, then increasing this will speed up the transfers.
+.PP
+Rclone will automatically increase the chunk size when uploading a large
+file of known size to stay below the 10,000 chunks limit.
+.PP
+Files of unknown size are uploaded with the configured chunk_size.
+Since the default chunk size is 5 MiB and there can be at most 10,000
+chunks, this means that by default the maximum size of a file you can
+stream upload is 48 GiB.
+If you wish to stream upload larger files then you will need to increase
+chunk_size.
+.PP
+Properties:
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 5Mi
+.SS --drime-upload-concurrency
+.PP
+Concurrency for multipart uploads and copies.
+.PP
+This is the number of chunks of the same file that are uploaded
+concurrently for multipart uploads and copies.
+.PP
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 4
+.SS --drime-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
+.SS --drime-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_DRIME_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.PP
+Drime only supports filenames up to 255 bytes in length, where filenames
+are encoded in UTF8.
.SH Dropbox
.PP
Paths are specified as \f[V]remote:path\f[R]
@@ -57445,6 +56278,9 @@ With support for high storage limits and seamless integration with
rclone, FileLu makes managing files in the cloud easy.
Its cross-platform file backup services let you upload and back up files
from any internet-connected device.
+.PP
+\f[B]Note:\f[R] FileLu now has a fully featured S3 backend, FileLu S5,
+an industry-standard S3-compatible object store.
.SS Configuration
.PP
Here is an example of how to make a remote called \f[V]filelu\f[R].
@@ -57718,6 +56554,273 @@ for troubleshooting and updates.
.PP
For further information, visit FileLu\[aq]s
website (https://filelu.com/).
+.SH Filen
+.SS Configuration
+.PP
+The initial setup for Filen requires that you get an API key for your
+account; currently this is only possible using the Filen
+CLI (https://github.com/FilenCloudDienste/filen-cli).
+This means you must first download the CLI, login, and then run the
+\f[V]export-api-key\f[R] command.
+.PP
+Here is an example of how to make a remote called \f[V]FilenRemote\f[R].
+First run:
+.IP
+.nf
+\f[C]
+ rclone config
+\f[R]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+name> FilenRemote
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Filen
+ \[rs] \[dq]filen\[dq]
+[snip]
+Storage> filen
+
+Option Email.
+The email of your Filen account
+Enter a value.
+Email> youremail\[at]provider.com
+
+Option Password.
+The password of your Filen account
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+
+Option API Key.
+An API Key for your Filen account
+Get this using the Filen CLI export-api-key command
+You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
+Choose an alternative below.
+y) Yes, type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: filen
+- Email: youremail\[at]provider.com
+- Password: *** ENCRYPTED ***
+- API Key: *** ENCRYPTED ***
+Keep this \[dq]FilenRemote\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+Modification times are fully supported for files; for directories, only
+the creation time matters.
+.PP
+Filen supports Blake3 hashes.
+.SS Restricted filename characters
+.PP
+Invalid UTF-8 bytes will be
+replaced (https://rclone.org/overview/#invalid-utf8)
+.SS Standard options
+.PP
+Here are the Standard options specific to filen (Filen).
+.SS --filen-email
+.PP
+Email of your Filen account
+.PP
+Properties:
+.IP \[bu] 2
+Config: email
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_EMAIL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --filen-password
+.PP
+Password of your Filen account
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: password
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_PASSWORD
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --filen-api-key
+.PP
+API Key for your Filen account
+.PP
+Get this using the Filen CLI export-api-key command You can download the
+Filen CLI from https://github.com/FilenCloudDienste/filen-cli
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: api_key
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_API_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS Advanced options
+.PP
+Here are the Advanced options specific to filen (Filen).
+.SS --filen-upload-concurrency
+.PP
+Concurrency for chunked uploads.
+.PP
+This is the upper limit for how many transfers for the same file are
+running concurrently.
+Setting this to a value smaller than 1 will cause uploads to deadlock.
+.PP
+If you are uploading small numbers of large files over high-speed links
+and these uploads do not fully utilize your bandwidth, then increasing
+this may help to speed up the transfers.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 16
+.SS --filen-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,Del,Ctl,InvalidUtf8,Dot
+.SS --filen-master-keys
+.PP
+Master Keys (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: master_keys
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_MASTER_KEYS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --filen-private-key
+.PP
+Private RSA Key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: private_key
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_PRIVATE_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --filen-public-key
+.PP
+Public RSA Key (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: public_key
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_PUBLIC_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --filen-auth-version
+.PP
+Authentication Version (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: auth_version
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_AUTH_VERSION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --filen-base-folder-uuid
+.PP
+UUID of Account Root Directory (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: base_folder_uuid
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_BASE_FOLDER_UUID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --filen-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_FILEN_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SH Files.com
.PP
Files.com (https://www.files.com/) is a cloud storage service that
@@ -58547,6 +57650,17 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
verb.
.PP
+Supports the format http://user:pass\[at]host:port, http://host:port,
+http://host.
+.PP
+Example:
+.IP
+.nf
+\f[C]
+http://myUser:myPass\[at]proxyhostname.example.com:8000
+\f[R]
+.fi
+.PP
Properties:
.IP \[bu] 2
Config: http_proxy
@@ -60216,9 +59330,17 @@ Type: bool
Default: false
.SS --gcs-endpoint
.PP
-Endpoint for the service.
+Custom endpoint for the storage API.
+Leave blank to use the provider default.
.PP
-Leave blank normally.
+When using a custom endpoint that includes a subpath (e.g.
+example.org/custom/endpoint), the subpath will be ignored during upload
+operations due to a limitation in the underlying Google API Go client
+library.
+Download and listing operations will work correctly with the full
+endpoint path.
+If you require subpath support for uploads, avoid using subpaths in your
+custom endpoint configuration.
.PP
Properties:
.IP \[bu] 2
@@ -60229,6 +59351,29 @@ Env Var: RCLONE_GCS_ENDPOINT
Type: string
.IP \[bu] 2
Required: false
+.IP \[bu] 2
+Examples:
+.RS 2
+.IP \[bu] 2
+\[dq]storage.example.org\[dq]
+.RS 2
+.IP \[bu] 2
+Specify a custom endpoint
+.RE
+.IP \[bu] 2
+\[dq]storage.example.org:4443\[dq]
+.RS 2
+.IP \[bu] 2
+Specifying a custom endpoint with port
+.RE
+.IP \[bu] 2
+\[dq]storage.example.org:4443/gcs/api\[dq]
+.RS 2
+.IP \[bu] 2
+Specifying a subpath (see the note above: uploads won\[aq]t use the
+custom path!)
+.RE
+.RE
.SS --gcs-encoding
.PP
The encoding for the backend.
@@ -60557,7 +59702,7 @@ In the next field, \[dq]OAuth Scopes\[dq], enter
access to Google Drive specifically.
You can also use
\f[V]https://www.googleapis.com/auth/drive.readonly\f[R] for read only
-access.
+access with \f[V]--drive-scope=drive.readonly\f[R].
.IP \[bu] 2
Click \[dq]Authorise\[dq]
.SS 3. Configure rclone, assuming a new install
@@ -62233,6 +61378,25 @@ If writing fails log errors only, don\[aq]t fail the transfer
Read and Write the value.
.RE
.RE
+.SS --drive-metadata-enforce-expansive-access
+.PP
+Whether the request should enforce expansive access rules.
+.PP
+From Feb 2026 this flag will be set by default, so it can be used for
+testing before then.
+.PP
+See:
+https://developers.google.com/workspace/drive/api/guides/limited-expansive-access
+.PP
+Properties:
+.IP \[bu] 2
+Config: metadata_enforce_expansive_access
+.IP \[bu] 2
+Env Var: RCLONE_DRIVE_METADATA_ENFORCE_EXPANSIVE_ACCESS
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
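+.PP
+To try the new behaviour before it becomes the default, the flag can be
+passed on any command, for example (the remote name is illustrative):
+.IP
+.nf
+\f[C]
+rclone lsf drive: --drive-metadata-enforce-expansive-access
+\f[R]
+.fi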
.SS --drive-encoding
.PP
The encoding for the backend.
@@ -63821,8 +62985,14 @@ If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
.PP
Please follow the steps in the google drive
-docs (https://rclone.org/drive/#making-your-own-client-id).
-You will need these scopes instead of the drive ones detailed:
+docs (https://rclone.org/drive/#making-your-own-client-id) with the
+following differences:
+.IP \[bu] 2
+At step 3, instead of enabling the \[dq]Google Drive API\[dq], search
+for and enable the \[dq]Photos Library API\[dq].
+.IP \[bu] 2
+At step 5, you will need to add different scopes.
+Use these scopes instead of the drive ones:
.IP
.nf
\f[C]
@@ -66770,6 +65940,212 @@ T}
.TE
.PP
See the metadata (https://rclone.org/docs/#metadata) docs for more info.
+.SH Internxt Drive
+.PP
+Internxt Drive (https://internxt.com) is a zero-knowledge encrypted
+cloud storage service.
+.PP
+Paths are specified as \f[V]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[V]remote:directory/subdirectory\f[R].
+.SS Limitations
+.PP
+\f[B]Note:\f[R] The Internxt backend may not work with all account
+types.
+Please refer to Internxt plan details (https://internxt.com/pricing) or
+contact Internxt support (https://help.internxt.com) to verify rclone
+compatibility with your subscription.
+.SS Configuration
+.PP
+Here is an example of how to make a remote called \f[V]internxt\f[R].
+Run \f[V]rclone config\f[R] and follow the prompts:
+.IP
+.nf
+\f[C]
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> internxt
+Type of storage to configure.
+Choose a number from below, or type in your own value
+[snip]
+XX / Internxt Drive
+ \[rs] \[dq]internxt\[dq]
+[snip]
+Storage> internxt
+
+Option email.
+Email of your Internxt account.
+Enter a value.
+email> user\[at]example.com
+
+Option pass.
+Password.
+Enter a value.
+password>
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: internxt
+- email: user\[at]example.com
+- pass: *** ENCRYPTED ***
+Keep this \[dq]internxt\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.PP
+If you have two-factor authentication enabled on your Internxt account,
+you will be prompted to enter the code during login.
+.SS Security Considerations
+.PP
+The authentication process stores your password and mnemonic in the
+rclone configuration file.
+It is \f[B]strongly recommended\f[R] to encrypt your rclone config to
+protect these sensitive credentials:
+.IP
+.nf
+\f[C]
+rclone config password
+\f[R]
+.fi
+.PP
+This will prompt you to set a password that encrypts your entire
+configuration file.
+.SS Usage Examples
+.IP
+.nf
+\f[C]
+# List files
+rclone ls internxt:
+
+# Copy files to Internxt
+rclone copy /local/path internxt:remote/path
+
+# Sync local directory to Internxt
+rclone sync /local/path internxt:remote/path
+
+# Mount Internxt Drive as a local filesystem
+rclone mount internxt: /path/to/mountpoint
+
+# Check storage usage
+rclone about internxt:
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+The Internxt backend does not support hashes.
+.PP
+Modification times are read from the server but cannot be set.
+The backend reports \f[V]ModTimeNotSupported\f[R] precision, so
+modification times will not be used for sync comparisons.
+.SS Restricted filename characters
+.PP
+The Internxt backend replaces the default restricted characters
+set (https://rclone.org/overview/#restricted-characters).
+.SS Standard options
+.PP
+Here are the Standard options specific to internxt (Internxt Drive).
+.SS --internxt-email
+.PP
+Email of your Internxt account.
+.PP
+Properties:
+.IP \[bu] 2
+Config: email
+.IP \[bu] 2
+Env Var: RCLONE_INTERNXT_EMAIL
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --internxt-pass
+.PP
+Password.
+.PP
+\f[B]NB\f[R] Input to this must be obscured - see rclone
+obscure (https://rclone.org/commands/rclone_obscure/).
+.PP
+Properties:
+.IP \[bu] 2
+Config: pass
+.IP \[bu] 2
+Env Var: RCLONE_INTERNXT_PASS
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS Advanced options
+.PP
+Here are the Advanced options specific to internxt (Internxt Drive).
+.SS --internxt-mnemonic
+.PP
+Mnemonic (internal use only)
+.PP
+Properties:
+.IP \[bu] 2
+Config: mnemonic
+.IP \[bu] 2
+Env Var: RCLONE_INTERNXT_MNEMONIC
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --internxt-skip-hash-validation
+.PP
+Skip hash validation when downloading files.
+.PP
+By default, hash validation is disabled.
+Set this to false to enable validation.
+.PP
+Properties:
+.IP \[bu] 2
+Config: skip_hash_validation
+.IP \[bu] 2
+Env Var: RCLONE_INTERNXT_SKIP_HASH_VALIDATION
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: true
+.SS --internxt-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_INTERNXT_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,BackSlash,CrLf,RightPeriod,InvalidUtf8,Dot
+.SS --internxt-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_INTERNXT_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SH Jottacloud
.PP
Jottacloud is a cloud storage service provider from a Norwegian company,
@@ -69278,6 +68654,35 @@ set (https://rclone.org/overview/#restricted-characters).
.PP
Here are the Advanced options specific to memory (In memory object
storage system.).
+.SS --memory-discard
+.PP
+If set all writes will be discarded and reads will return an error
+.PP
+If set then when files are uploaded the contents will not be saved.
+The files will appear to have been uploaded but will give an error on
+read.
+Files will have their MD5 sum calculated on upload which takes very
+little CPU time and allows the transfers to be checked.
+.PP
+This can be useful for testing performance.
+.PP
+Probably most easily used by using the connection string syntax:
+.IP
+.nf
+\f[C]
+:memory,discard:bucket
+\f[R]
+.fi
+.PP
+Properties:
+.IP \[bu] 2
+Config: discard
+.IP \[bu] 2
+Env Var: RCLONE_MEMORY_DISCARD
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
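+.PP
+For example, to benchmark upload throughput without storing any data
+(the source path and bucket name are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy -P /path/to/testdata :memory,discard:bench
+\f[R]
+.fi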
.SS --memory-description
.PP
Description of the remote.
@@ -69864,6 +69269,39 @@ MD5 hashes are stored with blobs.
However blobs that were uploaded in chunks only have an MD5 if the
source remote was capable of MD5 hashes, e.g.
the local disk.
+.SS Metadata and tags
+.PP
+Rclone can map arbitrary metadata to Azure Blob headers, user metadata,
+and tags when \f[V]--metadata\f[R] is enabled (or when using
+\f[V]--metadata-set\f[R] / \f[V]--metadata-mapper\f[R]).
+.IP \[bu] 2
+Headers: Set these keys in metadata to map to the corresponding blob
+headers:
+.RS 2
+.IP \[bu] 2
+\f[V]cache-control\f[R], \f[V]content-disposition\f[R],
+\f[V]content-encoding\f[R], \f[V]content-language\f[R],
+\f[V]content-type\f[R].
+.RE
+.IP \[bu] 2
+User metadata: Any other non-reserved keys are written as user metadata
+(keys are normalized to lowercase).
+Keys starting with \f[V]x-ms-\f[R] are reserved and are not stored as
+user metadata.
+.IP \[bu] 2
+Tags: Provide \f[V]x-ms-tags\f[R] as a comma-separated list of
+\f[V]key=value\f[R] pairs, e.g.
+\f[V]x-ms-tags=env=dev,team=sync\f[R].
+These are applied as blob tags on upload and on server-side copies.
+Whitespace around keys/values is ignored.
+.IP \[bu] 2
+Modtime override: Provide \f[V]mtime\f[R] in RFC3339/RFC3339Nano format
+to override the stored modtime persisted in user metadata.
+If \f[V]mtime\f[R] cannot be parsed, rclone logs a debug message and
+ignores the override.
+.PP
+Note: Rclone ignores reserved \f[V]x-ms-*\f[R] keys (except
+\f[V]x-ms-tags\f[R]) for user metadata.
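+.PP
+For example, to upload a file with blob tags and a Cache-Control header
+set (the remote, container and file names are illustrative):
+.IP
+.nf
+\f[C]
+rclone copy -M --metadata-set \[dq]x-ms-tags=env=dev,team=sync\[dq] --metadata-set cache-control=no-cache file.txt remote:container
+\f[R]
+.fi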
.SS Performance
.PP
When uploading large files, increasing the value of
@@ -70309,6 +69747,22 @@ Env Var: RCLONE_AZUREBLOB_SAS_URL
Type: string
.IP \[bu] 2
Required: false
+.SS --azureblob-connection-string
+.PP
+Storage Connection String.
+.PP
+Connection string for the storage.
+Leave blank if using other auth methods.
+.PP
+Properties:
+.IP \[bu] 2
+Config: connection_string
+.IP \[bu] 2
+Env Var: RCLONE_AZUREBLOB_CONNECTION_STRING
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS --azureblob-tenant
.PP
ID of the service principal\[aq]s tenant.
@@ -71032,6 +70486,109 @@ Env Var: RCLONE_AZUREBLOB_DESCRIPTION
Type: string
.IP \[bu] 2
Required: false
+.SS Metadata
+.PP
+User metadata is stored as x-ms-meta- keys.
+Azure metadata keys are case insensitive and are always returned in
+lower case.
+.PP
+Here are the possible system metadata items for the azureblob backend.
+.PP
+.TS
+tab(@);
+lw(11.1n) lw(11.1n) lw(11.1n) lw(16.6n) lw(20.3n).
+T{
+Name
+T}@T{
+Help
+T}@T{
+Type
+T}@T{
+Example
+T}@T{
+Read Only
+T}
+_
+T{
+cache-control
+T}@T{
+Cache-Control header
+T}@T{
+string
+T}@T{
+no-cache
+T}@T{
+N
+T}
+T{
+content-disposition
+T}@T{
+Content-Disposition header
+T}@T{
+string
+T}@T{
+inline
+T}@T{
+N
+T}
+T{
+content-encoding
+T}@T{
+Content-Encoding header
+T}@T{
+string
+T}@T{
+gzip
+T}@T{
+N
+T}
+T{
+content-language
+T}@T{
+Content-Language header
+T}@T{
+string
+T}@T{
+en-US
+T}@T{
+N
+T}
+T{
+content-type
+T}@T{
+Content-Type header
+T}@T{
+string
+T}@T{
+text/plain
+T}@T{
+N
+T}
+T{
+mtime
+T}@T{
+Time of last modification, read from rclone metadata
+T}@T{
+RFC 3339
+T}@T{
+2006-01-02T15:04:05.999999999Z07:00
+T}@T{
+N
+T}
+T{
+tier
+T}@T{
+Tier of the object
+T}@T{
+string
+T}@T{
+Hot
+T}@T{
+\f[B]Y\f[R]
+T}
+.TE
+.PP
+See the metadata (https://rclone.org/docs/#metadata) docs for more info.
.SS Custom upload headers
.PP
You can set custom upload headers with the \f[V]--header-upload\f[R]
@@ -71588,8 +71145,7 @@ Azure Storage Account Name.
.PP
Set this to the Azure Storage Account Name in use.
.PP
-Leave blank to use SAS URL or connection string, otherwise it needs to
-be set.
+Leave blank to use SAS URL or Emulator, otherwise it needs to be set.
.PP
If this is blank and if env_auth is set it will be read from the
environment variable \f[V]AZURE_STORAGE_ACCOUNT_NAME\f[R] if possible.
@@ -71603,21 +71159,6 @@ Env Var: RCLONE_AZUREFILES_ACCOUNT
Type: string
.IP \[bu] 2
Required: false
-.SS --azurefiles-share-name
-.PP
-Azure Files Share Name.
-.PP
-This is required and is the name of the share to access.
-.PP
-Properties:
-.IP \[bu] 2
-Config: share_name
-.IP \[bu] 2
-Env Var: RCLONE_AZUREFILES_SHARE_NAME
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
.SS --azurefiles-env-auth
.PP
Read credentials from runtime (environment variables, CLI or MSI).
@@ -71637,7 +71178,7 @@ Default: false
.PP
Storage Account Shared Key.
.PP
-Leave blank to use SAS URL or connection string.
+Leave blank to use SAS URL or Emulator.
.PP
Properties:
.IP \[bu] 2
@@ -71650,9 +71191,9 @@ Type: string
Required: false
.SS --azurefiles-sas-url
.PP
-SAS URL.
+SAS URL for container level access only.
.PP
-Leave blank if using account/key or connection string.
+Leave blank if using account/key or Emulator.
.PP
Properties:
.IP \[bu] 2
@@ -71665,7 +71206,10 @@ Type: string
Required: false
.SS --azurefiles-connection-string
.PP
-Azure Files Connection String.
+Storage Connection String.
+.PP
+Connection string for the storage.
+Leave blank if using other auth methods.
.PP
Properties:
.IP \[bu] 2
@@ -71759,6 +71303,21 @@ Env Var: RCLONE_AZUREFILES_CLIENT_CERTIFICATE_PASSWORD
Type: string
.IP \[bu] 2
Required: false
+.SS --azurefiles-share-name
+.PP
+Azure Files Share Name.
+.PP
+This is required and is the name of the share to access.
+.PP
+Properties:
+.IP \[bu] 2
+Config: share_name
+.IP \[bu] 2
+Env Var: RCLONE_AZUREFILES_SHARE_NAME
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
.SS Advanced options
.PP
Here are the Advanced options specific to azurefiles (Microsoft Azure
@@ -71826,7 +71385,7 @@ interactive login.
.nf
\f[C]
$ az ad sp create-for-rbac --name \[dq]\[dq] \[rs]
- --role \[dq]Storage Files Data Owner\[dq] \[rs]
+ --role \[dq]Storage Blob Data Owner\[dq] \[rs]
--scopes \[dq]/subscriptions//resourceGroups//providers/Microsoft.Storage/storageAccounts//blobServices/default/containers/\[dq] \[rs]
> azure-principal.json
\f[R]
@@ -71834,13 +71393,10 @@ $ az ad sp create-for-rbac --name \[dq]\[dq] \[rs]
.PP
See \[dq]Create an Azure service
principal\[dq] (https://docs.microsoft.com/en-us/cli/azure/create-an-azure-service-principal-azure-cli)
-and \[dq]Assign an Azure role for access to files
+and \[dq]Assign an Azure role for access to blob
data\[dq] (https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-cli)
pages for more details.
.PP
-\f[B]NB\f[R] this section needs updating for Azure Files - pull requests
-appreciated!
-.PP
It may be more convenient to put the credentials directly into the
rclone config file under the \f[V]client_id\f[R], \f[V]tenant\f[R] and
\f[V]client_secret\f[R] keys instead of setting
@@ -71855,6 +71411,28 @@ Env Var: RCLONE_AZUREFILES_SERVICE_PRINCIPAL_FILE
Type: string
.IP \[bu] 2
Required: false
+.SS --azurefiles-disable-instance-discovery
+.PP
+Skip requesting Microsoft Entra instance metadata
+.PP
+This should be set true only by applications authenticating in
+disconnected clouds, or private clouds such as Azure Stack.
+.PP
+It determines whether rclone requests Microsoft Entra instance metadata
+from \f[V]https://login.microsoft.com/\f[R] before authenticating.
+.PP
+Setting this to true will skip this request, making you responsible for
+ensuring the configured authority is valid and trustworthy.
+.PP
+Properties:
+.IP \[bu] 2
+Config: disable_instance_discovery
+.IP \[bu] 2
+Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+.IP \[bu] 2
+Type: bool
+.IP \[bu] 2
+Default: false
.SS --azurefiles-use-msi
.PP
Use a managed service identity to authenticate (only works in Azure).
@@ -71925,32 +71503,32 @@ Env Var: RCLONE_AZUREFILES_MSI_MI_RES_ID
Type: string
.IP \[bu] 2
Required: false
-.SS --azurefiles-disable-instance-discovery
+.SS --azurefiles-use-emulator
.PP
-Skip requesting Microsoft Entra instance metadata This should be set
-true only by applications authenticating in disconnected clouds, or
-private clouds such as Azure Stack.
-It determines whether rclone requests Microsoft Entra instance metadata
-from \f[V]https://login.microsoft.com/\f[R] before authenticating.
-Setting this to true will skip this request, making you responsible for
-ensuring the configured authority is valid and trustworthy.
+Uses local storage emulator if provided as \[aq]true\[aq].
+.PP
+Leave blank if using real azure storage endpoint.
.PP
Properties:
.IP \[bu] 2
-Config: disable_instance_discovery
+Config: use_emulator
.IP \[bu] 2
-Env Var: RCLONE_AZUREFILES_DISABLE_INSTANCE_DISCOVERY
+Env Var: RCLONE_AZUREFILES_USE_EMULATOR
.IP \[bu] 2
Type: bool
.IP \[bu] 2
Default: false
.SS --azurefiles-use-az
.PP
-Use Azure CLI tool az for authentication Set to use the Azure CLI tool
+Use Azure CLI tool az for authentication
+.PP
+Set to use the Azure CLI tool
az (https://learn.microsoft.com/en-us/cli/azure/) as the sole means of
authentication.
+.PP
Setting this can be useful if you wish to use the az CLI on a host with
a System Managed Identity that you do not want to use.
+.PP
Don\[aq]t set env_auth at the same time.
.PP
Properties:
@@ -73195,7 +72773,7 @@ This is why this flag is not set as the default.
.PP
As a rule of thumb if nearly all of your data is under rclone\[aq]s root
directory (the \f[V]root/directory\f[R] in
-\f[V]onedrive:root/directory\f[R]) then using this flag will be be a big
+\f[V]onedrive:root/directory\f[R]) then using this flag will be a big
performance win.
If your data is mostly not under the root then using this flag will be a
big performance loss.
@@ -73518,7 +73096,7 @@ description
T}@T{
A short description of the file.
Max 1024 characters.
-Only supported for OneDrive Personal.
+No longer supported by Microsoft.
T}@T{
string
T}@T{
@@ -77376,7 +76954,7 @@ Default: 0
.PP
Above this size files will be chunked.
.PP
-Above this size files will be chunked into a a \f[V]_segments\f[R]
+Above this size files will be chunked into a \f[V]_segments\f[R]
container or a \f[V].file-segments\f[R] directory.
(See the \f[V]use_segments_container\f[R] option for more info).
Default for this is 5 GiB which is its maximum value, which means only
@@ -77770,6 +77348,37 @@ So if the folder you want rclone to use your is \[dq]My Music/\[dq],
then use the returned id from \f[V]rclone lsf\f[R] command (ex.
\f[V]dxxxxxxxx2\f[R]) as the \f[V]root_folder_id\f[R] variable value in
the config file.
+.SS Change notifications and mounts
+.PP
+The pCloud backend supports real-time updates for rclone mounts via
+change notifications.
+rclone uses pCloud\[cq]s diff long-polling API to detect changes and
+will automatically refresh directory listings in the mounted filesystem
+when changes occur.
+.PP
+Notes and behavior:
+.IP \[bu] 2
+Works automatically when using \f[V]rclone mount\f[R] and requires no
+additional configuration.
+.IP \[bu] 2
+Notifications are directory-scoped: when rclone detects a change, it
+refreshes the affected directory so new/removed/renamed files become
+visible promptly.
+.IP \[bu] 2
+Updates are near real-time.
+The backend uses a long-poll with short fallback polling intervals, so
+you should see changes appear quickly without manual refreshes.
+.PP
+If you want to debug or verify notifications, you can use the helper
+command:
+.IP
+.nf
+\f[C]
+rclone test changenotify remote:
+\f[R]
+.fi
+.PP
+This will log incoming change notifications for the given remote.
.SS Standard options
.PP
Here are the Standard options specific to pcloud (Pcloud).
@@ -82068,6 +81677,17 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
verb.
.PP
+Supports the formats http://user:pass\[at]host:port, http://host:port,
+and http://host.
+.PP
+Example:
+.IP
+.nf
+\f[C]
+http://myUser:myPass\[at]proxyhostname.example.com:8000
+\f[R]
+.fi
+.PP
Properties:
.IP \[bu] 2
Config: http_proxy
@@ -82157,6 +81777,297 @@ Hetzner Storage Boxes are supported through the SFTP backend on port 23.
.PP
See Hetzner\[aq]s documentation for
details (https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg#rclone)
+.SH Shade
+.PP
+This is a backend for the Shade (https://shade.inc/) platform
+.SS About Shade
+.PP
+Shade (https://shade.inc/) is an AI-powered cloud NAS that makes your
+cloud files behave like a local drive, optimized for media and creative
+workflows.
+It provides fast, secure access with natural-language search, easy
+sharing, and scalable cloud storage.
+.SS Accounts & Pricing
+.PP
+To use this backend, you need to create a free
+account (https://app.shade.inc/) on Shade.
+You can start with a free account and get 20GB of storage for free.
+.SS Usage
+.PP
+Paths are specified as \f[V]remote:path\f[R]
+.PP
+Paths may be as deep as required, e.g.
+\f[V]remote:directory/subdirectory\f[R].
+.SS Configuration
+.PP
+Here is an example of making a Shade configuration.
+.PP
+First, create a free account (https://app.shade.inc/) and choose a
+plan.
+.PP
+Log in and get the \f[V]API Key\f[R] from your account settings and
+the \f[V]Drive ID\f[R] from the settings of the drive you created.
+.PP
+Now run
+.PP
+\f[V]rclone config\f[R]
+.PP
+Follow this interactive process:
+.IP
+.nf
+\f[C]
+$ rclone config
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> Shade
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[OTHER OPTIONS]
+xx / Shade FS
+ \[rs] (shade)
+[OTHER OPTIONS]
+Storage> xx
+
+Option drive_id.
+The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
+Enter a value.
+drive_id> [YOUR_ID]
+
+Option api_key.
+An API key for your account.
+Enter a value.
+api_key> [YOUR_API_KEY]
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: shade
+- drive_id: [YOUR_ID]
+- api_key: [YOUR_API_KEY]
+Keep this \[dq]Shade\[dq] remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+\f[R]
+.fi
+.SS Modification times and hashes
+.PP
+Shade supports neither hashes nor setting modification times.
+.SS Transfers
+.PP
+Shade uses multipart uploads by default.
+This means that files will be chunked and sent up to Shade concurrently.
+To configure how many simultaneous uploads are used, set the
+\[aq]upload_concurrency\[aq] option in the advanced config section.
+Note that higher concurrency uses more memory and makes more HTTP
+requests.
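As an illustrative sketch (the remote name, paths, and flag values are placeholders, not recommendations), the upload concurrency and chunk size documented below can be raised together for large transfers:

```shell
# Hypothetical tuning example: 8 concurrent chunks of 128 MiB each.
# Chunks are buffered in memory per transfer, so a single transfer
# may hold up to roughly 8 x 128 MiB at once.
rclone copy ./renders shade:projects/renders \
  --shade-upload-concurrency 8 \
  --shade-chunk-size 128Mi
```
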
+.SS Deleting files
+.PP
+Note that deleting files in Shade via rclone deletes them instantly
+instead of sending them to the trash, so deleted files are not
+recoverable.
+.SS Standard options
+.PP
+Here are the Standard options specific to shade (Shade FS).
+.SS --shade-drive-id
+.PP
+The ID of your drive, see this in the drive settings.
+Individual rclone configs must be made per drive.
+.PP
+Properties:
+.IP \[bu] 2
+Config: drive_id
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_DRIVE_ID
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS --shade-api-key
+.PP
+An API key for your account.
+.PP
+Properties:
+.IP \[bu] 2
+Config: api_key
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_API_KEY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: true
+.SS Advanced options
+.PP
+Here are the Advanced options specific to shade (Shade FS).
+.SS --shade-endpoint
+.PP
+Endpoint for the service.
+.PP
+Leave blank normally.
+.PP
+Properties:
+.IP \[bu] 2
+Config: endpoint
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_ENDPOINT
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --shade-chunk-size
+.PP
+Chunk size to use for uploading.
+.PP
+Any files larger than this will be uploaded in chunks of this size.
+.PP
+Note that this is stored in memory per transfer, so increasing it will
+increase memory usage.
+.PP
+Minimum is 5MB, maximum is 5GB.
+.PP
+Properties:
+.IP \[bu] 2
+Config: chunk_size
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_CHUNK_SIZE
+.IP \[bu] 2
+Type: SizeSuffix
+.IP \[bu] 2
+Default: 64Mi
+.SS --shade-upload-concurrency
+.PP
+Concurrency for multipart uploads and copies.
+This is the number of chunks of the same file that are uploaded
+concurrently for multipart uploads and copies.
+.PP
+Properties:
+.IP \[bu] 2
+Config: upload_concurrency
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_UPLOAD_CONCURRENCY
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 4
+.SS --shade-max-upload-parts
+.PP
+Maximum amount of parts in a multipart upload.
+.PP
+Properties:
+.IP \[bu] 2
+Config: max_upload_parts
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_MAX_UPLOAD_PARTS
+.IP \[bu] 2
+Type: int
+.IP \[bu] 2
+Default: 10000
+.SS --shade-token
+.PP
+JWT Token for performing Shade FS operations.
+Don\[aq]t set this value - rclone will set it automatically
+.PP
+Properties:
+.IP \[bu] 2
+Config: token
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_TOKEN
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --shade-token-expiry
+.PP
+JWT Token Expiration time.
+Don\[aq]t set this value - rclone will set it automatically
+.PP
+Properties:
+.IP \[bu] 2
+Config: token_expiry
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_TOKEN_EXPIRY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS --shade-encoding
+.PP
+The encoding for the backend.
+.PP
+See the encoding section in the
+overview (https://rclone.org/overview/#encoding) for more info.
+.PP
+Properties:
+.IP \[bu] 2
+Config: encoding
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_ENCODING
+.IP \[bu] 2
+Type: Encoding
+.IP \[bu] 2
+Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
+.SS --shade-description
+.PP
+Description of the remote.
+.PP
+Properties:
+.IP \[bu] 2
+Config: description
+.IP \[bu] 2
+Env Var: RCLONE_SHADE_DESCRIPTION
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
+.SS Limitations
+.PP
+Note that Shade is case insensitive so you can\[aq]t have a file called
+\[dq]Hello.doc\[dq] and one called \[dq]hello.doc\[dq].
+.PP
+Shade only supports filenames up to 255 characters in length.
+.PP
+\f[V]rclone about\f[R] is not supported by the Shade backend.
+Backends without this capability cannot determine free space for an
+rclone mount or use policy \f[V]mfs\f[R] (most free space) as a member
+of an rclone union remote.
+.PP
+See List of backends that do not support rclone
+about (https://rclone.org/overview/#optional-features) and rclone
+about (https://rclone.org/commands/rclone_about/)
+.SS Backend commands
+.PP
+Here are the commands specific to the shade backend.
+.PP
+Run them with
+.IP
+.nf
+\f[C]
+rclone backend COMMAND remote:
+\f[R]
+.fi
+.PP
+The help below will explain what arguments each command takes.
+.PP
+See the backend (https://rclone.org/commands/rclone_backend/) command
+for more info on how to pass options and arguments.
+.PP
+These can be run on a running backend using the rc command
+backend/command (https://rclone.org/rc/#backend-command).
.SH SMB
.PP
SMB is a communication protocol to share files over
@@ -83787,218 +83698,6 @@ of an rclone union remote.
See List of backends that do not support rclone
about (https://rclone.org/overview/#optional-features) and rclone
about (https://rclone.org/commands/rclone_about/).
-.SH Uptobox
-.PP
-This is a Backend for Uptobox file storage service.
-Uptobox is closer to a one-click hoster than a traditional cloud storage
-provider and therefore not suitable for long term storage.
-.PP
-Paths are specified as \f[V]remote:path\f[R]
-.PP
-Paths may be as deep as required, e.g.
-\f[V]remote:directory/subdirectory\f[R].
-.SS Configuration
-.PP
-To configure an Uptobox backend you\[aq]ll need your personal api token.
-You\[aq]ll find it in your account
-settings (https://uptobox.com/my_account).
-.PP
-Here is an example of how to make a remote called \f[V]remote\f[R] with
-the default setup.
-First run:
-.IP
-.nf
-\f[C]
-rclone config
-\f[R]
-.fi
-.PP
-This will guide you through an interactive setup process:
-.IP
-.nf
-\f[C]
-Current remotes:
-
-Name Type
-==== ====
-TestUptobox uptobox
-
-e) Edit existing remote
-n) New remote
-d) Delete remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-e/n/d/r/c/s/q> n
-name> uptobox
-Type of storage to configure.
-Enter a string value. Press Enter for the default (\[dq]\[dq]).
-Choose a number from below, or type in your own value
-[...]
-37 / Uptobox
- \[rs] \[dq]uptobox\[dq]
-[...]
-Storage> uptobox
-** See help for uptobox backend at: https://rclone.org/uptobox/ **
-
-Your API Key, get it from https://uptobox.com/my_account
-Enter a string value. Press Enter for the default (\[dq]\[dq]).
-api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
-Edit advanced config? (y/n)
-y) Yes
-n) No (default)
-y/n> n
-Remote config
---------------------
-[uptobox]
-type = uptobox
-api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d>
-\f[R]
-.fi
-.PP
-Once configured you can then use \f[V]rclone\f[R] like this (replace
-\f[V]remote\f[R] with the name you gave your remote):
-.PP
-List directories in top level of your Uptobox
-.IP
-.nf
-\f[C]
-rclone lsd remote:
-\f[R]
-.fi
-.PP
-List all the files in your Uptobox
-.IP
-.nf
-\f[C]
-rclone ls remote:
-\f[R]
-.fi
-.PP
-To copy a local directory to an Uptobox directory called backup
-.IP
-.nf
-\f[C]
-rclone copy /home/source remote:backup
-\f[R]
-.fi
-.SS Modification times and hashes
-.PP
-Uptobox supports neither modified times nor checksums.
-All timestamps will read as that set by \f[V]--default-time\f[R].
-.SS Restricted filename characters
-.PP
-In addition to the default restricted characters
-set (https://rclone.org/overview/#restricted-characters) the following
-characters are also replaced:
-.PP
-.TS
-tab(@);
-l c c.
-T{
-Character
-T}@T{
-Value
-T}@T{
-Replacement
-T}
-_
-T{
-\[dq]
-T}@T{
-0x22
-T}@T{
-"
-T}
-T{
-\[ga]
-T}@T{
-0x41
-T}@T{
-`
-T}
-.TE
-.PP
-Invalid UTF-8 bytes will also be
-replaced (https://rclone.org/overview/#invalid-utf8), as they can\[aq]t
-be used in XML strings.
-.SS Standard options
-.PP
-Here are the Standard options specific to uptobox (Uptobox).
-.SS --uptobox-access-token
-.PP
-Your access token.
-.PP
-Get it from https://uptobox.com/my_account.
-.PP
-Properties:
-.IP \[bu] 2
-Config: access_token
-.IP \[bu] 2
-Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS Advanced options
-.PP
-Here are the Advanced options specific to uptobox (Uptobox).
-.SS --uptobox-private
-.PP
-Set to make uploaded files private
-.PP
-Properties:
-.IP \[bu] 2
-Config: private
-.IP \[bu] 2
-Env Var: RCLONE_UPTOBOX_PRIVATE
-.IP \[bu] 2
-Type: bool
-.IP \[bu] 2
-Default: false
-.SS --uptobox-encoding
-.PP
-The encoding for the backend.
-.PP
-See the encoding section in the
-overview (https://rclone.org/overview/#encoding) for more info.
-.PP
-Properties:
-.IP \[bu] 2
-Config: encoding
-.IP \[bu] 2
-Env Var: RCLONE_UPTOBOX_ENCODING
-.IP \[bu] 2
-Type: Encoding
-.IP \[bu] 2
-Default:
-Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
-.SS --uptobox-description
-.PP
-Description of the remote.
-.PP
-Properties:
-.IP \[bu] 2
-Config: description
-.IP \[bu] 2
-Env Var: RCLONE_UPTOBOX_DESCRIPTION
-.IP \[bu] 2
-Type: string
-.IP \[bu] 2
-Required: false
-.SS Limitations
-.PP
-Uptobox will delete inactive files that have not been accessed in 60
-days.
-.PP
-\f[V]rclone about\f[R] is not supported by this backend an overview of
-used space can however been seen in the uptobox web interface.
.SH Union
.PP
The \f[V]union\f[R] backend joins several remotes together to make a
@@ -87115,6 +86814,199 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: Return an error based on option value.
.SH Changelog
+.SS v1.73.0 - 2026-01-30
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)
+.IP \[bu] 2
+New backends
+.RS 2
+.IP \[bu] 2
+Shade (https://rclone.org/shade/) (jhasse-shade)
+.IP \[bu] 2
+Drime (https://rclone.org/drime/) (dougal)
+.IP \[bu] 2
+Filen (https://rclone.org/filen/) (Enduriel)
+.IP \[bu] 2
+Internxt (https://rclone.org/internxt/) (jzunigax2)
+.IP \[bu] 2
+New S3 providers
+.RS 2
+.IP \[bu] 2
+Bizfly Cloud Simple Storage (https://rclone.org/s3/#bizflycloud)
+(vupn0712)
+.RE
+.RE
+.IP \[bu] 2
+New Features
+.RS 2
+.IP \[bu] 2
+docs: Add Support Tiers (https://rclone.org/tiers/) to the documentation
+(Nick Craig-Wood)
+.IP \[bu] 2
+rc: Add
+operations/hashsumfile (https://rclone.org/rc/#operations-hashsumfile)
+to sum a single file only (Nick Craig-Wood)
+.IP \[bu] 2
+serve webdav: Implement download directory as Zip (Leo)
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+fs: fix bwlimit: correct reporting (Mikel Olasagasti Uranga)
+.IP \[bu] 2
+log: fix systemd adding extra newline (dougal)
+.IP \[bu] 2
+docs: fixes (albertony, darkdragon-001, Duncan Smart, hyusap,
+Marc-Philip, Nick Craig-Wood, vicerace, vyv03354, yuval-cloudinary, yy)
+.IP \[bu] 2
+serve s3: Make errors in \f[V]--s3-auth-key\f[R] fatal (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Mount
+.RS 2
+.IP \[bu] 2
+Fix OpenBSD mount support (Nick Owens)
+.RE
+.IP \[bu] 2
+Azure Blob
+.RS 2
+.IP \[bu] 2
+Add metadata and tags support across upload and copy paths (Cliff Frey)
+.IP \[bu] 2
+Factor the common auth into a library (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Azurefiles
+.RS 2
+.IP \[bu] 2
+Factor the common auth into a library (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+B2
+.RS 2
+.IP \[bu] 2
+Support authentication with new bucket restricted application keys
+(DianaNites)
+.RE
+.IP \[bu] 2
+Drive
+.RS 2
+.IP \[bu] 2
+Add \f[V]--drive-metadata-force-expansive-access\f[R] flag (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix crash when trying to create a shortcut to a Google doc (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+FTP
+.RS 2
+.IP \[bu] 2
+Add http proxy authentication support (Nicolas Dessart)
+.RE
+.IP \[bu] 2
+Mega
+.RS 2
+.IP \[bu] 2
+Revert TLS workaround (necaran)
+.RE
+.IP \[bu] 2
+Memory
+.RS 2
+.IP \[bu] 2
+Add \f[V]--memory-discard\f[R] flag for speed testing (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+OneDrive
+.RS 2
+.IP \[bu] 2
+Fix cancelling multipart upload (Nick Craig-Wood)
+.IP \[bu] 2
+Fix setting modification time on directories for OneDrive Personal (Nick
+Craig-Wood)
+.IP \[bu] 2
+Fix OneDrive Personal no longer supports description (Nick Craig-Wood)
+.IP \[bu] 2
+Fix require sign in for OneDrive Personal (Nick Craig-Wood)
+.IP \[bu] 2
+Fix permissions on OneDrive Personal (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Oracle Object Storage
+.RS 2
+.IP \[bu] 2
+Eliminate unnecessary heap allocation (Qingwei Li)
+.RE
+.IP \[bu] 2
+Pcloud
+.RS 2
+.IP \[bu] 2
+Add support for \f[V]ChangeNotify\f[R] to enable real-time updates in
+mount (masrlinu)
+.RE
+.IP \[bu] 2
+Protondrive
+.RS 2
+.IP \[bu] 2
+Update to use forks of upstream modules to unblock development (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Add ability to specify an IAM role for cross-account interaction
+(Vladislav Tropnikov)
+.IP \[bu] 2
+Linode: updated endpoints to use ISO 3166-1 alpha-2 standard
+(jbagwell-akamai)
+.IP \[bu] 2
+Fix Copy ignoring storage class (vupn0712)
+.RE
+.IP \[bu] 2
+SFTP
+.RS 2
+.IP \[bu] 2
+Add http proxy authentication support (Nicolas Dessart)
+.IP \[bu] 2
+Eliminate unnecessary heap allocation (Qingwei Li)
+.RE
+.SS v1.72.1 - 2025-12-10
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+build: update to go1.25.5 to fix
+CVE-2025-61729 (https://pkg.go.dev/vuln/GO-2025-4155)
+.IP \[bu] 2
+doc fixes (Duncan Smart, Nick Craig-Wood)
+.IP \[bu] 2
+configfile: Fix piped config support (Jonas Tingeborn)
+.IP \[bu] 2
+log
+.RS 2
+.IP \[bu] 2
+Fix PID not included in JSON log output (Tingsong Xu)
+.IP \[bu] 2
+Fix backtrace not going to the --log-file (Nick Craig-Wood)
+.RE
+.RE
+.IP \[bu] 2
+Google Cloud Storage
+.RS 2
+.IP \[bu] 2
+Improve endpoint parameter docs (Johannes Rothe)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Add missing regions for Selectel provider (Nick Craig-Wood)
+.RE
.SS v1.72.0 - 2025-11-21
.PP
See commits (https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)
@@ -103780,6 +103672,63 @@ When the same Unicode characters are intentionally used in file names,
this replacement strategy leads to unwanted renames.
Read more under section
caveats (https://rclone.org/overview/#restricted-filenames-caveats).
+.SS Why does rclone fail to connect over TLS but another client works?
+.PP
+If you see TLS handshake failures (or packet captures show the server
+rejecting all offered ciphers), the server/proxy may only support legacy
+TLS cipher suites (for example RSA key-exchange ciphers such as
+\f[V]RSA_WITH_AES_256_CBC_SHA256\f[R], or old 3DES ciphers).
+Recent Go versions (which rclone is built with) have \f[B]removed
+insecure ciphers from the default list\f[R], so rclone may refuse to
+negotiate them even if other tools still do.
+.PP
+If you can\[aq]t update/reconfigure the server/proxy to support modern
+TLS (TLS 1.2/1.3) and ECDHE-based cipher suites you can re-enable legacy
+ciphers via \f[V]GODEBUG\f[R]:
+.IP \[bu] 2
+Windows (cmd.exe):
+.RS 2
+.IP
+.nf
+\f[C]
+set GODEBUG=tlsrsakex=1
+rclone copy ...
+\f[R]
+.fi
+.RE
+.IP \[bu] 2
+Windows (PowerShell):
+.RS 2
+.IP
+.nf
+\f[C]
+$env:GODEBUG=\[dq]tlsrsakex=1\[dq]
+rclone copy ...
+\f[R]
+.fi
+.RE
+.IP \[bu] 2
+Linux/macOS:
+.RS 2
+.IP
+.nf
+\f[C]
+GODEBUG=tlsrsakex=1 rclone copy ...
+\f[R]
+.fi
+.RE
+.PP
+If the server only supports 3DES, try:
+.IP
+.nf
+\f[C]
+GODEBUG=tls3des=1 rclone ...
+\f[R]
+.fi
+.PP
+This applies to \f[B]any rclone feature using TLS\f[R] (HTTPS, FTPS,
+WebDAV over TLS, proxies with TLS interception, etc.).
+Use these workarounds only long enough to get the server/proxy updated.
.SH License
.PP
This is free software under the terms of the MIT license (check the
@@ -105903,6 +105852,57 @@ jijamik <30904953+jijamik@users.noreply.github.com>
Dominik Sander
.IP \[bu] 2
Nikolay Kiryanov
+.IP \[bu] 2
+Diana <5275194+DianaNites@users.noreply.github.com>
+.IP \[bu] 2
+Duncan Smart
+.IP \[bu] 2
+vicerace
+.IP \[bu] 2
+Cliff Frey
+.IP \[bu] 2
+Vladislav Tropnikov
+.IP \[bu] 2
+Leo
+.IP \[bu] 2
+Johannes Rothe
+.IP \[bu] 2
+Tingsong Xu
+.IP \[bu] 2
+Jonas Tingeborn <134889+jojje@users.noreply.github.com>
+.IP \[bu] 2
+jhasse-shade
+.IP \[bu] 2
+vyv03354
+.IP \[bu] 2
+masrlinu <5259918+masrlinu@users.noreply.github.com>
+.IP \[bu] 2
+vupn0712 <126212736+vupn0712@users.noreply.github.com>
+.IP \[bu] 2
+darkdragon-001
+.IP \[bu] 2
+sys6101
+.IP \[bu] 2
+Nicolas Dessart
+.IP \[bu] 2
+Qingwei Li <332664203@qq.com>
+.IP \[bu] 2
+yy
+.IP \[bu] 2
+Marc-Philip
+.IP \[bu] 2
+Mikel Olasagasti Uranga
+.IP \[bu] 2
+Nick Owens
+.IP \[bu] 2
+hyusap
+.IP \[bu] 2
+jzunigax2 <125698953+jzunigax2@users.noreply.github.com>
+.IP \[bu] 2
+lullius
+.IP \[bu] 2
+StarHack
.SH Contact the rclone project
.SS Forum
.PP