From c7dab94e94641a036e856b21782affd29fad2aab Mon Sep 17 00:00:00 2001
From: Nick Craig-Wood
Date: Tue, 17 Feb 2026 16:55:43 +0000
Subject: [PATCH] Version v1.73.1
---
 MANUAL.html                             | 2659 ++++++++++++-----------
 MANUAL.md                               |   81 +-
 MANUAL.txt                              |   90 +-
 docs/content/bisync.md                  |   13 +-
 docs/content/changelog.md               |   27 +
 docs/content/commands/rclone.md         |    4 +-
 docs/content/commands/rclone_convmv.md  |    4 +-
 docs/content/commands/rclone_copyurl.md |    9 +
 docs/content/filelu.md                  |   22 +
 docs/content/flags.md                   |    4 +-
 go.sum                                  |    2 -
 lib/transform/transform.md              |    4 +-
 rclone.1                                |  139 +-
 13 files changed, 1727 insertions(+), 1331 deletions(-)

diff --git a/MANUAL.html b/MANUAL.html
index 1caf83cca..13e574773 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -233,7 +233,7 @@

rclone(1) User Manual

Nick Craig-Wood

-

Jan 30, 2026

+

Feb 17, 2026

NAME

rclone - manage files on cloud storage

@@ -4553,9 +4553,9 @@ SquareBracket
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
 // Output: stories/The Quick Brown Fox!.txt
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20260130
+// Output: stories/The Quick Brown Fox!-20260217
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
+// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
 // Output: ababababababab/ababab ababababab ababababab ababab!abababab

The regex command generally accepts Perl-style regular expressions,

@@ -4954,6 +4954,14 @@
 https://example.org/foo/bar.json,local/path/bar.json
+https://example.org/qux/baz.json,another/local/directory/qux.json
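For comparison with the CSV rows above, which pair a URL with a destination path, a single file can be fetched with the same URL/destination shape on the command line. This is a sketch with placeholder URLs; -a (--auto-filename) takes the file name from the URL instead:

rclone copyurl https://example.org/foo/bar.json local/path/bar.json
rclone copyurl -a https://example.org/qux/baz.json another/local/directory/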

Troubleshooting

If you can't get rclone copyurl to work then here are some things you can try:

@@ -5564,26 +5572,26 @@ for rclone commands, flags and backends.

Synopsis

List directories and objects in the path in JSON format.

The output is an array of Items, where each Item looks like this:

-
{
-  "Hashes" : {
-    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
-    "MD5" : "b1946ac92492d2347c6235b4d2611184",
-    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
-  },
-  "ID": "y2djkhiujf83u33",
-  "OrigID": "UYOJVTUW00Q1RzTDA",
-  "IsBucket" : false,
-  "IsDir" : false,
-  "MimeType" : "application/octet-stream",
-  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
-  "Name" : "file.txt",
-  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
-  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
-  "Path" : "full/path/goes/here/file.txt",
-  "Size" : 6,
-  "Tier" : "hot",
-}
+
{
+  "Hashes" : {
+    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
+    "MD5" : "b1946ac92492d2347c6235b4d2611184",
+    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
+  },
+  "ID": "y2djkhiujf83u33",
+  "OrigID": "UYOJVTUW00Q1RzTDA",
+  "IsBucket" : false,
+  "IsDir" : false,
+  "MimeType" : "application/octet-stream",
+  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
+  "Name" : "file.txt",
+  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
+  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
+  "Path" : "full/path/goes/here/file.txt",
+  "Size" : 6,
+  "Tier" : "hot",
+}
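As a usage sketch, a listing of Items like the one above can be produced with lsjson on any remote (remote:path is a placeholder); the Hashes object is only included when --hash is given:

rclone lsjson remote:path
rclone lsjson --hash remote:path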

The exact set of properties included depends on the backend:

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And as an example return this on STDOUT

{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "me",
  "pass": "mypassword",
  "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since

@@ -10711,28 +10719,28 @@ obscure
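For context, a minimal proxy program implementing the STDIN/STDOUT contract shown above might look like the sketch below. It assumes jq is installed, only handles password logins, and hard codes sftp.example.com as the target host; a real proxy would authenticate the user before replying.

#!/usr/bin/env bash
# Read the login JSON rclone sends on STDIN, e.g. {"user":"me","pass":"mypassword"}
input=$(cat)
user=$(printf '%s' "$input" | jq -r .user)
pass=$(printf '%s' "$input" | jq -r .pass)
# Reply on STDOUT with the backend to create for this session
cat <<EOF
{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "$user",
  "pass": "$pass",
  "host": "sftp.example.com"
}
EOF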

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And as an example return this on STDOUT

{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "me",
  "pass": "mypassword",
  "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since

@@ -10893,12 +10901,12 @@
default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. This is only used by the memory type cache.

To serve NFS over the network use the following command:

-
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full

This specifies a port that can be used in the mount command. To mount the server under Linux/macOS, use the following command:

-
mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
+
mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint

Where $PORT is the same port number used in the serve nfs command and $HOSTNAME is the network address of the machine that serve nfs was run on.
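Putting the two commands together, a worked example might look like this, assuming port 12345 and a server reachable as nas.local (both are placeholders, and mounting normally needs root):

rclone serve nfs remote: --addr 0.0.0.0:12345 --vfs-cache-mode=full &
sudo mount -t nfs -o port=12345,mountport=12345,tcp nas.local:/ /mnt/rclone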

@@ -11608,9 +11616,9 @@ the server like this:

with a command like this:

rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder

The rclone.conf for the server could look like this:

-
[local]
-type = local
+
[local]
+type = local

The local configuration is optional though. If you run the server with a remote:path like /path/to/folder (without the local: prefix and

@@ -11619,14 +11627,14 @@
default configuration, which will be visible as a warning in the logs. But it will run nonetheless.

This will be compatible with an rclone (client) remote configuration which is defined like this:

-
[serves3]
-type = s3
-provider = Rclone
-endpoint = http://127.0.0.1:8080/
-access_key_id = ACCESS_KEY_ID
-secret_access_key = SECRET_ACCESS_KEY
-use_multipart_uploads = false
+
[serves3]
+type = s3
+provider = Rclone
+endpoint = http://127.0.0.1:8080/
+access_key_id = ACCESS_KEY_ID
+secret_access_key = SECRET_ACCESS_KEY
+use_multipart_uploads = false

Note that setting use_multipart_uploads = false is to work around a bug which will be fixed in due course.
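As a usage sketch, once the server from the command above is running, the [serves3] remote defined here behaves like any other remote (the bucket name and local path are placeholders):

rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder &
rclone mkdir serves3:bucket
rclone copy /some/local/files serves3:bucket
rclone lsf serves3:bucket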

@@ -12738,28 +12746,28 @@ obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And as an example return this on STDOUT

{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "me",
  "pass": "mypassword",
  "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since

@@ -13551,28 +13559,28 @@ obscure

If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "pass": "mypassword"
}

If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

{
  "user": "me",
  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}

And as an example return this on STDOUT

{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "me",
  "pass": "mypassword",
  "host": "sftp.example.com"
}

This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since

@@ -14276,11 +14284,11 @@
infrastructure without a proper certificate. You could supply the --no-check-certificate flag to rclone, but this will affect all the remotes. To make it just affect this remote you use an override. You could put this in the config file:

-
[remote]
-type = XXX
-...
-override.no_check_certificate = true
+
[remote]
+type = XXX
+...
+override.no_check_certificate = true

or use it in the connection string remote,override.no_check_certificate=true: (or just remote,override.no_check_certificate:).
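A quick sketch of the connection string form (the remote name and path are placeholders):

rclone lsd "remote,override.no_check_certificate=true:"
rclone ls "remote,override.no_check_certificate:path/to/dir"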

@@ -14324,11 +14332,11 @@ as an override. For example, say you have a remote where you would always like to use the --checksum flag. You could supply the --checksum flag to rclone on every command line, but instead you could put this in the config file:

-
[remote]
-type = XXX
-...
-global.checksum = true
+
[remote]
+type = XXX
+...
+global.checksum = true

or use it in the connection string remote,global.checksum=true: (or just remote,global.checksum:). This is equivalent to using the

@@ -14364,13 +14372,13 @@ shell.

Windows

If your names have spaces in you need to put them in ", e.g.

-
rclone copy "E:\folder name\folder name\folder name" remote:backup
+
rclone copy "E:\folder name\folder name\folder name" remote:backup

If you are using the root directory on its own then don't quote it (see #464 for why), e.g.

-
rclone copy E:\ remote:backup
+
rclone copy E:\ remote:backup

Copying files or directories with : in the names

rclone uses : to mark a remote name. This is, however, a

@@ -14983,11 +14991,11 @@
value is the internal lowercase name as returned by command rclone help backends. Comments are indicated by ; or # at the beginning of a line.

Example:

-
[megaremote]
-type = mega
-user = you@example.com
-pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH
+
[megaremote]
+type = mega
+user = you@example.com
+pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH
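The pass value in a snippet like this is rclone's obscured form rather than plain text; one way to generate it by hand is with rclone obscure (the password shown is a placeholder):

rclone obscure 'my secret password'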

Note that passwords are in obscured form. Also, many storage systems use token-based authentication instead of

@@ -15507,49 +15515,49 @@
complete log file is not strictly valid JSON and needs a parser that can handle it.

The JSON logs will be printed on a single line, but are shown expanded here for clarity.

-
{
-  "time": "2025-05-13T17:30:51.036237518+01:00",
-  "level": "debug",
-  "msg": "4 go routines active\n",
-  "source": "cmd/cmd.go:298"
-}
+
{
+  "time": "2025-05-13T17:30:51.036237518+01:00",
+  "level": "debug",
+  "msg": "4 go routines active\n",
+  "source": "cmd/cmd.go:298"
+}

Completed data transfer logs will have extra size information. Logs which are about a particular object will have object and objectType fields also.

-
{
-  "time": "2025-05-13T17:38:05.540846352+01:00",
-  "level": "info",
-  "msg": "Copied (new) to: file2.txt",
-  "size": 6,
-  "object": "file.txt",
-  "objectType": "*local.Object",
-  "source": "operations/copy.go:368"
-}
+
{
+  "time": "2025-05-13T17:38:05.540846352+01:00",
+  "level": "info",
+  "msg": "Copied (new) to: file2.txt",
+  "size": 6,
+  "object": "file.txt",
+  "objectType": "*local.Object",
+  "source": "operations/copy.go:368"
+}

Stats logs will contain a stats field which is the same as returned from the rc call core/stats.

-
{
-  "time": "2025-05-13T17:38:05.540912847+01:00",
-  "level": "info",
-  "msg": "...text version of the stats...",
-  "stats": {
-    "bytes": 6,
-    "checks": 0,
-    "deletedDirs": 0,
-    "deletes": 0,
-    "elapsedTime": 0.000904825,
-    ...truncated for clarity...
-    "totalBytes": 6,
-    "totalChecks": 0,
-    "totalTransfers": 1,
-    "transferTime": 0.000882794,
-    "transfers": 1
-  },
-  "source": "accounting/stats.go:569"
-}
+
{
+  "time": "2025-05-13T17:38:05.540912847+01:00",
+  "level": "info",
+  "msg": "...text version of the stats...",
+  "stats": {
+    "bytes": 6,
+    "checks": 0,
+    "deletedDirs": 0,
+    "deletes": 0,
+    "elapsedTime": 0.000904825,
+    ...truncated for clarity...
+    "totalBytes": 6,
+    "totalChecks": 0,
+    "totalTransfers": 1,
+    "transferTime": 0.000882794,
+    "transfers": 1
+  },
+  "source": "accounting/stats.go:569"
+}

--low-level-retries int

This controls the number of low level retries rclone does.

A low level retry is used to retry a failing operation - typically @@ -15704,63 +15712,63 @@ known.
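For instance, to be more tolerant of a flaky connection you might raise the limit for a single transfer (paths are placeholders):

rclone copy /local/dir remote:backup --low-level-retries 20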

  • Metadata is the backend specific metadata as described in the backend docs.
  • -
    {
    -  "SrcFs": "gdrive:",
    -  "SrcFsType": "drive",
    -  "DstFs": "newdrive:user",
    -  "DstFsType": "onedrive",
    -  "Remote": "test.txt",
    -  "Size": 6,
    -  "MimeType": "text/plain; charset=utf-8",
    -  "ModTime": "2022-10-11T17:53:10.286745272+01:00",
    -  "IsDir": false,
    -  "ID": "xyz",
    -  "Metadata": {
    -    "btime": "2022-10-11T16:53:11Z",
    -    "content-type": "text/plain; charset=utf-8",
    -    "mtime": "2022-10-11T17:53:10.286745272+01:00",
    -    "owner": "user1@domain1.com",
    -    "permissions": "...",
    -    "description": "my nice file",
    -    "starred": "false"
    -  }
    -}
    +
    {
    +  "SrcFs": "gdrive:",
    +  "SrcFsType": "drive",
    +  "DstFs": "newdrive:user",
    +  "DstFsType": "onedrive",
    +  "Remote": "test.txt",
    +  "Size": 6,
    +  "MimeType": "text/plain; charset=utf-8",
    +  "ModTime": "2022-10-11T17:53:10.286745272+01:00",
    +  "IsDir": false,
    +  "ID": "xyz",
    +  "Metadata": {
    +    "btime": "2022-10-11T16:53:11Z",
    +    "content-type": "text/plain; charset=utf-8",
    +    "mtime": "2022-10-11T17:53:10.286745272+01:00",
    +    "owner": "user1@domain1.com",
    +    "permissions": "...",
    +    "description": "my nice file",
    +    "starred": "false"
    +  }
    +}

    The program should then modify the input as desired and send it to STDOUT. The returned Metadata field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:

    -
    {
    -  "Metadata": {
    -    "btime": "2022-10-11T16:53:11Z",
    -    "content-type": "text/plain; charset=utf-8",
    -    "mtime": "2022-10-11T17:53:10.286745272+01:00",
    -    "owner": "user1@domain2.com",
    -    "permissions": "...",
    -    "description": "my nice file [migrated from domain1]",
    -    "starred": "false"
    -  }
    -}
    +
    {
    +  "Metadata": {
    +    "btime": "2022-10-11T16:53:11Z",
    +    "content-type": "text/plain; charset=utf-8",
    +    "mtime": "2022-10-11T17:53:10.286745272+01:00",
    +    "owner": "user1@domain2.com",
    +    "permissions": "...",
    +    "description": "my nice file [migrated from domain1]",
    +    "starred": "false"
    +  }
    +}

    Metadata can be removed here too.

    An example python program might look something like this to implement the above transformations.

    -
    import sys, json
    -
    -i = json.load(sys.stdin)
    -metadata = i["Metadata"]
    -# Add tag to description
    -if "description" in metadata:
    -    metadata["description"] += " [migrated from domain1]"
    -else:
    -    metadata["description"] = "[migrated from domain1]"
    -# Modify owner
    -if "owner" in metadata:
    -    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
    -o = { "Metadata": metadata }
    -json.dump(o, sys.stdout, indent="\t")
    +
    import sys, json
    +
    +i = json.load(sys.stdin)
    +metadata = i["Metadata"]
    +# Add tag to description
    +if "description" in metadata:
    +    metadata["description"] += " [migrated from domain1]"
    +else:
    +    metadata["description"] = "[migrated from domain1]"
    +# Modify owner
    +if "owner" in metadata:
    +    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
    +o = { "Metadata": metadata }
    +json.dump(o, sys.stdout, indent="\t")

    You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.
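A usage sketch for a mapper like the one above: make the script executable and point --metadata-mapper at it while copying with metadata enabled (the script path is a placeholder; the source and destination match the example JSON):

chmod +x /path/to/mapper.py
rclone copy gdrive: newdrive:user --metadata --metadata-mapper /path/to/mapper.py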

    @@ -16571,11 +16579,11 @@ password, in which case it will be used for decrypting the configuration.

    You can set this for a session from a script. For unix like systems save this to a file called set-rclone-password:

    -
    #!/bin/echo Source this file don't run it
    -
    -read -s RCLONE_CONFIG_PASS
    -export RCLONE_CONFIG_PASS
    +
    #!/bin/echo Source this file don't run it
    +
    +read -s RCLONE_CONFIG_PASS
    +export RCLONE_CONFIG_PASS

    Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.
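In other words, a session might look like this (the remote name is a placeholder):

source set-rclone-password   # prompts once for the config password
rclone lsd secretremote:     # later commands reuse RCLONE_CONFIG_PASS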

    @@ -16647,11 +16655,11 @@ a password store: pass init rclone.

    Windows

    Encrypt the config file (all systems)

@@ -18325,8 +18333,8 @@ the options blocks section for more info).

    For example, if you wished to run a sync with the --checksum parameter, you would pass this parameter in your JSON blob.

    -
    "_config":{"CheckSum": true}
    +
    "_config":{"CheckSum": true}

    If using rclone rc this could be passed as

    rclone rc sync/sync ... _config='{"CheckSum": true}'

    Any config parameters you don't set will inherit the global defaults @@ -18335,9 +18343,9 @@ which were set with command line flags or environment variables.

    see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format.

    -
    "_config":{"BufferSize": "42M"}
    -"_config":{"BufferSize": 44040192}
    +
    "_config":{"BufferSize": "42M"}
    +"_config":{"BufferSize": 44040192}

    If you wish to check the _config assignment has worked properly then calling options/local will show what the value got set to.
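For example, something like the following should echo back the merged value. This is a sketch and assumes rclone rc can reach a running rclone instance with the remote control enabled:

rclone rc options/local _config='{"BufferSize": "42M"}'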

@@ -18354,8 +18362,8 @@ the options blocks section for more info).

    For example, if you wished to run a sync with these flags

    --max-size 1M --max-age 42s --include "a" --include "b"

    you would pass this parameter in your JSON blob.

    -
    "_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}
    +
    "_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}

    If using rclone rc this could be passed as

    rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'

    Any filter parameters you don't set will inherit the global defaults @@ -18364,9 +18372,9 @@ which were set with command line flags or environment variables.

    see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format.

    -
    "_filter":{"MinSize": "42M"}
    -"_filter":{"MinSize": 44040192}
    +
    "_filter":{"MinSize": "42M"}
    +"_filter":{"MinSize": 44040192}

    If you wish to check the _filter assignment has worked properly then calling options/local will show what the value got set to.

    @@ -18546,36 +18554,36 @@ allowed unless Required or Default is set)

    An example of this might be the --log-level flag. Note that the Name of the option becomes the command line flag with _ replaced with -.

    -
    {
    -    "Advanced": false,
    -    "Default": 5,
    -    "DefaultStr": "NOTICE",
    -    "Examples": [
    -        {
    -            "Help": "",
    -            "Value": "EMERGENCY"
    -        },
    -        {
    -            "Help": "",
    -            "Value": "ALERT"
    -        },
    -        ...
    -    ],
    -    "Exclusive": true,
    -    "FieldName": "LogLevel",
    -    "Groups": "Logging",
    -    "Help": "Log level DEBUG|INFO|NOTICE|ERROR",
    -    "Hide": 0,
    -    "IsPassword": false,
    -    "Name": "log_level",
    -    "NoPrefix": true,
    -    "Required": true,
    -    "Sensitive": false,
    -    "Type": "LogLevel",
    -    "Value": null,
    -    "ValueStr": "NOTICE"
    -},
    +
    {
    +    "Advanced": false,
    +    "Default": 5,
    +    "DefaultStr": "NOTICE",
    +    "Examples": [
    +        {
    +            "Help": "",
    +            "Value": "EMERGENCY"
    +        },
    +        {
    +            "Help": "",
    +            "Value": "ALERT"
    +        },
    +        ...
    +    ],
    +    "Exclusive": true,
    +    "FieldName": "LogLevel",
    +    "Groups": "Logging",
    +    "Help": "Log level DEBUG|INFO|NOTICE|ERROR",
    +    "Hide": 0,
    +    "IsPassword": false,
    +    "Name": "log_level",
    +    "NoPrefix": true,
    +    "Required": true,
    +    "Sensitive": false,
    +    "Type": "LogLevel",
    +    "Value": null,
    +    "ValueStr": "NOTICE"
    +},

    Note that the Help may be multiple lines separated by \n. The first line will always be a short sentence and this is the sentence shown when running rclone help flags.

    @@ -18601,25 +18609,25 @@ set. If the local backend is desired then type should be set to local. If _root isn't specified then it defaults to the root of the remote.

    For example this JSON is equivalent to remote:/tmp

{
    "_name": "remote",
    "_root": "/tmp"
}

And this is equivalent to :sftp,host='example.com':/tmp

{
    "type": "sftp",
    "host": "example.com",
    "_root": "/tmp"
}

And this is equivalent to /tmp/dir

{
    "type": "local",
    "_root": "/tmp/dir"
}

    Supported commands

    backend/command: Runs a backend command.

    @@ -19183,12 +19191,12 @@ concurrently.
• inputs - a list of inputs to the commands with an extra _path parameter
  • -
    {
    -    "_path": "rc/path",
    -    "param1": "parameter for the path as documented",
    -    "param2": "parameter for the path as documented, etc",
    -}
    +
    {
    +    "_path": "rc/path",
    +    "param1": "parameter for the path as documented",
    +    "param2": "parameter for the path as documented, etc",
    +}

    The inputs may use _async, _group, _config and _filter as normal when using the rc.

    @@ -19198,37 +19206,37 @@ rc.

    each in inputs.

    For example:

rclone rc job/batch --json '{
  "inputs": [
    {
      "_path": "rc/noop",
      "parameter": "OK"
    },
    {
      "_path": "rc/error",
      "parameter": "BAD"
    }
  ]
}
'

Gives the result:

{
  "results": [
    {
      "parameter": "OK"
    },
    {
      "error": "arbitrary error on input map[parameter:BAD]",
      "input": {
        "parameter": "BAD"
      },
      "path": "rc/error",
      "status": 500
    }
  ]
}

    Authentication is required for this call.

    job/list: Lists the IDs of the running jobs

    Parameters: None.

    @@ -19996,25 +20004,25 @@ Useful for testing error handling.

    Eg

    rclone rc serve/list

    Returns

    -
    {
    -    "list": [
    -        {
    -            "addr": "[::]:4321",
    -            "id": "nfs-ffc2a4e5",
    -            "params": {
    -                "fs": "remote:",
    -                "opt": {
    -                    "ListenAddr": ":4321"
    -                },
    -                "type": "nfs",
    -                "vfsOpt": {
    -                    "CacheMode": "full"
    -                }
    -            }
    -        }
    -    ]
    -}
    +
    {
    +    "list": [
    +        {
    +            "addr": "[::]:4321",
    +            "id": "nfs-ffc2a4e5",
    +            "params": {
    +                "fs": "remote:",
    +                "opt": {
    +                    "ListenAddr": ":4321"
    +                },
    +                "type": "nfs",
    +                "vfsOpt": {
    +                    "CacheMode": "full"
    +                }
    +            }
    +        }
    +    ]
    +}

    Authentication is required for this call.

    serve/start: Create a new server

    Create a new server with the specified parameters.

    @@ -20039,11 +20047,11 @@ above.

    rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
     rclone rc serve/start --json '{"type":"nfs","fs":"remote:","addr":":1234","vfs_cache_mode":"full"}'

    This will give the reply

    -
    {
    -    "addr": "[::]:4321", // Address the server was started on
    -    "id": "nfs-ecfc6852" // Unique identifier for the server instance
    -}
    +
    {
    +    "addr": "[::]:4321", // Address the server was started on
    +    "id": "nfs-ecfc6852" // Unique identifier for the server instance
    +}

    Or an error if it failed to start.

    Stop the server with serve/stop and list the running servers with serve/list.
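Putting serve/start, serve/list and serve/stop together, a session might look like the sketch below. The id value is illustrative, and serve/stop taking an id parameter is an assumption based on the reply format shown above:

rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
rclone rc serve/list
rclone rc serve/stop id=nfs-ecfc6852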

    @@ -20075,14 +20083,14 @@ be passed to serve/start as the serveType parameter.

    Eg

    rclone rc serve/types

    Returns

    -
    {
    -    "types": [
    -        "http",
    -        "sftp",
    -        "nfs"
    -    ]
    -}
    +
    {
    +    "types": [
    +        "http",
    +        "sftp",
    +        "nfs"
    +    ]
    +}

    Authentication is required for this call.

    sync/bisync: Perform bidirectional synchronization between two paths.

    @@ -20328,16 +20336,16 @@ formatted to be reasonably human-readable.

    If an error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will contain a JSON encoded error object, e.g.

    -
    {
    -    "error": "Expecting string value for key \"remote\" (was float64)",
    -    "input": {
    -        "fs": "/tmp",
    -        "remote": 3
    -    },
    -    "status": 400,
    -    "path": "operations/rmdir"
    -}
    +
    {
    +    "error": "Expecting string value for key \"remote\" (was float64)",
    +    "input": {
    +        "fs": "/tmp",
    +        "remote": 3
    +    },
    +    "status": 400,
    +    "path": "operations/rmdir"
    +}

    The keys in the error response are:

-• TestSeafile (seafile)
+• TestInternxt (internxt)
-• TestSeafileV6 (seafile)
-• Updated: 2026-01-30-010015
+• Updated: 2026-02-17-010016
• The following backends either have not been tested recently or have

@@ -24767,19 +24782,19 @@ versions I manually run the following command:

  • The Dropbox client then syncs the changes with Dropbox.
  • rclone.conf snippet

    -
    [Dropbox]
    -type = dropbox
    -...
    -
    -[Dropcrypt]
    -type = crypt
    -remote = /path/to/DBoxroot/crypt          # on the Linux server
    -remote = C:\Users\MyLogin\Dropbox\crypt   # on the Windows notebook
    -filename_encryption = standard
    -directory_name_encryption = true
    -password = ...
    -...
    +
    [Dropbox]
    +type = dropbox
    +...
    +
    +[Dropcrypt]
    +type = crypt
    +remote = /path/to/DBoxroot/crypt          # on the Linux server
    +remote = C:\Users\MyLogin\Dropbox\crypt   # on the Windows notebook
    +filename_encryption = standard
    +directory_name_encryption = true
    +password = ...
    +...

    Testing

You should read this section only if you are developing for rclone. You need to have rclone source code locally to work with bisync

@@ -26435,24 +26450,24 @@
An external ID is provided for additional security as required by the role's trust policy

    The target role's trust policy in the destination account must allow the source account or user to assume it. Example trust policy:

    -
    {
    -  "Version": "2012-10-17",
    -  "Statement": [
    -    {
    -      "Effect": "Allow",
    -      "Principal": {
    -        "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:root"
    -      },
    -      "Action": "sts:AssumeRole",
    -      "Condition": {
    -        "StringEquals": {
    -          "sts:ExternalID": "unique-role-external-id-12345"
    -        }
    -      }
    -    }
    -  ]
    -}
    +
    {
    +  "Version": "2012-10-17",
    +  "Statement": [
    +    {
    +      "Effect": "Allow",
    +      "Principal": {
    +        "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:root"
    +      },
    +      "Action": "sts:AssumeRole",
    +      "Condition": {
    +        "StringEquals": {
    +          "sts:ExternalID": "unique-role-external-id-12345"
    +        }
    +      }
    +    }
    +  ]
    +}

    S3 Permissions

When using the sync subcommand of rclone the following minimum permissions are required to be available on the

@@ -26469,34 +26484,34 @@ s3-no-check-bucket)

    When using the lsd subcommand, the ListAllMyBuckets permission is required.

    Example policy:

    -
    {
    -  "Version": "2012-10-17",
    -  "Statement": [
    -    {
    -      "Effect": "Allow",
    -      "Principal": {
    -        "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
    -      },
    -      "Action": [
    -        "s3:ListBucket",
    -        "s3:DeleteObject",
    -        "s3:GetObject",
    -        "s3:PutObject",
    -        "s3:PutObjectAcl"
    -      ],
    -      "Resource": [
    -        "arn:aws:s3:::BUCKET_NAME/*",
    -        "arn:aws:s3:::BUCKET_NAME"
    -      ]
    -    },
    -    {
    -      "Effect": "Allow",
    -      "Action": "s3:ListAllMyBuckets",
    -      "Resource": "arn:aws:s3:::*"
    -    }
    -  ]
    -}
    +
    {
    +  "Version": "2012-10-17",
    +  "Statement": [
    +    {
    +      "Effect": "Allow",
    +      "Principal": {
    +        "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
    +      },
    +      "Action": [
    +        "s3:ListBucket",
    +        "s3:DeleteObject",
    +        "s3:GetObject",
    +        "s3:PutObject",
    +        "s3:PutObjectAcl"
    +      ],
    +      "Resource": [
    +        "arn:aws:s3:::BUCKET_NAME/*",
    +        "arn:aws:s3:::BUCKET_NAME"
    +      ]
    +    },
    +    {
    +      "Effect": "Allow",
    +      "Action": "s3:ListAllMyBuckets",
    +      "Resource": "arn:aws:s3:::*"
    +    }
    +  ]
    +}

    Notes on above:

1. This is a policy that can be used when creating bucket. It assumes

@@ -30908,17 +30923,17 @@
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY

It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not.

      -
      [
      -    {
      -        "Status": "OK",
      -        "Remote": "test.txt"
      -    },
      -    {
      -        "Status": "OK",
      -        "Remote": "test/file4.txt"
      -    }
      -]
      +
      [
      +    {
      +        "Status": "OK",
      +        "Remote": "test.txt"
      +    },
      +    {
      +        "Status": "OK",
      +        "Remote": "test/file4.txt"
      +    }
      +]

      Options:

      • "description": The optional description for the job.
@@ -30940,36 +30955,36 @@
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory

        This command does not obey the filters.

        It returns a list of status dictionaries:

        -
        [
        -    {
        -        "Remote": "file.txt",
        -        "VersionID": null,
        -        "RestoreStatus": {
        -            "IsRestoreInProgress": true,
        -            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        -        },
        -        "StorageClass": "GLACIER"
        -    },
        -    {
        -        "Remote": "test.pdf",
        -        "VersionID": null,
        -        "RestoreStatus": {
        -            "IsRestoreInProgress": false,
        -            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        -        },
        -        "StorageClass": "DEEP_ARCHIVE"
        -    },
        -    {
        -        "Remote": "test.gz",
        -        "VersionID": null,
        -        "RestoreStatus": {
        -            "IsRestoreInProgress": true,
        -            "RestoreExpiryDate": "null"
        -        },
        -        "StorageClass": "INTELLIGENT_TIERING"
        -    }
        -]
        +
        [
        +    {
        +        "Remote": "file.txt",
        +        "VersionID": null,
        +        "RestoreStatus": {
        +            "IsRestoreInProgress": true,
        +            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        +        },
        +        "StorageClass": "GLACIER"
        +    },
        +    {
        +        "Remote": "test.pdf",
        +        "VersionID": null,
        +        "RestoreStatus": {
        +            "IsRestoreInProgress": false,
        +            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        +        },
        +        "StorageClass": "DEEP_ARCHIVE"
        +    },
        +    {
        +        "Remote": "test.gz",
        +        "VersionID": null,
        +        "RestoreStatus": {
        +            "IsRestoreInProgress": true,
        +            "RestoreExpiryDate": "null"
        +        },
        +        "StorageClass": "INTELLIGENT_TIERING"
        +    }
        +]

        Options:

        • "all": If set then show all objects, not just ones with restore @@ -30986,27 +31001,27 @@ format.

          multipart uploads.

You can call it with no bucket, in which case it lists all buckets, with a bucket, or with a bucket and path.
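As a sketch, the three forms described above would be invoked like this (the bucket and path are placeholders):

rclone backend list-multipart-uploads s3:
rclone backend list-multipart-uploads s3:bucket
rclone backend list-multipart-uploads s3:bucket/path/to/dir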

          -
          {
          -    "rclone": [
          -        {
          -            "Initiated": "2020-06-26T14:20:36Z",
          -            "Initiator": {
          -                "DisplayName": "XXX",
          -                "ID": "arn:aws:iam::XXX:user/XXX"
          -            },
          -            "Key": "KEY",
          -            "Owner": {
          -                "DisplayName": null,
          -                "ID": "XXX"
          -            },
          -            "StorageClass": "STANDARD",
          -            "UploadId": "XXX"
          -        }
          -    ],
          -    "rclone-1000files": [],
          -    "rclone-dst": []
          -}
          +
          {
          +    "rclone": [
          +        {
          +            "Initiated": "2020-06-26T14:20:36Z",
          +            "Initiator": {
          +                "DisplayName": "XXX",
          +                "ID": "arn:aws:iam::XXX:user/XXX"
          +            },
          +            "Key": "KEY",
          +            "Owner": {
          +                "DisplayName": null,
          +                "ID": "XXX"
          +            },
          +            "StorageClass": "STANDARD",
          +            "UploadId": "XXX"
          +        }
          +    ],
          +    "rclone-1000files": [],
          +    "rclone-dst": []
          +}

          cleanup

          Remove unfinished multipart uploads.

          rclone backend cleanup remote: [options] [<arguments>+]
          @@ -31062,10 +31077,10 @@ will default to those currently in use.

          If you want to use rclone to access a public bucket, configure with a blank access_key_id and secret_access_key. Your config should end up looking like this:

          -
          [anons3]
          -type = s3
          -provider = AWS
          +
          [anons3]
          +type = s3
          +provider = AWS

          Then use it as normal with the name of the public bucket, e.g.

          rclone lsd anons3:1000genomes

          You will be able to list and copy data but not upload it.

          @@ -31101,14 +31116,14 @@ query parameter based authentication.

          With rclone v1.59 or later setting upload_cutoff should not be necessary.

          eg.

          -
          [snowball]
          -type = s3
          -provider = Other
          -access_key_id = YOUR_ACCESS_KEY
          -secret_access_key = YOUR_SECRET_KEY
          -endpoint = http://[IP of Snowball]:8080
          -upload_cutoff = 0
          +
          [snowball]
          +type = s3
          +provider = Other
          +access_key_id = YOUR_ACCESS_KEY
          +secret_access_key = YOUR_SECRET_KEY
          +endpoint = http://[IP of Snowball]:8080
          +upload_cutoff = 0

          Alibaba OSS

          Here is an example of making an Alibaba Cloud (Aliyun) @@ -31303,19 +31318,19 @@ e) Edit this remote d) Delete this remote y/e/d> y

          This will leave the config file looking like this.

          -
          [ArvanCloud]
          -type = s3
          -provider = ArvanCloud
          -env_auth = false
          -access_key_id = YOURACCESSKEY
          -secret_access_key = YOURSECRETACCESSKEY
          -region =
          -endpoint = s3.arvanstorage.com
          -location_constraint =
          -acl =
          -server_side_encryption =
          -storage_class =
          +
          [ArvanCloud]
          +type = s3
          +provider = ArvanCloud
          +env_auth = false
          +access_key_id = YOURACCESSKEY
          +secret_access_key = YOURSECRETACCESSKEY
          +region =
          +endpoint = s3.arvanstorage.com
          +location_constraint =
          +acl =
          +server_side_encryption =
          +storage_class =

          BizflyCloud

          Bizfly Cloud Simple Storage is an S3-compatible service with regions in Hanoi (HN) and @@ -31326,19 +31341,19 @@ Ho Chi Minh City (HCM).

        • HCM: hcm.ss.bfcplatform.vn

        A minimal configuration looks like this.

        -
        [bizfly]
        -type = s3
        -provider = BizflyCloud
        -env_auth = false
        -access_key_id = YOUR_ACCESS_KEY
        -secret_access_key = YOUR_SECRET_KEY
        -region = HN
        -endpoint = hn.ss.bfcplatform.vn
        -location_constraint =
        -acl =
        -server_side_encryption =
        -storage_class =
        +
        [bizfly]
        +type = s3
        +provider = BizflyCloud
        +env_auth = false
        +access_key_id = YOUR_ACCESS_KEY
        +secret_access_key = YOUR_SECRET_KEY
        +region = HN
        +endpoint = hn.ss.bfcplatform.vn
        +location_constraint =
        +acl =
        +server_side_encryption =
        +storage_class =

        Switch region and endpoint to HCM and hcm.ss.bfcplatform.vn for Ho Chi Minh City.

        @@ -31350,19 +31365,19 @@ interface.

        To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

        -
        [ceph]
        -type = s3
        -provider = Ceph
        -env_auth = false
        -access_key_id = XXX
        -secret_access_key = YYY
        -region =
        -endpoint = https://ceph.endpoint.example.com
        -location_constraint =
        -acl =
        -server_side_encryption =
        -storage_class =
        +
        [ceph]
        +type = s3
        +provider = Ceph
        +env_auth = false
        +access_key_id = XXX
        +secret_access_key = YYY
        +region =
        +endpoint = https://ceph.endpoint.example.com
        +location_constraint =
        +acl =
        +server_side_encryption =
        +storage_class =

        If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a version of rclone before v1.59 then you may need to supply the parameter --s3-upload-cutoff 0 or put this in the config file as @@ -31375,18 +31390,18 @@ tools you will get a JSON blob with the / escaped as access key.

        Eg the dump from Ceph looks something like this (irrelevant keys removed).

        -
        {
        -    "user_id": "xxx",
        -    "display_name": "xxxx",
        -    "keys": [
        -        {
        -            "user": "xxx",
        -            "access_key": "xxxxxx",
        -            "secret_key": "xxxxxx\/xxxx"
        -        }
        -    ],
        -}
        +
        {
        +    "user_id": "xxx",
        +    "display_name": "xxxx",
        +    "keys": [
        +        {
        +            "user": "xxx",
        +            "access_key": "xxxxxx",
        +            "secret_key": "xxxxxx\/xxxx"
        +        }
        +    ],
        +}

        Because this is a json dump, it is encoding the / as \/, so if you use the secret key as xxxxxx/xxxx it will work fine.

        @@ -31713,15 +31728,15 @@ e) Edit this remote d) Delete this remote y/e/d> y

        This will leave your config looking something like:

        -
        [r2]
        -type = s3
        -provider = Cloudflare
        -access_key_id = ACCESS_KEY
        -secret_access_key = SECRET_ACCESS_KEY
        -region = auto
        -endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
        -acl = private
        +
        [r2]
        +type = s3
        +provider = Cloudflare
        +access_key_id = ACCESS_KEY
        +secret_access_key = SECRET_ACCESS_KEY
        +region = auto
        +endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
        +acl = private

        Now run rclone lsf r2: to see your buckets and rclone lsf r2:bucket to look within a bucket.

        For R2 tokens with the "Object Read & Write" permission, you may @@ -31756,14 +31771,14 @@ region> eu-west-1 (or leave empty) endpoint> s3.cubbit.eu acl>

        The resulting configuration file should look like:

        -
        [cubbit-ds3]
        -type = s3
        -provider = Cubbit
        -access_key_id = ACCESS_KEY
        -secret_access_key = SECRET_KEY
        -region = eu-west-1
        -endpoint = s3.cubbit.eu
        +
        [cubbit-ds3]
        +type = s3
        +provider = Cubbit
        +access_key_id = ACCESS_KEY
        +secret_access_key = SECRET_KEY
        +region = eu-west-1
        +endpoint = s3.cubbit.eu

        You can then start using Cubbit DS3 with rclone. For example, to create a new bucket and copy files into it, you can run:

        rclone mkdir cubbit-ds3:my-bucket
        @@ -31798,19 +31813,19 @@ location_constraint>
         acl>
         storage_class>

        The resulting configuration file should look like:

        -
        [spaces]
        -type = s3
        -provider = DigitalOcean
        -env_auth = false
        -access_key_id = YOUR_ACCESS_KEY
        -secret_access_key = YOUR_SECRET_KEY
        -region =
        -endpoint = nyc3.digitaloceanspaces.com
        -location_constraint =
        -acl =
        -server_side_encryption =
        -storage_class =
        +
        [spaces]
        +type = s3
        +provider = DigitalOcean
        +env_auth = false
        +access_key_id = YOUR_ACCESS_KEY
        +secret_access_key = YOUR_SECRET_KEY
        +region =
        +endpoint = nyc3.digitaloceanspaces.com
        +location_constraint =
        +acl =
        +server_side_encryption =
        +storage_class =

        Once configured, you can create a new Space and begin copying files. For example:

        rclone mkdir spaces:my-new-space
        @@ -31822,19 +31837,19 @@ object storage system based on CEPH.

        To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:

        -
        [dreamobjects]
        -type = s3
        -provider = DreamHost
        -env_auth = false
        -access_key_id = your_access_key
        -secret_access_key = your_secret_key
        -region =
        -endpoint = objects-us-west-1.dream.io
        -location_constraint =
        -acl = private
        -server_side_encryption =
        -storage_class =
        +
        [dreamobjects]
        +type = s3
        +provider = DreamHost
        +env_auth = false
        +access_key_id = your_access_key
        +secret_access_key = your_secret_key
        +region =
        +endpoint = objects-us-west-1.dream.io
        +location_constraint =
        +acl = private
        +server_side_encryption =
        +storage_class =

        Exaba

        Exaba is an on-premises, S3-compatible storage for service providers and large enterprises. It is @@ -31895,13 +31910,13 @@ y) Yes n) No (default) y/n> n

        And the config generated will end up looking like this:

        -
        [exaba]
        -type = s3
        -provider = Exaba
        -access_key_id = XXX
        -secret_access_key = XXX
        -endpoint = http://127.0.0.1:9000/
        +
        [exaba]
        +type = s3
        +provider = Exaba
        +access_key_id = XXX
        +secret_access_key = XXX
        +endpoint = http://127.0.0.1:9000/

        Google Cloud Storage

        GoogleCloudStorage is @@ -31912,13 +31927,13 @@ object storage service from Google Cloud Platform.

        secret key. These can be retrieved by creating an HMAC key.

        -
        [gs]
        -type = s3
        -provider = GCS
        -access_key_id = your_access_key
        -secret_access_key = your_secret_key
        -endpoint = https://storage.googleapis.com
        +
        [gs]
        +type = s3
        +provider = GCS
        +access_key_id = your_access_key
        +secret_access_key = your_secret_key
        +endpoint = https://storage.googleapis.com

        Note that --s3-versions does not work with GCS when it needs to do directory paging. Rclone will return the error:

        @@ -32050,30 +32065,30 @@ s) Set configuration password q) Quit config e/n/d/r/c/s/q>

        This will leave the config file looking like this.

        -
        [my-hetzner]
        -type = s3
        -provider = Hetzner
        -access_key_id = ACCESS_KEY
        -secret_access_key = SECRET_KEY
        -region = hel1
        -endpoint = hel1.your-objectstorage.com
        -acl = private
        +
        [my-hetzner]
        +type = s3
        +provider = Hetzner
        +access_key_id = ACCESS_KEY
        +secret_access_key = SECRET_KEY
        +region = hel1
        +endpoint = hel1.your-objectstorage.com
        +acl = private

        Huawei OBS

        Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.

        OBS provides an S3 interface, you can copy and modify the following configuration and add it to your rclone configuration file.

        -
        [obs]
        -type = s3
        -provider = HuaweiOBS
        -access_key_id = your-access-key-id
        -secret_access_key = your-secret-access-key
        -region = af-south-1
        -endpoint = obs.af-south-1.myhuaweicloud.com
        -acl = private
        +
        [obs]
        +type = s3
        +provider = HuaweiOBS
        +access_key_id = your-access-key-id
        +secret_access_key = your-secret-access-key
        +region = af-south-1
        +endpoint = obs.af-south-1.myhuaweicloud.com
        +acl = private

        Or you can also configure via the interactive command line:

        No remotes found, make a new one\?
         n) New remote
        @@ -32302,15 +32317,15 @@ Choose a number from below, or type in your own value
         acl> 1
      • Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this

        -
        [xxx]
        -type = s3
        -Provider = IBMCOS
        -access_key_id = xxx
        -secret_access_key = yyy
        -endpoint = s3-api.us-geo.objectstorage.softlayer.net
        -location_constraint = us-standard
        -acl = private
      • +
        [xxx]
        +type = s3
        +Provider = IBMCOS
        +access_key_id = xxx
        +secret_access_key = yyy
        +endpoint = s3-api.us-geo.objectstorage.softlayer.net
        +location_constraint = us-standard
        +acl = private
      • Execute rclone commands

    1) Create a bucket.
    @@ -32568,14 +32583,14 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    This will leave the config file looking like this.

    -
    [intercolo]
    -type = s3
    -provider = Intercolo
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_KEY
    -region = de-fra
    -endpoint = de-fra.i3storage.com
    +
    [intercolo]
    +type = s3
    +provider = Intercolo
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_KEY
    +region = de-fra
    +endpoint = de-fra.i3storage.com

    IONOS Cloud

    IONOS S3 Object Storage is a service offered by IONOS for storing and @@ -32883,19 +32898,19 @@ e) Edit this remote d) Delete this remote y/e/d> y

    This will leave the config file looking like this.

    -
    [Liara]
    -type = s3
    -provider = Liara
    -env_auth = false
    -access_key_id = YOURACCESSKEY
    -secret_access_key = YOURSECRETACCESSKEY
    -region =
    -endpoint = storage.iran.liara.space
    -location_constraint =
    -acl =
    -server_side_encryption =
    -storage_class =
    +
    [Liara]
    +type = s3
    +provider = Liara
    +env_auth = false
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region =
    +endpoint = storage.iran.liara.space
    +location_constraint =
    +acl =
    +server_side_encryption =
    +storage_class =

    Linode

    Here is an example of making a Linode Object @@ -33035,13 +33050,13 @@ e) Edit this remote d) Delete this remote y/e/d> y

    This will leave the config file looking like this.

    -
    [linode]
    -type = s3
    -provider = Linode
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_ACCESS_KEY
    -endpoint = eu-central-1.linodeobjects.com
    +
    [linode]
    +type = s3
    +provider = Linode
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = eu-central-1.linodeobjects.com

    Magalu

    Here is an example of making a Magalu Object Storage @@ -33143,13 +33158,13 @@ e) Edit this remote d) Delete this remote y/e/d> y

    This will leave the config file looking like this.

    -
    [magalu]
    -type = s3
    -provider = Magalu
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_ACCESS_KEY
    -endpoint = br-ne1.magaluobjects.com
    +
    [magalu]
    +type = s3
    +provider = Magalu
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = br-ne1.magaluobjects.com

    MEGA S4

    MEGA S4 Object Storage is an S3 compatible object storage system. It has a single pricing tier @@ -33242,13 +33257,13 @@ e) Edit this remote d) Delete this remote y/e/d> y

    This will leave the config file looking like this.

    -
    [megas4]
    -type = s3
    -provider = Mega
    -access_key_id = XXX
    -secret_access_key = XXX
    -endpoint = s3.eu-central-1.s4.mega.io
    +
    [megas4]
    +type = s3
    +provider = Mega
    +access_key_id = XXX
    +secret_access_key = XXX
    +endpoint = s3.eu-central-1.s4.mega.io

    Minio

    Minio is an object storage server built for cloud application developers and devops.

    @@ -33287,17 +33302,17 @@ endpoint> http://192.168.1.106:9000 location_constraint> server_side_encryption>

    Which makes the config file look like this

    -
    [minio]
    -type = s3
    -provider = Minio
    -env_auth = false
    -access_key_id = USWUXHGYZQYFYFFIT3RE
    -secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
    -region = us-east-1
    -endpoint = http://192.168.1.106:9000
    -location_constraint =
    -server_side_encryption =
    +
    [minio]
    +type = s3
    +provider = Minio
    +env_auth = false
    +access_key_id = USWUXHGYZQYFYFFIT3RE
    +secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
    +region = us-east-1
    +endpoint = http://192.168.1.106:9000
    +location_constraint =
    +server_side_encryption =

    So once set up, for example, to copy files into a bucket

    rclone copy /path/to/files minio:bucket

    Netease NOS

    @@ -33315,16 +33330,16 @@ href="https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html">o documentation.

    Here is an example of an OOS configuration that you can paste into your rclone configuration file:

    -
    [outscale]
    -type = s3
    -provider = Outscale
    -env_auth = false
    -access_key_id = ABCDEFGHIJ0123456789
    -secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -region = eu-west-2
    -endpoint = oos.eu-west-2.outscale.com
    -acl = private
    +
    [outscale]
    +type = s3
    +provider = Outscale
    +env_auth = false
    +access_key_id = ABCDEFGHIJ0123456789
    +secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    +region = eu-west-2
    +endpoint = oos.eu-west-2.outscale.com
    +acl = private

    You can also run rclone config to go through the interactive setup process:

    No remotes found, make a new one\?
    @@ -33618,15 +33633,15 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    Your configuration file should now look like this:

    -
    [ovhcloud-rbx]
    -type = s3
    -provider = OVHcloud
    -access_key_id = my_access
    -secret_access_key = my_secret
    -region = rbx
    -endpoint = s3.rbx.io.cloud.ovh.net
    -acl = private
    +
    [ovhcloud-rbx]
    +type = s3
    +provider = OVHcloud
    +access_key_id = my_access
    +secret_access_key = my_secret
    +region = rbx
    +endpoint = s3.rbx.io.cloud.ovh.net
    +acl = private

    Petabox

    Here is an example of making a Petabox configuration. First run:

    @@ -33767,14 +33782,14 @@ e) Edit this remote d) Delete this remote y/e/d> y

    This will leave the config file looking like this.

    -
    [My Petabox Storage]
    -type = s3
    -provider = Petabox
    -access_key_id = YOUR_ACCESS_KEY_ID
    -secret_access_key = YOUR_SECRET_ACCESS_KEY
    -region = us-east-1
    -endpoint = s3.petabox.io
    +
    [My Petabox Storage]
    +type = s3
    +provider = Petabox
    +access_key_id = YOUR_ACCESS_KEY_ID
    +secret_access_key = YOUR_SECRET_ACCESS_KEY
    +region = us-east-1
    +endpoint = s3.petabox.io

    Pure Storage FlashBlade

    Pure @@ -33869,13 +33884,13 @@ d) Delete this remote y/e/d> y

    This results in the following configuration being stored in ~/.config/rclone/rclone.conf:

    -
    [flashblade]
    -type = s3
    -provider = FlashBlade
    -access_key_id = ACCESS_KEY_ID
    -secret_access_key = SECRET_ACCESS_KEY
    -endpoint = https://s3.flashblade.example.com
    +
    [flashblade]
    +type = s3
    +provider = FlashBlade
    +access_key_id = ACCESS_KEY_ID
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = https://s3.flashblade.example.com

    Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests, ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a FlashBlade data @@ -34152,13 +34167,13 @@ e) Edit this remote d) Delete this remote y/e/d> y

    This will leave the config file looking like this.

    -
    [s5lu]
    -type = s3
    -provider = FileLu
    -access_key_id = XXX
    -secret_access_key = XXX
    -endpoint = s5lu.com
    +
    [s5lu]
    +type = s3
    +provider = FileLu
    +access_key_id = XXX
    +secret_access_key = XXX
    +endpoint = s5lu.com

    Rabata

    Rabata is an S3-compatible secure cloud storage service that offers flat, transparent pricing (no API @@ -34295,16 +34310,16 @@ details are required for the next steps of configuration, when rclone config asks for your access_key_id and secret_access_key.

    Your config should end up looking a bit like this:

    -
    [RCS3-demo-config]
    -type = s3
    -provider = RackCorp
    -env_auth = true
    -access_key_id = YOURACCESSKEY
    -secret_access_key = YOURSECRETACCESSKEY
    -region = au-nsw
    -endpoint = s3.rackcorp.com
    -location_constraint = au-nsw
    +
    [RCS3-demo-config]
    +type = s3
    +provider = RackCorp
    +env_auth = true
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region = au-nsw
    +endpoint = s3.rackcorp.com
    +location_constraint = au-nsw

    Rclone Serve S3

    Rclone can serve any remote over the S3 protocol. For details see the rclone serve @@ -34314,14 +34329,14 @@ server like this:

    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path

    This will be compatible with an rclone remote which is defined like this:

    -
    [serves3]
    -type = s3
    -provider = Rclone
    -endpoint = http://127.0.0.1:8080/
    -access_key_id = ACCESS_KEY_ID
    -secret_access_key = SECRET_ACCESS_KEY
    -use_multipart_uploads = false
    +
    [serves3]
    +type = s3
    +provider = Rclone
    +endpoint = http://127.0.0.1:8080/
    +access_key_id = ACCESS_KEY_ID
    +secret_access_key = SECRET_ACCESS_KEY
    +use_multipart_uploads = false

    Note that setting use_multipart_uploads = false is to work around a bug which @@ -34334,20 +34349,20 @@ Scaleway console or transferred through our API and CLI or using any S3-compatible tool.

    Scaleway provides an S3 interface which can be configured for use with rclone like this:

    -
    [scaleway]
    -type = s3
    -provider = Scaleway
    -env_auth = false
    -endpoint = s3.nl-ams.scw.cloud
    -access_key_id = SCWXXXXXXXXXXXXXX
    -secret_access_key = 1111111-2222-3333-44444-55555555555555
    -region = nl-ams
    -location_constraint = nl-ams
    -acl = private
    -upload_cutoff = 5M
    -chunk_size = 5M
    -copy_cutoff = 5M
    +
    [scaleway]
    +type = s3
    +provider = Scaleway
    +env_auth = false
    +endpoint = s3.nl-ams.scw.cloud
    +access_key_id = SCWXXXXXXXXXXXXXX
    +secret_access_key = 1111111-2222-3333-44444-55555555555555
    +region = nl-ams
    +location_constraint = nl-ams
    +acl = private
    +upload_cutoff = 5M
    +chunk_size = 5M
    +copy_cutoff = 5M

    Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" @@ -34448,13 +34463,13 @@ Press Enter to leave empty. [snip] acl>

    And the config file should end up looking like this:

    -
    [remote]
    -type = s3
    -provider = LyveCloud
    -access_key_id = XXX
    -secret_access_key = YYY
    -endpoint = s3.us-east-1.lyvecloud.seagate.com
    +
    [remote]
    +type = s3
    +provider = LyveCloud
    +access_key_id = XXX
    +secret_access_key = YYY
    +endpoint = s3.us-east-1.lyvecloud.seagate.com

    SeaweedFS

    SeaweedFS is a distributed storage system for blobs, objects, files, and data lake, @@ -34490,13 +34505,13 @@ such:

    }

To use rclone with SeaweedFS, the above configuration should end up looking something like this in your config:

    -
    [seaweedfs_s3]
    -type = s3
    -provider = SeaweedFS
    -access_key_id = any
    -secret_access_key = any
    -endpoint = localhost:8333
    +
    [seaweedfs_s3]
    +type = s3
    +provider = SeaweedFS
    +access_key_id = any
    +secret_access_key = any
    +endpoint = localhost:8333

So once set up, for example, to copy files into a bucket

    rclone copy /path/to/files seaweedfs_s3:foo

    Selectel

    @@ -34599,14 +34614,14 @@ e) Edit this remote d) Delete this remote y/e/d> y

    And your config should end up looking like this:

    -
    [selectel]
    -type = s3
    -provider = Selectel
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_ACCESS_KEY
    -region = ru-1
    -endpoint = s3.ru-1.storage.selcloud.ru
    +
    [selectel]
    +type = s3
    +provider = Selectel
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +region = ru-1
    +endpoint = s3.ru-1.storage.selcloud.ru

    Servercore

    Servercore Object Storage is an S3 compatible object storage system that @@ -34799,13 +34814,13 @@ e) Edit this remote d) Delete this remote y/e/d> y

    And your config should end up looking like this:

    -
    [spectratest]
    -type = s3
    -provider = SpectraLogic
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_ACCESS_KEY
    -endpoint = https://bp.example.com
    +
    [spectratest]
    +type = s3
    +provider = SpectraLogic
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = https://bp.example.com

    Storj

    Storj is a decentralized cloud storage which can be used through its native protocol or an S3 compatible gateway.

    @@ -35233,19 +35248,19 @@ e) Edit this remote d) Delete this remote y/e/d> y

    This will leave the config file looking like this.

    -
    [wasabi]
    -type = s3
    -provider = Wasabi
    -env_auth = false
    -access_key_id = YOURACCESSKEY
    -secret_access_key = YOURSECRETACCESSKEY
    -region =
    -endpoint = s3.wasabisys.com
    -location_constraint =
    -acl =
    -server_side_encryption =
    -storage_class =
    +
    [wasabi]
    +type = s3
    +provider = Wasabi
    +env_auth = false
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region =
    +endpoint = s3.wasabisys.com
    +location_constraint =
    +acl =
    +server_side_encryption =
    +storage_class =

    Zata Object Storage

    Zata Object Storage provides a secure, S3-compatible cloud storage solution designed for scalability and @@ -35381,14 +35396,14 @@ e) Edit this remote d) Delete this remote y/e/d>

    This will leave the config file looking like this.

    -
    [my zata storage]
    -type = s3
    -provider = Zata
    -access_key_id = xxx
    -secret_access_key = xxx
    -region = us-east-1
    -endpoint = idr01.zata.ai
    +
    [my zata storage]
    +type = s3
    +provider = Zata
    +access_key_id = xxx
    +secret_access_key = xxx
    +region = us-east-1
    +endpoint = idr01.zata.ai

    Memory usage

    The most common cause of rclone using lots of memory is a single directory with millions of files in. Despite s3 not really having the @@ -36361,15 +36376,15 @@ bucket.
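
One general mitigation, offered here only as a hedged sketch: since rclone is a Go program, lowering the garbage collector target with the standard GOGC environment variable makes collection more aggressive, trading some CPU for a smaller heap (the path and bucket below are placeholders).

GOGC=20 rclone sync /path/to/files s3:bucket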

    To show the current lifecycle rules:

    rclone backend lifecycle b2:bucket

    This will dump something like this showing the lifecycle rules.

    -
    [
    -    {
    -        "daysFromHidingToDeleting": 1,
    -        "daysFromUploadingToHiding": null,
    -        "daysFromStartingToCancelingUnfinishedLargeFiles": null,
    -        "fileNamePrefix": ""
    -    }
    -]
    +
    [
    +    {
    +        "daysFromHidingToDeleting": 1,
    +        "daysFromUploadingToHiding": null,
    +        "daysFromStartingToCancelingUnfinishedLargeFiles": null,
    +        "fileNamePrefix": ""
    +    }
    +]

    If there are no lifecycle rules (the default) then it will just return [].

    To reset the current lifecycle rules:
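
As a hedged sketch only (the precise reset syntax is not shown in this excerpt), the lifecycle backend command accepts the rule fields from the JSON above as -o options, for example with a purely illustrative value:

rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1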

    @@ -39608,18 +39623,18 @@ the shared drives you have access to.

    drive: you would run

    rclone backend -o config drives drive:

    This would produce something like this:

    -
    [My Drive]
    -type = alias
    -remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
    -
    -[Test Drive]
    -type = alias
    -remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
    -
    -[AllDrives]
    -type = combine
    -upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
    +
    [My Drive]
    +type = alias
    +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
    +
    +[Test Drive]
    +type = alias
    +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
    +
    +[AllDrives]
    +type = combine
    +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

    If you then add that config to your config file (find it with rclone config file) then you can access all the shared drives in one place with the AllDrives: remote.
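
For example, to list the top level of every shared drive through the combined remote defined above:

rclone lsd AllDrives: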

    @@ -41049,6 +41064,25 @@ Storage).

    Advanced options

    Here are the Advanced options specific to filelu (FileLu Cloud Storage).

    +

    --filelu-upload-cutoff

    +

Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size.

    +

    Properties:

    + +

    --filelu-chunk-size

    +

    Chunk size to use for uploading. Used for multipart uploads.

    +

    Properties:

    +
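
For illustration only (the values here are hypothetical, not the defaults), both options can be given on the command line when uploading a large file:

rclone copy /path/to/large.iso filelu:backup --filelu-upload-cutoff 200M --filelu-chunk-size 64M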

    --filelu-encoding

    The encoding for the backend.

    See the encoding @@ -44790,34 +44824,34 @@ account.

    Usage example:

    rclone backend [-o config] drives drive:

    This will return a JSON list of objects like this:

    -
    [
    -    {
    -        "id": "0ABCDEF-01234567890",
    -        "kind": "drive#teamDrive",
    -        "name": "My Drive"
    -    },
    -    {
    -        "id": "0ABCDEFabcdefghijkl",
    -        "kind": "drive#teamDrive",
    -        "name": "Test Drive"
    -    }
    -]
    +
    [
    +    {
    +        "id": "0ABCDEF-01234567890",
    +        "kind": "drive#teamDrive",
    +        "name": "My Drive"
    +    },
    +    {
    +        "id": "0ABCDEFabcdefghijkl",
    +        "kind": "drive#teamDrive",
    +        "name": "Test Drive"
    +    }
    +]

    With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive.

    -
    [My Drive]
    -type = alias
    -remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
    -
    -[Test Drive]
    -type = alias
    -remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
    -
    -[AllDrives]
    -type = combine
    -upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
    +
    [My Drive]
    +type = alias
    +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
    +
    +[Test Drive]
    +type = alias
    +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
    +
    +[AllDrives]
    +type = combine
    +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

    Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be substituted with "_" and duplicate names will have numbers suffixed. It @@ -44836,11 +44870,11 @@ use via the API.

    Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
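
For example, assuming the drive backend's untrash command that this section describes (drive:directory is a placeholder), preview first and then restore:

rclone backend --dry-run untrash drive:directory
rclone backend untrash drive:directory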

    Result:

    -
    {
    -    "Untrashed": 17,
    -    "Errors": 0
    -}
    +
    {
    +    "Untrashed": 17,
    +    "Errors": 0
    +}

    copyid

    Copy files by ID.

    rclone backend copyid remote: [options] [<arguments>+]
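
A hedged usage sketch, with a placeholder ID and destination: each pair of arguments is a file ID followed by the rclone path it should be copied to.

rclone backend copyid drive: 0ABCDEFGHIJKLMNOPQRSTUV backup/restored-file.txt
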
@@ -44896,32 +44930,32 @@ escaped with \ characters. "'" becomes "\'" and "\" becomes "\\", for example to match a file named "foo ' \.txt":

    rclone backend query drive: "name = 'foo \' \\\.txt'"

    The result is a JSON array of matches, for example:

    -
    [
    -    {
    -        "createdTime": "2017-06-29T19:58:28.537Z",
    -        "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
    -        "md5Checksum": "68518d16be0c6fbfab918be61d658032",
    -        "mimeType": "text/plain",
    -        "modifiedTime": "2024-02-02T10:40:02.874Z",
    -        "name": "foo ' \\.txt",
    -        "parents": [
    -            "0BxAe_BCDE4zkFGZpcWJGek0xbzC"
    -        ],
    -        "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
    -        "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
    -        "size": "311",
    -        "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
    -    }
    -]
    -```console
    -
    -### rescue
    -
    -Rescue or delete any orphaned files.
    -
    -```console
    -rclone backend rescue remote: [options] [<arguments>+]
    +
    [
    +    {
    +        "createdTime": "2017-06-29T19:58:28.537Z",
    +        "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
    +        "md5Checksum": "68518d16be0c6fbfab918be61d658032",
    +        "mimeType": "text/plain",
    +        "modifiedTime": "2024-02-02T10:40:02.874Z",
    +        "name": "foo ' \\.txt",
    +        "parents": [
    +            "0BxAe_BCDE4zkFGZpcWJGek0xbzC"
    +        ],
    +        "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
    +        "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
    +        "size": "311",
    +        "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
    +    }
    +]
+
+rescue
+
+Rescue or delete any orphaned files.
+
+rclone backend rescue remote: [options] [<arguments>+]

    This command rescues or deletes any orphaned files or directories.
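
As a sketch (the directory name is a placeholder), orphaned files can either be gathered into a directory on the drive or deleted outright:

rclone backend rescue drive: rescued
rclone backend rescue drive: -o delete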

Sometimes files can get orphaned in Google Drive. This means that @@ -45703,18 +45737,18 @@ y/e/d> y config file, usually YOURHOME/.config/rclone/rclone.conf. Open it in your favorite text editor, find the section for the base remote and create a new section for hasher, as in the following examples:

    -
    [Hasher1]
    -type = hasher
    -remote = myRemote:path
    -hashes = md5
    -max_age = off
    -
    -[Hasher2]
    -type = hasher
    -remote = /local/path
    -hashes = dropbox,sha1
    -max_age = 24h
    +
    [Hasher1]
    +type = hasher
    +remote = myRemote:path
    +hashes = md5
    +max_age = off
    +
    +[Hasher2]
    +type = hasher
    +remote = /local/path
    +hashes = dropbox,sha1
    +max_age = 24h

    Hasher takes basically the following parameters: