diff --git a/MANUAL.html b/MANUAL.html
index 1caf83cca..13e574773 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -233,7 +233,7 @@
-Jan 30, 2026
+Feb 17, 2026
rclone(1) User Manual
rclone - manage files on cloud storage
@@ -4553,9 +4553,9 @@ SquareBracket
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
// Output: stories/The Quick Brown Fox!.txt
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20260130
+// Output: stories/The Quick Brown Fox!-20260217
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM
+// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
Now you can run classic mounts like this:
mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
or create systemd mount units:
# /etc/systemd/system/mnt-data.mount
[Unit]
Description=Mount for /mnt/data
[Mount]
Type=rclone
What=sftp1:subdir
Where=/mnt/data
Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
optionally accompanied by systemd automount unit
# /etc/systemd/system/mnt-data.automount
[Unit]
Description=AutoMount for /mnt/data
[Automount]
Where=/mnt/data
TimeoutIdleSec=600
[Install]
WantedBy=multi-user.target
or add in /etc/fstab a line like
sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
or use classic Automountd. Remember to provide explicit
@@ -7349,25 +7357,25 @@ detect it and translate command-line arguments appropriately.
Now you can run classic mounts like this:
mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
or create systemd mount units:
# /etc/systemd/system/mnt-data.mount
[Unit]
Description=Mount for /mnt/data
[Mount]
Type=rclone
What=sftp1:subdir
Where=/mnt/data
Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
optionally accompanied by systemd automount unit
# /etc/systemd/system/mnt-data.automount
[Unit]
Description=AutoMount for /mnt/data
[Automount]
Where=/mnt/data
TimeoutIdleSec=600
[Install]
WantedBy=multi-user.target
or add in /etc/fstab a line like
sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
or use classic Automountd. Remember to provide explicit
@@ -7939,11 +7947,11 @@
http://host:port.
--user, --pass.
The --unix-socket flag can be used to connect over a
unix socket like this
# start server on /tmp/my.socket
rclone rcd --rc-addr unix:///tmp/my.socket
# Connect to it
rclone rc --unix-socket /tmp/my.socket core/stats
Arguments should be passed in as parameter=value.
The result will be returned as a JSON object by default.
The --json parameter can be used to pass in a JSON blob
@@ -7956,21 +7964,21 @@ This is useful for rc commands which take the "opt" parameter which by
convention is a dictionary of strings.
-o key=value -o key2
Will place this in the "opt" value
-{"key":"value", "key2",""){"key":"value", "key2","")The -a/--arg option can be used to set
strings in the "arg" value. It can be repeated as many times as
required. This is useful for rc commands which take the "arg" parameter
which by convention is a list of strings.
-a value -a value2
Will place this in the "arg" value
-["value", "value2"]["value", "value2"]Use --loopback to connect to the rclone instance running
rclone rc. This is very useful for testing commands without
having to run an rclone rc server, e.g.:
rclone rc --loopback operations/about fs=/
Use rclone rc to see a list of all possible
commands.
rclone rc commands parameter [flags]
@@ -9937,28 +9945,28 @@ obscure
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "pass": "mypassword"
}
If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
And as an example return this on STDOUT
{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}
This would mean that an SFTP backend would be created on the fly for
the user and pass/public_key
returned in the output to the host given. Note that since
@@ -10711,28 +10719,28 @@ obscure
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "pass": "mypassword"
}
If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
And as an example return this on STDOUT
{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}
This would mean that an SFTP backend would be created on the fly for
the user and pass/public_key
returned in the output to the host given. Note that since
@@ -10893,12 +10901,12 @@ default is 1000000, but consider lowering this limit if the
server's system resource usage causes problems. This is only used by the
memory type cache.
To serve NFS over the network use the following command:
rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
This specifies a port that can be used in the mount command. To mount the server under Linux/macOS, use the following command:
mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
Where $PORT is the same port number used in the
serve nfs command and $HOSTNAME is the network
address of the machine that serve nfs was run on.
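For instance, with the server started on port 2049 on a host reachable as fileserver (both values purely illustrative), the pair of commands might look like:
rclone serve nfs remote: --addr 0.0.0.0:2049 --vfs-cache-mode=full
mount -t nfs -o port=2049,mountport=2049,tcp fileserver:/ /mnt/rclone-nfs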
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder
The rclone.conf for the server could look like this:
[local]
type = local
The local configuration is optional though. If you run
the server with a remote:path like
/path/to/folder (without the local: prefix and
@@ -11619,14 +11627,14 @@ default configuration, which will be visible as a warning in the logs.
But it will run nonetheless.
This will be compatible with an rclone (client) remote configuration which is defined like this:
[serves3]
type = s3
provider = Rclone
endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
Note that setting use_multipart_uploads = false is to
work around a bug which will be fixed in due
course.
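With a client remote like the serves3 example above in place, data served by rclone serve s3 can then be accessed with the usual commands; for instance (the bucket name here is only illustrative):
rclone lsf serves3:mybucket
rclone copy /path/to/files serves3:mybucket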
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "pass": "mypassword"
}
If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
And as an example return this on STDOUT
{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}
This would mean that an SFTP backend would be created on the fly for
the user and pass/public_key
returned in the output to the host given. Note that since
@@ -13551,28 +13559,28 @@ obscure
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "pass": "mypassword"
}
If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
{
    "user": "me",
    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
}
And as an example return this on STDOUT
{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}
This would mean that an SFTP backend would be created on the fly for
the user and pass/public_key
returned in the output to the host given. Note that since
@@ -14276,11 +14284,11 @@ infrastructure without a proper certificate. You could supply the
--no-check-certificate flag to rclone, but this will affect
all the remotes. To make it just affect this remote you
use an override. You could put this in the config file:
[remote]
type = XXX
...
override.no_check_certificate = true
or use it in the connection string
remote,override.no_check_certificate=true: (or just
remote,override.no_check_certificate:).
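For instance, a one-off listing of that remote without certificate checking might look like this (the path is a placeholder):
rclone lsd "remote,override.no_check_certificate=true:path/to/dir"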
override. For example, say you have a remote where
you would always like to use the --checksum flag. You could
supply the --checksum flag to rclone on every command line,
but instead you could put this in the config file:
[remote]
type = XXX
...
global.checksum = true
or use it in the connection string
remote,global.checksum=true: (or just
remote,global.checksum:). This is equivalent to using the
@@ -14364,13 +14372,13 @@ shell.
If your names have spaces in you need to put them in ",
e.g.
rclone copy "E:\folder name\folder name\folder name" remote:backuprclone copy "E:\folder name\folder name\folder name" remote:backupIf you are using the root directory on its own then don't quote it (see #464 for why), e.g.
-rclone copy E:\ remote:backuprclone copy E:\ remote:backup: in the namesrclone uses : to mark a remote name. This is, however, a
@@ -14983,11 +14991,11 @@ value is the internal lowercase name as returned by command
rclone help backends. Comments are indicated by
; or # at the beginning of a line.
Example:
[megaremote]
type = mega
user = you@example.com
pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH
Note that passwords are in obscured form. Also, many storage systems use token-based authentication instead of
@@ -15507,49 +15515,49 @@
complete log file is not strictly valid JSON and needs a parser that can handle
The JSON logs will be printed on a single line, but are shown expanded here for clarity.
{
    "time": "2025-05-13T17:30:51.036237518+01:00",
    "level": "debug",
    "msg": "4 go routines active\n",
    "source": "cmd/cmd.go:298"
}
Completed data transfer logs will have extra size
information. Logs which are about a particular object will have
object and objectType fields also.
{
    "time": "2025-05-13T17:38:05.540846352+01:00",
    "level": "info",
    "msg": "Copied (new) to: file2.txt",
    "size": 6,
    "object": "file.txt",
    "objectType": "*local.Object",
    "source": "operations/copy.go:368"
}
Stats logs will contain a stats field which is the same
as returned from the rc call core/stats.
{
    "time": "2025-05-13T17:38:05.540912847+01:00",
    "level": "info",
    "msg": "...text version of the stats...",
    "stats": {
        "bytes": 6,
        "checks": 0,
        "deletedDirs": 0,
        "deletes": 0,
        "elapsedTime": 0.000904825,
        ...truncated for clarity...
        "totalBytes": 6,
        "totalChecks": 0,
        "totalTransfers": 1,
        "transferTime": 0.000882794,
        "transfers": 1
    },
    "source": "accounting/stats.go:569"
}
This controls the number of low level retries rclone does.
A low level retry is used to retry a failing operation - typically @@ -15704,63 +15712,63 @@ known.
Metadata is the backend specific metadata as described
in the backend docs.
{
    "SrcFs": "gdrive:",
    "SrcFsType": "drive",
    "DstFs": "newdrive:user",
    "DstFsType": "onedrive",
    "Remote": "test.txt",
    "Size": 6,
    "MimeType": "text/plain; charset=utf-8",
    "ModTime": "2022-10-11T17:53:10.286745272+01:00",
    "IsDir": false,
    "ID": "xyz",
    "Metadata": {
        "btime": "2022-10-11T16:53:11Z",
        "content-type": "text/plain; charset=utf-8",
        "mtime": "2022-10-11T17:53:10.286745272+01:00",
        "owner": "user1@domain1.com",
        "permissions": "...",
        "description": "my nice file",
        "starred": "false"
    }
}
The program should then modify the input as desired and send it to
STDOUT. The returned Metadata field will be used in its
entirety for the destination object. Any other fields will be ignored.
Note in this example we translate user names and permissions and add
something to the description:
{
    "Metadata": {
        "btime": "2022-10-11T16:53:11Z",
        "content-type": "text/plain; charset=utf-8",
        "mtime": "2022-10-11T17:53:10.286745272+01:00",
        "owner": "user1@domain2.com",
        "permissions": "...",
        "description": "my nice file [migrated from domain1]",
        "starred": "false"
    }
}
Metadata can be removed here too.
An example python program might look something like this to implement the above transformations.
import sys, json

i = json.load(sys.stdin)
metadata = i["Metadata"]
# Add tag to description
if "description" in metadata:
    metadata["description"] += " [migrated from domain1]"
else:
    metadata["description"] = "[migrated from domain1]"
# Modify owner
if "owner" in metadata:
    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
o = { "Metadata": metadata }
json.dump(o, sys.stdout, indent="\t")
You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.
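To wire such a program in, it would typically be passed to rclone via the --metadata-mapper flag together with --metadata, along these lines (the script path is a placeholder):
rclone copy --metadata --metadata-mapper /path/to/mapper.py gdrive: newdrive:user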
@@ -16571,11 +16579,11 @@ password, in which case it will be used for decrypting the configuration.You can set this for a session from a script. For unix like systems
save this to a file called set-rclone-password:
#!/bin/echo Source this file don't run it

read -s RCLONE_CONFIG_PASS
export RCLONE_CONFIG_PASS
Then source the file when you want to use it. From the shell you
would do source set-rclone-password. It will then ask you
for the password and set it in the environment variable.
pass init rclone.
Generate and store a password
New-Object -TypeName PSCredential -ArgumentList "rclone", (ConvertTo-SecureString -String ([System.Web.Security.Membership]::GeneratePassword(40, 10)) -AsPlainText -Force) | Export-Clixml -Path "rclone-credential.xml"
Add the password retrieval instruction
[Environment]::SetEnvironmentVariable("RCLONE_PASSWORD_COMMAND", "[System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR((Import-Clixml -Path "rclone-credential.xml").Password))")
For example, if you wished to run a sync with the
--checksum parameter, you would pass this parameter in your
JSON blob.
"_config":{"CheckSum": true}"_config":{"CheckSum": true}If using rclone rc this could be passed as
rclone rc sync/sync ... _config='{"CheckSum": true}'
Any config parameters you don't set will inherit the global defaults @@ -18335,9 +18343,9 @@ which were set with command line flags or environment variables.
see data types for more info. Here is an example setting the equivalent of--buffer-size in string
or integer format.
-"_config":{"BufferSize": "42M"}
-"_config":{"BufferSize": 44040192}"_config":{"BufferSize": "42M"}
+"_config":{"BufferSize": 44040192}If you wish to check the _config assignment has worked
properly then calling options/local will show what the
value got set to.
For example, if you wished to run a sync with these flags
--max-size 1M --max-age 42s --include "a" --include "b"
you would pass this parameter in your JSON blob.
-"_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}"_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}If using rclone rc this could be passed as
rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'
Any filter parameters you don't set will inherit the global defaults @@ -18364,9 +18372,9 @@ which were set with command line flags or environment variables.
see data types for more info. Here is an example setting the equivalent of--buffer-size in string
or integer format.
-"_filter":{"MinSize": "42M"}
-"_filter":{"MinSize": 44040192}"_filter":{"MinSize": "42M"}
+"_filter":{"MinSize": 44040192}If you wish to check the _filter assignment has worked
properly then calling options/local will show what the
value got set to.
An example of this might be the --log-level flag. Note
that the Name of the option becomes the command line flag
with _ replaced with -.
{
    "Advanced": false,
    "Default": 5,
    "DefaultStr": "NOTICE",
    "Examples": [
        {
            "Help": "",
            "Value": "EMERGENCY"
        },
        {
            "Help": "",
            "Value": "ALERT"
        },
        ...
    ],
    "Exclusive": true,
    "FieldName": "LogLevel",
    "Groups": "Logging",
    "Help": "Log level DEBUG|INFO|NOTICE|ERROR",
    "Hide": 0,
    "IsPassword": false,
    "Name": "log_level",
    "NoPrefix": true,
    "Required": true,
    "Sensitive": false,
    "Type": "LogLevel",
    "Value": null,
    "ValueStr": "NOTICE"
},
Note that the Help may be multiple lines separated by
\n. The first line will always be a short sentence and this
is the sentence shown when running rclone help flags.
local backend is desired then type
should be set to local. If _root isn't
specified then it defaults to the root of the remote.
For example this JSON is equivalent to remote:/tmp
{
    "_name": "remote",
    "_root": "/tmp"
}
And this is equivalent to :sftp,host='example.com':/tmp
{
    "type": "sftp",
    "host": "example.com",
    "_root": "/tmp"
}
And this is equivalent to /tmp/dir
{
    "type": "local",
    "_root": "/tmp/dir"
}
_path parameter
{
    "_path": "rc/path",
    "param1": "parameter for the path as documented",
    "param2": "parameter for the path as documented, etc",
}
The inputs may use _async, _group,
_config and _filter as normal when using the
rc.
For example:
rclone rc job/batch --json '{
    "inputs": [
        {
            "_path": "rc/noop",
            "parameter": "OK"
        },
        {
            "_path": "rc/error",
            "parameter": "BAD"
        }
    ]
}
'
Gives the result:
{
    "results": [
        {
            "parameter": "OK"
        },
        {
            "error": "arbitrary error on input map[parameter:BAD]",
            "input": {
                "parameter": "BAD"
            },
            "path": "rc/error",
            "status": 500
        }
    ]
}
Authentication is required for this call.
Parameters: None.
@@ -19996,25 +20004,25 @@ Useful for testing error handling.
Eg
rclone rc serve/list
Returns
{
    "list": [
        {
            "addr": "[::]:4321",
            "id": "nfs-ffc2a4e5",
            "params": {
                "fs": "remote:",
                "opt": {
                    "ListenAddr": ":4321"
                },
                "type": "nfs",
                "vfsOpt": {
                    "CacheMode": "full"
                }
            }
        }
    ]
}
Authentication is required for this call.
Create a new server with the specified parameters.
@@ -20039,11 +20047,11 @@ above.
rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
rclone rc serve/start --json '{"type":"nfs","fs":"remote:","addr":":1234","vfs_cache_mode":"full"}'
This will give the reply
{
    "addr": "[::]:4321", // Address the server was started on
    "id": "nfs-ecfc6852"  // Unique identifier for the server instance
}
Or an error if it failed to start.
Stop the server with serve/stop and list the running
servers with serve/list.
Eg
rclone rc serve/types
Returns
{
    "types": [
        "http",
        "sftp",
        "nfs"
    ]
}
Authentication is required for this call.
If an error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will contain a JSON encoded error object, e.g.
{
    "error": "Expecting string value for key \"remote\" (was float64)",
    "input": {
        "fs": "/tmp",
        "remote": 3
    },
    "status": 400,
    "path": "operations/rmdir"
}
The keys in the error response are:
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
Response
{
    "potato": "1",
    "sausage": "2"
}
Here is what an error response looks like:
curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
{
    "error": "arbitrary error on input map[potato:1 sausage:2]",
    "input": {
        "potato": "1",
        "sausage": "2"
    }
}
Note that curl doesn't return errors to the shell unless you use the
-f option
$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
@@ -20377,38 +20385,38 @@ $ echo $?
Using POST with a form
curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop
Response
-{
- "potato": "1",
- "sausage": "2"
-}
+{
+ "potato": "1",
+ "sausage": "2"
+}
Note that you can combine these with URL parameters too with the POST
parameters taking precedence.
curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"
Response
-{
- "potato": "1",
- "rutabaga": "3",
- "sausage": "4"
-}
+{
+ "potato": "1",
+ "rutabaga": "3",
+ "sausage": "4"
+}
Using POST with a JSON blob
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop
response
-{
- "password": "xyz",
- "username": "xyz"
-}
+{
+ "password": "xyz",
+ "username": "xyz"
+}
This can be combined with URL parameters too if required. The JSON
blob takes precedence.
curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
-{
- "potato": 2,
- "rutabaga": "3",
- "sausage": 1
-}
+{
+ "potato": 2,
+ "rutabaga": "3",
+ "sausage": 1
+}
Debugging rclone with pprof
If you use the --rc flag this will also enable the use
of the go profiling tools on the same port.
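For example, assuming rclone was started with --rc on the default localhost:5572, the standard Go profiling tool can be pointed at the pprof endpoints exposed there:
go tool pprof -web http://localhost:5572/debug/pprof/heap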
@@ -21496,7 +21504,7 @@ split into groups.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1")
Flags helpful for increasing performance.
--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -21920,9 +21928,11 @@ split into groups.
--filefabric-token-expiry string Token expiry time
--filefabric-url string URL of the Enterprise File Fabric to connect to
--filefabric-version string Version read from the file fabric
+ --filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi)
--filelu-description string Description of the remote
--filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation)
--filelu-key string Your FileLu Rclone key from My Account
+ --filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi)
--filen-api-key string API Key for your Filen account (obscured)
--filen-auth-version string Authentication Version (internal use only)
--filen-base-folder-uuid string UUID of Account Root Directory (internal use only)
@@ -22623,30 +22633,30 @@ to the Swarm cluster and save as
every node. By default this location is accessible only to the
root user so you will need appropriate privileges. The resulting config
will look like this:
-[gdrive]
-type = drive
-scope = drive
-drive_id = 1234567...
-root_folder_id = 0Abcd...
-token = {"access_token":...}
+[gdrive]
+type = drive
+scope = drive
+drive_id = 1234567...
+root_folder_id = 0Abcd...
+token = {"access_token":...}
Now create the file named example.yml with a swarm stack
description like this:
-version: '3'
-services:
- heimdall:
- image: linuxserver/heimdall:latest
- ports: [8080:80]
- volumes: [configdata:/config]
-volumes:
- configdata:
- driver: rclone
- driver_opts:
- remote: 'gdrive:heimdall'
- allow_other: 'true'
- vfs_cache_mode: full
- poll_interval: 0
+version: '3'
+services:
+ heimdall:
+ image: linuxserver/heimdall:latest
+ ports: [8080:80]
+ volumes: [configdata:/config]
+volumes:
+ configdata:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:heimdall'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ poll_interval: 0
and run the stack:
docker stack deploy example -c ./example.yml
After a few seconds docker will spread the parsed stack description
@@ -22770,16 +22780,16 @@ volume and have at least two elements, the self-explanatory
driver: rclone value and the driver_opts:
structure playing the same role as -o key=val CLI
flags:
-volumes:
- volume_name_1:
- driver: rclone
- driver_opts:
- remote: 'gdrive:'
- allow_other: 'true'
- vfs_cache_mode: full
- token: '{"type": "borrower", "expires": "2021-12-31"}'
- poll_interval: 0
+volumes:
+ volume_name_1:
+ driver: rclone
+ driver_opts:
+ remote: 'gdrive:'
+ allow_other: 'true'
+ vfs_cache_mode: full
+ token: '{"type": "borrower", "expires": "2021-12-31"}'
+ poll_interval: 0
Notice a few important details:
- YAML prefers
_ in option names instead of
@@ -22931,16 +22941,16 @@ docker plugin inspect rclone
to inform the docker daemon that a volume is (un-)available. As a
workaround you can setup a healthcheck to verify that the mount is
responding, for example:
services:
  my_service:
    image: my_image
    healthcheck:
      test: ls /path/to/rclone/mount || exit 1
      interval: 1m
      timeout: 15s
      retries: 3
      start_period: 15s
In most cases you should prefer managed mode. Moreover, MacOS and Windows do not support native Docker plugins. Please use managed mode on
@@ -24203,17 +24213,22 @@ investigation:
TestBisyncRemoteRemote/normalization
TestSeafile (seafile)
+TestInternxt (internxt)
TestSeafileV6 (seafile)
-TestBisyncLocalRemote/all_changed
-TestBisyncLocalRemote/volatile
-TestBisyncLocalRemote/ext_paths
+TestBisyncLocalRemote/max_delete_path1
+TestBisyncRemoteRemote/basic
+TestBisyncRemoteRemote/concurrent
The following backends either have not been tested recently or have
@@ -24767,19 +24782,19 @@ versions I manually run the following command:
[Dropbox]
type = dropbox
...

[Dropcrypt]
type = crypt
remote = /path/to/DBoxroot/crypt # on the Linux server
remote = C:\Users\MyLogin\Dropbox\crypt # on the Windows notebook
filename_encryption = standard
directory_name_encryption = true
password = ...
...
You should read this section only if you are developing for rclone. You need to have rclone source code locally to work with bisync
@@ -26435,24 +26450,24 @@
An external ID is provided for additional security as required by the role's trust policy
The target role's trust policy in the destination account must allow the source account or user to assume it. Example trust policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:root"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {
                    "sts:ExternalID": "unique-role-external-id-12345"
                }
            }
        }
    ]
}
When using the sync subcommand of rclone
the following minimum permissions are required to be available on the
@@ -26469,34 +26484,34 @@ href="#s3-no-check-bucket">s3-no-check-bucket)
When using the lsd subcommand, the
ListAllMyBuckets permission is required.
Example policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
            },
            "Action": [
                "s3:ListBucket",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME/*",
                "arn:aws:s3:::BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
Notes on above:
It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not.
[
    {
        "Status": "OK",
        "Remote": "test.txt"
    },
    {
        "Status": "OK",
        "Remote": "test/file4.txt"
    }
]
Options:
This command does not obey the filters.
It returns a list of status dictionaries:
[
    {
        "Remote": "file.txt",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": true,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "GLACIER"
    },
    {
        "Remote": "test.pdf",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": false,
            "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
        },
        "StorageClass": "DEEP_ARCHIVE"
    },
    {
        "Remote": "test.gz",
        "VersionID": null,
        "RestoreStatus": {
            "IsRestoreInProgress": true,
            "RestoreExpiryDate": "null"
        },
        "StorageClass": "INTELLIGENT_TIERING"
    }
]
Options:
You can call it with no bucket in which case it lists all bucket, with a bucket or with a bucket and path.
{
    "rclone": [
        {
            "Initiated": "2020-06-26T14:20:36Z",
            "Initiator": {
                "DisplayName": "XXX",
                "ID": "arn:aws:iam::XXX:user/XXX"
            },
            "Key": "KEY",
            "Owner": {
                "DisplayName": null,
                "ID": "XXX"
            },
            "StorageClass": "STANDARD",
            "UploadId": "XXX"
        }
    ],
    "rclone-1000files": [],
    "rclone-dst": []
}
Remove unfinished multipart uploads.
rclone backend cleanup remote: [options] [<arguments>+]
@@ -31062,10 +31077,10 @@ will default to those currently in use.
If you want to use rclone to access a public bucket, configure with a
blank access_key_id and secret_access_key.
Your config should end up looking like this:
[anons3]
type = s3
provider = AWS
Then use it as normal with the name of the public bucket, e.g.
rclone lsd anons3:1000genomes
You will be able to list and copy data but not upload it.
@@ -31101,14 +31116,14 @@ query parameter based authentication.With rclone v1.59 or later setting upload_cutoff should
not be necessary.
eg.
[snowball]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = http://[IP of Snowball]:8080
upload_cutoff = 0
Here is an example of making an Alibaba Cloud (Aliyun)
@@ -31303,19 +31318,19 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this. Bizfly Cloud Simple
Storage is an S3-compatible service with regions in Hanoi (HN) and
@@ -31326,19 +31341,19 @@ Ho Chi Minh City (HCM).
[ArvanCloud]
type = s3
provider = ArvanCloud
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = s3.arvanstorage.com
location_constraint =
acl =
server_side_encryption =
storage_class =
BizflyCloud
hcm.ss.bfcplatform.vn
A minimal configuration looks like this.
[bizfly]
type = s3
provider = BizflyCloud
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = HN
endpoint = hn.ss.bfcplatform.vn
location_constraint =
acl =
server_side_encryption =
storage_class =
Switch region and endpoint to
HCM and hcm.ss.bfcplatform.vn for Ho Chi Minh
City.
To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config:
[ceph]
type = s3
provider = Ceph
env_auth = false
access_key_id = XXX
secret_access_key = YYY
region =
endpoint = https://ceph.endpoint.example.com
location_constraint =
acl =
server_side_encryption =
storage_class =
If you are using an older version of CEPH (e.g. 10.2.x Jewel) and a
version of rclone before v1.59 then you may need to supply the parameter
--s3-upload-cutoff 0 or put this in the config file as
@@ -31375,18 +31390,18 @@ tools you will get a JSON blob with the / escaped as
access key.
Eg the dump from Ceph looks something like this (irrelevant keys removed).
{
    "user_id": "xxx",
    "display_name": "xxxx",
    "keys": [
        {
            "user": "xxx",
            "access_key": "xxxxxx",
            "secret_key": "xxxxxx\/xxxx"
        }
    ],
}
Because this is a json dump, it is encoding the / as
\/, so if you use the secret key as
xxxxxx/xxxx it will work fine.
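If in doubt, a quick way to check the unescaping is to let a JSON parser do it, e.g. in Python:
import json
print(json.loads('"xxxxxx\\/xxxx"'))  # prints xxxxxx/xxxx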
This will leave your config looking something like:
[r2]
type = s3
provider = Cloudflare
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
region = auto
endpoint = https://ACCOUNT_ID.r2.cloudflarestorage.com
acl = private
Now run rclone lsf r2: to see your buckets and
rclone lsf r2:bucket to look within a bucket.
For R2 tokens with the "Object Read & Write" permission, you may
@@ -31756,14 +31771,14 @@
region> eu-west-1 (or leave empty)
endpoint> s3.cubbit.eu
acl>
The resulting configuration file should look like:
[cubbit-ds3]
type = s3
provider = Cubbit
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
region = eu-west-1
endpoint = s3.cubbit.eu
You can then start using Cubbit DS3 with rclone. For example, to create a new bucket and copy files into it, you can run:
rclone mkdir cubbit-ds3:my-bucket
@@ -31798,19 +31813,19 @@ location_constraint>
acl>
storage_class>
The resulting configuration file should look like:
[spaces]
type = s3
provider = DigitalOcean
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region =
endpoint = nyc3.digitaloceanspaces.com
location_constraint =
acl =
server_side_encryption =
storage_class =
Once configured, you can create a new Space and begin copying files. For example:
rclone mkdir spaces:my-new-space
@@ -31822,19 +31837,19 @@ object storage system based on CEPH.
To use rclone with Dreamhost, configure as above but leave the region
blank and set the endpoint. You should end up with something like this
in your config:
[dreamobjects]
type = s3
provider = DreamHost
env_auth = false
access_key_id = your_access_key
secret_access_key = your_secret_key
region =
endpoint = objects-us-west-1.dream.io
location_constraint =
acl = private
server_side_encryption =
storage_class =
Exaba
Exaba is an on-premises,
S3-compatible storage for service providers and large enterprises. It is
@@ -31895,13 +31910,13 @@ y) Yes
n) No (default)
y/n> n
And the config generated will end up looking like this:
[exaba]
type = s3
provider = Exaba
access_key_id = XXX
secret_access_key = XXX
endpoint = http://127.0.0.1:9000/
GoogleCloudStorage is
@@ -31912,13 +31927,13 @@
object storage service from Google Cloud Platform.
secret key. These can be retrieved by creating an HMAC key.
[gs]
type = s3
provider = GCS
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
Note that --s3-versions does not work
with GCS when it needs to do directory paging. Rclone will return the
error:
This will leave the config file looking like this.
[my-hetzner]
type = s3
provider = Hetzner
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
region = hel1
endpoint = hel1.your-objectstorage.com
acl = private
Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere.
OBS provides an S3 interface, you can copy and modify the following configuration and add it to your rclone configuration file.
[obs]
type = s3
provider = HuaweiOBS
access_key_id = your-access-key-id
secret_access_key = your-secret-access-key
region = af-south-1
endpoint = obs.af-south-1.myhuaweicloud.com
acl = private
Or you can also configure via the interactive command line:
No remotes found, make a new one?
n) New remote
@@ -32302,15 +32317,15 @@ Choose a number from below, or type in your own value
acl> 1
Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this
[xxx]
type = s3
Provider = IBMCOS
access_key_id = xxx
secret_access_key = yyy
endpoint = s3-api.us-geo.objectstorage.softlayer.net
location_constraint = us-standard
acl = private
Execute rclone commands
1) Create a bucket.
@@ -32568,14 +32583,14 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[intercolo]
type = s3
provider = Intercolo
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY
region = de-fra
endpoint = de-fra.i3storage.com
IONOS S3 Object Storage is a service offered by IONOS for storing and
@@ -32883,19 +32898,19 @@
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[Liara]
type = s3
provider = Liara
env_auth = false
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region =
endpoint = storage.iran.liara.space
location_constraint =
acl =
server_side_encryption =
storage_class =
Here is an example of making a Linode Object
@@ -33035,13 +33050,13 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this. Here is an example of making a Magalu Object Storage
@@ -33143,13 +33158,13 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this. MEGA S4 Object Storage is
an S3 compatible object storage system. It has a single pricing tier
@@ -33242,13 +33257,13 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this. Minio is an object storage server
built for cloud application developers and devops. Which makes the config file look like this. So once set up, for example, to copy files into a bucket:
[linode]
type = s3
provider = Linode
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = eu-central-1.linodeobjects.com
Magalu
[magalu]
type = s3
provider = Magalu
access_key_id = ACCESS_KEY
secret_access_key = SECRET_ACCESS_KEY
endpoint = br-ne1.magaluobjects.com
MEGA S4
[megas4]
type = s3
provider = Mega
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.eu-central-1.s4.mega.io
Minio
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = USWUXHGYZQYFYFFIT3RE
secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
rclone copy /path/to/files minio:bucket
Netease NOS
@@ -33315,16 +33330,16 @@
https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html
documentation.
Here is an example of an OOS configuration that you can paste into your rclone configuration file:
[outscale]
type = s3
provider = Outscale
env_auth = false
access_key_id = ABCDEFGHIJ0123456789
secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
region = eu-west-2
endpoint = oos.eu-west-2.outscale.com
acl = private
You can also run rclone config to go through the
interactive setup process:
No remotes found, make a new one?
@@ -33618,15 +33633,15 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
Your configuration file should now look like this:
[ovhcloud-rbx]
type = s3
provider = OVHcloud
access_key_id = my_access
secret_access_key = my_secret
region = rbx
endpoint = s3.rbx.io.cloud.ovh.net
acl = private
Here is an example of making a Petabox configuration. First run:
@@ -33767,14 +33782,14 @@
e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this.
[My Petabox Storage]
type = s3
provider = Petabox
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
endpoint = s3.petabox.io
Pure
@@ -33869,13 +33884,13 @@ d) Delete this remote
y/e/d> y
This results in the following configuration being stored in
Note: The FlashBlade endpoint should be the S3 data VIP. For
virtual-hosted style requests, ensure proper DNS configuration:
subdomains of the endpoint hostname should resolve to a FlashBlade data
@@ -34152,13 +34167,13 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
This will leave the config file looking like this. Rabata is an S3-compatible secure
cloud storage service that offers flat, transparent pricing (no API
@@ -34295,16 +34310,16 @@ details are required for the next steps of configuration, when
Your config should end up looking a bit like this: Rclone can serve any remote over the S3 protocol. For details see the
rclone serve
@@ -34314,14 +34329,14 @@ server like this: This will be compatible with an rclone remote which is defined like
this: Note that setting Scaleway provides an S3 interface which can be configured for use
with rclone like this: Scaleway
Glacier is the low-cost S3 Glacier alternative from Scaleway and it
works the same way as on S3 by accepting the "GLACIER"
@@ -34448,13 +34463,13 @@ Press Enter to leave empty.
[snip]
acl>
And the config file should end up looking like this: SeaweedFS is a
distributed storage system for blobs, objects, files, and data lake,
@@ -34490,13 +34505,13 @@ such: To use rclone with SeaweedFS, above configuration should end up with
something like this in your config: So once set up, for example to copy files into a bucket And your config should end up looking like this: Servercore
Object Storage is an S3 compatible object storage system that
@@ -34799,13 +34814,13 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
And your config should end up looking like this: Storj is a decentralized cloud storage which can be used through its
native protocol or an S3 compatible gateway. This will leave the config file looking like this. Zata Object Storage provides a secure,
S3-compatible cloud storage solution designed for scalability and
@@ -35381,14 +35396,14 @@ e) Edit this remote
d) Delete this remote
y/e/d>
This will leave the config file looking like this. The most common cause of rclone using lots of memory is a single
directory with millions of files in. Despite s3 not really having the
@@ -36361,15 +36376,15 @@ bucket.
To show the current lifecycle rules:
This will dump something like this showing the lifecycle rules. If there are no lifecycle rules (the default) then it will just
return
To reset the current lifecycle rules:
~/.config/rclone/rclone.conf:
[flashblade]
type = s3
provider = FlashBlade
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
endpoint = https://s3.flashblade.example.com
[s5lu]
type = s3
provider = FileLu
access_key_id = XXX
secret_access_key = XXX
endpoint = s5lu.com
Rabata
rclone config asks for your access_key_id and
secret_access_key.
[RCS3-demo-config]
type = s3
provider = RackCorp
env_auth = true
access_key_id = YOURACCESSKEY
secret_access_key = YOURSECRETACCESSKEY
region = au-nsw
endpoint = s3.rackcorp.com
location_constraint = au-nsw
Rclone Serve S3
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
[serves3]
type = s3
provider = Rclone
endpoint = http://127.0.0.1:8080/
access_key_id = ACCESS_KEY_ID
secret_access_key = SECRET_ACCESS_KEY
use_multipart_uploads = false
Note that setting use_multipart_uploads = false is to
work around a bug which
@@ -34334,20 +34349,20 @@ Scaleway console or transferred through our API and CLI or using any
S3-compatible tool.[scaleway]
-type = s3
-provider = Scaleway
-env_auth = false
-endpoint = s3.nl-ams.scw.cloud
-access_key_id = SCWXXXXXXXXXXXXXX
-secret_access_key = 1111111-2222-3333-44444-55555555555555
-region = nl-ams
-location_constraint = nl-ams
-acl = private
-upload_cutoff = 5M
-chunk_size = 5M
-copy_cutoff = 5M[scaleway]
+type = s3
+provider = Scaleway
+env_auth = false
+endpoint = s3.nl-ams.scw.cloud
+access_key_id = SCWXXXXXXXXXXXXXX
+secret_access_key = 1111111-2222-3333-44444-55555555555555
+region = nl-ams
+location_constraint = nl-ams
+acl = private
+upload_cutoff = 5M
+chunk_size = 5M
+copy_cutoff = 5M[remote]
-type = s3
-provider = LyveCloud
-access_key_id = XXX
-secret_access_key = YYY
-endpoint = s3.us-east-1.lyvecloud.seagate.com[remote]
+type = s3
+provider = LyveCloud
+access_key_id = XXX
+secret_access_key = YYY
+endpoint = s3.us-east-1.lyvecloud.seagate.com
SeaweedFS
[seaweedfs_s3]
-type = s3
-provider = SeaweedFS
-access_key_id = any
-secret_access_key = any
-endpoint = localhost:8333[seaweedfs_s3]
+type = s3
+provider = SeaweedFS
+access_key_id = any
+secret_access_key = any
+endpoint = localhost:8333
rclone copy /path/to/files seaweedfs_s3:foo
Selectel
@@ -34599,14 +34614,14 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
[selectel]
-type = s3
-provider = Selectel
-access_key_id = ACCESS_KEY
-secret_access_key = SECRET_ACCESS_KEY
-region = ru-1
-endpoint = s3.ru-1.storage.selcloud.ru[selectel]
+type = s3
+provider = Selectel
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+region = ru-1
+endpoint = s3.ru-1.storage.selcloud.ru
Servercore
[spectratest]
-type = s3
-provider = SpectraLogic
-access_key_id = ACCESS_KEY
-secret_access_key = SECRET_ACCESS_KEY
-endpoint = https://bp.example.com[spectratest]
+type = s3
+provider = SpectraLogic
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = https://bp.example.com
Storj
[wasabi]
-type = s3
-provider = Wasabi
-env_auth = false
-access_key_id = YOURACCESSKEY
-secret_access_key = YOURSECRETACCESSKEY
-region =
-endpoint = s3.wasabisys.com
-location_constraint =
-acl =
-server_side_encryption =
-storage_class =[wasabi]
+type = s3
+provider = Wasabi
+env_auth = false
+access_key_id = YOURACCESSKEY
+secret_access_key = YOURSECRETACCESSKEY
+region =
+endpoint = s3.wasabisys.com
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
Zata Object Storage
[my zata storage]
-type = s3
-provider = Zata
-access_key_id = xxx
-secret_access_key = xxx
-region = us-east-1
-endpoint = idr01.zata.ai[my zata storage]
+type = s3
+provider = Zata
+access_key_id = xxx
+secret_access_key = xxx
+region = us-east-1
+endpoint = idr01.zata.ai
Memory usage
rclone backend lifecycle b2:bucket
[
- {
- "daysFromHidingToDeleting": 1,
- "daysFromUploadingToHiding": null,
- "daysFromStartingToCancelingUnfinishedLargeFiles": null,
- "fileNamePrefix": ""
- }
-][
+ {
+ "daysFromHidingToDeleting": 1,
+ "daysFromUploadingToHiding": null,
+ "daysFromStartingToCancelingUnfinishedLargeFiles": null,
+ "fileNamePrefix": ""
+ }
+]
[].
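Rules can also be set with the same backend command by passing -o options; a sketch with illustrative values:
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30 -o daysFromUploadingToHiding=7
The updated rules should be echoed back in the same JSON format as above.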
drive: you would run
rclone backend -o config drives drive:
This would produce something like this:
-[My Drive]
-type = alias
-remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
-
-[Test Drive]
-type = alias
-remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
-
-[AllDrives]
-type = combine
-upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"[My Drive]
+type = alias
+remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+[Test Drive]
+type = alias
+remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+[AllDrives]
+type = combine
+upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"If you then add that config to your config file (find it with
rclone config file) then you can access all the shared
drives in one place with the AllDrives: remote.
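A quick way to verify the combined remote (assuming the [AllDrives] section above has been added to the config) is to list its top level:
rclone lsd AllDrives:
Each shared drive should appear as a directory under the combined remote.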
Here are the Advanced options specific to filelu (FileLu Cloud Storage).
+Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size.
+Properties:
+Chunk size to use for uploading. Used for multipart uploads.
+Properties:
+The encoding for the backend.
See the encoding @@ -44790,34 +44824,34 @@ account.
Usage example:
rclone backend [-o config] drives drive:
This will return a JSON list of objects like this:
-[
- {
- "id": "0ABCDEF-01234567890",
- "kind": "drive#teamDrive",
- "name": "My Drive"
- },
- {
- "id": "0ABCDEFabcdefghijkl",
- "kind": "drive#teamDrive",
- "name": "Test Drive"
- }
-][
+ {
+ "id": "0ABCDEF-01234567890",
+ "kind": "drive#teamDrive",
+ "name": "My Drive"
+ },
+ {
+ "id": "0ABCDEFabcdefghijkl",
+ "kind": "drive#teamDrive",
+ "name": "Test Drive"
+ }
+]
With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive.
-[My Drive]
-type = alias
-remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
-
-[Test Drive]
-type = alias
-remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
-
-[AllDrives]
-type = combine
-upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"[My Drive]
+type = alias
+remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
+
+[Test Drive]
+type = alias
+remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
+
+[AllDrives]
+type = combine
+upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be substituted with "_" and duplicate names will have numbers suffixed. It @@ -44836,11 +44870,11 @@ use via the API.
Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
Result:
-{
- "Untrashed": 17,
- "Errors": 0
-}{
+ "Untrashed": 17,
+ "Errors": 0
+}
Copy files by ID.
rclone backend copyid remote: [options] [<arguments>+]
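For example, a minimal sketch (the ID is an illustrative file ID and backup/ an illustrative destination directory):
rclone backend copyid drive: 1AbCdEfGhIjKlMnOp backup/
This should copy the file with that ID into the backup/ directory of the remote.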
@@ -44896,32 +44930,32 @@ escaped with \ characters. "'" becomes "\'" and "\" becomes "\\", for
example to match a file named "foo ' \.txt":
rclone backend query drive: "name = 'foo \' \\\.txt'"
The result is a JSON array of matches, for example:
-[
- {
- "createdTime": "2017-06-29T19:58:28.537Z",
- "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
- "md5Checksum": "68518d16be0c6fbfab918be61d658032",
- "mimeType": "text/plain",
- "modifiedTime": "2024-02-02T10:40:02.874Z",
- "name": "foo ' \\.txt",
- "parents": [
- "0BxAe_BCDE4zkFGZpcWJGek0xbzC"
- ],
- "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
- "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
- "size": "311",
- "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
- }
-]
-```console
-
-### rescue
-
-Rescue or delete any orphaned files.
-
-```console
-rclone backend rescue remote: [options] [<arguments>+][
+ {
+ "createdTime": "2017-06-29T19:58:28.537Z",
+ "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
+ "md5Checksum": "68518d16be0c6fbfab918be61d658032",
+ "mimeType": "text/plain",
+ "modifiedTime": "2024-02-02T10:40:02.874Z",
+ "name": "foo ' \\.txt",
+ "parents": [
+ "0BxAe_BCDE4zkFGZpcWJGek0xbzC"
+ ],
+ "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
+ "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
+ "size": "311",
+ "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
+ }
+]
+```console
+
+### rescue
+
+Rescue or delete any orphaned files.
+
+```console
+rclone backend rescue remote: [options] [<arguments>+]
This command rescues or deletes any orphaned files or directories.
Sometimes files can get orphaned in Google Drive. This means that
@@ -45703,18 +45737,18 @@ y/e/d> y
config file, usually YOURHOME/.config/rclone/rclone.conf.
Open it in your favorite text editor, find section for the base remote
and create new section for hasher like in the following examples:
[Hasher1]
-type = hasher
-remote = myRemote:path
-hashes = md5
-max_age = off
-
-[Hasher2]
-type = hasher
-remote = /local/path
-hashes = dropbox,sha1
-max_age = 24h[Hasher1]
+type = hasher
+remote = myRemote:path
+hashes = md5
+max_age = off
+
+[Hasher2]
+type = hasher
+remote = /local/path
+hashes = dropbox,sha1
+max_age = 24h
Hasher takes basically the following parameters:
remote is required.
NB it needs a few seconds to start up.
For this docker image the remote needs to be configured like this:
-[remote]
-type = hdfs
-namenode = 127.0.0.1:8020
-username = root[remote]
+type = hdfs
+namenode = 127.0.0.1:8020
+username = root
You can stop this image with docker kill rclone-hdfs
(NB it does not use volumes, so all data uploaded will
be lost.)
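With the container running and the [remote] section above in place, a quick sanity check is to list the root of the HDFS filesystem:
rclone lsd remote:
If this returns without error, the namenode connection is working.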
Example for OneDrive Personal:
-[
- {
- "id": "1234567890ABC!123",
- "grantedTo": {
- "user": {
- "id": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- },
- "invitation": {
- "email": "ryan@contoso.com"
- },
- "link": {
- "webUrl": "https://1drv.ms/t/s!1234567890ABC"
- },
- "roles": [
- "read"
- ],
- "shareId": "s!1234567890ABC"
- }
-]Example for OneDrive Business:
[
{
- "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
- "grantedToIdentities": [
- {
- "user": {
- "displayName": "ryan@contoso.com"
- },
- "application": {},
- "device": {}
- }
- ],
- "link": {
- "type": "view",
- "scope": "users",
- "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
- },
- "roles": [
- "read"
- ],
- "shareId": "u!LKj1lkdlals90j1nlkascl"
- },
- {
- "id": "5D33DD65C6932946",
- "grantedTo": {
- "user": {
- "displayName": "John Doe",
- "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
- },
- "application": {},
- "device": {}
- },
- "roles": [
- "owner"
- ],
- "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
- }
-]Example for OneDrive Business:
+[
+ {
+ "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
+ "grantedToIdentities": [
+ {
+ "user": {
+ "displayName": "ryan@contoso.com"
+ },
+ "application": {},
+ "device": {}
+ }
+ ],
+ "link": {
+ "type": "view",
+ "scope": "users",
+ "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
+ },
+ "roles": [
+ "read"
+ ],
+ "shareId": "u!LKj1lkdlals90j1nlkascl"
+ },
+ {
+ "id": "5D33DD65C6932946",
+ "grantedTo": {
+ "user": {
+ "displayName": "John Doe",
+ "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
+ },
+ "application": {},
+ "device": {}
+ },
+ "roles": [
+ "owner"
+ ],
+ "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
+ }
+]
To write permissions, pass in a "permissions" metadata key using this
same format. The --metadata-mapper
@@ -52829,12 +52863,12 @@ for a user. Creating a Public Link is also supported, if
Link.Scope is set to "anonymous".
Example request to add a "read" permission with
--metadata-mapper:
{
- "Metadata": {
- "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
- }
-}{
+ "Metadata": {
+ "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
+ }
+}
Note that adding a permission can fail if a conflicting permission already exists for the file/folder.
To update an existing permission, include both the Permission ID and
@@ -53689,15 +53723,15 @@ No authentication
Sample rclone config file for Authentication Provider User Principal:
-[oos]
-type = oracleobjectstorage
-namespace = id<redacted>34
-compartment = ocid1.compartment.oc1..aa<redacted>ba
-region = us-ashburn-1
-provider = user_principal_auth
-config_file = /home/opc/.oci/config
-config_profile = Default[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = user_principal_auth
+config_file = /home/opc/.oci/config
+config_profile = Default
Advantages:
Sample rclone configuration file for Authentication Provider Resource Principal:
-[oos]
-type = oracleobjectstorage
-namespace = id<redacted>34
-compartment = ocid1.compartment.oc1..aa<redacted>ba
-region = us-ashburn-1
-provider = resource_principal_auth[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = resource_principal_auth
Workload Identity auth may be used when running Rclone from a Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster. For
@@ -53774,13 +53808,13 @@ export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
Public buckets do not require any authentication mechanism to read objects. Sample rclone configuration file for No authentication:
-[oos]
-type = oracleobjectstorage
-namespace = id<redacted>34
-compartment = ocid1.compartment.oc1..aa<redacted>ba
-region = us-ashburn-1
-provider = no_auth[oos]
+type = oracleobjectstorage
+namespace = id<redacted>34
+compartment = ocid1.compartment.oc1..aa<redacted>ba
+region = us-ashburn-1
+provider = no_auth
The modification time is stored as metadata on the object as
@@ -54338,26 +54372,26 @@ format.
multipart uploads.
You can call it with no bucket, in which case it lists all buckets, with a bucket, or with a bucket and path.
-{
- "test-bucket": [
- {
- "namespace": "test-namespace",
- "bucket": "test-bucket",
- "object": "600m.bin",
- "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
- "timeCreated": "2022-07-29T06:21:16.595Z",
- "storageTier": "Standard"
- }
- ]
-}
-
-### cleanup
-
-Remove unfinished multipart uploads.
-
-```console
-rclone backend cleanup remote: [options] [<arguments>+]{
+ "test-bucket": [
+ {
+ "namespace": "test-namespace",
+ "bucket": "test-bucket",
+ "object": "600m.bin",
+ "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+ "timeCreated": "2022-07-29T06:21:16.595Z",
+ "storageTier": "Standard"
+ }
+ ]
+}
+
+### cleanup
+
+Remove unfinished multipart uploads.
+
+```console
+rclone backend cleanup remote: [options] [<arguments>+]
This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.
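For example, to remove uploads older than 48 hours rather than the default (a sketch; durations use the usual rclone suffixes):
rclone backend cleanup oos:bucket -o max-age=48h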
Note that you can use --interactive/-i or --dry-run with this command @@ -54386,17 +54420,17 @@ rclone backend restore oos:bucket -o hours=HOURS
It returns a list of status dictionaries with Object Name and Status keys. The Status will be "RESTORED" if it was successful or an error message if not.
-[
- {
- "Object": "test.txt"
- "Status": "RESTORED",
- },
- {
- "Object": "test/file4.txt"
- "Status": "RESTORED",
- }
-][
+ {
+ "Object": "test.txt"
+ "Status": "RESTORED",
+ },
+ {
+ "Object": "test/file4.txt"
+ "Status": "RESTORED",
+ }
+]
Options:
An OpenStack credentials file typically looks something like this (without the comments)
-export OS_AUTH_URL=https://a.provider.net/v2.0
-export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
-export OS_TENANT_NAME="1234567890123456"
-export OS_USERNAME="123abc567xy"
-echo "Please enter your OpenStack Password: "
-read -sr OS_PASSWORD_INPUT
-export OS_PASSWORD=$OS_PASSWORD_INPUT
-export OS_REGION_NAME="SBG1"
-if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fiexport OS_AUTH_URL=https://a.provider.net/v2.0
+export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
+export OS_TENANT_NAME="1234567890123456"
+export OS_USERNAME="123abc567xy"
+echo "Please enter your OpenStack Password: "
+read -sr OS_PASSWORD_INPUT
+export OS_PASSWORD=$OS_PASSWORD_INPUT
+export OS_REGION_NAME="SBG1"
+if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
The config file needs to look something like this where
$OS_USERNAME represents the value of the
OS_USERNAME variable - 123abc567xy in the
example above.
[remote]
-type = swift
-user = $OS_USERNAME
-key = $OS_PASSWORD
-auth = $OS_AUTH_URL
-tenant = $OS_TENANT_NAME[remote]
+type = swift
+user = $OS_USERNAME
+key = $OS_PASSWORD
+auth = $OS_AUTH_URL
+tenant = $OS_TENANT_NAME
Note that you may (or may not) need to set region too -
try without first.
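If you do need it, region goes in the same section; a sketch reusing the value from the credentials file above:
region = SBG1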
You can use rclone with swift without a config file, if desired, like this:
-source openstack-credentials-file
-export RCLONE_CONFIG_MYREMOTE_TYPE=swift
-export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
-rclone lsd myremote:source openstack-credentials-file
+export RCLONE_CONFIG_MYREMOTE_TYPE=swift
+export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
+rclone lsd myremote:
This remote supports --fast-list which allows you to use
fewer transactions in exchange for more memory. See the
If you want to debug or verify notifications, you can use the helper command:
rclone test changenotify remote:
This will log incoming change notifications for the given remote.
@@ -56413,12 +56447,12 @@ in 'pikpak:dirpath'. You may want to pass '-o password=password' for password-protected files. Also, pass '-o delete-src-file' to delete source files after decompression has finished.
Result:
-{
- "Decompressed": 17,
- "SourceDeleted": 0,
- "Errors": 0
-}{
+ "Decompressed": 17,
+ "SourceDeleted": 0,
+ "Errors": 0
+}
/home/$USER/.ssh/id_rsa.pub.
Setting this path in pubkey_file will not work.
Example:
-[remote]
-type = sftp
-host = example.com
-user = sftpuser
-key_file = ~/id_rsa
-pubkey_file = ~/id_rsa-cert.pub[remote]
+type = sftp
+host = example.com
+user = sftpuser
+key_file = ~/id_rsa
+pubkey_file = ~/id_rsa-cert.pub
If you concatenate a cert with a private key then you can specify the merged file in both places.
Note: the cert must come first in the file. e.g.
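A minimal sketch of building such a merged file (file names are illustrative):
cat id_rsa-cert.pub id_rsa > id_rsa-merged
Both key_file and pubkey_file can then point at id_rsa-merged.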
@@ -58226,13 +58260,13 @@ be turned on by enabling the known_hosts_file option. This
can point to the file maintained by OpenSSH or can point to
a unique file.
e.g. using the OpenSSH known_hosts file:
[remote]
-type = sftp
-host = example.com
-user = sftpuser
-pass =
-known_hosts_file = ~/.ssh/known_hosts[remote]
+type = sftp
+host = example.com
+user = sftpuser
+pass =
+known_hosts_file = ~/.ssh/known_hosts
Alternatively you can create your own known hosts file like this:
ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts
There are some limitations:
@@ -59121,54 +59155,54 @@ account and created drive respectively. Now run
rclone config
Follow this interactive process:
-$ rclone config
-e) Edit existing remote
-n) New remote
-d) Delete remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-e/n/d/r/c/s/q> n
-
-Enter name for new remote.
-name> Shade
-
-Option Storage.
-Type of storage to configure.
-Choose a number from below, or type in your own value.
-[OTHER OPTIONS]
-xx / Shade FS
- \ (shade)
-[OTHER OPTIONS]
-Storage> xx
-
-Option drive_id.
-The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
-Enter a value.
-drive_id> [YOUR_ID]
-
-Option api_key.
-An API key for your account.
-Enter a value.
-api_key> [YOUR_API_KEY]
-
-Edit advanced config?
-y) Yes
-n) No (default)
-y/n> n
-
-Configuration complete.
-Options:
-- type: shade
-- drive_id: [YOUR_ID]
-- api_key: [YOUR_API_KEY]
-Keep this "Shade" remote?
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y$ rclone config
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> n
+
+Enter name for new remote.
+name> Shade
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[OTHER OPTIONS]
+xx / Shade FS
+ \ (shade)
+[OTHER OPTIONS]
+Storage> xx
+
+Option drive_id.
+The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
+Enter a value.
+drive_id> [YOUR_ID]
+
+Option api_key.
+An API key for your account.
+Enter a value.
+api_key> [YOUR_API_KEY]
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: shade
+- drive_id: [YOUR_ID]
+- api_key: [YOUR_API_KEY]
+Keep this "Shade" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
Shade does not support hashes and writing mod times.
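Once saved, the remote can be used like any other; for example, to copy a local directory into it:
rclone copy /path/to/local/files Shade:backups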
@@ -60759,13 +60793,13 @@ upstream. The tag :writeback on an upstream remote can be used to
make a simple cache system like this:
[union]
-type = union
-action_policy = all
-create_policy = all
-search_policy = ff
-upstreams = /local:writeback remote:dir[union]
+type = union
+action_policy = all
+create_policy = all
+search_policy = ff
+upstreams = /local:writeback remote:dir
When files are opened for read, if the file is in
remote:dir but not /local then rclone will
copy the file entirely into /local before returning a
@@ -61225,13 +61259,13 @@ and use your normal account email and password for user and
pass. If you have 2FA enabled, you have to generate an app
password. Set the vendor to sharepoint.
Your config file should look like this:
-[sharepoint]
-type = webdav
-url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
-vendor = sharepoint
-user = YourEmailAddress
-pass = encryptedpassword[sharepoint]
+type = webdav
+url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
+vendor = sharepoint
+user = YourEmailAddress
+pass = encryptedpassword
Use this option in case your (hosted) Sharepoint is not tied to
@@ -61249,13 +61283,13 @@ https://example.sharepoint.com/sites/12345/Documents
NTLM uses domain and user name combination for authentication, set
user to DOMAIN\username.
Your config file should look like this:
-[sharepoint]
-type = webdav
-url = https://[YOUR-DOMAIN]/some-path-to/Documents
-vendor = sharepoint-ntlm
-user = DOMAIN\user
-pass = encryptedpassword[sharepoint]
+type = webdav
+url = https://[YOUR-DOMAIN]/some-path-to/Documents
+vendor = sharepoint-ntlm
+user = DOMAIN\user
+pass = encryptedpasswordAs SharePoint does some special things with uploaded documents, you @@ -61285,14 +61319,14 @@ access tokens.
username or password, instead enter your Macaroon as the bearer_token.
The config will end up looking something like this.
-[dcache]
-type = webdav
-url = https://dcache...
-vendor = other
-user =
-pass =
-bearer_token = your-macaroon[dcache]
+type = webdav
+url = https://dcache...
+vendor = other
+user =
+pass =
+bearer_token = your-macaroon
There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an
The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.
-[dcache]
-type = webdav
-url = https://dcache.example.org/
-vendor = other
-bearer_token_command = oidc-token XDC[dcache]
+type = webdav
+url = https://dcache.example.org/
+vendor = other
+bearer_token_command = oidc-token XDC
Yandex Disk is a cloud storage solution created by Yandex.
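Before wiring it into rclone, it can help to confirm the token command works on its own (assuming oidc-agent is running with an account named XDC):
oidc-token XDC
This should print an access token to stdout.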
@@ -62138,15 +62172,15 @@ drivers like EncFS. To disable UNC conversion globally, add this to your .rclone.conf
file:
-[local]
-nounc = true[local]
+nounc = true
If you want to selectively disable UNC, you can add it to a separate entry like this:
-[nounc]
-type = local
-nounc = true[nounc]
+type = local
+nounc = true
And use rclone like this:
rclone copy c:\src nounc:z:\dst
This will use UNC paths on c:\src but not on
@@ -62674,6 +62708,57 @@ the output.
See @@ -74644,10 +74729,10 @@ formats
some.domain.com no such host
This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g.
-# both should print a long list of possible IP addresses
-dig www.googleapis.com # resolve using your default DNS
-dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
# both should print a long list of possible IP addresses
+dig www.googleapis.com # resolve using your default DNS
+dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
If you are using systemd-resolved (default on Arch
Linux), ensure it is at version 233 or higher. Previous releases contain
a bug which causes not all domains to be resolved properly.
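A quick way to check the systemd release in use (systemd-resolved is versioned with systemd itself):
systemctl --version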
A simple solution may be restarting the Host Network Service with e.g. PowerShell
Restart-Service hns
GODEBUG:
Windows (cmd.exe):
set GODEBUG=tlsrsakex=1
rclone copy ...
Windows (PowerShell):
$env:GODEBUG="tlsrsakex=1"
rclone copy ...
Linux/macOS:
GODEBUG=tlsrsakex=1 rclone copy ...
If the server only supports 3DES, try:
GODEBUG=tls3des=1 rclone ...
This applies to any rclone feature using TLS (HTTPS, FTPS, WebDAV over TLS, proxies with TLS interception, etc.). Use these workarounds only long enough to get the server/proxy updated.
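For rclone processes managed by systemd, the variable can be set in the unit rather than the shell; a sketch for a service unit running rclone:
[Service]
Environment=GODEBUG=tlsrsakex=1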
diff --git a/MANUAL.md b/MANUAL.md index d586ea5de..0bf46d40b 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,6 +1,6 @@ % rclone(1) User Manual % Nick Craig-Wood -% Jan 30, 2026 +% Feb 17, 2026 # NAME @@ -5388,12 +5388,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e ```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" -// Output: stories/The Quick Brown Fox!-20260130 +// Output: stories/The Quick Brown Fox!-20260217 ``` ```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" -// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM +// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM ``` ```console @@ -5841,6 +5841,15 @@ Note that `--stdout` and `--print-filename` are incompatible with `--urls`. This will do `--transfers` copies in parallel. Note that if `--auto-filename` is desired for all URLs then a file with only URLs and no filename can be used. +Each FILENAME in the CSV file can start with a relative path which will be appended +to the destination path provided at the command line. For example, running the command +shown above with the following CSV file will write two files to the destination: +`remote:dir/local/path/bar.json` and `remote:dir/another/local/directory/qux.json` +```csv +https://example.org/foo/bar.json,local/path/bar.json +https://example.org/qux/baz.json,another/local/directory/qux.json +``` + ## Troubleshooting If you can't get `rclone copyurl` to work then here are some things you can try: @@ -24776,7 +24785,7 @@ Flags for general networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1") ``` @@ -25260,9 +25269,11 @@ Backend-only flags (these can be set in the config file also). --filefabric-token-expiry string Token expiry time --filefabric-url string URL of the Enterprise File Fabric to connect to --filefabric-version string Version read from the file fabric + --filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi) --filelu-description string Description of the remote --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation) --filelu-key string Your FileLu Rclone key from My Account + --filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. 
Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi) --filen-api-key string API Key for your Filen account (obscured) --filen-auth-version string Authentication Version (internal use only) --filen-base-folder-uuid string UUID of Account Root Directory (internal use only) @@ -27528,11 +27539,14 @@ The following backends have known issues that need more investigation: - `TestDropbox` (`dropbox`) - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt) -- `TestSeafile` (`seafile`) - - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt) -- `TestSeafileV6` (`seafile`) - - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt) -- Updated: 2026-01-30-010015 +- `TestInternxt` (`internxt`) + - [`TestBisyncLocalRemote/all_changed`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncLocalRemote/ext_paths`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncLocalRemote/max_delete_path1`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncRemoteRemote/basic`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncRemoteRemote/concurrent`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [5 more](https://pub.rclone.org/integration-tests/current/) +- Updated: 2026-02-17-010016 The following backends either have not been tested recently or have known issues @@ -44693,6 +44707,28 @@ Properties: Here are the Advanced options specific to filelu (FileLu Cloud Storage). +#### --filelu-upload-cutoff + +Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size. + +Properties: + +- Config: upload_cutoff +- Env Var: RCLONE_FILELU_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 500Mi + +#### --filelu-chunk-size + +Chunk size to use for uploading. Used for multipart uploads. + +Properties: + +- Config: chunk_size +- Env Var: RCLONE_FILELU_CHUNK_SIZE +- Type: SizeSuffix +- Default: 64Mi + #### --filelu-encoding The encoding for the backend. @@ -68058,6 +68094,33 @@ Options: # Changelog +## v1.73.1 - 2026-02-17 + +[See commits](https://github.com/rclone/rclone/compare/v1.73.0...v1.73.1) + +- Bug Fixes + - accounting: Fix missing server side stats from core/stats rc (Nick Craig-Wood) + - build + - Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick Craig-Wood) + - Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix GO-2026-4316 (albertony) + - docs: Extend copyurl docs with an example of CSV FILENAMEs starting with a path. 
(Jack Kelly) + - march: Fix runtime: program exceeds 10000-thread limit (Nick Craig-Wood) + - pacer + - Fix deadlock between pacer token and --max-connections (Nick Craig-Wood) + - Re-read the sleep time as it may be stale (Nick Craig-Wood) +- Drime + - Fix files and directories being created in the default workspace (Nick Craig-Wood) +- Filelu + - Avoid buffering entire file in memory (kingston125) + - Add multipart upload support with configurable cutoff (kingston125) +- Filen + - Fix 32 bit targets not being able to list directories (Enduriel) + - Fix potential panic in case of error during upload (Enduriel) +- Internxt + - Implement re-login under refresh logic, improve retry logic (José Zúniga) +-S3 + - Set list_version to 2 for FileLu S3 configuration (kingston125) + ## v1.73.0 - 2026-01-30 [See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0) diff --git a/MANUAL.txt b/MANUAL.txt index 668631e8b..65f6468c9 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Jan 30, 2026 +Feb 17, 2026 NAME @@ -4607,10 +4607,10 @@ Examples: // Output: stories/The Quick Brown Fox!.txt rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" - // Output: stories/The Quick Brown Fox!-20260130 + // Output: stories/The Quick Brown Fox!-20260217 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" - // Output: stories/The Quick Brown Fox!-2026-01-30 0825PM + // Output: stories/The Quick Brown Fox!-2026-02-17 0451PM rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab" // Output: ababababababab/ababab ababababab ababababab ababab!abababab @@ -5032,6 +5032,15 @@ incompatible with --urls. This will do --transfers copies in parallel. Note that if --auto-filename is desired for all URLs then a file with only URLs and no filename can be used. +Each FILENAME in the CSV file can start with a relative path which will +be appended to the destination path provided at the command line. For +example, running the command shown above with the following CSV file +will write two files to the destination: remote:dir/local/path/bar.json +and remote:dir/another/local/directory/qux.json + + https://example.org/foo/bar.json,local/path/bar.json + https://example.org/qux/baz.json,another/local/directory/qux.json + Troubleshooting If you can't get rclone copyurl to work then here are some things you @@ -22962,7 +22971,7 @@ Flags for general networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1") Performance @@ -23416,9 +23425,11 @@ Backend-only flags (these can be set in the config file also). --filefabric-token-expiry string Token expiry time --filefabric-url string URL of the Enterprise File Fabric to connect to --filefabric-version string Version read from the file fabric + --filelu-chunk-size SizeSuffix Chunk size to use for uploading. 
Used for multipart uploads (default 64Mi) --filelu-description string Description of the remote --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation) --filelu-key string Your FileLu Rclone key from My Account + --filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi) --filen-api-key string API Key for your Filen account (obscured) --filen-auth-version string Authentication Version (internal use only) --filen-base-folder-uuid string UUID of Account Root Directory (internal use only) @@ -25626,11 +25637,14 @@ The following backends have known issues that need more investigation: - TestDropbox (dropbox) - TestBisyncRemoteRemote/normalization -- TestSeafile (seafile) - - TestBisyncLocalRemote/volatile -- TestSeafileV6 (seafile) - - TestBisyncLocalRemote/volatile -- Updated: 2026-01-30-010015 +- TestInternxt (internxt) + - TestBisyncLocalRemote/all_changed + - TestBisyncLocalRemote/ext_paths + - TestBisyncLocalRemote/max_delete_path1 + - TestBisyncRemoteRemote/basic + - TestBisyncRemoteRemote/concurrent + - 5 more +- Updated: 2026-02-17-010016 The following backends either have not been tested recently or have known issues that are deemed unfixable for the time being: @@ -42261,6 +42275,29 @@ Advanced options Here are the Advanced options specific to filelu (FileLu Cloud Storage). +--filelu-upload-cutoff + +Cutoff for switching to chunked upload. Any files larger than this will +be uploaded in chunks of chunk_size. + +Properties: + +- Config: upload_cutoff +- Env Var: RCLONE_FILELU_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 500Mi + +--filelu-chunk-size + +Chunk size to use for uploading. Used for multipart uploads. + +Properties: + +- Config: chunk_size +- Env Var: RCLONE_FILELU_CHUNK_SIZE +- Type: SizeSuffix +- Default: 64Mi + --filelu-encoding The encoding for the backend. @@ -65100,6 +65137,41 @@ Options: Changelog +v1.73.1 - 2026-02-17 + +See commits + +- Bug Fixes + - accounting: Fix missing server side stats from core/stats rc + (Nick Craig-Wood) + - build + - Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick + Craig-Wood) + - Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix + GO-2026-4316 (albertony) + - docs: Extend copyurl docs with an example of CSV FILENAMEs + starting with a path. 
(Jack Kelly) + - march: Fix runtime: program exceeds 10000-thread limit (Nick + Craig-Wood) + - pacer + - Fix deadlock between pacer token and --max-connections (Nick + Craig-Wood) + - Re-read the sleep time as it may be stale (Nick Craig-Wood) +- Drime + - Fix files and directories being created in the default workspace + (Nick Craig-Wood) +- Filelu + - Avoid buffering entire file in memory (kingston125) + - Add multipart upload support with configurable cutoff + (kingston125) +- Filen + - Fix 32 bit targets not being able to list directories (Enduriel) + - Fix potential panic in case of error during upload (Enduriel) +- Internxt + - Implement re-login under refresh logic, improve retry logic + (José Zúniga) -S3 + - Set list_version to 2 for FileLu S3 configuration (kingston125) + v1.73.0 - 2026-01-30 See commits diff --git a/docs/content/bisync.md b/docs/content/bisync.md index f6e99c7fd..3c4fea3d2 100644 --- a/docs/content/bisync.md +++ b/docs/content/bisync.md @@ -1049,11 +1049,14 @@ The following backends have known issues that need more investigation: - `TestDropbox` (`dropbox`) - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt) -- `TestSeafile` (`seafile`) - - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt) -- `TestSeafileV6` (`seafile`) - - [`TestBisyncLocalRemote/volatile`](https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt) -- Updated: 2026-01-30-010015 +- `TestInternxt` (`internxt`) + - [`TestBisyncLocalRemote/all_changed`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncLocalRemote/ext_paths`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncLocalRemote/max_delete_path1`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncRemoteRemote/basic`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [`TestBisyncRemoteRemote/concurrent`](https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) + - [5 more](https://pub.rclone.org/integration-tests/current/) +- Updated: 2026-02-17-010016 The following backends either have not been tested recently or have known issues diff --git a/docs/content/changelog.md b/docs/content/changelog.md index 12210a62e..59043f711 100644 --- a/docs/content/changelog.md +++ b/docs/content/changelog.md @@ -6,6 +6,33 @@ description: "Rclone Changelog" # Changelog +## v1.73.1 - 2026-02-17 + +[See commits](https://github.com/rclone/rclone/compare/v1.73.0...v1.73.1) + +- Bug Fixes + - accounting: Fix missing server side stats from core/stats rc (Nick Craig-Wood) + - build + - Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick Craig-Wood) + - Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix GO-2026-4316 (albertony) + - docs: Extend copyurl docs with an example of CSV FILENAMEs starting with a path. 
(Jack Kelly) + - march: Fix runtime: program exceeds 10000-thread limit (Nick Craig-Wood) + - pacer + - Fix deadlock between pacer token and --max-connections (Nick Craig-Wood) + - Re-read the sleep time as it may be stale (Nick Craig-Wood) +- Drime + - Fix files and directories being created in the default workspace (Nick Craig-Wood) +- Filelu + - Avoid buffering entire file in memory (kingston125) + - Add multipart upload support with configurable cutoff (kingston125) +- Filen + - Fix 32 bit targets not being able to list directories (Enduriel) + - Fix potential panic in case of error during upload (Enduriel) +- Internxt + - Implement re-login under refresh logic, improve retry logic (José Zúniga) +-S3 + - Set list_version to 2 for FileLu S3 configuration (kingston125) + ## v1.73.0 - 2026-01-30 [See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0) diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index 23788d2b8..3127eef08 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -329,9 +329,11 @@ rclone [flags] --filefabric-token-expiry string Token expiry time --filefabric-url string URL of the Enterprise File Fabric to connect to --filefabric-version string Version read from the file fabric + --filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi) --filelu-description string Description of the remote --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation) --filelu-key string Your FileLu Rclone key from My Account + --filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi) --filen-api-key string API Key for your Filen account (obscured) --filen-auth-version string Authentication Version (internal use only) --filen-base-folder-uuid string UUID of Account Root Directory (internal use only) @@ -1063,7 +1065,7 @@ rclone [flags] --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1") -v, --verbose count Print lots more stuff (repeat for more) -V, --version Print the version number --webdav-auth-redirect Preserve authentication on redirect diff --git a/docs/content/commands/rclone_convmv.md b/docs/content/commands/rclone_convmv.md index 83840905a..8ca5f14ec 100644 --- a/docs/content/commands/rclone_convmv.md +++ b/docs/content/commands/rclone_convmv.md @@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e ```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" -// Output: stories/The Quick Brown Fox!-20260130 +// Output: stories/The Quick Brown Fox!-20260217 ``` ```console rclone convmv "stories/The Quick Brown Fox!" 
--name-transform "date=-{macfriendlytime}" -// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM +// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM ``` ```console diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md index 0df388f87..47958051f 100644 --- a/docs/content/commands/rclone_copyurl.md +++ b/docs/content/commands/rclone_copyurl.md @@ -39,6 +39,15 @@ Note that `--stdout` and `--print-filename` are incompatible with `--urls`. This will do `--transfers` copies in parallel. Note that if `--auto-filename` is desired for all URLs then a file with only URLs and no filename can be used. +Each FILENAME in the CSV file can start with a relative path which will be appended +to the destination path provided at the command line. For example, running the command +shown above with the following CSV file will write two files to the destination: +`remote:dir/local/path/bar.json` and `remote:dir/another/local/directory/qux.json` +```csv +https://example.org/foo/bar.json,local/path/bar.json +https://example.org/qux/baz.json,another/local/directory/qux.json +``` + ## Troubleshooting If you can't get `rclone copyurl` to work then here are some things you can try: diff --git a/docs/content/filelu.md b/docs/content/filelu.md index e56e023cb..d71135d0b 100644 --- a/docs/content/filelu.md +++ b/docs/content/filelu.md @@ -219,6 +219,28 @@ Properties: Here are the Advanced options specific to filelu (FileLu Cloud Storage). +#### --filelu-upload-cutoff + +Cutoff for switching to chunked upload. Any files larger than this will be uploaded in chunks of chunk_size. + +Properties: + +- Config: upload_cutoff +- Env Var: RCLONE_FILELU_UPLOAD_CUTOFF +- Type: SizeSuffix +- Default: 500Mi + +#### --filelu-chunk-size + +Chunk size to use for uploading. Used for multipart uploads. + +Properties: + +- Config: chunk_size +- Env Var: RCLONE_FILELU_CHUNK_SIZE +- Type: SizeSuffix +- Default: 64Mi + #### --filelu-encoding The encoding for the backend. diff --git a/docs/content/flags.md b/docs/content/flags.md index d7bb0af9b..51152777a 100644 --- a/docs/content/flags.md +++ b/docs/content/flags.md @@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.73.1") ``` @@ -605,9 +605,11 @@ Backend-only flags (these can be set in the config file also). --filefabric-token-expiry string Token expiry time --filefabric-url string URL of the Enterprise File Fabric to connect to --filefabric-version string Version read from the file fabric + --filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi) --filelu-description string Description of the remote --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation) --filelu-key string Your FileLu Rclone key from My Account + --filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. 
Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi) --filen-api-key string API Key for your Filen account (obscured) --filen-auth-version string Authentication Version (internal use only) --filen-base-folder-uuid string UUID of Account Root Directory (internal use only) diff --git a/go.sum b/go.sum index 1344222c5..0757791d2 100644 --- a/go.sum +++ b/go.sum @@ -423,8 +423,6 @@ github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyf github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc= github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2s0bqwp9tc8= github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw= -github.com/internxt/rclone-adapter v0.0.0-20260130171252-c3c6ebb49276 h1:PTJPYovznNqc9t/9MjvtqhrgEVC9OiK75ZPL6hqm6gM= -github.com/internxt/rclone-adapter v0.0.0-20260130171252-c3c6ebb49276/go.mod h1:vdPya4AIcDjvng4ViaAzqjegJf0VHYpYHQguFx5xBp0= github.com/internxt/rclone-adapter v0.0.0-20260213125353-6f59c89fcb7c h1:r+KtxPyrhsYeNbsfeqTfEM8xRdwgV6LuNhLZxpXecb4= github.com/internxt/rclone-adapter v0.0.0-20260213125353-6f59c89fcb7c/go.mod h1:vdPya4AIcDjvng4ViaAzqjegJf0VHYpYHQguFx5xBp0= github.com/jcmturner/aescts/v2 v2.0.0 h1:9YKLH6ey7H4eDBXW8khjYslgyqG2xZikXP0EQFKrle8= diff --git a/lib/transform/transform.md b/lib/transform/transform.md index 609e7ac25..72078fa19 100644 --- a/lib/transform/transform.md +++ b/lib/transform/transform.md @@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e ```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" -// Output: stories/The Quick Brown Fox!-20260130 +// Output: stories/The Quick Brown Fox!-20260217 ``` ```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" -// Output: stories/The Quick Brown Fox!-2026-01-30 0852PM +// Output: stories/The Quick Brown Fox!-2026-02-17 0454PM ``` ```console diff --git a/rclone.1 b/rclone.1 index 42cf2d4fc..272f532f7 100644 --- a/rclone.1 +++ b/rclone.1 @@ -15,7 +15,7 @@ . ftr VB CB . ftr VBI CBI .\} -.TH "rclone" "1" "Jan 30, 2026" "User Manual" "" +.TH "rclone" "1" "Feb 17, 2026" "User Manual" "" .hy .SH NAME .PP @@ -6292,14 +6292,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a .nf \f[C] rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq] -// Output: stories/The Quick Brown Fox!-20260130 +// Output: stories/The Quick Brown Fox!-20260217 \f[R] .fi .IP .nf \f[C] rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq] -// Output: stories/The Quick Brown Fox!-2026-01-30 0825PM +// Output: stories/The Quick Brown Fox!-2026-02-17 0451PM \f[R] .fi .IP @@ -6832,6 +6832,20 @@ incompatible with \f[V]--urls\f[R]. This will do \f[V]--transfers\f[R] copies in parallel. Note that if \f[V]--auto-filename\f[R] is desired for all URLs then a file with only URLs and no filename can be used. +.PP +Each FILENAME in the CSV file can start with a relative path which will +be appended to the destination path provided at the command line. 
+For example, running the command shown above with the following CSV file +will write two files to the destination: +\f[V]remote:dir/local/path/bar.json\f[R] and +\f[V]remote:dir/another/local/directory/qux.json\f[R] +.IP +.nf +\f[C] +https://example.org/foo/bar.json,local/path/bar.json +https://example.org/qux/baz.json,another/local/directory/qux.json +\f[R] +.fi .SS Troubleshooting .PP If you can\[aq]t get \f[V]rclone copyurl\f[R] to work then here are some @@ -29878,7 +29892,7 @@ Flags for general networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.73.0\[dq]) + --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.73.1\[dq]) \f[R] .fi .SS Performance @@ -30362,9 +30376,11 @@ Backend-only flags (these can be set in the config file also). --filefabric-token-expiry string Token expiry time --filefabric-url string URL of the Enterprise File Fabric to connect to --filefabric-version string Version read from the file fabric + --filelu-chunk-size SizeSuffix Chunk size to use for uploading. Used for multipart uploads (default 64Mi) --filelu-description string Description of the remote --filelu-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe,Hash,Percent,BackSlash,CrLf,Del,Ctl,LeftSpace,LeftPeriod,LeftTilde,LeftCrLfHtVt,RightSpace,RightPeriod,RightCrLfHtVt,InvalidUtf8,Dot,SquareBracket,Semicolon,Exclamation) --filelu-key string Your FileLu Rclone key from My Account + --filelu-upload-cutoff SizeSuffix Cutoff for switching to chunked upload. 
Any files larger than this will be uploaded in chunks of chunk_size (default 500Mi) --filen-api-key string API Key for your Filen account (obscured) --filen-auth-version string Authentication Version (internal use only) --filen-base-folder-uuid string UUID of Account Root Directory (internal use only) @@ -33145,19 +33161,23 @@ The following backends have known issues that need more investigation: \f[V]TestBisyncRemoteRemote/normalization\f[R] (https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt) .RE .IP \[bu] 2 -\f[V]TestSeafile\f[R] (\f[V]seafile\f[R]) +\f[V]TestInternxt\f[R] (\f[V]internxt\f[R]) .RS 2 .IP \[bu] 2 -\f[V]TestBisyncLocalRemote/volatile\f[R] (https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafile-1.txt) +\f[V]TestBisyncLocalRemote/all_changed\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) +.IP \[bu] 2 +\f[V]TestBisyncLocalRemote/ext_paths\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) +.IP \[bu] 2 +\f[V]TestBisyncLocalRemote/max_delete_path1\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) +.IP \[bu] 2 +\f[V]TestBisyncRemoteRemote/basic\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) +.IP \[bu] 2 +\f[V]TestBisyncRemoteRemote/concurrent\f[R] (https://pub.rclone.org/integration-tests/current/internxt-cmd.bisync-TestInternxt-1.txt) +.IP \[bu] 2 +5 more (https://pub.rclone.org/integration-tests/current/) .RE .IP \[bu] 2 -\f[V]TestSeafileV6\f[R] (\f[V]seafile\f[R]) -.RS 2 -.IP \[bu] 2 -\f[V]TestBisyncLocalRemote/volatile\f[R] (https://pub.rclone.org/integration-tests/current/seafile-cmd.bisync-TestSeafileV6-1.txt) -.RE -.IP \[bu] 2 -Updated: 2026-01-30-010015 +Updated: 2026-02-17-010016 .PP The following backends either have not been tested recently or have known issues that are deemed unfixable for the time being: @@ -56514,6 +56534,34 @@ Required: true .SS Advanced options .PP Here are the Advanced options specific to filelu (FileLu Cloud Storage). +.SS --filelu-upload-cutoff +.PP +Cutoff for switching to chunked upload. +Any files larger than this will be uploaded in chunks of chunk_size. +.PP +Properties: +.IP \[bu] 2 +Config: upload_cutoff +.IP \[bu] 2 +Env Var: RCLONE_FILELU_UPLOAD_CUTOFF +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 500Mi +.SS --filelu-chunk-size +.PP +Chunk size to use for uploading. +Used for multipart uploads. +.PP +Properties: +.IP \[bu] 2 +Config: chunk_size +.IP \[bu] 2 +Env Var: RCLONE_FILELU_CHUNK_SIZE +.IP \[bu] 2 +Type: SizeSuffix +.IP \[bu] 2 +Default: 64Mi .SS --filelu-encoding .PP The encoding for the backend. @@ -86814,6 +86862,71 @@ Options: .IP \[bu] 2 \[dq]error\[dq]: Return an error based on option value. .SH Changelog +.SS v1.73.1 - 2026-02-17 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.73.0...v1.73.1) +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +accounting: Fix missing server side stats from core/stats rc (Nick +Craig-Wood) +.IP \[bu] 2 +build +.RS 2 +.IP \[bu] 2 +Fix CVE-2025-68121 by updating go to 1.25.7 or later (Nick Craig-Wood) +.IP \[bu] 2 +Bump github.com/go-chi/chi/v5 from 5.2.3 to 5.2.5 to fix GO-2026-4316 +(albertony) +.RE +.IP \[bu] 2 +docs: Extend copyurl docs with an example of CSV FILENAMEs starting with +a path. 
+(Jack Kelly) +.IP \[bu] 2 +march: Fix runtime: program exceeds 10000-thread limit (Nick Craig-Wood) +.IP \[bu] 2 +pacer +.RS 2 +.IP \[bu] 2 +Fix deadlock between pacer token and --max-connections (Nick Craig-Wood) +.IP \[bu] 2 +Re-read the sleep time as it may be stale (Nick Craig-Wood) +.RE +.RE +.IP \[bu] 2 +Drime +.RS 2 +.IP \[bu] 2 +Fix files and directories being created in the default workspace (Nick +Craig-Wood) +.RE +.IP \[bu] 2 +Filelu +.RS 2 +.IP \[bu] 2 +Avoid buffering entire file in memory (kingston125) +.IP \[bu] 2 +Add multipart upload support with configurable cutoff (kingston125) +.RE +.IP \[bu] 2 +Filen +.RS 2 +.IP \[bu] 2 +Fix 32 bit targets not being able to list directories (Enduriel) +.IP \[bu] 2 +Fix potential panic in case of error during upload (Enduriel) +.RE +.IP \[bu] 2 +Internxt +.RS 2 +.IP \[bu] 2 +Implement re-login under refresh logic, improve retry logic (José +Zúniga) -S3 +.IP \[bu] 2 +Set list_version to 2 for FileLu S3 configuration (kingston125) +.RE .SS v1.73.0 - 2026-01-30 .PP See commits (https://github.com/rclone/rclone/compare/v1.72.0...v1.73.0)