diff --git a/MANUAL.html b/MANUAL.html index ece490bad..7647bb506 100644 --- a/MANUAL.html +++ b/MANUAL.html @@ -233,7 +233,7 @@

rclone(1) User Manual

Nick Craig-Wood

-

Aug 22, 2025

+

Nov 21, 2025

NAME

rclone - manage files on cloud storage

@@ -244,6 +244,7 @@ Available commands: about Get quota information from the remote. + archive Perform an action on an archive. authorize Remote authorization. backend Run a backend-specific command. bisync Perform bidirectional synchronization between two paths. @@ -304,6 +305,7 @@ Use "rclone help backends" for a list of supported services.

Rclone syncs your files to cloud storage

+

rclone logo

For example, to rename all the identically named photos in your Google Photos directory, do

-
rclone dedupe --dedupe-mode rename "drive:Google Photos"
+
rclone dedupe --dedupe-mode rename "drive:Google Photos"

Or

-
rclone dedupe rename "drive:Google Photos"
+
rclone dedupe rename "drive:Google Photos"
rclone dedupe [mode] remote:path [flags]

Options

      --by-hash              Find identical hashes rather than names
@@ -2763,21 +2825,24 @@ href="https://rclone.org/flags/">global flags page for global
 options not listed here.

Important Options

Important flags useful for most commands

-
  -n, --dry-run         Do a trial run with no permanent changes
+
  -n, --dry-run         Do a trial run with no permanent changes
   -i, --interactive     Enable interactive mode
   -v, --verbose count   Print lots more stuff (repeat for more)

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone about

Get quota information from the remote.

Synopsis

Prints quota information about a remote to standard output. The output is typically used, free, quota and trash contents.

E.g. Typical output from rclone about remote: is:

-
Total:   17 GiB
+
Total:   17 GiB
 Used:    7.444 GiB
 Free:    1.315 GiB
 Trashed: 100.000 MiB
@@ -2795,20 +2860,21 @@ Photos).
 

All sizes are in number of bytes.

Applying a --full flag to the command prints the bytes in full, e.g.

-
Total:   18253611008
+
Total:   18253611008
 Used:    7993453766
 Free:    1411001220
 Trashed: 104857602
 Other:   8849156022

A --json flag generates conveniently machine-readable output, e.g.

-
{
-    "total": 18253611008,
-    "used": 7993453766,
-    "trashed": 104857602,
-    "other": 8849156022,
-    "free": 1411001220
-}
+
{
+  "total": 18253611008,
+  "used": 7993453766,
+  "trashed": 104857602,
+  "other": 8849156022,
+  "free": 1411001220
+}

Not all backends print all fields. Information is not included if it is not provided by a backend. Where the value is unlimited it is omitted.

@@ -2823,60 +2889,307 @@ href="https://rclone.org/overview/#optional-features">documentation.

See the global flags page for global options not listed here.

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+ +

rclone archive

+

Perform an action on an archive.

+

Synopsis

+

Perform an action on an archive. Requires the use of a subcommand to specify the action, e.g.

+
rclone archive list remote:file.zip
+

Each subcommand has its own options which you can see in its help.

+

See rclone archive +create for the archive formats supported.

+
rclone archive <action> [opts] <source> [<destination>] [flags]
+

Options

+
  -h, --help   help for archive
+

See the global flags page for +global options not listed here.

+

See Also

  • rclone - Show help for rclone commands, flags and backends.
  • rclone archive create - Archive source file(s) to destination.
  • rclone archive extract - Extract archives from source to destination.
  • rclone archive list - List archive contents from source.

rclone archive create

+

Archive source file(s) to destination.

+

Synopsis

+

Creates an archive from the files in source:path and saves the +archive to dest:path. If dest:path is missing, it will write to the +console.

+

The valid formats for the --format flag are listed +below. If --format is not set rclone will guess it from the +extension of dest:path.

Format     Extensions
zip        .zip
tar        .tar
tar.gz     .tar.gz, .tgz, .taz
tar.bz2    .tar.bz2, .tb2, .tbz, .tbz2, .tz2
tar.lz     .tar.lz
tar.lz4    .tar.lz4
tar.xz     .tar.xz, .txz
tar.zst    .tar.zst, .tzst
tar.br     .tar.br
tar.sz     .tar.sz
tar.mz     .tar.mz
+
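For example, based on the table above, rclone would guess the tar.gz format from a .tgz destination, while --format selects it explicitly when the extension gives no hint (paths here are illustrative):

rclone archive create /sourcedir remote:backups/sourcedir.tgz
rclone archive create --format=tar.gz /sourcedir remote:backups/sourcedir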

The --prefix and --full-path flags control +the prefix for the files in the archive.

+

If the flag --full-path is set then the files will have +the full source path as the prefix.

+

If the flag --prefix=<value> is set then the files will have <value> as the prefix. It's possible to create invalid file names with --prefix=<value>, so use it with caution. The --prefix flag takes priority over --full-path.

+

Given a directory /sourcedir with the following:

+
file1.txt
+dir1/file2.txt
+

Running the command +rclone archive create /sourcedir /dest.tar.gz will make an +archive with the contents:

+
file1.txt
+dir1/
+dir1/file2.txt
+

Running the command +rclone archive create --full-path /sourcedir /dest.tar.gz +will make an archive with the contents:

+
sourcedir/file1.txt
+sourcedir/dir1/
+sourcedir/dir1/file2.txt
+

Running the command +rclone archive create --prefix=my_new_path /sourcedir /dest.tar.gz +will make an archive with the contents:

+
my_new_path/file1.txt
+my_new_path/dir1/
+my_new_path/dir1/file2.txt
+
rclone archive create [flags] <source> [<destination>]
+

Options

+
      --format string   Create the archive with format or guess from extension.
+      --full-path       Set prefix for files in archive to source path
+  -h, --help            help for create
+      --prefix string   Set prefix for files in archive to entered value or source path
+

See the global flags page for +global options not listed here.

+

See Also

  • rclone archive - Perform an action on an archive.

rclone archive extract

+

Extract archives from source to destination.

+

Synopsis

+

Extract the archive contents to a destination directory auto +detecting the format. See rclone archive +create for the archive formats supported.

+

For example on this archive:

+
$ rclone archive list --long remote:archive.zip
+        6 2025-10-30 09:46:23.000000000 file.txt
+        0 2025-10-30 09:46:57.000000000 dir/
+        4 2025-10-30 09:46:57.000000000 dir/bye.txt
+

You can run extract like this

+
$ rclone archive extract remote:archive.zip remote:extracted
+

Which gives this result

+
$ rclone tree remote:extracted
+/
+├── dir
+│   └── bye.txt
+└── file.txt
+

The source or destination or both can be local or remote.

+

Filters can be used to only extract certain files:

+
$ rclone archive extract archive.zip partial --include "bye.*"
+$ rclone tree partial
+/
+└── dir
+    └── bye.txt
+

The archive backend can also be used to extract files. It can additionally mount archives read-only, but it supports a different set of archive formats from the archive commands.

+
rclone archive extract [flags] <source> <destination>
+

Options

+
  -h, --help   help for extract
+

See the global flags page for +global options not listed here.

+

See Also

  • rclone archive - Perform an action on an archive.

rclone archive list

+

List archive contents from source.

+

Synopsis

+

List the contents of an archive to the console, auto detecting the +format. See rclone archive +create for the archive formats supported.

+

For example:

+
$ rclone archive list remote:archive.zip
+        6 file.txt
+        0 dir/
+        4 dir/bye.txt
+

Or with --long flag for more info:

+
$ rclone archive list --long remote:archive.zip
+        6 2025-10-30 09:46:23.000000000 file.txt
+        0 2025-10-30 09:46:57.000000000 dir/
+        4 2025-10-30 09:46:57.000000000 dir/bye.txt
+

Or with --plain flag which is useful for scripting:

+
$ rclone archive list --plain /path/to/archive.zip
+file.txt
+dir/
+dir/bye.txt
+

Or with --dirs-only:

+
$ rclone archive list --plain --dirs-only /path/to/archive.zip
+dir/
+

Or with --files-only:

+
$ rclone archive list --plain --files-only /path/to/archive.zip
+file.txt
+dir/bye.txt
+

Filters may also be used:

+
$ rclone archive list --long archive.zip --include "bye.*"
+        4 2025-10-30 09:46:57.000000000 dir/bye.txt
+

The archive backend can also be used to list files. It can additionally mount archives read-only, but it supports a different set of archive formats from the archive commands.

+
rclone archive list [flags] <source>
+

Options

+
      --dirs-only    Only list directories
+      --files-only   Only list files
+  -h, --help         help for list
+      --long         List extra attributes
+      --plain        Only list file names
+

See the global flags page for +global options not listed here.

+

See Also

  • rclone archive - Perform an action on an archive.

rclone authorize

Remote authorization.

-

Synopsis

+

Synopsis

Remote authorization. Used to authorize a remote or headless rclone -from a machine with a browser - use as instructed by rclone config.

-

The command requires 1-3 arguments: - fs name (e.g., "drive", "s3", -etc.) - Either a base64 encoded JSON blob obtained from a previous -rclone config session - Or a client_id and client_secret pair obtained -from the remote service

+from a machine with a browser. Use as instructed by rclone config. See +also the remote setup documentation.

+

The command requires 1-3 arguments:

+
  • Name of a backend (e.g. "drive", "s3")
  • Either a base64 encoded JSON blob obtained from a previous rclone config session
  • Or a client_id and client_secret pair obtained from the remote service

Use --auth-no-open-browser to prevent rclone from automatically opening the auth link in the default browser.

Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.
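For example, to authorize a Google Drive remote without rclone opening the link itself (an illustrative invocation; run exactly what rclone config tells you to):

rclone authorize "drive" --auth-no-open-browser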

-
rclone authorize <fs name> [base64_json_blob | client_id client_secret] [flags]
-

Options

+
rclone authorize <backendname> [base64_json_blob | client_id client_secret] [flags]
+

Options

      --auth-no-open-browser   Do not automatically open auth link in default browser
   -h, --help                   help for authorize
       --template string        The path to a custom Go template for generating HTML responses

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone backend

Run a backend-specific command.

-

Synopsis

+

Synopsis

This runs a backend-specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.

You can discover what commands a backend implements by using

-
rclone backend help remote:
+
rclone backend help remote:
 rclone backend help <backendname>

You can also discover information about the backend using (see operations/fsinfo in the remote control docs for more info).

-
rclone backend features remote:
+
rclone backend features remote:

Pass options to the backend command with -o. This should be key=value or key, e.g.:

-
rclone backend stats remote:path stats -o format=json -o long
+
rclone backend stats remote:path stats -o format=json -o long

Pass arguments to the backend by placing them on the end of the line

-
rclone backend cleanup remote:path file1 file2 file3
+
rclone backend cleanup remote:path file1 file2 file3

Note: to run these commands on a running backend, see backend/command in the rc docs.

rclone backend <command> remote:path [opts] <args> [flags]
-

Options

+

Options

  -h, --help                 help for backend
       --json                 Always output in JSON format
   -o, --option stringArray   Option in the form name=value or name
@@ -2885,25 +3198,31 @@ href="https://rclone.org/flags/">global flags page for global options not listed here.

Important Options

Important flags useful for most commands

-
  -n, --dry-run         Do a trial run with no permanent changes
+
  -n, --dry-run         Do a trial run with no permanent changes
   -i, --interactive     Enable interactive mode
   -v, --verbose count   Print lots more stuff (repeat for more)
-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone bisync

Perform bidirectional synchronization between two paths.

-

Synopsis

+

Synopsis

Perform bidirectional synchronization between two paths.

Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it -will: - list files on Path1 and Path2, and check for changes on each -side. Changes include New, Newer, -Older, and Deleted files. - Propagate changes -on Path1 to Path2, and vice-versa.

+will:

  • list files on Path1 and Path2, and check for changes on each side. Changes include New, Newer, Older, and Deleted files.
  • Propagate changes on Path1 to Path2, and vice-versa.

Bisync is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the Limitations section) before using, or data loss can result. Questions can be asked in the Rclone Forum.

See full bisync description for details.
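For example, a cautious first run could add the --dry-run flag from the Important Options below (an illustrative invocation):

rclone bisync remote1:path1 remote2:path2 --dry-run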

rclone bisync remote1:path1 remote2:path2 [flags]
-

Options

+

Options

      --backup-dir1 string                   --backup-dir for Path1. Must be a non-overlapping path on the same remote.
       --backup-dir2 string                   --backup-dir for Path2. Must be a non-overlapping path on the same remote.
       --check-access                         Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
@@ -2944,7 +3263,7 @@ href="https://rclone.org/flags/">global flags page for global
 options not listed here.

Copy Options

Flags for anything which can copy a file

-
      --check-first                                 Do all the checks before starting transfers
+
      --check-first                                 Do all the checks before starting transfers
   -c, --checksum                                    Check for changes with size & checksum (if available, or fallback to size only)
       --compare-dest stringArray                    Include additional server-side paths during comparison
       --copy-dest stringArray                       Implies --compare-dest but also copies files from paths into destination
@@ -2980,12 +3299,12 @@ options not listed here.

-u, --update Skip files that are newer on the destination

Important Options

Important flags useful for most commands

-
  -n, --dry-run         Do a trial run with no permanent changes
+
  -n, --dry-run         Do a trial run with no permanent changes
   -i, --interactive     Enable interactive mode
   -v, --verbose count   Print lots more stuff (repeat for more)

Filter Options

Flags for filtering directory listings

-
      --delete-excluded                     Delete files on dest excluded from sync
+
      --delete-excluded                     Delete files on dest excluded from sync
       --exclude stringArray                 Exclude files matching pattern
       --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
       --exclude-if-present stringArray      Exclude directories if filename is present
@@ -3008,22 +3327,28 @@ options not listed here.

--metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone cat

Concatenates any files and sends them to stdout.

-

Synopsis

+

Synopsis

Sends any files to standard output.

You can use it like this to output a single file

-
rclone cat remote:path/to/file
+
rclone cat remote:path/to/file

Or like this to output any file in dir or its subdirectories.

-
rclone cat remote:path/to/dir
+
rclone cat remote:path/to/dir

Or like this to output any .txt files in dir or its subdirectories.

-
rclone --include "*.txt" cat remote:path/to/dir
+
rclone --include "*.txt" cat remote:path/to/dir

Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if @@ -3035,12 +3360,14 @@ between files. Be sure to shell-escape special characters. For example, to print a newline between files, use:

  • bash:

    -
    rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
  • +
    rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir
  • powershell:

    -
    rclone --include "*.txt" --separator "`n" cat remote:path/to/dir
  • +
    rclone --include "*.txt" --separator "`n" cat remote:path/to/dir
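Similarly, to print a 100 character section starting at character 100, using the --offset and --count flags described above (an illustrative invocation):

rclone cat --offset 100 --count 100 remote:path/to/file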
rclone cat remote:path [flags]
-

Options

+

Options

      --count int          Only print N characters (default -1)
       --discard            Discard the output instead of printing
       --head int           Only print the first N characters
@@ -3053,7 +3380,7 @@ href="https://rclone.org/flags/">global flags page for global
 options not listed here.

Filter Options

Flags for filtering directory listings

-
      --delete-excluded                     Delete files on dest excluded from sync
+
      --delete-excluded                     Delete files on dest excluded from sync
       --exclude stringArray                 Exclude files matching pattern
       --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
       --exclude-if-present stringArray      Exclude directories if filename is present
@@ -3078,16 +3405,19 @@ options not listed here.

--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

Listing Options

Flags for listing directories

-
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
       --fast-list           Use recursive list if available; uses more memory but fewer transactions
-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone checksum

Checks the files in the destination against a SUM file.

-

Synopsis

+

Synopsis

Checks that hashsums of destination files match the SUM file. It compares hashes (MD5, SHA1, etc) and logs a report of files which don't match. It doesn't alter the file system.

@@ -3130,7 +3460,7 @@ source or dest. href="https://rclone.org/docs/#checkers-int">--checkers option for more information.
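For example, to check a remote against a local SHA1 SUM file, optionally re-hashing the actual contents with --download (file names illustrative):

rclone checksum sha1 SHA1SUMS remote:path
rclone checksum sha1 SHA1SUMS --download remote:path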

rclone checksum <hash> sumfile dst:path [flags]
-

Options

+

Options

      --combined string         Make a combined report of changes to this file
       --differ string           Report all non-matching files to this file
       --download                Check by hashing the contents
@@ -3145,7 +3475,7 @@ href="https://rclone.org/flags/">global flags page for global
 options not listed here.

Filter Options

Flags for filtering directory listings

-
      --delete-excluded                     Delete files on dest excluded from sync
+
      --delete-excluded                     Delete files on dest excluded from sync
       --exclude stringArray                 Exclude files matching pattern
       --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
       --exclude-if-present stringArray      Exclude directories if filename is present
@@ -3170,23 +3500,28 @@ options not listed here.

--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

Listing Options

Flags for listing directories

-
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
       --fast-list           Use recursive list if available; uses more memory but fewer transactions
-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone completion

Output completion script for a given shell.

-

Synopsis

+

Synopsis

Generates a shell completion script for rclone. Run with --help to list the supported shells.

-

Options

+

Options

  -h, --help   help for completion

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
  • @@ -3201,14 +3536,15 @@ rclone.
  • rclone completion zsh - Output zsh completion script for rclone.
+

rclone completion bash

Output bash completion script for rclone.

-

Synopsis

+

Synopsis

Generates a bash shell autocompletion script for rclone.

By default, when run without any arguments,

-
rclone completion bash
+
rclone completion bash

the generated script will be written to

-
/etc/bash_completion.d/rclone
+
/etc/bash_completion.d/rclone

and so rclone will probably need to be run as root, or with sudo.

If you supply a path to a file as the command line argument, then the generated script will be written to that file, in which case you should @@ -3217,98 +3553,112 @@ not need root privileges.
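For example, to write the script to a user-writable completion directory instead (the path is illustrative; adjust for your distribution):

rclone completion bash ~/.local/share/bash-completion/completions/rclone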

If you have installed the script into the default location, you can logout and login again to use the autocompletion script.

Alternatively, you can source the script directly

-
. /path/to/my_bash_completion_scripts/rclone
+
. /path/to/my_bash_completion_scripts/rclone

and the autocompletion functionality will be added to your current shell.

rclone completion bash [output_file] [flags]
-

Options

+

Options

  -h, --help   help for bash

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone completion fish

Output fish completion script for rclone.

-

Synopsis

+

Synopsis

Generates a fish autocompletion script for rclone.

This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g.

-
sudo rclone completion fish
+
sudo rclone completion fish

Logout and login again to use the autocompletion scripts, or source them directly

-
. /etc/fish/completions/rclone.fish
+
. /etc/fish/completions/rclone.fish

If you supply a command line argument the script will be written there.

If output_file is "-", then the output will be written to stdout.

rclone completion fish [output_file] [flags]
-

Options

+

Options

  -h, --help   help for fish

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone completion powershell

Output powershell completion script for rclone.

-

Synopsis

+

Synopsis

Generate the autocompletion script for powershell.

To load completions in your current shell session:

-
rclone completion powershell | Out-String | Invoke-Expression
+
rclone completion powershell | Out-String | Invoke-Expression

To load completions for every new session, add the output of the above command to your powershell profile.

If output_file is "-" or missing, then the output will be written to stdout.

rclone completion powershell [output_file] [flags]
-

Options

+

Options

  -h, --help   help for powershell

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone completion zsh

Output zsh completion script for rclone.

-

Synopsis

+

Synopsis

Generates a zsh autocompletion script for rclone.

This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g.

-
sudo rclone completion zsh
+
sudo rclone completion zsh

Logout and login again to use the autocompletion scripts, or source them directly

-
autoload -U compinit && compinit
+
autoload -U compinit && compinit

If you supply a command line argument the script will be written there.

If output_file is "-", then the output will be written to stdout.

rclone completion zsh [output_file] [flags]
-

Options

+

Options

  -h, --help   help for zsh

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config create

Create a new remote with name, type and options.

-

Synopsis

+

Synopsis

Create a new remote of name with type and options. The options should be passed in pairs of key value or as key=value.

For example, to make a swift remote of name myremote using auto config you would do:

-
rclone config create myremote swift env_auth true
-rclone config create myremote swift env_auth=true
+
rclone config create myremote swift env_auth true
+rclone config create myremote swift env_auth=true

So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:

-
rclone config create mydrive drive config_is_local=false
+
rclone config create mydrive drive config_is_local=false

Note that if the config process would normally ask a question the default is taken (unless --non-interactive is used). Each time that happens rclone will print or DEBUG a message saying how to @@ -3331,29 +3681,30 @@ text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.

This will look something like (some irrelevant detail removed):

-
{
-    "State": "*oauth-islocal,teamdrive,,",
-    "Option": {
-        "Name": "config_is_local",
-        "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
-        "Default": true,
-        "Examples": [
-            {
-                "Value": "true",
-                "Help": "Yes"
-            },
-            {
-                "Value": "false",
-                "Help": "No"
-            }
-        ],
-        "Required": false,
-        "IsPassword": false,
-        "Type": "bool",
-        "Exclusive": true,
-    },
-    "Error": "",
-}
+
{
+  "State": "*oauth-islocal,teamdrive,,",
+  "Option": {
+    "Name": "config_is_local",
+    "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
+    "Default": true,
+    "Examples": [
+      {
+        "Value": "true",
+        "Help": "Yes"
+      },
+      {
+        "Value": "false",
+        "Help": "No"
+      }
+    ],
+    "Required": false,
+    "IsPassword": false,
+    "Type": "bool",
+    "Exclusive": true,
+  },
+  "Error": "",
+}

The format of Option is the same as returned by rclone config providers. The question should be asked to the user and returned to rclone as the --result option @@ -3379,7 +3730,8 @@ edited as such

If Error is set then it should be shown to the user at the same time as the question.

-
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"

Note that when using --continue all passwords should be passed in the clear (not obscured). Any default config values should be passed in with each invocation of --continue.

@@ -3391,7 +3743,7 @@ as defaults for questions as usual.

Note that bin/config.py in the rclone source implements this protocol as a readable demonstration.

rclone config create name type [key value]* [flags]
-

Options

+

Options

      --all               Ask the full set of config questions
       --continue          Continue the configuration process with an answer
   -h, --help              help for create
@@ -3403,78 +3755,95 @@ this protocol as a readable demonstration.

--state string State - use with --continue

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config delete

Delete an existing remote.

rclone config delete name [flags]
-

Options

+

Options

  -h, --help   help for delete

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config disconnect

Disconnects user from remote

-

Synopsis

+

Synopsis

This disconnects the remote: passed in to the cloud storage system.

This normally means revoking the oauth token.

To reconnect use "rclone config reconnect".

rclone config disconnect remote: [flags]
-

Options

+

Options

  -h, --help   help for disconnect

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config dump

Dump the config file as JSON.

rclone config dump [flags]
-

Options

+

Options

  -h, --help   help for dump

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config edit

Enter an interactive configuration session.

-

Synopsis

+

Synopsis

Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.

rclone config edit [flags]
-

Options

+

Options

  -h, --help   help for edit

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config encryption

set, remove and check the encryption for the config file

-

Synopsis

+

Synopsis

This command sets, clears and checks the encryption for the config file using the subcommands below.

-

Options

+

Options

  -h, --help   help for encryption

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ +
  • rclone config - Enter an interactive configuration session.
  • @@ -3491,10 +3860,11 @@ href="https://rclone.org/commands/rclone_config_encryption_set/">rclone config encryption set - Set or change the config file encryption password
+

rclone config encryption check

Check that the config file is encrypted

-

Synopsis

+

Synopsis

This checks the config file is encrypted and that you can decrypt it.

It will attempt to decrypt the config using the password you @@ -3505,21 +3875,24 @@ password.

If the config file is not encrypted it will return a non-zero exit code.

rclone config encryption check [flags]
-

Options

+

Options

  -h, --help   help for check

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config encryption remove

Remove the config file encryption password

-

Synopsis

+

Synopsis

Remove the config file encryption password

This removes the config file encryption, returning it to un-encrypted.

@@ -3528,20 +3901,23 @@ supply the old config password.

If the config was not encrypted then no error will be returned and this command will do nothing.

rclone config encryption remove [flags]
-

Options

+

Options

  -h, --help   help for remove

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config encryption set

Set or change the config file encryption password

-

Synopsis

+

Synopsis

This command sets or changes the config file encryption password.

If there was no config password set then it sets a new one, otherwise it changes the existing config password.

@@ -3558,97 +3934,116 @@ environment variable to distinguish which password you must supply.

this command which may be easier if you don't mind the unencrypted config file being on the disk briefly.

rclone config encryption set [flags]
-

Options

+

Options

  -h, --help   help for set

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config file

Show path of configuration file in use.

rclone config file [flags]
-

Options

+

Options

  -h, --help   help for file

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config password

Update password in an existing remote.

-

Synopsis

+

Synopsis

Update an existing remote's password. The password should be passed in pairs of key password or as key=password. The password should be passed in the clear (unobscured).

For example, to set password of a remote of name myremote you would do:

-
rclone config password myremote fieldname mypassword
-rclone config password myremote fieldname=mypassword
+
rclone config password myremote fieldname mypassword
+rclone config password myremote fieldname=mypassword

This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.

rclone config password name [key value]+ [flags]
-

Options

+

Options

  -h, --help   help for password

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config paths

Show paths used for configuration, cache, temp etc.

rclone config paths [flags]
-

Options

+

Options

  -h, --help   help for paths

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config providers

List in JSON format all the providers and options.

rclone config providers [flags]
-

Options

+

Options

  -h, --help   help for providers

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config reconnect

Re-authenticates user with remote.

-

Synopsis

+

Synopsis

This reconnects remote: passed in to the cloud storage system.

To disconnect the remote use "rclone config disconnect".

This normally means going through the interactive oauth flow again.

rclone config reconnect remote: [flags]
-

Options

+

Options

  -h, --help   help for reconnect

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config redacted

Print redacted (decrypted) config file, or the redacted config for a single remote.

-

Synopsis

+

Synopsis

This prints a redacted copy of the config file, either the whole config file or for a given remote.

The config file will be redacted by replacing all passwords and other @@ -3658,52 +4053,93 @@ support.

It should be double checked before posting as the redaction may not be perfect.

rclone config redacted [<remote>] [flags]
-

Options

+

Options

  -h, --help   help for redacted

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config show

Print (decrypted) config file, or the config for a single remote.

rclone config show [<remote>] [flags]
-

Options

+

Options

  -h, --help   help for show

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + + +

rclone config string

+

Print connection string for a single remote.

+

Synopsis

+

Print a connection string for a single remote.

+

The connection +strings can be used wherever a remote is needed and can be more +convenient than using the config file, especially if using the RC +API.

+

Backend parameters may also be provided to the command.

+

Example:

+
$ rclone config string s3:rclone --s3-no-check-bucket
+:s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone
+

NB the strings are not quoted for use in shells (e.g. bash, powershell, Windows cmd). Most will work if enclosed in "double quotes"; however, connection strings that contain double quotes will require further quoting, which is very shell dependent.
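For example, the string printed above can be passed back to rclone in place of a configured remote (quoted for bash):

rclone lsd ":s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone"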

+
rclone config string <remote> [flags]
+

Options

+
  -h, --help   help for string
+

See the global flags page for +global options not listed here.

+

See Also

+ + + +

rclone config touch

Ensure configuration file exists.

rclone config touch [flags]
-

Options

+

Options

  -h, --help   help for touch

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config update

Update options in an existing remote.

-

Synopsis

+

Synopsis

Update an existing remote's options. The options should be passed in pairs of key value or as key=value.

For example, to update the env_auth field of a remote of name myremote you would do:

-
rclone config update myremote env_auth true
-rclone config update myremote env_auth=true
+
rclone config update myremote env_auth true
+rclone config update myremote env_auth=true

If the remote uses OAuth the token will be updated; if you don't require this, add an extra parameter thus:

-
rclone config update myremote env_auth=true config_refresh_token=false
+
rclone config update myremote env_auth=true config_refresh_token=false

Note that if the config process would normally ask a question the default is taken (unless --non-interactive is used). Each time that happens rclone will print or DEBUG a message saying how to @@ -3726,29 +4162,30 @@ text based configuration questions. If this flag is set, and rclone needs to ask the user a question, a JSON blob will be returned with the question in it.

This will look something like (some irrelevant detail removed):

-
{
-    "State": "*oauth-islocal,teamdrive,,",
-    "Option": {
-        "Name": "config_is_local",
-        "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
-        "Default": true,
-        "Examples": [
-            {
-                "Value": "true",
-                "Help": "Yes"
-            },
-            {
-                "Value": "false",
-                "Help": "No"
-            }
-        ],
-        "Required": false,
-        "IsPassword": false,
-        "Type": "bool",
-        "Exclusive": true,
-    },
-    "Error": "",
-}
+
{
+  "State": "*oauth-islocal,teamdrive,,",
+  "Option": {
+    "Name": "config_is_local",
+    "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n",
+    "Default": true,
+    "Examples": [
+      {
+        "Value": "true",
+        "Help": "Yes"
+      },
+      {
+        "Value": "false",
+        "Help": "No"
+      }
+    ],
+    "Required": false,
+    "IsPassword": false,
+    "Type": "bool",
+    "Exclusive": true,
+  },
+  "Error": "",
+}

The format of Option is the same as returned by rclone config providers. The question should be asked to the user and returned to rclone as the --result option @@ -3774,7 +4211,8 @@ edited as such

If Error is set then it should be shown to the user at the same time as the question.

-
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"
+
rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true"

Note that when using --continue all passwords should be passed in the clear (not obscured). Any default config values should be passed in with each invocation of --continue.

@@ -3786,7 +4224,7 @@ as defaults for questions as usual.

Note that bin/config.py in the rclone source implements this protocol as a readable demonstration.

rclone config update name [key value]+ [flags]
-

Options

+

Options

      --all               Ask the full set of config questions
       --continue          Continue the configuration process with an answer
   -h, --help              help for update
@@ -3798,30 +4236,36 @@ this protocol as a readable demonstration.

--state string State - use with --continue

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone config userinfo

Prints info about logged in user of remote.

-

Synopsis

+

Synopsis

This prints the details of the person logged in to the cloud storage system.

rclone config userinfo remote: [flags]
-

Options

+

Options

  -h, --help   help for userinfo
       --json   Format output as JSON

See the global flags page for global options not listed here.

-

See Also

+

See Also

+ + +

rclone convmv

Convert file and directory names in place.

-

Synopsis

+

Synopsis

convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.

@@ -3859,7 +4303,7 @@ extension.
Removes XXXX if it appears at the end of the file name.

---name-transform regex=/pattern/replacement/
+--name-transform regex=pattern/replacement
Applies a regex-based transformation.

@@ -3875,211 +4319,236 @@ extension.
Truncates the file name to a maximum of N characters.

+--name-transform truncate_keep_extension=N
+Truncates the file name to a maximum of N characters while preserving the original file extension.

+--name-transform truncate_bytes=N
+Truncates the file name to a maximum of N bytes (not characters).

+--name-transform truncate_bytes_keep_extension=N
+Truncates the file name to a maximum of N bytes (not characters) while preserving the original file extension.

--name-transform base64encode
Encodes the file name in Base64.

--name-transform base64decode
Decodes a Base64-encoded file name.

--name-transform encoder=ENCODING
Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh).

--name-transform decoder=ENCODING
Decodes the file name from the specified encoding.

--name-transform charmap=MAP
Applies a character mapping transformation.

--name-transform lowercase
Converts the file name to lowercase.

--name-transform uppercase
Converts the file name to UPPERCASE.

--name-transform titlecase
Converts the file name to Title Case.

--name-transform ascii
Strips non-ASCII characters.

--name-transform url
URL-encodes the file name.

--name-transform nfc
Converts the file name to NFC Unicode normalization form.

--name-transform nfd
Converts the file name to NFD Unicode normalization form.

--name-transform nfkc
Converts the file name to NFKC Unicode normalization form.

--name-transform nfkd
Converts the file name to NFKD Unicode normalization form.

--name-transform command=/path/to/my/program
-Executes an external program to transform file names.
+Executes an external program to transform.

Conversion modes:

-
none  
-nfc  
-nfd  
-nfkc  
-nfkd  
-replace  
-prefix  
-suffix  
-suffix_keep_extension  
-trimprefix  
-trimsuffix  
-index  
-date  
-truncate  
-base64encode  
-base64decode  
-encoder  
-decoder  
-ISO-8859-1  
-Windows-1252  
-Macintosh  
-charmap  
-lowercase  
-uppercase  
-titlecase  
-ascii  
-url  
-regex  
-command  
+
none
+nfc
+nfd
+nfkc
+nfkd
+replace
+prefix
+suffix
+suffix_keep_extension
+trimprefix
+trimsuffix
+index
+date
+truncate
+truncate_keep_extension
+truncate_bytes
+truncate_bytes_keep_extension
+base64encode
+base64decode
+encoder
+decoder
+ISO-8859-1
+Windows-1252
+Macintosh
+charmap
+lowercase
+uppercase
+titlecase
+ascii
+url
+regex
+command

Char maps:

-
  
-IBM-Code-Page-037  
-IBM-Code-Page-437  
-IBM-Code-Page-850  
-IBM-Code-Page-852  
-IBM-Code-Page-855  
-Windows-Code-Page-858  
-IBM-Code-Page-860  
-IBM-Code-Page-862  
-IBM-Code-Page-863  
-IBM-Code-Page-865  
-IBM-Code-Page-866  
-IBM-Code-Page-1047  
-IBM-Code-Page-1140  
-ISO-8859-1  
-ISO-8859-2  
-ISO-8859-3  
-ISO-8859-4  
-ISO-8859-5  
-ISO-8859-6  
-ISO-8859-7  
-ISO-8859-8  
-ISO-8859-9  
-ISO-8859-10  
-ISO-8859-13  
-ISO-8859-14  
-ISO-8859-15  
-ISO-8859-16  
-KOI8-R  
-KOI8-U  
-Macintosh  
-Macintosh-Cyrillic  
-Windows-874  
-Windows-1250  
-Windows-1251  
-Windows-1252  
-Windows-1253  
-Windows-1254  
-Windows-1255  
-Windows-1256  
-Windows-1257  
-Windows-1258  
-X-User-Defined  
+
IBM-Code-Page-037
+IBM-Code-Page-437
+IBM-Code-Page-850
+IBM-Code-Page-852
+IBM-Code-Page-855
+Windows-Code-Page-858
+IBM-Code-Page-860
+IBM-Code-Page-862
+IBM-Code-Page-863
+IBM-Code-Page-865
+IBM-Code-Page-866
+IBM-Code-Page-1047
+IBM-Code-Page-1140
+ISO-8859-1
+ISO-8859-2
+ISO-8859-3
+ISO-8859-4
+ISO-8859-5
+ISO-8859-6
+ISO-8859-7
+ISO-8859-8
+ISO-8859-9
+ISO-8859-10
+ISO-8859-13
+ISO-8859-14
+ISO-8859-15
+ISO-8859-16
+KOI8-R
+KOI8-U
+Macintosh
+Macintosh-Cyrillic
+Windows-874
+Windows-1250
+Windows-1251
+Windows-1252
+Windows-1253
+Windows-1254
+Windows-1255
+Windows-1256
+Windows-1257
+Windows-1258
+X-User-Defined

Encoding masks:

-
Asterisk  
- BackQuote  
- BackSlash  
- Colon  
- CrLf  
- Ctl  
- Del  
- Dollar  
- Dot  
- DoubleQuote  
- Exclamation  
- Hash  
- InvalidUtf8  
- LeftCrLfHtVt  
- LeftPeriod  
- LeftSpace  
- LeftTilde  
- LtGt  
- None  
- Percent  
- Pipe  
- Question  
- Raw  
- RightCrLfHtVt  
- RightPeriod  
- RightSpace  
- Semicolon  
- SingleQuote  
- Slash  
- SquareBracket  
+
Asterisk
+BackQuote
+BackSlash
+Colon
+CrLf
+Ctl
+Del
+Dollar
+Dot
+DoubleQuote
+Exclamation
+Hash
+InvalidUtf8
+LeftCrLfHtVt
+LeftPeriod
+LeftSpace
+LeftTilde
+LtGt
+None
+Percent
+Pipe
+Question
+Raw
+RightCrLfHtVt
+RightPeriod
+RightSpace
+Semicolon
+SingleQuote
+Slash
+SquareBracket

Examples:

-
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
+
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
 // Output: STORIES/THE QUICK BROWN FOX!.TXT
-
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
+
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
 // Output: stories/The Slow Brown Turtle!.txt
-
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
+
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
 // Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
-
rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
+
rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
 // Output: stories/The Quick Brown Fox!.txt
-
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
+
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
 // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
-
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
+
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
 // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
-
rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
+
rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
 // Output: stories/The Quick Brown  Fox!.txt
-
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
+
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
 // Output: stories/The Quick Brown Fox!
-
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
+
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
 // Output: OLD_stories/OLD_The Quick Brown Fox!.txt
-
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
+
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
 // Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
-
rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
+
rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
 // Output: stories/The Quick Brown Fox: A Memoir [draft].txt
-
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
+
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
 // Output: stories/The Quick Brown 🦊 Fox
-
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
+
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
 // Output: stories/The Quick Brown Fox!.txt
-
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20250618
-
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
-
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
+
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
+// Output: stories/The Quick Brown Fox!-20251121
+
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
+// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
+
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
 // Output: ababababababab/ababab ababababab ababababab ababab!abababab
+

The regex command generally accepts Perl-style regular expressions; the exact syntax is defined in the Go regular expression reference. The replacement string may contain capturing group variables, referencing capturing groups using the syntax $name or ${name}, where the name can refer to a named capturing group or simply be the index as a number. To insert a literal $, use $$.
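For instance, a named capturing group can be reused in the replacement (an illustrative sketch; the single quotes stop the shell expanding ${year}):

rclone convmv "report2025.txt" --name-transform 'regex=(?P<year>\d+)/year_${year}'
// Hypothetical output: reportyear_2025.txt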

Multiple transformations can be used in sequence, applied in the order they are specified on the command line.

The --name-transform flag is also available in sync, copy, and move.

-

Files vs Directories

+

Files vs Directories

By default --name-transform will only apply to file names. This means only the leaf file name will be transformed. However some of the transforms would be better applied to the whole path or just @@ -4119,7 +4588,7 @@ the path example --name-transform all,nfc.

Note that --name-transform may not add path separators / to the name. This will cause an error.

-

Ordering and Conflicts

+

Ordering and Conflicts

  • Transformations will be applied in the order specified by the user.
      @@ -4146,27 +4615,35 @@ unexpected results and should verify transformations using --dry-run before execution.
-

Race Conditions -and Non-Deterministic Behavior

+

Race Conditions +and Non-Deterministic Behavior

Some transformations, such as replace=old:new, may introduce conflicts where multiple source files map to the same destination name. This can lead to race conditions when performing -concurrent transfers. It is up to the user to anticipate these. * If two -files from the source are transformed into the same name at the -destination, the final state may be non-deterministic. * Running rclone -check after a sync using such transformations may erroneously report -missing or differing files due to overwritten results.

-

To minimize risks, users should: * Carefully review transformations -that may introduce conflicts. * Use --dry-run to inspect -changes before executing a sync (but keep in mind that it won't show the -effect of non-deterministic transformations). * Avoid transformations -that cause multiple distinct source files to map to the same destination -name. * Consider disabling concurrency with --transfers=1 -if necessary. * Certain transformations (e.g. prefix) will -have a multiplying effect every time they are used. Avoid these when -using bisync.

+concurrent transfers. It is up to the user to anticipate these.

+
  • If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
  • Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
+

To minimize risks, users should (see the example after this list):

+
  • Carefully review transformations that may introduce conflicts.
  • Use --dry-run to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
  • Avoid transformations that cause multiple distinct source files to map to the same destination name.
  • Consider disabling concurrency with --transfers=1 if necessary.
  • Certain transformations (e.g. prefix) will have a multiplying effect every time they are used. Avoid these when using bisync.
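For example, a conservative invocation combining these precautions (flags as documented above; the transform itself is illustrative):

rclone convmv remote:path --name-transform "all,replace=old:new" --dry-run --transfers=1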
rclone convmv dest:path --name-transform XXX [flags]
-

Options

+

Options

      --create-empty-src-dirs   Create empty source dirs on destination after move
       --delete-empty-src-dirs   Delete empty source dirs after move
   -h, --help                    help for convmv
@@ -4175,7 +4652,7 @@ href="https://rclone.org/flags/">global flags page for global options not listed here.

Copy Options

Flags for anything which can copy a file

-
      --check-first                                 Do all the checks before starting transfers
+
      --check-first                                 Do all the checks before starting transfers
   -c, --checksum                                    Check for changes with size & checksum (if available, or fallback to size only)
       --compare-dest stringArray                    Include additional server-side paths during comparison
       --copy-dest stringArray                       Implies --compare-dest but also copies files from paths into destination
@@ -4211,12 +4688,12 @@ options not listed here.

-u, --update Skip files that are newer on the destination

Important Options

Important flags useful for most commands

-
  -n, --dry-run         Do a trial run with no permanent changes
+
  -n, --dry-run         Do a trial run with no permanent changes
   -i, --interactive     Enable interactive mode
   -v, --verbose count   Print lots more stuff (repeat for more)

Filter Options

Flags for filtering directory listings

-
      --delete-excluded                     Delete files on dest excluded from sync
+
      --delete-excluded                     Delete files on dest excluded from sync
       --exclude stringArray                 Exclude files matching pattern
       --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
       --exclude-if-present stringArray      Exclude directories if filename is present
@@ -4241,27 +4718,31 @@ options not listed here.

--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

Listing Options

Flags for listing directories

-
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
       --fast-list           Use recursive list if available; uses more memory but fewer transactions
-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone copyto

Copy files from source to dest, skipping identical files.

-

Synopsis

+

Synopsis

If source:path is a file or directory then it copies it to a file or directory named dest:path.

This can be used to upload single files under a name other than their current one. If the source is a directory then it acts exactly like the copy command.

So

-
rclone copyto src dst
-

where src and dst are rclone paths, either remote:path or -/path/to/local or C:.

+
rclone copyto src dst
+

where src and dst are rclone paths, either remote:path +or /path/to/local or +C:\windows\path\if\on\windows.

This will:

-
if src is file
+
if src is file
     copy it to dst, overwriting an existing file if it exists
 if src is directory
     copy it to dst, overwriting existing files if they exist
@@ -4270,11 +4751,11 @@ if src is directory
 testing by size and modification time or MD5SUM. It doesn't delete files
 from the destination.
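For example, to upload a single local file under a new name (paths illustrative):

rclone copyto /home/user/notes.txt remote:backup/notes-renamed.txt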

If you are looking to copy just a byte range of a file, please -see 'rclone cat --offset X --count Y'

+see rclone cat --offset X --count Y.

Note: Use the -P/--progress flag to view real-time transfer -statistics

-

Logger Flags

+statistics.

+

Logger Flags

The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name @@ -4322,9 +4803,9 @@ scenarios are not currently supported:

Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.)

+file (which may or may not match what actually DID).
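For example, to record the files that were missing from the destination (and were therefore copied) in a file (the file name is illustrative):

rclone copyto source:path dest:path --missing-on-dst missing.txt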

rclone copyto source:path dest:path [flags]
-

Options

+

Options

      --absolute                Put a leading / in front of path names
       --combined string         Make a combined report of changes to this file
       --csv                     Output in CSV format
@@ -4341,13 +4822,13 @@ file (which may or may not match what actually DID.)

--missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)
+ -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05)

Options shared with other commands are described next. See the global flags page for global options not listed here.

Copy Options

Flags for anything which can copy a file

-
      --check-first                                 Do all the checks before starting transfers
+
      --check-first                                 Do all the checks before starting transfers
   -c, --checksum                                    Check for changes with size & checksum (if available, or fallback to size only)
       --compare-dest stringArray                    Include additional server-side paths during comparison
       --copy-dest stringArray                       Implies --compare-dest but also copies files from paths into destination
@@ -4383,12 +4864,12 @@ options not listed here.

-u, --update Skip files that are newer on the destination

Important Options

Important flags useful for most commands

-
  -n, --dry-run         Do a trial run with no permanent changes
+
  -n, --dry-run         Do a trial run with no permanent changes
   -i, --interactive     Enable interactive mode
   -v, --verbose count   Print lots more stuff (repeat for more)

Filter Options

Flags for filtering directory listings

-
      --delete-excluded                     Delete files on dest excluded from sync
+
      --delete-excluded                     Delete files on dest excluded from sync
       --exclude stringArray                 Exclude files matching pattern
       --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
       --exclude-if-present stringArray      Exclude directories if filename is present
@@ -4413,16 +4894,19 @@ options not listed here.

--min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

Listing Options

Flags for listing directories

-
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
+
      --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
       --fast-list           Use recursive list if available; uses more memory but fewer transactions
-

See Also

+

See Also

+ +
  • rclone - Show help for rclone commands, flags and backends.
+

rclone copyurl

Copy the contents of the URL supplied content to dest:path.

-

Synopsis

+

Synopsis

Download a URL's content and copy it to the destination without saving it in temporary storage.

Setting --auto-filename will attempt to automatically @@ -4437,6 +4921,17 @@ the destination if there is one with the same name.

Setting --stdout or making the output file name - will cause the output to be written to standard output.

+

Setting --urls allows you to input a CSV file of URLs in +format: URL, FILENAME. If --urls is in use then replace the +URL in the arguments with the file containing the URLs, e.g.:

+
rclone copyurl --urls myurls.csv remote:dir
+

Missing filenames will be autogenerated equivalent to using +--auto-filename. Note that --stdout and +--print-filename are incompatible with --urls. +This will do --transfers copies in parallel. Note that if +--auto-filename is desired for all URLs then a file with +only URLs and no filename can be used.

Troubleshooting

If you can't get rclone copyurl to work then here are some things you can try:

@@ -4451,31 +4946,35 @@ curl's user-agent - try that
  • Make sure the site works with curl directly
  • rclone copyurl https://example.com dest:path [flags]
    -

    Options

    +

    Options

      -a, --auto-filename     Get the file name from the URL and use it for destination file path
           --header-filename   Get the file name from the Content-Disposition header
       -h, --help              help for copyurl
           --no-clobber        Prevent overwriting file with same name
       -p, --print-filename    Print the resulting name from --auto-filename
    -      --stdout            Write the output to stdout rather than a file
    + --stdout Write the output to stdout rather than a file + --urls Use a CSV file of links to process multiple URLs

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    Important Options

    Important flags useful for most commands

    -
      -n, --dry-run         Do a trial run with no permanent changes
    +
      -n, --dry-run         Do a trial run with no permanent changes
       -i, --interactive     Enable interactive mode
       -v, --verbose count   Print lots more stuff (repeat for more)
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone cryptcheck

    Cryptcheck checks the integrity of an encrypted remote.

    -

    Synopsis

    -

    Checks a remote against a crypted remote. This is the +

    Synopsis

    +

    Checks a remote against an encrypted remote. This is the equivalent of running rclone check, but able to check the checksums of the encrypted remote.

    @@ -4486,11 +4985,12 @@ and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.

    Use it like this

    -
    rclone cryptcheck /path/to/files encryptedremote:path
    +
    rclone cryptcheck /path/to/files encryptedremote:path

    You can use it like this also, but that will involve downloading all -the files in remote:path.

    -
    rclone cryptcheck remote:path encryptedremote:path
    -

    After it has run it will log the status of the encryptedremote:.

    +the files in remote:path.

    +
    rclone cryptcheck remote:path encryptedremote:path
    +

    After it has run it will log the status of the +encryptedremote:.

    If you supply the --one-way flag, it will only check that files in the source match the files in the destination, not the other way around. This means that extra files in the destination that @@ -4522,7 +5022,7 @@ source or dest. href="https://rclone.org/docs/#checkers-int">--checkers option for more information.

    rclone cryptcheck remote:path cryptedremote:path [flags]
    -

    Options

    +

    Options

          --combined string         Make a combined report of changes to this file
           --differ string           Report all non-matching files to this file
           --error string            Report all files with errors (hashing or reading) to this file
    @@ -4536,10 +5036,10 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Check Options

    Flags used for check commands

    -
          --max-backlog int   Maximum number of objects in sync or check backlog (default 10000)
    +
          --max-backlog int   Maximum number of objects in sync or check backlog (default 10000)

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -4564,80 +5064,91 @@ options not listed here.

    --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    Listing Options

    Flags for listing directories

    -
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
           --fast-list           Use recursive list if available; uses more memory but fewer transactions
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone cryptdecode

    Cryptdecode returns unencrypted file names.

    -

    Synopsis

    +

    Synopsis

    Returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.

    If you supply the --reverse flag, it will return encrypted file names.

    use it like this

    -
    rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
    -
    +
    rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
     rclone cryptdecode --reverse encryptedremote: filename1 filename2

    Another way to accomplish this is by using the rclone backend encode (or decode) command. See the documentation on the crypt overlay for more info.

    rclone cryptdecode encryptedremote: encryptedfilename [flags]
    -

    Options

    +

    Options

      -h, --help      help for cryptdecode
           --reverse   Reverse cryptdecode, encrypts filenames

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone deletefile

    Remove a single file from remote.

    -

    Synopsis

    +

    Synopsis

    Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.

    rclone deletefile remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for deletefile

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    Important Options

    Important flags useful for most commands

    -
      -n, --dry-run         Do a trial run with no permanent changes
    +
      -n, --dry-run         Do a trial run with no permanent changes
       -i, --interactive     Enable interactive mode
       -v, --verbose count   Print lots more stuff (repeat for more)
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone gendocs

    Output markdown docs for rclone to the directory supplied.

    -

    Synopsis

    +

    Synopsis

    This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.

    rclone gendocs output_directory [flags]
    -

    Options

    +

    Options

      -h, --help   help for gendocs

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone gitannex

    Speaks with git-annex over stdin/stdout.

    -

    Synopsis

    +

    Synopsis

    Rclone's gitannex subcommand enables git-annex to store and retrieve content from an rclone remote. It is meant to be run by @@ -4649,22 +5160,21 @@ href="https://git-annex.branchable.com/news/version_10.20240430/">10.20240430 -

    # Create the helper symlink in "$HOME/bin".
    -ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
    -
    -# Verify the new symlink is on your PATH.
    -which git-annex-remote-rclone-builtin
    +

    Create the helper symlink in "$HOME/bin":

    +
    ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
    +
    +Verify the new symlink is on your PATH:
    +
    +```console
    +which git-annex-remote-rclone-builtin
  • Add a new remote to your git-annex repo. This new remote will connect git-annex with the rclone gitannex subcommand.

    Start by asking git-annex to describe the remote's available configuration parameters.

    -
    # If you skipped step 1:
    -git annex initremote MyRemote type=rclone --whatelse
    -
    -# If you created a symlink in step 1:
    -git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
    +

    If you skipped step 1:

    +
    git annex initremote MyRemote type=rclone --whatelse
    +

    If you created a symlink in step 1:

    +
    git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse

    NOTE: If you're porting an existing git-annex-remote-rclone @@ -4676,34 +5186,35 @@ synonyms with --whatelse as shown above.

    that will use the rclone remote named "SomeRcloneRemote". That rclone remote must be one configured in your rclone.conf file, which can be located with rclone config file.

    -
    git annex initremote MyRemote         \
    -    type=external                     \
    -    externaltype=rclone-builtin       \
    -    encryption=none                   \
    -    rcloneremotename=SomeRcloneRemote \
    -    rcloneprefix=git-annex-content    \
    -    rclonelayout=nodir
  • +
    git annex initremote MyRemote         \
    +    type=external                     \
    +    externaltype=rclone-builtin       \
    +    encryption=none                   \
    +    rcloneremotename=SomeRcloneRemote \
    +    rcloneprefix=git-annex-content    \
    +    rclonelayout=nodir
  • Before you trust this command with your precious data, be sure to test the remote. This command is very new and has not been tested on many rclone backends. Caveat emptor!

    -
    git annex testremote MyRemote
  • +
    git annex testremote MyRemote

    Happy annexing!

    rclone gitannex [flags]
    -

    Options

    +

    Options

      -h, --help   help for gitannex

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone hashsum

    Produces a hashsum file for all the objects in the path.

    -

    Synopsis

    +

    Synopsis

    Produces a hash file for all the objects in the path using the hash named. The output is in the same format as the standard md5sum/sha1sum tool.

    @@ -4719,23 +5230,23 @@ by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path).

    Run without a hash to see the list of all supported hashes, e.g.

    -
    $ rclone hashsum
    +
    $ rclone hashsum
     Supported hashes are:
    -  * md5
    -  * sha1
    -  * whirlpool
    -  * crc32
    -  * sha256
    -  * sha512
    -  * blake3
    -  * xxh3
    -  * xxh128
    +- md5 +- sha1 +- whirlpool +- crc32 +- sha256 +- sha512 +- blake3 +- xxh3 +- xxh128

    Then

    -
    $ rclone hashsum MD5 remote:path
    +
    rclone hashsum MD5 remote:path

    Note that hash names are case insensitive and values are output in lower case.

    rclone hashsum [<hash> remote:path] [flags]
    -

    Options

    +

    Options

          --base64               Output base64 encoded hashsum
       -C, --checkfile string     Validate hashes against a given SUM file instead of printing them
           --download             Download the file and hash it locally; if this flag is not specified, the hash is requested from the remote
    @@ -4746,7 +5257,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -4771,19 +5282,22 @@ options not listed here.

    --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    Listing Options

    Flags for listing directories

    -
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
           --fast-list           Use recursive list if available; uses more memory but fewer transactions
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone link

    Generate public link to file/folder.

    -

    Synopsis

    +

    Synopsis

    Create, retrieve or remove a public link to the given file or folder.

    -
    rclone link remote:path/to/file
    +
    rclone link remote:path/to/file
     rclone link remote:path/to/folder/
     rclone link --unlink remote:path/to/folder/
     rclone link --expire 1d remote:path/to/file
    @@ -4796,24 +5310,27 @@ folder. Note not all backends support "--unlink" flag - those that don't will just ignore it.

    If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by -default be created with the least constraints – e.g. no expiry, no +default be created with the least constraints - e.g. no expiry, no password protection, accessible without account.

    rclone link remote:path [flags]
    -

    Options

    +

    Options

          --expire Duration   The amount of time that the link will be valid (default off)
       -h, --help              help for link
           --unlink            Remove existing public link to file/folder

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone listremotes

    List all the remotes in the config file and defined in environment variables.

    -

    Synopsis

    +

    Synopsis

    Lists all the available remotes from the config file, or the remotes matching an optional filter.

    Prints the result in human-readable format by default, and as a @@ -4827,7 +5344,7 @@ attributes, and/or filter flags specific for each attribute. The values must be specified according to regular rclone filtering pattern syntax.

    rclone listremotes [<filter>] [flags]
    -

    Options

    +

    Options

          --description string   Filter remotes by description
       -h, --help                 help for listremotes
           --json                 Format output as JSON
    @@ -4838,21 +5355,24 @@ syntax.

    --type string Filter remotes by type

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone lsf

    List directories and objects in remote:path formatted for parsing.

    -

    Synopsis

    +

    Synopsis

    List the contents of the source path (directories and objects) to standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix.

    -

    Eg

    -
    $ rclone lsf swift:bucket
    +

    E.g.

    +
    $ rclone lsf swift:bucket
     bevajer5jef
     canole
     diwogej7
    @@ -4861,7 +5381,7 @@ fubuwic

    Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:

    -
    p - path
    +
    p - path
     s - size
     t - modification time
     h - hash
    @@ -4874,8 +5394,8 @@ M - Metadata of object in JSON blob format, eg {"key":"value"
     

    So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.

    -

    Eg

    -
    $ rclone lsf  --format "tsp" swift:bucket
    +

    E.g.

    +
    $ rclone lsf  --format "tsp" swift:bucket
     2016-06-25 18:55:41;60295;bevajer5jef
     2016-06-25 18:55:43;90613;canole
     2016-06-25 18:55:43;94467;diwogej7
    @@ -4888,9 +5408,9 @@ on the object (and for directories), "ERROR" if there was an error
     reading it from the object and "UNSUPPORTED" if that object does not
     support that hash type.

    For example, to emulate the md5sum command you can use

    -
    rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .
    -

    Eg

    -
    $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket
    +
    rclone lsf -R --hash MD5 --format hp --separator "  " --files-only .
    +

    E.g.

    +
    $ rclone lsf -R --hash MD5 --format hp --separator "  " --files-only swift:bucket
     7908e352297f0f530b84a756f188baa3  bevajer5jef
     cd65ac234e6fea5925974a51cdd865cc  canole
     03b5341b4f234b9d984d03ad076bae91  diwogej7
    @@ -4900,17 +5420,17 @@ cd65ac234e6fea5925974a51cdd865cc  canole
     

    By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.

    -

    Eg

    -
    $ rclone lsf  --separator "," --format "tshp" swift:bucket
    +

    E.g.

    +
    $ rclone lsf  --separator "," --format "tshp" swift:bucket
     2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
     2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
     2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7
     2018-04-26 08:52:53,0,,ferejej3gux/
     2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic

    You can output in CSV standard format. This will escape things in " -if they contain ,

    -

    Eg

    -
    $ rclone lsf --csv --files-only --format ps remote:path
    +if they contain,

    +

    E.g.

    +
    $ rclone lsf --csv --files-only --format ps remote:path
     test.log,22355
     test.sh,449
     "this file contains a comma, in the file name.txt",6
    @@ -4919,19 +5439,21 @@ lists of files to pass to an rclone copy with the --files-from-raw flag.

    For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure):

    -
    rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
    +
    rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
     rclone copy --files-from-raw new_files /path/to/local remote:path

    The default time format is '2006-01-02 15:04:05'. Other formats can be specified with the --time-format flag. Examples:

    -
    rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
    +
    rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
     rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
     rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'
     rclone lsf remote:path --format pt --time-format RFC3339
     rclone lsf remote:path --format pt --time-format DateOnly
    -rclone lsf remote:path --format pt --time-format max
    +rclone lsf remote:path --format pt --time-format max +rclone lsf remote:path --format pt --time-format unix +rclone lsf remote:path --format pt --time-format unixnano

    --time-format max will automatically truncate -'2006-01-02 15:04:05.000000000' to the maximum precision +2006-01-02 15:04:05.000000000 to the maximum precision supported by the remote.

    Any of the filtering options can be applied to this command.

    There are several related list commands

    @@ -4958,7 +5480,7 @@ default - use -R to make them recurse.

    remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

    rclone lsf remote:path [flags]
    -

    Options

    +

    Options

          --absolute             Put a leading / in front of path names
           --csv                  Output in CSV format
       -d, --dir-slash            Append a slash to directory names (default true)
    @@ -4969,13 +5491,13 @@ bucket-based remotes).

    -h, --help help for lsf -R, --recursive Recurse into the listing -s, --separator string Separator for the items in the format (default ";") - -t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)
    + -t, --time-format string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05)

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -5000,37 +5522,41 @@ options not listed here.

    --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    Listing Options

    Flags for listing directories

    -
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
           --fast-list           Use recursive list if available; uses more memory but fewer transactions
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone lsjson

    List directories and objects in the path in JSON format.

    -

    Synopsis

    +

    Synopsis

    List directories and objects in the path in JSON format.

    The output is an array of Items, where each Item looks like this:

    -
    {
    -  "Hashes" : {
    -     "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
    -     "MD5" : "b1946ac92492d2347c6235b4d2611184",
    -     "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
    -  },
    -  "ID": "y2djkhiujf83u33",
    -  "OrigID": "UYOJVTUW00Q1RzTDA",
    -  "IsBucket" : false,
    -  "IsDir" : false,
    -  "MimeType" : "application/octet-stream",
    -  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
    -  "Name" : "file.txt",
    -  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
    -  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
    -  "Path" : "full/path/goes/here/file.txt",
    -  "Size" : 6,
    -  "Tier" : "hot",
    -}
    +
    {
    +  "Hashes" : {
    +    "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f",
    +    "MD5" : "b1946ac92492d2347c6235b4d2611184",
    +    "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc"
    +  },
    +  "ID": "y2djkhiujf83u33",
    +  "OrigID": "UYOJVTUW00Q1RzTDA",
    +  "IsBucket" : false,
    +  "IsDir" : false,
    +  "MimeType" : "application/octet-stream",
    +  "ModTime" : "2017-05-31T16:15:57.034468261+01:00",
    +  "Name" : "file.txt",
    +  "Encrypted" : "v0qpsdq8anpci8n929v3uu9338",
    +  "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338",
    +  "Path" : "full/path/goes/here/file.txt",
    +  "Size" : 6,
    +  "Tier" : "hot",
    +}

    The exact set of properties included depends on the backend:

    • The property IsBucket will only be included for bucket-based @@ -5115,7 +5641,7 @@ default - use -R to make them recurse.

      remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes).

      rclone lsjson remote:path [flags]
      -

      Options

      +

      Options

            --dirs-only               Show only directories in the listing
             --encrypted               Show the encrypted names
             --files-only              Show only files in the listing
      @@ -5133,7 +5659,7 @@ href="https://rclone.org/flags/">global flags page for global
       options not listed here.

      Filter Options

      Flags for filtering directory listings

      -
            --delete-excluded                     Delete files on dest excluded from sync
      +
            --delete-excluded                     Delete files on dest excluded from sync
             --exclude stringArray                 Exclude files matching pattern
             --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
             --exclude-if-present stringArray      Exclude directories if filename is present
      @@ -5158,16 +5684,19 @@ options not listed here.

      --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

      Listing Options

      Flags for listing directories

      -
            --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
      +
            --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
             --fast-list           Use recursive list if available; uses more memory but fewer transactions
      -

      See Also

      +

      See Also

      + +
      • rclone - Show help for rclone commands, flags and backends.
      +

      rclone mount

      Mount the remote as file system on a mountpoint.

      -

      Synopsis

      +

      Synopsis

      Rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

      First set up your remote using rclone config. Check it @@ -5183,7 +5712,7 @@ appropriate code (killing the child process if it fails).

      On Linux/macOS/FreeBSD start the mount like this, where /path/to/local/mount is an empty existing directory:

      -
      rclone mount remote:path/to/files /path/to/local/mount
      +
      rclone mount remote:path/to/files /path/to/local/mount

      On Windows you can start a mount in different ways. See below for details. If foreground mount is used interactively from a console window, rclone will serve the @@ -5197,7 +5726,7 @@ when mounting as a network drive), and the last example will mount as network share \\cloud\remote and map it to an automatically assigned drive:

      -
      rclone mount remote:path/to/files *
      +
      rclone mount remote:path/to/files *
       rclone mount remote:path/to/files X:
       rclone mount remote:path/to/files C:\path\parent\mount
       rclone mount remote:path/to/files \\cloud\remote
      @@ -5206,7 +5735,7 @@ receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped.

      When running in background mode the user will have to stop the mount manually:

      -
      # Linux
      +
      # Linux
       fusermount -u /path/to/local/mount
       #... or on some systems
       fusermount3 -u /path/to/local/mount
      @@ -5224,8 +5753,9 @@ not support
       the about feature at all, then 1 PiB is set as both the total and the
       free size.

      Installing on Windows

      -

      To run rclone mount on Windows, you will need to download and install -WinFsp.

      +

      To run rclone mount on Windows, you will need to +download and install WinFsp.

      WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which @@ -5253,7 +5783,7 @@ subdirectory of an existing parent directory or drive. Using the special value * will tell rclone to automatically assign the next available drive letter, starting with Z: and moving backward. Examples:

      -
      rclone mount remote:path/to/files *
      +
      rclone mount remote:path/to/files *
       rclone mount remote:path/to/files X:
       rclone mount remote:path/to/files C:\path\parent\mount
       rclone mount remote:path/to/files X:
      @@ -5265,7 +5795,7 @@ path.

      directory path is not supported in this mode, it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter.

      -
      rclone mount remote:path/to/files X: --network-mode
      +
      rclone mount remote:path/to/files X: --network-mode

      A volume name specified with --volname will be used to create the network share path. A complete UNC path, such as \\cloud\remote, optionally with path @@ -5282,7 +5812,7 @@ for the mapped drive, shown in Windows Explorer etc, while the complete --volname, this will implicitly set the --network-mode option, so the following two examples have same result:

      -
      rclone mount remote:path/to/files X: --network-mode
      +
      rclone mount remote:path/to/files X: --network-mode
       rclone mount remote:path/to/files X: --volname \\server\share

      You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as @@ -5291,7 +5821,7 @@ path specified as the volume name, as if it were specified with the --volname option. This will also implicitly set the --network-mode option. This means the following two examples have same result:

      -
      rclone mount remote:path/to/files \\cloud\remote
      +
      rclone mount remote:path/to/files \\cloud\remote
       rclone mount remote:path/to/files * --volname \\cloud\remote

      There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: @@ -5415,11 +5945,11 @@ does not suffer from the same limitations.

      Mounting on macOS can be done either via built-in NFS server, macFUSE (also known -as osxfuse) or FUSE-T. macFUSE is -a traditional FUSE driver utilizing a macOS kernel extension (kext). +as osxfuse) or FUSE-T.macFUSE is a +traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server.

      -

      Unicode Normalization

      +

      Unicode Normalization

      It is highly recommended to keep the default of --no-unicode-normalization=false for all mount and serve commands on macOS. For details, see macports package manager, the following addition steps are required.

      -
      sudo mkdir /usr/local/lib
      +
      sudo mkdir /usr/local/lib
       cd /usr/local/lib
       sudo ln -s /opt/local/lib/libfuse.2.dylib

      FUSE-T Limitations, @@ -5466,6 +5996,18 @@ This may make rclone upload a full new copy of the file.

      When mounting with --read-only, attempts to write to files will fail silently as opposed to with a clear warning as in macFUSE.

      +

      Mounting on Linux

      +

      On newer versions of Ubuntu, you may encounter the following error +when running rclone mount:

      +
      +

      NOTICE: mount helper error: fusermount3: mount failed: Permission +denied CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit +status 1 This may be due to newer Apparmor restrictions, which +can be disabled with sudo aa-disable /usr/bin/fusermount3 +(you may need to sudo apt install apparmor-utils +beforehand).

      +

      Limitations

      Without the use of --vfs-cache-mode this can only write files sequentially, it can only seek when reading. This means that many @@ -5558,27 +6100,29 @@ run it as a mount helper you should symlink rclone binary to ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.

      Now you can run classic mounts like this:

      -
      mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
      +
      mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem

      or create systemd mount units:

      -
      # /etc/systemd/system/mnt-data.mount
      -[Unit]
      -Description=Mount for /mnt/data
      -[Mount]
      -Type=rclone
      -What=sftp1:subdir
      -Where=/mnt/data
      -Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
      +
      # /etc/systemd/system/mnt-data.mount
      +[Unit]
      +Description=Mount for /mnt/data
      +[Mount]
      +Type=rclone
      +What=sftp1:subdir
      +Where=/mnt/data
      +Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone

      optionally accompanied by systemd automount unit

      -
      # /etc/systemd/system/mnt-data.automount
      -[Unit]
      -Description=AutoMount for /mnt/data
      -[Automount]
      -Where=/mnt/data
      -TimeoutIdleSec=600
      -[Install]
      -WantedBy=multi-user.target
      +
      # /etc/systemd/system/mnt-data.automount
      +[Unit]
      +Description=AutoMount for /mnt/data
      +[Automount]
      +Where=/mnt/data
      +TimeoutIdleSec=600
      +[Install]
      +WantedBy=multi-user.target

      or add in /etc/fstab a line like

      -
      sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
      +
      sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0

      or use classic Automountd. Remember to provide explicit config=...,cache-dir=... as a workaround for mount units being run without HOME.

      @@ -5624,8 +6168,8 @@ about files and directories (but not the data) in memory.

      long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

      -
      --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
      ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
      +
          --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
      +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

      However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -5634,12 +6178,12 @@ picked up within the polling interval.

      You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

      -
      kill -SIGHUP $(pidof rclone)
      +
      kill -SIGHUP $(pidof rclone)

      If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

      -
      rclone rc vfs/forget
      +
      rclone rc vfs/forget

      Or individual files or directories:

      -
      rclone rc vfs/forget file=path/to/file dir=path/to/dir
      +
      rclone rc vfs/forget file=path/to/file dir=path/to/dir

      VFS File Buffering

      The --buffer-size flag determines the amount of memory, that will be used to buffer data in advance.

      @@ -5660,13 +6204,13 @@ system. It can be disabled at the cost of some compatibility.

      write simultaneously to a file. See below for more details.

      Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

      -
      --cache-dir string                     Directory rclone will use for caching.
      ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
      ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
      ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
      ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
      ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
      ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
      +
          --cache-dir string                     Directory rclone will use for caching.
      +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
      +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
      +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
      +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
      +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
      +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

      If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -5790,9 +6334,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

      These flags control the chunking:

      -
      --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
      ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
      ---vfs-read-chunk-streams int            The number of parallel streams to read at once
      +
          --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
      +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
      +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

      The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

      --vfs-read-chunk-streams @@ -5841,27 +6385,27 @@ href="#vfs-chunked-reading">chunked reading feature.

      --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

      -
      --no-checksum     Don't compare checksums on up/download.
      ---no-modtime      Don't read/write the modification time (can speed things up).
      ---no-seek         Don't allow seeking in files.
      ---read-only       Only allow read-only access.
      +
          --no-checksum     Don't compare checksums on up/download.
      +    --no-modtime      Don't read/write the modification time (can speed things up).
      +    --no-seek         Don't allow seeking in files.
      +    --read-only       Only allow read-only access.

      Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

      -
      --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
      ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
      +
          --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
      +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

      When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

      -
      --transfers int  Number of file transfers to run in parallel (default 4)
      +
          --transfers int  Number of file transfers to run in parallel (default 4)

      By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

      -
      --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
      ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
      +
          --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
      +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

      As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -5882,7 +6426,7 @@ commands yet.

      A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

      -
      .
      +
      .
       ├── dir
       │   └── file.txt
       └── linked-dir -> dir
      @@ -5952,7 +6496,7 @@ an error, similar to how this is handled in

      This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

      -
      --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
      +
          --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

      Alternate report of used bytes

      Some backends, most notably S3, do not report the amount of bytes @@ -5962,10 +6506,10 @@ used. If you need this information to be available when running of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

      -

      WARNING. Contrary to rclone size, this flag -ignores filters so that the result is accurate. However, this is very -inefficient and may cost lots of API calls resulting in extra charges. -Use it as a last resort and only with caching.

      +

      WARNING: Contrary to rclone size, this +flag ignores filters so that the result is accurate. However, this is +very inefficient and may cost lots of API calls resulting in extra +charges. Use it as a last resort and only with caching.

      VFS Metadata

      If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the --metadata flag.

      For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

      -
      $ ls -l /mnt/
      +
      $ ls -l /mnt/
       total 1048577
       -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
       
      @@ -6000,7 +6544,7 @@ total 1048578
       and if there is an error reading the metadata the error will be returned
       as {"error":"error string"}.

      rclone mount remote:path /path/to/mountpoint [flags]
      -

      Options

      +

      Options

            --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
             --allow-other                            Allow access to other users (not supported on Windows)
             --allow-root                             Allow access to root user (not supported on Windows)
      @@ -6060,7 +6604,7 @@ href="https://rclone.org/flags/">global flags page for global
       options not listed here.

      Filter Options

      Flags for filtering directory listings

      -
            --delete-excluded                     Delete files on dest excluded from sync
      +
            --delete-excluded                     Delete files on dest excluded from sync
             --exclude stringArray                 Exclude files matching pattern
             --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
             --exclude-if-present stringArray      Exclude directories if filename is present
      @@ -6083,14 +6627,17 @@ options not listed here.

      --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
      -

      See Also

      +

      See Also

      + +
      • rclone - Show help for rclone commands, flags and backends.
      +

      rclone moveto

      Move file or directory from source to dest.

      -

      Synopsis

      +

      Synopsis

      If source:path is a file or directory then it moves it to a file or directory named dest:path.

      This can be used to rename files or upload single files to other than @@ -6098,11 +6645,11 @@ their existing name. If the source is a directory then it acts exactly like the move command.

      So

      -
      rclone moveto src dst
      +
      rclone moveto src dst

      where src and dst are rclone paths, either remote:path or /path/to/local or C:.

      This will:

      -
      if src is file
      +
      if src is file
           move it to dst, overwriting an existing file if it exists
       if src is directory
           move it to dst, overwriting existing files if they exist
      @@ -6116,7 +6663,7 @@ first with the --dry-run or the
       

      Note: Use the -P/--progress flag to view real-time transfer statistics.

      -

      Logger Flags

      +

      Logger Flags

      The --differ, --missing-on-dst, --missing-on-src, --match and --error flags write paths, one per line, to the file name @@ -6164,9 +6711,9 @@ scenarios are not currently supported:

    Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.)

    +file (which may or may not match what actually DID).

    rclone moveto source:path dest:path [flags]
    -

    Options

    +

    Options

          --absolute                Put a leading / in front of path names
           --combined string         Make a combined report of changes to this file
           --csv                     Output in CSV format
    @@ -6183,13 +6730,13 @@ file (which may or may not match what actually DID.)

    --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05)
    + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05)

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    Copy Options

    Flags for anything which can copy a file

    -
          --check-first                                 Do all the checks before starting transfers
    +
          --check-first                                 Do all the checks before starting transfers
       -c, --checksum                                    Check for changes with size & checksum (if available, or fallback to size only)
           --compare-dest stringArray                    Include additional server-side paths during comparison
           --copy-dest stringArray                       Implies --compare-dest but also copies files from paths into destination
    @@ -6225,12 +6772,12 @@ options not listed here.

    -u, --update Skip files that are newer on the destination

    Important Options

    Important flags useful for most commands

    -
      -n, --dry-run         Do a trial run with no permanent changes
    +
      -n, --dry-run         Do a trial run with no permanent changes
       -i, --interactive     Enable interactive mode
       -v, --verbose count   Print lots more stuff (repeat for more)

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -6255,16 +6802,19 @@ options not listed here.

    --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    Listing Options

    Flags for listing directories

    -
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
           --fast-list           Use recursive list if available; uses more memory but fewer transactions
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone ncdu

    Explore a remote with a text based user interface.

    -

    Synopsis

    +

    Synopsis

    This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".

    @@ -6274,7 +6824,7 @@ scanning phase and you will see it building up the directory structure as it goes along.

    You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are:

    -
     ↑,↓ or k,j to Move
    +
     ↑,↓ or k,j to Move
      →,l to enter
      ←,h to return
      g toggle graph
    @@ -6297,7 +6847,7 @@ to toggle the help on and off. The supported keys are:

    Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at end of line. These flags have the following meaning:

    -
    e means this is an empty directory, i.e. contains no files (but
    +
    e means this is an empty directory, i.e. contains no files (but
       may contain empty subdirectories)
     ~ means this is a directory where some of the files (possibly in
       subdirectories) have unknown size, and therefore the directory
    @@ -6319,14 +6869,14 @@ href="https://rclone.org/commands/rclone_tree/">tree command. To
     just get the total size of the remote you can also use the size command.

    rclone ncdu remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for ncdu

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -6351,16 +6901,19 @@ options not listed here.

    --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    Listing Options

    Flags for listing directories

    -
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
           --fast-list           Use recursive list if available; uses more memory but fewer transactions
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone nfsmount

    Mount the remote as file system on a mountpoint.

    -

    Synopsis

    +

    Synopsis

    Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.

    First set up your remote using rclone config. Check it @@ -6376,7 +6929,7 @@ appropriate code (killing the child process if it fails).

    On Linux/macOS/FreeBSD start the mount like this, where /path/to/local/mount is an empty existing directory:

    -
    rclone nfsmount remote:path/to/files /path/to/local/mount
    +
    rclone nfsmount remote:path/to/files /path/to/local/mount

    On Windows you can start a mount in different ways. See below for details. If foreground mount is used interactively from a console window, rclone will serve the @@ -6390,7 +6943,7 @@ when mounting as a network drive), and the last example will mount as network share \\cloud\remote and map it to an automatically assigned drive:

    -
    rclone nfsmount remote:path/to/files *
    +
    rclone nfsmount remote:path/to/files *
     rclone nfsmount remote:path/to/files X:
     rclone nfsmount remote:path/to/files C:\path\parent\mount
     rclone nfsmount remote:path/to/files \\cloud\remote
    @@ -6399,7 +6952,7 @@ receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped.

    When running in background mode the user will have to stop the mount manually:

    -
    # Linux
    +
    # Linux
     fusermount -u /path/to/local/mount
     #... or on some systems
     fusermount3 -u /path/to/local/mount
    @@ -6417,8 +6970,9 @@ not support
     the about feature at all, then 1 PiB is set as both the total and the
     free size.

    Installing on Windows

    -

    To run rclone nfsmount on Windows, you will need to download and -install WinFsp.

    +

    To run rclone nfsmount on Windows, you will need to +download and install WinFsp.

    WinFsp is an open-source Windows File System Proxy which makes it easy to write user space file systems for Windows. It provides a FUSE emulation layer which @@ -6446,7 +7000,7 @@ subdirectory of an existing parent directory or drive. Using the special value * will tell rclone to automatically assign the next available drive letter, starting with Z: and moving backward. Examples:

    -
    rclone nfsmount remote:path/to/files *
    +
    rclone nfsmount remote:path/to/files *
     rclone nfsmount remote:path/to/files X:
     rclone nfsmount remote:path/to/files C:\path\parent\mount
     rclone nfsmount remote:path/to/files X:
    @@ -6458,7 +7012,7 @@ path.

directory path is not supported in this mode; it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter.

    -
    rclone nfsmount remote:path/to/files X: --network-mode
    +
    rclone nfsmount remote:path/to/files X: --network-mode

    A volume name specified with --volname will be used to create the network share path. A complete UNC path, such as \\cloud\remote, optionally with path @@ -6475,7 +7029,7 @@ for the mapped drive, shown in Windows Explorer etc, while the complete --volname, this will implicitly set the --network-mode option, so the following two examples have same result:

    -
    rclone nfsmount remote:path/to/files X: --network-mode
    +
    rclone nfsmount remote:path/to/files X: --network-mode
     rclone nfsmount remote:path/to/files X: --volname \\server\share

    You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as @@ -6484,7 +7038,7 @@ path specified as the volume name, as if it were specified with the --volname option. This will also implicitly set the --network-mode option. This means the following two examples have same result:

    -
    rclone nfsmount remote:path/to/files \\cloud\remote
    +
    rclone nfsmount remote:path/to/files \\cloud\remote
     rclone nfsmount remote:path/to/files * --volname \\cloud\remote

    There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: @@ -6608,11 +7162,11 @@ does not suffer from the same limitations.

    Mounting on macOS can be done either via built-in NFS server, macFUSE (also known -as osxfuse) or FUSE-T. macFUSE is -a traditional FUSE driver utilizing a macOS kernel extension (kext). +as osxfuse) or FUSE-T.macFUSE is a +traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server.

    -

    Unicode Normalization

    +

    Unicode Normalization

It is highly recommended to keep the default of --no-unicode-normalization=false for all mount and serve commands on macOS. For details, see macports package manager, the following additional steps are required.

    -
    sudo mkdir /usr/local/lib
    +
    sudo mkdir /usr/local/lib
     cd /usr/local/lib
     sudo ln -s /opt/local/lib/libfuse.2.dylib

    FUSE-T Limitations, @@ -6659,6 +7213,18 @@ This may make rclone upload a full new copy of the file.

    When mounting with --read-only, attempts to write to files will fail silently as opposed to with a clear warning as in macFUSE.

    +

    Mounting on Linux

    +

    On newer versions of Ubuntu, you may encounter the following error +when running rclone mount:

    +
    +

NOTICE: mount helper error: fusermount3: mount failed: Permission denied
CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1

This may be due to newer AppArmor restrictions, which can be disabled with sudo aa-disable /usr/bin/fusermount3 (you may need to sudo apt install apparmor-utils beforehand).
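
For convenience, the two commands the notice above refers to, in the order you would run them (Debian/Ubuntu package names assumed):

sudo apt install apparmor-utils
sudo aa-disable /usr/bin/fusermount3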

    +

    Limitations

    Without the use of --vfs-cache-mode this can only write files sequentially, it can only seek when reading. This means that many @@ -6751,27 +7317,29 @@ run it as a mount helper you should symlink rclone binary to ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.

    Now you can run classic mounts like this:

    -
    mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
    +
    mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem

    or create systemd mount units:

    -
    # /etc/systemd/system/mnt-data.mount
    -[Unit]
    -Description=Mount for /mnt/data
    -[Mount]
    -Type=rclone
    -What=sftp1:subdir
    -Where=/mnt/data
    -Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone
    +
    # /etc/systemd/system/mnt-data.mount
    +[Unit]
    +Description=Mount for /mnt/data
    +[Mount]
    +Type=rclone
    +What=sftp1:subdir
    +Where=/mnt/data
    +Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone.conf,cache-dir=/var/rclone

    optionally accompanied by systemd automount unit

    -
    # /etc/systemd/system/mnt-data.automount
    -[Unit]
    -Description=AutoMount for /mnt/data
    -[Automount]
    -Where=/mnt/data
    -TimeoutIdleSec=600
    -[Install]
    -WantedBy=multi-user.target
    +
    # /etc/systemd/system/mnt-data.automount
    +[Unit]
    +Description=AutoMount for /mnt/data
    +[Automount]
    +Where=/mnt/data
    +TimeoutIdleSec=600
    +[Install]
    +WantedBy=multi-user.target

    or add in /etc/fstab a line like

    -
    sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
    +
    sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0

    or use classic Automountd. Remember to provide explicit config=...,cache-dir=... as a workaround for mount units being run without HOME.
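
As a sketch, assuming the unit files above were installed under /etc/systemd/system as shown, they can be activated with the usual systemd commands:

sudo systemctl daemon-reload
sudo systemctl enable --now mnt-data.automount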

    @@ -6817,8 +7385,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
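
For example, to refresh directory listings more aggressively, both flags can be lowered together; the values here are illustrative only, and the polling interval must stay below the cache time:

rclone nfsmount remote: /path/to/local/mount --dir-cache-time 30s --poll-interval 10s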

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -6827,12 +7395,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
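
For instance, to give each open file a larger read-ahead buffer (32M is an illustrative value, not a recommendation):

rclone nfsmount remote: /path/to/local/mount --buffer-size 32M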

    @@ -6853,13 +7421,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
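
As a sketch, a full-cache mount bounded in size and age might combine the flags above like this (sizes and durations are illustrative):

rclone nfsmount remote: /path/to/local/mount --vfs-cache-mode full --vfs-cache-max-size 10G --vfs-cache-max-age 24h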

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -6983,9 +7551,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.
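
For example, to read with larger chunks over several parallel streams (illustrative values; see the flag descriptions above):

rclone nfsmount remote: /path/to/local/mount --vfs-read-chunk-size 64M --vfs-read-chunk-streams 4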

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -7076,7 +7644,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -7146,7 +7714,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes @@ -7156,10 +7724,10 @@ used. If you need this information to be available when running of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    -

    WARNING. Contrary to rclone size, this flag -ignores filters so that the result is accurate. However, this is very -inefficient and may cost lots of API calls resulting in extra charges. -Use it as a last resort and only with caching.

    +

    WARNING: Contrary to rclone size, this +flag ignores filters so that the result is accurate. However, this is +very inefficient and may cost lots of API calls resulting in extra +charges. Use it as a last resort and only with caching.

    VFS Metadata

    If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the --metadata flag.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -7194,7 +7762,7 @@ total 1048578
     and if there is an error reading the metadata the error will be returned
     as {"error":"error string"}.

    rclone nfsmount remote:path /path/to/mountpoint [flags]
    -

    Options

    +

    Options

          --addr string                            IPaddress:Port or :Port to bind server to
           --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
           --allow-other                            Allow access to other users (not supported on Windows)
    @@ -7259,7 +7827,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -7282,14 +7850,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone obscure

    Obscure password for use in the rclone config file.

    -

    Synopsis

    +

    Synopsis

    In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these @@ -7302,7 +7873,7 @@ character hex token.

    This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline.

    -
    echo "secretpassword" | rclone obscure -
    +
    echo "secretpassword" | rclone obscure -

    If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself.

    If you want to encrypt the config file then please use config file @@ -7310,23 +7881,27 @@ encryption - see rclone config for more info.

    rclone obscure password [flags]
    -

    Options

    +

    Options

      -h, --help   help for obscure

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone rc

    Run a command against a running rclone.

    -

    Synopsis

    +

    Synopsis

This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. -This can be either a ":port" which is taken to mean -"http://localhost:port" or a "host:port" which is taken to mean -"http://host:port"

    +This can be either a ":port" which is taken to mean http://localhost:port or a +"host:port" which is taken to mean http://host:port.
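
For example, the following two invocations address the same server, first with the localhost shorthand and then spelled out (5572 is the default rc port, and core/stats is used as in the loopback example below):

rclone rc --url :5572 core/stats
rclone rc --url localhost:5572 core/stats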

    A username and password can be passed in with --user and --pass.

    Note that --rc-addr, --rc-user, @@ -7334,10 +7909,11 @@ This can be either a ":port" which is taken to mean --user, --pass.

    The --unix-socket flag can be used to connect over a unix socket like this

    -
    # start server on /tmp/my.socket
    -rclone rcd --rc-addr unix:///tmp/my.socket
    -# Connect to it
    -rclone rc --unix-socket /tmp/my.socket core/stats
    +
    # start server on /tmp/my.socket
    +rclone rcd --rc-addr unix:///tmp/my.socket
    +# Connect to it
    +rclone rc --unix-socket /tmp/my.socket core/stats

    Arguments should be passed in as parameter=value.

    The result will be returned as a JSON object by default.

    The --json parameter can be used to pass in a JSON blob @@ -7348,24 +7924,27 @@ key "opt" with key, value options in the form -o key=value or -o key. It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.

    -
    -o key=value -o key2
    +
    -o key=value -o key2

    Will place this in the "opt" value

    -
    {"key":"value", "key2","")
    +
    {"key":"value", "key2","")

    The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.

    -
    -a value -a value2
    +
    -a value -a value2

    Will place this in the "arg" value

    -
    ["value", "value2"]
    +
    ["value", "value2"]

    Use --loopback to connect to the rclone instance running rclone rc. This is very useful for testing commands without having to run an rclone rc server, e.g.:

    -
    rclone rc --loopback operations/about fs=/
    +
    rclone rc --loopback operations/about fs=/

    Use rclone rc to see a list of all possible commands.

    rclone rc commands parameter [flags]
    -

    Options

    +

    Options

      -a, --arg stringArray      Argument placed in the "arg" array
       -h, --help                 help for rc
           --json string          Input JSON - use instead of key=value args
    @@ -7378,17 +7957,20 @@ commands.

    --user string Username to use to rclone remote control

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone rcat

    Copies standard input to file on remote.

    -

    Synopsis

    +

    Synopsis

    Reads from standard input (stdin) and copies it to a single remote file.

    -
    echo "hello world" | rclone rcat remote:path/to/file
    +
    echo "hello world" | rclone rcat remote:path/to/file
     ffmpeg - | rclone rcat remote:path/to/file

    If the remote file already exists, it will be overwritten.

    rcat will try to upload small files in a single request, which is @@ -7412,7 +7994,7 @@ chunks can be retried. If you need to transfer a lot of data, you may be better off caching it locally and then rclone move it to the destination which can use retries.
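
A sketch of that pattern, with a hypothetical producer command and staging directory:

generate_data > /tmp/staging/file.bin
rclone move /tmp/staging remote:path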

    rclone rcat remote:path [flags]
    -

    Options

    +

    Options

      -h, --help       help for rcat
           --size int   File size hint to preallocate (default -1)

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    Important Options

    Important flags useful for most commands

    -
      -n, --dry-run         Do a trial run with no permanent changes
    +
      -n, --dry-run         Do a trial run with no permanent changes
       -i, --interactive     Enable interactive mode
       -v, --verbose count   Print lots more stuff (repeat for more)
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone rcd

    Run rclone listening to remote control commands only.

    -

    Synopsis

    +

    Synopsis

    This runs rclone so that it only listens to remote control commands.

    This is useful if you are controlling rclone via the rc API.

    @@ -7468,6 +8053,8 @@ serve. Rclone automatically inserts leading and trailing "/" on --rc-baseurl, so --rc-baseurl "rclone", --rc-baseurl "/rclone" and --rc-baseurl "/rclone/" are all treated identically.

    +

    --rc-disable-zip may be set to disable the zipping +download option.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --rc-cert and @@ -7488,98 +8075,109 @@ acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --rc-addr).

    This allows rclone to be a socket-activated service. It can be -configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    +configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html.

Socket activation can be tested ad-hoc with the systemd-socket-activate command

    -
       systemd-socket-activate -l 8000 -- rclone serve
    +
    systemd-socket-activate -l 8000 -- rclone serve

    This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Template

    +over TCP.

    +

    Template

--rc-template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

[Table of template fields exported by the server: the table markup is garbled in this diff. Recoverable fragments describe the sort order, which can be changed via the '?sort=' parameter with possible values namedirfirst, name, size, time (default namedirfirst), and a field used for navigation.]

@@ -7630,7 +8228,7 @@ set a single username and password with the --rc-user and

    Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header (e.g., ---rc---user-from-header=x-remote-user). Ensure the proxy is +--rc-user-from-header=x-remote-user). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.

    If either of the above authentication methods is not configured and @@ -7641,7 +8239,7 @@ considered as the username.

    htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    -
    touch htpasswd
    +
    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    @@ -7649,14 +8247,14 @@ htpasswd -B htpasswd anotherUser

    Use --rc-salt to change the password hashing salt from the default.

    rclone rcd <path to files to serve>* [flags]
    -

    Options

    +

    Options

      -h, --help   help for rcd

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    RC Options

    Flags to control the Remote Control API

    -
          --rc                                 Enable the remote control server
    +
          --rc                                 Enable the remote control server
           --rc-addr stringArray                IPaddress:Port or :Port to bind server to (default localhost:5572)
           --rc-allow-origin string             Origin which cross-domain request (CORS) can be executed from
           --rc-baseurl string                  Prefix for URLs - leave blank for root
    @@ -7686,14 +8284,17 @@ options not listed here.

    --rc-web-gui-force-update Force update to latest version of web gui --rc-web-gui-no-open-browser Don't open the browser automatically --rc-web-gui-update Check and update to latest version of web gui
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone rmdirs

    Remove empty directories under the path.

    -

    Synopsis

    +

    Synopsis

This recursively removes any empty directories (including directories that only contain empty directories) that it finds under the path. The root path itself will also be removed if it is empty, unless you supply @@ -7712,7 +8313,7 @@ number.

    To delete a path and any objects in it, use the purge command.
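
For example, to preview which empty directories would be removed while keeping the root itself, combine the flags documented below:

rclone rmdirs remote:path --leave-root --dry-run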

    rclone rmdirs remote:path [flags]
    -

    Options

    +

    Options

      -h, --help         help for rmdirs
           --leave-root   Do not remove root directory if empty

    Options shared with other commands are described next. See the global flags page for global options not listed here.

    Important Options

    Important flags useful for most commands

    -
      -n, --dry-run         Do a trial run with no permanent changes
    +
      -n, --dry-run         Do a trial run with no permanent changes
       -i, --interactive     Enable interactive mode
       -v, --verbose count   Print lots more stuff (repeat for more)
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone selfupdate

    Update the rclone binary.

    -

    Synopsis

    +

    Synopsis

    This command downloads the latest release of rclone and replaces the currently running binary. The download is verified with a hashsum and cryptographically signed signature; see

    Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate" then you will need to update -manually following the install instructions located at -https://rclone.org/install/

    +manually following the install +documentation.
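
To see what the latest release is without replacing the binary, use the --check flag documented below:

rclone selfupdate --check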

    rclone selfupdate [flags]
    -

    Options

    +

    Options

          --beta             Install beta release
           --check            Check for latest release, do not download
       -h, --help             help for selfupdate
    @@ -7792,25 +8396,34 @@ https://rclone.org/install/

    --version string Install the given rclone version (default: latest)

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone serve

    Serve a remote over a protocol.

    -

    Synopsis

    +

    Synopsis

    Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g.

    -
    rclone serve http remote:
    +
    rclone serve http remote:
    +

    When the "--metadata" flag is enabled, the following metadata fields +will be provided as headers: - "content-disposition" - "cache-control" - +"content-language" - "content-encoding" Note: The availability of these +fields depends on whether the remote supports metadata.

    Each subcommand has its own options which you can see in their help.

    rclone serve <protocol> [opts] <remote> [flags]
    -

    Options

    +

    Options

      -h, --help   help for serve

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    • @@ -7833,9 +8446,10 @@ serve sftp - Serve the remote over SFTP.
    • rclone serve webdav - Serve remote:path over WebDAV.
    +

    rclone serve dlna

    Serve remote:path over DLNA

    -

    Synopsis

    +

    Synopsis

    Run a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also @@ -7872,8 +8486,8 @@ about files and directories (but not the data) in memory.
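
For example, to serve media on the default port explicitly (":7879" is the documented default for --addr in the options below):

rclone serve dlna remote:path --addr :7879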

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -7882,12 +8496,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -7908,13 +8522,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -8038,9 +8652,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -8131,7 +8745,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -8201,7 +8815,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes @@ -8211,10 +8825,10 @@ used. If you need this information to be available when running of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    -

    WARNING. Contrary to rclone size, this flag -ignores filters so that the result is accurate. However, this is very -inefficient and may cost lots of API calls resulting in extra charges. -Use it as a last resort and only with caching.

    +

    WARNING: Contrary to rclone size, this +flag ignores filters so that the result is accurate. However, this is +very inefficient and may cost lots of API calls resulting in extra +charges. Use it as a last resort and only with caching.

    VFS Metadata

    If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the --metadata flag.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -8249,7 +8863,7 @@ total 1048578
     and if there is an error reading the metadata the error will be returned
     as {"error":"error string"}.

    rclone serve dlna remote:path [flags]
    -

    Options

    +

    Options

          --addr string                            The ip:port or :port to bind the DLNA http server to (default ":7879")
           --announce-interval Duration             The interval between SSDP announcements (default 12m0s)
           --dir-cache-time Duration                Time to cache directory entries for (default 5m0s)
    @@ -8293,7 +8907,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -8316,14 +8930,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

    + + +

    rclone serve docker

    Serve any remote on docker's volume plugin API.

    -

    Synopsis

    +

    Synopsis

    This command implements the Docker volume plugin API allowing docker to use rclone as a data storage mechanism for various cloud providers. rclone provides docker volume plugin based on @@ -8334,7 +8951,7 @@ commands from docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example:

    -
    sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
    +
    sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv

    Running rclone serve docker will create the said socket, listening for commands from Docker to create the necessary Volumes. Normally you need not give the --socket-addr flag. The API @@ -8378,8 +8995,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -8388,12 +9005,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -8414,13 +9031,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -8544,9 +9161,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -8637,7 +9254,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -8707,7 +9324,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes @@ -8717,10 +9334,10 @@ used. If you need this information to be available when running of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    -

    WARNING. Contrary to rclone size, this flag -ignores filters so that the result is accurate. However, this is very -inefficient and may cost lots of API calls resulting in extra charges. -Use it as a last resort and only with caching.

    +

    WARNING: Contrary to rclone size, this +flag ignores filters so that the result is accurate. However, this is +very inefficient and may cost lots of API calls resulting in extra +charges. Use it as a last resort and only with caching.

    VFS Metadata

    If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the --metadata flag.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -8755,7 +9372,7 @@ total 1048578
     and if there is an error reading the metadata the error will be returned
     as {"error":"error string"}.

    rclone serve docker [flags]
    -

    Options

    +

    Options

          --allow-non-empty                        Allow mounting over a non-empty directory (not supported on Windows)
           --allow-other                            Allow access to other users (not supported on Windows)
           --allow-root                             Allow access to root user (not supported on Windows)
    @@ -8820,7 +9437,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -8843,14 +9460,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

    + + +

    rclone serve ftp

    Serve remote:path over FTP.

    -

    Synopsis

    +

    Synopsis

Run a basic FTP server to serve a remote over the FTP protocol. This can be viewed with an FTP client or you can make a remote of type FTP to read and write it.
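
A minimal sketch, replacing the default anonymous user with explicit credentials (--addr, --user and --pass are assumed to be the serve ftp option names; 2121 is the conventional default port):

rclone serve ftp remote:path --addr :2121 --user myuser --pass mypass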

    @@ -8881,8 +9501,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -8891,12 +9511,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -8917,13 +9537,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
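For instance, a server that caches both reads and writes on disk, capped at 10 GiB, might be started like this (illustrative values, not a recommendation):

rclone serve ftp remote:path --vfs-cache-mode full --vfs-cache-max-size 10G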

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -9047,9 +9667,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.
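As a sketch with illustrative values, chunked reading might be tuned like this, capping chunk growth and enabling a few parallel streams:

rclone serve ftp remote:path --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --vfs-read-chunk-streams 4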

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -9140,7 +9760,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -9210,7 +9830,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
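For example (illustrative value):

rclone serve ftp remote:path --vfs-disk-space-total-size 256G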

    Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes @@ -9220,10 +9840,10 @@ used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    VFS Metadata

If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the metadata of the underlying file.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -9276,31 +9896,39 @@ backend on STDOUT in JSON format. This config will have any default
     parameters for the backend added, but it won't use configuration from
     environment variables or command line options - it is the job of the
     proxy program to make a complete config.

This config generated must have this extra parameter

• _root - root to use for the backend

And it may have this parameter

• _obscure - comma separated strings for parameters to obscure

    If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "pass": "mypassword"
    -}
    +
    {
    +  "user": "me",
    +  "pass": "mypassword"
    +}

    If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    -}
    +
    {
    +  "user": "me",
    +  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    +}

    And as an example return this on STDOUT

    -
    {
    -    "type": "sftp",
    -    "_root": "",
    -    "_obscure": "pass",
    -    "user": "me",
    -    "pass": "mypassword",
    -    "host": "sftp.example.com"
    -}
    +
    {
    +  "type": "sftp",
    +  "_root": "",
    +  "_obscure": "pass",
    +  "user": "me",
    +  "pass": "mypassword",
    +  "host": "sftp.example.com"
    +}

    This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since @@ -9321,7 +9949,7 @@ before it takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.
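As a minimal sketch of such a proxy (not part of rclone; it assumes jq is installed, that rclone runs the program once per connection, and it echoes the credentials back unchecked, so a real proxy would add verification):

#!/bin/sh
# Hypothetical auth proxy: read the JSON auth request from rclone on
# STDIN and answer with a backend config on STDOUT.
input=$(cat)
user=$(printf '%s' "$input" | jq -r '.user')
pass=$(printf '%s' "$input" | jq -r '.pass // empty')
# "_obscure" tells rclone which parameters to obscure before use.
printf '{"type":"sftp","_root":"","_obscure":"pass","user":"%s","pass":"%s","host":"sftp.example.com"}\n' "$user" "$pass"

It would then be wired in with --auth-proxy /path/to/proxy.sh.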

    rclone serve ftp remote:path [flags]
    -

    Options

    +

    Options

          --addr string                            IPaddress:Port or :Port to bind server to (default "localhost:2121")
           --auth-proxy string                      A program to use to create the backend from the auth
           --cert string                            TLS PEM key (concatenation of certificate and CA certificate)
    @@ -9368,7 +9996,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -9391,14 +10019,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

• rclone serve - Serve a remote over a protocol.

    rclone serve http

    Serve the remote over HTTP.

    -

    Synopsis

    +

    Synopsis

    Run a basic web server to serve a remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.

    @@ -9436,6 +10067,8 @@ serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    +

    --disable-zip may be set to disable the zipping download +option.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and @@ -9456,98 +10089,109 @@ acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be -configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    +configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html.
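A minimal pair of unit files might look like the following (an illustrative sketch, not shipped with rclone; names, paths and the port are placeholders):

# /etc/systemd/system/rclone-serve.socket
[Socket]
ListenStream=8000

[Install]
WantedBy=sockets.target

# /etc/systemd/system/rclone-serve.service
# Started on demand by the socket unit; rclone picks up the listening
# FD from the service manager and ignores --addr.
[Service]
ExecStart=/usr/bin/rclone serve http remote:path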

Socket activation can be tested ad-hoc with the systemd-socket-activate command

    -
       systemd-socket-activate -l 8000 -- rclone serve
    +
    systemd-socket-activate -l 8000 -- rclone serve

    This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Template

    +over TCP.

    +

    Template

--template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter       Description
.Name           The full path of a file/directory.
.Title          Directory listing of '.Name'.
.Sort           The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst).
.Order          The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc).
.Query          Currently unused.
.Breadcrumb     Allows for creating a relative navigation.
-- .Link        The link of the Text relative to the root.
-- .Text        The Name of the directory.
.Entries        Information about a specific file/directory.
-- .URL         The url of an entry.
-- .Leaf        Currently same as '.URL' but intended to be just the name.
-- .IsDir       Boolean for if an entry is a directory or not.
-- .Size        Size in bytes of the entry.
-- .ModTime     The UTC timestamp of an entry.
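A minimal custom template using only the parameters above might look like this (an illustrative sketch; it would be passed with --template /path/to/listing.html):

<!DOCTYPE html>
<html>
<head><title>{{ .Title }}</title></head>
<body>
<h1>{{ .Name }}</h1>
<ul>
{{ range .Entries }}<li><a href="{{ .URL }}">{{ .Leaf }}</a>{{ if not .IsDir }} ({{ .Size }} bytes){{ end }}</li>
{{ end }}</ul>
</body>
</html>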
@@ -9598,7 +10242,7 @@ set a single username and password with the --user and

Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header (e.g., --user-from-header=x-remote-user). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.

    If either of the above authentication methods is not configured and @@ -9609,7 +10253,7 @@ considered as the username.

    file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    -
    touch htpasswd
    +
    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    @@ -9631,8 +10275,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -9641,12 +10285,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -9667,13 +10311,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -9797,9 +10441,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -9890,7 +10534,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -9960,7 +10604,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes @@ -9970,10 +10614,10 @@ used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    VFS Metadata

If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the metadata of the underlying file.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -10026,31 +10670,39 @@ backend on STDOUT in JSON format. This config will have any default
     parameters for the backend added, but it won't use configuration from
     environment variables or command line options - it is the job of the
     proxy program to make a complete config.

This config generated must have this extra parameter

• _root - root to use for the backend

And it may have this parameter

• _obscure - comma separated strings for parameters to obscure

    If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "pass": "mypassword"
    -}
    +
    {
    +  "user": "me",
    +  "pass": "mypassword"
    +}

    If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    -}
    +
    {
    +  "user": "me",
    +  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    +}

    And as an example return this on STDOUT

    -
    {
    -    "type": "sftp",
    -    "_root": "",
    -    "_obscure": "pass",
    -    "user": "me",
    -    "pass": "mypassword",
    -    "host": "sftp.example.com"
    -}
    +
    {
    +  "type": "sftp",
    +  "_root": "",
    +  "_obscure": "pass",
    +  "user": "me",
    +  "pass": "mypassword",
    +  "host": "sftp.example.com"
    +}

    This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since @@ -10071,7 +10723,7 @@ before it takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve http remote:path [flags]
    -

    Options

    +

    Options

          --addr stringArray                       IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
           --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
           --auth-proxy string                      A program to use to create the backend from the auth
    @@ -10080,6 +10732,7 @@ backend that rclone supports.

    --client-ca string Client certificate authority to verify clients with --dir-cache-time Duration Time to cache directory entries for (default 5m0s) --dir-perms FileMode Directory permissions (default 777) + --disable-zip Disable zip download of directories --file-perms FileMode File permissions (default 666) --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for http @@ -10128,7 +10781,7 @@ href="https://rclone.org/flags/">global flags page for global options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -10151,14 +10804,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

• rclone serve - Serve a remote over a protocol.

    rclone serve nfs

    Serve the remote as an NFS mount

    -

    Synopsis

    +

    Synopsis

    Create an NFS server that serves the given remote over the network.

    This implements an NFSv3 server to serve any rclone remote via @@ -10207,10 +10863,12 @@ default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. This is only used by the memory type cache.

To serve NFS over the network use the following command:

    -
    rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
    +
    rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full

    This specifies a port that can be used in the mount command. To mount the server under Linux/macOS, use the following command:

    -
    mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
    +
    mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint

    Where $PORT is the same port number used in the serve nfs command and $HOSTNAME is the network address of the machine that serve nfs was run on.
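Putting these together with hypothetical values (port 2049, server host myserver, and an existing mount point /mnt/rclone):

rclone serve nfs remote: --addr 0.0.0.0:2049 --vfs-cache-mode=full
mount -t nfs -o port=2049,mountport=2049,tcp myserver:/ /mnt/rclone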

    @@ -10236,8 +10894,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -10246,12 +10904,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -10272,13 +10930,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -10402,9 +11060,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -10495,7 +11153,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -10565,7 +11223,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

Some backends, most notably S3, do not report the amount of bytes @@ -10575,10 +11233,10 @@ used. If you need this information to be available when running df on the filesystem, then pass the flag --vfs-used-is-size to rclone. With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching.

    VFS Metadata

If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the metadata of the underlying file.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -10613,7 +11271,7 @@ total 1048578
     and if there is an error reading the metadata the error will be returned
     as {"error":"error string"}.

    rclone serve nfs remote:path [flags]
    -

    Options

    +

    Options

          --addr string                            IPaddress:Port or :Port to bind server to
           --dir-cache-time Duration                Time to cache directory entries for (default 5m0s)
           --dir-perms FileMode                     Directory permissions (default 777)
    @@ -10656,7 +11314,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -10679,14 +11337,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

• rclone serve - Serve a remote over a protocol.

    rclone serve restic

    Serve the remote for restic's REST API.

    -

    Synopsis

    +

    Synopsis

    Run a basic web server to serve a remote over restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.

    @@ -10704,7 +11365,7 @@ example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.

    Now start the rclone restic server

    -
    rclone serve restic -v remote:backup
    +
    rclone serve restic -v remote:backup

    Where you can replace "backup" in the above by whatever path in the remote you wish to use.

    By default this will serve on "localhost:8080" you can change this @@ -10724,7 +11385,7 @@ rclone.

    For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.

    For example:

    -
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
    +
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
     $ export RESTIC_PASSWORD=yourpassword
     $ restic init
     created restic backend 8b1a4b56ae at rest:http://localhost:8080/
    @@ -10737,12 +11398,13 @@ scan [/path/to/files/to/backup]
     scanned 189 directories, 312 files in 0:00
     [0:00] 100.00%  38.128 MiB / 38.128 MiB  501 / 501 items  0 errors  ETA 0:00
     duration: 0:00
    -snapshot 45c8fdd8 saved
    +snapshot 45c8fdd8 saved +

    Multiple repositories

Note that you can use the endpoint to host multiple repositories. Do this by adding a directory name or path after the URL. These must end with /. E.g.

    -
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
    +
    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
     # backup user1 stuff
     $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
     # backup user2 stuff
    @@ -10778,6 +11440,8 @@ serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    +

    --disable-zip may be set to disable the zipping download +option.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and @@ -10798,13 +11462,15 @@ acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be -configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    +configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html.

Socket activation can be tested ad-hoc with the systemd-socket-activate command

    -
       systemd-socket-activate -l 8000 -- rclone serve
    +
    systemd-socket-activate -l 8000 -- rclone serve

    This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Authentication

    +over TCP.

    +

    Authentication

    By default this will serve files without needing a login.

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and @@ -10812,7 +11478,7 @@ set a single username and password with the --user and

Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header (e.g., --user-from-header=x-remote-user). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.

    If either of the above authentication methods is not configured and @@ -10823,7 +11489,7 @@ considered as the username.

    file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    -
    touch htpasswd
    +
    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    @@ -10831,7 +11497,7 @@ htpasswd -B htpasswd anotherUser

    Use --salt to change the password hashing salt from the default.

    rclone serve restic remote:path [flags]
    -

    Options

    +

    Options

          --addr stringArray                IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
           --allow-origin string             Origin which cross-domain request (CORS) can be executed from
           --append-only                     Disallow deletion of repository data
    @@ -10855,14 +11521,17 @@ default.

    --user-from-header string User name from a defined HTTP header

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

• rclone serve - Serve a remote over a protocol.

    rclone serve s3

    Serve remote:path over s3.

    -

    Synopsis

    +

    Synopsis

    serve s3 implements a basic s3 server that serves a remote via s3. This can be viewed with an s3 client, or you can make an s3 type remote to read and write to @@ -10893,13 +11562,14 @@ clients which rely on the Etag being the MD5.

    Quickstart

    For a simple set up, to serve remote:path over s3, run the server like this:

    -
    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
    +
    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path

    For example, to use a simple folder in the filesystem, run the server with a command like this:

    -
    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder
    +
    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder

    The rclone.conf for the server could look like this:

    -
    [local]
    -type = local
    +
    [local]
    +type = local

    The local configuration is optional though. If you run the server with a remote:path like /path/to/folder (without the local: prefix and @@ -10908,13 +11578,14 @@ default configuration, which will be visible as a warning in the logs. But it will run nonetheless.

    This will be compatible with an rclone (client) remote configuration which is defined like this:

    -
    [serves3]
    -type = s3
    -provider = Rclone
    -endpoint = http://127.0.0.1:8080/
    -access_key_id = ACCESS_KEY_ID
    -secret_access_key = SECRET_ACCESS_KEY
    -use_multipart_uploads = false
    +
    [serves3]
    +type = s3
    +provider = Rclone
    +endpoint = http://127.0.0.1:8080/
    +access_key_id = ACCESS_KEY_ID
    +secret_access_key = SECRET_ACCESS_KEY
    +use_multipart_uploads = false

    Note that setting use_multipart_uploads = false is to work around a bug which will be fixed in due course.
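With the server running and the serves3 remote configured as above, a quick smoke test from another shell might be (bucket name illustrative):

rclone mkdir serves3:bucket
rclone copy /tmp/file.txt serves3:bucket
rclone lsd serves3: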

    @@ -10976,7 +11647,7 @@ operations.

    Other operations will return error Unimplemented.

    -

    Authentication

    +

    Authentication

    By default this will serve files without needing a login.

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and @@ -10984,7 +11655,7 @@ set a single username and password with the --user and

Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header (e.g., --user-from-header=x-remote-user). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.

    If either of the above authentication methods is not configured and @@ -10995,7 +11666,7 @@ considered as the username.

    file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    -
    touch htpasswd
    +
    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    @@ -11030,6 +11701,8 @@ serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    +

    --disable-zip may be set to disable the zipping download +option.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and @@ -11050,13 +11723,15 @@ acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be -configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    +configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html.

Socket activation can be tested ad-hoc with the systemd-socket-activate command

    -
       systemd-socket-activate -l 8000 -- rclone serve
    +
    systemd-socket-activate -l 8000 -- rclone serve

    This will socket-activate rclone on the first connection to port 8000 -over TCP. ## VFS - Virtual File System

    +over TCP.

    +

    VFS - Virtual File System

    This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

    @@ -11071,8 +11746,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -11081,12 +11756,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -11107,13 +11782,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -11237,9 +11912,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

    Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -11330,7 +12005,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -11400,7 +12075,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
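For example, a hypothetical invocation overriding the reported total size:

rclone serve s3 remote:path --vfs-disk-space-total-size 256G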

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes @@ -11410,10 +12085,10 @@ used. If you need this information to be available when running of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    -

    WARNING. Contrary to rclone size, this flag -ignores filters so that the result is accurate. However, this is very -inefficient and may cost lots of API calls resulting in extra charges. -Use it as a last resort and only with caching.

    +

    WARNING: Contrary to rclone size, this +flag ignores filters so that the result is accurate. However, this is +very inefficient and may cost lots of API calls resulting in extra +charges. Use it as a last resort and only with caching.

    VFS Metadata

If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the metadata returned by the --metadata flag.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -11448,7 +12123,7 @@ total 1048578
     and if there is an error reading the metadata the error will be returned
     as {"error":"error string"}.

    rclone serve s3 remote:path [flags]
    -

    Options

    +

    Options

          --addr stringArray                       IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
           --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
           --auth-key stringArray                   Set key pair for v4 authorization: access_key_id,secret_access_key
    @@ -11508,7 +12183,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -11531,14 +12206,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

    + + +

    rclone serve sftp

    Serve the remote over SFTP.

    -

    Synopsis

    +

    Synopsis

    Run an SFTP server to serve a remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.

    @@ -11572,11 +12250,12 @@ reachable externally then supply --addr :2022 for example.

    This also supports being run with socket activation, in which case it will listen on the first passed FD. It can be configured with .socket -and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    +and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html.

Socket activation can be tested ad-hoc with the systemd-socket-activate command:

    -
    systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/
    +
    systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/

    This will socket-activate rclone on the first connection to port 2222 over TCP.
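For a persistent setup, the equivalent systemd units might look like this sketch (the unit names, paths and remote are placeholders):

cat > /etc/systemd/system/rclone-sftp.socket <<'EOF'
[Socket]
ListenStream=2022

[Install]
WantedBy=sockets.target
EOF

cat > /etc/systemd/system/rclone-sftp.service <<'EOF'
[Service]
ExecStart=/usr/bin/rclone serve sftp remote:path
EOF

systemctl daemon-reload && systemctl enable --now rclone-sftp.socket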

    Note that the default of --vfs-cache-mode off is fine @@ -11585,7 +12264,7 @@ clients.

    If --stdio is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example:

    -
    restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
    +
    restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...

    On the client you need to set --transfers 1 when using --stdio. Otherwise multiple instances of the rclone server are started by OpenSSH which can lead to "corrupted on transfer" errors. @@ -11597,7 +12276,7 @@ being used. Omitting "restrict" and using --sftp-path-override to enable checksumming is possible but less secure and you could use the SFTP server provided by OpenSSH in this case.

    -

    VFS - Virtual File System

    +

    VFS - Virtual File System

    This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

    @@ -11612,8 +12291,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
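As an illustrative combination, a shorter directory cache with faster polling might look like:

rclone serve sftp remote:path --dir-cache-time 30s --poll-interval 10s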

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -11622,12 +12301,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -11648,13 +12327,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -11778,9 +12457,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

Sometimes reads or writes are delivered to rclone out of order. Rather than seeking, rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -11871,7 +12550,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -11941,7 +12620,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes @@ -11951,10 +12630,10 @@ used. If you need this information to be available when running of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    -

    WARNING. Contrary to rclone size, this flag -ignores filters so that the result is accurate. However, this is very -inefficient and may cost lots of API calls resulting in extra charges. -Use it as a last resort and only with caching.

    +

    WARNING: Contrary to rclone size, this +flag ignores filters so that the result is accurate. However, this is +very inefficient and may cost lots of API calls resulting in extra +charges. Use it as a last resort and only with caching.

    VFS Metadata

If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the metadata returned by the --metadata flag.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -12007,31 +12686,39 @@ backend on STDOUT in JSON format. This config will have any default
     parameters for the backend added, but it won't use configuration from
     environment variables or command line options - it is the job of the
     proxy program to make a complete config.

    -

    This config generated must have this extra parameter - -_root - root to use for the backend

    -

    And it may have this parameter - _obscure - comma -separated strings for parameters to obscure

    +

    This config generated must have this extra parameter

+
• _root - root to use for the backend

    And it may have this parameter

+
• _obscure - comma separated strings for parameters to obscure

    If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "pass": "mypassword"
    -}
    +
    {
    +  "user": "me",
    +  "pass": "mypassword"
    +}

    If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    -}
    +
    {
    +  "user": "me",
    +  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    +}

    And as an example return this on STDOUT

    -
    {
    -    "type": "sftp",
    -    "_root": "",
    -    "_obscure": "pass",
    -    "user": "me",
    -    "pass": "mypassword",
    -    "host": "sftp.example.com"
    -}
    +
    {
    +  "type": "sftp",
    +  "_root": "",
    +  "_obscure": "pass",
    +  "user": "me",
    +  "pass": "mypassword",
    +  "host": "sftp.example.com"
    +}

    This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since @@ -12052,7 +12739,7 @@ before it takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.
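As a minimal sketch of such a proxy for password authentication (assuming jq is installed; sftp.example.com is a placeholder host, and a real proxy must verify the credentials before answering):

#!/bin/bash
# Read the JSON request rclone writes to STDIN
request=$(cat)
user=$(echo "$request" | jq -r .user)
pass=$(echo "$request" | jq -r .pass)
# A real proxy would authenticate the user here and exit non-zero on failure
cat <<EOF
{
  "type": "sftp",
  "_root": "",
  "_obscure": "pass",
  "user": "$user",
  "pass": "$pass",
  "host": "sftp.example.com"
}
EOF

The script would then be supplied to the serve command with --auth-proxy /path/to/proxy.sh.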

    rclone serve sftp remote:path [flags]
    -

    Options

    +

    Options

          --addr string                            IPaddress:Port or :Port to bind server to (default "localhost:2022")
           --auth-proxy string                      A program to use to create the backend from the auth
           --authorized-keys string                 Authorized keys file (default "~/.ssh/authorized_keys")
    @@ -12099,7 +12786,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -12122,14 +12809,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

    + + +

    rclone serve webdav

    Serve remote:path over WebDAV.

    -

    Synopsis

    +

    Synopsis

    Run a basic WebDAV server to serve a remote over HTTP via the WebDAV protocol. This can be viewed with a WebDAV client, through a web browser, or you can make a remote of type WebDAV to read and write @@ -12151,25 +12841,31 @@ dialog. Windows requires SSL / HTTPS connection to be used with Basic. If you try to connect via Add Network Location Wizard you will get the following error: "The folder you entered does not appear to be valid. Please choose another". However, you still can connect if you set the -following registry key on a client machine: HKEY_LOCAL_MACHINEto 2. The -BasicAuthLevel can be set to the following values: 0 - Basic -authentication disabled 1 - Basic authentication enabled for SSL -connections only 2 - Basic authentication enabled for SSL connections -and for non-SSL connections If required, increase the -FileSizeLimitInBytes to a higher value. Navigate to the Services -interface, then restart the WebClient service.

    +following registry key on a client machine: +HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel +to 2. The BasicAuthLevel can be set to the following values:

    +
    0 - Basic authentication disabled
    +1 - Basic authentication enabled for SSL connections only
    +2 - Basic authentication enabled for SSL connections and for non-SSL connections
    +

    If required, increase the FileSizeLimitInBytes to a higher value. +Navigate to the Services interface, then restart the WebClient +service.

    Access Office applications on WebDAV

    -

    Navigate to following registry HKEY_CURRENT_USER[14.0/15.0/16.0] -Create a new DWORD BasicAuthLevel with value 2. 0 - Basic authentication -disabled 1 - Basic authentication enabled for SSL connections only 2 - -Basic authentication enabled for SSL and for non-SSL connections

    -

    https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint

    +

Navigate to the following registry key: +HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet +and create a new DWORD BasicAuthLevel with value 2.

    +
    0 - Basic authentication disabled
    +1 - Basic authentication enabled for SSL connections only
    +2 - Basic authentication enabled for SSL and for non-SSL connections
    +

    https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint

    Serving over a unix socket

    You can serve the webdav on a unix socket like this:

    -
    rclone serve webdav --addr unix:///tmp/my.socket remote:path
    +
    rclone serve webdav --addr unix:///tmp/my.socket remote:path

    and connect to it like this using rclone and the webdav backend:

    -
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
    +
    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:

    Note that there is no authentication on http protocol - this is expected to be done by the permissions on the socket.

    Server options

    @@ -12200,6 +12896,8 @@ serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.

    +

    --disable-zip may be set to disable the zipping download +option.

    TLS (SSL)

    By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and @@ -12220,98 +12918,109 @@ acceptable. Valid values are "tls1.0", "tls1.1", "tls1.2" and "tls1.3" listen to all FDs passed by the service manager, if any (and ignore any arguments passed by --addr).

    This allows rclone to be a socket-activated service. It can be -configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html

    +configured with .socket and .service unit files as described in https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html.

Socket activation can be tested ad-hoc with the systemd-socket-activate command

    -
       systemd-socket-activate -l 8000 -- rclone serve
    +
    systemd-socket-activate -l 8000 -- rclone serve

    This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Template

    +over TCP.

    +

    Template

--template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to serve pages:

Parameter     Subparameter   Description
.Name                        The full path of a file/directory.
.Title                       Directory listing of '.Name'.
.Sort                        The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst).
.Order                       The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc).
.Query                       Currently unused.
.Breadcrumb                  Allows for creating a relative navigation.
--            .Link          The link of the Text relative to the root.
--            .Text          The Name of the directory.
.Entries                     Information about a specific file/directory.
--            .URL           The url of an entry.
--            .Leaf          Currently same as '.URL' but intended to be just the name.
--            .IsDir         Boolean for if an entry is a directory or not.
--            .Size          Size in bytes of the entry.
--            .ModTime       The UTC timestamp of an entry.

@@ -12354,7 +13063,7 @@ the specified suffix.
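For illustration, a minimal custom template using these parameters might look like the following sketch (listing.html is a made-up filename; the fields are filled in with Go template syntax):

cat > listing.html <<'EOF'
<!DOCTYPE html>
<html>
<head><title>{{ .Title }}</title></head>
<body>
<h1>{{ .Name }}</h1>
<ul>
{{ range .Entries }}<li><a href="{{ .URL }}">{{ .Leaf }}</a></li>
{{ end }}</ul>
</body>
</html>
EOF
rclone serve webdav --template ./listing.html remote:path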
    -

    Authentication

    +

    Authentication

    By default this will serve files without needing a login.

    You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and @@ -12362,7 +13071,7 @@ set a single username and password with the --user and

    Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with --user-from-header (e.g., -----user-from-header=x-remote-user). Ensure the proxy is +--user-from-header=x-remote-user). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.

    If either of the above authentication methods is not configured and @@ -12373,14 +13082,14 @@ considered as the username.

    file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.

    To create an htpasswd file:

    -
    touch htpasswd
    +
    touch htpasswd
     htpasswd -B htpasswd user
     htpasswd -B htpasswd anotherUser

    The password file can be updated while rclone is running.

    Use --realm to set the authentication realm.

    Use --salt to change the password hashing salt from the default.

    -

    VFS - Virtual File System

    +

    VFS - Virtual File System

    This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing system.

    @@ -12395,8 +13104,8 @@ about files and directories (but not the data) in memory.

    long a directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache.

    -
    --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    ---poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
    +
        --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
    +    --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)

    However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory cache expires if the backend configured does not support @@ -12405,12 +13114,12 @@ picked up within the polling interval.

    You can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:

    -
    kill -SIGHUP $(pidof rclone)
    +
    kill -SIGHUP $(pidof rclone)

    If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache:

    -
    rclone rc vfs/forget
    +
    rclone rc vfs/forget

    Or individual files or directories:

    -
    rclone rc vfs/forget file=path/to/file dir=path/to/dir
    +
    rclone rc vfs/forget file=path/to/file dir=path/to/dir

    VFS File Buffering

The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.

    @@ -12431,13 +13140,13 @@ system. It can be disabled at the cost of some compatibility.

    write simultaneously to a file. See below for more details.

    Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both.

    -
    --cache-dir string                     Directory rclone will use for caching.
    ---vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    ---vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    ---vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    ---vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    ---vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    ---vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)
    +
        --cache-dir string                     Directory rclone will use for caching.
    +    --vfs-cache-mode CacheMode             Cache mode off|minimal|writes|full (default off)
    +    --vfs-cache-max-age duration           Max time since last access of objects in the cache (default 1h0m0s)
    +    --vfs-cache-max-size SizeSuffix        Max total size of objects in the cache (default off)
    +    --vfs-cache-min-free-space SizeSuffix  Target minimum free space on the disk containing the cache (default off)
    +    --vfs-cache-poll-interval duration     Interval to poll the cache for stale objects (default 1m0s)
    +    --vfs-write-back duration              Time to writeback files after last use when using cache (default 5s)

    If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting @@ -12561,9 +13270,9 @@ specified. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests.

    These flags control the chunking:

    -
    --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    ---vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    ---vfs-read-chunk-streams int            The number of parallel streams to read at once
    +
        --vfs-read-chunk-size SizeSuffix        Read the source objects in chunks (default 128M)
    +    --vfs-read-chunk-size-limit SizeSuffix  Max chunk doubling size (default off)
    +    --vfs-read-chunk-streams int            The number of parallel streams to read at once

    The chunking behaves differently depending on the --vfs-read-chunk-streams parameter.

    chunked reading feature.

    --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction.

    -
    --no-checksum     Don't compare checksums on up/download.
    ---no-modtime      Don't read/write the modification time (can speed things up).
    ---no-seek         Don't allow seeking in files.
    ---read-only       Only allow read-only access.
    +
        --no-checksum     Don't compare checksums on up/download.
    +    --no-modtime      Don't read/write the modification time (can speed things up).
    +    --no-seek         Don't allow seeking in files.
    +    --read-only       Only allow read-only access.

Sometimes reads or writes are delivered to rclone out of order. Rather than seeking, rclone will wait a short time for the in-sequence read or write to come in. These flags only come into effect when not using an on disk cache file.

    -
    --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    ---vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)
    +
        --vfs-read-wait duration   Time to wait for in-sequence read before seeking (default 20ms)
    +    --vfs-write-wait duration  Time to wait for in-sequence write before giving error (default 1s)

    When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS).

    -
    --transfers int  Number of file transfers to run in parallel (default 4)
    +
        --transfers int  Number of file transfers to run in parallel (default 4)

    By default the VFS does not support symlinks. However this may be enabled with either of the following flags:

    -
    --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    ---vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS
    +
        --links      Translate symlinks to/from regular files with a '.rclonelink' extension.
    +    --vfs-links  Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS

    As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file which appears as a symlink link-to-file.txt would be @@ -12654,7 +13363,7 @@ commands yet.

    A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree

    -
    .
    +
    .
     ├── dir
     │   └── file.txt
     └── linked-dir -> dir
    @@ -12724,7 +13433,7 @@ an error, similar to how this is handled in

    This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically.

    -
    --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)
    +
        --vfs-disk-space-total-size    Manually set the total disk space size (example: 256G, default: -1)

    Alternate report of used bytes

    Some backends, most notably S3, do not report the amount of bytes @@ -12734,10 +13443,10 @@ used. If you need this information to be available when running of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself.

    -

    WARNING. Contrary to rclone size, this flag -ignores filters so that the result is accurate. However, this is very -inefficient and may cost lots of API calls resulting in extra charges. -Use it as a last resort and only with caching.

    +

    WARNING: Contrary to rclone size, this +flag ignores filters so that the result is accurate. However, this is +very inefficient and may cost lots of API calls resulting in extra +charges. Use it as a last resort and only with caching.

    VFS Metadata

If you use the --vfs-metadata-extension flag you can get the VFS to expose files which contain the metadata returned by the --metadata flag.

    For example, using rclone mount with --metadata --vfs-metadata-extension .metadata we get

    -
    $ ls -l /mnt/
    +
    $ ls -l /mnt/
     total 1048577
     -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
     
    @@ -12790,31 +13499,39 @@ backend on STDOUT in JSON format. This config will have any default
     parameters for the backend added, but it won't use configuration from
     environment variables or command line options - it is the job of the
     proxy program to make a complete config.

    -

    This config generated must have this extra parameter - -_root - root to use for the backend

    -

    And it may have this parameter - _obscure - comma -separated strings for parameters to obscure

    +

    This config generated must have this extra parameter

+
• _root - root to use for the backend

    And it may have this parameter

+
• _obscure - comma separated strings for parameters to obscure

    If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "pass": "mypassword"
    -}
    +
    {
    +  "user": "me",
    +  "pass": "mypassword"
    +}

    If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:

    -
    {
    -    "user": "me",
    -    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    -}
    +
    {
    +  "user": "me",
    +  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
    +}

    And as an example return this on STDOUT

    -
    {
    -    "type": "sftp",
    -    "_root": "",
    -    "_obscure": "pass",
    -    "user": "me",
    -    "pass": "mypassword",
    -    "host": "sftp.example.com"
    -}
    +
    {
    +  "type": "sftp",
    +  "_root": "",
    +  "_obscure": "pass",
    +  "user": "me",
    +  "pass": "mypassword",
    +  "host": "sftp.example.com"
    +}

    This would mean that an SFTP backend would be created on the fly for the user and pass/public_key returned in the output to the host given. Note that since @@ -12835,7 +13552,7 @@ before it takes effect.

    This can be used to build general purpose proxies to any kind of backend that rclone supports.

    rclone serve webdav remote:path [flags]
    -

    Options

    +

    Options

          --addr stringArray                       IPaddress:Port or :Port to bind server to (default 127.0.0.1:8080)
           --allow-origin string                    Origin which cross-domain request (CORS) can be executed from
           --auth-proxy string                      A program to use to create the backend from the auth
    @@ -12894,7 +13611,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -12917,14 +13634,17 @@ options not listed here.

    --metadata-include-from stringArray Read metadata include patterns from file (use - to read from stdin) --min-age Duration Only transfer files older than this in s or suffix ms|s|m|h|d|w|M|y (default off) --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)
    -

    See Also

    +

    See Also

    + + +

    rclone settier

    Changes storage class/tier of objects in remote.

    -

    Synopsis

    +

    Synopsis

Changes storage tier or class at remote if supported. A few cloud storage services provide different storage classes on objects, for example AWS S3 and Glacier, Azure Blob storage - Hot, Cool and Archive, @@ -12934,37 +13654,42 @@ immediately. For example, tiering to archive in Azure Blob storage puts objects in a frozen state; the user can restore them by setting the tier to Hot/Cool. Similarly, S3 to Glacier makes objects inaccessible.

You can use it to tier a single object

    -
    rclone settier Cool remote:path/file
    +
    rclone settier Cool remote:path/file

    Or use rclone filters to set tier on only specific files

    -
    rclone --include "*.txt" settier Hot remote:path/dir
    +
    rclone --include "*.txt" settier Hot remote:path/dir

Or just provide a remote directory and all files in the directory will be tiered

    -
    rclone settier tier remote:path/dir
    +
    rclone settier tier remote:path/dir
    rclone settier tier remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for settier

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone test

    Run a test command

    -

    Synopsis

    +

    Synopsis

    Rclone test is used to run test commands.

    Select which test command you want with the subcommand, eg

    -
    rclone test memory remote:
    +
    rclone test memory remote:

    Each subcommand has its own options which you can see in their help.

    NB Be careful running these commands, they may do strange things so reading their documentation first is recommended.

    -

    Options

    +

    Options

      -h, --help   help for test

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    • @@ -12984,48 +13709,57 @@ test makefiles - Make a random file hierarchy in a directory
    • rclone test memory - Load all the objects at remote:path into memory and report memory stats.
    • +
    • rclone test +speed - Run a speed test to the remote
    +

    rclone test changenotify

    Log any change notify requests for the remote passed in.

    rclone test changenotify remote: [flags]
    -

    Options

    +

    Options

      -h, --help                     help for changenotify
           --poll-interval Duration   Time to wait between polling for changes (default 10s)

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + + +

    rclone test histogram

    Makes a histogram of file name characters.

    -

    Synopsis

    +

    Synopsis

    This command outputs JSON which shows the histogram of characters used in filenames in the remote:path specified.

    The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression.

    rclone test histogram [remote:path] [flags]
    -

    Options

    +

    Options

      -h, --help   help for histogram

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + + +

    rclone test info

    Discovers file name or other limitations for paths.

    -

    Synopsis

    +

    Synopsis

Discovers what filenames and upload methods are possible to write to the paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of Go code for each one.

    NB this can create undeletable files and other -hazards - use with care

    +hazards - use with care!

    rclone test info [remote:path]+ [flags]
    -

    Options

    +

    Options

          --all                    Run all tests
           --check-base32768        Check can store all possible base32768 characters
           --check-control          Check control characters
    @@ -13038,15 +13772,18 @@ hazards - use with care

    --write-json string Write results to file

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + + +

    rclone test makefile

    Make files with random contents of the size given

    rclone test makefile <size> [<file>]+ [flags]
    -

    Options

    +

    Options

          --ascii      Fill files with random ASCII printable bytes only
           --chargen    Fill files with a ASCII chargen pattern
       -h, --help       help for makefile
    @@ -13056,15 +13793,18 @@ Run a test command
           --zero       Fill files with ASCII 0x00

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + + +

    rclone test makefiles

    Make a random file hierarchy in a directory

    rclone test makefiles <dir> [flags]
    -

    Options

    +

    Options

          --ascii                      Fill files with random ASCII printable bytes only
           --chargen                    Fill files with a ASCII chargen pattern
           --files int                  Number of files to create (default 1000)
    @@ -13082,27 +13822,75 @@ Run a test command
           --zero                       Fill files with ASCII 0x00

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + + +

    rclone test memory

    Load all the objects at remote:path into memory and report memory stats.

    rclone test memory remote:path [flags]
    -

    Options

    +

    Options

      -h, --help   help for memory

    See the global flags page for global options not listed here.

    -

    See Also

    +

    See Also

    + + + +

    rclone test speed

    +

    Run a speed test to the remote

    +

    Synopsis

    +

    Run a speed test to the remote.

    +

    This command runs a series of uploads and downloads to the remote, +measuring and printing the speed of each test using varying file sizes +and numbers of files.

    +

Test time can be inaccurate with small file caps and large files, as +it uses the results of an initial test to determine how many files to +use in each subsequent test.

    +

It is recommended to use the -q flag for simpler output, e.g.:

    +
    rclone test speed remote: -q
    +

    NB This command will create and delete files on the +remote in a randomly named directory which will be automatically removed +on a clean exit.

    +

    You can use the --json flag to only print the results in JSON +format.
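For instance, a shorter, machine-readable run might look like this (the values are illustrative):

rclone test speed remote: --file-cap 20 --test-time 10s --json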

    +
    rclone test speed <remote> [flags]
    +

    Options

    +
          --ascii                Fill files with random ASCII printable bytes only
    +      --chargen              Fill files with a ASCII chargen pattern
    +      --file-cap int         Maximum number of files to use in each test (default 100)
    +  -h, --help                 help for speed
    +      --json                 Output only results in JSON format
    +      --large SizeSuffix     Size of large files (default 1Gi)
    +      --medium SizeSuffix    Size of medium files (default 10Mi)
    +      --pattern              Fill files with a periodic pattern
    +      --seed int             Seed for the random number generator (0 for random) (default 1)
    +      --small SizeSuffix     Size of small files (default 1Ki)
    +      --sparse               Make the files sparse (appear to be filled with ASCII 0x00)
    +      --test-time Duration   Length for each test to run (default 15s)
    +      --zero                 Fill files with ASCII 0x00
    +

    See the global flags page for +global options not listed here.

    +

    See Also

    + + + +

    rclone touch

    Create new file or change file modification time.

    -

    Synopsis

    +

    Synopsis

    Set the modification time on file(s) as specified by remote:path to have the current time.

    If remote:path does not exist then a zero sized file will be created, @@ -13124,7 +13912,7 @@ of:

Note that the value of --timestamp is in UTC. If you want local time then add the --localtime flag.
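For example, to set an explicit (illustrative) timestamp on a file:

rclone touch remote:path/file.txt --timestamp 2006-01-02T15:04:05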

    rclone touch remote:path [flags]
    -

    Options

    +

    Options

      -h, --help               help for touch
           --localtime          Use localtime for timestamp, not UTC
       -C, --no-create          Do not create the file if it does not exist (implied with --recursive)
    @@ -13135,12 +13923,12 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Important Options

    Important flags useful for most commands

    -
      -n, --dry-run         Do a trial run with no permanent changes
    +
      -n, --dry-run         Do a trial run with no permanent changes
       -i, --interactive     Enable interactive mode
       -v, --verbose count   Print lots more stuff (repeat for more)

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -13165,20 +13953,23 @@ options not listed here.

    --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    Listing Options

    Flags for listing directories

    -
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
           --fast-list           Use recursive list if available; uses more memory but fewer transactions
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    rclone tree

    List the contents of the remote in a tree like fashion.

    -

    Synopsis

    +

    Synopsis

    Lists the contents of a remote in a similar way to the unix tree command.

    For example

    -
    $ rclone tree remote:path
    +
    $ rclone tree remote:path
     /
     ├── file1
     ├── file2
    @@ -13198,7 +13989,7 @@ options as they conflict with rclone's short options.

    For a more interactive navigation of the remote see the ncdu command.

    rclone tree remote:path [flags]
    -

    Options

    +

    Options

      -a, --all             All files are listed (list . files too)
       -d, --dirs-only       List directories only
           --dirsfirst       List directories before files (-U disables)
    @@ -13223,7 +14014,7 @@ href="https://rclone.org/flags/">global flags page for global
     options not listed here.

    Filter Options

    Flags for filtering directory listings

    -
          --delete-excluded                     Delete files on dest excluded from sync
    +
          --delete-excluded                     Delete files on dest excluded from sync
           --exclude stringArray                 Exclude files matching pattern
           --exclude-from stringArray            Read file exclude patterns from file (use - to read from stdin)
           --exclude-if-present stringArray      Exclude directories if filename is present
    @@ -13248,13 +14039,16 @@ options not listed here.

    --min-size SizeSuffix Only transfer files bigger than this in KiB or suffix B|K|M|G|T|P (default off)

    Listing Options

    Flags for listing directories

    -
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
    +
          --default-time Time   Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z)
           --fast-list           Use recursive list if available; uses more memory but fewer transactions
    -

    See Also

    +

    See Also

    + +
    • rclone - Show help for rclone commands, flags and backends.
    +

    Copying single files

    rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The @@ -13264,16 +14058,13 @@ error if it isn't.

    For example, suppose you have a remote with a file in called test.jpg, then you could copy just that file like this

    -
    rclone copy remote:test.jpg /tmp/download
    +
    rclone copy remote:test.jpg /tmp/download

    The file test.jpg will be placed inside /tmp/download.

    This is equivalent to specifying

    -
    rclone copy --files-from /tmp/files remote: /tmp/download
    +
    rclone copy --files-from /tmp/files remote: /tmp/download

    Where /tmp/files contains the single line

    -
    test.jpg
    +
    test.jpg

    It is recommended to use copy when copying individual files, not sync. They have pretty much the same effect but copy will use a lot less memory.

    @@ -13307,21 +14098,17 @@ leading / will refer to the root.

    backend should be provided on the command line (or in environment variables).

    Here are some examples:

    -
    rclone lsd --http-url https://pub.rclone.org :http:
    +
    rclone lsd --http-url https://pub.rclone.org :http:

    To list all the directories in the root of https://pub.rclone.org/.

    -
    rclone lsf --http-url https://example.com :http:path/to/dir
    +
    rclone lsf --http-url https://example.com :http:path/to/dir

    To list files and directories in https://example.com/path/to/dir/

    -
    rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir
    +
    rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir

    To copy files and directories in https://example.com/path/to/dir to /tmp/dir.

    -
    rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir
    +
    rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir

    To copy files and directories from example.com in the relative directory path/to/dir to /tmp/dir using sftp.

    @@ -13330,59 +14117,49 @@ using sftp.

    syntax, so instead of providing the arguments as command line parameters --http-url https://pub.rclone.org they are provided as part of the remote specification as a kind of connection string.

    -
    rclone lsd ":http,url='https://pub.rclone.org':"
    -rclone lsf ":http,url='https://example.com':path/to/dir"
    -rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir
    -rclone copy :sftp,host=example.com:path/to/dir /tmp/dir
    +
    rclone lsd ":http,url='https://pub.rclone.org':"
    +rclone lsf ":http,url='https://example.com':path/to/dir"
    +rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir
    +rclone copy :sftp,host=example.com:path/to/dir /tmp/dir

These can be used to modify existing remotes as well as to create new remotes with the on the fly syntax. This example is equivalent to adding the --drive-shared-with-me parameter to the remote gdrive:.

    -
    rclone lsf "gdrive,shared_with_me:path/to/dir"
    +
    rclone lsf "gdrive,shared_with_me:path/to/dir"

The major advantage to using the connection string style syntax is that it only applies to the remote, not to all the remotes of that type on the command line. A common confusion is this attempt to copy a file shared on Google Drive to the normal drive which does not work because the --drive-shared-with-me flag applies to both the source and the destination.

    -
    rclone copy --drive-shared-with-me gdrive:shared-file.txt gdrive:
    +
    rclone copy --drive-shared-with-me gdrive:shared-file.txt gdrive:

    However using the connection string syntax, this does work.

    -
    rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:
    +
    rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive:

    Note that the connection string only affects the options of the immediate backend. If for example gdriveCrypt is a crypt based on gdrive, then the following command will not work as intended, because shared_with_me is ignored by the crypt backend:

    -
    rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:
    +
    rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt:

    The connection strings have the following syntax

    -
    remote,parameter=value,parameter2=value2:path/to/dir
    -:backend,parameter=value,parameter2=value2:path/to/dir
    +
    remote,parameter=value,parameter2=value2:path/to/dir
    +:backend,parameter=value,parameter2=value2:path/to/dir

    If the parameter has a : or , then it must be placed in quotes " or ', so

    -
    remote,parameter="colon:value",parameter2="comma,value":path/to/dir
    -:backend,parameter='colon:value',parameter2='comma,value':path/to/dir
    +
    remote,parameter="colon:value",parameter2="comma,value":path/to/dir
    +:backend,parameter='colon:value',parameter2='comma,value':path/to/dir

    If a quoted value needs to include that quote, then it should be doubled, so

    -
    remote,parameter="with""quote",parameter2='with''quote':path/to/dir
    +
    remote,parameter="with""quote",parameter2='with''quote':path/to/dir

    This will make parameter be with"quote and parameter2 be with'quote.

    If you leave off the =parameter then rclone will substitute =true which works very well with flags. For example, to use s3 configured in the environment you could use:

    -
    rclone lsd :s3,env_auth:
    +
    rclone lsd :s3,env_auth:

    Which is equivalent to

    -
    rclone lsd :s3,env_auth=true:
    +
    rclone lsd :s3,env_auth=true:

    Note that on the command line you might need to surround these connection strings with " or ' to stop the shell interpreting any special characters within them.

    @@ -13390,34 +14167,31 @@ shell interpreting any special characters within them.

    which aren't, but if you aren't sure then enclose them in " and use ' as the inside quote. This syntax works on all OSes.

    -
    rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir
    +
    rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir

    On Linux/macOS some characters are still interpreted inside " strings in the shell (notably \ and $ and ") so if your strings contain those you can swap the roles of " and ' thus. (This syntax does not work on Windows.)

    -
    rclone copy ':http,url="https://example.com":path/to/dir' /tmp/dir
    +
    rclone copy ':http,url="https://example.com":path/to/dir' /tmp/dir
    +

    You can use rclone config +string to convert a remote into a connection string.

    Connection strings, config and logging

    If you supply extra configuration to a backend by command line flag, environment variable or connection string then rclone will add a suffix based on the hash of the config to the name of the remote, eg

    -
    rclone -vv lsf --s3-chunk-size 20M s3:
    +
    rclone -vv lsf --s3-chunk-size 20M s3:

    Has the log message

    -
    DEBUG : s3: detected overridden config - adding "{Srj1p}" suffix to name
    +
    DEBUG : s3: detected overridden config - adding "{Srj1p}" suffix to name

    This is so rclone can tell the modified remote apart from the unmodified remote when caching the backends.

    This should only be noticeable in the logs.

    This means that on the fly backends such as

    -
    rclone -vv lsf :s3,env_auth:
    +
    rclone -vv lsf :s3,env_auth:

    Will get their own names

    -
    DEBUG : :s3: detected overridden config - adding "{YTu53}" suffix to name
    +
    DEBUG : :s3: detected overridden config - adding "{YTu53}" suffix to name

    Valid remote names

    Remote names are case sensitive, and must adhere to the following rules:

    @@ -13460,11 +14234,11 @@ infrastructure without a proper certificate. You could supply the --no-check-certificate flag to rclone, but this will affect all the remotes. To make it just affect this remote you use an override. You could put this in the config file:

    -
    [remote]
    -type = XXX
    -...
    -override.no_check_certificate = true
    +
    [remote]
    +type = XXX
    +...
    +override.no_check_certificate = true

    or use it in the connection string remote,override.no_check_certificate=true: (or just remote,override.no_check_certificate:).

    @@ -13508,11 +14282,11 @@ as an override. For example, say you have a remote where you would always like to use the --checksum flag. You could supply the --checksum flag to rclone on every command line, but instead you could put this in the config file:

    -
    [remote]
    -type = XXX
    -...
    -global.checksum = true
    +
    [remote]
    +type = XXX
    +...
    +global.checksum = true

    or use it in the connection string remote,global.checksum=true: (or just remote,global.checksum:). This is equivalent to using the @@ -13538,25 +14312,23 @@ rules

    *, ?, $, ', ", etc.) then you must quote them. Use single quotes ' by default.

    -
    rclone copy 'Important files?' remote:backup
    +
    rclone copy 'Important files?' remote:backup

    If you want to send a ' you will need to use ", e.g.

    -
    rclone copy "O'Reilly Reviews" remote:backup
    +
    rclone copy "O'Reilly Reviews" remote:backup

    The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.

    Windows

    If your names have spaces in you need to put them in ", e.g.

    -
    rclone copy "E:\folder name\folder name\folder name" remote:backup
    +
    rclone copy "E:\folder name\folder name\folder name" remote:backup

    If you are using the root directory on its own then don't quote it (see #464 for why), e.g.

    -
    rclone copy E:\ remote:backup
    +
    rclone copy E:\ remote:backup

    Copying files or directories with : in the names

    rclone uses : to mark a remote name. This is, however, a @@ -13567,11 +14339,9 @@ path starting with a /, or use ./ as a current directory prefix.

    So to sync a directory called sync:me to a remote called remote: use

    -
    rclone sync --interactive ./sync:me remote:path
    +
    rclone sync --interactive ./sync:me remote:path

    or

    -
    rclone sync --interactive /full/path/to/sync:me remote:path
    +
    rclone sync --interactive /full/path/to/sync:me remote:path

    Server-side copy

    Most remotes (but not all - see the overview) @@ -13580,8 +14350,7 @@ support server-side copy.

    won't download all the files and re-upload them; it will instruct the server to copy them in place.

    Eg

    -
    rclone copy s3:oldbucket s3:newbucket
    +
    rclone copy s3:oldbucket s3:newbucket

    Will copy the contents of oldbucket to newbucket without downloading and re-uploading.

    Remotes which don't support server-side copy will @@ -13596,9 +14365,8 @@ download and re-upload.

    same.

    This can be used when scripting to make aged backups efficiently, e.g.

    -
    rclone sync --interactive remote:current-backup remote:previous-backup
    -rclone sync --interactive /path/to/files remote:current-backup
    +
    rclone sync --interactive remote:current-backup remote:previous-backup
    +rclone sync --interactive /path/to/files remote:current-backup

    Metadata support

    Metadata is data about a file (or directory) which isn't the contents of the file (or directory). Normally rclone only preserves the @@ -13780,7 +14548,7 @@ will take precedence if supplied in the metadata over reading the Content-Type or modification time of the source object.

    Hashes are not included in system metadata as there is a well defined way of reading those already.

    -

    Options

    +

    Options

    Rclone has a number of options to control its behaviour. These are documented below, and in the flags page.

    Options that take parameters can have the values passed in two ways, @@ -13877,15 +14645,16 @@ use the same remote as the destination of the sync. The backup directory must not overlap the destination directory without it being excluded by a filter rule.

    For example

    -
    rclone sync --interactive /path/to/local remote:current --backup-dir remote:old
    +
    rclone sync --interactive /path/to/local remote:current --backup-dir remote:old

    will sync /path/to/local to remote:current, but for any files which would have been updated or deleted will be stored in remote:old.

    If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you might want to pass --suffix with today's -date.

    +date. This can be done with --suffix $(date +%F) in bash, +and --suffix $(Get-Date -Format 'yyyy-MM-dd') in +PowerShell.

    See --compare-dest and --copy-dest.

    --bind string

    Local address to bind to for outgoing connections. This can be an @@ -13897,8 +14666,7 @@ addresses and --bind ::0 to force rclone to use IPv6 addresses.

    --bwlimit BwTimetable

    This option controls the bandwidth limit. For example

    -
    --bwlimit 10M
    +
    --bwlimit 10M

    would mean limit the upload and download bandwidth to 10 MiB/s. NB this is bytes per second not bits per second. To use a single limit, specify the @@ -13906,13 +14674,11 @@ desired bandwidth in KiB/s, or use a suffix B|K|M|G|T|P. The default is 0 which means to not limit bandwidth.

    The upload and download bandwidth can be specified separately, as --bwlimit UP:DOWN, so

    -
    --bwlimit 10M:100k
    +
    --bwlimit 10M:100k

    would mean limit the upload bandwidth to 10 MiB/s and the download bandwidth to 100 KiB/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use

    -
    --bwlimit 10M:off
    +
    --bwlimit 10M:off

    this would limit the upload bandwidth to 10 MiB/s but the download bandwidth would be unlimited.

    When specified as above the bandwidth limits last for the duration of @@ -13954,11 +14720,9 @@ Saturday it will be set to 1 MiB/s. From 20:00 on Sunday it will be unlimited.

    Timeslots without WEEKDAY are extended to the whole week. So this example:

    -
    --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
    +
    --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"

    Is equivalent to this:

    -
    --bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
    +
    --bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"

    Bandwidth limit apply to the data transfer for all backends. For most backends the directory listing bandwidth is also included (exceptions being the non HTTP backends, ftp, sftp and @@ -13975,19 +14739,16 @@ to remove the limitations of a long running rclone transfer and to restore it back to the value specified with --bwlimit quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:

    -
    kill -SIGUSR2 $(pidof rclone)
    +
    kill -SIGUSR2 $(pidof rclone)

    If you configure rclone with a remote control then you can use change the bwlimit dynamically:

    -
    rclone rc core/bwlimit rate=1M
    +
    rclone rc core/bwlimit rate=1M

    --bwlimit-file BwTimetable

    This option controls per file bandwidth limit. For the options see the --bwlimit flag.

    For example use this to allow no transfers to be faster than 1 MiB/s

    -
    --bwlimit-file 1M
    +
    --bwlimit-file 1M

    This can be used in conjunction with --bwlimit.

    Note that if a schedule is provided the file will use the schedule in effect at the start of the transfer.

    @@ -14180,11 +14941,11 @@ value is the internal lowercase name as returned by command rclone help backends. Comments are indicated by ; or # at the beginning of a line.

    Example:

    -
    [megaremote]
    -type = mega
    -user = you@example.com
    -pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH
    +
    [megaremote]
    +type = mega
    +user = you@example.com
    +pass = PDPcQVVjVtzFY-GTdDFozqBhTdsPg3qH

    Note that passwords are in obscured form. Also, many storage systems uses token-based authentication instead of @@ -14255,15 +15016,12 @@ the default time to the time rclone started up.

    --disable string

    This disables a comma separated list of optional features. For example to disable server-side move and server-side copy use:

    -
    --disable move,copy
    +
    --disable move,copy

    The features can be put in any case.

    To see a list of which features can be disabled use:

    -
    --disable help
    +
    --disable help

    The features a remote has can be seen in JSON format with:

    -
    rclone backend features remote:
    +
    rclone backend features remote:

    See the overview features and optional @@ -14297,8 +15055,7 @@ bandwidth in a network with DiffServ support (RFC 8622).

    For example, if you configured QoS on router to handle LE properly. Running:

    -
    rclone copy --dscp LE from:/from to:/to
    +
    rclone copy --dscp LE from:/from to:/to

    would make the priority lower than usual internet flows.

    This option has no effect on Windows (see golang/go#42728).

    @@ -14379,21 +15136,18 @@ downloads use --header-download.

    supported by --header-upload and --header-download so may be used as a workaround for those with care.

    -
    rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"
    +
    rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"

    --header-download stringArray

    Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers.

    -
    rclone sync --interactive s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
    +
    rclone sync --interactive s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"

    See GitHub issue #59 for currently supported backends.

    --header-upload stringArray

    Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers.

    -
    rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
    +
    rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"

    See GitHub issue #59 for currently supported backends.

    @@ -14543,15 +15297,14 @@ confirmation before destructive operations.

    It is recommended that you use this flag while learning rclone especially with rclone sync.

    For example

    -
    $ rclone delete --interactive /tmp/dir
    -rclone: delete "important-file.txt"?
    -y) Yes, this is OK (default)
    -n) No, skip this
    -s) Skip all delete operations with no more questions
    -!) Do all delete operations with no more questions
    -q) Exit rclone now.
    -y/n/s/!/q> n
    +
    $ rclone delete --interactive /tmp/dir
    +rclone: delete "important-file.txt"?
    +y) Yes, this is OK (default)
    +n) No, skip this
    +s) Skip all delete operations with no more questions
    +!) Do all delete operations with no more questions
    +q) Exit rclone now.
    +y/n/s/!/q> n

    The options mean

    • y: Yes, this operation should go @@ -14617,8 +15370,7 @@ ignored.

      If this option is not set, then the other log rotation options will be ignored.

      For example if the following flags are in use

      -
      rclone --log-file rclone.log --log-file-max-size 1M --log-file-max-backups 3
      +
      rclone --log-file rclone.log --log-file-max-size 1M --log-file-max-backups 3

      Then this will create log files which look like this

      $ ls -l
       -rw-------  1 user user  1048491 Apr 11 17:15 rclone-2025-04-11T17-15-29.998.log
      @@ -14699,8 +15451,7 @@ administrator to create the registry key in advance.

      must be greater (more severe) than or equal to the --log-level. For example to log DEBUG to a log file but ERRORs to the event log you would use

      -
      --log-file rclone.log --log-level DEBUG --windows-event-log ERROR
      +
      --log-file rclone.log --log-level DEBUG --windows-event-log ERROR

      This option is only supported Windows platforms.

      --use-json-log

      This switches the log format to JSON. The log messages are then @@ -14714,49 +15465,49 @@ complete log file is not strictly valid JSON and needs a parser that can handle it.

      The JSON logs will be printed on a single line, but are shown expanded here for clarity.

      -
      {
      -  "time": "2025-05-13T17:30:51.036237518+01:00",
      -  "level": "debug",
      -  "msg": "4 go routines active\n",
      -  "source": "cmd/cmd.go:298"
      -}
      +
      {
      +  "time": "2025-05-13T17:30:51.036237518+01:00",
      +  "level": "debug",
      +  "msg": "4 go routines active\n",
      +  "source": "cmd/cmd.go:298"
      +}

      Completed data transfer logs will have extra size information. Logs which are about a particular object will have object and objectType fields also.

      -
      {
      -  "time": "2025-05-13T17:38:05.540846352+01:00",
      -  "level": "info",
      -  "msg": "Copied (new) to: file2.txt",
      -  "size": 6,
      -  "object": "file.txt",
      -  "objectType": "*local.Object",
      -  "source": "operations/copy.go:368"
      -}
      +
      {
      +  "time": "2025-05-13T17:38:05.540846352+01:00",
      +  "level": "info",
      +  "msg": "Copied (new) to: file2.txt",
      +  "size": 6,
      +  "object": "file.txt",
      +  "objectType": "*local.Object",
      +  "source": "operations/copy.go:368"
      +}

      Stats logs will contain a stats field which is the same as returned from the rc call core/stats.

      -
      {
      -  "time": "2025-05-13T17:38:05.540912847+01:00",
      -  "level": "info",
      -  "msg": "...text version of the stats...",
      -  "stats": {
      -    "bytes": 6,
      -    "checks": 0,
      -    "deletedDirs": 0,
      -    "deletes": 0,
      -    "elapsedTime": 0.000904825,
      -    ...truncated for clarity...
      -    "totalBytes": 6,
      -    "totalChecks": 0,
      -    "totalTransfers": 1,
      -    "transferTime": 0.000882794,
      -    "transfers": 1
      -  },
      -  "source": "accounting/stats.go:569"
      -}
      +
      {
      +  "time": "2025-05-13T17:38:05.540912847+01:00",
      +  "level": "info",
      +  "msg": "...text version of the stats...",
      +  "stats": {
      +    "bytes": 6,
      +    "checks": 0,
      +    "deletedDirs": 0,
      +    "deletes": 0,
      +    "elapsedTime": 0.000904825,
      +    ...truncated for clarity...
      +    "totalBytes": 6,
      +    "totalChecks": 0,
      +    "totalTransfers": 1,
      +    "transferTime": 0.000882794,
      +    "transfers": 1
      +  },
      +  "source": "accounting/stats.go:569"
      +}

      --low-level-retries int

      This controls the number of low level retries rclone does.

      A low level retry is used to retry a failing operation - typically @@ -14882,10 +15633,9 @@ enclose it in ", if you want a literal " in an argument then enclose the argument in " and double the ". See CSV encoding for more info.

      -
      --metadata-mapper "python bin/test_metadata_mapper.py"
      ---metadata-mapper 'python bin/test_metadata_mapper.py "argument with a space"'
      ---metadata-mapper 'python bin/test_metadata_mapper.py "argument with ""two"" quotes"'
      +
      --metadata-mapper "python bin/test_metadata_mapper.py"
      +--metadata-mapper 'python bin/test_metadata_mapper.py "argument with a space"'
      +--metadata-mapper 'python bin/test_metadata_mapper.py "argument with ""two"" quotes"'

      This uses a simple JSON based protocol with input on STDIN and output on STDOUT. This will be called for every file and directory copied and may be called concurrently.

      @@ -14912,63 +15662,63 @@ known.
    • Metadata is the backend specific metadata as described in the backend docs.
    -
    {
    -    "SrcFs": "gdrive:",
    -    "SrcFsType": "drive",
    -    "DstFs": "newdrive:user",
    -    "DstFsType": "onedrive",
    -    "Remote": "test.txt",
    -    "Size": 6,
    -    "MimeType": "text/plain; charset=utf-8",
    -    "ModTime": "2022-10-11T17:53:10.286745272+01:00",
    -    "IsDir": false,
    -    "ID": "xyz",
    -    "Metadata": {
    -        "btime": "2022-10-11T16:53:11Z",
    -        "content-type": "text/plain; charset=utf-8",
    -        "mtime": "2022-10-11T17:53:10.286745272+01:00",
    -        "owner": "user1@domain1.com",
    -        "permissions": "...",
    -        "description": "my nice file",
    -        "starred": "false"
    -    }
    -}
    +
    {
    +  "SrcFs": "gdrive:",
    +  "SrcFsType": "drive",
    +  "DstFs": "newdrive:user",
    +  "DstFsType": "onedrive",
    +  "Remote": "test.txt",
    +  "Size": 6,
    +  "MimeType": "text/plain; charset=utf-8",
    +  "ModTime": "2022-10-11T17:53:10.286745272+01:00",
    +  "IsDir": false,
    +  "ID": "xyz",
    +  "Metadata": {
    +    "btime": "2022-10-11T16:53:11Z",
    +    "content-type": "text/plain; charset=utf-8",
    +    "mtime": "2022-10-11T17:53:10.286745272+01:00",
    +    "owner": "user1@domain1.com",
    +    "permissions": "...",
    +    "description": "my nice file",
    +    "starred": "false"
    +  }
    +}

    The program should then modify the input as desired and send it to STDOUT. The returned Metadata field will be used in its entirety for the destination object. Any other fields will be ignored. Note in this example we translate user names and permissions and add something to the description:

    -
    {
    -    "Metadata": {
    -        "btime": "2022-10-11T16:53:11Z",
    -        "content-type": "text/plain; charset=utf-8",
    -        "mtime": "2022-10-11T17:53:10.286745272+01:00",
    -        "owner": "user1@domain2.com",
    -        "permissions": "...",
    -        "description": "my nice file [migrated from domain1]",
    -        "starred": "false"
    -    }
    -}
    +
    {
    +  "Metadata": {
    +    "btime": "2022-10-11T16:53:11Z",
    +    "content-type": "text/plain; charset=utf-8",
    +    "mtime": "2022-10-11T17:53:10.286745272+01:00",
    +    "owner": "user1@domain2.com",
    +    "permissions": "...",
    +    "description": "my nice file [migrated from domain1]",
    +    "starred": "false"
    +  }
    +}

    Metadata can be removed here too.

    An example python program might look something like this to implement the above transformations.

    -
    import sys, json
    -
    -i = json.load(sys.stdin)
    -metadata = i["Metadata"]
    -# Add tag to description
    -if "description" in metadata:
    -    metadata["description"] += " [migrated from domain1]"
    -else:
    -    metadata["description"] = "[migrated from domain1]"
    -# Modify owner
    -if "owner" in metadata:
    -    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
    -o = { "Metadata": metadata }
    -json.dump(o, sys.stdout, indent="\t")
    +
    import sys, json
    +
    +i = json.load(sys.stdin)
    +metadata = i["Metadata"]
    +# Add tag to description
    +if "description" in metadata:
    +    metadata["description"] += " [migrated from domain1]"
    +else:
    +    metadata["description"] = "[migrated from domain1]"
    +# Modify owner
    +if "owner" in metadata:
    +    metadata["owner"] = metadata["owner"].replace("domain1.com", "domain2.com")
    +o = { "Metadata": metadata }
    +json.dump(o, sys.stdout, indent="\t")

    You can find this example (slightly expanded) in the rclone source code at bin/test_metadata_mapper.py.

    @@ -15225,10 +15975,9 @@ enclose the argument in " and double the ". See CSV encoding for more info.

    Eg

    -
    --password-command "echo hello"
    ---password-command 'echo "hello with space"'
    ---password-command 'echo "hello with ""quotes"" and space"'
    +
    --password-command "echo hello"
    +--password-command 'echo "hello with space"'
    +--password-command 'echo "hello with ""quotes"" and space"'

    Note that when changing the configuration password the environment variable RCLONE_PASSWORD_CHANGE=1 will be set. This can be used to distinguish initial decryption of the config file from the new @@ -15387,8 +16136,7 @@ use the same remote as the destination of the sync.

    or with --backup-dir. See --backup-dir for more info.

    For example

    -
    rclone copy --interactive /path/to/local/file remote:current --suffix .bak
    +
    rclone copy --interactive /path/to/local/file remote:current --suffix .bak

    will copy /path/to/local to remote:current, but for any files which would have been updated or deleted have .bak added.

    @@ -15396,8 +16144,7 @@ added.

    without --backup-dir then it is recommended to put a filter rule in excluding the suffix otherwise the sync will delete the backup files.

    -
    rclone sync --interactive /path/to/local/file remote:current --suffix .bak --exclude "*.bak"
    +
    rclone sync --interactive /path/to/local/file remote:current --suffix .bak --exclude "*.bak"

    --suffix-keep-extension

    When using --suffix, setting this causes rclone put the SUFFIX before the extension of the files that it backs up rather than @@ -15723,34 +16470,32 @@ password to your configuration. This means that you will have to supply the password every time you start rclone.

    To add a password to your rclone configuration, execute rclone config.

    -
    $ rclone config
    -Current remotes:
    -
    -e) Edit existing remote
    -n) New remote
    -d) Delete remote
    -s) Set configuration password
    -q) Quit config
    -e/n/d/s/q>
    +
    $ rclone config
    +Current remotes:
    +
    +e) Edit existing remote
    +n) New remote
    +d) Delete remote
    +s) Set configuration password
    +q) Quit config
    +e/n/d/s/q>

    Go into s, Set configuration password:

    -
    e/n/d/s/q> s
    -Your configuration is not encrypted.
    -If you add a password, you will protect your login information to cloud services.
    -a) Add Password
    -q) Quit to main menu
    -a/q> a
    -Enter NEW configuration password:
    -password:
    -Confirm NEW password:
    -password:
    -Password set
    -Your configuration is encrypted.
    -c) Change Password
    -u) Unencrypt configuration
    -q) Quit to main menu
    -c/u/q>
    +
    e/n/d/s/q> s
    +Your configuration is not encrypted.
    +If you add a password, you will protect your login information to cloud services.
    +a) Add Password
    +q) Quit to main menu
    +a/q> a
    +Enter NEW configuration password:
    +password:
    +Confirm NEW password:
    +password:
    +Password set
    +Your configuration is encrypted.
    +c) Change Password
    +u) Unencrypt configuration
    +q) Quit to main menu
    +c/u/q>

    Your configuration is now encrypted, and every time you start rclone you will have to supply the password. See below for details. In the same menu, you can change the password or completely remove encryption from @@ -15784,11 +16529,11 @@ password, in which case it will be used for decrypting the configuration.

    You can set this for a session from a script. For unix like systems save this to a file called set-rclone-password:

    -
    #!/bin/echo Source this file don't run it
    -
    -read -s RCLONE_CONFIG_PASS
    -export RCLONE_CONFIG_PASS
    +
    #!/bin/echo Source this file don't run it
    +
    +read -s RCLONE_CONFIG_PASS
    +export RCLONE_CONFIG_PASS

    Then source the file when you want to use it. From the shell you would do source set-rclone-password. It will then ask you for the password and set it in the environment variable.

    @@ -15801,8 +16546,7 @@ command line argument or via the RCLONE_PASSWORD_COMMAND environment variable.

    One useful example of this is using the passwordstore application to retrieve the password:

    -
    export RCLONE_PASSWORD_COMMAND="pass rclone/config"
    +
    export RCLONE_PASSWORD_COMMAND="pass rclone/config"

    If the passwordstore password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the passwordstore system, and is @@ -15841,12 +16585,10 @@ at rest or transfer. Detailed instructions for popular OSes:

    Mac

    • Generate and store a password

      -
      security add-generic-password -a rclone -s config -w $(openssl rand -base64 40)
    • +
      security add-generic-password -a rclone -s config -w $(openssl rand -base64 40)
    • Add the retrieval instruction to your .zprofile / .profile

      -
      export RCLONE_PASSWORD_COMMAND="/usr/bin/security find-generic-password -a rclone -s config -w"
    • +
      export RCLONE_PASSWORD_COMMAND="/usr/bin/security find-generic-password -a rclone -s config -w"

    Linux

      @@ -15856,18 +16598,18 @@ Let's install the "pass" utility using a package manager, e.g. href="https://www.passwordstore.org/#download">etc.; then initialize a password store: pass init rclone.

    • Generate and store a password

      -
      echo $(openssl rand -base64 40) | pass insert -m rclone/config
    • +
      echo $(openssl rand -base64 40) | pass insert -m rclone/config
    • Add the retrieval instruction

      -
      export RCLONE_PASSWORD_COMMAND="/usr/bin/pass rclone/config"
    • +
      export RCLONE_PASSWORD_COMMAND="/usr/bin/pass rclone/config"

    Windows

    • Generate and store a password

      -
      New-Object -TypeName PSCredential -ArgumentList "rclone", (ConvertTo-SecureString -String ([System.Web.Security.Membership]::GeneratePassword(40, 10)) -AsPlainText -Force) | Export-Clixml -Path "rclone-credential.xml"
    • +
      New-Object -TypeName PSCredential -ArgumentList "rclone", (ConvertTo-SecureString -String ([System.Web.Security.Membership]::GeneratePassword(40, 10)) -AsPlainText -Force) | Export-Clixml -Path "rclone-credential.xml"
    • Add the password retrieval instruction

      -
      [Environment]::SetEnvironmentVariable("RCLONE_PASSWORD_COMMAND", "[System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR((Import-Clixml -Path "rclone-credential.xml").Password))")
    • +
      [Environment]::SetEnvironmentVariable("RCLONE_PASSWORD_COMMAND", "[System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR((Import-Clixml -Path "rclone-credential.xml").Password))")

    Encrypt the config file (all systems)

    @@ -16045,7 +16787,7 @@ reached

    Environment variables

    Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.

    -

    Options

    +

    Options

    Every option in rclone can have its default set by environment variable.

    To find the name of the environment variable, first, take the long @@ -16117,14 +16859,13 @@ variable name, so it can only contain letters, digits, or the

    For example, to configure an S3 remote named mys3: without a config file (using unix ways of setting environment variables):

    -
    $ export RCLONE_CONFIG_MYS3_TYPE=s3
    -$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
    -$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
    -$ rclone lsd mys3:
    -          -1 2016-09-21 12:54:21        -1 my-bucket
    -$ rclone listremotes | grep mys3
    -mys3:
    +
    $ export RCLONE_CONFIG_MYS3_TYPE=s3
    +$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
    +$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
    +$ rclone lsd mys3:
    +          -1 2016-09-21 12:54:21        -1 my-bucket
    +$ rclone listremotes | grep mys3
    +mys3:

    Note that if you want to create a remote using environment variables you must create the ..._TYPE variable as above.

    Note that the name of a remote created using environment variable is @@ -16133,11 +16874,10 @@ as documented above. You must write the name in uppercase in the environment variable, but as seen from example above it will be listed and can be accessed in lowercase, while you can also refer to the same remote in uppercase:

    -
    $ rclone lsd mys3:
    -          -1 2016-09-21 12:54:21        -1 my-bucket
    -$ rclone lsd MYS3:
    -          -1 2016-09-21 12:54:21        -1 my-bucket
    +
    $ rclone lsd mys3:
    +          -1 2016-09-21 12:54:21        -1 my-bucket
    +$ rclone lsd MYS3:
    +          -1 2016-09-21 12:54:21        -1 my-bucket

    Note that you can only set the options of the immediate backend, so RCLONE_CONFIG_MYS3CRYPT_ACCESS_KEY_ID has no effect, if myS3Crypt is a crypt remote based on an S3 remote. However RCLONE_S3_ACCESS_KEY_ID will @@ -16145,8 +16885,7 @@ set the access key of all remotes using S3, including myS3Crypt.

    Note also that now rclone has connection strings, it is probably easier to use those instead which makes the above example

    -
    rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:
    +
    rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX:

    Precedence

    The various different methods of backend configuration are read in this order and the first one with a value is used.

    @@ -16206,18 +16945,20 @@ directory holding the config file.

    Configuring rclone on a remote / headless machine

    Some of the configurations (those involving oauth2) require an -Internet connected web browser.

    -

    If you are trying to set rclone up on a remote or headless box with -no browser available on it (e.g. a NAS or a server in a datacenter) then -you will need to use an alternative means of configuration. There are -two ways of doing it, described below.

    +internet-connected web browser.

    +

    If you are trying to set rclone up on a remote or headless machine +with no browser available on it (e.g. a NAS or a server in a +datacenter), then you will need to use an alternative means of +configuration. There are three ways of doing it, described below.

    Configuring using rclone authorize

    -

    On the headless box run rclone config but answer -N to the Use auto config? question.

    -
    Use auto config?
    - * Say Y if not sure
    - * Say N if you are working on a remote or headless machine
    +

    On the headless machine run rclone +config, but answer N to the question +Use web browser to automatically authenticate rclone with remote?.

    +
    Use web browser to automatically authenticate rclone with remote?
    + * Say Y if the machine running rclone has a web browser you can use
    + * Say N if running rclone on a (remote) machine without web browser access
    +If not sure try Y. If Y failed, try N.
     
     y) Yes (default)
     n) No
    @@ -16229,25 +16970,29 @@ a web browser available.
     For more help and alternate methods see: https://rclone.org/remote_setup/
     Execute the following on the machine with the web browser (same rclone
     version recommended):
    -    rclone authorize "onedrive"
    +        rclone authorize "onedrive"
     Then paste the result.
     Enter a value.
     config_token>
    -

    Then on your main desktop machine

    -
    rclone authorize "onedrive"
    -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
    -Log in and authorize rclone for access
    -Waiting for code...
    +

    Then on your main desktop machine, run rclone +authorize.

    +
    rclone authorize "onedrive"
    +NOTICE: Make sure your Redirect URL is set to "http://localhost:53682/" in your custom config.
    +NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx
    +NOTICE: Log in and authorize rclone for access
    +NOTICE: Waiting for code...
    +
     Got code
     Paste the following into your remote machine --->
     SECRET_TOKEN
     <---End paste
    -

    Then back to the headless box, paste in the code

    -
    config_token> SECRET_TOKEN
    +

    Then back to the headless machine, paste in the code.

    +
    config_token> SECRET_TOKEN
     --------------------
     [acd12]
    -client_id = 
    -client_secret = 
    +client_id =
    +client_secret =
     token = SECRET_TOKEN
     --------------------
     y) Yes this is OK
    @@ -16256,35 +17001,47 @@ d) Delete this remote
     y/e/d>

    Configuring by copying the config file

    -

    Rclone stores all of its config in a single configuration file. This -can easily be copied to configure a remote rclone.

    -

    So first configure rclone on your desktop machine with

    -
    rclone config
    -

    to set up the config file.

    -

    Find the config file by running rclone config file, for -example

    -
    $ rclone config file
    +

    Rclone stores all of its configuration in a single file. This can +easily be copied to configure a remote rclone (although some backends +does not support reusing the same configuration, consult your backend +documentation to be sure).

    +

    Start by running rclone config +to create the configuration file on your desktop machine.

    +
    rclone config
    +

    Then locate the file by running rclone config file.

    +
    $ rclone config file
     Configuration file is stored at:
     /home/user/.rclone.conf
    -

    Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) -and place it in the correct place (use rclone config file -on the remote box to find out where).

    +

    Finally, transfer the file to the remote machine (scp, cut paste, +ftp, sftp, etc.) and place it in the correct location (use rclone config file on the remote +machine to find out where).

    Configuring using SSH Tunnel

    -

    Linux and MacOS users can utilize SSH Tunnel to redirect the headless -box port 53682 to local machine by using the following command:

    -
    ssh -L localhost:53682:localhost:53682 username@remote_server
    -

    Then on the headless box run rclone config and answer -Y to the Use auto config? question.

    -
    Use auto config?
    - * Say Y if not sure
    - * Say N if you are working on a remote or headless machine
    +

    If you have an SSH client installed on your local machine, you can +set up an SSH tunnel to redirect the port 53682 into the headless +machine by using the following command:

    +
    ssh -L localhost:53682:localhost:53682 username@remote_server
    +

    Then on the headless machine run rclone config and answer +Y to the question +Use web browser to automatically authenticate rclone with remote?.

    +
    Use web browser to automatically authenticate rclone with remote?
    + * Say Y if the machine running rclone has a web browser you can use
    + * Say N if running rclone on a (remote) machine without web browser access
    +If not sure try Y. If Y failed, try N.
     
     y) Yes (default)
     n) No
    -y/n> y
    -

    Then copy and paste the auth url -http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx to the -browser on your local machine, complete the auth and it is done.

    +y/n> y +NOTICE: Make sure your Redirect URL is set to "http://localhost:53682/" in your custom config. +NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +NOTICE: Log in and authorize rclone for access +NOTICE: Waiting for code...
    +

    Finally, copy and paste the presented URL +http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx to +the browser on your local machine, complete the auth and you are +done.

    Filtering, includes and excludes

    Filter flags determine which files rclone sync, @@ -16389,7 +17146,8 @@ Windows.

    bash uses) to make things easy for users. However this does not provide absolute control over the matching, so for advanced users rclone also provides a regular expression syntax.

    -

    The regular expressions used are as defined in the Rclone generally accepts Perl-style regular expressions, the exact +syntax is defined in the Go regular expression reference. Regular expressions should be enclosed in {{ }}. They will match only the last path segment if the glob @@ -16761,12 +17519,10 @@ files on remote: with suffix .png and .jpg. All other files are excluded.

    E.g. multiple rclone copy commands can be combined with --include and a pattern-list.

    -
    rclone copy /vol1/A remote:A
    -rclone copy /vol1/B remote:B
    +
    rclone copy /vol1/A remote:A
    +rclone copy /vol1/B remote:B

    is equivalent to:

    -
    rclone copy /vol1 remote: --include "{A,B}/**"
    +
    rclone copy /vol1 remote: --include "{A,B}/**"

    E.g. rclone ls remote:/wheat --include "??[^[:punct:]]*" lists the files remote: directory wheat (and subdirectories) whose third character is not punctuation. This example @@ -16926,8 +17682,7 @@ without leading /, e.g.

    user1/dir/ford user2/prefect

    Then copy these to a remote:

    -
    rclone copy --files-from files-from.txt /home remote:backup
    +
    rclone copy --files-from files-from.txt /home remote:backup

    The three files are transferred as follows:

    /home/user1/42       → remote:backup/user1/important
     /home/user1/dir/ford → remote:backup/user1/dir/file
    @@ -16938,8 +17693,7 @@ class="sourceCode sh">
    rclone copy --files-from files-from.txt / remote:backup
    +
    rclone copy --files-from files-from.txt / remote:backup

    Then there will be an extra home directory on the remote:

    /home/user1/42       → remote:backup/home/user1/42
    @@ -17034,8 +17788,7 @@ subset of files, useful for:

    Syntax

    The flag takes two parameters expressed as a fraction:

    -
    --hash-filter K/N
    +
    --hash-filter K/N
    • N: The total number of partitions (must be a positive integer).
    • @@ -17054,8 +17807,7 @@ without duplication.

      Random Partition Selection

      Use @ as K to randomly select a partition:

      -
      --hash-filter @/M
      +
      --hash-filter @/M

      For example, --hash-filter @/3 will randomly select a number between 0 and 2. This will stay constant across retries.

      How It Works

      @@ -17084,35 +17836,32 @@ this could delete unselected files. partitions

      Assuming the current directory contains file1.jpg through file9.jpg:

      -
      $ rclone lsf --hash-filter 0/4 .
      -file1.jpg
      -file5.jpg
      -
      -$ rclone lsf --hash-filter 1/4 .
      -file3.jpg
      -file6.jpg
      -file9.jpg
      -
      -$ rclone lsf --hash-filter 2/4 .
      -file2.jpg
      -file4.jpg
      -
      -$ rclone lsf --hash-filter 3/4 .
      -file7.jpg
      -file8.jpg
      -
      -$ rclone lsf --hash-filter 4/4 . # the same as --hash-filter 0/4
      -file1.jpg
      -file5.jpg
      +
      $ rclone lsf --hash-filter 0/4 .
      +file1.jpg
      +file5.jpg
      +
      +$ rclone lsf --hash-filter 1/4 .
      +file3.jpg
      +file6.jpg
      +file9.jpg
      +
      +$ rclone lsf --hash-filter 2/4 .
      +file2.jpg
      +file4.jpg
      +
      +$ rclone lsf --hash-filter 3/4 .
      +file7.jpg
      +file8.jpg
      +
      +$ rclone lsf --hash-filter 4/4 . # the same as --hash-filter 0/4
      +file1.jpg
      +file5.jpg
      Syncing the first quarter of files
      -
      rclone sync --hash-filter 1/4 source:path destination:path
      +
      rclone sync --hash-filter 1/4 source:path destination:path
      Checking a random 1% of files for integrity
      -
      rclone check --download --hash-filter @/100 source:path destination:path
      +
      rclone check --download --hash-filter @/100 source:path destination:path

      Other flags

      --delete-excluded @@ -17124,8 +17873,7 @@ with --dry-run and -v first.

      which are excluded from the command.

      E.g. the scope of rclone sync --interactive A: B: can be restricted:

      -
      rclone --min-size 50k --delete-excluded sync A: B:
      +
      rclone --min-size 50k --delete-excluded sync A: B:

      All files on B: which are less than 50 KiB are deleted because they are excluded from the rclone sync command.

      filter patterns or regular expressions.

      For example if you wished to list only local files with a mode of 100664 you could do that with:

      -
      rclone lsf -M --files-only --metadata-include "mode=100664" .
      +
      rclone lsf -M --files-only --metadata-include "mode=100664" .

      Or if you wished to show files with an atime, mtime or btime at a given date:

      -
      rclone lsf -M --files-only --metadata-include "[abm]time=2022-12-16*" .
      +
      rclone lsf -M --files-only --metadata-include "[abm]time=2022-12-16*" .

      Like file filtering, metadata filtering only applies to files not to directories.

      The filters can be applied using these flags.

      @@ -17200,8 +17946,7 @@ somewhat experimental at the moment so things may be subject to change.

      Run this command in a terminal and rclone will download and then display the GUI in a web browser.

      -
      rclone rcd --rc-web-gui
      +
      rclone rcd --rc-web-gui

      This will produce logs like this and rclone needs to continue to run to serve the GUI:

      2019/08/25 11:40:14 NOTICE: A new release for gui is present at https://github.com/rclone/rclone-webui-react/releases/download/v0.0.6/currentbuild.zip
      @@ -17409,49 +18154,43 @@ command

      Rclone itself implements the remote control protocol in its rclone rc command.

      You can use it like this:

      -
      $ rclone rc rc/noop param1=one param2=two
      -{
      -    "param1": "one",
      -    "param2": "two"
      -}
      +
      $ rclone rc rc/noop param1=one param2=two
      +{
      +    "param1": "one",
      +    "param2": "two"
      +}

      If the remote is running on a different URL than the default http://localhost:5572/, use the --url option to specify it:

      -
      rclone rc --url http://some.remote:1234/ rc/noop
      +
      rclone rc --url http://some.remote:1234/ rc/noop

      Or, if the remote is listening on a Unix socket, use the --unix-socket option instead:

      -
      rclone rc --unix-socket /tmp/rclone.sock rc/noop
      +
      rclone rc --unix-socket /tmp/rclone.sock rc/noop

      Run rclone rc on its own, without any commands, to see the help for the installed remote control commands. Note that this also needs to connect to the remote server.

      JSON input

      rclone rc also supports a --json flag which can be used to send more complicated input parameters.

      -
      $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
      -{
      -    "p1": [
      -        1,
      -        "2",
      -        null,
      -        4
      -    ],
      -    "p2": {
      -        "a": 1,
      -        "b": 2
      -    }
      -}
      +
      $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop
      +{
      +    "p1": [
      +        1,
      +        "2",
      +        null,
      +        4
      +    ],
      +    "p2": {
      +        "a": 1,
      +        "b": 2
      +    }
      +}

      If the parameter being passed is an object then it can be passed as a JSON string rather than using the --json flag which simplifies the command line.

      -
      rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'
      +
      rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}'

      Rather than

      -
      rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'
      +
      rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}'

      Special parameters

      The rc interface supports some special parameters which apply to all commands. These start with _ to show @@ -17462,57 +18201,74 @@ jobs with _async = true default jobs are executed immediately as they are created or synchronously.

      If _async has a true value when supplied to an rc call -then it will return immediately with a job id and the task will be run -in the background. The job/status call can be used to get -information of the background job. The job can be queried for up to 1 -minute after it has finished.

      +then it will return immediately with a job id and execute id, and the +task will be run in the background. The job/status call can +be used to get information of the background job. The job can be queried +for up to 1 minute after it has finished.

      It is recommended that potentially long running jobs, e.g. sync/sync, sync/copy, sync/move, operations/purge are run with the _async flag to avoid any potential problems with the HTTP request and response timing out.

      Starting a job with the _async flag:

      -
      $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
      -{
      -    "jobid": 2
      -}
      +
      $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop
      +{
      +    "jobid": 2,
      +    "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7"
      +}
      +

      The jobid is a unique identifier for the job within this +rclone instance. The executeId identifies the rclone +process instance and changes after rclone restart. Together, the pair +(executeId, jobid) uniquely identifies a job +across rclone restarts.

      Query the status to see if the job has finished. For more information on the meaning of these return parameters see the job/status call.

      -
      $ rclone rc --json '{ "jobid":2 }' job/status
      -{
      -    "duration": 0.000124163,
      -    "endTime": "2018-10-27T11:38:07.911245881+01:00",
      -    "error": "",
      -    "finished": true,
      -    "id": 2,
      -    "output": {
      -        "_async": true,
      -        "p1": [
      -            1,
      -            "2",
      -            null,
      -            4
      -        ],
      -        "p2": {
      -            "a": 1,
      -            "b": 2
      -        }
      -    },
      -    "startTime": "2018-10-27T11:38:07.911121728+01:00",
      -    "success": true
      -}
      -

      job/list can be used to show the running or recently -completed jobs

      -
      $ rclone rc job/list
      -{
      -    "jobids": [
      -        2
      -    ]
      -}
      +
      $ rclone rc --json '{ "jobid":2 }' job/status
      +{
      +    "duration": 0.000124163,
      +    "endTime": "2018-10-27T11:38:07.911245881+01:00",
      +    "error": "",
      +    "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7",
      +    "finished": true,
      +    "id": 2,
      +    "output": {
      +        "_async": true,
      +        "p1": [
      +            1,
      +            "2",
      +            null,
      +            4
      +        ],
      +        "p2": {
      +            "a": 1,
      +            "b": 2
      +        }
      +    },
      +    "startTime": "2018-10-27T11:38:07.911121728+01:00",
      +    "success": true
      +}
      +

      job/list can be used to show running or recently +completed jobs along with their status

      +
      $ rclone rc job/list
      +{
      +    "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7",
      +    "finished_ids": [
      +        1
      +    ],
      +    "jobids": [
      +        1,
      +        2
      +    ],
      +    "running_ids": [
      +        2
      +    ]
      +}
      +

      This shows: - executeId - the current rclone instance ID +(same for all jobs, changes after restart) - jobids - array +of all job IDs (both running and finished) - running_ids - +array of currently running job IDs - finished_ids - array +of finished job IDs

      Setting config flags with _config

      If you wish to set config (the equivalent of the global flags) for @@ -17520,29 +18276,26 @@ the duration of an rc call only then pass in the _config parameter.

      This should be in the same format as the main key returned by options/get.

      -
      rclone rc --loopback options/get blocks=main
      +
      rclone rc --loopback options/get blocks=main

      You can see more help on these options with this command (see the options blocks section for more info).

      -
      rclone rc --loopback options/info blocks=main
      +
      rclone rc --loopback options/info blocks=main

      For example, if you wished to run a sync with the --checksum parameter, you would pass this parameter in your JSON blob.

      -
      "_config":{"CheckSum": true}
      +
      "_config":{"CheckSum": true}

      If using rclone rc this could be passed as

      -
      rclone rc sync/sync ... _config='{"CheckSum": true}'
      +
      rclone rc sync/sync ... _config='{"CheckSum": true}'

      Any config parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.

      Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format.

      -
      "_config":{"BufferSize": "42M"}
      -"_config":{"BufferSize": 44040192}
      +
      "_config":{"BufferSize": "42M"}
      +"_config":{"BufferSize": 44040192}

      If you wish to check the _config assignment has worked properly then calling options/local will show what the value got set to.

      @@ -17552,30 +18305,26 @@ _filter pass in the _filter parameter.

      This should be in the same format as the filter key returned by options/get.

      -
      rclone rc --loopback options/get blocks=filter
      +
      rclone rc --loopback options/get blocks=filter

      You can see more help on these options with this command (see the options blocks section for more info).

      -
      rclone rc --loopback options/info blocks=filter
      +
      rclone rc --loopback options/info blocks=filter

      For example, if you wished to run a sync with these flags

      -
      --max-size 1M --max-age 42s --include "a" --include "b"
      +
      --max-size 1M --max-age 42s --include "a" --include "b"

      you would pass this parameter in your JSON blob.

      -
      "_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}
      +
      "_filter":{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}

      If using rclone rc this could be passed as

      -
      rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'
      +
      rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'

      Any filter parameters you don't set will inherit the global defaults which were set with command line flags or environment variables.

      Note that it is possible to set some values as strings or integers - see data types for more info. Here is an example setting the equivalent of --buffer-size in string or integer format.

      -
      "_filter":{"MinSize": "42M"}
      -"_filter":{"MinSize": 44040192}
      +
      "_filter":{"MinSize": "42M"}
      +"_filter":{"MinSize": 44040192}

      If you wish to check the _filter assignment has worked properly then calling options/local will show what the value got set to.

      @@ -17589,12 +18338,11 @@ be grouped under that value. This allows caller to group stats under their own name.

      Stats for specific group can be accessed by passing group to core/stats:

      -
      $ rclone rc --json '{ "group": "job/1" }' core/stats
      -{
      -    "speed": 12345
      -    ...
      -}
      +
      $ rclone rc --json '{ "group": "job/1" }' core/stats
      +{
      +    "speed": 12345
      +    ...
      +}

      Data types

      When the API returns types, these will mostly be straight forward integer, string or boolean types.

      @@ -17756,36 +18504,36 @@ allowed unless Required or Default is set)

      An example of this might be the --log-level flag. Note that the Name of the option becomes the command line flag with _ replaced with -.

      -
      {
      -    "Advanced": false,
      -    "Default": 5,
      -    "DefaultStr": "NOTICE",
      -    "Examples": [
      -        {
      -            "Help": "",
      -            "Value": "EMERGENCY"
      -        },
      -        {
      -            "Help": "",
      -            "Value": "ALERT"
      -        },
      -        ...
      -    ],
      -    "Exclusive": true,
      -    "FieldName": "LogLevel",
      -    "Groups": "Logging",
      -    "Help": "Log level DEBUG|INFO|NOTICE|ERROR",
      -    "Hide": 0,
      -    "IsPassword": false,
      -    "Name": "log_level",
      -    "NoPrefix": true,
      -    "Required": true,
      -    "Sensitive": false,
      -    "Type": "LogLevel",
      -    "Value": null,
      -    "ValueStr": "NOTICE"
      -},
      +
      {
      +    "Advanced": false,
      +    "Default": 5,
      +    "DefaultStr": "NOTICE",
      +    "Examples": [
      +        {
      +            "Help": "",
      +            "Value": "EMERGENCY"
      +        },
      +        {
      +            "Help": "",
      +            "Value": "ALERT"
      +        },
      +        ...
      +    ],
      +    "Exclusive": true,
      +    "FieldName": "LogLevel",
      +    "Groups": "Logging",
      +    "Help": "Log level DEBUG|INFO|NOTICE|ERROR",
      +    "Hide": 0,
      +    "IsPassword": false,
      +    "Name": "log_level",
      +    "NoPrefix": true,
      +    "Required": true,
      +    "Sensitive": false,
      +    "Type": "LogLevel",
      +    "Value": null,
      +    "ValueStr": "NOTICE"
      +},

      Note that the Help may be multiple lines separated by \n. The first line will always be a short sentence and this is the sentence shown when running rclone help flags.

      @@ -17811,26 +18559,27 @@ set. If the local backend is desired then type should be set to local. If _root isn't specified then it defaults to the root of the remote.

      For example this JSON is equivalent to remote:/tmp

      -
      {
      -    "_name": "remote",
      -    "_root": "/tmp"
      -}
      +
      {
      +    "_name": "remote",
      +    "_root": "/tmp"
      +}

      And this is equivalent to :sftp,host='example.com':/tmp

      -
      {
      -    "type": "sftp",
      -    "host": "example.com",
      -    "_root": "/tmp"
      -}
      +
      {
      +    "type": "sftp",
      +    "host": "example.com",
      +    "_root": "/tmp"
      +}

      And this is equivalent to /tmp/dir

      -
      {
      -    "type": "local",
      -    "_root": "/tmp/dir"
      -}
      +
      {
      +    "type": "local",
      +    "_root": "/tmp/dir"
      +}

      Supported commands

      +

      backend/command: Runs a backend command.

      This takes the following parameters:

        @@ -18002,7 +18751,7 @@ file

        Unlocks the config file if it is locked.

        Parameters:

          -
        • 'config_password' - password to unlock the config file
        • +
        • 'configPassword' - password to unlock the config file

        A good idea is to disable AskPassword before making this call

        Authentication is required for this call.

        @@ -18263,18 +19012,23 @@ be returned.

        } ] }
    -

    core/version: Shows the current version of rclone -and the go runtime.

    -

    This shows the current version of go and the go runtime:

    +

    core/version: Shows the current version of rclone, +Go and the OS.

    +

    This shows the current versions of rclone, Go and the OS:

      -
    • version - rclone version, e.g. "v1.53.0"
    • +
    • version - rclone version, e.g. "v1.71.2"
    • decomposed - version number as [major, minor, patch]
    • isGit - boolean - true if this was compiled from the git version
    • isBeta - boolean - true if this is a beta version
    • -
    • os - OS in use as according to Go
    • -
    • arch - cpu architecture in use according to Go
    • -
    • goVersion - version of Go runtime in use
    • +
    • os - OS in use as according to Go GOOS (e.g. "linux")
    • +
    • osKernel - OS Kernel version (e.g. "6.8.0-86-generic (x86_64)")
    • +
    • osVersion - OS Version (e.g. "ubuntu 24.04 (64 bit)")
    • +
    • osArch - cpu architecture in use (e.g. "arm64 (ARMv8 +compatible)")
    • +
    • arch - cpu architecture in use according to Go GOARCH (e.g. +"arm64")
    • +
    • goVersion - version of Go runtime in use (e.g. "go1.25.0")
    • linking - type of rclone executable (static or dynamic)
    • goTags - space separated build tags or "none"
    @@ -18378,6 +19132,62 @@ in the fs cache.

    This returns the number of entries in the fs cache.

    Returns - entries - number of items in the cache

    Authentication is required for this call.

    +

    job/batch: Run a batch of rclone rc commands +concurrently.

    +

    This takes the following parameters:

    +
      +
    • concurrency - int - do this many commands concurrently. Defaults to +--transfers if not set.
    • +
    • inputs - an list of inputs to the commands with an extra +_path parameter
    • +
    +
    {
    +    "_path": "rc/path",
    +    "param1": "parameter for the path as documented",
    +    "param2": "parameter for the path as documented, etc",
    +}
    +

    The inputs may use _async, _group, +_config and _filter as normal when using the +rc.

    +

    Returns:

    +
      +
    • results - a list of results from the commands with one entry for +each in inputs.
    • +
    +

    For example:

    +
    rclone rc job/batch --json '{
    +  "inputs": [
    +    {
    +      "_path": "rc/noop",
    +      "parameter": "OK"
    +    },
    +    {
    +      "_path": "rc/error",
    +      "parameter": "BAD"
    +    }
    +  ]
    +}
    +'
    +

    Gives the result:

    +
    {
    +  "results": [
    +    {
    +      "parameter": "OK"
    +    },
    +    {
    +      "error": "arbitrary error on input map[parameter:BAD]",
    +      "input": {
    +        "parameter": "BAD"
    +      },
    +      "path": "rc/error",
    +      "status": 500
    +    }
    +  ]
    +}
    +

    Authentication is required for this call.

    job/list: Lists the IDs of the running jobs

    Parameters: None.

    Results:

    @@ -18386,6 +19196,8 @@ in the fs cache. restart)
  • jobids - array of integer job ids (starting at 1 on each restart)
  • +
  • runningIds - array of integer job ids that are running
  • +
  • finishedIds - array of integer job ids that are finished
  • job/status: Reads the status of the job ID

    Parameters:

    @@ -18401,6 +19213,8 @@ restart)
  • error - error from the job or empty string for no error
  • finished - boolean whether the job has finished or not
  • id - as passed in above
  • +
  • executeId - rclone instance ID (changes after restart); combined +with id uniquely identifies a job
  • startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00")
  • success - boolean - true for success false otherwise
  • @@ -18445,13 +19259,13 @@ mount implementation to use
  • vfsOpt: a JSON object with VFS options in.
  • Example:

    -
    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
    +
    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
     rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
     rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'

    The vfsOpt are as described in options/get and can be seen in the the "vfs" section when running and the mountOpt can be seen in the "mount" section:

    -
    rclone rc options/get
    +
    rclone rc options/get

    Authentication is required for this call.

    mount/types: Show all possible mount types

    This shows all possible mount types and returns them as a list.

    @@ -18884,9 +19698,6 @@ tier or class on the single file pointed to
  • fs - a remote name string e.g. "drive:"
  • remote - a path within that remote e.g. "dir"
  • -

    See the settierfile -command for more information on the above.

    Authentication is required for this call.

    operations/size: Count the number of bytes and files in remote

    @@ -18932,9 +19743,6 @@ multiform/form-data
  • remote - a path within that remote e.g. "dir"
  • each part in body represents a file to be uploaded
  • -

    See the uploadfile -command for more information on the above.

    Authentication is required for this call.

    options/blocks: List all the option blocks

    Returns: - options - a list of the options block names

    @@ -19081,6 +19889,9 @@ a test plugin

    rc/error: This returns an error

    This returns an error with the input as part of its error string. Useful for testing error handling.

    +

    rc/fatal: This returns an fatal error

    +

    This returns an error with the input as part of its error string. +Useful for testing error handling.

    rc/list: List all the registered remote control commands

    This lists all the registered remote control commands as a JSON map @@ -19095,6 +19906,9 @@ parameters requiring auth purposes. It can be used to check that rclone is still alive and to check that parameter passing is working properly.

    Authentication is required for this call.

    +

    rc/panic: This returns an error by panicking

    +

    This returns an error with the input as part of its error string. +Useful for testing error handling.

    serve/list: Show running servers

    Show running servers with IDs.

    This takes no parameters and returns

    @@ -19110,25 +19924,25 @@ check that parameter passing is working properly.

    Eg

    rclone rc serve/list

    Returns

    -
    {
    -    "list": [
    -        {
    -            "addr": "[::]:4321",
    -            "id": "nfs-ffc2a4e5",
    -            "params": {
    -                "fs": "remote:",
    -                "opt": {
    -                    "ListenAddr": ":4321"
    -                },
    -                "type": "nfs",
    -                "vfsOpt": {
    -                    "CacheMode": "full"
    -                }
    -            }
    -        }
    -    ]
    -}
    +
    {
    +    "list": [
    +        {
    +            "addr": "[::]:4321",
    +            "id": "nfs-ffc2a4e5",
    +            "params": {
    +                "fs": "remote:",
    +                "opt": {
    +                    "ListenAddr": ":4321"
    +                },
    +                "type": "nfs",
    +                "vfsOpt": {
    +                    "CacheMode": "full"
    +                }
    +            }
    +        }
    +    ]
    +}

    Authentication is required for this call.

    serve/start: Create a new server

    Create a new server with the specified parameters.

    @@ -19153,11 +19967,11 @@ above.

    rclone rc serve/start type=nfs fs=remote: addr=:4321 vfs_cache_mode=full
     rclone rc serve/start --json '{"type":"nfs","fs":"remote:","addr":":1234","vfs_cache_mode":"full"}'

    This will give the reply

    -
    {
    -    "addr": "[::]:4321", // Address the server was started on
    -    "id": "nfs-ecfc6852" // Unique identifier for the server instance
    -}
    +
    {
    +    "addr": "[::]:4321", // Address the server was started on
    +    "id": "nfs-ecfc6852" // Unique identifier for the server instance
    +}

    Or an error if it failed to start.

    Stop the server with serve/stop and list the running servers with serve/list.

    @@ -19189,14 +20003,14 @@ be passed to serve/start as the serveType parameter.

    Eg

    rclone rc serve/types

    Returns

    -
    {
    -    "types": [
    -        "http",
    -        "sftp",
    -        "nfs"
    -    ]
    -}
    +
    {
    +    "types": [
    +        "http",
    +        "sftp",
    +        "nfs"
    +    ]
    +}

    Authentication is required for this call.

    sync/bisync: Perform bidirectional synchronization between two paths.

    @@ -19322,7 +20136,7 @@ supplied.

    call it when the --vfs-cache-mode is off, it will return an empty result.

    {
    -    "queued": // an array of files queued for upload
    +    "queue": // an array of files queued for upload
         [
             {
                 "name":      "file",   // string: name (full path) of the file,
    @@ -19425,6 +20239,7 @@ supplied.

    supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied.

    +

    Accessing the remote control via HTTP

    Rclone implements a simple HTTP based protocol.

    Each endpoint takes an JSON object and returns a JSON object or an @@ -19441,16 +20256,16 @@ formatted to be reasonably human-readable.

    If an error occurs then there will be an HTTP error status (e.g. 500) and the body of the response will contain a JSON encoded error object, e.g.

    -
    {
    -    "error": "Expecting string value for key \"remote\" (was float64)",
    -    "input": {
    -        "fs": "/tmp",
    -        "remote": 3
    -    },
    -    "status": 400,
    -    "path": "operations/rmdir"
    -}
    +
    {
    +    "error": "Expecting string value for key \"remote\" (was float64)",
    +    "input": {
    +        "fs": "/tmp",
    +        "remote": 3
    +    },
    +    "status": 400,
    +    "path": "operations/rmdir"
    +}

    The keys in the error response are:

    • error - error string
    • @@ -19464,71 +20279,64 @@ that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.

      Using POST with URL parameters only

      -
      curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
      +
      curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'

      Response

      -
      {
      -    "potato": "1",
      -    "sausage": "2"
      -}
      +
      {
      +    "potato": "1",
      +    "sausage": "2"
      +}

      Here is what an error response looks like:

      -
      curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
      -
      {
      -    "error": "arbitrary error on input map[potato:1 sausage:2]",
      -    "input": {
      -        "potato": "1",
      -        "sausage": "2"
      -    }
      -}
      +
      curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
      +
      {
      +    "error": "arbitrary error on input map[potato:1 sausage:2]",
      +    "input": {
      +        "potato": "1",
      +        "sausage": "2"
      +    }
      +}

      Note that curl doesn't return errors to the shell unless you use the -f option

      -
      $ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
      -curl: (22) The requested URL returned error: 400 Bad Request
      -$ echo $?
      -22
      +
      $ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
      +curl: (22) The requested URL returned error: 400 Bad Request
      +$ echo $?
      +22

      Using POST with a form

      -
      curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop
      +
      curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop

      Response

      -
      {
      -    "potato": "1",
      -    "sausage": "2"
      -}
      +
      {
      +    "potato": "1",
      +    "sausage": "2"
      +}

      Note that you can combine these with URL parameters too with the POST parameters taking precedence.

      -
      curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"
      +
      curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4"

      Response

      -
      {
      -    "potato": "1",
      -    "rutabaga": "3",
      -    "sausage": "4"
      -}
      +
      {
      +    "potato": "1",
      +    "rutabaga": "3",
      +    "sausage": "4"
      +}

      Using POST with a JSON blob

      -
      curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop
      +
      curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop

      response

      -
      {
      -    "password": "xyz",
      -    "username": "xyz"
      -}
      +
      {
      +    "password": "xyz",
      +    "username": "xyz"
      +}

      This can be combined with URL parameters too if required. The JSON blob takes precedence.

      -
      curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
      -
      {
      -    "potato": 2,
      -    "rutabaga": "3",
      -    "sausage": 1
      -}
      +
      curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4'
      +
      {
      +    "potato": 2,
      +    "rutabaga": "3",
      +    "sausage": 1
      +}

      Debugging rclone with pprof

      If you use the --rc flag this will also enable the use of the go profiling tools on the same port.

      @@ -19536,34 +20344,31 @@ of the go profiling tools on the same port.

      go.

      Debugging memory use

      To profile rclone's memory use you can run:

      -
      go tool pprof -web http://localhost:5572/debug/pprof/heap
      +
      go tool pprof -web http://localhost:5572/debug/pprof/heap

      This should open a page in your browser showing what is using what memory.

      You can also use the -text flag to produce a textual summary

      -
      $ go tool pprof -text http://localhost:5572/debug/pprof/heap
      -Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
      -      flat  flat%   sum%        cum   cum%
      - 1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
      -     513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
      -         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/all.init
      -         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve.init
      -         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve/restic.init
      -         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
      -         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
      -         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
      -         0     0%   100%  1024.03kB 66.62%  main.init
      -         0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
      -         0     0%   100%      513kB 33.38%  net/http.(*conn).serve
      -         0     0%   100%  1024.03kB 66.62%  runtime.main
      +
      $ go tool pprof -text http://localhost:5572/debug/pprof/heap
      +Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
      +      flat  flat%   sum%        cum   cum%
      + 1024.03kB 66.62% 66.62%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.addDecoderNode
      +     513kB 33.38%   100%      513kB 33.38%  net/http.newBufioWriterSize
      +         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/all.init
      +         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve.init
      +         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/cmd/serve/restic.init
      +         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2.init
      +         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init
      +         0     0%   100%  1024.03kB 66.62%  github.com/rclone/rclone/vendor/golang.org/x/net/http2/hpack.init.0
      +         0     0%   100%  1024.03kB 66.62%  main.init
      +         0     0%   100%      513kB 33.38%  net/http.(*conn).readRequest
      +         0     0%   100%      513kB 33.38%  net/http.(*conn).serve
      +         0     0%   100%  1024.03kB 66.62%  runtime.main

      Debugging go routine leaks

      Memory leaks are most often caused by go routine leaks keeping memory alive which should have been garbage collected.

      See all active go routines using

      -
      curl http://localhost:5572/debug/pprof/goroutine?debug=1
      +
      curl http://localhost:5572/debug/pprof/goroutine?debug=1

      Or go to http://localhost:5572/debug/pprof/goroutine?debug=1 in @@ -19804,7 +20609,7 @@ system.

      No No R -- +R iCloud Drive @@ -20736,8 +21541,7 @@ to maintain backward compatibility, its behavior has not been changed.

      Encoding example: FTP

      To take a specific example, the FTP backend's default encoding is

      -
      --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"
      +
      --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"

      However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those @@ -20758,14 +21562,12 @@ Drive), you will notice that the file gets renamed to convert for the local filesystem, using command-line argument --local-encoding. Rclone's default behavior on Windows corresponds to

      -
      --local-encoding "Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"
      +
      --local-encoding "Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"

      If you want to use fullwidth characters , and in your filenames without rclone changing them when uploading to a remote, then set the same as the default value but without Colon,Question,Asterisk:

      -
      --local-encoding "Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"
      +
      --local-encoding "Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot"

      Alternatively, you can disable the conversion of any characters with --local-encoding Raw.

      Instead of using command-line argument --local-encoding, @@ -21765,7 +22567,7 @@ split into groups.

      --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.71.0")
    + --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")

    Performance

    Flags helpful for increasing performance.

          --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer (default 16Mi)
    @@ -21925,6 +22727,8 @@ split into groups.

    Backend-only flags (these can be set in the config file also).

          --alias-description string                            Description of the remote
           --alias-remote string                                 Remote or path to alias
    +      --archive-description string                          Description of the remote
    +      --archive-remote string                               Remote to wrap to read archives from
           --azureblob-access-tier string                        Access tier of blob: hot, cool, cold or archive
           --azureblob-account string                            Azure Storage Account Name
           --azureblob-archive-tier-delete                       Delete archive tier blobs before overwriting
    @@ -22002,6 +22806,10 @@ split into groups.

    --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket + --b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2 + --b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + --b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + --b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) @@ -22063,7 +22871,7 @@ split into groups.

    --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining --compress-description string Description of the remote - --compress-level int GZIP compression level (-2 to 9) (default -1) + --compress-level string GZIP (levels -2 to 9): --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress @@ -22358,6 +23166,7 @@ split into groups.

    --mailru-token string OAuth Access Token as a JSON blob --mailru-token-url string Token server url --mailru-user string User name (usually email) + --mega-2fa string The 2FA code of your MEGA account if the account is set up with one --mega-debug Output more debug from Mega --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -22476,6 +23285,7 @@ split into groups.

    --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-otp-secret-key string The OTP secret key (obscured) --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected --protondrive-username string The username of your proton account @@ -22558,6 +23368,7 @@ split into groups.

    --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) --s3-use-arn-region If true, enables arn region support for the service + --s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset) --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) @@ -22640,6 +23451,7 @@ split into groups.

    --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --skip-links Don't warn about skipped symlinks + --skip-specials Don't warn about skipped pipes, sockets and device objects --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") @@ -22792,17 +23604,17 @@ href="https://docs.docker.com/engine/install/">installing Docker on the host.

    The FUSE driver is a prerequisite for rclone mounting and should be installed on host:

    -
    sudo apt-get -y install fuse3
    +
    sudo apt-get -y install fuse3

    Create two directories required by rclone docker plugin:

    -
    sudo mkdir -p /var/lib/docker-plugins/rclone/config
    +
    sudo mkdir -p /var/lib/docker-plugins/rclone/config
     sudo mkdir -p /var/lib/docker-plugins/rclone/cache

    Install the managed rclone docker plugin for your architecture (here amd64):

    -
    docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
    +
    docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
     docker plugin list

    Create your SFTP volume:

    -
    docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
    +
    docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true

    Note that since all options are static, you don't even have to run rclone config or create the rclone.conf file (but the config directory should still be present). In the @@ -22811,13 +23623,13 @@ and your SSH credentials as username and password. You can also change the remote path to your home directory on the host, for example -o path=/home/username.

    Time to create a test container and mount the volume into it:

    -
    docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash
    +
    docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash

    If all goes well, you will enter the new container and change right to the mounted SFTP remote. You can type ls to list the mounted directory or otherwise play with it. Type exit when you are done. The container will stop but the volume will stay, ready to be reused. When it's not needed anymore, remove it:

    -
    docker volume list
    +
    docker volume list
     docker volume remove firstvolume

    Now let us try something more elaborate: Google Drive volume on multi-node @@ -22842,37 +23654,39 @@ to the Swarm cluster and save as every node. By default this location is accessible only to the root user so you will need appropriate privileges. The resulting config will look like this:

    -
    [gdrive]
    -type = drive
    -scope = drive
    -drive_id = 1234567...
    -root_folder_id = 0Abcd...
    -token = {"access_token":...}
    +
    [gdrive]
    +type = drive
    +scope = drive
    +drive_id = 1234567...
    +root_folder_id = 0Abcd...
    +token = {"access_token":...}

    Now create the file named example.yml with a swarm stack description like this:

    -
    version: '3'
    -services:
    -  heimdall:
    -    image: linuxserver/heimdall:latest
    -    ports: [8080:80]
    -    volumes: [configdata:/config]
    -volumes:
    -  configdata:
    -    driver: rclone
    -    driver_opts:
    -      remote: 'gdrive:heimdall'
    -      allow_other: 'true'
    -      vfs_cache_mode: full
    -      poll_interval: 0
    +
    version: '3'
    +services:
    +  heimdall:
    +    image: linuxserver/heimdall:latest
    +    ports: [8080:80]
    +    volumes: [configdata:/config]
    +volumes:
    +  configdata:
    +    driver: rclone
    +    driver_opts:
    +      remote: 'gdrive:heimdall'
    +      allow_other: 'true'
    +      vfs_cache_mode: full
    +      poll_interval: 0

    and run the stack:

    -
    docker stack deploy example -c ./example.yml
    +
    docker stack deploy example -c ./example.yml

    After a few seconds docker will spread the parsed stack description over cluster, create the example_heimdall service on port 8080, run service containers on one or more cluster nodes and request the example_configdata volume from rclone plugins on the node hosts. You can use the following commands to confirm results:

    -
    docker service ls
    +
    docker service ls
     docker service ps example_heimdall
     docker volume ls

    Point your browser to http://cluster.host.address:8080 @@ -22887,7 +23701,7 @@ node.

    Volumes can be created with docker volume create. Here are a few examples:

    -
    docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
    +
    docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
     docker volume create vol2 -d rclone -o remote=:storj,access_grant=xxx:heimdall
     docker volume create vol3 -d rclone -o type=storj -o path=heimdall -o storj-access-grant=xxx -o poll-interval=0

    Note the -d rclone flag that tells docker to request @@ -22895,7 +23709,7 @@ volume from the rclone driver. This works even if you installed managed driver by its full name rclone/docker-volume-rclone because you provided the --alias rclone option.

    Volumes can be inspected as follows:

    -
    docker volume list
    +
    docker volume list
     docker volume inspect vol1

    Volume Configuration

    Rclone flags and volume options are set via the -o flag @@ -22916,9 +23730,9 @@ create on-the-fly (config-less) remotes, while the type and path options provide a simpler alternative for this. Using two split options

    -
    -o type=backend -o path=dir/subdir
    +
    -o type=backend -o path=dir/subdir

    is equivalent to the combined syntax

    -
    -o remote=:backend:dir/subdir
    +
    -o remote=:backend:dir/subdir

    but is arguably easier to parameterize in scripts. The path part is optional.

    . Inside connection string the backend prefix must be dropped from parameter names but in the -o param=value array it must be present. For instance, compare the following option array

    -
    -o remote=:sftp:/home -o sftp-host=localhost
    +
    -o remote=:sftp:/home -o sftp-host=localhost

    with equivalent connection string:

    -
    -o remote=:sftp,host=localhost:/home
    +
    -o remote=:sftp,host=localhost:/home

    This difference exists because flag options -o key=val include not only backend parameters but also mount/VFS flags and possibly other settings. Also it allows to discriminate the @@ -22987,29 +23801,34 @@ volume and have at least two elements, the self-explanatory driver: rclone value and the driver_opts: structure playing the same role as -o key=val CLI flags:

    -
    volumes:
    -  volume_name_1:
    -    driver: rclone
    -    driver_opts:
    -      remote: 'gdrive:'
    -      allow_other: 'true'
    -      vfs_cache_mode: full
    -      token: '{"type": "borrower", "expires": "2021-12-31"}'
    -      poll_interval: 0
    -

    Notice a few important details: - YAML prefers _ in -option names instead of -. - YAML treats single and double -quotes interchangeably. Simple strings and integers can be left -unquoted. - Boolean values must be quoted like 'true' or -"false" because these two words are reserved by YAML. - The -filesystem string is keyed with remote (or with +

    volumes:
    +  volume_name_1:
    +    driver: rclone
    +    driver_opts:
    +      remote: 'gdrive:'
    +      allow_other: 'true'
    +      vfs_cache_mode: full
    +      token: '{"type": "borrower", "expires": "2021-12-31"}'
    +      poll_interval: 0
    +

    Notice a few important details:

    +
      +
    • YAML prefers _ in option names instead of +-.
    • +
    • YAML treats single and double quotes interchangeably. Simple strings +and integers can be left unquoted.
    • +
    • Boolean values must be quoted like 'true' or +"false" because these two words are reserved by YAML.
    • +
    • The filesystem string is keyed with remote (or with fs). Normally you can omit quotes here, but if the string ends with colon, you must quote it like -remote: "storage_box:". - YAML is picky about surrounding -braces in values as this is in fact another syntax for key/value -mappings. For example, JSON access tokens usually contain double -quotes and surrounding braces, so you must put them in single -quotes.

      +remote: "storage_box:".
    • +
    • YAML is picky about surrounding braces in values as this is in fact +another syntax +for key/value mappings. For example, JSON access tokens usually +contain double quotes and surrounding braces, so you must put them in +single quotes.
    • +

    Installing as Managed Plugin

    Docker daemon can install plugins from an image registry and run them managed. We maintain the Docker Hub.

    The plugin requires presence of two directories on the host before it can be installed. Note that plugin will not create them automatically. By default they must exist on host at the following -locations (though you can tweak the paths): - -/var/lib/docker-plugins/rclone/config is reserved for the -rclone.conf config file and must exist -even if it's empty and the config file is not present. - -/var/lib/docker-plugins/rclone/cache holds the plugin state -file as well as optional VFS caches.

    +locations (though you can tweak the paths):

    +
      +
    • /var/lib/docker-plugins/rclone/config is reserved for +the rclone.conf config file and must exist +even if it's empty and the config file is not present.
    • +
    • /var/lib/docker-plugins/rclone/cache holds the plugin +state file as well as optional VFS caches.
    • +

    You can install managed plugin with default settings as follows:

    -
    docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone
    +
    docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone

    The :amd64 part of the image specification after colon is called a tag. Usually you will want to install the latest plugin for your architecture. In this case the tag will just name it, like amd64 above. The following plugin architectures are -currently available: - amd64 - arm64 - -arm-v7

    +currently available:

    +
      +
    • amd64
    • +
    • arm64
    • +
    • arm-v7
    • +

    Sometimes you might want a concrete plugin version, not the latest one. Then you should use image tag in the form :ARCHITECTURE-VERSION. For example, to install plugin @@ -23067,7 +23892,7 @@ then docker machinery propagates them through kernel mount namespaces and bind-mounts into requesting user containers.

    You can tweak a few plugin settings after installation when it's disabled (not in use), for instance:

    -
    docker plugin disable rclone
    +
    docker plugin disable rclone
     docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
     docker plugin enable rclone
     docker plugin inspect rclone
    @@ -23127,7 +23952,7 @@ level assigned by rclone in the encapsulated message string.

    NO_PROXY customize the plugin proxy settings.

    You can set custom plugin options right when you install it, in one go:

    -
    docker plugin remove rclone
    +
    docker plugin remove rclone
     docker plugin install rclone/docker-volume-rclone:amd64 \
            --alias rclone --grant-all-permissions \
            args="-v --allow-other" config=/etc/rclone
    @@ -23137,15 +23962,16 @@ docker plugin inspect rclone
    to inform the docker daemon that a volume is (un-)available. As a workaround you can setup a healthcheck to verify that the mount is responding, for example:

    -
    services:
    -  my_service:
    -    image: my_image
    -    healthcheck:
    -      test: ls /path/to/rclone/mount || exit 1
    -      interval: 1m
    -      timeout: 15s
    -      retries: 3
    -      start_period: 15s
    +
    services:
    +  my_service:
    +    image: my_image
    +    healthcheck:
    +      test: ls /path/to/rclone/mount || exit 1
    +      interval: 1m
    +      timeout: 15s
    +      retries: 3
    +      start_period: 15s

    Running Plugin under Systemd

    In most cases you should prefer managed mode. Moreover, MacOS and Windows do not support native Docker plugins. Please use managed mode on @@ -23154,42 +23980,47 @@ these systems. Proceed further only if you are on Linux.

    can just run it (type rclone serve docker and hit enter) for the test.

    Install FUSE:

    -
    sudo apt-get -y install fuse
    +
    sudo apt-get -y install fuse

    Download two systemd configuration files: docker-volume-rclone.service and docker-volume-rclone.socket.

    Put them to the /etc/systemd/system/ directory:

    -
    cp docker-volume-plugin.service /etc/systemd/system/
    +
    cp docker-volume-plugin.service /etc/systemd/system/
     cp docker-volume-plugin.socket  /etc/systemd/system/

    Please note that all commands in this section must be run as root but we omit sudo prefix for brevity. Now create directories required by the service:

    -
    mkdir -p /var/lib/docker-volumes/rclone
    +
    mkdir -p /var/lib/docker-volumes/rclone
     mkdir -p /var/lib/docker-plugins/rclone/config
     mkdir -p /var/lib/docker-plugins/rclone/cache

    Run the docker plugin service in the socket activated mode:

    -
    systemctl daemon-reload
    +
    systemctl daemon-reload
     systemctl start docker-volume-rclone.service
     systemctl enable docker-volume-rclone.socket
     systemctl start docker-volume-rclone.socket
     systemctl restart docker
    -

    Or run the service directly: - run -systemctl daemon-reload to let systemd pick up new config - -run systemctl enable docker-volume-rclone.service to make -the new service start automatically when you power on your machine. - -run systemctl start docker-volume-rclone.service to start -the service now. - run systemctl restart docker to restart -docker daemon and let it detect the new plugin socket. Note that this -step is not needed in managed mode where docker knows about plugin state -changes.

    +

    Or run the service directly:

    +
      +
    • run systemctl daemon-reload to let systemd pick up new +config
    • +
    • run systemctl enable docker-volume-rclone.service to +make the new service start automatically when you power on your +machine.
    • +
    • run systemctl start docker-volume-rclone.service to +start the service now.
    • +
    • run systemctl restart docker to restart docker daemon +and let it detect the new plugin socket. Note that this step is not +needed in managed mode where docker knows about plugin state +changes.
    • +

    The two methods are equivalent from the user perspective, but I personally prefer socket activation.

    Troubleshooting

    You can see managed plugin settings with

    -
    docker plugin list
    +
    docker plugin list
     docker plugin inspect rclone

    Note that docker (including latest 20.10.7) will not show actual values of args, just the defaults.

    @@ -23200,13 +24031,13 @@ encapsulated message string.

    You will usually install the latest version of managed plugin for your platform. Use the following commands to print the actual installed version:

    -
    PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
    +
    PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
     sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version

    You can even use runc to run shell inside the plugin container:

    -
    sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
    +
    sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash

    Also you can use curl to check the plugin socket connectivity:

    -
    docker plugin list --no-trunc
    +
    docker plugin list --no-trunc
     PLUGID=123abc...
     sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate

    though this is rarely needed.

    @@ -23216,7 +24047,7 @@ state of the plugin. Note that all existing rclone docker volumes will probably have to be recreated. This might be needed because a reinstall don't cleanup existing state files to allow for easy restoration, as stated above.

    -
    docker plugin disable rclone # disable the plugin to ensure no interference
    +
    docker plugin disable rclone # disable the plugin to ensure no interference
     sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state
     docker plugin enable rclone # re-enable the plugin afterward

    Caveats

    @@ -23228,10 +24059,10 @@ volume, but there is a gotcha. The command will do nothing, it won't even return an error. I hope that docker maintainers will fix this some day. In the meantime be aware that you must remove your volume before recreating it with new settings:

    -
    docker volume remove my_vol
    +
    docker volume remove my_vol
     docker volume create my_vol -d rclone -o opt1=new_val1 ...

    and verify that settings did update:

    -
    docker volume list
    +
    docker volume list
     docker volume inspect my_vol

    If docker refuses to remove the volume, you should find containers or swarm services that use it and stop them first.

    @@ -23264,94 +24095,91 @@ entry. bisync can safely run in concurrent cron jobs thanks to lock files it maintains.

    For example, your first command might look like this:

    -
    rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run
    +
    rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run

    If all looks good, run it again without --dry-run. After that, remove --resync as well.

    Here is a typical run log (with timestamps removed for clarity):

    -
    rclone bisync /testdir/path1/ /testdir/path2/ --verbose
    -INFO  : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
    -INFO  : Path1 checking for diffs
    -INFO  : - Path1    File is new                         - file11.txt
    -INFO  : - Path1    File is newer                       - file2.txt
    -INFO  : - Path1    File is newer                       - file5.txt
    -INFO  : - Path1    File is newer                       - file7.txt
    -INFO  : - Path1    File was deleted                    - file4.txt
    -INFO  : - Path1    File was deleted                    - file6.txt
    -INFO  : - Path1    File was deleted                    - file8.txt
    -INFO  : Path1:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
    -INFO  : Path2 checking for diffs
    -INFO  : - Path2    File is new                         - file10.txt
    -INFO  : - Path2    File is newer                       - file1.txt
    -INFO  : - Path2    File is newer                       - file5.txt
    -INFO  : - Path2    File is newer                       - file6.txt
    -INFO  : - Path2    File was deleted                    - file3.txt
    -INFO  : - Path2    File was deleted                    - file7.txt
    -INFO  : - Path2    File was deleted                    - file8.txt
    -INFO  : Path2:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
    -INFO  : Applying changes
    -INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file11.txt
    -INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file2.txt
    -INFO  : - Path2    Queue delete                        - /testdir/path2/file4.txt
    -NOTICE: - WARNING  New or changed in both paths        - file5.txt
    -NOTICE: - Path1    Renaming Path1 copy                 - /testdir/path1/file5.txt..path1
    -NOTICE: - Path1    Queue copy to Path2                 - /testdir/path2/file5.txt..path1
    -NOTICE: - Path2    Renaming Path2 copy                 - /testdir/path2/file5.txt..path2
    -NOTICE: - Path2    Queue copy to Path1                 - /testdir/path1/file5.txt..path2
    -INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file6.txt
    -INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file7.txt
    -INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file1.txt
    -INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file10.txt
    -INFO  : - Path1    Queue delete                        - /testdir/path1/file3.txt
    -INFO  : - Path2    Do queued copies to                 - Path1
    -INFO  : - Path1    Do queued copies to                 - Path2
    -INFO  : -          Do queued deletes on                - Path1
    -INFO  : -          Do queued deletes on                - Path2
    -INFO  : Updating listings
    -INFO  : Validating listings for Path1 "/testdir/path1/" vs Path2 "/testdir/path2/"
    -INFO  : Bisync successful
    +
    rclone bisync /testdir/path1/ /testdir/path2/ --verbose
    +INFO  : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
    +INFO  : Path1 checking for diffs
    +INFO  : - Path1    File is new                         - file11.txt
    +INFO  : - Path1    File is newer                       - file2.txt
    +INFO  : - Path1    File is newer                       - file5.txt
    +INFO  : - Path1    File is newer                       - file7.txt
    +INFO  : - Path1    File was deleted                    - file4.txt
    +INFO  : - Path1    File was deleted                    - file6.txt
    +INFO  : - Path1    File was deleted                    - file8.txt
    +INFO  : Path1:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
    +INFO  : Path2 checking for diffs
    +INFO  : - Path2    File is new                         - file10.txt
    +INFO  : - Path2    File is newer                       - file1.txt
    +INFO  : - Path2    File is newer                       - file5.txt
    +INFO  : - Path2    File is newer                       - file6.txt
    +INFO  : - Path2    File was deleted                    - file3.txt
    +INFO  : - Path2    File was deleted                    - file7.txt
    +INFO  : - Path2    File was deleted                    - file8.txt
    +INFO  : Path2:    7 changes:    1 new,    3 newer,    0 older,    3 deleted
    +INFO  : Applying changes
    +INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file11.txt
    +INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file2.txt
    +INFO  : - Path2    Queue delete                        - /testdir/path2/file4.txt
    +NOTICE: - WARNING  New or changed in both paths        - file5.txt
    +NOTICE: - Path1    Renaming Path1 copy                 - /testdir/path1/file5.txt..path1
    +NOTICE: - Path1    Queue copy to Path2                 - /testdir/path2/file5.txt..path1
    +NOTICE: - Path2    Renaming Path2 copy                 - /testdir/path2/file5.txt..path2
    +NOTICE: - Path2    Queue copy to Path1                 - /testdir/path1/file5.txt..path2
    +INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file6.txt
    +INFO  : - Path1    Queue copy to Path2                 - /testdir/path2/file7.txt
    +INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file1.txt
    +INFO  : - Path2    Queue copy to Path1                 - /testdir/path1/file10.txt
    +INFO  : - Path1    Queue delete                        - /testdir/path1/file3.txt
    +INFO  : - Path2    Do queued copies to                 - Path1
    +INFO  : - Path1    Do queued copies to                 - Path2
    +INFO  : -          Do queued deletes on                - Path1
    +INFO  : -          Do queued deletes on                - Path2
    +INFO  : Updating listings
    +INFO  : Validating listings for Path1 "/testdir/path1/" vs Path2 "/testdir/path2/"
    +INFO  : Bisync successful

    Command line syntax

    -
    $ rclone bisync --help
    -Usage:
    -  rclone bisync remote1:path1 remote2:path2 [flags]
    -
    -Positional arguments:
    -  Path1, Path2  Local path, or remote storage with ':' plus optional path.
    -                Type 'rclone listremotes' for list of configured remotes.
    -
    -Optional Flags:
    -      --backup-dir1 string                   --backup-dir for Path1. Must be a non-overlapping path on the same remote.
    -      --backup-dir2 string                   --backup-dir for Path2. Must be a non-overlapping path on the same remote.
    -      --check-access                         Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
    -      --check-filename string                Filename for --check-access (default: RCLONE_TEST)
    -      --check-sync string                    Controls comparison of final listings: true|false|only (default: true) (default "true")
    -      --compare string                       Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')
    -      --conflict-loser ConflictLoserAction   Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num)
    -      --conflict-resolve string              Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none")
    -      --conflict-suffix string               Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict')
    -      --create-empty-src-dirs                Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
    -      --download-hash                        Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)
    -      --filters-file string                  Read filtering patterns from a file
    -      --force                                Bypass --max-delete safety check and run the sync. Consider using with --verbose
    -  -h, --help                                 help for bisync
    -      --ignore-listing-checksum              Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
    -      --max-lock Duration                    Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s)
    -      --no-cleanup                           Retain working files (useful for troubleshooting and testing).
    -      --no-slow-hash                         Ignore listing checksums only on backends where they are slow
    -      --recover                              Automatically recover from interruptions without requiring --resync.
    -      --remove-empty-dirs                    Remove ALL empty directories at the final cleanup step.
    -      --resilient                            Allow future runs to retry after certain less-serious errors, instead of requiring --resync.
    -  -1, --resync                               Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
    -      --resync-mode string                   During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none")
    -      --retries int                          Retry operations this many times if they fail (requires --resilient). (default 3)
    -      --retries-sleep Duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
    -      --slow-hash-sync-only                  Ignore slow checksums for listings and deltas, but still consider them during sync calls.
    -      --workdir string                       Use custom working dir - useful for testing. (default: {WORKDIR})
    -      --max-delete PERCENT                   Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%)
    -  -n, --dry-run                              Go through the motions - No files are copied/deleted.
    -  -v, --verbose                              Increases logging verbosity. May be specified more than once for more details.
    +
    $ rclone bisync --help
    +Usage:
    +  rclone bisync remote1:path1 remote2:path2 [flags]
    +
    +Positional arguments:
    +  Path1, Path2  Local path, or remote storage with ':' plus optional path.
    +                Type 'rclone listremotes' for list of configured remotes.
    +
    +Optional Flags:
    +      --backup-dir1 string                   --backup-dir for Path1. Must be a non-overlapping path on the same remote.
    +      --backup-dir2 string                   --backup-dir for Path2. Must be a non-overlapping path on the same remote.
    +      --check-access                         Ensure expected RCLONE_TEST files are found on both Path1 and Path2 filesystems, else abort.
    +      --check-filename string                Filename for --check-access (default: RCLONE_TEST)
    +      --check-sync string                    Controls comparison of final listings: true|false|only (default: true) (default "true")
    +      --compare string                       Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')
    +      --conflict-loser ConflictLoserAction   Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): , num, pathname, delete (default: num)
    +      --conflict-resolve string              Automatically resolve conflicts by preferring the version that is: none, path1, path2, newer, older, larger, smaller (default: none) (default "none")
    +      --conflict-suffix string               Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict')
    +      --create-empty-src-dirs                Sync creation and deletion of empty directories. (Not compatible with --remove-empty-dirs)
    +      --download-hash                        Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)
    +      --filters-file string                  Read filtering patterns from a file
    +      --force                                Bypass --max-delete safety check and run the sync. Consider using with --verbose
    +  -h, --help                                 help for bisync
    +      --ignore-listing-checksum              Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)
    +      --max-lock Duration                    Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m) (default 0s)
    +      --no-cleanup                           Retain working files (useful for troubleshooting and testing).
    +      --no-slow-hash                         Ignore listing checksums only on backends where they are slow
    +      --recover                              Automatically recover from interruptions without requiring --resync.
    +      --remove-empty-dirs                    Remove ALL empty directories at the final cleanup step.
    +      --resilient                            Allow future runs to retry after certain less-serious errors, instead of requiring --resync.
    +  -1, --resync                               Performs the resync run. Equivalent to --resync-mode path1. Consider using --verbose or --dry-run first.
    +      --resync-mode string                   During resync, prefer the version that is: path1, path2, newer, older, larger, smaller (default: path1 if --resync, otherwise none for no resync.) (default "none")
    +      --retries int                          Retry operations this many times if they fail (requires --resilient). (default 3)
    +      --retries-sleep Duration               Interval between retrying operations if they fail, e.g. 500ms, 60s, 5m (0 to disable) (default 0s)
    +      --slow-hash-sync-only                  Ignore slow checksums for listings and deltas, but still consider them during sync calls.
    +      --workdir string                       Use custom working dir - useful for testing. (default: {WORKDIR})
    +      --max-delete PERCENT                   Safety check on maximum percentage of deleted files allowed. If exceeded, the bisync run will abort. (default: 50%)
    +  -n, --dry-run                              Go through the motions - No files are copied/deleted.
    +  -v, --verbose                              Increases logging verbosity. May be specified more than once for more details.

    Arbitrary rclone flags may be specified on the bisync command line, for example @@ -23390,9 +24218,8 @@ the Path1 tree to Path2.

    The --resync sequence is roughly equivalent to the following (but see --resync-mode for other options):

    -
    rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
    -rclone copy Path1 Path2 [--create-empty-src-dirs]
    +
    rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
    +rclone copy Path1 Path2 [--create-empty-src-dirs]

    The base directories on both Path1 and Path2 filesystems must exist or bisync will fail. This is required for safety - that bisync can verify that both paths are valid.

    @@ -23446,8 +24273,7 @@ href="#conflict-resolve">--conflict-resolve flags, when needed) for a very robust "set-it-and-forget-it" bisync setup that can automatically bounce back from almost any interruption it might encounter. Consider adding something like the following:

    -
    --resilient --recover --max-lock 2m --conflict-resolve newer
    +
    --resilient --recover --max-lock 2m --conflict-resolve newer

    --resync-mode CHOICE

    In the event that a file differs on both sides during a --resync, --resync-mode controls which version @@ -23588,11 +24414,9 @@ comparing all three of size AND modtime AND currently supported values being size, modtime, and checksum. For example, if you want to compare size and checksum, but not modtime, you would do:

    -
    --compare size,checksum
    +
    --compare size,checksum

    Or if you want to compare all three:

    -
    --compare size,modtime,checksum
    +
    --compare size,modtime,checksum

    --compare overrides any conflicting flags. For example, if you set the conflicting flags --compare checksum --size-only, --size-only @@ -23880,10 +24704,9 @@ appended only when one suffix is specified (or when two identical suffixes are specified.) i.e. with --conflict-loser pathname, all of the following would produce exactly the same result:

    -
    --conflict-suffix path
    ---conflict-suffix path,path
    ---conflict-suffix path1,path2
    +
    --conflict-suffix path
    +--conflict-suffix path,path
    +--conflict-suffix path1,path2

    Suffixes may be as short as 1 character. By default, the suffix is appended after any other extensions (ex. file.jpg.conflict1), however, this can be changed with the @@ -23894,9 +24717,8 @@ flag (i.e. to instead result in file.conflict1.jpg).

    variables when enclosed in curly braces as globs. This can be helpful to track the date and/or time that each conflict was handled by bisync. For example:

    -
    --conflict-suffix {DateOnly}-conflict
    -// result: myfile.txt.2006-01-02-conflict1
    +
    --conflict-suffix {DateOnly}-conflict
    +// result: myfile.txt.2006-01-02-conflict1

    All of the formats described here (go Time.Layout constants) and ..path1 and double dot, but additional dots can be added by including them in the specified suffix string. For example, for behavior equivalent to the previous default, use:

    -
    [--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
    +
    [--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path

    --check-sync

    Enabled by default, the check-sync function checks that all of the same files exist in both the Path1 and Path2 history listings. This @@ -23959,17 +24780,14 @@ if files changed during or after your last bisync run.

    For example, a possible sequence could look like this:

    1. Normally scheduled bisync run:

      -
      rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
    2. +
      rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
    3. Periodic independent integrity check (perhaps scheduled nightly or weekly):

      -
      rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
    4. +
      rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
    5. If diffs are found, you have some choices to correct them. If one side is more up-to-date and you want to make the other side match it, you could run:

      -
      rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
    6. +
      rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v

    (or switch Path1 and Path2 to make Path2 the source-of-truth)

    Or, if neither side is totally up-to-date, you could run a @@ -24102,8 +24920,7 @@ be mixed together in the same dir). If either --backup-dir1 and --backup-dir2 are set, they will override --backup-dir.

    Example:

    -
    rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
    +
    rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case

    In this example, if the user deletes a file in /Users/someuser/some/local/path/Bisync, bisync will propagate the delete to the other side by moving the corresponding file @@ -24410,30 +25227,21 @@ provider-specific limitation beyond rclone's control (for example, disallowed special characters and filename encodings.)

    The following backends have known issues that need more investigation:

    - +

    The following backends either have not been tested recently or have known issues that are deemed unfixable for the time being:

    - +
      +
    • TestArchive (archive)
    • TestCache (cache)
    • TestFileLu (filelu)
    • TestFilesCom (filescom)
    • @@ -24461,7 +25269,7 @@ known issues that are deemed unfixable for the time being:

    • TestWebdavNextcloud (webdav)
    • TestWebdavOwncloud (webdav)
    • TestnStorage (netstorage) - + (more info)
    • @@ -24784,26 +25592,25 @@ listings and thus not checked during the check access phase.

      Reading bisync logs

      Here are two normal runs. The first one has a newer file on the remote. The second has no deltas between local and remote.

      -
      2021/05/16 00:24:38 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
      -2021/05/16 00:24:38 INFO  : Path1 checking for diffs
      -2021/05/16 00:24:38 INFO  : - Path1    File is new                         - file.txt
      -2021/05/16 00:24:38 INFO  : Path1:    1 changes:    1 new,    0 newer,    0 older,    0 deleted
      -2021/05/16 00:24:38 INFO  : Path2 checking for diffs
      -2021/05/16 00:24:38 INFO  : Applying changes
      -2021/05/16 00:24:38 INFO  : - Path1    Queue copy to Path2                 - dropbox:/file.txt
      -2021/05/16 00:24:38 INFO  : - Path1    Do queued copies to                 - Path2
      -2021/05/16 00:24:38 INFO  : Updating listings
      -2021/05/16 00:24:38 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
      -2021/05/16 00:24:38 INFO  : Bisync successful
      -
      -2021/05/16 00:36:52 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
      -2021/05/16 00:36:52 INFO  : Path1 checking for diffs
      -2021/05/16 00:36:52 INFO  : Path2 checking for diffs
      -2021/05/16 00:36:52 INFO  : No changes found
      -2021/05/16 00:36:52 INFO  : Updating listings
      -2021/05/16 00:36:52 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
      -2021/05/16 00:36:52 INFO  : Bisync successful
      +
      2021/05/16 00:24:38 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
      +2021/05/16 00:24:38 INFO  : Path1 checking for diffs
      +2021/05/16 00:24:38 INFO  : - Path1    File is new                         - file.txt
      +2021/05/16 00:24:38 INFO  : Path1:    1 changes:    1 new,    0 newer,    0 older,    0 deleted
      +2021/05/16 00:24:38 INFO  : Path2 checking for diffs
      +2021/05/16 00:24:38 INFO  : Applying changes
      +2021/05/16 00:24:38 INFO  : - Path1    Queue copy to Path2                 - dropbox:/file.txt
      +2021/05/16 00:24:38 INFO  : - Path1    Do queued copies to                 - Path2
      +2021/05/16 00:24:38 INFO  : Updating listings
      +2021/05/16 00:24:38 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
      +2021/05/16 00:24:38 INFO  : Bisync successful
      +
      +2021/05/16 00:36:52 INFO  : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
      +2021/05/16 00:36:52 INFO  : Path1 checking for diffs
      +2021/05/16 00:36:52 INFO  : Path2 checking for diffs
      +2021/05/16 00:36:52 INFO  : No changes found
      +2021/05/16 00:36:52 INFO  : Updating listings
      +2021/05/16 00:36:52 INFO  : Validating listings for Path1 "/path/to/local/tree/" vs Path2 "dropbox:/"
      +2021/05/16 00:36:52 INFO  : Bisync successful

      Dry run oddity

      The --dry-run messages may indicate that it would try to delete some files. For example, if a file is new on Path2 and does not @@ -24829,25 +25636,23 @@ failing commands, so there may be numerous such messages in the log.

      Since there are no final error/warning messages on line 7, rclone has recovered from failure after a retry, and the overall sync was successful.

      -
      1: 2021/05/14 00:44:12 INFO  : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:"
      -2: 2021/05/14 00:44:12 INFO  : Path1 checking for diffs
      -3: 2021/05/14 00:44:12 INFO  : Path2 checking for diffs
      -4: 2021/05/14 00:44:12 INFO  : Path2:  113 changes:   22 new,    0 newer,    0 older,   91 deleted
      -5: 2021/05/14 00:44:12 ERROR : /path/to/local/tree/objects/af: error listing: unexpected end of JSON input
      -6: 2021/05/14 00:44:12 NOTICE: WARNING  listing try 1 failed.                 - dropbox:
      -7: 2021/05/14 00:44:12 INFO  : Bisync successful
      +
      1: 2021/05/14 00:44:12 INFO  : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:"
      +2: 2021/05/14 00:44:12 INFO  : Path1 checking for diffs
      +3: 2021/05/14 00:44:12 INFO  : Path2 checking for diffs
      +4: 2021/05/14 00:44:12 INFO  : Path2:  113 changes:   22 new,    0 newer,    0 older,   91 deleted
      +5: 2021/05/14 00:44:12 ERROR : /path/to/local/tree/objects/af: error listing: unexpected end of JSON input
      +6: 2021/05/14 00:44:12 NOTICE: WARNING  listing try 1 failed.                 - dropbox:
      +7: 2021/05/14 00:44:12 INFO  : Bisync successful

      This log shows a Critical failure which requires a --resync to recover from. See the Runtime Error Handling section.

      -
      2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for checks to finish
      -2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for transfers to finish
      -2021/05/12 00:49:40 INFO  : Google drive root '': not deleting files as there were IO errors
      -2021/05/12 00:49:40 ERROR : Attempt 3/3 failed with 3 errors and: not deleting files as there were IO errors
      -2021/05/12 00:49:40 ERROR : Failed to sync: not deleting files as there were IO errors
      -2021/05/12 00:49:40 NOTICE: WARNING  rclone sync try 3 failed.           - /path/to/local/tree/
      -2021/05/12 00:49:40 ERROR : Bisync aborted. Must run --resync to recover.
      +
      2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for checks to finish
      +2021/05/12 00:49:40 INFO  : Google drive root '': Waiting for transfers to finish
      +2021/05/12 00:49:40 INFO  : Google drive root '': not deleting files as there were IO errors
      +2021/05/12 00:49:40 ERROR : Attempt 3/3 failed with 3 errors and: not deleting files as there were IO errors
      +2021/05/12 00:49:40 ERROR : Failed to sync: not deleting files as there were IO errors
      +2021/05/12 00:49:40 NOTICE: WARNING  rclone sync try 3 failed.           - /path/to/local/tree/
      +2021/05/12 00:49:40 ERROR : Bisync aborted. Must run --resync to recover.

      Denied downloads of "infected" or "abusive" files

      Google Drive has a filter for certain file types (.exe, @@ -24921,14 +25726,13 @@ this can be done using a Task Scheduler, on Linux you can use Cron which is described below.

      The 1st example runs a sync every 5 minutes between a local directory and an OwnCloud server, with output logged to a runlog file:

      -
      # Minute (0-59)
      -#      Hour (0-23)
      -#           Day of Month (1-31)
      -#                Month (1-12 or Jan-Dec)
      -#                     Day of Week (0-6 or Sun-Sat)
      -#                         Command
      -  */5  *    *    *    *   /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bysync-filters.txt --log-file /path/to//bisync.log
      +
      # Minute (0-59)
      +#      Hour (0-23)
      +#           Day of Month (1-31)
      +#                Month (1-12 or Jan-Dec)
      +#                     Day of Week (0-6 or Sun-Sat)
      +#                         Command
      +  */5  *    *    *    *   /path/to/rclone bisync /local/files MyCloud: --check-access --filters-file /path/to/bysync-filters.txt --log-file /path/to//bisync.log

      See crontab syntax for the details of crontab time interval expressions.

      @@ -24936,8 +25740,7 @@ syntax for the details of crontab time interval expressions.

      stdout/stderr to a file. The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the >>) and stderr (via 2>&1) to a log file.

      -
      0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1
      +
      0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1

      Sharing an encrypted folder tree between hosts

      bisync can keep a local folder in sync with a cloud service, but what @@ -24984,19 +25787,19 @@ versions I manually run the following command:

    • The Dropbox client then syncs the changes with Dropbox.

    rclone.conf snippet

    -
    [Dropbox]
    -type = dropbox
    -...
    -
    -[Dropcrypt]
    -type = crypt
    -remote = /path/to/DBoxroot/crypt          # on the Linux server
    -remote = C:\Users\MyLogin\Dropbox\crypt   # on the Windows notebook
    -filename_encryption = standard
    -directory_name_encryption = true
    -password = ...
    -...
    +
    [Dropbox]
    +type = dropbox
    +...
    +
    +[Dropcrypt]
    +type = crypt
    +remote = /path/to/DBoxroot/crypt          # on the Linux server
    +remote = C:\Users\MyLogin\Dropbox\crypt   # on the Windows notebook
    +filename_encryption = standard
    +directory_name_encryption = true
    +password = ...
    +...

    Testing

    You should read this section only if you are developing for rclone. You need to have rclone source code locally to work with bisync @@ -25013,31 +25816,30 @@ these errors will be captured and flagged as invalid MISCOMPAREs. Rerunning the test will let it pass. Consider such failures as noise.

    Test command syntax

    -
    usage: go test ./cmd/bisync [options...]
    -
    -Options:
    -  -case NAME        Name(s) of the test case(s) to run. Multiple names should
    -                    be separated by commas. You can remove the `test_` prefix
    -                    and replace `_` by `-` in test name for convenience.
    -                    If not `all`, the name(s) should map to a directory under
    -                    `./cmd/bisync/testdata`.
    -                    Use `all` to run all tests (default: all)
    -  -remote PATH1     `local` or name of cloud service with `:` (default: local)
    -  -remote2 PATH2    `local` or name of cloud service with `:` (default: local)
    -  -no-compare       Disable comparing test results with the golden directory
    -                    (default: compare)
    -  -no-cleanup       Disable cleanup of Path1 and Path2 testdirs.
    -                    Useful for troubleshooting. (default: cleanup)
    -  -golden           Store results in the golden directory (default: false)
    -                    This flag can be used with multiple tests.
    -  -debug            Print debug messages
    -  -stop-at NUM      Stop test after given step number. (default: run to the end)
    -                    Implies `-no-compare` and `-no-cleanup`, if the test really
    -                    ends prematurely. Only meaningful for a single test case.
    -  -refresh-times    Force refreshing the target modtime, useful for Dropbox
    -                    (default: false)
    -  -verbose          Run tests verbosely
    +
    usage: go test ./cmd/bisync [options...]
    +
    +Options:
    +  -case NAME        Name(s) of the test case(s) to run. Multiple names should
    +                    be separated by commas. You can remove the `test_` prefix
    +                    and replace `_` by `-` in test name for convenience.
    +                    If not `all`, the name(s) should map to a directory under
    +                    `./cmd/bisync/testdata`.
    +                    Use `all` to run all tests (default: all)
    +  -remote PATH1     `local` or name of cloud service with `:` (default: local)
    +  -remote2 PATH2    `local` or name of cloud service with `:` (default: local)
    +  -no-compare       Disable comparing test results with the golden directory
    +                    (default: compare)
    +  -no-cleanup       Disable cleanup of Path1 and Path2 testdirs.
    +                    Useful for troubleshooting. (default: cleanup)
    +  -golden           Store results in the golden directory (default: false)
    +                    This flag can be used with multiple tests.
    +  -debug            Print debug messages
    +  -stop-at NUM      Stop test after given step number. (default: run to the end)
    +                    Implies `-no-compare` and `-no-cleanup`, if the test really
    +                    ends prematurely. Only meaningful for a single test case.
    +  -refresh-times    Force refreshing the target modtime, useful for Dropbox
    +                    (default: false)
    +  -verbose          Run tests verbosely

    Note: unlike rclone flags which must be prefixed by double dash (--), the test command flags can be equally prefixed by a single - or double dash.

    @@ -25476,14 +26278,16 @@ signature with a public key compiled into the rclone binary.

    After importing the key, verify that the fingerprint of one of the -keys matches: FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA as +keys matches: FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA ads this key is used for signing.

    We recommend that you cross-check the fingerprint shown above through the domains listed below. By cross-checking the integrity of the @@ -25501,7 +26305,7 @@ developers at once.

    In the release directory you will see the release files and some files called MD5SUMS, SHA1SUMS and SHA256SUMS.

    -
    $ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
    +
    $ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
     MD5SUMS
     SHA1SUMS
     SHA256SUMS
    @@ -25515,7 +26319,7 @@ version.txt
    SHA256SUMS contain hashes of the binary files in the release directory along with a signature.

    For example:

    -
    $ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
    +
    $ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
     -----BEGIN PGP SIGNED MESSAGE-----
     Hash: SHA1
     
    @@ -25539,19 +26343,19 @@ binaries) appropriate to your architecture. We've also chosen the
     the other types of hash also for extra security.
     rclone selfupdate verifies just the
     SHA256SUMS.

    -
    $ mkdir /tmp/check
    -$ cd /tmp/check
    -$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
    -$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .
    +
    mkdir /tmp/check
    +cd /tmp/check
    +rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
    +rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .

    Verify the signatures

    First verify the signatures on the SHA256 file.

    Import the key. See above for ways to verify this key is correct.

    -
    $ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
    +
    $ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
     gpg: key 93935E02FF3B54FA: public key "Nick Craig-Wood <nick@craig-wood.com>" imported
     gpg: Total number processed: 1
     gpg:               imported: 1

    Then check the signature:

    -
    $ gpg --verify SHA256SUMS 
    +
    $ gpg --verify SHA256SUMS 
     gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
     gpg:                using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
     gpg: Good signature from "Nick Craig-Wood <nick@craig-wood.com>" [ultimate]
    @@ -25562,10 +26366,10 @@ desired.

    Verify the hashes

    Now that we know the signatures on the hashes are OK we can verify the binaries match the hashes, completing the verification.

    -
    $ sha256sum -c SHA256SUMS 2>&1 | grep OK
    +
    $ sha256sum -c SHA256SUMS 2>&1 | grep OK
     rclone-v1.63.1-windows-amd64.zip: OK

    Or do the check with rclone

    -
    $ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip 
    +
    $ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip 
     2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 0
     2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 1
     2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 49
    @@ -25577,7 +26381,7 @@ rclone-v1.63.1-windows-amd64.zip: OK
    hashes together

    You can verify the signatures and hashes in one command line like this:

    -
    $ h=$(gpg --decrypt SHA256SUMS) && echo "$h" | sha256sum - -c --ignore-missing
    +
    $ h=$(gpg --decrypt SHA256SUMS) && echo "$h" | sha256sum - -c --ignore-missing
     gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
     gpg:                using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
     gpg: Good signature from "Nick Craig-Wood <nick@craig-wood.com>" [ultimate]
    @@ -25595,9 +26399,9 @@ use the API.

    website which you need to do in your browser.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -25631,13 +26435,14 @@ y) Yes this is OK
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your 1Fichier account

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your 1Fichier account

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to a 1Fichier directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    1Fichier does not support modification times. It supports the @@ -25721,6 +26526,8 @@ name:

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    + +

    Standard options

    Here are the Standard options specific to fichier (1Fichier).

    --fichier-api-key

    @@ -25799,6 +26606,7 @@ Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,BackSlash,Del,Ctl,LeftSpace,
  • Type: string
  • Required: false
  • +

    Limitations

    rclone about is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an @@ -25806,7 +26614,7 @@ rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Alias

    The alias remote provides a new name for another remote.

    @@ -25835,9 +26643,9 @@ trashed files in myDrive.

    Configuration

    Here is an example of how to make an alias called remote for local folder. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -25877,13 +26685,16 @@ c) Copy remote
     s) Set configuration password
     q) Quit config
     e/n/d/r/c/s/q> q
    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level in /mnt/storage/backup

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in /mnt/storage/backup

    -
    rclone ls remote:
    +
    rclone ls remote:

    Copy another local directory to the alias directory called source

    -
    rclone copy /home/source remote:source
    +
    rclone copy /home/source remote:source
    + +

    Standard options

    Here are the Standard options specific to alias (Alias for an existing remote).

    @@ -25910,8 +26721,11 @@ existing remote).

  • Type: string
  • Required: false
  • +

    Amazon S3 Storage Providers

    The S3 backend can be used with a number of different providers:

    + +
    • AWS S3
    • Alibaba Cloud (Aliyun) Object Storage System (OSS)
    • @@ -25919,13 +26733,17 @@ existing remote).

    • China Mobile Ecloud Elastic Object Storage (EOS)
    • Cloudflare R2
    • Arvan Cloud Object Storage (AOS)
    • +
    • Cubbit DS3
    • DigitalOcean Spaces
    • Dreamhost
    • Exaba
    • +
    • FileLu S5 (S3-Compatible Object Storage)
    • GCS
    • +
    • Hetzner
    • Huawei OBS
    • IBM COS S3
    • IDrive e2
    • +
    • Intercolo Object Storage
    • IONOS Cloud
    • Leviia Object Storage
    • Liara Object Storage
    • @@ -25938,12 +26756,15 @@ existing remote).

    • Petabox
    • Pure Storage FlashBlade
    • Qiniu Cloud Object Storage (Kodo)
    • +
    • Rabata Cloud Storage
    • RackCorp Object Storage
    • Rclone Serve S3
    • Scaleway
    • Seagate Lyve Cloud
    • SeaweedFS
    • Selectel
    • +
    • Servercore Object Storage
    • +
    • Spectra Logic
    • StackPath
    • Storj
    • Synology C2 Object Storage
    • @@ -25951,28 +26772,29 @@ existing remote).

    • Wasabi
    • Zata
    +

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    Once you have made a remote (see the provider specific section above) you can use it like this:

    See all buckets

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Make a new bucket

    -
    rclone mkdir remote:bucket
    +
    rclone mkdir remote:bucket

    List the contents of a bucket

    -
    rclone ls remote:bucket
    +
    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    -
    rclone sync --interactive /home/local/directory remote:bucket
    +
    rclone sync --interactive /home/local/directory remote:bucket

    Configuration

    Here is an example of making an s3 configuration for the AWS S3 provider. Most applies to the other providers as well, any differences are described below.

    First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -26192,7 +27014,7 @@ of metadata X-Amz-Meta-Md5chksum which is a base64 encoded
     MD5 hash (in the same format as is required for
     Content-MD5). You can use base64 -d and hexdump to check
     this value manually:

    -
    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
    +
    echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump

    or you can use rclone check to verify the hashes are OK.

    For large objects, calculating this hash can take some time so the @@ -26267,7 +27089,7 @@ individually. This takes one API call per directory. Using the memory first using a smaller number of API calls (one per 1000 objects). See the rclone docs for more details.

    -
    rclone sync --fast-list --checksum /path/to/source s3:bucket
    +
    rclone sync --fast-list --checksum /path/to/source s3:bucket

    --fast-list trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using --fast-list on a sync of a million objects will use roughly @@ -26277,7 +27099,7 @@ then using --no-traverse is a good idea. This finds objects directly instead of through directory listings. You can do a "top-up" sync very cheaply by using --max-age and --no-traverse to copy only recent files, eg

    -
    rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
    +
    rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket

    You'd then do a full rclone sync less often.

    Note that --fast-list isn't required in the top-up sync.

    @@ -26296,7 +27118,7 @@ should use server-side copy. This is much faster than downloading and re-uploading the objects, as no data is transferred.

    For rclone to use server-side copy, you must use the same remote for the source and destination.

    -
    rclone copy s3:source-bucket s3:destination-bucket
    +
    rclone copy s3:source-bucket s3:destination-bucket

    When using server-side copy, the performance is limited by the rate at which rclone issues API requests to S3. See below for how to increase the number of API requests rclone makes.

    @@ -26312,7 +27134,7 @@ checkers.

    For example, with AWS S3, if you can increase the number of checkers to values like 200. If you are doing a server-side copy, you can also increase the number of transfers to 200.

    -
    rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
    +
    rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket

    You will need to experiment with these values to find the optimal settings for your setup.

    Data integrity

    @@ -26407,7 +27229,7 @@ files to become hidden old versions.

    followed by a cleanup of the old versions.

    Show current version and all the versions with --s3-versions flag.

    -
    $ rclone -q ls s3:cleanup-test
    +
    $ rclone -q ls s3:cleanup-test
             9 one.txt
     
     $ rclone -q --s3-versions ls s3:cleanup-test
    @@ -26416,12 +27238,12 @@ $ rclone -q --s3-versions ls s3:cleanup-test
            16 one-v2016-07-04-141003-000.txt
            15 one-v2016-07-02-155621-000.txt

    Retrieve an old version

    -
    $ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
    +
    $ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
     
     $ ls -l /tmp/one-v2016-07-04-141003-000.txt
     -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

    Clean up all the old versions and show that they've gone.

    -
    $ rclone -q backend cleanup-hidden s3:cleanup-test
    +
    $ rclone -q backend cleanup-hidden s3:cleanup-test
     
     $ rclone -q ls s3:cleanup-test
             9 one.txt
    @@ -26433,7 +27255,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
     file name to work out whether the objects are versions or not. Versions'
     names are created by inserting timestamp between file name and its
     extension.

    -
            9 file.txt
    +
            9 file.txt
             8 file-v2023-07-17-161032-000.txt
            16 file-v2023-06-15-141003-000.txt

    If there are real files present with the same names as versions, then @@ -26509,9 +27331,10 @@ files).

    The chunk sizes used in the multipart upload are specified by --s3-chunk-size and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency.

    -

    Multipart uploads will use --transfers * ---s3-upload-concurrency * --s3-chunk-size -extra memory. Single part uploads to not use extra memory.

    +

    Multipart uploads will use extra memory equal to: +--transfers × --s3-upload-concurrency × +--s3-chunk-size. Single part uploads do not use extra +memory.

    Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely single part transfers will be faster.

    @@ -26527,7 +27350,7 @@ any region, but you can only access the content of a bucket from the region it was created in. If you attempt to access a bucket from the wrong region, you will get an error, incorrect region, the bucket is not in 'XXX' region.

    -

    Authentication

    +

    Authentication

    There are a number of ways to supply rclone with a set of AWS credentials, with and without using the environment.

    The different authentication methods are tried in this order:

    @@ -26604,33 +27427,34 @@ href="#s3-no-check-bucket">s3-no-check-bucket)

    When using the lsd subcommand, the ListAllMyBuckets permission is required.

    Example policy:

    -
    {
    -    "Version": "2012-10-17",
    -    "Statement": [
    -        {
    -            "Effect": "Allow",
    -            "Principal": {
    -                "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
    -            },
    -            "Action": [
    -                "s3:ListBucket",
    -                "s3:DeleteObject",
    -                "s3:GetObject",
    -                "s3:PutObject",
    -                "s3:PutObjectAcl"
    -            ],
    -            "Resource": [
    -              "arn:aws:s3:::BUCKET_NAME/*",
    -              "arn:aws:s3:::BUCKET_NAME"
    -            ]
    -        },
    -        {
    -            "Effect": "Allow",
    -            "Action": "s3:ListAllMyBuckets",
    -            "Resource": "arn:aws:s3:::*"
    -        }
    -    ]
    -}
    +
    {
    +  "Version": "2012-10-17",
    +  "Statement": [
    +    {
    +      "Effect": "Allow",
    +      "Principal": {
    +        "AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
    +      },
    +      "Action": [
    +        "s3:ListBucket",
    +        "s3:DeleteObject",
    +        "s3:GetObject",
    +        "s3:PutObject",
    +        "s3:PutObjectAcl"
    +      ],
    +      "Resource": [
    +        "arn:aws:s3:::BUCKET_NAME/*",
    +        "arn:aws:s3:::BUCKET_NAME"
    +      ]
    +    },
    +    {
    +      "Effect": "Allow",
    +      "Action": "s3:ListAllMyBuckets",
    +      "Resource": "arn:aws:s3:::*"
    +    }
    +  ]
    +}

    Notes on above:

    1. This is a policy that can be used when creating bucket. It assumes @@ -26659,7 +27483,7 @@ href="http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.htm policy. The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below.

      -
      2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
      +
      2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file

      In this case you need to restore the object(s) in question before accessing object contents. The --s3-upload-cutoff 0 and force all the files to be uploaded as multipart.

      + +

      Standard options

      Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, -Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, -IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, -Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, -SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, -Qiniu, Zata and others).

      +Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, +GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, +Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, +OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, +Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, +TencentCOS, Wasabi, Zata, Other).

      --s3-provider

      Choose your S3 provider.

      Properties:

      @@ -26724,6 +27551,10 @@ Qiniu, Zata and others).

      • Cloudflare R2 Storage
    2. +
    3. "Cubbit" +
        +
      • Cubbit DS3 Object Storage
      • +
    4. "DigitalOcean"
    -
        [xxx]
    -    type = s3
    -    Provider = IBMCOS
    -    access_key_id = xxx
    -    secret_access_key = yyy
    -    endpoint = s3-api.us-geo.objectstorage.softlayer.net
    -    location_constraint = us-standard
    -    acl = private
    -
      -
    1. Execute rclone commands
    2. -
    -
        1)  Create a bucket.
    -        rclone mkdir IBM-COS-XREGION:newbucket
    -    2)  List available buckets.
    -        rclone lsd IBM-COS-XREGION:
    -        -1 2017-11-08 21:16:22        -1 test
    -        -1 2018-02-14 20:16:39        -1 newbucket
    -    3)  List contents of a bucket.
    -        rclone ls IBM-COS-XREGION:newbucket
    -        18685952 test.exe
    -    4)  Copy a file from local to remote.
    -        rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
    -    5)  Copy a file from remote to local.
    -        rclone copy IBM-COS-XREGION:newbucket/file.txt .
    -    6)  Delete a file on remote.
    -        rclone delete IBM-COS-XREGION:newbucket/file.txt
    +
    1) Create a bucket.
    +   rclone mkdir IBM-COS-XREGION:newbucket
    +2) List available buckets.
    +   rclone lsd IBM-COS-XREGION:
    +   -1 2017-11-08 21:16:22        -1 test
    +   -1 2018-02-14 20:16:39        -1 newbucket
    +3) List contents of a bucket.
    +    rclone ls IBM-COS-XREGION:newbucket
    +    18685952 test.exe
    +4) Copy a file from local to remote.
    +   rclone copy /Users/file.txt IBM-COS-XREGION:newbucket
    +5) Copy a file from remote to local.
    +   rclone copy IBM-COS-XREGION:newbucket/file.txt .
    +6) Delete a file on remote.
    +   rclone delete IBM-COS-XREGION:newbucket/file.txt

    IBM IAM authentication

    If using IBM IAM authentication with IBM API KEY you need to fill in -these additional parameters 1. Select false for env_auth 2. Leave -access_key_id and secret_access_key blank 3. -Paste your ibm_api_key

    -
    Option ibm_api_key.
    +these additional parameters

    +
      +
    1. Select false for env_auth

    2. +
    3. Leave access_key_id and +secret_access_key blank

    4. +
    5. Paste your ibm_api_key

      +
      Option ibm_api_key.
       IBM API Key to be used to obtain IAM token
       Enter a value of type string. Press Enter for the default (1).
      -ibm_api_key>
      -
        -
      1. Paste your ibm_resource_instance_id
      2. -
      -
      Option ibm_resource_instance_id.
      +ibm_api_key>
    6. +
    7. Paste your ibm_resource_instance_id

      +
      Option ibm_resource_instance_id.
       IBM service instance id
       Enter a value of type string. Press Enter for the default (2).
      -ibm_resource_instance_id>
      -
        -
      1. In advanced settings type true for v2_auth
      2. -
      -
      Option v2_auth.
      +ibm_resource_instance_id>
    8. +
    9. In advanced settings type true for v2_auth

      +
      Option v2_auth.
       If true use v2 authentication.
       If this is false (the default) then rclone will use v4 authentication.
       If it is set then rclone will use v2 authentication.
       Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH.
       Enter a boolean value (true or false). Press Enter for the default (true).
      -v2_auth>
      +v2_auth>
    +

    IDrive e2

    Here is an example of making an IDrive e2 configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -29659,6 +33291,124 @@ y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    +

    Intercolo Object Storage

    +

    Intercolo Object +Storage offers GDPR-compliant, transparently priced, S3-compatible +cloud storage hosted in Frankfurt, Germany.

    +

    Here's an example of making a configuration for Intercolo.

    +

    First run:

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter name for new remote.
    +name> intercolo
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    + xx / Amazon S3 Compliant Storage Providers including AWS, ...
    +   \ (s3)
    +[snip]
    +Storage> s3
    +
    +Option provider.
    +Choose your S3 provider.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +[snip]
    +xx / Intercolo Object Storage
    +   \ (Intercolo)
    +[snip]
    +provider> Intercolo
    +
    +Option env_auth.
    +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own boolean value (true or false).
    +Press Enter for the default (false).
    + 1 / Enter AWS credentials in the next step.
    +   \ (false)
    + 2 / Get AWS credentials from the environment (env vars or IAM).
    +   \ (true)
    +env_auth> false
    +
    +Option access_key_id.
    +AWS Access Key ID.
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +access_key_id> ACCESS_KEY
    +
    +Option secret_access_key.
    +AWS Secret Access Key (password).
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> SECRET_KEY
    +
    +Option region.
    +Region where your bucket will be created and your data stored.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / Frankfurt, Germany
    +   \ (de-fra)
    +region> 1
    +
    +Option endpoint.
    +Endpoint for Intercolo Object Storage.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / Frankfurt, Germany
    +   \ (de-fra.i3storage.com)
    +endpoint> 1
    +
    +Option acl.
    +Canned ACL used when creating buckets and storing or copying objects.
    +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
    +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    +Note that this ACL is applied when server-side copying objects as S3
    +doesn't copy the ACL from the source but rather writes a fresh one.
    +If the acl is an empty string then no X-Amz-Acl: header is added and
    +the default (private) will be used.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +   / Owner gets FULL_CONTROL.
    + 1 | No one else has access rights (default).
    +   \ (private)
    + [snip]
    +acl> 
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    +Configuration complete.
    +Options:
    +- type: s3
    +- provider: Intercolo
    +- access_key_id: ACCESS_KEY
    +- secret_access_key: SECRET_KEY
    +- region: de-fra
    +- endpoint: de-fra.i3storage.com
    +Keep this "intercolo" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This will leave the config file looking like this.

    +
    [intercolo]
    +type = s3
    +provider = Intercolo
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_KEY
    +region = de-fra
    +endpoint = de-fra.i3storage.com

    IONOS Cloud

    IONOS S3 Object Storage is a service offered by IONOS for storing and @@ -29671,10 +33421,10 @@ Manager.

    rclone config. This will walk you through an interactive setup process. Type n to add the new remote, and then enter a name:

    -
    Enter name for new remote.
    +
    Enter name for new remote.
     name> ionos-fra

    Type s3 to choose the connection type:

    -
    Option Storage.
    +
    Option Storage.
     Type of storage to configure.
     Choose a number from below, or type in your own value.
     [snip]
    @@ -29683,7 +33433,7 @@ XX / Amazon S3 Compliant Storage Providers including AWS, ...
     [snip]
     Storage> s3

    Type IONOS:

    -
    Option provider.
    +
    Option provider.
     Choose your S3 provider.
     Choose a number from below, or type in your own value.
     Press Enter to leave empty.
    @@ -29694,7 +33444,7 @@ XX / IONOS Cloud
     provider> IONOS

    Press Enter to choose the default option Enter AWS credentials in the next step:

    -
    Option env_auth.
    +
    Option env_auth.
     Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
     Only applies if access_key_id and secret_access_key is blank.
     Choose a number from below, or type in your own boolean value (true or false).
    @@ -29706,8 +33456,8 @@ Press Enter for the default (false).
     env_auth>

    Enter your Access Key and Secret key. These can be retrieved in the Data Center Designer, click on the -menu “Manager resources” / "Object Storage Key Manager".

    -
    Option access_key_id.
    +menu "Manager resources" / "Object Storage Key Manager".

    +
    Option access_key_id.
     AWS Access Key ID.
     Leave blank for anonymous access or runtime credentials.
     Enter a value. Press Enter to leave empty.
    @@ -29719,7 +33469,7 @@ Leave blank for anonymous access or runtime credentials.
     Enter a value. Press Enter to leave empty.
     secret_access_key> YOUR_SECRET_KEY

    Choose the region where your bucket is located:

    -
    Option region.
    +
    Option region.
     Region where your bucket will be created and your data stored.
     Choose a number from below, or type in your own value.
     Press Enter to leave empty.
    @@ -29731,7 +33481,7 @@ Press Enter to leave empty.
        \ (eu-south-2)
     region> 2

    Choose the endpoint from the same region:

    -
    Option endpoint.
    +
    Option endpoint.
     Endpoint for IONOS S3 Object Storage.
     Specify the endpoint from the same region.
     Choose a number from below, or type in your own value.
    @@ -29745,7 +33495,7 @@ Press Enter to leave empty.
     endpoint> 1

    Press Enter to choose the default option or choose the desired ACL setting:

    -
    Option acl.
    +
    Option acl.
     Canned ACL used when creating buckets and storing or copying objects.
     This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
     For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    @@ -29760,13 +33510,13 @@ Press Enter to leave empty.
     [snip]
     acl>

    Press Enter to skip the advanced config:

    -
    Edit advanced config?
    +
    Edit advanced config?
     y) Yes
     n) No (default)
     y/n>

    Press Enter to save the configuration, and then q to quit the configuration process:

    -
    Configuration complete.
    +
    Configuration complete.
     Options:
     - type: s3
     - provider: IONOS
    @@ -29781,70 +33531,53 @@ y/e/d> y

    Done! Now you can try some commands (for macOS, use ./rclone instead of rclone).

      -
    1. Create a bucket (the name must be unique within the whole IONOS -S3)
    2. +
    3. Create a bucket (the name must be unique within the whole IONOS +S3)

      +
      rclone mkdir ionos-fra:my-bucket
    4. +
    5. List available buckets

      +
      rclone lsd ionos-fra:
    6. +
    7. Copy a file from local to remote

      +
      rclone copy /Users/file.txt ionos-fra:my-bucket
    8. +
    9. List contents of a bucket

      +
      rclone ls ionos-fra:my-bucket
    10. +
    11. Copy a file from remote to local

      +
      rclone copy ionos-fra:my-bucket/file.txt
    -
    rclone mkdir ionos-fra:my-bucket
    -
      -
    1. List available buckets
    2. -
    -
    rclone lsd ionos-fra:
    -
      -
    1. Copy a file from local to remote
    2. -
    -
    rclone copy /Users/file.txt ionos-fra:my-bucket
    -
      -
    1. List contents of a bucket
    2. -
    -
    rclone ls ionos-fra:my-bucket
    -
      -
    1. Copy a file from remote to local
    2. -
    -
    rclone copy ionos-fra:my-bucket/file.txt

    Leviia Cloud Object Storage

    Leviia Object Storage, backup and secure your data in a 100% French cloud, independent of GAFAM..

    To configure access to Leviia, follow the steps below:

      -
    1. Run rclone config and select n for a new -remote.
    2. -
    -
    rclone config
    +
  • Run rclone config and select n for a +new remote.

    +
    rclone config
     No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    -n/s/q> n
    -
      -
    1. Give the name of the configuration. For example, name it -'leviia'.
    2. -
    -
    name> leviia
    -
      -
    1. Select s3 storage.
    2. -
    -
    Choose a number from below, or type in your own value
    +n/s/q> n
  • +
  • Give the name of the configuration. For example, name it +'leviia'.

    +
    name> leviia
  • +
  • Select s3 storage.

    +
    Choose a number from below, or type in your own value
     [snip]
     XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ (s3)
     [snip]
    -Storage> s3
    -
      -
    1. Select Leviia provider.
    2. -
    -
    Choose a number from below, or type in your own value
    +Storage> s3
  • +
  • Select Leviia provider.

    +
    Choose a number from below, or type in your own value
     1 / Amazon Web Services (AWS) S3
        \ "AWS"
     [snip]
     15 / Leviia Object Storage
        \ (Leviia)
     [snip]
    -provider> Leviia
    -
      -
    1. Enter your SecretId and SecretKey of Leviia.
    2. -
    -
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +provider> Leviia
  • +
  • Enter your SecretId and SecretKey of Leviia.

    +
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
     Only applies if access_key_id and secret_access_key is blank.
     Enter a boolean value (true or false). Press Enter for the default ("false").
     Choose a number from below, or type in your own value
    @@ -29860,19 +33593,15 @@ access_key_id> ZnIx.xxxxxxxxxxxxxxx
     AWS Secret Access Key (password)
     Leave blank for anonymous access or runtime credentials.
     Enter a string value. Press Enter for the default ("").
    -secret_access_key> xxxxxxxxxxx
    -
      -
    1. Select endpoint for Leviia.
    2. -
    -
       / The default endpoint
    +secret_access_key> xxxxxxxxxxx
  • +
  • Select endpoint for Leviia.

    +
       / The default endpoint
      1 | Leviia.
        \ (s3.leviia.com)
     [snip]
    -endpoint> 1
    -
      -
    1. Choose acl.
    2. -
    -
    Note that this ACL is applied when server-side copying objects as S3
    +endpoint> 1
  • +
  • Choose acl.

    +
    Note that this ACL is applied when server-side copying objects as S3
     doesn't copy the ACL from the source but rather writes a fresh one.
     Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
    @@ -29906,14 +33635,15 @@ Current remotes:
     
     Name                 Type
     ====                 ====
    -leviia                s3
    +leviia s3
  • +

    Liara

    Here is an example of making a Liara Object Storage configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     n/s> n
    @@ -29986,25 +33716,26 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    This will leave the config file looking like this.

    -
    [Liara]
    -type = s3
    -provider = Liara
    -env_auth = false
    -access_key_id = YOURACCESSKEY
    -secret_access_key = YOURSECRETACCESSKEY
    -region =
    -endpoint = storage.iran.liara.space
    -location_constraint =
    -acl =
    -server_side_encryption =
    -storage_class =
    +
    [Liara]
    +type = s3
    +provider = Liara
    +env_auth = false
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region =
    +endpoint = storage.iran.liara.space
    +location_constraint =
    +acl =
    +server_side_encryption =
    +storage_class =

    Linode

    Here is an example of making a Linode Object Storage configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -30137,19 +33868,20 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    This will leave the config file looking like this.

    -
    [linode]
    -type = s3
    -provider = Linode
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_ACCESS_KEY
    -endpoint = eu-central-1.linodeobjects.com
    +
    [linode]
    +type = s3
    +provider = Linode
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = eu-central-1.linodeobjects.com

    Magalu

    Here is an example of making a Magalu Object Storage configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -30244,21 +33976,22 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    This will leave the config file looking like this.

    -
    [magalu]
    -type = s3
    -provider = Magalu
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_ACCESS_KEY
    -endpoint = br-ne1.magaluobjects.com
    +
    [magalu]
    +type = s3
    +provider = Magalu
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = br-ne1.magaluobjects.com

    MEGA S4

    MEGA S4 Object Storage is an S3 compatible object storage system. It has a single pricing tier with no additional charges for data transfers or API requests and it is included in existing Pro plans.

    Here is an example of making a configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -30342,12 +34075,13 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    This will leave the config file looking like this.

    -
    [megas4]
    -type = s3
    -provider = Mega
    -access_key_id = XXX
    -secret_access_key = XXX
    -endpoint = s3.eu-central-1.s4.mega.io
    +
    [megas4]
    +type = s3
    +provider = Mega
    +access_key_id = XXX
    +secret_access_key = XXX
    +endpoint = s3.eu-central-1.s4.mega.io

    Minio

    Minio is an object storage server built for cloud application developers and devops.

    @@ -30356,7 +34090,7 @@ can be used by rclone.

    To use it, install Minio following the instructions here.

    When it configures itself Minio will print something like this

    -
    Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
    +
    Endpoint:  http://192.168.1.106:9000  http://172.23.0.1:9000
     AccessKey: USWUXHGYZQYFYFFIT3RE
     SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
     Region:    us-east-1
    @@ -30378,7 +34112,7 @@ Object API (Amazon S3 compatible):
     Drive Capacity: 26 GiB Free, 165 GiB Total

    These details need to go into rclone config like this. Note that it is important to put the region in as stated above.

    -
    env_auth> 1
    +
    env_auth> 1
     access_key_id> USWUXHGYZQYFYFFIT3RE
     secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
     region> us-east-1
    @@ -30386,18 +34120,19 @@ endpoint> http://192.168.1.106:9000
     location_constraint>
     server_side_encryption>

    Which makes the config file look like this

    -
    [minio]
    -type = s3
    -provider = Minio
    -env_auth = false
    -access_key_id = USWUXHGYZQYFYFFIT3RE
    -secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
    -region = us-east-1
    -endpoint = http://192.168.1.106:9000
    -location_constraint =
    -server_side_encryption =
    +
    [minio]
    +type = s3
    +provider = Minio
    +env_auth = false
    +access_key_id = USWUXHGYZQYFYFFIT3RE
    +secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
    +region = us-east-1
    +endpoint = http://192.168.1.106:9000
    +location_constraint =
    +server_side_encryption =

    So once set up, for example, to copy files into a bucket

    -
    rclone copy /path/to/files minio:bucket
    +
    rclone copy /path/to/files minio:bucket

    Netease NOS

    For Netease NOS configure as per the configurator rclone config setting the provider Netease. @@ -30413,25 +34148,26 @@ href="https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html">o documentation.

    Here is an example of an OOS configuration that you can paste into your rclone configuration file:

    -
    [outscale]
    -type = s3
    -provider = Outscale
    -env_auth = false
    -access_key_id = ABCDEFGHIJ0123456789
    -secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -region = eu-west-2
    -endpoint = oos.eu-west-2.outscale.com
    -acl = private
    +
    [outscale]
    +type = s3
    +provider = Outscale
    +env_auth = false
    +access_key_id = ABCDEFGHIJ0123456789
    +secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    +region = eu-west-2
    +endpoint = oos.eu-west-2.outscale.com
    +acl = private

    You can also run rclone config to go through the interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
     n/s/q> n
    -
    Enter name for new remote.
    +
    Enter name for new remote.
     name> outscale
    -
    Option Storage.
    +
    Option Storage.
     Type of storage to configure.
     Choose a number from below, or type in your own value.
     [snip]
    @@ -30439,7 +34175,7 @@ Choose a number from below, or type in your own value.
        \ (s3)
     [snip]
     Storage> outscale
    -
    Option provider.
    +
    Option provider.
     Choose your S3 provider.
     Choose a number from below, or type in your own value.
     Press Enter to leave empty.
    @@ -30448,7 +34184,7 @@ XX / OUTSCALE Object Storage (OOS)
        \ (Outscale)
     [snip]
     provider> Outscale
    -
    Option env_auth.
    +
    Option env_auth.
     Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
     Only applies if access_key_id and secret_access_key is blank.
     Choose a number from below, or type in your own boolean value (true or false).
    @@ -30458,17 +34194,17 @@ Press Enter for the default (false).
      2 / Get AWS credentials from the environment (env vars or IAM).
        \ (true)
     env_auth> 
    -
    Option access_key_id.
    +
    Option access_key_id.
     AWS Access Key ID.
     Leave blank for anonymous access or runtime credentials.
     Enter a value. Press Enter to leave empty.
     access_key_id> ABCDEFGHIJ0123456789
    -
    Option secret_access_key.
    +
    Option secret_access_key.
     AWS Secret Access Key (password).
     Leave blank for anonymous access or runtime credentials.
     Enter a value. Press Enter to leave empty.
     secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    -
    Option region.
    +
    Option region.
     Region where your bucket will be created and your data stored.
     Choose a number from below, or type in your own value.
     Press Enter to leave empty.
    @@ -30483,7 +34219,7 @@ Press Enter to leave empty.
      5 / Tokyo, Japan
        \ (ap-northeast-1)
     region> 1
    -
    Option endpoint.
    +
    Option endpoint.
     Endpoint for S3 API.
     Required when using an S3 clone.
     Choose a number from below, or type in your own value.
    @@ -30499,7 +34235,7 @@ Press Enter to leave empty.
      5 / Outscale AP Northeast 1 (Japan)
        \ (oos.ap-northeast-1.outscale.com)
     endpoint> 1
    -
    Option acl.
    +
    Option acl.
     Canned ACL used when creating buckets and storing or copying objects.
     This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
     For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    @@ -30514,11 +34250,11 @@ Press Enter to leave empty.
        \ (private)
     [snip]
     acl> 1
    -
    Edit advanced config?
    +
    Edit advanced config?
     y) Yes
     n) No (default)
     y/n> n
    -
    Configuration complete.
    +
    Configuration complete.
     Options:
     - type: s3
     - provider: Outscale
    @@ -30540,7 +34276,7 @@ interact with the platform, take a look at the documentation.

    Here is an example of making an OVHcloud Object Storage configuration with rclone config:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -30715,21 +34451,21 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    Your configuration file should now look like this:

    -
    [ovhcloud-rbx]
    -type = s3
    -provider = OVHcloud
    -access_key_id = my_access
    -secret_access_key = my_secret
    -region = rbx
    -endpoint = s3.rbx.io.cloud.ovh.net
    -acl = private
    +
    [ovhcloud-rbx]
    +type = s3
    +provider = OVHcloud
    +access_key_id = my_access
    +secret_access_key = my_secret
    +region = rbx
    +endpoint = s3.rbx.io.cloud.ovh.net
    +acl = private

    Petabox

    Here is an example of making a Petabox configuration. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     n/s> n
    @@ -30864,13 +34600,14 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    This will leave the config file looking like this.

    -
    [My Petabox Storage]
    -type = s3
    -provider = Petabox
    -access_key_id = YOUR_ACCESS_KEY_ID
    -secret_access_key = YOUR_SECRET_ACCESS_KEY
    -region = us-east-1
    -endpoint = s3.petabox.io
    +
    [My Petabox Storage]
    +type = s3
    +provider = Petabox
    +access_key_id = YOUR_ACCESS_KEY_ID
    +secret_access_key = YOUR_SECRET_ACCESS_KEY
    +region = us-east-1
    +endpoint = s3.petabox.io

    Pure Storage FlashBlade

    Pure @@ -30887,9 +34624,9 @@ support (Purity//FB 4.4.2+)

    To configure rclone for Pure Storage FlashBlade:

    First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -30965,12 +34702,13 @@ d) Delete this remote
     y/e/d> y

    This results in the following configuration being stored in ~/.config/rclone/rclone.conf:

    -
    [flashblade]
    -type = s3
    -provider = FlashBlade
    -access_key_id = ACCESS_KEY_ID
    -secret_access_key = SECRET_ACCESS_KEY
    -endpoint = https://s3.flashblade.example.com
    +
    [flashblade]
    +type = s3
    +provider = FlashBlade
    +access_key_id = ACCESS_KEY_ID
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = https://s3.flashblade.example.com

    Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests, ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a FlashBlade data @@ -30986,44 +34724,35 @@ leading market leader position. Kodo can be widely applied to mass data management.

    To configure access to Qiniu Kodo, follow the steps below:

      -
    1. Run rclone config and select n for a new -remote.
    2. -
    -
    rclone config
    +
  • Run rclone config and select n for a +new remote.

    +
    rclone config
     No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    -n/s/q> n
    -
      -
    1. Give the name of the configuration. For example, name it -'qiniu'.
    2. -
    -
    name> qiniu
    -
      -
    1. Select s3 storage.
    2. -
    -
    Choose a number from below, or type in your own value
    +n/s/q> n
  • +
  • Give the name of the configuration. For example, name it +'qiniu'.

    +
    name> qiniu
  • +
  • Select s3 storage.

    +
    Choose a number from below, or type in your own value
     [snip]
     XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ (s3)
     [snip]
    -Storage> s3
    -
      -
    1. Select Qiniu provider.
    2. -
    -
    Choose a number from below, or type in your own value
    +Storage> s3
  • +
  • Select Qiniu provider.

    +
    Choose a number from below, or type in your own value
     1 / Amazon Web Services (AWS) S3
        \ "AWS"
     [snip]
     22 / Qiniu Object Storage (Kodo)
        \ (Qiniu)
     [snip]
    -provider> Qiniu
    -
      -
    1. Enter your SecretId and SecretKey of Qiniu Kodo.
    2. -
    -
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +provider> Qiniu
  • +
  • Enter your SecretId and SecretKey of Qiniu Kodo.

    +
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
     Only applies if access_key_id and secret_access_key is blank.
     Enter a boolean value (true or false). Press Enter for the default ("false").
     Choose a number from below, or type in your own value
    @@ -31039,12 +34768,10 @@ access_key_id> AKIDxxxxxxxxxx
     AWS Secret Access Key (password)
     Leave blank for anonymous access or runtime credentials.
     Enter a string value. Press Enter for the default ("").
    -secret_access_key> xxxxxxxxxxx
    -
      -
    1. Select endpoint for Qiniu Kodo. This is the standard endpoint for -different region.
    2. -
    -
       / The default endpoint - a good choice if you are unsure.
    +secret_access_key> xxxxxxxxxxx
  • +
  • Select endpoint for Qiniu Kodo. This is the standard endpoint for +different region.

    +
       / The default endpoint - a good choice if you are unsure.
      1 | East China Region 1.
        | Needs location constraint cn-east-1.
        \ (cn-east-1)
    @@ -31108,11 +34835,9 @@ Press Enter to leave empty.
        \ (ap-southeast-1)
      7 / Northeast Asia Region 1
        \ (ap-northeast-1)
    -location_constraint> 1
    -
      -
    1. Choose acl and storage class.
    2. -
    -
    Note that this ACL is applied when server-side copying objects as S3
    +location_constraint> 1
  • +
  • Choose acl and storage class.

    +
    Note that this ACL is applied when server-side copying objects as S3
     doesn't copy the ACL from the source but rather writes a fresh one.
     Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
    @@ -31162,47 +34887,274 @@ Current remotes:
     
     Name                 Type
     ====                 ====
    -qiniu                s3
    +qiniu s3
  • + +

    FileLu S5

    +

    FileLu S5 Object Storage is an +S3-compatible object storage system. It provides multiple region options +(Global, US-East, EU-Central, AP-Southeast, and ME-Central) while using +a single endpoint (s5lu.com). FileLu S5 is designed for +scalability, security, and simplicity, with predictable pricing and no +hidden charges for data transfers or API requests.

    +

    Here is an example of making a configuration. First run:

    +
    rclone config
    +

    This will guide you through an interactive setup process.

    +
    No remotes found, make a new one\?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter name for new remote.
    +name> s5lu
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including AWS,... FileLu, ...
    +   \ (s3)
    +[snip]
    +Storage> s3
    +
    +Option provider.
    +Choose your S3 provider.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +[snip]
    +XX / FileLu S5 Object Storage
    +   \ (FileLu)
    +[snip]
    +provider> FileLu
    +
    +Option env_auth.
    +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own boolean value (true or false).
    +Press Enter for the default (false).
    + 1 / Enter AWS credentials in the next step.
    +   \ (false)
    + 2 / Get AWS credentials from the environment (env vars or IAM).
    +   \ (true)
    +env_auth>
    +
    +Option access_key_id.
    +AWS Access Key ID.
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +access_key_id> XXX
    +
    +Option secret_access_key.
    +AWS Secret Access Key (password).
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> XXX
    +
    +Option endpoint.
    +Endpoint for S3 API.
    +Required when using an S3 clone.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / Global
    +   \ (global)
    + 2 / North America (US-East)
    +   \ (us-east)
    + 3 / Europe (EU-Central)
    +   \ (eu-central)
    + 4 / Asia Pacific (AP-Southeast)
    +   \ (ap-southeast)
    + 5 / Middle East (ME-Central)
    +   \ (me-central)
    +region> 1
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    +Configuration complete.
    +Options:
    +- type: s3
    +- provider: FileLu
    +- access_key_id: XXX
    +- secret_access_key: XXX
    +- endpoint: s5lu.com
    +Keep this "s5lu" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    This will leave the config file looking like this.

    +
    [s5lu]
    +type = s3
    +provider = FileLu
    +access_key_id = XXX
    +secret_access_key = XXX
    +endpoint = s5lu.com
    +

    Rabata

    +

    Rabata is an S3-compatible secure +cloud storage service that offers flat, transparent pricing (no API +request fees) while supporting standard S3 APIs. It is suitable for +backup, application storage,media workflows, and archive use cases.

    +

    Server side copy is not implemented with Rabata, also meaning +modification time of objects cannot be updated.

    +

    Rclone config:

    +
    rclone config
    +No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter name for new remote.
    +name> Rabata
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including AWS, ...
    +   \ (s3)
    +[snip]
    +Storage> s3
    +
    +Option provider.
    +Choose your S3 provider.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +[snip]
    +XX / Rabata Cloud Storage
    +   \ (Rabata)
    +[snip]
    +provider> Rabata
    +
    +Option env_auth.
    +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own boolean value (true or false).
    +Press Enter for the default (false).
    + 1 / Enter AWS credentials in the next step.
    +   \ (false)
    + 2 / Get AWS credentials from the environment (env vars or IAM).
    +   \ (true)
    +env_auth> 
    +
    +Option access_key_id.
    +AWS Access Key ID.
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +access_key_id> ACCESS_KEY_ID
    +
    +Option secret_access_key.
    +AWS Secret Access Key (password).
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> SECRET_ACCESS_KEY
    +
    +Option region.
    +Region where your bucket will be created and your data stored.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / US East (N. Virginia)
    +   \ (us-east-1)
    + 2 / EU (Ireland)
    +   \ (eu-west-1)
    + 3 / EU (London)
    +   \ (eu-west-2)
    +region> 3
    +
    +Option endpoint.
    +Endpoint for Rabata Object Storage.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / US East (N. Virginia)
    +   \ (s3.us-east-1.rabata.io)
    + 2 / EU West (Ireland)
    +   \ (s3.eu-west-1.rabata.io)
    + 3 / EU West (London)
    +   \ (s3.eu-west-2.rabata.io)
    +endpoint> 3
    +
    +Option location_constraint.
    +location where your bucket will be created and your data stored.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / US East (N. Virginia)
    +   \ (us-east-1)
    + 2 / EU (Ireland)
    +   \ (eu-west-1)
    + 3 / EU (London)
    +   \ (eu-west-2)
    +location_constraint> 3
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    +Configuration complete.
    +Options:
    +- type: s3
    +- provider: Rabata
    +- access_key_id: ACCESS_KEY_ID
    +- secret_access_key: SECRET_ACCESS_KEY
    +- region: eu-west-2
    +- endpoint: s3.eu-west-2.rabata.io
    +- location_constraint: eu-west-2
    +Keep this "rabata" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +
    +Current remotes:
    +
    +Name                 Type
    +====                 ====
    +rabata               s3

    RackCorp

    RackCorp Object Storage is an S3 compatible object storage platform from your friendly cloud provider RackCorp. The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty.

    -

    Before you can use RackCorp Object Storage, you'll need to "sign up" for an account on -our "portal". Next you can -create an access key, a secret key and +

    Before you can use RackCorp Object Storage, you'll need to sign up for an account on our +portal. Next you can create an +access key, a secret key and buckets, in your location of choice with ease. These details are required for the next steps of configuration, when rclone config asks for your access_key_id and secret_access_key.

    Your config should end up looking a bit like this:

    -
    [RCS3-demo-config]
    -type = s3
    -provider = RackCorp
    -env_auth = true
    -access_key_id = YOURACCESSKEY
    -secret_access_key = YOURSECRETACCESSKEY
    -region = au-nsw
    -endpoint = s3.rackcorp.com
    -location_constraint = au-nsw
    +
    [RCS3-demo-config]
    +type = s3
    +provider = RackCorp
    +env_auth = true
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region = au-nsw
    +endpoint = s3.rackcorp.com
    +location_constraint = au-nsw

    Rclone Serve S3

    Rclone can serve any remote over the S3 protocol. For details see the rclone serve s3 documentation.

    For example, to serve remote:path over s3, run the server like this:

    -
    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
    +
    rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path

    This will be compatible with an rclone remote which is defined like this:

    -
    [serves3]
    -type = s3
    -provider = Rclone
    -endpoint = http://127.0.0.1:8080/
    -access_key_id = ACCESS_KEY_ID
    -secret_access_key = SECRET_ACCESS_KEY
    -use_multipart_uploads = false
    +
    [serves3]
    +type = s3
    +provider = Rclone
    +endpoint = http://127.0.0.1:8080/
    +access_key_id = ACCESS_KEY_ID
    +secret_access_key = SECRET_ACCESS_KEY
    +use_multipart_uploads = false

    Note that setting use_multipart_uploads = false is to work around a bug which @@ -31215,19 +35167,20 @@ Scaleway console or transferred through our API and CLI or using any S3-compatible tool.

    Scaleway provides an S3 interface which can be configured for use with rclone like this:

    -
    [scaleway]
    -type = s3
    -provider = Scaleway
    -env_auth = false
    -endpoint = s3.nl-ams.scw.cloud
    -access_key_id = SCWXXXXXXXXXXXXXX
    -secret_access_key = 1111111-2222-3333-44444-55555555555555
    -region = nl-ams
    -location_constraint = nl-ams
    -acl = private
    -upload_cutoff = 5M
    -chunk_size = 5M
    -copy_cutoff = 5M
    +
    [scaleway]
    +type = s3
    +provider = Scaleway
    +env_auth = false
    +endpoint = s3.nl-ams.scw.cloud
    +access_key_id = SCWXXXXXXXXXXXXXX
    +secret_access_key = 1111111-2222-3333-44444-55555555555555
    +region = nl-ams
    +location_constraint = nl-ams
    +acl = private
    +upload_cutoff = 5M
    +chunk_size = 5M
    +copy_cutoff = 5M

    Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" @@ -31245,7 +35198,7 @@ href="https://seagate.com/">Seagate intended for enterprise use.

    - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.

    -
    $ rclone config
    +
    $ rclone config
     No remotes found, make a new one?
     n) New remote
     s) Set configuration password
    @@ -31253,7 +35206,7 @@ q) Quit config
     n/s/q> n
     name> remote

    Choose s3 backend

    -
    Type of storage to configure.
    +
    Type of storage to configure.
     Choose a number from below, or type in your own value.
     [snip]
     XX / Amazon S3 Compliant Storage Providers including AWS, ...
    @@ -31261,7 +35214,7 @@ XX / Amazon S3 Compliant Storage Providers including AWS, ...
     [snip]
     Storage> s3

    Choose LyveCloud as S3 provider

    -
    Choose your S3 provider.
    +
    Choose your S3 provider.
     Choose a number from below, or type in your own value.
     Press Enter to leave empty.
     [snip]
    @@ -31271,7 +35224,7 @@ XX / Seagate Lyve Cloud
     provider> LyveCloud

    Take the default (just press enter) to enter access key and secret in the config file.

    -
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
     Only applies if access_key_id and secret_access_key is blank.
     Choose a number from below, or type in your own boolean value (true or false).
     Press Enter for the default (false).
    @@ -31280,16 +35233,16 @@ Press Enter for the default (false).
      2 / Get AWS credentials from the environment (env vars or IAM).
        \ (true)
     env_auth>
    -
    AWS Access Key ID.
    +
    AWS Access Key ID.
     Leave blank for anonymous access or runtime credentials.
     Enter a value. Press Enter to leave empty.
     access_key_id> XXX
    -
    AWS Secret Access Key (password).
    +
    AWS Secret Access Key (password).
     Leave blank for anonymous access or runtime credentials.
     Enter a value. Press Enter to leave empty.
     secret_access_key> YYY

    Leave region blank

    -
    Region to connect to.
    +
    Region to connect to.
     Leave blank if you are using an S3 clone and you don't have a region.
     Choose a number from below, or type in your own value.
     Press Enter to leave empty.
    @@ -31301,7 +35254,7 @@ Press Enter to leave empty.
        \ (other-v2-signature)
     region>

    Enter your Lyve Cloud endpoint. This field cannot be kept empty.

    -
    Endpoint for Lyve Cloud S3 API.
    +
    Endpoint for Lyve Cloud S3 API.
     Required when using an S3 clone.
     Please type in your LyveCloud endpoint.
     Examples:
    @@ -31310,12 +35263,12 @@ Examples:
     Enter a value.
     endpoint> s3.us-west-1.global.lyve.seagate.com

    Leave location constraint blank

    -
    Location constraint - must be set to match the Region.
    +
    Location constraint - must be set to match the Region.
     Leave blank if not sure. Used when creating buckets only.
     Enter a value. Press Enter to leave empty.
     location_constraint>

    Choose default ACL (private).

    -
    Canned ACL used when creating buckets and storing or copying objects.
    +
    Canned ACL used when creating buckets and storing or copying objects.
     This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
     For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
     Note that this ACL is applied when server-side copying objects as S3
    @@ -31328,12 +35281,13 @@ Press Enter to leave empty.
     [snip]
     acl>

    And the config file should end up looking like this:

    -
    [remote]
    -type = s3
    -provider = LyveCloud
    -access_key_id = XXX
    -secret_access_key = YYY
    -endpoint = s3.us-east-1.lyvecloud.seagate.com
    +
    [remote]
    +type = s3
    +provider = LyveCloud
    +access_key_id = XXX
    +secret_access_key = YYY
    +endpoint = s3.us-east-1.lyvecloud.seagate.com

    SeaweedFS

    SeaweedFS is a distributed storage system for blobs, objects, files, and data lake, @@ -31345,7 +35299,7 @@ asynchronous write back, for fast local speed and minimize access cost.

    Assuming the SeaweedFS are configured with weed shell as such:

    -
    > s3.bucket.create -name foo
    +
    > s3.bucket.create -name foo
     > s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply
     {
       "identities": [
    @@ -31369,14 +35323,15 @@ such:

    }

    To use rclone with SeaweedFS, above configuration should end up with something like this in your config:

    -
    [seaweedfs_s3]
    -type = s3
    -provider = SeaweedFS
    -access_key_id = any
    -secret_access_key = any
    -endpoint = localhost:8333
    +
    [seaweedfs_s3]
    +type = s3
    +provider = SeaweedFS
    +access_key_id = any
    +secret_access_key = any
    +endpoint = localhost:8333

    So once set up, for example to copy files into a bucket

    -
    rclone copy /path/to/files seaweedfs_s3:foo
    +
    rclone copy /path/to/files seaweedfs_s3:foo

    Selectel

    Selectel Cloud Storage is an S3 compatible storage system which features triple @@ -31391,7 +35346,7 @@ the Selectel provider type.

    the recommended default), not "path style".

    You can use rclone config to make a new provider like this

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -31477,13 +35432,213 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    And your config should end up looking like this:

    -
    [selectel]
    -type = s3
    -provider = Selectel
    -access_key_id = ACCESS_KEY
    -secret_access_key = SECRET_ACCESS_KEY
    -region = ru-1
    -endpoint = s3.ru-1.storage.selcloud.ru
    +
    [selectel]
    +type = s3
    +provider = Selectel
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +region = ru-1
    +endpoint = s3.ru-1.storage.selcloud.ru
    +

    Servercore

    +

    Servercore +Object Storage is an S3 compatible object storage system that +provides scalable and secure storage solutions for businesses of all +sizes.

    +

    rclone config example:

    +
    No remotes found, make a new one\?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter name for new remote.
    +name> servercore
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including ..., Servercore, ...
    +   \ (s3)
    +[snip]
    +Storage> s3
    +
    +Option provider.
    +Choose your S3 provider.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +[snip]
    +XX / Servercore Object Storage
    +   \ (Servercore)
    +[snip]
    +provider> Servercore
    +
    +Option env_auth.
    +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own boolean value (true or false).
    +Press Enter for the default (false).
    + 1 / Enter AWS credentials in the next step.
    +   \ (false)
    + 2 / Get AWS credentials from the environment (env vars or IAM).
    +   \ (true)
    +env_auth> 1
    +
    +Option access_key_id.
    +AWS Access Key ID.
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +access_key_id> ACCESS_KEY
    +
    +Option secret_access_key.
    +AWS Secret Access Key (password).
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> SECRET_ACCESS_KEY
    +
    +Option region.
    +Region where your is data stored.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / St. Petersburg
    +   \ (ru-1)
    + 2 / Moscow
    +   \ (gis-1)
    + 3 / Moscow
    +   \ (ru-7)
    + 4 / Tashkent, Uzbekistan
    +   \ (uz-2)
    + 5 / Almaty, Kazakhstan
    +   \ (kz-1)
    +region> 1
    +
    +Option endpoint.
    +Endpoint for Servercore Object Storage.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    + 1 / Saint Petersburg
    +   \ (s3.ru-1.storage.selcloud.ru)
    + 2 / Moscow
    +   \ (s3.gis-1.storage.selcloud.ru)
    + 3 / Moscow
    +   \ (s3.ru-7.storage.selcloud.ru)
    + 4 / Tashkent, Uzbekistan
    +   \ (s3.uz-2.srvstorage.uz)
    + 5 / Almaty, Kazakhstan
    +   \ (s3.kz-1.srvstorage.kz)
    +endpoint> 1
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    +Configuration complete.
    +Options:
    +- type: s3
    +- provider: Servercore
    +- access_key_id: ACCESS_KEY
    +- secret_access_key: SECRET_ACCESS_KEY
    +- region: ru-1
    +- endpoint: s3.ru-1.storage.selcloud.ru
    +Keep this "servercore" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    Spectra Logic

    +

    Spectra +Logic is an on-prem S3-compatible object storage gateway that +exposes local object storage and policy-tiers data to Spectra tape and +public clouds under a single namespace for backup and archiving.

    +

    The S3 compatible gateway is configured using +rclone config with a type of s3 and with a +provider name of SpectraLogic. Here is an example run of +the configurator.

    +
    No remotes found, make a new one?
    +n) New remote
    +s) Set configuration password
    +q) Quit config
    +n/s/q> n
    +
    +Enter name for new remote.
    +name> spectralogic
    +
    +Option Storage.
    +Type of storage to configure.
    +Choose a number from below, or type in your own value.
    +[snip]
    +XX / Amazon S3 Compliant Storage Providers including ..., SpectraLogic, ...
    +   \ (s3)
    +[snip]
    +Storage> s3
    +
    +Option provider.
    +Choose your S3 provider.
    +Choose a number from below, or type in your own value.
    +Press Enter to leave empty.
    +[snip]
    +XX / SpectraLogic BlackPearl
    +   \ (SpectraLogic)
    +[snip]
    +provider> SpectraLogic
    +
    +Option env_auth.
    +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +Only applies if access_key_id and secret_access_key is blank.
    +Choose a number from below, or type in your own boolean value (true or false).
    +Press Enter for the default (false).
    + 1 / Enter AWS credentials in the next step.
    +   \ (false)
    + 2 / Get AWS credentials from the environment (env vars or IAM).
    +   \ (true)
    +env_auth> 1
    +
    +Option access_key_id.
    +AWS Access Key ID.
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +access_key_id> ACCESS_KEY
    +
    +Option secret_access_key.
    +AWS Secret Access Key (password).
    +Leave blank for anonymous access or runtime credentials.
    +Enter a value. Press Enter to leave empty.
    +secret_access_key> SECRET_ACCESS_KEY
    +
    +Option endpoint.
    +Endpoint for S3 API.
    +Required when using an S3 clone.
    +Enter a value. Press Enter to leave empty.
    +endpoint> https://bp.example.com
    +
    +Edit advanced config?
    +y) Yes
    +n) No (default)
    +y/n> n
    +
    +Configuration complete.
    +Options:
    +- type: s3
    +- provider: SpectraLogic
    +- access_key_id: ACCESS_KEY
    +- secret_access_key: SECRET_ACCESS_KEY
    +- endpoint: https://bp.example.com
    +Keep this "spectratest" remote?
    +y) Yes this is OK (default)
    +e) Edit this remote
    +d) Delete this remote
    +y/e/d> y
    +

    And your config should end up looking like this:

    +
    [spectratest]
    +type = s3
    +provider = SpectraLogic
    +access_key_id = ACCESS_KEY
    +secret_access_key = SECRET_ACCESS_KEY
    +endpoint = https://bp.example.com

    Storj

    Storj is a decentralized cloud storage which can be used through its native protocol or an S3 compatible gateway.

    @@ -31491,7 +35646,7 @@ native protocol or an S3 compatible gateway.

    rclone config with a type of s3 and with a provider name of Storj. Here is an example run of the configurator.

    -
    Type of storage to configure.
    +
    Type of storage to configure.
     Storage> s3
     Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
     Only applies if access_key_id and secret_access_key is blank.
    @@ -31598,9 +35753,9 @@ fees, and deletion penalty.

    provider name of Synology. Here is an example run of the configurator.

    First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -31721,43 +35876,35 @@ Tencent Cloud for unstructured data. It is secure, stable, massive,
     convenient, low-delay and low-cost.

    To configure access to Tencent COS, follow the steps below:

      -
    1. Run rclone config and select n for a new -remote.
    2. -
    -
    rclone config
    +
  • Run rclone config and select n for a +new remote.

    +
    rclone config
     No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    -n/s/q> n
    -
      -
    1. Give the name of the configuration. For example, name it 'cos'.
    2. -
    -
    name> cos
    -
      -
    1. Select s3 storage.
    2. -
    -
    Choose a number from below, or type in your own value
    +n/s/q> n
  • +
  • Give the name of the configuration. For example, name it +'cos'.

    +
    name> cos
  • +
  • Select s3 storage.

    +
    Choose a number from below, or type in your own value
     [snip]
     XX / Amazon S3 Compliant Storage Providers including AWS, ...
        \ "s3"
     [snip]
    -Storage> s3
    -
      -
    1. Select TencentCOS provider.
    2. -
    -
    Choose a number from below, or type in your own value
    +Storage> s3
  • +
  • Select TencentCOS provider.

    +
    Choose a number from below, or type in your own value
     1 / Amazon Web Services (AWS) S3
        \ "AWS"
     [snip]
     11 / Tencent Cloud Object Storage (COS)
        \ "TencentCOS"
     [snip]
    -provider> TencentCOS
    -
      -
    1. Enter your SecretId and SecretKey of Tencent Cloud.
    2. -
    -
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    +provider> TencentCOS
  • +
  • Enter your SecretId and SecretKey of Tencent Cloud.

    +
    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
     Only applies if access_key_id and secret_access_key is blank.
     Enter a boolean value (true or false). Press Enter for the default ("false").
     Choose a number from below, or type in your own value
    @@ -31773,12 +35920,10 @@ access_key_id> AKIDxxxxxxxxxx
     AWS Secret Access Key (password)
     Leave blank for anonymous access or runtime credentials.
     Enter a string value. Press Enter for the default ("").
    -secret_access_key> xxxxxxxxxxx
    -
      -
    1. Select endpoint for Tencent COS. This is the standard endpoint for -different region.
    2. -
    -
     1 / Beijing Region.
    +secret_access_key> xxxxxxxxxxx
  • +
  • Select endpoint for Tencent COS. This is the standard endpoint +for different region.

    +
     1 / Beijing Region.
        \ "cos.ap-beijing.myqcloud.com"
      2 / Nanjing Region.
        \ "cos.ap-nanjing.myqcloud.com"
    @@ -31787,11 +35932,9 @@ different region.
  • 4 / Guangzhou Region. \ "cos.ap-guangzhou.myqcloud.com" [snip] -endpoint> 4
    -
      -
    1. Choose acl and storage class.
    2. -
    -
    Note that this ACL is applied when server-side copying objects as S3
    +endpoint> 4
    +
  • Choose acl and storage class.

    +
    Note that this ACL is applied when server-side copying objects as S3
     doesn't copy the ACL from the source but rather writes a fresh one.
     Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
    @@ -31829,7 +35972,8 @@ Current remotes:
     
     Name                 Type
     ====                 ====
    -cos                  s3
    +cos s3
  • +

    Wasabi

    Wasabi is a cloud-based object storage service for a broad range of applications and use cases. Wasabi @@ -31838,7 +35982,7 @@ high-performance, reliable, and secure data storage infrastructure at minimal cost.

    Wasabi provides an S3 interface which can be configured for use with rclone like this.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     n/s> n
    @@ -31922,25 +36066,26 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    This will leave the config file looking like this.

    -
    [wasabi]
    -type = s3
    -provider = Wasabi
    -env_auth = false
    -access_key_id = YOURACCESSKEY
    -secret_access_key = YOURSECRETACCESSKEY
    -region =
    -endpoint = s3.wasabisys.com
    -location_constraint =
    -acl =
    -server_side_encryption =
    -storage_class =
    +
    [wasabi]
    +type = s3
    +provider = Wasabi
    +env_auth = false
    +access_key_id = YOURACCESSKEY
    +secret_access_key = YOURSECRETACCESSKEY
    +region =
    +endpoint = s3.wasabisys.com
    +location_constraint =
    +acl =
    +server_side_encryption =
    +storage_class =

    Zata Object Storage

    Zata Object Storage provides a secure, S3-compatible cloud storage solution designed for scalability and performance, ideal for a variety of data storage needs.

    First run:

    -
    rclone config
    -
    This will guide you through an interactive setup process:
    +
    rclone config
    +
    This will guide you through an interactive setup process:
     
     e) Edit existing remote
     n) New remote
    @@ -32067,17 +36212,16 @@ Keep this "my zata storage" remote?
     y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
    -y/e/d>
    -
    +y/e/d>

    This will leave the config file looking like this.

    -
    [my zata storage]
    -type = s3
    -provider = Zata
    -access_key_id = xxx
    -secret_access_key = xxx
    -region = us-east-1
    -endpoint = idr01.zata.ai
    -
    +
    [my zata storage]
    +type = s3
    +provider = Zata
    +access_key_id = xxx
    +secret_access_key = xxx
    +region = us-east-1
    +endpoint = idr01.zata.ai

    Memory usage

    The most common cause of rclone using lots of memory is a single directory with millions of files in. Despite s3 not really having the @@ -32102,22 +36246,349 @@ rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    +

    Archive

    +

    The Archive backend allows read only access to the content of archive +files on cloud storage without downloading the complete archive. This +means you could mount a large archive file and use only the parts of it +your application requires, rather than having to extract it.

    +

    The archive files are recognised by their extension.

    + + + + + + + + + + + + + + + + + +
    ArchiveExtension
    Zip.zip
    Squashfs.sqfs
    +

    The supported archive file types are cloud friendly - a single file +can be found and downloaded without downloading the whole archive.

    +

    If you just want to create, list or extract archives and don't want +to mount them then you may find the rclone archive commands +more convenient.

    + +

    These commands supports a wider range of non cloud friendly archives +(but not squashfs) but can't be used for rclone mount or +any other rclone commands (eg rclone check).

    +

    Configuration

    +

    This backend is best used without configuration.

    +

    Use it by putting the string :archive: in front of +another remote, say remote:dir to make +:archive:remote:dir.

    +

    Any archives in remote:dir will become directories and +any files may be read out of them individually.

    +

    For example

    +
    $ rclone lsf s3:rclone/dir
    +100files.sqfs
    +100files.zip
    +

    Note that 100files.zip and 100files.sqfs +are now directories:

    +
    $ rclone lsf :archive:s3:rclone/dir
    +100files.sqfs/
    +100files.zip/
    +

    Which we can look inside:

    +
    $ rclone lsf :archive:s3:rclone/dir/100files.zip/
    +cofofiy5jun
    +gigi
    +hevupaz5z
    +kacak/
    +kozemof/
    +lamapaq4
    +qejahen
    +quhenen2rey
    +soboves8
    +vibat/
    +wose
    +xade
    +zilupot
    +

    Files not in an archive can be read and written as normal. Files in +an archive can only be read.

    +

    The archive backend can also be used in a configuration file. Use the +remote variable to point to the destination of the +archive.

    +
    [remote]
    +type = archive
    +remote = s3:rclone/dir/100files.zip
    +

    Gives

    +
    $ rclone lsf remote:
    +cofofiy5jun
    +gigi
    +hevupaz5z
    +kacak/
    +...
    +

    Modification times

    +

    Modification times are preserved with an accuracy depending on the +archive type.

    +
    $ rclone lsl --max-depth 1 :archive:s3:rclone/dir/100files.zip
    +       12 2025-10-27 14:39:20.000000000 cofofiy5jun
    +       81 2025-10-27 14:39:20.000000000 gigi
    +       58 2025-10-27 14:39:20.000000000 hevupaz5z
    +        6 2025-10-27 14:39:20.000000000 lamapaq4
    +       43 2025-10-27 14:39:20.000000000 qejahen
    +       66 2025-10-27 14:39:20.000000000 quhenen2rey
    +       95 2025-10-27 14:39:20.000000000 soboves8
    +       71 2025-10-27 14:39:20.000000000 wose
    +       76 2025-10-27 14:39:20.000000000 xade
    +       15 2025-10-27 14:39:20.000000000 zilupot
    +

    For zip and squashfs files this is 1s.

    +

    Hashes

    +

    Which hash is supported depends on the archive type. Zip files use +CRC32, Squashfs don't support any hashes. For example:

    +
    $ rclone hashsum crc32 :archive:s3:rclone/dir/100files.zip/
    +b2288554  cofofiy5jun
    +a87e62b6  wose
    +f90f630b  xade
    +c7d0ef29  gigi
    +f1c64740  soboves8
    +cb7b4a5d  quhenen2rey
    +5115242b  kozemof/fonaxo
    +afeabd9a  qejahen
    +71202402  kozemof/fijubey5di
    +bd99e512  kozemof/napux
    +...
    +

    Hashes will be checked when the file is read from the archive and +used as part of syncing if possible.

    +
    $ rclone copy -vv :archive:s3:rclone/dir/100files.zip /tmp/100files
    +...
    +2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk: crc32 = abd05cc8 OK
    +2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk.aeb661dc.partial: renamed to: kacak/turovat5c/yuyuquk
    +2025/10/27 14:56:44 INFO  : kacak/turovat5c/yuyuquk: Copied (new)
    +...
    +

    Zip

    +

    The Zip +file format is a widely used archive format that bundles one or more +files and folders into a single file, primarily for easier storage or +transmission. It typically uses compression (most commonly the DEFLATE +algorithm) to reduce the overall size of the archived content. Zip files +are supported natively by most modern operating systems.

    +

    Rclone does not support the following advanced features of Zip +files:

    +
      +
    • Splitting large archives into smaller parts
    • +
    • Password protection
    • +
    • Zstd compression
    • +
    +

    Squashfs

    +

    Squashfs is a compressed, read-only file system format primarily used +in Linux-based systems. It's designed to compress entire file systems +(including files, directories, and metadata) into a single archive file, +which can then be mounted and read directly, appearing as a normal +directory structure. Because it's read-only and highly compressed, +Squashfs is ideal for live CDs/USBs, embedded devices with limited +storage, and software package distribution, as it saves space and +ensures the integrity of the original files.

    +

    Rclone supports the following squashfs compression formats:

    +
      +
    • Gzip
    • +
    • Lzma
    • +
    • Xz
    • +
    • Zstd
    • +
    +

    These are not yet working:

    +
      +
    • Lzo - Not yet supported
    • +
    • Lz4 - Broken with "error decompressing: lz4: bad magic +number"
    • +
    +

    Rclone works fastest with large squashfs block sizes. For +example:

    +
    mksquashfs 100files 100files.sqfs -comp zstd -b 1M
    +

    Limitations

    +

    Files in the archive backend are read only. It isn't possible to +create archives with the archive backend yet. However you +can create archives with rclone archive +create.

    +

    Only .zip and .sqfs archives are supported +as these are the only common archiving formats which make it easy to +read directory listings from the archive without downloading the whole +archive.

    +

    Internally the archive backend uses the VFS to access files. It isn't +possible to configure the internal VFS yet which might be useful.

    +

    Archive Formats

    +

    Here's a table rating common archive formats on their Cloud +Optimization which is based on their ability to access a single file +without reading the entire archive.

    +

    This capability depends on whether the format has a central +index (or "table of contents") that a program can read +first to find the exact location of a specific file.

    + ++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    FormatExtensionsCloud OptimizedExplanation
    ZIP.zipExcellentZip files have an index +(the "central directory") stored at the end of the file. A +program can seek to the end, read the index to find a file's location +and size, and then seek directly to that file's data to extract it.
    SquashFS.squashfs, +.sqfs, .sfsExcellentThis is a compressed read-only +filesystem image, not just an archive. It is +specifically designed for random access. It uses +metadata and index tables to allow the system to find and decompress +individual files or data blocks on demand.
    ISO Image.isoExcellentLike SquashFS, this is a filesystem +image (for optical media). It contains a filesystem (like ISO 9660 +or UDF) with a table of contents at a known location, +allowing for direct access to any file without reading the whole +disk.
    RAR.rarGoodRAR supports "non-solid" and "solid" +modes. In the common non-solid mode, files are +compressed separately, and an index allows for easy single-file +extraction (like ZIP). In "solid" mode, this rating would be "Very +Poor."
    7z.7zPoorBy default, 7z uses "solid" archives to +maximize compression. This compresses files as one continuous stream. To +extract a file from the middle, all preceding files must be decompressed +first. (If explicitly created as "non-solid," its rating would be +"Excellent").
    tar.tarPoor"Tape Archive" is a streaming +format with no central index. To find a file, you must +read the archive from the beginning, checking each file header one by +one until you find the one you want. This is slow but doesn't require +decompressing data.
    Gzipped Tar.tar.gz, +.tgzVery PoorThis is a tar file (already +"Poor") compressed with gzip as a single, +non-seekable stream. You cannot seek. To get any file, +you must decompress the entire archive from the beginning up to +that file.
    Bzipped/XZ Tar.tar.bz2, +.tar.xzVery PoorThis is the same principle as +tar.gz. The entire archive is one large compressed block, +making random access impossible.
    +

    Ideas for improvements

    +

    It would be possible to add ISO support fairly easily as the library +we use (go-diskfs) +supports it. We could also add ext4 and fat32 +the same way, however in my experience these are not very common as +files so probably not worth it. Go-diskfs can also read partitions which +we could potentially take advantage of.

    +

    It would be possible to add write support, but this would only be for +creating new archives, not for updating existing archives.

    + + +

    Standard options

    +

    Here are the Standard options specific to archive (Read +archives).

    +

    --archive-remote

    +

    Remote to wrap to read archives from.

    +

    Normally should contain a ':' and a path, e.g. +"myremote:path/to/dir", "myremote:bucket" or "myremote:".

    +

    If this is left empty, then the archive backend will use the root as +the remote.

    +

    This means that you can use :archive:remote:path and it will be +equivalent to setting remote="remote:path".

    +

    Properties:

    +
      +
    • Config: remote
    • +
    • Env Var: RCLONE_ARCHIVE_REMOTE
    • +
    • Type: string
    • +
    • Required: false
    • +
    +

    Advanced options

    +

    Here are the Advanced options specific to archive (Read +archives).

    +

    --archive-description

    +

    Description of the remote.

    +

    Properties:

    +
      +
    • Config: description
    • +
    • Env Var: RCLONE_ARCHIVE_DESCRIPTION
    • +
    • Type: string
    • +
    • Required: false
    • +
    +

    Metadata

    +

    Any metadata supported by the underlying remote is read and +written.

    +

    See the metadata docs +for more info.

    +

    Backblaze B2

    B2 is Backblaze's cloud storage system.

Paths are specified as remote:bucket (or remote: for the lsd command). You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    -

    Configuration

    +

    Configuration

    Here is an example of making a b2 configuration. First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key.

    -
    No remotes found, make a new one?
    +
No remotes found, make a new one?
     n) New remote
     q) Quit config
     n/q> n
    @@ -32150,14 +36621,14 @@ y/e/d> y

    This remote is called remote and can now be used like this

    See all buckets

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Create a new bucket

    -
    rclone mkdir remote:bucket
    +
    rclone mkdir remote:bucket

    List the contents of a bucket

    -
    rclone ls remote:bucket
    +
    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    -
    rclone sync --interactive /home/local/directory remote:bucket
    +
    rclone sync --interactive /home/local/directory remote:bucket

    Application Keys

    B2 supports multiple Application @@ -32176,7 +36647,7 @@ then B2 will return 401 errors.

    fewer transactions in exchange for more memory. See the rclone docs for more details.

    -

    Modification times

    +

    Modification times

    The modification time is stored as metadata on the object as X-Bz-Info-src_last_modified_millis as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use @@ -32260,7 +36731,8 @@ href="#b2-lifecycle">--b2-lifecycle flag or after creation using the --b2-hard-delete flag which permanently removes files on deletion instead of hiding them.

    Old versions of files, where available, are visible using the ---b2-versions flag.

    +--b2-versions flag. These can be deleted as required with +delete.

    It is also possible to view a bucket as it was at a certain point in time, using the --b2-version-at flag. This will show the file versions as they were at that time, showing files that have been @@ -32291,7 +36763,7 @@ files to become hidden old versions.

    followed by a cleanup of the old versions.

    Show current version and all the versions with --b2-versions flag.

    -
    $ rclone -q ls b2:cleanup-test
    +
    $ rclone -q ls b2:cleanup-test
             9 one.txt
     
     $ rclone -q --b2-versions ls b2:cleanup-test
    @@ -32300,12 +36772,12 @@ $ rclone -q --b2-versions ls b2:cleanup-test
            16 one-v2016-07-04-141003-000.txt
            15 one-v2016-07-02-155621-000.txt

    Retrieve an old version

    -
    $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
    +
    $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
     
     $ ls -l /tmp/one-v2016-07-04-141003-000.txt
     -rw-rw-r-- 1 ncw ncw 16 Jul  2 17:46 /tmp/one-v2016-07-04-141003-000.txt

    Clean up all the old versions and show that they've gone.

    -
    $ rclone -q cleanup b2:cleanup-test
    +
    $ rclone -q cleanup b2:cleanup-test
     
     $ rclone -q ls b2:cleanup-test
             9 one.txt
    @@ -32317,7 +36789,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
     file name to work out whether the objects are versions or not. Versions'
names are created by inserting a timestamp between the file name and its
     extension.

    -
            9 file.txt
    +
            9 file.txt
             8 file-v2023-07-17-161032-000.txt
            16 file-v2023-06-15-141003-000.txt

    If there are real files present with the same names as versions, then @@ -32326,7 +36798,7 @@ behaviour of --b2-versions can be unpredictable.

    It is useful to know how many requests are sent to the server in different scenarios.

    All copy commands send the following 4 requests:

    -
    /b2api/v1/b2_authorize_account
    +
    /b2api/v1/b2_authorize_account
     /b2api/v1/b2_create_bucket
     /b2api/v1/b2_list_buckets
     /b2api/v1/b2_list_file_names
    @@ -32338,11 +36810,11 @@ requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.

Uploading files that do not require chunking will send 2 requests per file upload:

    -
    /b2api/v1/b2_get_upload_url
    +
    /b2api/v1/b2_get_upload_url
     /b2api/v1/b2_upload_file/

Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk:

    -
    /b2api/v1/b2_start_large_file
    +
    /b2api/v1/b2_start_large_file
     /b2api/v1/b2_get_upload_part_url
     /b2api/v1/b2_upload_part/
     /b2api/v1/b2_finish_large_file
    @@ -32351,10 +36823,10 @@ start and finish the upload) and another 2 requests for each chunk:

    it is set rclone will show and act on older versions of files. For example

    Listing without --b2-versions

    -
    $ rclone -q ls b2:cleanup-test
    +
    $ rclone -q ls b2:cleanup-test
             9 one.txt

    And with

    -
    $ rclone -q --b2-versions ls b2:cleanup-test
    +
    $ rclone -q --b2-versions ls b2:cleanup-test
             9 one.txt
             8 one-v2016-07-04-141032-000.txt
            16 one-v2016-07-04-141003-000.txt
    @@ -32367,20 +36839,22 @@ operations are permitted, so you can't upload files or delete them.

    Rclone supports generating file share links for private B2 buckets. They can either be for a file for example:

    -
    ./rclone link B2:bucket/path/to/file.txt
    +
    ./rclone link B2:bucket/path/to/file.txt
     https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
     

    or if run on a directory you will get:

    -
    ./rclone link B2:bucket/path
    +
    ./rclone link B2:bucket/path
     https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx

you can then use the authorization token (the part of the URL from the ?Authorization= onwards) on any file path under that directory. For example:

    -
    https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
    +
    https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
     https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
     https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx
     
    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to b2 (Backblaze B2).

    --b2-account

    Account ID or Application Key ID.

    @@ -32409,7 +36883,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to b2 (Backblaze B2).

    --b2-endpoint

    Endpoint for the service.

    @@ -32617,6 +37091,82 @@ section in the overview for more info.

  • Type: Encoding
  • Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
  • +

    --b2-sse-customer-algorithm

    +

    If using SSE-C, the server-side encryption algorithm used when +storing this object in B2.

    +

    Properties:

    +
      +
    • Config: sse_customer_algorithm
    • +
    • Env Var: RCLONE_B2_SSE_CUSTOMER_ALGORITHM
    • +
    • Type: string
    • +
    • Required: false
    • +
    • Examples: +
        +
      • "" +
          +
        • None
        • +
      • +
      • "AES256" +
          +
        • Advanced Encryption Standard (256 bits key length)
        • +
      • +
    • +
    +

    --b2-sse-customer-key

    +

To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data.

    +

    Alternatively you can provide --sse-customer-key-base64.

    +

    Properties:

    +
      +
    • Config: sse_customer_key
    • +
    • Env Var: RCLONE_B2_SSE_CUSTOMER_KEY
    • +
    • Type: string
    • +
    • Required: false
    • +
    • Examples: +
        +
      • "" +
          +
        • None
        • +
      • +
    • +
    +

    --b2-sse-customer-key-base64

    +

To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data.

    +

    Alternatively you can provide --sse-customer-key.

    +

    Properties:

    +
      +
    • Config: sse_customer_key_base64
    • +
    • Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_BASE64
    • +
    • Type: string
    • +
    • Required: false
    • +
    • Examples: +
        +
      • "" +
          +
        • None
        • +
      • +
    • +
    +

    --b2-sse-customer-key-md5

    +

    If using SSE-C you may provide the secret encryption key MD5 checksum +(optional).

    +

    If you leave it blank, this is calculated automatically from the +sse_customer_key provided.

    +

    Properties:

    +
      +
    • Config: sse_customer_key_md5
    • +
    • Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_MD5
    • +
    • Type: string
    • +
    • Required: false
    • +
    • Examples: +
        +
      • "" +
          +
        • None
        • +
      • +
    • +
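Putting the SSE-C options together, an upload might look like this (a sketch; the key value is a placeholder, and note that with SSE-C the same key must be supplied again to read the objects back):

rclone copy --b2-sse-customer-algorithm AES256 --b2-sse-customer-key-base64 PLACEHOLDER_BASE64_KEY /home/source remote:bucket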

    --b2-description

    Description of the remote.

    Properties:

    @@ -32628,8 +37178,8 @@ section in the overview for more info.

    Backend commands

    Here are the commands specific to the b2 backend.

    -

    Run them with

    -
    rclone backend COMMAND remote:
    +

    Run them with:

    +
    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command @@ -32637,26 +37187,26 @@ for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    lifecycle

    -

    Read or set the lifecycle for a bucket

    -
    rclone backend lifecycle remote: [options] [<arguments>+]
    +

    Read or set the lifecycle for a bucket.

    +
    rclone backend lifecycle remote: [options] [<arguments>+]

    This command can be used to read or set the lifecycle for a bucket.

    -

    Usage Examples:

    To show the current lifecycle rules:

    -
    rclone backend lifecycle b2:bucket
    +
    rclone backend lifecycle b2:bucket

    This will dump something like this showing the lifecycle rules.

    -
    [
    -    {
    -        "daysFromHidingToDeleting": 1,
    -        "daysFromUploadingToHiding": null,
    -        "daysFromStartingToCancelingUnfinishedLargeFiles": null,
    -        "fileNamePrefix": ""
    -    }
    -]
    +
    [
    +    {
    +        "daysFromHidingToDeleting": 1,
    +        "daysFromUploadingToHiding": null,
    +        "daysFromStartingToCancelingUnfinishedLargeFiles": null,
    +        "fileNamePrefix": ""
    +    }
    +]

    If there are no lifecycle rules (the default) then it will just -return [].

    +return [].

    To reset the current lifecycle rules:

    -
    rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
    +
    rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
     rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1

    This will run and then print the new lifecycle rules as above.

    Rclone only lets you set lifecycles for the whole bucket with the @@ -32665,46 +37215,49 @@ fileNamePrefix = "".

the daysFromHidingToDeleting to 1 day. You can also enable hard_delete in the config, which means deletions won't cause versions but overwrites will still cause versions to be made.

    -
    rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
    -

    See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules

    +
    rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
    +

    See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules

    Options:

    • "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
    • "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any -unfinished large file versions after this many days
    • +unfinished large file versions after this many days.
    • "daysFromUploadingToHiding": This many days after uploading a file -is hidden
    • +is hidden.

    cleanup

    Remove unfinished large file uploads.

    -
    rclone backend cleanup remote: [options] [<arguments>+]
    +
    rclone backend cleanup remote: [options] [<arguments>+]

    This command removes unfinished large file uploads of age greater than max-age, which defaults to 24 hours.

    Note that you can use --interactive/-i or --dry-run with this command to see what it would do.

    -
    rclone backend cleanup b2:bucket/path/to/object
    +
    rclone backend cleanup b2:bucket/path/to/object
     rclone backend cleanup -o max-age=7w b2:bucket/path/to/object

Durations are parsed as per the rest of rclone, e.g. 2h, 7d, 7w, etc.

    Options:

      -
    • "max-age": Max age of upload to delete
    • +
    • "max-age": Max age of upload to delete.

    cleanup-hidden

    Remove old versions of files.

    -
    rclone backend cleanup-hidden remote: [options] [<arguments>+]
    +
    rclone backend cleanup-hidden remote: [options] [<arguments>+]

    This command removes any old hidden versions of files.

    Note that you can use --interactive/-i or --dry-run with this command to see what it would do.

    -
    rclone backend cleanup-hidden b2:bucket/path/to/dir
    -

    Limitations

    +
    rclone backend cleanup-hidden b2:bucket/path/to/dir
    + +

    Limitations

    rclone about is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Box

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. @@ -32713,12 +37266,12 @@ href="https://rclone.org/commands/rclone_about/">rclone about

    can do either in your browser, or with a config.json downloaded from Box to use JWT authentication. rclone config walks you through it.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -32775,20 +37328,21 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

    Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your Box

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your Box

    -
    rclone ls remote:
    +
    rclone ls remote:

To copy a local directory to a Box directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Using rclone with an Enterprise account with SSO

    If you have an "Enterprise" account type with Box with single sign on @@ -32821,7 +37375,7 @@ bearing in mind that if you use the copy the config file method, you should not use that remote on the computer you did the authentication on.

    Here is how to do it.

    -
    $ rclone config
    +
    $ rclone config
     Current remotes:
     
     Name                 Type
    @@ -32960,7 +37514,9 @@ interface.

    https://app.box.com/folder/11xxxxxxxxx8 in the browser, then you use 11xxxxxxxxx8 as the root_folder_id in the config.
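A minimal sketch of the relevant part of the resulting config (the remote name is illustrative):

[remote]
type = box
root_folder_id = 11xxxxxxxxx8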

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to box (Box).

    --box-client-id

    OAuth Client Id.

    @@ -32994,6 +37550,16 @@ environment variables such as ${RCLONE_CONFIG_DIR}.

  • Type: string
  • Required: false
  • +

    --box-config-credentials

    +

    Box App config.json contents.

    +

    Leave blank normally.

    +

    Properties:

    +
      +
    • Config: config_credentials
    • +
    • Env Var: RCLONE_BOX_CONFIG_CREDENTIALS
    • +
    • Type: string
    • +
    • Required: false
    • +

    --box-access-token

    Box App Primary Access Token

    Leave blank normally.

    @@ -33023,7 +37589,7 @@ environment variables such as ${RCLONE_CONFIG_DIR}.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to box (Box).

    --box-token

    OAuth Access Token as a JSON blob.

    @@ -33148,7 +37714,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

Box file names can't have the \ character in them. rclone @@ -33164,7 +37731,7 @@ rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Get your own Box App ID

    Here is how to create your own Box App ID for rclone:

find you can't work without it. There are many docs online describing the use of the cache backend to minimize API hits; by and large these are out of date, and the cache backend isn't needed in those scenarios any more.

      -

      Configuration

      +

      Configuration

      To get started you just need to have an existing remote which can be configured with cache.

      Here is an example of how to make a remote called test-cache. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       r) Rename remote
       c) Copy remote
      @@ -33290,11 +37857,11 @@ info_age = 48h
       chunk_total_size = 10G

      You can then use it like this,

      List directories in top level of your drive

      -
      rclone lsd test-cache:
      +
      rclone lsd test-cache:

      List all the files in your drive

      -
      rclone ls test-cache:
      +
      rclone ls test-cache:

      To start a cached mount

      -
      rclone mount --allow-other test-cache: /var/tmp/test-cache
      +
      rclone mount --allow-other test-cache: /var/tmp/test-cache

      Write Features

      Offline uploading

      In an effort to make writing through cache more reliable, the backend @@ -33359,9 +37926,11 @@ adapting any of its settings.

      How to enable? Run rclone config and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled.

      -

      Affected settings: - cache-workers: Configured -value during confirmed playback or 1 all the other -times

      +

      Affected settings:

      +
        +
      • cache-workers: Configured value during +confirmed playback or 1 all the other times
      • +
      Certificate Validation

      When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct URLs to ensure @@ -33374,7 +37943,9 @@ where the dots have been replaced with dashes, e.g. 127.0.0.1 becomes 127-0-0-1.

      To get the server-hash part, the easiest way is to visit

      -

      https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token

      +

      https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token

      This page will list all the available Plex servers for your account with at least one .plex.direct link for each. Copy one URL and replace the IP address with the desired address. This can be used as @@ -33401,9 +37972,12 @@ on them.

Any reports or feedback on how cache behaves on this OS are greatly appreciated.

        -
      • https://github.com/rclone/rclone/issues/1935
      • -
      • https://github.com/rclone/rclone/issues/1907
      • -
      • https://github.com/rclone/rclone/issues/1834
      • +
      • Issue +#1935
      • +
      • Issue +#1907
      • +
      • Issue +#1834

      Risk of throttling

      Future iterations of the cache backend will make use of the pooling @@ -33413,15 +37987,20 @@ make writing through it more tolerant to failures.

      meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts.

      -

      Some recommendations: - don't use a very small interval for entry -information (--cache-info-age) - while writes aren't yet -optimised, you can still write through cache which gives -you the advantage of adding the file in the cache at the same time if -configured to do so.

      +

      Some recommendations:

      +
        +
      • don't use a very small interval for entry information +(--cache-info-age)
      • +
      • while writes aren't yet optimised, you can still write through +cache which gives you the advantage of adding the file in +the cache at the same time if configured to do so.
      • +

      Future enhancements:

        -
      • https://github.com/rclone/rclone/issues/1937
      • -
      • https://github.com/rclone/rclone/issues/1936
      • +
      • Issue +#1937
      • +
      • Issue +#1936

      cache and crypt

      One common scenario is to keep your data encrypted in the cloud @@ -33461,11 +38040,16 @@ listener is disabled if you do not add the flag.

      Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.

      -

      Params: - remote = path to remote -(required) - withData = true/false to -delete cached data (chunks) as well (optional, false by -default)

      -

      Standard options

      +

      Params:

      +
        +
      • remote = path to remote +(required)
      • +
      • withData = true/false to delete cached data +(chunks) as well (optional, false by default)
      • +
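For example, assuming a running rc server and that these params belong to the cache/expire rc method (a sketch):

rclone rc cache/expire remote=path/to/dir withData=true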
      + + +

      Standard options

      Here are the Standard options specific to cache (Cache a remote).

      --cache-remote

      Remote to cache.

      @@ -33589,7 +38173,7 @@ oldest chunks until it goes under this value.

      -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to cache (Cache a remote).

      --cache-plex-token

      The plex token for authentication - auto set normally.

      @@ -33795,8 +38379,8 @@ an error.

      Backend commands

      Here are the commands specific to the cache backend.

      -

      Run them with

      -
      rclone backend COMMAND remote:
      +

      Run them with:

      +
      rclone backend COMMAND remote:

      The help below will explain what arguments each command takes.

      See the backend command @@ -33805,13 +38389,14 @@ for more info on how to pass options and arguments.

      href="https://rclone.org/rc/#backend-command">backend/command.

      stats

      Print stats on the cache backend in JSON format.

      -
      rclone backend stats remote: [options] [<arguments>+]
      +
      rclone backend stats remote: [options] [<arguments>+]
      +

      Chunker

The chunker overlay transparently splits large files into smaller chunks during upload to the wrapped remote and transparently assembles them back when the file is downloaded. This allows you to effectively overcome size limits imposed by storage providers.
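Once configured (see the Configuration section below), the result is a minimal wrapping remote along these lines (the remote name, wrapped path and chunk size here are illustrative):

[overlay]
type = chunker
remote = remote:path
chunk_size = 100M

Files larger than chunk_size are then stored as multiple numbered chunks on the wrapped remote.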

      -

      Configuration

      +

      Configuration

      To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote.

      @@ -33824,7 +38409,7 @@ swift) then you should probably put the bucket in the remote

      Now configure chunker using rclone config. We will call this one overlay to separate it from the remote itself.

      -
      No remotes found, make a new one?
      +
No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -33954,7 +38539,7 @@ non-conforming file names as normal non-chunked files.

      When using norename transactions, chunk names will additionally have a unique file version suffix. For example, BIG_FILE_NAME.rclone_chunk.001_bp562k.

      -

      Metadata

      +

      Metadata

Besides data chunks, chunker will by default create a metadata object for a composite file. The object is named after the original file. Chunker allows the user to disable metadata completely (the @@ -34023,7 +38608,7 @@ secondary type. This will save CPU and bandwidth but can result in empty hashsums at destination. Beware of consequences: the sync command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found.

      -

      Modification times

      +

      Modification times

      Chunker stores modification times using the wrapped remote so support depends on that. For a small non-chunked file the chunker overlay simply manipulates modification time of the wrapped remote file. For a @@ -34089,7 +38674,9 @@ We recommend users to keep rclone up-to-date to avoid data corruption.

      Changing transactions is dangerous and requires explicit migration.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to chunker (Transparently chunk/split large files).

      --chunker-remote

      @@ -34157,7 +38744,7 @@ files. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to chunker (Transparently chunk/split large files).

      --chunker-name-format

      @@ -34278,6 +38865,7 @@ used.
    1. Type: string
    2. Required: false
    3. +

      Cloudinary

This is a backend for the Cloudinary platform.

      @@ -34296,7 +38884,7 @@ pricing details.

      Securing Your Credentials

      Please refer to the docs

      -

      Configuration

      +

      Configuration

      Here is an example of making a Cloudinary configuration.

      First, create a cloudinary.com @@ -34304,7 +38892,7 @@ account and choose a plan.

      You will need to log in and get the API Key and API Secret for your account from the developer section.

      Now run

      -

      rclone config

      +
      rclone config

      Follow the interactive setup process:

      No remotes found, make a new one?
       n) New remote
      @@ -34371,15 +38959,17 @@ e) Edit this remote
       d) Delete this remote
       y/e/d> y

      List directories in the top level of your Media Library

      -

      rclone lsd cloudinary-media-library:

      +
      rclone lsd cloudinary-media-library:

      Make a new directory.

      -

      rclone mkdir cloudinary-media-library:directory

      +
      rclone mkdir cloudinary-media-library:directory

      List the contents of a directory.

      -

      rclone ls cloudinary-media-library:directory

      +
      rclone ls cloudinary-media-library:directory

      Modified time and hashes

Cloudinary automatically stores MD5 checksums and timestamps for any successful Put; these are read-only.
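Since the MD5 checksums are stored server-side, they can be read back with rclone's generic hashsum command, e.g. (using the remote name from the example above):

rclone hashsum MD5 cloudinary-media-library:directory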

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to cloudinary (Cloudinary).

      --cloudinary-cloud-name

      @@ -34427,7 +39017,7 @@ automatically and read-only.

    4. Type: string
    5. Required: false
    6. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to cloudinary (Cloudinary).

      --cloudinary-encoding

      @@ -34485,18 +39075,19 @@ ts u3ma usdz wdp webm webp wmv]
    7. Type: string
    8. Required: false
    9. +

      Citrix ShareFile

Citrix ShareFile is a secure file sharing and transfer service aimed at businesses.

      -

      Configuration

      +

      Configuration

The initial setup for Citrix ShareFile involves getting a token from Citrix ShareFile which you can do in your browser. rclone config walks you through it.

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -34554,21 +39145,22 @@ e) Edit this remote
       d) Delete this remote
       y/e/d> y

      See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

      +docs for how to set it up on a machine without an internet-connected +web browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Citrix ShareFile. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

      -

      Once configured you can then use rclone like this,

      +

      Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

      List directories in top level of your ShareFile

      -
      rclone lsd remote:
      +
      rclone lsd remote:

      List all the files in your ShareFile

      -
      rclone ls remote:
      +
      rclone ls remote:

To copy a local directory to a ShareFile directory called backup

      -
      rclone copy /home/source remote:backup
      +
      rclone copy /home/source remote:backup

      Paths may be as deep as required, e.g. remote:directory/subdirectory.

      Modification times and @@ -34668,7 +39260,9 @@ name:

      Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to sharefile (Citrix Sharefile).

      --sharefile-client-id

      @@ -34726,7 +39320,7 @@ connectors. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to sharefile (Citrix Sharefile).

      --sharefile-token

      @@ -34825,7 +39419,8 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,Left
    10. Type: string
    11. Required: false
    12. -

      Limitations

      + +

      Limitations

      Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

      ShareFile only supports filenames up to 256 characters in length.

      @@ -34835,7 +39430,7 @@ for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

      See List of backends that do not support rclone about and rclone about

      +href="https://rclone.org/commands/rclone_about/">rclone about.

      Crypt

      Rclone crypt remotes encrypt and decrypt other remotes.

      @@ -34893,7 +39488,7 @@ SecretBox, based on XSalsa20 cipher and Poly1305 for integrity. Names (file- and directory names) are also encrypted by default, but this has some implications and is therefore possible to be turned off.

      -

      Configuration

      +

      Configuration

      Here is an example of how to make a remote called secret.

      To use crypt, first set up the underlying remote. Follow @@ -34918,7 +39513,7 @@ anything you read will be in encrypted form, and anything you write will be unencrypted. To avoid issues it is best to configure a dedicated path for encrypted content, and access it exclusively through a crypt remote.

      -
      No remotes found, make a new one?
      +
No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -35066,21 +39661,23 @@ crypt remote means you will no longer able to decrypt any of the
       previously encrypted content. The only possibility is to re-upload
       everything via a crypt remote configured with your new password.

      Depending on the size of your data, your bandwidth, storage quota -etc, there are different approaches you can take: - If you have -everything in a different location, for example on your local system, -you could remove all of the prior encrypted files, change the password -for your configured crypt remote (or delete and re-create the crypt -configuration), and then re-upload everything from the alternative -location. - If you have enough space on the storage system you can -create a new crypt remote pointing to a separate directory on the same -backend, and then use rclone to copy everything from the original crypt -remote to the new, effectively decrypting everything on the fly using -the old password and re-encrypting using the new password. When done, -delete the original crypt remote directory and finally the rclone crypt -configuration with the old password. All data will be streamed from the -storage system and back, so you will get half the bandwidth and be -charged twice if you have upload and download quota on the storage -system.

      +etc, there are different approaches you can take:

      +
        +
      • If you have everything in a different location, for example on your +local system, you could remove all of the prior encrypted files, change +the password for your configured crypt remote (or delete and re-create +the crypt configuration), and then re-upload everything from the +alternative location.
      • +
• If you have enough space on the storage system you can create a new crypt remote pointing to a separate directory on the same backend, and then use rclone to copy everything from the original crypt remote to the new, effectively decrypting everything on the fly using the old password and re-encrypting using the new password (see the sketch after this list). When done, delete the original crypt remote directory and finally the rclone crypt configuration with the old password. All data will be streamed from the storage system and back, so you will get half the bandwidth and be charged twice if you have upload and download quota on the storage system.
      • +
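The second approach sketches out like this, with hypothetical remote names oldsecret: (old password) and newsecret: (new password) pointing at the two crypt configurations:

rclone copy --progress oldsecret: newsecret:

Once the copy has completed and been verified, delete the original crypt directory and the old crypt configuration.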

      Note: A security problem related to the random password generator was fixed in rclone version 1.53.3 (released 2020-11-19). Passwords generated by rclone config in version 1.49.0 @@ -35093,7 +39690,7 @@ more details, and a tool you can use to check if you are affected.

      Example

      Create the following file structure using "standard" file name encryption.

      -
      plaintext/
      +
      plaintext/
       ├── file0.txt
       ├── file1.txt
       └── subdir
      @@ -35102,7 +39699,7 @@ encryption.

      └── subsubdir └── file4.txt

      Copy these to the remote, and list them

      -
      $ rclone -q copy plaintext secret:
      +
      $ rclone -q copy plaintext secret:
       $ rclone -q ls secret:
               7 file1.txt
               6 file0.txt
      @@ -35110,21 +39707,21 @@ $ rclone -q ls secret:
              10 subdir/subsubdir/file4.txt
               9 subdir/file3.txt

      The crypt remote looks like

      -
      $ rclone -q ls remote:path
      +
      $ rclone -q ls remote:path
              55 hagjclgavj2mbiqm6u6cnjjqcg
              54 v05749mltvv1tf4onltun46gls
              57 86vhrsv86mpbtd3a0akjuqslj8/dlj7fkq4kdq72emafg7a7s41uo
              58 86vhrsv86mpbtd3a0akjuqslj8/7uu829995du6o42n32otfhjqp4/b9pausrfansjth5ob3jkdqd4lc
              56 86vhrsv86mpbtd3a0akjuqslj8/8njh1sk437gttmep3p70g81aps

      The directory structure is preserved

      -
      $ rclone -q ls secret:subdir
      +
      $ rclone -q ls secret:subdir
               8 file2.txt
               9 file3.txt
              10 subsubdir/file4.txt

Without file name encryption, .bin extensions are added to underlying names. This prevents the cloud provider from attempting to interpret file content.

      -
      $ rclone -q ls remote:path
      +
      $ rclone -q ls remote:path
              54 file0.txt.bin
              57 subdir/file3.txt.bin
              56 subdir/file2.txt.bin
      @@ -35200,7 +39797,9 @@ protected by an extremely strong crypto authenticator.

      Use the rclone cryptcheck command to check the integrity of an encrypted remote instead of rclone check which can't check the checksums properly.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to crypt (Encrypt/Decrypt a remote).

      --crypt-remote

      @@ -35290,7 +39889,7 @@ obscure.

    13. Type: string
    14. Required: false
    15. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to crypt (Encrypt/Decrypt a remote).

    16. Type: string
    17. Required: false
    18. -

      Metadata

      +

      Metadata

      Any metadata supported by the underlying remote is read and written.

      See the metadata docs for more info.

      Backend commands

      Here are the commands specific to the crypt backend.

      -

      Run them with

      -
      rclone backend COMMAND remote:
      +

      Run them with:

      +
      rclone backend COMMAND remote:

      The help below will explain what arguments each command takes.

      See the backend command @@ -35438,22 +40037,23 @@ for more info on how to pass options and arguments.

      These can be run on a running backend using the rc command backend/command.

      encode

      -

      Encode the given filename(s)

      -
      rclone backend encode remote: [options] [<arguments>+]
      +

      Encode the given filename(s).

      +
      rclone backend encode remote: [options] [<arguments>+]

      This encodes the filenames given as arguments returning a list of strings of the encoded results.

      -

      Usage Example:

      -
      rclone backend encode crypt: file1 [file2...]
      +

      Usage examples:

      +
      rclone backend encode crypt: file1 [file2...]
       rclone rc backend/command command=encode fs=crypt: file1 [file2...]

      decode

      -

      Decode the given filename(s)

      -
      rclone backend decode remote: [options] [<arguments>+]
      +

      Decode the given filename(s).

      +
      rclone backend decode remote: [options] [<arguments>+]

      This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid.

      -

      Usage Example:

      -
      rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
      +

      Usage examples:

      +
      rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
       rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]
      +

      Backing up an encrypted remote

      If you wish to backup an encrypted remote, it is recommended that you @@ -35473,9 +40073,9 @@ remote remote2: and then the encrypted version eremote2: with path remote2:crypt using the same passwords as eremote:.

      To sync the two remotes you would do

      -
      rclone sync --interactive remote:crypt remote2:crypt
      +
      rclone sync --interactive remote:crypt remote2:crypt

      And to check the integrity you would do

      -
      rclone check remote:crypt remote2:crypt
      +
      rclone check remote:crypt remote2:crypt

      File formats

      File encryption

      Files are encrypted 1:1 source file to destination object. The file @@ -35558,11 +40158,11 @@ If the user doesn't supply a salt then rclone uses an internal one.

      scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.

      -

      SEE ALSO

      +

      See Also

      Compress

      Warning

      @@ -35573,17 +40173,17 @@ code and don't use this remote in critical applications.

      The Compress remote adds compression to another remote. It is best used with remotes containing many large compressible files.

      -

      Configuration

      +

      Configuration

      To use this remote, all you need to do is specify another remote and a compression mode to use:

      -
      Current remotes:
      +
      $ rclone config
      +Current remotes:
       
       Name                 Type
       ====                 ====
       remote_to_press      sometype
       
       e) Edit existing remote
      -$ rclone config
       n) New remote
       d) Delete remote
       r) Rename remote
      @@ -35592,43 +40192,79 @@ s) Set configuration password
       q) Quit config
       e/n/d/r/c/s/q> n
       name> compress
      +
      +Option Storage.
      +Type of storage to configure.
      +Choose a number from below, or type in your own value.
       ...
      - 8 / Compress a remote
      -   \ "compress"
      +12 / Compress a remote
      +   \ (compress)
       ...
       Storage> compress
      -** See help for compress backend at: https://rclone.org/compress/ **
       
      +Option remote.
       Remote to compress.
      -Enter a string value. Press Enter for the default ("").
      +Enter a value.
       remote> remote_to_press:subdir 
      +
      +Option mode.
       Compression mode.
      -Enter a string value. Press Enter for the default ("gzip").
      -Choose a number from below, or type in your own value
      - 1 / Gzip compression balanced for speed and compression strength.
      -   \ "gzip"
      -compression_mode> gzip
      -Edit advanced config? (y/n)
      +Choose a number from below, or type in your own value of type string.
      +Press Enter for the default (gzip).
      + 1 / Standard gzip compression with fastest parameters.
      +   \ (gzip)
      + 2 / Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs.
      +   \ (zstd)
      +mode> gzip
      +
      +Option level.
      +GZIP (levels -2 to 9):
      +- -2 — Huffman encoding only. Only use if you know what you're doing.
      +- -1 (default) — recommended; equivalent to level 5.
      +- 0 — turns off compression.
      +- 1–9 — increase compression at the cost of speed. Going past 6 generally offers very little return.
      + 
      +ZSTD (levels 0 to 4):
      +- 0 — turns off compression entirely.
      +- 1 — fastest compression with the lowest ratio.
      +- 2 (default) — good balance of speed and compression.
      +- 3 — better compression, but uses about 2–3x more CPU than the default.
      +- 4 — best possible compression ratio (highest CPU cost).
      + 
      +Notes:
      +- Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs.
      +- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5).
      +Enter a value.
      +level> -1
      +
      +Edit advanced config?
       y) Yes
       n) No (default)
       y/n> n
      -Remote config
      ---------------------
      -[compress]
      -type = compress
      -remote = remote_to_press:subdir
      -compression_mode = gzip
      ---------------------
      +
      +Configuration complete.
      +Options:
      +- type: compress
      +- remote: remote_to_press:subdir
      +- mode: gzip
      +- level: -1
      +Keep this "compress" remote?
       y) Yes this is OK (default)
       e) Edit this remote
       d) Delete this remote
       y/e/d> y
      -

      Compression Modes

      -

      Currently only gzip compression is supported. It provides a decent -balance between speed and size and is well supported by other -applications. Compression strength can further be configured via an -advanced setting where 0 is no compression and 9 is strongest -compression.

      +

      Compression Algorithms

      +
        +
      • GZIP – a well-established and widely adopted +algorithm that strikes a solid balance between compression speed and +ratio. It supports compression levels from -2 to 9, with the default -1 +(roughly equivalent to level 5) offering an effective middle ground for +most scenarios.

      • +
      • Zstandard (zstd) – a modern, high-performance +algorithm that offers precise control over the trade-off between speed +and compression efficiency. Compression levels range from 0 (no +compression) to 4 (maximum compression).

      • +
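As a sketch, an equivalent remote can also be created non-interactively (the names used here are illustrative):

rclone config create compress_zstd compress remote=remote_to_press:subdir mode=zstd level=2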

      File types

      If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to the compression algorithm @@ -35642,7 +40278,9 @@ without correct metadata files will not be recognized by rclone.

where * is the base file and the # part is the base64-encoded size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to compress (Compress a remote).

      --compress-remote

      @@ -35668,25 +40306,36 @@ remote).

      • Standard gzip compression with fastest parameters.
      +
    19. "zstd" +
        +
      • Zstandard compression — fast modern algorithm offering adjustable +speed-to-compression tradeoffs.
      • +
    20. -

      Advanced options

      -

      Here are the Advanced options specific to compress (Compress a -remote).

      --compress-level

      -

      GZIP compression level (-2 to 9).

      -

      Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 -increase compression at the cost of speed. Going past 6 generally offers -very little return.

      -

      Level -2 uses Huffman encoding only. Only use if you know what you -are doing. Level 0 turns off compression.

      +

GZIP (levels -2 to 9):
- -2 — Huffman encoding only. Only use if you know what you're doing.
- -1 (default) — recommended; equivalent to level 5.
- 0 — turns off compression.
- 1–9 — increase compression at the cost of speed. Going past 6 generally offers very little return.

      +

ZSTD (levels 0 to 4):
- 0 — turns off compression entirely.
- 1 — fastest compression with the lowest ratio.
- 2 (default) — good balance of speed and compression.
- 3 — better compression, but uses about 2–3x more CPU than the default.
- 4 — best possible compression ratio (highest CPU cost).

      +

Notes:
- Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs.
- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5).

      Properties:

      • Config: level
      • Env Var: RCLONE_COMPRESS_LEVEL
      • -
      • Type: int
      • -
      • Default: -1
      • +
      • Type: string
      • +
      • Required: true
      +

      Advanced options

      +

      Here are the Advanced options specific to compress (Compress a +remote).

      --compress-ram-cache-limit

Some remotes don't allow the upload of files with unknown size. In this case the compressed file will need to be cached to determine its @@ -35709,27 +40358,28 @@ than this limit will be cached on disk.

    21. Type: string
    22. Required: false
    23. -

      Metadata

      +

      Metadata

      Any metadata supported by the underlying remote is read and written.

      See the metadata docs for more info.

      +

      Combine

      The combine backend joins remotes together into a single directory tree.

      For example you might have a remote for images on one provider:

      -
      $ rclone tree s3:imagesbucket
      +
      $ rclone tree s3:imagesbucket
       /
       ├── image1.jpg
       └── image2.jpg

      And a remote for files on another:

      -
      $ rclone tree drive:important/files
      +
      $ rclone tree drive:important/files
       /
       ├── file1.txt
       └── file2.txt

      The combine backend can join these together into a synthetic directory structure like this:

      -
      $ rclone tree combined:
      +
      $ rclone tree combined:
       /
       ├── files
       │   ├── file1.txt
      @@ -35739,16 +40389,16 @@ synthetic directory structure like this:

      └── image2.jpg

      You'd do this by specifying an upstreams parameter in the config like this

      -
      upstreams = images=s3:imagesbucket files=drive:important/files
      +
      upstreams = images=s3:imagesbucket files=drive:important/files

During the initial setup with rclone config you will specify the upstream remotes as a space separated list. The upstream remotes can either be local paths or other remotes.
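As a sketch, the same setup can also be created non-interactively with the upstreams shown above:

rclone config create combined combine upstreams="images=s3:imagesbucket files=drive:important/files"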

      -

      Configuration

      +

      Configuration

      Here is an example of how to make a combine called remote for the example above. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -35787,25 +40437,28 @@ Google Drive Shared Drives
       the shared drives you have access to.

      Assuming your main (non shared drive) Google drive remote is called drive: you would run

      -
      rclone backend -o config drives drive:
      +
      rclone backend -o config drives drive:

      This would produce something like this:

      -
      [My Drive]
      -type = alias
      -remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
      -
      -[Test Drive]
      -type = alias
      -remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
      -
      -[AllDrives]
      -type = combine
      -upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
      +
      [My Drive]
      +type = alias
      +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
      +
      +[Test Drive]
      +type = alias
      +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
      +
      +[AllDrives]
      +type = combine
      +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

      If you then add that config to your config file (find it with rclone config file) then you can access all the shared drives in one place with the AllDrives: remote.

      See the Google Drive docs for full info.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to combine (Combine several remotes into one).

      --combine-upstreams

      @@ -35823,7 +40476,7 @@ remote to put there.

    24. Type: SpaceSepList
    25. Default:
    26. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to combine (Combine several remotes into one).

      --combine-description

      @@ -35835,32 +40488,40 @@ remotes into one).

    27. Type: string
    28. Required: false
    29. -

      Metadata

      +

      Metadata

      Any metadata supported by the underlying remote is read and written.

      See the metadata docs for more info.

      +

      DOI

The DOI remote is a read-only remote for reading files from digital object identifiers (DOI).

      -

      Currently, the DOI backend supports DOIs hosted with: - InvenioRDM - Zenodo - CaltechDATA - Other InvenioRDM -repositories - Dataverse - Harvard Dataverse - Other Dataverse -repositories

      +

      Currently, the DOI backend supports DOIs hosted with:

      +

      Paths are specified as remote:path

      Paths may be as deep as required, e.g. remote:directory/subdirectory.
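As a sketch, a DOI remote can also be created non-interactively (the DOI value here is a placeholder):

rclone config create remote doi doi=10.1234/example-doi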

      -

      Configuration

      +

      Configuration

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -35891,7 +40552,9 @@ y) Yes this is OK (default)
       e) Edit this remote
       d) Delete this remote
       y/e/d> y
      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to doi (DOI datasets).

      --doi-doi

      The DOI or the doi.org URL.

      @@ -35902,7 +40565,7 @@ y/e/d> y
    30. Type: string
    31. Required: true
    32. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to doi (DOI datasets).

      --doi-provider

      DOI provider.

      @@ -35957,28 +40620,29 @@ canonical DOI resolver API cannot be used.

      Backend commands

      Here are the commands specific to the doi backend.

      -

      Run them with

      -
      rclone backend COMMAND remote:
      +

      Run them with:

      +
      rclone backend COMMAND remote:

      The help below will explain what arguments each command takes.

      See the backend command for more info on how to pass options and arguments.

      These can be run on a running backend using the rc command backend/command.

      -

      metadata

      +

      metadata

      Show metadata about the DOI.

      -
      rclone backend metadata remote: [options] [<arguments>+]
      +
      rclone backend metadata remote: [options] [<arguments>+]

      This command returns a JSON object with some information about the DOI.

      -
      rclone backend medatadata doi: 
      +

      Usage example:

      +
      rclone backend metadata doi:

      It returns a JSON object representing metadata about the DOI.

      set

      Set command for updating the config parameters.

      -
      rclone backend set remote: [options] [<arguments>+]
      +
      rclone backend set remote: [options] [<arguments>+]

      This set command can be used to update the config parameters for a running doi backend.

      -

      Usage Examples:

      -
      rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
      +

      Usage examples:

      +
      rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
       rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
       rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI

      The option keys are named as they are in the config file.

      @@ -35986,19 +40650,20 @@ rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
      with the new parameters. Only new parameters need be passed as the values will default to those currently in use.

      It doesn't return anything.

      +

      Dropbox

      Paths are specified as remote:path

      Dropbox paths may be as deep as required, e.g. remote:directory/subdirectory.

      -

      Configuration

      +

      Configuration

      The initial setup for dropbox involves getting a token from Dropbox which you need to do in your browser. rclone config walks you through it.

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      n) New remote
      +
      n) New remote
       d) Delete remote
       q) Quit config
       e/n/d/q> n
      @@ -36030,8 +40695,8 @@ e) Edit this remote
       d) Delete this remote
       y/e/d> y

      See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

      +docs for how to set it up on a machine without an internet-connected +web browser available.

      Note that rclone runs a webserver on your local machine to collect the token as returned from Dropbox. This only runs from the moment it opens your browser to the moment you get back the verification code. @@ -36040,11 +40705,11 @@ to unblock it temporarily if you are running a host firewall, or use manual mode.

      You can then use it like this,

      List directories in top level of your dropbox

      -
      rclone lsd remote:
      +
      rclone lsd remote:

      List all the files in your dropbox

      -
      rclone ls remote:
      +
      rclone ls remote:

      To copy a local directory to a dropbox directory called backup

      -
      rclone copy /home/source remote:backup
      +
      rclone copy /home/source remote:backup

      Dropbox for business

      Rclone supports Dropbox for business and Team Folders.

      When using Dropbox for business remote: and @@ -36134,7 +40799,7 @@ performance guide for more info.

      In this mode rclone will not use upload batching. This was the default before rclone v1.55. It has the disadvantage that it is very likely to encounter too_many_requests errors like this

      -
      NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
      +
      NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.

      When rclone receives these it has to wait for 15s or sometimes 300s before continuing which really slows down transfers.
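
One way to avoid these errors is to re-enable batching, e.g. (a sketch, assuming the backend's --dropbox-batch-mode flag described in this section, which also accepts async):

rclone copy --dropbox-batch-mode sync /home/source remote:backup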

      This will happen especially if --transfers is large, so @@ -36232,7 +40897,9 @@ supported formats at any time.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to dropbox (Dropbox).

      --dropbox-client-id

      OAuth Client Id.

      @@ -36254,7 +40921,7 @@ supported formats at any time.

    33. Type: string
    34. Required: false
    35. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to dropbox (Dropbox).

      --dropbox-token

      OAuth Access Token as a JSON blob.

      @@ -36517,7 +41184,8 @@ used)

    36. Type: string
    37. Required: false
    38. -

      Limitations

      + +

      Limitations

      Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

      There are some file names such as thumbs.db which @@ -36591,15 +41259,15 @@ href="https://storagemadeeasy.com/about/">Storage Made Easy's Enterprise File Fabric™ which provides a software solution to integrate and unify File and Object Storage accessible through a global file system.

      -

      Configuration

      +

      Configuration

      The initial setup for the Enterprise File Fabric backend involves getting a token from the Enterprise File Fabric which you need to do in your browser. rclone config walks you through it.

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -36660,14 +41328,15 @@ y) Yes this is OK (default)
       e) Edit this remote
       d) Delete this remote
       y/e/d> y
      -

      Once configured you can then use rclone like this,

      +

      Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

      List directories in top level of your Enterprise File Fabric

      -
      rclone lsd remote:
      +
      rclone lsd remote:

      List all the files in your Enterprise File Fabric

      -
      rclone ls remote:
      +
      rclone ls remote:

      To copy a local directory to an Enterprise File Fabric directory called backup

      -
      rclone copy /home/source remote:backup
      +
      rclone copy /home/source remote:backup

      Modification times and hashes

      The Enterprise File Fabric allows modification times to be set on @@ -36700,7 +41369,7 @@ hierarchy.

      of the directory you wish rclone to display. These aren't displayed in the web interface, but you can use rclone lsf to find them, for example

      -
      $ rclone lsf --dirs-only -Fip --csv filefabric:
      +
      $ rclone lsf --dirs-only -Fip --csv filefabric:
       120673758,Burnt PDFs/
       120673759,My Quick Uploads/
       120673755,My Syncs/
      @@ -36708,7 +41377,9 @@ for example

      120673757,My contacts/ 120673761,S3 Storage/

      The ID for "S3 Storage" would be 120673761.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to filefabric (Enterprise File Fabric).

      --filefabric-url

      @@ -36762,7 +41433,7 @@ https://docs.storagemadeeasy.com/organisationcloud/api-tokens

    39. Type: string
    40. Required: false
    41. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to filefabric (Enterprise File Fabric).

      --filefabric-token

      @@ -36817,6 +41488,7 @@ section in the overview for more info.

    42. Type: string
    43. Required: false
    44. +

      FileLu

      FileLu is a reliable cloud storage provider offering features like secure file uploads, downloads, flexible @@ -36824,12 +41496,12 @@ storage options, and sharing capabilities. With support for high storage limits and seamless integration with rclone, FileLu makes managing files in the cloud easy. Its cross-platform file backup services let you upload and back up files from any internet-connected device.

      -

      Configuration

      +

      Configuration

      Here is an example of how to make a remote called filelu. First, run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -36856,7 +41528,7 @@ y/e/d> y
      Rclone directory.

      A path with an initial / will operate at the root where you can see the Rclone directory.

      -
      $ rclone lsf TestFileLu:/
      +
      $ rclone lsf TestFileLu:/
       CCTV/
       Camera/
       Documents/
      @@ -36868,31 +41540,31 @@ Videos/

      Example Commands

      Create a new folder named foldername in the Rclone directory:

      -
      rclone mkdir filelu:foldername
      +
      rclone mkdir filelu:foldername

      Delete a folder on FileLu:

      -
      rclone rmdir filelu:/folder/path/
      +
      rclone rmdir filelu:/folder/path/

      Delete a file on FileLu:

      -
      rclone delete filelu:/hello.txt
      +
      rclone delete filelu:/hello.txt

      List files from your FileLu account:

      -
      rclone ls filelu:
      +
      rclone ls filelu:

      List all folders:

      -
      rclone lsd filelu:
      +
      rclone lsd filelu:

      Copy a specific file to the FileLu root:

      -
      rclone copy D:\\hello.txt filelu:
      +
      rclone copy D:\hello.txt filelu:

      Copy files from a local directory to a FileLu directory:

      -
      rclone copy D:/local-folder filelu:/remote-folder/path/
      +
      rclone copy D:/local-folder filelu:/remote-folder/path/

      Download a file from FileLu into a local directory:

      -
      rclone copy filelu:/file-path/hello.txt D:/local-folder
      +
      rclone copy filelu:/file-path/hello.txt D:/local-folder

      Move files from a local directory to a FileLu directory:

      -
      rclone move D:\\local-folder filelu:/remote-path/
      +
      rclone move D:\local-folder filelu:/remote-path/

      Sync files from a local directory to a FileLu directory:

      -
      rclone sync --interactive D:/local-folder filelu:/remote-path/
      +
      rclone sync --interactive D:/local-folder filelu:/remote-path/

      Mount remote to local Linux:

      -
      rclone mount filelu: /root/mnt --vfs-cache-mode full
      +
      rclone mount filelu: /root/mnt --vfs-cache-mode full

      Mount remote to local Windows:

      -
      rclone mount filelu: D:/local_mnt --vfs-cache-mode full
      +
      rclone mount filelu: D:/local_mnt --vfs-cache-mode full

      Get storage info about the FileLu account:

      -
      rclone about filelu:
      +
      rclone about filelu:

      All the other rclone commands are supported by this backend.

      FolderID instead of folder path

      @@ -36921,14 +41593,16 @@ generated. Be sure to update your Rclone configuration with the new key.

      If you are connecting to your FileLu remote for the first time and encounter an error such as:

      -
      Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials
      +
      Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials

      Ensure your Rclone Key is correct.
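
If the key has been regenerated, one way to update the stored value without rerunning the whole wizard is rclone config update (a sketch; my-filelu-remote and NEW_RCLONE_KEY are placeholders, and the option name matches --filelu-key below):

rclone config update my-filelu-remote key NEW_RCLONE_KEY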

      Process killed

      Accounts with large files or extensive metadata may experience significant memory usage during list/sync operations. Ensure the system running rclone has sufficient memory and CPU to handle these operations.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to filelu (FileLu Cloud Storage).

      --filelu-key

      @@ -36940,7 +41614,7 @@ Storage).

    45. Type: string
    46. Required: true
    47. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to filelu (FileLu Cloud Storage).

      --filelu-encoding

      @@ -36964,7 +41638,8 @@ Slash,LtGt,DoubleQuote,SingleQuote,BackQuote,Dollar,Colon,Question,Asterisk,Pipe
    48. Type: string
    49. Required: false
    50. -

      Limitations

      + +

      Limitations

      This backend uses a custom library implementing the FileLu API. While it supports file transfers, some advanced features may not yet be available. Please report any issues to the Files.com. rclone config walks you through it.

      -

      Configuration

      +

      Configuration

      Here is an example of how to make a remote called remote. First run:

      -
      rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -37045,15 +41720,15 @@ d) Delete this remote
       y/e/d> y

      Once configured you can use rclone.

      See all files in the top level:

      -
      rclone lsf remote:
      +
      rclone lsf remote:

      Make a new directory in the root:

      -
      rclone mkdir remote:dir
      +
      rclone mkdir remote:dir

      Recursively List the contents:

      -
      rclone ls remote:
      +
      rclone ls remote:

      Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

      -
      rclone sync --interactive /home/local/directory remote:dir
      -

      Hashes

      +
      rclone sync --interactive /home/local/directory remote:dir
      +

      Hashes

      In December 2024 files.com started supporting more checksums.

      @@ -37065,7 +41740,9 @@ your requirements.

      selecting more checksums will not affect rclone's operations.

      For use with rclone, selecting at least MD5 is recommended so rclone can do an end to end integrity check.
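
With MD5 enabled, an end to end integrity check of a transfer could then look like this (a sketch using the example paths above):

rclone check /home/local/directory remote:dir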

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to filescom (Files.com).

      --filescom-site

      Your site subdomain (e.g. mysite) or custom domain (e.g. @@ -37098,7 +41775,7 @@ obscure.

    51. Type: string
    52. Required: false
    53. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to filescom (Files.com).

      --filescom-api-key

      The API key used to authenticate with Files.com.

      @@ -37130,6 +41807,7 @@ Slash,BackSlash,Del,Ctl,RightSpace,RightCrLfHtVt,InvalidUtf8,Dot
    54. Type: string
    55. Required: false
    56. +

      FTP

      FTP is the File Transfer Protocol. Rclone FTP support is provided using the begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.

      -

      Configuration

      +

      Configuration

      To create an FTP configuration named remote, run

      -
      rclone config
      +
      rclone config

      Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below.

      -
      No remotes found, make a new one?
      +
No remotes found, make a new one?
       n) New remote
       r) Rename remote
       c) Copy remote
      @@ -37204,14 +41882,14 @@ d) Delete this remote
       y/e/d> y

      To see all directories in the home directory of remote

      -
      rclone lsd remote:
      +
      rclone lsd remote:

      Make a new directory

      -
      rclone mkdir remote:path/to/directory
      +
      rclone mkdir remote:path/to/directory

      List the contents of a directory

      -
      rclone ls remote:path/to/directory
      +
      rclone ls remote:path/to/directory

      Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

      -
      rclone sync --interactive /home/local/directory remote:directory
      +
      rclone sync --interactive /home/local/directory remote:directory

      Anonymous FTP

When connecting to an FTP server that allows anonymous login, you can use the special "anonymous" username. Traditionally, this user account @@ -37222,7 +41900,7 @@ valid e-mail address as password.

      href="https://rclone.org/docs/#connection-strings">connection string remotes makes it easy to access such servers, without requiring any configuration in advance. The following are examples of that:

      -
      rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
      +
      rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
       rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):

The above examples work in Linux shells and in PowerShell, but not Windows Command Prompt. They rely on shell command substitution to execute rclone obscure and produce the string required by the pass option. The following examples are exactly the same, except they use an already obscured string representation of the same password "dummy", and therefore work even in Windows Command Prompt:

      -
      rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
      +
      rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
       rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:

      Implicit TLS

Rclone FTP supports implicit FTP over TLS servers (FTPS). This has to @@ -37243,17 +41921,17 @@ href="#ftp-port">--ftp-port.

      TLS Options

      TLS options for Implicit and Explicit TLS can be set using the following flags which are specific to the FTP backend:

      -
      --ftp-no-check-certificate     Do not verify the TLS certificate of the server
      +
      --ftp-no-check-certificate     Do not verify the TLS certificate of the server
       --ftp-disable-tls13            Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
       --ftp-tls-cache-size int       Size of TLS session cache for all control and data connections (default 32)

      However any of the global TLS flags can also be used such as:

      -
      --ca-cert stringArray          CA certificate used to verify servers
      +
      --ca-cert stringArray          CA certificate used to verify servers
       --client-cert string           Client SSL certificate (PEM) for mutual TLS auth
       --client-key string            Client SSL private key (PEM) for mutual TLS auth
       --no-check-certificate         Do not verify the server SSL certificate (insecure)

      If these need to be put in the config file so they apply to just the FTP backend then use the override syntax, eg

      -
      override.ca_cert = XXX
      +
      override.ca_cert = XXX
       override.client_cert = XXX
       override.client_key = XXX

      Restricted filename @@ -37303,7 +41981,9 @@ example:

      This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted.
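
The encoding can also be overridden for a single command with the backend flag, e.g. (a sketch; the value shown is purely illustrative, not a recommendation for any particular server):

rclone lsf remote: --ftp-encoding "Asterisk,Ctl,Dot,Slash"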

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to ftp (FTP).

      --ftp-host

      FTP host to connect to.

      @@ -37370,7 +42050,7 @@ an encrypted one. Cannot be used in combination with implicit FTPS.

    57. Type: bool
    58. Default: false
    59. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to ftp (FTP).

      --ftp-concurrency

      Maximum number of FTP simultaneous connections, 0 for unlimited.

      @@ -37599,7 +42279,8 @@ section in the overview for more info.

    60. Type: string
    61. Required: false
    62. -

      Limitations

      + +

      Limitations

FTP servers acting as rclone remotes must support passive mode. The mode cannot be configured, as passive is the only supported one. Rclone's FTP @@ -37615,7 +42296,7 @@ rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

      See List of backends that do not support rclone about and rclone about

      +href="https://rclone.org/commands/rclone_about/">rclone about.

      The implementation of : --dump headers, --dump bodies, --dump auth for debugging isn't the same as for rclone HTTP based backends - it has less fine grained @@ -37627,7 +42308,7 @@ is).

      present.

      The ftp_proxy environment variable is not currently supported.

      -

      Modification times

      +

      Modification times

      File modification time (timestamps) is supported to 1 second resolution for major FTP servers: ProFTPd, PureFTPd, VsFTPd, and FileZilla FTP server. The VsFTPd server has non-standard @@ -37657,12 +42338,12 @@ and going to the "My Profile" section. Copy the "Account API token" for use in the config file.

      Note that if you wish to connect rclone to Gofile you will need a premium account.

      -

      Configuration

      +

      Configuration

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -37698,11 +42379,12 @@ y) Yes this is OK (default)
       e) Edit this remote
       d) Delete this remote
       y/e/d> y
      -

      Once configured you can then use rclone like this,

      +

      Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

      List directories and files in the top level of your Gofile

      -
      rclone lsf remote:
      +
      rclone lsf remote:

To copy a local directory to a Gofile directory called backup

      -
      rclone copy /home/source remote:backup
      +
      rclone copy /home/source remote:backup

      Modification times and hashes

      Gofile supports modification times with a resolution of 1 second.

      @@ -37810,15 +42492,17 @@ hierarchy.

      In order to do this you will have to find the Folder ID of the directory you wish rclone to display.

      You can do this with rclone

      -
      $ rclone lsf -Fip --dirs-only remote:
      +
      $ rclone lsf -Fip --dirs-only remote:
       d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/
       f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/
       d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/

      The ID to use is the part before the ; so you could set

      -
      root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
      +
      root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0

      To restrict rclone to the Files directory.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to gofile (Gofile).

      --gofile-access-token

      API Access token

      @@ -37830,7 +42514,7 @@ set

    63. Type: string
    64. Required: false
    65. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to gofile (Gofile).

      --gofile-root-folder-id

      ID of the root folder

      @@ -37884,7 +42568,8 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftPeriod
    66. Type: string
    67. Required: false
    68. -

      Limitations

      + +

      Limitations

      Gofile only supports filenames up to 255 characters in length, where a character is a unicode character.

      Directories should not be cached for more than 24h otherwise files in @@ -37909,15 +42594,15 @@ messages in the log about duplicates.

      Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

      -

      Configuration

      +

      Configuration

      The initial setup for google cloud storage involves getting a token from Google Cloud Storage which you need to do in your browser. rclone config walks you through it.

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      n) New remote
      +
      n) New remote
       d) Delete remote
       q) Quit config
       e/n/d/q> n
      @@ -37993,7 +42678,9 @@ Choose a number from below, or type in your own value
          \ "us-east1"
       13 / Northern Virginia.
          \ "us-east4"
      -14 / Oregon.
      +14 / Ohio.
      +   \ "us-east5"
      +15 / Oregon.
          \ "us-west1"
       location> 12
       The storage class to use when storing objects in Google Cloud Storage.
      @@ -38038,8 +42725,8 @@ e) Edit this remote
       d) Delete this remote
       y/e/d> y

      See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

      +docs for how to set it up on a machine without an internet-connected +web browser available.

      Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to @@ -38050,14 +42737,14 @@ mode.

      This remote is called remote and can now be used like this

      See all the buckets in your project

      -
      rclone lsd remote:
      +
      rclone lsd remote:

      Make a new bucket

      -
      rclone mkdir remote:bucket
      +
      rclone mkdir remote:bucket

      List the contents of a bucket

      -
      rclone ls remote:bucket
      +
      rclone ls remote:bucket

      Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

      -
      rclone sync --interactive /home/local/directory remote:bucket
      +
      rclone sync --interactive /home/local/directory remote:bucket

      Service Account support

      You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is @@ -38091,18 +42778,18 @@ VMs that lack a web browser.

      If you already have a working service account, skip to step 3.

      1. Create a service account using

      -
      gcloud iam service-accounts create gcs-read-only 
      +
      gcloud iam service-accounts create gcs-read-only

      You can re-use an existing service account as well (like the one created above)

      2. Attach a Viewer (read-only) or User (read-write) role to the service account

      -
       $ PROJECT_ID=my-project
      - $ gcloud --verbose iam service-accounts add-iam-policy-binding \
      -    gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com  \
      -    --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
      -    --role=roles/storage.objectViewer
      +
      $ PROJECT_ID=my-project
      +$ gcloud --verbose iam service-accounts add-iam-policy-binding \
      +   gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com  \
      +   --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
      +   --role=roles/storage.objectViewer

      Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles:

        @@ -38115,7 +42802,7 @@ roles

      3. Get a temporary access key for the service account

      -
      $ gcloud auth application-default print-access-token  \
      +
      $ gcloud auth application-default print-access-token  \
          --impersonate-service-account \
             gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com  
       
      @@ -38124,9 +42811,9 @@ ya29.c.c0ASRK0GbAFEewXD [truncated]
      setting

      hit CTRL-C when you see waiting for code. This will save the config without doing oauth flow

      -
      rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
      +
      rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx

      5. Run rclone as usual

      -
      rclone ls dev-gcs:${MY_BUCKET}/
      +
      rclone ls dev-gcs:${MY_BUCKET}/

      More Info on Service Accounts

        @@ -38181,7 +42868,7 @@ with metadata documentation

        Eg --header-upload "Content-Type text/potato"

        Note that the last of these is for setting custom metadata in the form --header-upload "x-goog-meta-key: value"

        -

        Modification times

        +

        Modification times

        Google Cloud Storage stores md5sum natively. Google's gsutil tool stores modification time with one-second precision as @@ -38234,7 +42921,9 @@ characters

        Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

        -

        Standard options

        + + +

        Standard options

        Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

        --gcs-client-id

        @@ -38520,6 +43209,10 @@ then you will need to set this.

        • Northern Virginia
        +
      • "us-east5" +
          +
        • Ohio
        • +
      • "us-west1"
        • Oregon
        • @@ -38630,7 +43323,7 @@ is blank.

      -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).

      --gcs-token

      @@ -38754,27 +43447,28 @@ section in the overview for more info.

    69. Type: string
    70. Required: false
    71. -

      Limitations

      + +

      Limitations

      rclone about is not supported by the Google Cloud Storage backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

      See List of backends that do not support rclone about and rclone about

      +href="https://rclone.org/commands/rclone_about/">rclone about.

      Google Drive

      Paths are specified as drive:path

      Drive paths may be as deep as required, e.g. drive:directory/subdirectory.

      -

      Configuration

      +

      Configuration

      The initial setup for drive involves getting a token from Google drive which you need to do in your browser. rclone config walks you through it.

      Here is an example of how to make a remote called remote. First run:

      -
       rclone config
      +
      rclone config

      This will guide you through an interactive setup process:

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one?
       n) New remote
       r) Rename remote
       c) Copy remote
      @@ -38843,8 +43537,8 @@ e) Edit this remote
       d) Delete this remote
       y/e/d> y

      See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

      +docs for how to set it up on a machine without an internet-connected +web browser available.

      Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to @@ -38854,11 +43548,11 @@ it temporarily if you are running a host firewall, or use manual mode.

      You can then use it like this,

      List directories in top level of your drive

      -
      rclone lsd remote:
      +
      rclone lsd remote:

      List all the files in your drive

      -
      rclone ls remote:
      +
      rclone ls remote:

      To copy a local directory to a drive directory called backup

      -
      rclone copy /home/source remote:backup
      +
      rclone copy /home/source remote:backup

      Scopes

      Rclone allows you to select which scope you would like for rclone to use. This changes what type of token is granted to rclone. case - Google Workspace account and individual Drive

      Let's say that you are the administrator of a Google Workspace. The goal is to read or write data on an individual's Drive account, who IS a -member of the domain. We'll call the domain -example.com, and the user -foo@example.com.

      +member of the domain. We'll call the domain <example.com>, and the +user foo@example.com.

      There's a few steps we need to go through to accomplish this:

      1. Create a service account for example.com
      @@ -38981,7 +43675,7 @@ only access.
      3. Configure rclone, assuming a new install
      -
      rclone config
      +
      rclone config
       
       n/s/q> n         # New
       name>gdrive      # Gdrive is an example name
      @@ -39008,11 +43702,14 @@ the folder named backup.
       
       

      Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using ---drive-impersonate, do this instead: - in the gdrive web -interface, share your root folder with the user/email of the new Service -Account you created/selected at step 1 - use rclone without specifying -the --drive-impersonate option, like this: -rclone -v lsf gdrive:backup

      +--drive-impersonate, do this instead:

      +
        +
      • in the gdrive web interface, share your root folder with the +user/email of the new Service Account you created/selected at step +1
      • +
      • use rclone without specifying the --drive-impersonate +option, like this: rclone -v lsf gdrive:backup
      • +

      Shared drives (team drives)

      If you want to configure the remote to point to a Google Shared Drive (previously known as Team Drives) then answer y to the @@ -39022,7 +43719,7 @@ question to configure which one you want to use. You can also type in a Shared Drive ID if you prefer.

      For example:

      -
      Configure this as a Shared Drive (Team Drive)?
      +
      Configure this as a Shared Drive (Team Drive)?
       y) Yes
       n) No
       y/n> y
      @@ -39058,11 +43755,11 @@ single API request.

      into one expression. To list the contents of directories a, b and c, the following requests will be send by the regular List function:

      -
      trashed=false and 'a' in parents
      +
      trashed=false and 'a' in parents
       trashed=false and 'b' in parents
       trashed=false and 'c' in parents

      These can now be combined into a single request:

      -
      trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
      +
      trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)

      The implementation of ListR will put up to 50 parents filters into one request. It will use the --checkers value to specify the number of requests to run @@ -39070,7 +43767,7 @@ in parallel.

      In tests, these batch requests were up to 20x faster than the regular method. Running the following command against different sized folders gives:

      -
      rclone lsjson -vv -R --checkers=6 gdrive:folder
      +
      rclone lsjson -vv -R --checkers=6 gdrive:folder

      small folder (220 directories, 700 files):

      • without --fast-list: 38s
      • @@ -39474,7 +44171,9 @@ available Google Documents.

        -

        Standard options

        + + +

        Standard options

        Here are the Standard options specific to drive (Google Drive).

        --drive-client-id

        Google Application Client Id Setting your own is recommended. See @@ -39558,7 +44257,7 @@ environment variables such as ${RCLONE_CONFIG_DIR}.

      • Type: bool
      • Default: false
      -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to drive (Google Drive).

      --drive-token

      OAuth Access Token as a JSON blob.

      @@ -40229,7 +44928,7 @@ is blank.

    72. Type: string
    73. Required: false
    74. -

      Metadata

      +

      Metadata

      User metadata is stored in the properties field of the drive object.

      Metadata is supported on files and directories.
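
For example, a custom property could be attached at upload time with the global metadata flags (a sketch; the key and value are illustrative):

rclone copy -M --metadata-set potato=mashed /home/source remote:backup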

      @@ -40352,8 +45051,8 @@ drives. for more info.

      Backend commands

      Here are the commands specific to the drive backend.

      -

      Run them with

      -
      rclone backend COMMAND remote:
      +

      Run them with:

      +
      rclone backend COMMAND remote:

      The help below will explain what arguments each command takes.

      See the backend command @@ -40361,37 +45060,38 @@ for more info on how to pass options and arguments.

      These can be run on a running backend using the rc command backend/command.

      get

      -

      Get command for fetching the drive config parameters

      -
      rclone backend get remote: [options] [<arguments>+]
      +

      Get command for fetching the drive config parameters.

      +
      rclone backend get remote: [options] [<arguments>+]

      This is a get command which will be used to fetch the various drive -config parameters

      -

      Usage Examples:

      -
      rclone backend get drive: [-o service_account_file] [-o chunk_size]
      +config parameters.

      +

      Usage examples:

      +
      rclone backend get drive: [-o service_account_file] [-o chunk_size]
       rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]

      Options:

        -
      • "chunk_size": show the current upload chunk size
      • -
      • "service_account_file": show the current service account file
      • +
      • "chunk_size": Show the current upload chunk size.
      • +
      • "service_account_file": Show the current service account file.

      set

      -

      Set command for updating the drive config parameters

      -
      rclone backend set remote: [options] [<arguments>+]
      +

      Set command for updating the drive config parameters.

      +
      rclone backend set remote: [options] [<arguments>+]

      This is a set command which will be used to update the various drive -config parameters

      -

      Usage Examples:

      -
      rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
      +config parameters.

      +

      Usage examples:

      +
      rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
       rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]

      Options:

        -
      • "chunk_size": update the current upload chunk size
      • -
      • "service_account_file": update the current service account file
      • +
      • "chunk_size": Update the current upload chunk size.
      • +
      • "service_account_file": Update the current service account +file.

      shortcut

      -

      Create shortcuts from files or directories

      -
      rclone backend shortcut remote: [options] [<arguments>+]
      +

      Create shortcuts from files or directories.

      +
      rclone backend shortcut remote: [options] [<arguments>+]

      This command creates shortcuts from files or directories.

      -

      Usage:

      -
      rclone backend shortcut drive: source_item destination_shortcut
      +

      Usage examples:

      +
      rclone backend shortcut drive: source_item destination_shortcut
       rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut

      In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The @@ -40403,70 +45103,73 @@ relative to "drive:" to the "destination_shortcut" relative to authenticated with "drive2:" can't read files from "drive:".

      Options:

        -
      • "target": optional target remote for the shortcut destination
      • +
      • "target": Optional target remote for the shortcut destination.

      drives

      -

      List the Shared Drives available to this account

      -
      rclone backend drives remote: [options] [<arguments>+]
      +

      List the Shared Drives available to this account.

      +
      rclone backend drives remote: [options] [<arguments>+]

      This command lists the Shared Drives (Team Drives) available to this account.

      -

      Usage:

      -
      rclone backend [-o config] drives drive:
      -

      This will return a JSON list of objects like this

      -
      [
      -    {
      -        "id": "0ABCDEF-01234567890",
      -        "kind": "drive#teamDrive",
      -        "name": "My Drive"
      -    },
      -    {
      -        "id": "0ABCDEFabcdefghijkl",
      -        "kind": "drive#teamDrive",
      -        "name": "Test Drive"
      -    }
      -]
      +

      Usage example:

      +
      rclone backend [-o config] drives drive:
      +

      This will return a JSON list of objects like this:

      +
      [
      +    {
      +        "id": "0ABCDEF-01234567890",
      +        "kind": "drive#teamDrive",
      +        "name": "My Drive"
      +    },
      +    {
      +        "id": "0ABCDEFabcdefghijkl",
      +        "kind": "drive#teamDrive",
      +        "name": "Test Drive"
      +    }
      +]

      With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive.

      -
      [My Drive]
      -type = alias
      -remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
      -
      -[Test Drive]
      -type = alias
      -remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
      -
      -[AllDrives]
      -type = combine
      -upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
      +
      [My Drive]
      +type = alias
      +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
      +
      +[Test Drive]
      +type = alias
      +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
      +
      +[AllDrives]
      +type = combine
      +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"

      Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree.
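
You could then browse the combined directory tree, e.g. (a sketch using the AllDrives remote from the output above):

rclone lsd AllDrives: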

      untrash

      -

      Untrash files and directories

      -
      rclone backend untrash remote: [options] [<arguments>+]
      +

      Untrash files and directories.

      +
      rclone backend untrash remote: [options] [<arguments>+]

      This command untrashes all the files and directories in the directory passed in recursively.

      -

      Usage:

      +

      Usage example:

      +
      rclone backend untrash drive:directory
      +rclone backend --interactive untrash drive:directory subdir

This takes an optional directory to trash, which makes this easier to use via the API.

      -
      rclone backend untrash drive:directory
      -rclone backend --interactive untrash drive:directory subdir

      Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.

      Result:

      -
      {
      -    "Untrashed": 17,
      -    "Errors": 0
      -}
      +
      {
      +    "Untrashed": 17,
      +    "Errors": 0
      +}

      copyid

      -

      Copy files by ID

      -
      rclone backend copyid remote: [options] [<arguments>+]
      -

      This command copies files by ID

      -

      Usage:

      -
      rclone backend copyid drive: ID path
      +

      Copy files by ID.

      +
      rclone backend copyid remote: [options] [<arguments>+]
      +

      This command copies files by ID.

      +

      Usage examples:

      +
      rclone backend copyid drive: ID path
       rclone backend copyid drive: ID1 path1 ID2 path2

      It copies the drive file with ID given to the path (an rclone path which will be passed internally to rclone copyto). The ID and path pairs @@ -40479,11 +45182,11 @@ be attempted if possible.

      Use the --interactive/-i or --dry-run flag to see what would be copied before copying.

      moveid

      -

      Move files by ID

      -
      rclone backend moveid remote: [options] [<arguments>+]
      -

      This command moves files by ID

      -

      Usage:

      -
      rclone backend moveid drive: ID path
      +

      Move files by ID.

      +
      rclone backend moveid remote: [options] [<arguments>+]
      +

      This command moves files by ID.

      +

      Usage examples:

      +
      rclone backend moveid drive: ID path
       rclone backend moveid drive: ID1 path1 ID2 path2

      It moves the drive file with ID given to the path (an rclone path which will be passed internally to rclone moveto).

      @@ -40495,65 +45198,71 @@ attempted if possible.

      Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.

      exportformats

      -

      Dump the export formats for debug purposes

      -
      rclone backend exportformats remote: [options] [<arguments>+]
      +

      Dump the export formats for debug purposes.

      +
      rclone backend exportformats remote: [options] [<arguments>+]

      importformats

      -

      Dump the import formats for debug purposes

      -
      rclone backend importformats remote: [options] [<arguments>+]
      +

      Dump the import formats for debug purposes.

      +
      rclone backend importformats remote: [options] [<arguments>+]

      query

      -

      List files using Google Drive query language

      -
      rclone backend query remote: [options] [<arguments>+]
      -

      This command lists files based on a query

      -

      Usage:

      -
      rclone backend query drive: query
      +

      List files using Google Drive query language.

      +
      rclone backend query remote: [options] [<arguments>+]
      +

      This command lists files based on a query.

      +

      Usage example:

      +
      rclone backend query drive: query

      The query syntax is documented at Google Drive Search query terms and operators.

      For example:

      -
      rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
      +
      rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"

If the query contains literal ' or \ characters, these need to be escaped with \ characters. "'" becomes "\'" and "\" becomes "\\", for example to match a file named "foo ' \.txt":

      -
      rclone backend query drive: "name = 'foo \' \\\.txt'"
      +
      rclone backend query drive: "name = 'foo \' \\\.txt'"

      The result is a JSON array of matches, for example:

      -
      [
      -{
      -    "createdTime": "2017-06-29T19:58:28.537Z",
      -    "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
      -    "md5Checksum": "68518d16be0c6fbfab918be61d658032",
      -    "mimeType": "text/plain",
      -    "modifiedTime": "2024-02-02T10:40:02.874Z",
      -    "name": "foo ' \\.txt",
      -    "parents": [
      -        "0BxAe_BCDE4zkFGZpcWJGek0xbzC"
      -    ],
      -    "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
      -    "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
      -    "size": "311",
      -    "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
      -}
      -]
      -

      rescue

      -

      Rescue or delete any orphaned files

      -
      rclone backend rescue remote: [options] [<arguments>+]
      +
      [
      +    {
      +        "createdTime": "2017-06-29T19:58:28.537Z",
      +        "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
      +        "md5Checksum": "68518d16be0c6fbfab918be61d658032",
      +        "mimeType": "text/plain",
      +        "modifiedTime": "2024-02-02T10:40:02.874Z",
      +        "name": "foo ' \\.txt",
      +        "parents": [
      +            "0BxAe_BCDE4zkFGZpcWJGek0xbzC"
      +        ],
      +        "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
      +        "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
      +        "size": "311",
      +        "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
      +    }
      +]
+

rescue

+
Rescue or delete any orphaned files.

+
rclone backend rescue remote: [options] [<arguments>+]

      This command rescues or deletes any orphaned files or directories.

      Sometimes files can get orphaned in Google Drive. This means that they are no longer in any folder in Google Drive.

      This command finds those files and either rescues them to a directory you specify or deletes them.

      -

      Usage:

      This can be used in 3 ways.

      -

      First, list all orphaned files

      -
      rclone backend rescue drive:
      -

      Second rescue all orphaned files to the directory indicated

      -
      rclone backend rescue drive: "relative/path/to/rescue/directory"
      -

      e.g. To rescue all orphans to a directory called "Orphans" in the top -level

      -
      rclone backend rescue drive: Orphans
      -

      Third delete all orphaned files to the trash

      -
      rclone backend rescue drive: -o delete
      -

      Limitations

      +

      First, list all orphaned files:

      +
      rclone backend rescue drive:
      +

Second, rescue all orphaned files to the directory indicated:

      +
      rclone backend rescue drive: "relative/path/to/rescue/directory"
      +

      E.g. to rescue all orphans to a directory called "Orphans" in the top +level:

      +
      rclone backend rescue drive: Orphans
      +

Third, delete all orphaned files to the trash:

      +
      rclone backend rescue drive: -o delete
      + +

      Limitations

      Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files @@ -40632,44 +45341,44 @@ the "Google Drive API".

      credentials", which opens the wizard).

    75. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near -the top right corner of the right panel), then select "External" and -click on "CREATE"; on the next screen, enter an "Application name" -("rclone" is OK); enter "User Support Email" (your own email is OK); -enter "Developer Contact Email" (your own email is OK); then click on -"Save" (all other data is optional). You will also have to add +

      (PS: if you are a GSuite user, you could also select "Internal" +instead of "External" above, but this will restrict API use to Google +Workspace users in your organisation).

      +

      You will also have to add some -scopes, including

    76. -
    +scopes, including

    • https://www.googleapis.com/auth/docs
    • https://www.googleapis.com/auth/drive in order to be able to edit, create and delete files with RClone.
    • https://www.googleapis.com/auth/drive.metadata.readonly which you may also want to add.
    • -
    • If you want to add all at once, comma separated it would be -https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly.
    -
      -
    1. After adding scopes, click "Save and continue" to add test users. -Be sure to add your own account to the test users. Once you've added -yourself as a test user and saved the changes, click again on -"Credentials" on the left panel to go back to the "Credentials" -screen.

      -

      (PS: if you are a GSuite user, you could also select "Internal" -instead of "External" above, but this will restrict API use to Google -Workspace users in your organisation).

    2. -
    3. Click on the "+ CREATE CREDENTIALS" button at the top of the -screen, then select "OAuth client ID".

    4. -
    5. Choose an application type of "Desktop app" and click "Create". -(the default name is fine)

    6. +

      To do this, click Data Access on the left side panel, click "add or +remove scopes" and select the three above and press update or go to the +"Manually add scopes" text box (scroll down) and enter +"https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly", +press add to table then update.

      +

      You should now see the three scopes on your Data access page. Now +press save at the bottom!

      +
    7. After adding scopes, click Audience Scroll down and click "+ Add +users". Add yourself as a test user and press save.

    8. +
    9. Go to Overview on the left panel, click "Create OAuth client". +Choose an application type of "Desktop app" and click "Create". (the +default name is fine)

    10. It will show you a client ID and client secret. Make a note of -these.

      -

      (If you selected "External" at Step 5 continue to Step 10. If you +these. (If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step -11 but your destination drive must be part of the same Google +10 but your destination drive must be part of the same Google Workspace.)

    11. -
    12. Go to "Oauth consent screen" and then click "PUBLISH APP" button -and confirm. You will also want to add yourself as a test user.

    13. +
    14. Go to "Audience" and then click "PUBLISH APP" button and confirm. +Add yourself as a test user if you haven't already.

    15. Provide the noted client ID and client secret to rclone.

    Be aware that, due to the "enhanced security" recently introduced by @@ -40688,8 +45397,8 @@ testing mode would also be sufficient.

    data-cites="balazer">@balazer on github for these instructions.)

    Sometimes, creation of an OAuth consent in Google API Console fails -due to an error message “The request failed because changes to one of -the field of the resource is not supported”. As a convenient workaround, +due to an error message "The request failed because changes to one of +the field of the resource is not supported". As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to @@ -40707,15 +45416,15 @@ section carefully to make sure it is suitable for your use.

    photos it uploaded. This limitation is due to policy changes at Google. You may need to run rclone config reconnect remote: to make rclone work again after upgrading to rclone v1.70.

    -

    Configuration

    +

    Configuration

    The initial setup for google cloud storage involves getting a token from Google Photos which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -40776,8 +45485,8 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

    Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to @@ -40788,14 +45497,14 @@ mode.

    This remote is called remote and can now be used like this

    See all the albums in your photos

    -
    rclone lsd remote:album
    +
    rclone lsd remote:album

    Make a new album

    -
    rclone mkdir remote:album/newAlbum
    +
    rclone mkdir remote:album/newAlbum

    List the contents of an album

    -
    rclone ls remote:album/newAlbum
    +
    rclone ls remote:album/newAlbum

Sync /home/local/image to Google Photos, removing any excess files in the album.

    -
    rclone sync --interactive /home/local/image remote:album/newAlbum
    +
    rclone sync --interactive /home/local/image remote:album/newAlbum

    Layout

    As Google Photos is not a general purpose cloud storage system, the backend is laid out to help you navigate it.

    @@ -40808,7 +45517,7 @@ for syncing.)

    Note that all your photos and videos will appear somewhere under media, but they may not appear under album unless you've put them into albums.

    -
    /
    +
    /
     - upload
         - file1.jpg
         - file2.jpg
    @@ -40868,9 +45577,9 @@ writeable and you may create new directories (albums) under
     album. If you copy files with a directory hierarchy in
     there then rclone will create albums with the / character
     in them. For example if you do

    -
    rclone copy /path/to/images remote:album/images
    +
    rclone copy /path/to/images remote:album/images

    and the images directory contains

    -
    images
    +
    images
         - file1.jpg
         dir
             file2.jpg
    @@ -40899,7 +45608,9 @@ syncing.

    The shared-album directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to google photos (Google Photos).

    --gphotos-client-id

    @@ -40933,7 +45644,7 @@ access to your photos, otherwise rclone will request full access.

  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to google photos (Google Photos).

    --gphotos-token

    @@ -41135,7 +45846,8 @@ used)

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when @@ -41193,7 +45905,7 @@ uploaded an image to upload then uploaded the same image to album/my_album will be what it was uploaded with initially, not what you uploaded it with to album. In practise this shouldn't cause too many problems.

    -

    Modification times

    +

    Modification times

    The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.

    @@ -41242,15 +45954,18 @@ client_id stops working) then you can make your own.

    href="https://rclone.org/drive/#making-your-own-client-id">the google drive docs. You will need these scopes instead of the drive ones detailed:

    -
    https://www.googleapis.com/auth/photoslibrary.appendonly
    +
    https://www.googleapis.com/auth/photoslibrary.appendonly
     https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
     https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata

    Hasher

    Hasher is a special overlay backend to create remotes which handle -checksums for other remotes. It's main functions include: - Emulate hash -types unimplemented by backends - Cache checksums to help with slow -hashing of large local or (S)FTP files - Warm up checksum cache from -external SUM files

    +checksums for other remotes. It's main functions include:

    +
      +
    • Emulate hash types unimplemented by backends
    • +
    • Cache checksums to help with slow hashing of large local or (S)FTP +files
    • +
    • Warm up checksum cache from external SUM files
    • +

    Getting started

    To use Hasher, first set up the underlying remote following the configuration instructions for that remote. You can also use a local @@ -41264,7 +45979,7 @@ remote (S3, B2, Swift) then you should put the bucket in the remote

    Now proceed to interactive or manual configuration.

    Interactive configuration

    Run rclone config:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -41306,24 +46021,27 @@ y/e/d> y
    config file, usually YOURHOME/.config/rclone/rclone.conf. Open it in your favorite text editor, find section for the base remote and create new section for hasher like in the following examples:

    -
    [Hasher1]
    -type = hasher
    -remote = myRemote:path
    -hashes = md5
    -max_age = off
    -
    -[Hasher2]
    -type = hasher
    -remote = /local/path
    -hashes = dropbox,sha1
    -max_age = 24h
    -

    Hasher takes basically the following parameters: - -remote is required, - hashes is a comma -separated list of supported checksums (by default -md5,sha1), - max_age - maximum time to keep a -checksum value in the cache, 0 will disable caching -completely, off will cache "forever" (that is until the -files get changed).

    +
    [Hasher1]
    +type = hasher
    +remote = myRemote:path
    +hashes = md5
    +max_age = off
    +
    +[Hasher2]
    +type = hasher
    +remote = /local/path
    +hashes = dropbox,sha1
    +max_age = 24h
    +

    Hasher takes basically the following parameters:

    +
      +
    • remote is required
    • +
    • hashes is a comma separated list of supported checksums +(by default md5,sha1)
    • +
    • max_age - maximum time to keep a checksum value in the +cache 0 will disable caching completely off +will cache "forever" (that is until the files get changed)
    • +

    Make sure the remote has : (colon) in. If you specify the remote without a colon then rclone will use a local directory of that name. So if you use a remote of @@ -41336,43 +46054,44 @@ under current directory.

    Now you can use it as Hasher2:subdir/file instead of base remote. Hasher will transparently update cache with new checksums when a file is fully read or overwritten, like:

    -
    rclone copy External:path/file Hasher:dest/path
    -
    +
    rclone copy External:path/file Hasher:dest/path
     rclone cat Hasher:path/to/file > /dev/null

    The way to refresh all cached checksums (even unsupported by the base backend) for a subtree is to re-download all files in the subtree. For example, use hashsum --download using any supported hashsum on the command line (we just care to re-read):

    -
    rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null
    -
    +
    rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null
     rclone backend dump Hasher:path/to/subtree

    You can print or drop hashsum cache using custom backend commands:

    -
    rclone backend dump Hasher:dir/subdir
    -
    +
    rclone backend dump Hasher:dir/subdir
     rclone backend drop Hasher:

    Pre-Seed from a SUM File

    Hasher supports two backend commands: generic SUM file import and faster but less consistent stickyimport.

    -
    rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]
    +
    rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]

    Instead of SHA1 it can be any hash supported by the remote. The last argument can point to either a local or an other-remote:path text file in SUM format. The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill in the cache entries -correspondingly. - Paths in the SUM file are treated as relative to -hasher:dir/subdir. - The command will not -check that supplied values are correct. You must know -what you are doing. - This is a one-time action. The SUM file will not -get "attached" to the remote. Cache entries can still be overwritten -later, should the object's fingerprint change. - The tree walk can take -long depending on the tree size. You can increase ---checkers to make it faster. Or use +correspondingly.

    +
      +
    • Paths in the SUM file are treated as relative to +hasher:dir/subdir.
    • +
    • The command will not check that supplied values are +correct. You must know what you are doing.
    • +
    • This is a one-time action. The SUM file will not get "attached" to +the remote. Cache entries can still be overwritten later, should the +object's fingerprint change.
    • +
    • The tree walk can take long depending on the tree size. You can +increase --checkers to make it faster. Or use stickyimport if you don't care about fingerprints and -consistency.

      -
      rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1
      +consistency.
    • +
    +
    rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1

    stickyimport is similar to import but works much faster because it does not need to stat existing files and skips initial tree walk. Instead of binding cache entries to file fingerprints @@ -41381,7 +46100,9 @@ size, modification time etc. Such hash entries can be replaced only by purge, delete, backend drop or by full re-read/re-write of the files.

    Configuration reference

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to hasher (Better checksums for other remotes).

    --hasher-remote

    @@ -41412,7 +46133,7 @@ forever).

  • Type: Duration
  • Default: off
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to hasher (Better checksums for other remotes).

    --hasher-auto-size

    @@ -41434,15 +46155,15 @@ default).

  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    Any metadata supported by the underlying remote is read and written.

    See the metadata docs for more info.

    Backend commands

    Here are the commands specific to the hasher backend.

    -

    Run them with

    -
    rclone backend COMMAND remote:
    +

    Run them with:

    +
    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command @@ -41450,30 +46171,34 @@ for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    drop

    -

    Drop cache

    -
    rclone backend drop remote: [options] [<arguments>+]
    -

    Completely drop checksum cache. Usage Example: rclone backend drop -hasher:

    +

    Drop cache.

    +
    rclone backend drop remote: [options] [<arguments>+]
    +

    Completely drop checksum cache.

    +

    Usage example:

    +
    rclone backend drop hasher:

    dump

    -

    Dump the database

    -
    rclone backend dump remote: [options] [<arguments>+]
    -

    Dump cache records covered by the current remote

    +

    Dump the database.

    +
    rclone backend dump remote: [options] [<arguments>+]
    +

    Dump cache records covered by the current remote.

    fulldump

    -

    Full dump of the database

    -
    rclone backend fulldump remote: [options] [<arguments>+]
    -

    Dump all cache records in the database

    +

    Full dump of the database.

    +
    rclone backend fulldump remote: [options] [<arguments>+]
    +

    Dump all cache records in the database.

    import

    -

    Import a SUM file

    -
    rclone backend import remote: [options] [<arguments>+]
    +

    Import a SUM file.

    +
    rclone backend import remote: [options] [<arguments>+]

    Amend hash cache from a SUM file and bind checksums to files by -size/time. Usage Example: rclone backend import hasher:subdir md5 -/path/to/sum.md5

    +size/time.

    +

    Usage example:

    +
    rclone backend import hasher:subdir md5 /path/to/sum.md5

    stickyimport

    -

    Perform fast import of a SUM file

    -
    rclone backend stickyimport remote: [options] [<arguments>+]
    -

    Fill hash cache from a SUM file without verifying file fingerprints. -Usage Example: rclone backend stickyimport hasher:subdir md5 -remote:path/to/sum.md5

    +

    Perform fast import of a SUM file.

    +
    rclone backend stickyimport remote: [options] [<arguments>+]
    +

    Fill hash cache from a SUM file without verifying file +fingerprints.

    +

    Usage example:

    +
    rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5
    +

    Implementation details (advanced)

    This section explains how various rclone operations work on a hasher @@ -41532,12 +46257,12 @@ is a distributed file-system, part of the Apache Hadoop framework.

    Paths are specified as remote: or remote:path/to/dir.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -41597,34 +46322,35 @@ e/n/d/r/c/s/q> q

    This remote is called remote and can now be used like this

    See all the top level directories

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List the contents of a directory

    -
    rclone ls remote:directory
    +
    rclone ls remote:directory

    Sync the remote directory to /home/local/directory, deleting any excess files.

    -
    rclone sync --interactive remote:directory /home/local/directory
    +
    rclone sync --interactive remote:directory /home/local/directory

    Setting up your own HDFS instance for testing

    You may start with a manual setup or use the docker image from the tests:

    If you want to build the docker image

    -
    git clone https://github.com/rclone/rclone.git
    +
    git clone https://github.com/rclone/rclone.git
     cd rclone/fstest/testserver/images/test-hdfs
     docker build --rm -t rclone/test-hdfs .

    Or you can just use the latest one pushed

    -
    docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs
    +
    docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs

    NB it need few seconds to startup.

    For this docker image the remote needs to be configured like this:

    -
    [remote]
    -type = hdfs
    -namenode = 127.0.0.1:8020
    -username = root
    +
    [remote]
    +type = hdfs
    +namenode = 127.0.0.1:8020
    +username = root

    You can stop this image with docker kill rclone-hdfs (NB it does not use volumes, so all data uploaded will be lost.)

    -

    Modification times

    +

    Modification times

    Time accurate to 1 second is stored.

    Checksum

    No checksums are implemented.

    @@ -41655,7 +46381,9 @@ replaced:

    Invalid UTF-8 bytes will also be replaced.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to hdfs (Hadoop distributed file system).

    --hdfs-namenode

    @@ -41685,7 +46413,7 @@ namenodes at port 8020.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to hdfs (Hadoop distributed file system).

    --hdfs-service-principal-name

    @@ -41743,8 +46471,11 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

      +
    • Erasure coding not supported, see issue #8808
    • No server-side Move or DirMove.
    • Checksums not implemented.
    @@ -41755,12 +46486,12 @@ section in the overview for more info.

    The initial setup for hidrive involves getting a token from HiDrive which you need to do in your browser. rclone config walks you through it.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found - make a new one
    +
    No remotes found - make a new one
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -41805,21 +46536,22 @@ your account and hence should not be shared with other persons.
     See the below section for more
     information.

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

    Note that rclone runs a webserver on your local machine to collect the token as returned from HiDrive. This only runs from the moment it opens your browser to the moment you get back the verification code. The webserver runs on http://127.0.0.1:53682/. If local port 53682 is protected by a firewall you may need to temporarily unblock the firewall to complete authorization.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your HiDrive root folder

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your HiDrive filesystem

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to a HiDrive directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Keeping your tokens safe

    Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text. Anyone can use a valid @@ -41831,11 +46563,11 @@ information on securing your configuration file by viewing the configuration encryption docs.

    Invalid refresh token

    -

    As can be verified here, each -refresh_token (for Native Applications) is valid for 60 -days. If used to access HiDrivei, its validity will be automatically -extended.

    +

    As can be verified on HiDrive's OAuth +guide, each refresh_token (for Native Applications) is +valid for 60 days. If used to access HiDrivei, its validity will be +automatically extended.

    This means that if you

    • Don't use the HiDrive remote for 60 days
    • @@ -41845,7 +46577,7 @@ the refresh token is invalid or expired.

      To fix this you will need to authorize rclone to access your HiDrive account again.

      Using

      -
      rclone config reconnect remote:
      +
      rclone config reconnect remote:

      the process is very similar to the process of initial setup exemplified before.

      Modification times and @@ -41864,8 +46596,9 @@ cannot be named either of the following: . or ..

      Therefore rclone will automatically replace these characters, if files or folders are stored or accessed with such names.

      -

      You can read about how this filename encoding works in general here.

      +

      You can read about how this filename encoding works in general in the +main +docs.

      Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less.

      Transfers

      @@ -41896,8 +46629,7 @@ hierarchy.

      This works by prepending the contents of the root_prefix option to any paths accessed by rclone. For example, the following two ways to access the home directory are equivalent:

      -
      rclone lsd --hidrive-root-prefix="/users/test/" remote:path
      -
      +
      rclone lsd --hidrive-root-prefix="/users/test/" remote:path
       rclone lsd remote:/users/test/path

      See the below section about configuration options for more details.

      @@ -41912,7 +46644,9 @@ information is not explicitly needed. For this, the disable_fetching_member_count option can be used.

      See the below section about configuration options for more details.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to hidrive (HiDrive).

      --hidrive-client-id

      OAuth Client Id.

      @@ -41955,7 +46689,7 @@ HiDrive.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to hidrive (HiDrive).

    --hidrive-token

    OAuth Access Token as a JSON blob.

    @@ -42146,7 +46880,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    HiDrive is able to store symbolic links (symlinks) by design, for example, when unpacked from a zip archive.

    @@ -42195,12 +46930,12 @@ ending it with / is always better as it avoids the initial HEAD request.

    To just download a single file it is easier to use copyurl.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -42245,26 +46980,28 @@ e/n/d/r/c/s/q> q

    This remote is called remote and can now be used like this

    See all the top level directories

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List the contents of a directory

    -
    rclone ls remote:directory
    +
    rclone ls remote:directory

    Sync the remote directory to /home/local/directory, deleting any excess files.

    -
    rclone sync --interactive remote:directory /home/local/directory
    +
    rclone sync --interactive remote:directory /home/local/directory

    Read only

    This remote is read only - you can't upload files to an HTTP server.

    -

    Modification times

    +

    Modification times

    Most HTTP servers store time accurate to 1 second.

    Checksum

    No checksums are stored.

    Usage without a config file

    Since the http remote only has one config parameter it is easy to use without a config file:

    -
    rclone lsd --http-url https://beta.rclone.org :http:
    +
    rclone lsd --http-url https://beta.rclone.org :http:

    or:

    -
    rclone lsd :http,url='https://beta.rclone.org':
    -

    Standard options

    +
    rclone lsd :http,url='https://beta.rclone.org':
    + + +

    Standard options

    Here are the Standard options specific to http (HTTP).

    --http-url

    URL of HTTP host to connect to.

    @@ -42286,7 +47023,7 @@ a username and password.

  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to http (HTTP).

    --http-headers

    Set HTTP headers for all transactions.

    @@ -42353,10 +47090,78 @@ may be in the listing.

  • Type: string
  • Required: false
  • +

    Metadata

    +

    HTTP metadata keys are case insensitive and are always returned in +lower case.

    +

    Here are the possible system metadata items for the http backend.

    + +++++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    NameHelpTypeExampleRead Only
    cache-controlCache-Control headerstringno-cacheN
    content-dispositionContent-Disposition headerstringinlineN
    content-disposition-filenameFilename retrieved from Content-Disposition headerstringfile.txtN
    content-encodingContent-Encoding headerstringgzipN
    content-languageContent-Language headerstringen-USN
    content-typeContent-Type headerstringtext/plainN
    +

    See the metadata docs +for more info.

    Backend commands

    Here are the commands specific to the http backend.

    -

    Run them with

    -
    rclone backend COMMAND remote:
    +

    Run them with:

    +
    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command @@ -42365,11 +47170,11 @@ for more info on how to pass options and arguments.

    href="https://rclone.org/rc/#backend-command">backend/command.

    set

    Set command for updating the config parameters.

    -
    rclone backend set remote: [options] [<arguments>+]
    +
    rclone backend set remote: [options] [<arguments>+]

    This set command can be used to update the config parameters for a running http backend.

    -

    Usage Examples:

    -
    rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
    +

    Usage examples:

    +
    rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
     rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
     rclone rc backend/command command=set fs=remote: -o url=https://example.com

    The option keys are named as they are in the config file.

    @@ -42377,38 +47182,37 @@ rclone rc backend/command command=set fs=remote: -o url=https://example.com

    It doesn't return anything.

    -

    Limitations

    + +

    Limitations

    rclone about is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    ImageKit

    This is a backend for the ImageKit.io storage service.

    -

    About ImageKit

    ImageKit.io provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.

    -

    Accounts & Pricing

    To use this backend, you need to create an account on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing details.

    -

    Configuration

    +

    Configuration

    Here is an example of making an imagekit configuration.

    Firstly create a ImageKit.io account and choose a plan.

    You will need to log in and get the publicKey and privateKey for your account from the developer section.

    Now run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -42459,16 +47263,18 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    List directories in the top level of your Media Library

    -
    rclone lsd imagekit-media-library:
    +
    rclone lsd imagekit-media-library:

    Make a new directory.

    -
    rclone mkdir imagekit-media-library:directory
    +
    rclone mkdir imagekit-media-library:directory

    List the contents of a directory.

    -
    rclone ls imagekit-media-library:directory
    +
    rclone ls imagekit-media-library:directory

    Modified time and hashes

    ImageKit does not support modification times or hashes yet.

    Checksums

    No checksums are supported.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to imagekit (ImageKit.io).

    --imagekit-endpoint

    You can find your ImageKit.io URL endpoint in your dashboard

  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to imagekit (ImageKit.io).

    --imagekit-only-signed

    If you have configured Restrict unsigned image URLs in @@ -42551,7 +47357,7 @@ Slash,LtGt,DoubleQuote,Dollar,Question,Hash,Percent,BackSlash,Del,Ctl,InvalidUtf

  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    Any metadata supported by the underlying remote is read and written.

    Here are the possible system metadata items for the imagekit @@ -42656,8 +47462,9 @@ image

    See the metadata docs for more info.

    +

    iCloud Drive

    -

    Configuration

    +

    Configuration

    The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected @@ -42670,9 +47477,9 @@ reauthenticate with rclone reconnect or rclone config.

    Here is an example of how to make a remote called iclouddrive. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -42741,7 +47548,9 @@ you may need to wait a few hours or a day before you can get rclone to
     work - keep clearing the config entry and running
     rclone reconnect remote: until rclone functions
     properly.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to iclouddrive (iCloud Drive).

    --iclouddrive-apple-id

    @@ -42783,7 +47592,7 @@ obscure.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to iclouddrive (iCloud Drive).

    --iclouddrive-client-id

    @@ -42816,6 +47625,7 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • +

    Internet Archive

    The Internet Archive backend utilizes Items on archive.org

    @@ -42828,20 +47638,21 @@ subdirectories in too, e.g. remote:item/path/to/dir.

    Unlike S3, listing up all items uploaded by you isn't supported.

    Once you have made a remote, you can use it like this:

    Make a new item

    -
    rclone mkdir remote:item
    +
    rclone mkdir remote:item

    List the contents of a item

    -
    rclone ls remote:item
    +
    rclone ls remote:item

    Sync /home/local/directory to the remote item, deleting any excess files in the item.

    -
    rclone sync --interactive /home/local/directory remote:item
    +
    rclone sync --interactive /home/local/directory remote:item

    Notes

    Because of Internet Archive's architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can -check item's queue at -https://catalogd.archive.org/history/item-name-here . Because of that, -all uploads/deletes will not show up immediately and takes some time to -be available. The per-item queue is enqueued to an another queue, Item -Deriver Queue. https://catalogd.archive.org/history/item-name-here. +Because of that, all uploads/deletes will not show up immediately and +takes some time to be available. The per-item queue is enqueued to an +another queue, Item Deriver Queue. You can check the status of Item Deriver Queue here. This queue has a limit, and it may block you from uploading, or even deleting. You should avoid @@ -42856,11 +47667,19 @@ a long time depending on server's queue.

    file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone.

    -

    The following are reserved by Internet Archive: - name - -source - size - md5 - -crc32 - sha1 - format - -old_version - viruscheck - -summation

    +

    The following are reserved by Internet Archive:

    +
      +
    • name
    • +
    • source
    • +
    • size
    • +
    • md5
    • +
    • crc32
    • +
    • sha1
    • +
    • format
    • +
    • old_version
    • +
    • viruscheck
    • +
    • summation
    • +

    Trying to set values to these keys is ignored with a warning. Only setting mtime is an exception. Doing so make it the identical behavior as setting ModTime.

    @@ -42882,18 +47701,18 @@ automatically.

    These auto-created files can be excluded from the sync using metadata filtering.

    -
    rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"
    +
    rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"

    Which excludes from the sync any files which have the source=metadata or format=Metadata flags which are added to Internet Archive auto-created files.

    -

    Configuration

    +

    Configuration

    Here is an example of making an internetarchive configuration. Most applies to the other providers as well, any differences are described below.

    First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -42957,7 +47776,9 @@ y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to internetarchive (Internet Archive).

  • Type: bool
  • Default: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to internetarchive (Internet Archive).

    --internetarchive-endpoint

    @@ -43080,7 +47901,7 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    Metadata fields provided by Internet Archive. If there are multiple values for a key, only the first one is returned. This is a limitation of Rclone, that supports one value per one key.

    @@ -43210,103 +48031,189 @@ Archive

    See the metadata docs for more info.

    +

    Jottacloud

    Jottacloud is a cloud storage service provider from a Norwegian -company, using its own datacenters in Norway. In addition to the -official service at +

    In addition to the official service at jottacloud.com, it also provides -white-label solutions to different companies, such as: * Telia * Telia -Cloud (cloud.telia.se) * Telia Sky (sky.telia.no) * Tele2 * Tele2 Cloud -(mittcloud.tele2.se) * Onlime * Onlime Cloud Storage (onlime.dk) * -Elkjøp (with subsidiaries): * Elkjøp Cloud (cloud.elkjop.no) * -Elgiganten Sweden (cloud.elgiganten.se) * Elgiganten Denmark -(cloud.elgiganten.dk) * Giganti Cloud (cloud.gigantti.fi) * ELKO Cloud -(cloud.elko.is)

    -

    Most of the white-label versions are supported by this backend, -although may require different authentication setup - described -below.

    +white-label solutions to different companies. The following are +currently supported by this backend, using a different authentication +setup as described below:

    +
      +
    • Elkjøp (with subsidiaries): +
        +
      • Elkjøp Cloud (cloud.elkjop.no)
      • +
      • Elgiganten Cloud (cloud.elgiganten.dk)
      • +
      • Elgiganten Cloud (cloud.elgiganten.se)
      • +
      • ELKO Cloud (cloud.elko.is)
      • +
      • Gigantti Cloud (cloud.gigantti.fi)
      • +
    • +
    • Telia +
        +
      • Telia Cloud (cloud.telia.se)
      • +
      • Telia Sky (sky.telia.no)
      • +
    • +
    • Tele2 +
        +
      • Tele2 Cloud (mittcloud.tele2.se)
      • +
    • +
    • Onlime +
        +
      • Onlime (onlime.dk)
      • +
    • +
    • MediaMarkt +
        +
      • MediaMarkt Cloud (mediamarkt.jottacloud.com)
      • +
      • Let's Go Cloud (letsgo.jotta.cloud)
      • +
    • +

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Authentication types

    -

    Some of the whitelabel versions uses a different authentication -method than the official service, and you have to choose the correct one -when setting up the remote.

    -

    Standard authentication

    -

    The standard authentication method used by the official service -(jottacloud.com), as well as some of the whitelabel services, requires -you to generate a single-use personal login token from the account -security settings in the service's web interface. Log in to your -account, go to "Settings" and then "Security", or use the direct link -presented to you by rclone when configuring the remote: Authentication +

    Authentication in Jottacloud is in general based on OAuth and OpenID +Connect (OIDC). There are different variants to choose from, depending +on which service you are using, e.g. a white-label service may only +support one of them. Note that there is no documentation to rely on, so +the descriptions provided here are based on observations and may not be +accurate.

    +

    Jottacloud uses two optional OAuth security mechanisms, referred to +as "Refresh Token Rotation" and "Automatic Reuse Detection", which has +some implications. Access tokens normally have one hour expiry, after +which they need to be refreshed (rotated), an operation that requires +the refresh token to be supplied. Rclone does this automatically. This +is standard OAuth. But in Jottacloud, such a refresh operation not only +creates a new access token, but also refresh token, and invalidates the +existing refresh token, the one that was supplied. It keeps track of the +history of refresh tokens, sometimes referred to as a token family, +descending from the original refresh token that was issued after the +initial authentication. This is used to detect any attempts at reusing +old refresh tokens, and trigger an immedate invalidation of the current +refresh token, and effectively the entire refresh token family.

    +

    When the current refresh token has been invalidated, next time rclone +tries to perform a token refresh, it will fail with an error message +something along the lines of:

    +
    CRITICAL: Failed to create file system for "remote:": (...): couldn't fetch token: invalid_grant: maybe token expired? - try refreshing with "rclone config reconnect remote:"
    +

    If you run rclone with verbosity level 2 (-vv), you will +see a debug message with an additional error description from the OAuth +response:

    +
    DEBUG : remote: got fatal oauth error: oauth2: "invalid_grant" "Session doesn't have required client"
    +

    (The error description used to be "Stale token" instead of "Session +doesn't have required client", so you may see references to that in +older descriptions of this situation.)

    +

    When this happens, you need to re-authenticate to be able to use your +remote again, e.g. using the config +reconnect command as suggested in the error message. This will +create an entirely new refresh token (family).

    +

    A typical example of how you may end up in this situation, is if you +create a Jottacloud remote with rclone in one location, and then copy +the configuration file to a second location where you start using rclone +to access the same remote. Eventually there will now be a token refresh +attempt with an invalidated token, i.e. refresh token reuse, resulting +in both instances starting to fail with the "invalid_grant" error. It is +possible to copy remote configurations, but you must then replace the +token for one of them using the config +reconnect command.

    +

    You can get some overview of your active tokens in your service's web +user interface, if you navigate to "Settings" and then "Security" (in +which case you end up at https://www.jottacloud.com/web/secure or similar). Down +on that page you have a section "My logged in devices". This contains a +list of entries which seemingly represents currently valid refresh +tokens, or refresh token families. From the right side of that list you +can click a button ("X") to revoke (invalidate) it, which means you will +still have access using an existing access token until that expires, but +you will not be able to perform a token refresh. Note that this entire +"My logged in devices" feature seem to behave a bit differently with +different authentication variants and with use of the different +(white-label) services.

    +

    Standard

    +

    This is an OAuth variant designed for command-line applications. It +is primarily supported by the official service (jottacloud.com), but may +also be supported by some of the white-label services. The information +necessary to be able to perform authentication, like domain name and +endpoint to connect to, are found automatically (it is encoded into the +supplied login token, described next), so you do not need to specify +which service to configure.

    +

    When configuring a remote, you are asked to enter a single-use +personal login token, which you must manually generate from the account +security settings in the service's web interface. You do not need a web +browser on the same machine like with traditional OAuth, but need to use +a web browser somewhere, and be able to be copy the generated string +into your rclone configuration session. Log in to your service's web +user interface, navigate to "Settings" and then "Security", or, for the +official service, use the direct link presented to you by rclone when +configuring the remote: https://www.jottacloud.com/web/secure. Scroll down to the section "Personal login token", and click the "Generate" button. -Note that if you are using a whitelabel service you probably can't use -the direct link, you need to find the same page in their dedicated web -interface, and also it may be in a different location than described -above.

    -

    To access your account from multiple instances of rclone, you need to -configure each of them with a separate personal login token. E.g. you -create a Jottacloud remote with rclone in one location, and copy the -configuration file to a second location where you also want to run -rclone and access the same remote. Then you need to replace the token -for one of them, using the config -reconnect command, which requires you to generate a new personal -login token and supply as input. If you do not do this, the token may -easily end up being invalidated, resulting in both instances failing -with an error message something along the lines of:

    -
    oauth2: cannot fetch token: 400 Bad Request
    -Response: {"error":"invalid_grant","error_description":"Stale token"}
    -

    When this happens, you need to replace the token as described above -to be able to use your remote again.

    -

    All personal login tokens you have taken into use will be listed in -the web interface under "My logged in devices", and from the right side -of that list you can click the "X" button to revoke individual -tokens.

    -

    Legacy authentication

    -

    If you are using one of the whitelabel versions (e.g. from Elkjøp) -you may not have the option to generate a CLI token. In this case you'll -have to use the legacy authentication. To do this select yes when the -setup asks for legacy authentication and enter your username and -password. The rest of the setup is identical to the default setup.

    -

    Telia Cloud authentication

    -

    Similar to other whitelabel versions Telia Cloud doesn't offer the -option of creating a CLI token, and additionally uses a separate -authentication flow where the username is generated internally. To setup -rclone to use Telia Cloud, choose Telia Cloud authentication in the -setup. The rest of the setup is identical to the default setup.

    -

    Tele2 Cloud authentication

    -

    As Tele2-Com Hem merger was completed this authentication can be used -for former Com Hem Cloud and Tele2 Cloud customers as no support for -creating a CLI token exists, and additionally uses a separate -authentication flow where the username is generated internally. To setup -rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the -setup. The rest of the setup is identical to the default setup.

    -

    Onlime Cloud Storage -authentication

    -

    Onlime has sold access to Jottacloud proper, while providing -localized support to Danish Customers, but have recently set up their -own hosting, transferring their customers from Jottacloud servers to -their own ones.

    -

    This, of course, necessitates using their servers for authentication, -but otherwise functionality and architecture seems equivalent to -Jottacloud.

    -

    To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud -authentication in the setup. The rest of the setup is identical to the -default setup.

    -

    Configuration

    +Copy the presented string and paste it where rclone asks for it. Rclone +will then use this to perform an initial token request, and receive a +regular OAuth token which it stores in your remote configuration. There +will then also be a new entry in the "My logged in devices" list in the +web interface, with device name and application name "Jottacloud +CLI".

    +

    Each time a new token is created this way, i.e. a new personal login +token is generated and traded in for an OAuth token, you get an entirely +new refresh token family, with a new entry in the "My logged in +devices". You can create as many remotes as you want, and use multiple +instances of rclone on same or different machine, as long as you +configure them separately like this, and not get your self into the +refresh token reuse issue described above.

    +

    Traditional

    +

    Jottacloud also supports a more traditional OAuth variant. Most of +the white-label services support this, and for many of them this is the +only alternative because they do not support personal login tokens. This +method relies on pre-defined service-specific domain names and +endpoints, and rclone need you to specify which service to configure. +This also means that any changes to existing or additions of new +white-label services needs an update in the rclone backend +implementation.

    +

    When configuring a remote, you must interactively login to an OAuth +authorization web site, and a one-time authorization code is sent back +to rclone behind the scene, which it uses to request an OAuth token. +This means that you need to be on a machine with an internet-connected +web browser. If you need it on a machine where this is not the case, +then you will have to create the configuration on a different machine +and copy it from there. The Jottacloud backend does not support the +rclone authorize command. See the remote setup docs for details.

    +

    Jottacloud exerts some form of strict session management when +authenticating using this method. This leads to some unexpected cases of +the "invalid_grant" error described above, and effectively limits you to +only use of a single active authentication on the same machine. I.e. you +can only create a single rclone remote, and you can't even log in with +the service's official desktop client while having a rclone remote +configured, or else you will eventually get all sessions invalidated and +are forced to re-authenticate.

    +

    When you have successfully authenticated, there will be an entry in +the "My logged in devices" list in the web interface representing your +session. It will typically be listed with application name "Jottacloud +for Desktop" or similar (it depends on the white-label service +configuration).

    +

    Legacy

    +

    Originally Jottacloud used an OAuth variant which required your +account's username and password to be specified. When Jottacloud +migrated to the newer methods, some white-label versions (those from +Elkjøp) still used this legacy method for a long time. Currently there +are no known uses of this, it is still supported by rclone, but the +support will be removed in a future version.

    +

    Configuration

    Here is an example of how to make a remote called remote with the default setup. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
     n/s/q> n
    +
    +Enter name for new remote.
     name> remote
    +
     Option Storage.
     Type of storage to configure.
     Choose a number from below, or type in your own value.
    @@ -43315,60 +48222,63 @@ XX / Jottacloud
        \ (jottacloud)
     [snip]
     Storage> jottacloud
    +
    +Option client_id.
    +OAuth Client Id.
    +Leave blank normally.
    +Enter a value. Press Enter to leave empty.
    +client_id>
    +
    +Option client_secret.
    +OAuth Client Secret.
    +Leave blank normally.
    +Enter a value. Press Enter to leave empty.
    +client_secret>
    +
     Edit advanced config?
     y) Yes
     n) No (default)
     y/n> n
    +
     Option config_type.
    -Select authentication type.
    -Choose a number from below, or type in an existing string value.
    +Type of authentication.
    +Choose a number from below, or type in an existing value of type string.
     Press Enter for the default (standard).
        / Standard authentication.
    - 1 | Use this if you're a normal Jottacloud user.
    +   | This is primarily supported by the official service, but may also be
    +   | supported by some white-label services. It is designed for command-line
    + 1 | applications, and you will be asked to enter a single-use personal login
    +   | token which you must manually generate from the account security settings
    +   | in the web interface of your service.
        \ (standard)
    +   / Traditional authentication.
    +   | This is supported by the official service and all white-label services
    +   | that rclone knows about. You will be asked which service to connect to.
    + 2 | It has a limitation of only a single active authentication at a time. You
    +   | need to be on, or have access to, a machine with an internet-connected
    +   | web browser.
    +   \ (traditional)
        / Legacy authentication.
    - 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users.
    + 3 | This is no longer supported by any known services and not recommended
    +   | used. You will be asked for your account's username and password.
        \ (legacy)
    -   / Telia Cloud authentication.
    - 3 | Use this if you are using Telia Cloud.
    -   \ (telia)
    -   / Tele2 Cloud authentication.
    - 4 | Use this if you are using Tele2 Cloud.
    -   \ (tele2)
    -   / Onlime Cloud authentication.
    - 5 | Use this if you are using Onlime Cloud.
    -   \ (onlime)
     config_type> 1
    +
    +Option config_login_token.
     Personal login token.
    -Generate here: https://www.jottacloud.com/web/secure
    -Login Token> <your token here>
    +Generate it from the account security settings in the web interface of your
    +service, for the official service on https://www.jottacloud.com/web/secure.
    +Enter a value.
    +config_login_token> <your token here>
    +
     Use a non-standard device/mountpoint?
     Choosing no, the default, will let you access the storage used for the archive
     section of the official Jottacloud client. If you instead want to access the
     sync or the backup section, for example, you must choose yes.
     y) Yes
     n) No (default)
    -y/n> y
    -Option config_device.
    -The device to use. In standard setup the built-in Jotta device is used,
    -which contains predefined mountpoints for archive, sync etc. All other devices
    -are treated as backup devices by the official Jottacloud client. You may create
    -a new by entering a unique name.
    -Choose a number from below, or type in your own string value.
    -Press Enter for the default (DESKTOP-3H31129).
    - 1 > DESKTOP-3H31129
    - 2 > Jotta
    -config_device> 2
    -Option config_mountpoint.
    -The mountpoint to use for the built-in device Jotta.
    -The standard setup is to use the Archive mountpoint. Most other mountpoints
    -have very limited support in rclone and should generally be avoided.
    -Choose a number from below, or type in an existing string value.
    -Press Enter for the default (Archive).
    - 1 > Archive
    - 2 > Shared
    - 3 > Sync
    -config_mountpoint> 1
    +y/n> n
    +
     Configuration complete.
     Options:
     - type: jottacloud
    @@ -43385,14 +48295,15 @@ y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your Jottacloud

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your Jottacloud

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to an Jottacloud directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Devices and Mountpoints

    The official Jottacloud client registers a device for each computer you install it on, and shows them in the backup section of the user @@ -43523,7 +48434,9 @@ available in the remote.

    To view your current quota you can use the rclone about remote: command which will display your usage limit (unless it is unlimited) and the current usage.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to jottacloud (Jottacloud).

    --jottacloud-client-id

    @@ -43546,7 +48459,7 @@ limit (unless it is unlimited) and the current usage.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to jottacloud (Jottacloud).

    --jottacloud-token

    @@ -43662,7 +48575,7 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,Del,Ctl,InvalidUtf8,Dot
  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    Jottacloud has limited support for metadata, currently an extended set of timestamps.

    Here are the possible system metadata items for the jottacloud @@ -43718,7 +48631,8 @@ backend

    See the metadata docs for more info.

    -

    Limitations

    + +

    Limitations

    Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    There are quite a few characters that can't be in Jottacloud file @@ -43736,7 +48650,7 @@ cases.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for Koofr involves creating an application password for rclone. You can do that by opening the Koofr web @@ -43744,9 +48658,9 @@ application, giving the password a nice name like rclone and clicking on generate.

    Here is an example of how to make a remote called koofr. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -43804,13 +48718,13 @@ y/e/d> y

    You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this:

    List directories in top level of your Koofr

    -
    rclone lsd koofr:
    +
    rclone lsd koofr:

    List all the files in your Koofr

    -
    rclone ls koofr:
    +
    rclone ls koofr:

    To copy a local directory to an Koofr directory called backup

    -
    rclone copy /home/source koofr:backup
    +
    rclone copy /home/source koofr:backup

    Restricted filename characters

    In addition to the

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

    --koofr-provider

    @@ -43896,7 +48812,7 @@ obscure.

  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers).

    --koofr-mountid

    @@ -43940,7 +48856,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    Providers

    @@ -43954,9 +48871,9 @@ Storage is a cloud storage service run by Digi.ro that provides a Koofr API.

    Here is an example of how to make a remote called ds. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -44016,9 +48933,9 @@ that runs a Koofr API compatible service, by simply providing the base
     URL to connect to.

    Here is an example of how to make a remote called other. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -44080,12 +48997,12 @@ y/e/d> y

    Linkbox

    Linkbox is a private cloud drive.

    -

    Configuration

    +

    Configuration

    Here is an example of making a remote for Linkbox.

    First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -44116,7 +49033,9 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y
     
    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to linkbox (Linkbox).

    Token from https://www.linkbox.to/admin/account

    @@ -44127,7 +49046,7 @@ y/e/d> y
  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to linkbox (Linkbox).

    Description of the remote.

    @@ -44138,7 +49057,8 @@ y/e/d> y
  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    @@ -44166,7 +49086,7 @@ deduplication, the hash algorithm is a modified SHA1 submit file hash instead of long file upload (this optimization is supported by rclone) -

    Configuration

    +

    Configuration

    Here is an example of making a mailru configuration.

    First create a Mail.ru Cloud account and choose a tariff.

    You will need to log in and create an app password for rclone. Rclone @@ -44187,9 +49107,9 @@ on forum.rclone.org) password won't work.

    Now run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -44249,14 +49169,14 @@ y/e/d> y

    Configuration of this backend does not require a local web browser. You can use the configured backend as shown below:

    See top level directories

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Make a new directory

    -
    rclone mkdir remote:directory
    +
    rclone mkdir remote:directory

    List the contents of a directory

    -
    rclone ls remote:directory
    +
    rclone ls remote:directory

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    -
    rclone sync --interactive /home/local/directory remote:directory
    +
    rclone sync --interactive /home/local/directory remote:directory

    Modification times and hashes

    Files support a modification time attribute with up to 1 second @@ -44338,7 +49258,9 @@ replaced:

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to mailru (Mail.ru Cloud).

    --mailru-client-id

    OAuth Client Id.

    @@ -44413,7 +49335,7 @@ this optimization.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to mailru (Mail.ru Cloud).

    --mailru-token

    OAuth Access Token as a JSON blob.

    @@ -44609,7 +49531,8 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,InvalidUtf
  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

File size limits depend on your account. A single file size is limited to 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.

    @@ -44623,15 +49546,18 @@ encrypted locally before they are uploaded. This prevents anyone of the key used for encryption.

    This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption.

    +

    Note MEGA S4 Object Storage, +an S3 compatible object store, also works with rclone and this is +recommended for new projects.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -44669,13 +49595,14 @@ y/e/d> y

    NOTE: The encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your Mega

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your Mega

    -
    rclone ls remote:
    +
    rclone ls remote:

To copy a local directory to a Mega directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    Mega does not support modification times or hashes yet.

    @@ -44715,19 +49642,20 @@ messages in the log about duplicates.

    Object not found

    If you are connecting to your Mega remote for the first time, to test access and synchronization, you may receive an error such as

    -
    Failed to create file system for "my-mega-remote:": 
    +
    Failed to create file system for "my-mega-remote:":
     couldn't login: Object (typically, node or user) not found

    The diagnostic steps often recommended in the rclone forum start with the MEGAcmd utility. Note that this refers to the -official C++ command from https://github.com/meganz/MEGAcmd and not the -go language built command from t3rm1n4l/megacmd that is no longer +official C++ command from https://github.com/meganz/MEGAcmd and not the go +language built command from t3rm1n4l/megacmd that is no longer maintained.

    Follow the instructions for installing MEGAcmd and try accessing your remote as they recommend. You can establish whether or not you can log in using MEGAcmd, and obtain diagnostic information to help you, and search or work with others in the forum.

    -
    MEGA CMD> login me@example.com
    +
    MEGA CMD> login me@example.com
     Password:
     Fetching nodes ...
     Loading transfers from local cache
    @@ -44776,7 +49704,9 @@ relevant, please post on the forum.

So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, likely you have got the remote blocked for a while.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to mega (Mega).

    --mega-user

    User name.

    @@ -44799,8 +49729,36 @@ obscure.

  • Type: string
  • Required: true
  • -

    Advanced options

    +

    --mega-2fa

    +

The 2FA code of your MEGA account if the account is set up with one.

    +

    Properties:

    +
      +
    • Config: 2fa
    • +
    • Env Var: RCLONE_MEGA_2FA
    • +
    • Type: string
    • +
    • Required: false
    • +
    +
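For example, a sketch of passing the code for a single run (123456 stands in for the current one-time code; remote is your Mega remote):

rclone lsd remote: --mega-2fa 123456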

    Advanced options

    Here are the Advanced options specific to mega (Mega).

    +

    --mega-session-id

    +

    Session (internal use only)

    +

    Properties:

    +
      +
    • Config: session_id
    • +
    • Env Var: RCLONE_MEGA_SESSION_ID
    • +
    • Type: string
    • +
    • Required: false
    • +
    +

    --mega-master-key

    +

    Master key (internal use only)

    +

    Properties:

    +
      +
    • Config: master_key
    • +
    • Env Var: RCLONE_MEGA_MASTER_KEY
    • +
    • Type: string
    • +
    • Required: false
    • +

    --mega-debug

    Output more debug from Mega.

    If this flag is set (along with -vv) it will print further debugging @@ -44858,6 +49816,7 @@ section in the overview for more info.
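For example, a sketch of capturing the extra Mega debug output (remote is your Mega remote):

rclone -vv --mega-debug lsd remote: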

  • Type: string
  • Required: false
  • +

    Process killed

    On accounts with large files or something else, memory usage can significantly increase when executing list/sync instructions. When @@ -44866,7 +49825,7 @@ type has sufficient memory/CPU to execute the commands. Use the resource monitoring tools to inspect after sending the commands. Look at this issue.

    -

    Limitations

    +

    Limitations

    This backend uses the go-mega go library which is an opensource go library implementing the Mega API. There doesn't @@ -44880,10 +49839,10 @@ there are likely quite a few errors still remaining in this library.

    The memory backend behaves like a bucket-based remote (e.g. like s3). Because it has no parameters you can just use it with the :memory: remote name.

    -

    Configuration

    +

    Configuration

    You can configure it as a remote like this with rclone config too if you want to:

    -
    No remotes found, make a new one?
    +
No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -44911,7 +49870,7 @@ d) Delete this remote
     y/e/d> y

    Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, e.g.

    -
    rclone mount :memory: /mnt/tmp
    +
    rclone mount :memory: /mnt/tmp
     rclone serve webdav :memory:
     rclone serve sftp :memory:

    Modification times and @@ -44923,7 +49882,9 @@ characters

    The memory backend replaces the default restricted characters set.

    -

    Advanced options

    + + +

    Advanced options

    Here are the Advanced options specific to memory (In memory object storage system.).

    --memory-description

    @@ -44935,91 +49896,80 @@ storage system.).

  • Type: string
  • Required: false
  • +

    Akamai NetStorage

    Paths are specified as remote: You may put subdirectories in too, e.g. remote:/path/to/dir. If you have a CP code you can use that as the folder after the domain such as <domain>/<cpcode>/<internal directories within cpcode>.

    -

    For example, this is commonly configured with or without a CP code: * -With a CP code. -[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/ -* Without a CP code. -[your-domain-prefix]-nsu.akamaihd.net

    -

    See all buckets rclone lsd remote: The initial setup for Netstorage -involves getting an account and secret. Use rclone config -to walk you through the setup process.

    -

    Configuration

    +

    For example, this is commonly configured with or without a CP +code:

    +
      +
    • With a CP code. +[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/
    • +
    • Without a CP code. +[your-domain-prefix]-nsu.akamaihd.net
    • +
    +

    See all buckets

    +
    rclone lsd remote:
    +

    The initial setup for Netstorage involves getting an account and +secret. Use rclone config to walk you through the setup +process.

    +

    Configuration

    Here's an example of how to make a remote called ns1.

      -
    1. To begin the interactive configuration process, enter this -command:
    2. -
    -
    rclone config
    -
      -
    1. Type n to create a new remote.
    2. -
    -
    n) New remote
    +
  • To begin the interactive configuration process, enter this +command:

    +
    rclone config
  • +
  • Type n to create a new remote.

    +
    n) New remote
     d) Delete remote
     q) Quit config
    -e/n/d/q> n
    -
      -
    1. For this example, enter ns1 when you reach the name> -prompt.
    2. -
    -
    name> ns1
    -
      -
    1. Enter netstorage as the type of storage to -configure.
    2. -
    -
    Type of storage to configure.
    +e/n/d/q> n
  • +
  • For this example, enter ns1 when you reach the +name> prompt.

    +
    name> ns1
  • +
  • Enter netstorage as the type of storage to +configure.

    +
    Type of storage to configure.
     Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
     XX / NetStorage
        \ "netstorage"
    -Storage> netstorage
    -
      -
    1. Select between the HTTP or HTTPS protocol. Most users should choose -HTTPS, which is the default. HTTP is provided primarily for debugging -purposes.
    2. -
    -
    Enter a string value. Press Enter for the default ("").
    +Storage> netstorage
  • +
  • Select between the HTTP or HTTPS protocol. Most users should +choose HTTPS, which is the default. HTTP is provided primarily for +debugging purposes.

    +
    Enter a string value. Press Enter for the default ("").
     Choose a number from below, or type in your own value
      1 / HTTP protocol
        \ "http"
      2 / HTTPS protocol
        \ "https"
    -protocol> 1
    -
      -
    1. Specify your NetStorage host, CP code, and any necessary content +protocol> 1
  • +
  • Specify your NetStorage host, CP code, and any necessary content paths using this format: -<domain>/<cpcode>/<content>/

  • - -
    Enter a string value. Press Enter for the default ("").
    -host> baseball-nsu.akamaihd.net/123456/content/
    -
      -
    1. Set the netstorage account name
    2. -
    -
    Enter a string value. Press Enter for the default ("").
    -account> username
    -
      -
    1. Set the Netstorage account secret/G2O key which will be used for +<domain>/<cpcode>/<content>/

      +
      Enter a string value. Press Enter for the default ("").
      +host> baseball-nsu.akamaihd.net/123456/content/
    2. +
    3. Set the netstorage account name

      +
      Enter a string value. Press Enter for the default ("").
      +account> username
    4. +
    5. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the y option to set your own password then enter your secret. Note: The secret is stored in the -rclone.conf file with hex-encoded encryption.

    6. -
    -
    y) Yes type in my own password
    +rclone.conf file with hex-encoded encryption.

    +
    y) Yes type in my own password
     g) Generate random password
     y/g> y
     Enter the password:
     password:
     Confirm the password:
    -password:
    -
      -
    1. View the summary and confirm your remote configuration.
    2. -
    -
    [ns1]
    +password:
    +
  • View the summary and confirm your remote configuration.

    +
    [ns1]
     type = netstorage
     protocol = http
     host = baseball-nsu.akamaihd.net/123456/content/
    @@ -45029,27 +49979,29 @@ secret = *** ENCRYPTED ***
     y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
    -y/e/d> y
    +y/e/d> y
  • +

    This remote is called ns1 and can now be used.

    Example operations

    Get started with rclone and NetStorage with these examples. For -additional rclone commands, visit https://rclone.org/commands/.

    +additional rclone commands, visit https://rclone.org/commands/.

    See contents of a directory in your project

    -
    rclone lsd ns1:/974012/testing/
    +
    rclone lsd ns1:/974012/testing/

    Sync the contents local with remote

    -
    rclone sync . ns1:/974012/testing/
    +
    rclone sync . ns1:/974012/testing/

    Upload local content to remote

    -
    rclone copy notes.txt ns1:/974012/testing/
    +
    rclone copy notes.txt ns1:/974012/testing/

    Delete content on remote

    -
    rclone delete ns1:/974012/testing/notes.txt
    -

    Move or copy content -between CP codes.

    +
    rclone delete ns1:/974012/testing/notes.txt
    +

    Move or copy content +between CP codes

    Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.

    -
    rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
    +
    rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/

    Features

    The Netstorage backend changes the rclone --links, -l @@ -45132,7 +50084,9 @@ href="https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-deve Usage API for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to netstorage (Akamai NetStorage).

    --netstorage-host

    @@ -45169,7 +50123,7 @@ obscure.

  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to netstorage (Akamai NetStorage).

    --netstorage-protocol

    @@ -45205,8 +50159,8 @@ provided primarily for debugging purposes.

    Backend commands

    Here are the commands specific to the netstorage backend.

    -

    Run them with

    -
    rclone backend COMMAND remote:
    +

    Run them with:

    +
    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command @@ -45214,30 +50168,32 @@ for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    du

    -

    Return disk usage information for a specified directory

    -
    rclone backend du remote: [options] [<arguments>+]
    +

    Return disk usage information for a specified directory.

    +
    rclone backend du remote: [options] [<arguments>+]

The usage information returned includes the targeted directory as well as all files stored in any sub-directories that may exist.
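Usage example (a sketch reusing the ns1 remote and CP code path from the examples above):

rclone backend du ns1:/974012/testing/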

    You can create a symbolic link in ObjectStore with the symlink action.

    -
    rclone backend symlink remote: [options] [<arguments>+]
    +
    rclone backend symlink remote: [options] [<arguments>+]

    The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if -applicable. -rclone backend symlink <src> <path>

    +applicable.

    +

    Usage example:

    +
    rclone backend symlink <src> <path>
    +
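For instance, a sketch reusing the ns1 remote from the examples above, with notes.txt as the existing object and /974012/links/mylink as a hypothetical symlink path:

rclone backend symlink ns1:/974012/testing/notes.txt /974012/links/mylink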

    Microsoft Azure Blob Storage

    Paths are specified as remote:container (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

    -

    Configuration

    +

    Configuration

    Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -45269,14 +50225,14 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    See all containers

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Make a new container

    -
    rclone mkdir remote:container
    +
    rclone mkdir remote:container

    List the contents of a container

    -
    rclone ls remote:container
    +
    rclone ls remote:container

    Sync /home/local/directory to the remote container, deleting any excess files in the container.

    -
    rclone sync --interactive /home/local/directory remote:container
    +
    rclone sync --interactive /home/local/directory remote:container

    --fast-list

This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
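For example, a sketch of a recursive listing using fewer transactions (remote and container as configured above):

rclone ls --fast-list remote:container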

  • Workload Identity
      -
    • AZURE_TENANT_ID: Tenant to authenticate in.
    • +
    • AZURE_TENANT_ID: Tenant to authenticate in
    • AZURE_CLIENT_ID: Client ID of the application the user -will authenticate to.
    • +will authenticate to
    • AZURE_FEDERATED_TOKEN_FILE: Path to projected service -account token file.
    • +account token file
    • AZURE_AUTHORITY_HOST: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
  • @@ -45438,13 +50394,13 @@ Auth: 3. Azure CLI credentials (as used by the az tool) using env_auth.

    For example if you were to login with a service principal like this:

    -
    az login --service-principal -u XXX -p XXX --tenant XXX
    +
    az login --service-principal -u XXX -p XXX --tenant XXX

    Then you could access rclone resources like this:

    -
    rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER
    +
    rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER

    Or

    -
    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
    +
    rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER

    Which is analogous to using the az tool:

    -
    az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login
    +
    az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login

    Account and Shared Key

This is the most straightforward and least flexible way. Just fill in the account and key lines and leave the @@ -45459,14 +50415,14 @@ level SAS URL right click on a container in the Azure Blob explorer in the Azure portal.

    If you use a container level SAS URL, rclone operations are permitted only on a particular container, e.g.

    -
    rclone ls azureblob:container
    +
    rclone ls azureblob:container

    You can also list the single container from the root. This will only show the container specified by the SAS URL.

    -
    $ rclone lsd azureblob:
    +
    $ rclone lsd azureblob:
     container/

    Note that you can't see or access any other containers - this will fail

    -
    rclone ls azureblob:othercontainer
    +
    rclone ls azureblob:othercontainer

    Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.

    @@ -45562,8 +50518,10 @@ use.

    If you want to access resources with public anonymous access then set account only. You can do this without making an rclone config:

    -
    rclone lsf :azureblob,account=ACCOUNT:CONTAINER
    -

    Standard options

    +
    rclone lsf :azureblob,account=ACCOUNT:CONTAINER
    + + +

    Standard options

    Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).

    --azureblob-account

    @@ -45673,7 +50631,7 @@ obscure
    .

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to azureblob (Microsoft Azure Blob Storage).

  • Type: string
  • Required: false
  • +

    Custom upload headers

    You can set custom upload headers with the --header-upload flag.

    @@ -46138,8 +51097,8 @@ blob.
  • X-MS-Tags
  • Eg --header-upload "Content-Type: text/potato" or ---header-upload "X-MS-Tags: foo=bar"

    -

    Limitations

    +--header-upload "X-MS-Tags: foo=bar".
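A complete command might look like this (a sketch; the file and container names are placeholders):

rclone copy file.txt remote:container --header-upload "Content-Type: text/potato" --header-upload "X-MS-Tags: foo=bar"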

    +

    Limitations

    MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

    rclone about is not supported by the Microsoft Azure @@ -46148,7 +51107,7 @@ free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Azure Storage Emulator Support

    You can run rclone with the storage emulator (usually @@ -46169,12 +51128,12 @@ in the advanced settings, setting it to Storage

    Paths are specified as remote: You may put subdirectories in too, e.g. remote:path/to/dir.

    -

    Configuration

    +

    Configuration

    Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -46239,14 +51198,14 @@ d) Delete this remote
     y/e/d> 

    Once configured you can use rclone.

    See all files in the top level:

    -
    rclone lsf remote:
    +
    rclone lsf remote:

    Make a new directory in the root:

    -
    rclone mkdir remote:dir
    +
    rclone mkdir remote:dir

    Recursively List the contents:

    -
    rclone ls remote:
    +
    rclone ls remote:

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    -
    rclone sync --interactive /home/local/directory remote:dir
    +
    rclone sync --interactive /home/local/directory remote:dir

    Modified time

    The modified time is stored as Azure standard LastModified time on files

    @@ -46335,7 +51294,7 @@ get replaced if they are the last character in the name:

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Hashes

    +

    Hashes

    MD5 hashes are stored with files. Not all files will have MD5 hashes as these have to be uploaded with the file.

    Authentication

    @@ -46393,11 +51352,11 @@ address)
  • Workload Identity
      -
    • AZURE_TENANT_ID: Tenant to authenticate in.
    • +
    • AZURE_TENANT_ID: Tenant to authenticate in
    • AZURE_CLIENT_ID: Client ID of the application the user -will authenticate to.
    • +will authenticate to
    • AZURE_FEDERATED_TOKEN_FILE: Path to projected service -account token file.
    • +account token file
    • AZURE_AUTHORITY_HOST: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com).
  • @@ -46425,11 +51384,11 @@ Auth: 3. Azure CLI credentials (as used by the az tool) using env_auth.

    For example if you were to login with a service principal like this:

    -
    az login --service-principal -u XXX -p XXX --tenant XXX
    +
    az login --service-principal -u XXX -p XXX --tenant XXX

    Then you could access rclone resources like this:

    -
    rclone lsf :azurefiles,env_auth,account=ACCOUNT:
    +
    rclone lsf :azurefiles,env_auth,account=ACCOUNT:

    Or

    -
    rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:
    +
    rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:

    Account and Shared Key

This is the most straightforward and least flexible way. Just fill in the account and key lines and leave the @@ -46528,7 +51487,9 @@ the Azure CLI tool (https://learn.microsoft.com/en-us/cli/azure/) can be useful if you wish to use the az CLI on a host with a System Managed Identity that you do not want to use. Don't set env_auth at the same time.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to azurefiles (Microsoft Azure Files).

    --azurefiles-account

    @@ -46658,7 +51619,7 @@ obscure.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to azurefiles (Microsoft Azure Files).

    Type: string
  • Required: false
  • +

    Custom upload headers

    You can set custom upload headers with the --header-upload flag.

    @@ -46899,22 +51861,22 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,RightPerio
  • Content-Type
  • Eg --header-upload "Content-Type: text/potato"

    -

    Limitations

    +

    Limitations

    MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy.

    Microsoft OneDrive

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for OneDrive involves getting a token from Microsoft which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    e) Edit existing remote
    +
    e) Edit existing remote
     n) New remote
     d) Delete remote
     r) Rename remote
    @@ -46987,20 +51949,21 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your OneDrive

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your OneDrive

    -
    rclone ls remote:
    +
    rclone ls remote:

To copy a local directory to a OneDrive directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Getting your own Client ID and Key

    rclone uses a default Client ID when talking to OneDrive, unless a @@ -47014,8 +51977,9 @@ throttling.

    OneDrive Personal

    To create your own Client ID, please follow these steps:

      -
    1. Open -https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview +
    2. Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the Add menu click App registration.
        @@ -47024,7 +51988,7 @@ This is free, but you need to provide a phone number, address, and credit card for identity verification.
    3. Enter a name for your app, choose account type -Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), +Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect URI, then type (do not copy and paste) http://localhost:53682/ and click Register. Copy and keep the Application (client) ID under the app @@ -47109,6 +52073,18 @@ enter the tenant ID.
    4. client credentials flow. In particular the "onedrive" option does not work. You can use the "sharepoint" option or if that does not find the correct drive ID type it in manually with the "driveid" option.

      +

      To back up any user's data using this flow, grant your Azure AD +application the necessary Microsoft Graph Application +permissions (such as Files.Read.All, +Sites.Read.All and/or Sites.Selected). With +these permissions, rclone can access drives across the tenant, but it +needs to know which user or drive you want. Supply a specific +drive_id corresponding to that user's OneDrive, or a +SharePoint site ID for SharePoint libraries. You can obtain a user's +drive ID using Microsoft Graph (e.g. +/users/{userUPN}/drive) and then configure it in rclone. +Once the correct drive ID is provided, rclone will back up that user's +data using the app-only token without requiring their credentials.
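A sketch of that lookup, assuming an app-only Graph token in $TOKEN and user@example.com as the target user (these names, the remote name onedrive-user and the placeholder values are illustrative; drive_id and drive_type are standard onedrive config keys):

curl -H "Authorization: Bearer $TOKEN" "https://graph.microsoft.com/v1.0/users/user@example.com/drive?\$select=id"

The returned id can then be set in the remote's configuration alongside your client credentials settings, e.g.

[onedrive-user]
type = onedrive
client_id = YOUR_APP_ID
client_secret = YOUR_APP_SECRET
tenant = YOUR_TENANT_ID
drive_id = b!placeholderDriveID
drive_type = business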

      NOTE Assigning permissions directly to the application means that anyone with the Client ID and Client Secret can access your OneDrive files. Take care to safeguard these @@ -47267,7 +52243,9 @@ can't be used in JSON strings.

      doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to onedrive (Microsoft OneDrive).

      --onedrive-client-id

      @@ -47329,7 +52307,7 @@ ID.

    5. Type: string
    6. Required: false
    7. -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to onedrive (Microsoft OneDrive).

      --onedrive-token

      @@ -47766,7 +52744,7 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Del,Ctl,LeftSpace,
    8. Type: string
    9. Required: false
    10. -

      Metadata

      +

      Metadata

      OneDrive supports System Metadata (not User Metadata, as of this writing) for both files and directories. Much of the metadata is read-only, and there are some differences between OneDrive Personal and @@ -47786,69 +52764,69 @@ href="https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/pe API, which differs slightly between OneDrive Personal and Business.

      Example for OneDrive Personal:

      -
      [
      -    {
      -        "id": "1234567890ABC!123",
      -        "grantedTo": {
      -            "user": {
      -                "id": "ryan@contoso.com"
      -            },
      -            "application": {},
      -            "device": {}
      -        },
      -        "invitation": {
      -            "email": "ryan@contoso.com"
      -        },
      -        "link": {
      -            "webUrl": "https://1drv.ms/t/s!1234567890ABC"
      -        },
      -        "roles": [
      -            "read"
      -        ],
      -        "shareId": "s!1234567890ABC"
      -    }
      -]
      +
      [
      +    {
      +        "id": "1234567890ABC!123",
      +        "grantedTo": {
      +            "user": {
      +                "id": "ryan@contoso.com"
      +            },
      +            "application": {},
      +            "device": {}
      +        },
      +        "invitation": {
      +            "email": "ryan@contoso.com"
      +        },
      +        "link": {
      +            "webUrl": "https://1drv.ms/t/s!1234567890ABC"
      +        },
      +        "roles": [
      +            "read"
      +        ],
      +        "shareId": "s!1234567890ABC"
      +    }
      +]

      Example for OneDrive Business:

      -
      [
      -    {
      -        "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
      -        "grantedToIdentities": [
      -            {
      -                "user": {
      -                    "displayName": "ryan@contoso.com"
      -                },
      -                "application": {},
      -                "device": {}
      -            }
      -        ],
      -        "link": {
      -            "type": "view",
      -            "scope": "users",
      -            "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
      -        },
      -        "roles": [
      -            "read"
      -        ],
      -        "shareId": "u!LKj1lkdlals90j1nlkascl"
      -    },
      -    {
      -        "id": "5D33DD65C6932946",
      -        "grantedTo": {
      -            "user": {
      -                "displayName": "John Doe",
      -                "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
      -            },
      -            "application": {},
      -            "device": {}
      -        },
      -        "roles": [
      -            "owner"
      -        ],
      -        "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
      -    }
      -]
      +
      [
      +    {
      +        "id": "48d31887-5fad-4d73-a9f5-3c356e68a038",
      +        "grantedToIdentities": [
      +            {
      +                "user": {
      +                    "displayName": "ryan@contoso.com"
      +                },
      +                "application": {},
      +                "device": {}
      +            }
      +        ],
      +        "link": {
      +            "type": "view",
      +            "scope": "users",
      +            "webUrl": "https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s"
      +        },
      +        "roles": [
      +            "read"
      +        ],
      +        "shareId": "u!LKj1lkdlals90j1nlkascl"
      +    },
      +    {
      +        "id": "5D33DD65C6932946",
      +        "grantedTo": {
      +            "user": {
      +                "displayName": "John Doe",
      +                "id": "efee1b77-fb3b-4f65-99d6-274c11914d12"
      +            },
      +            "application": {},
      +            "device": {}
      +        },
      +        "roles": [
      +            "owner"
      +        ],
      +        "shareId": "FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U"
      +    }
      +]

      To write permissions, pass in a "permissions" metadata key using this same format. The --metadata-mapper @@ -47862,12 +52840,12 @@ for a user. Creating a Public Link is also supported, if Link.Scope is set to "anonymous".

      Example request to add a "read" permission with --metadata-mapper:

      -
      {
      -    "Metadata": {
      -        "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
      -    }
      -}
      +
      {
      +    "Metadata": {
      +        "permissions": "[{\"grantedToIdentities\":[{\"user\":{\"id\":\"ryan@contoso.com\"}}],\"roles\":[\"read\"]}]"
      +    }
      +}

      Note that adding a permission can fail if a conflicting permission already exists for the file/folder.

      To update an existing permission, include both the Permission ID and @@ -48042,20 +53020,21 @@ Personal).

      See the metadata docs for more info.

      +

      Impersonate other users as Admin

Unlike Google Drive and impersonating any domain user via service accounts, OneDrive requires you to authenticate as an admin account, and manually set up a remote per user you wish to impersonate.

        -
      1. In Microsoft 365 Admin +
      2. In Microsoft 365 Admin Center, open each user you need to "impersonate" and go to the OneDrive section. There is a heading called "Get access to files", you need to click to create the link, this creates the link of the format https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/ -but also changes the permissions so you your admin user has access.

      3. -
      4. Then in powershell run the following commands:
      5. -
+but also changes the permissions so your admin user has access.

      +
    11. Then in powershell run the following commands:

      Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
       Import-Module Microsoft.Graph.Files
       Connect-MgGraph -Scopes "Files.ReadWrite.All"
      @@ -48065,18 +53044,16 @@ Get-MgUserDefaultDrive -UserId '{emailaddress}'
       # This will give you output of the format:
       # Name     Id                                                                 DriveType CreatedDateTime
       # ----     --                                                                 --------- ---------------
      -# OneDrive b!XYZ123                                                           business  14/10/2023 1:00:58 pm
      -
      -
        -
      1. Then in rclone add a onedrive remote type, and use the +# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm
    +
  • Then in rclone add a onedrive remote type, and use the Type in driveID with the DriveID you got in the previous step. One remote per user. It will then confirm the drive ID, and hopefully give you a message of Found drive "root" of type "business" and then include the URL of the format -https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents

  • +https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents

    -

    Limitations

    +

    Limitations

    If you don't use rclone for 90 days the refresh token will expire. This will result in authorization problems. This is easy to fix by running the rclone config reconnect remote: command to get @@ -48176,7 +53153,7 @@ then querying each file for versions it can be quite slow. Rclone does --checkers tests in parallel. The command also supports --interactive/i or --dry-run which is a great way to see what it would do.

    -
    rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
    +
    rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir
     rclone cleanup remote:path/subdir               # unconditionally remove all old version for path/subdir

    NB Onedrive personal can't currently delete versions

    @@ -48199,7 +53176,7 @@ causing file size and hash checks to fail. There are also other situations that will cause OneDrive to report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:

    -
    --ignore-checksum --ignore-size
    +
    --ignore-checksum --ignore-size

    Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for rclone-backup-dir on backend mysharepoint, you may use:

    -
    --backup-dir mysharepoint:rclone-backup-dir
    +
    --backup-dir mysharepoint:rclone-backup-dir

    access_denied (AADSTS65005)

    -
    Error: access_denied
    +
    Error: access_denied
     Code: AADSTS65005
     Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.

    This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.

    However, there are other ways to interact with your OneDrive account. -Have a look at the WebDAV backend: -https://rclone.org/webdav/#sharepoint

    +Have a look at the WebDAV backend:
    https://rclone.org/webdav/#sharepoint

    invalid_grant (AADSTS50076)

    -
    Error: invalid_grant
    +
    Error: invalid_grant
     Code: AADSTS50076
     Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.

    If you see the error above after enabling multi-factor authentication @@ -48296,7 +53274,7 @@ file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive.

    The different sizes will cause rclone copy/sync to repeatedly recopy unmodified photos something like this:

    -
    DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
    +
    DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
     DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
     INFO  : 20230203_123826234_iOS.heic: Copied (replaced existing)

    These recopies can be worked around by adding @@ -48305,25 +53283,25 @@ the still-picture not the movie clip, and relies on modification dates being correctly updated on all files in all situations.

    The different sizes will also cause rclone check to report size errors something like this:

    -
    ERROR : 20230203_123826234_iOS.heic: sizes differ
    +
    ERROR : 20230203_123826234_iOS.heic: sizes differ

    These check errors can be suppressed by adding --ignore-size.

    The different sizes will also cause rclone mount to fail downloading with an error something like this:

    -
    ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
    +
    ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF

    or like this when using --cache-mode=full:

    -
    INFO  : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
    +
    INFO  : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
     ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:

    OpenDrive

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    n) New remote
    +
    n) New remote
     d) Delete remote
     q) Quit config
     e/n/d/q> n
    @@ -48356,11 +53334,11 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    List directories in top level of your OpenDrive

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your OpenDrive

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to an OpenDrive directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    OpenDrive allows modification times to be set on objects accurate to @@ -48472,7 +53450,9 @@ name:

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to opendrive (OpenDrive).

    --opendrive-username

    Username.

    @@ -48495,7 +53475,7 @@ obscure.

  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to opendrive (OpenDrive).

    --opendrive-encoding

    The encoding for the backend.

    @@ -48559,7 +53539,8 @@ access the contents
  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    There are quite a few characters that can't be in OpenDrive file @@ -48573,33 +53554,32 @@ rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Oracle Object Storage

    +

Object Storage provided by Oracle Cloud Infrastructure (OCI). Read more at oracle.com.

    Paths are specified as remote:bucket (or -remote: for the lsd command.) You may put +remote: for the lsd command). You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    Sample command to transfer local artifacts to remote:bucket in oracle object storage:

    -

    rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv

    -

    Configuration

    +
    rclone -vvv  --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64  --retries 2  --oos-chunk-size 10Mi --oos-upload-concurrency 10000  --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts  remote:bucket -vv
    +

    Configuration

    Here is an example of making an oracle object storage configuration. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    n) New remote
    +
    n) New remote
     d) Delete remote
     r) Rename remote
     c) Copy remote
    @@ -48699,11 +53679,11 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    See all buckets

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Create a new bucket

    -
    rclone mkdir remote:bucket
    +
    rclone mkdir remote:bucket

    List the contents of a bucket

    -
    rclone ls remote:bucket
    +
    rclone ls remote:bucket
     rclone ls remote:bucket --max-depth 1

    Authentication Providers

    OCI has various authentication methods. To learn more about @@ -48712,7 +53692,7 @@ href="https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication authentication methods These choices can be specified in the rclone config file.

    Rclone supports the following OCI authentication provider.

    -
    User Principal
    +
    User Principal
     Instance Principal
     Resource Principal
     Workload Identity
    @@ -48720,27 +53700,35 @@ No authentication

    User Principal

    Sample rclone config file for Authentication Provider User Principal:

    -
    [oos]
    -type = oracleobjectstorage
    -namespace = id<redacted>34
    -compartment = ocid1.compartment.oc1..aa<redacted>ba
    -region = us-ashburn-1
    -provider = user_principal_auth
    -config_file = /home/opc/.oci/config
    -config_profile = Default
    -

    Advantages: - One can use this method from any server within OCI or -on-premises or from other cloud provider.

    -

    Considerations: - you need to configure user’s privileges / policy to -allow access to object storage - Overhead of managing users and keys. - -If the user is deleted, the config file will no longer work and may -cause automation regressions that use the user's credentials.

    +
    [oos]
    +type = oracleobjectstorage
    +namespace = id<redacted>34
    +compartment = ocid1.compartment.oc1..aa<redacted>ba
    +region = us-ashburn-1
    +provider = user_principal_auth
    +config_file = /home/opc/.oci/config
    +config_profile = Default
    +

    Advantages:

    +
      +
• One can use this method from any server within OCI or on-premises or from another cloud provider.
    • +
    +

    Considerations:

    +
      +
• You need to configure the user’s privileges / policy to allow access to object storage
    • +
    • Overhead of managing users and keys.
    • +
    • If the user is deleted, the config file will no longer work and may +cause automation regressions that use the user's credentials.
    • +

    Instance Principal

An OCI compute instance can be authorized to use rclone by using its identity and certificates as an instance principal. With this approach no credentials have to be stored and managed.

    Sample rclone configuration file for Authentication Provider Instance Principal:

    -
    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
    +
    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
     [oos]
     type = oracleobjectstorage
     namespace = id<redacted>fn
    @@ -48771,18 +53759,19 @@ but used for resources that are not compute instances such as serverless
     functions. To use resource principal ensure Rclone process is
     started with these environment variables set in its process.

    -
    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
    +
    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
     export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
     export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
     export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token

    Sample rclone configuration file for Authentication Provider Resource Principal:

    -
    [oos]
    -type = oracleobjectstorage
    -namespace = id<redacted>34
    -compartment = ocid1.compartment.oc1..aa<redacted>ba
    -region = us-ashburn-1
    -provider = resource_principal_auth
    +
    [oos]
    +type = oracleobjectstorage
    +namespace = id<redacted>34
    +compartment = ocid1.compartment.oc1..aa<redacted>ba
    +region = us-ashburn-1
    +provider = resource_principal_auth

    Workload Identity

Workload Identity auth may be used when running Rclone from a Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster. For @@ -48791,17 +53780,18 @@ href="https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingwo Workloads Access to OCI Resources. To use workload identity, ensure Rclone is started with these environment variables set in its process.

    -
    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
    +
    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
     export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1

    No authentication

    Public buckets do not require any authentication mechanism to read objects. Sample rclone configuration file for No authentication:

    -
    [oos]
    -type = oracleobjectstorage
    -namespace = id<redacted>34
    -compartment = ocid1.compartment.oc1..aa<redacted>ba
    -region = us-ashburn-1
    -provider = no_auth
    +
    [oos]
    +type = oracleobjectstorage
    +namespace = id<redacted>34
    +compartment = ocid1.compartment.oc1..aa<redacted>ba
    +region = us-ashburn-1
    +provider = no_auth

    Modification times and hashes

The modification time is stored as metadata on the object as @@ -48839,7 +53829,9 @@ throughput (8 would be a sensible value) and increasing --oos-chunk-size (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory.
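For example, a transfer tuned along those lines might look like this (a sketch; the values mirror the suggestions above):

rclone copy --oos-upload-concurrency 8 --oos-chunk-size 16Mi ./artifacts remote:bucket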

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).

    --oos-provider

    @@ -48963,7 +53955,7 @@ buckets -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage).

    --oos-storage-tier

    @@ -49261,7 +54253,7 @@ Encryption
  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    User metadata is stored as opc-meta- keys.

    Here are the possible system metadata items for the oracleobjectstorage backend.

    @@ -49332,8 +54324,8 @@ for more info.

    Backend commands

    Here are the commands specific to the oracleobjectstorage backend.

    -

    Run them with

    -
    rclone backend COMMAND remote:
    +

    Run them with:

    +
    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command @@ -49341,80 +54333,87 @@ for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    rename

    -

    change the name of an object

    -
    rclone backend rename remote: [options] [<arguments>+]
    +

Change the name of an object.

    +
    rclone backend rename remote: [options] [<arguments>+]

This command can be used to rename an object.

    -

    Usage Examples:

    -
    rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
    +

    Usage example:

    +
    rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name

    list-multipart-uploads

    -

    List the unfinished multipart uploads

    -
    rclone backend list-multipart-uploads remote: [options] [<arguments>+]
    +

    List the unfinished multipart uploads.

    +
    rclone backend list-multipart-uploads remote: [options] [<arguments>+]

    This command lists the unfinished multipart uploads in JSON format.

    -
    rclone backend list-multipart-uploads oos:bucket/path/to/object
    +

    Usage example:

    +
    rclone backend list-multipart-uploads oos:bucket/path/to/object

    It returns a dictionary of buckets with values as lists of unfinished multipart uploads.

You can call it with no bucket in which case it lists all buckets, with a bucket, or with a bucket and path.

    -
    {
    -  "test-bucket": [
    -            {
    -                    "namespace": "test-namespace",
    -                    "bucket": "test-bucket",
    -                    "object": "600m.bin",
    -                    "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
    -                    "timeCreated": "2022-07-29T06:21:16.595Z",
    -                    "storageTier": "Standard"
    -            }
    -    ]
    -

    cleanup

    -

    Remove unfinished multipart uploads.

    -
    rclone backend cleanup remote: [options] [<arguments>+]
    +
    {
    +    "test-bucket": [
    +        {
    +            "namespace": "test-namespace",
    +            "bucket": "test-bucket",
    +            "object": "600m.bin",
    +            "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
    +            "timeCreated": "2022-07-29T06:21:16.595Z",
    +            "storageTier": "Standard"
    +        }
    +    ]
    +}
    +
+cleanup
+
+Remove unfinished multipart uploads.
+
    +rclone backend cleanup remote: [options] [<arguments>+]

    This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours.

    Note that you can use --interactive/-i or --dry-run with this command to see what it would do.

    -
    rclone backend cleanup oos:bucket/path/to/object
    +

    Usage examples:

    +
    rclone backend cleanup oos:bucket/path/to/object
     rclone backend cleanup -o max-age=7w oos:bucket/path/to/object

    Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

    Options:

      -
    • "max-age": Max age of upload to delete
    • +
    • "max-age": Max age of upload to delete.

    restore

    -

    Restore objects from Archive to Standard storage

    -
    rclone backend restore remote: [options] [<arguments>+]
    +

    Restore objects from Archive to Standard storage.

    +
    rclone backend restore remote: [options] [<arguments>+]

    This command can be used to restore one or more objects from Archive to Standard storage.

    -
    Usage Examples:
    -
    -rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
    +

    Usage examples:

    +
    rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
     rclone backend restore oos:bucket -o hours=HOURS

    This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags

    -
    rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
    -

    All the objects shown will be marked for restore, then

    -
    rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
    -
    -It returns a list of status dictionaries with Object Name and Status
    -keys. The Status will be "RESTORED"" if it was successful or an error message
    -if not.
    -
    -[
    -    {
    -        "Object": "test.txt"
    -        "Status": "RESTORED",
    -    },
    -    {
    -        "Object": "test/file4.txt"
    -        "Status": "RESTORED",
    -    }
    -]
    +
    rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
    +

    All the objects shown will be marked for restore, then:

    +
    rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
    +

    It returns a list of status dictionaries with Object Name and Status +keys. The Status will be "RESTORED"" if it was successful or an error +message if not.

    +
    [
    +    {
    +        "Object": "test.txt"
    +        "Status": "RESTORED",
    +    },
    +    {
    +        "Object": "test/file4.txt"
    +        "Status": "RESTORED",
    +    }
    +]

    Options:

    • "hours": The number of hours for which this object will be restored. Default is 24 hrs.
    +

    Tutorials

    Mounting @@ -49423,11 +54422,11 @@ Buckets

    Paths are specified as remote:bucket (or remote: for the lsd command.) You may put subdirectories in too, e.g. remote:bucket/path/to/dir.

    -

    Configuration

    +

    Configuration

    Here is an example of making an QingStor configuration. First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     r) Rename remote
     c) Copy remote
    @@ -49486,14 +54485,14 @@ y/e/d> y

    This remote is called remote and can now be used like this

    See all buckets

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Make a new bucket

    -
    rclone mkdir remote:bucket
    +
    rclone mkdir remote:bucket

    List the contents of a bucket

    -
    rclone ls remote:bucket
    +
    rclone ls remote:bucket

    Sync /home/local/directory to the remote bucket, deleting any excess files in the bucket.

    -
    rclone sync --interactive /home/local/directory remote:bucket
    +
    rclone sync --interactive /home/local/directory remote:bucket

    --fast-list

    This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the incorrect zone, the bucket is not in 'XXX' zone.

    -

    Authentication

    +

    Authentication

    There are two ways to supply rclone with a set of QingStor credentials. In order of precedence:

      @@ -49546,7 +54545,9 @@ restricted characters set. Note that 0x7F is not replaced.

      Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to qingstor (QingCloud Object Storage).

      --qingstor-env-auth

      @@ -49630,7 +54631,7 @@ IAM).
    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to qingstor (QingCloud Object Storage).

    --qingstor-connection-retries

    @@ -49705,14 +54706,15 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    rclone about is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Quatrix

    Quatrix by Maytech is Quatrix Secure @@ -49723,16 +54725,18 @@ Compliant File Sharing | Maytech.

    The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at https://<account>/profile/api-keys or with the help -of the API - -https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.

    -

    See complete Swagger documentation for Quatrix - -https://docs.maytech.net/quatrix/quatrix-api/api-explorer

    -

    Configuration

    +of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.

    +

    See complete Swagger +documentation for Quatrix.

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -49760,20 +54764,21 @@ y) Yes this is OK
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your Quatrix

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your Quatrix

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to an Quatrix directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    API key validity

    API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. After disabling, the API Key can be enabled back. If the API Key was deleted and a new key was created, you can update it in rclone config. The same happens if the hostname was changed.

    -
    $ rclone config
    +
    $ rclone config
     Current remotes:
     
     Name                 Type
    @@ -49851,7 +54856,9 @@ available, all chunks will equal minimal_chunk_size.

    for 30 days. Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to quatrix (Quatrix by Maytech).

    --quatrix-api-key

    @@ -49872,7 +54879,7 @@ Maytech).

  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to quatrix (Quatrix by Maytech).

    --quatrix-encoding

    @@ -49944,6 +54951,7 @@ id="quatrix-skip-project-folders">--quatrix-skip-project-folders
  • Type: string
  • Required: false
  • +

    Storage usage

    The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. The @@ -49978,39 +54986,46 @@ impossible).

    However, if you want to access Sia daemon running on another node, for example due to memory constraints or because you want to share single daemon between several rclone and Sia-UI instances, you'll need -to make a few more provisions: - Ensure you have Sia daemon -installed directly or in a +

      +
    • Ensure you have Sia daemon installed directly or in a docker -container because Sia-UI does not support this mode natively. - Run -it on externally accessible port, for example provide +container because Sia-UI does not support this mode natively.
    • +
    • Run it on externally accessible port, for example provide --api-addr :9980 and --disable-api-security -arguments on the daemon command line. - Enforce API password for the -siad daemon via environment variable -SIA_API_PASSWORD or text file named -apipassword in the daemon directory. - Set rclone backend -option api_password taking it from above locations.

      -

      Notes: 1. If your wallet is locked, rclone cannot unlock it -automatically. You should either unlock it in advance by using Sia-UI or -via command line siac wallet unlock. Alternatively you can -make siad unlock your wallet automatically upon startup by -running it with environment variable SIA_WALLET_PASSWORD. -2. If siad cannot find the SIA_API_PASSWORD +arguments on the daemon command line.

    • +
    • Enforce API password for the siad daemon via +environment variable SIA_API_PASSWORD or text file named +apipassword in the daemon directory.
    • +
    • Set rclone backend option api_password taking it from +above locations.
    • +
    +

    Notes:

    +
      +
    1. If your wallet is locked, rclone cannot unlock it automatically. You +should either unlock it in advance by using Sia-UI or via command line +siac wallet unlock. Alternatively you can make +siad unlock your wallet automatically upon startup by +running it with environment variable +SIA_WALLET_PASSWORD.
    2. +
    3. If siad cannot find the SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR directory, it will generate a random password and store in the text file named apipassword under YOUR_HOME/.sia/ directory on Unix or C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on -Windows. Remember this when you configure password in rclone. 3. The -only way to use siad without API password is to run it -on localhost with command line argument +Windows. Remember this when you configure password in rclone.
    4. +
    5. The only way to use siad without API password is to run +it on localhost with command line argument --authorize-api=false, but this is insecure and -strongly discouraged.

      -

      Configuration

      +strongly discouraged.
    6. +
    +

    Configuration

    Here is an example of how to make a sia remote called mySia. First, run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -50055,19 +55070,17 @@ d) Delete this remote
     y/e/d> y

    Once configured, you can then use rclone like this:

      -
    • List directories in top level of your Sia storage
    • +
    • List directories in top level of your Sia storage

      +
      rclone lsd mySia:
    • +
    • List all the files in your Sia storage

      +
      rclone ls mySia:
    • +
    • Upload a local directory to the Sia directory called +backup

      +
      rclone copy /home/source mySia:backup
    -
    rclone lsd mySia:
    -
      -
    • List all the files in your Sia storage
    • -
    -
    rclone ls mySia:
    -
      -
    • Upload a local directory to the Sia directory called -backup
    • -
    -
    rclone copy /home/source mySia:backup
    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to sia (Sia Decentralized Cloud).

    --sia-api-url

    @@ -50096,7 +55109,7 @@ obscure.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sia (Sia Decentralized Cloud).

    --sia-user-agent

    @@ -50130,7 +55143,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    • Modification times not supported
    • Checksums not supported
    • @@ -50167,11 +55181,11 @@ Bluemix Cloud ObjectStorage Swift remote: for the lsd command.) You may put subdirectories in too, e.g. remote:container/path/to/dir.

      -

      Configuration

      +

      Configuration

      Here is an example of making a swift configuration. First run

      -
      rclone config
      +
      rclone config

      This will guide you through an interactive setup process.

      -
      No remotes found, make a new one?
      +
      No remotes found, make a new one\?
       n) New remote
       s) Set configuration password
       q) Quit config
      @@ -50264,37 +55278,39 @@ y/e/d> y

      This remote is called remote and can now be used like this

      See all containers

      -
      rclone lsd remote:
      +
      rclone lsd remote:

      Make a new container

      -
      rclone mkdir remote:container
      +
      rclone mkdir remote:container

      List the contents of a container

      -
      rclone ls remote:container
      +
      rclone ls remote:container

      Sync /home/local/directory to the remote container, deleting any excess files in the container.

      -
      rclone sync --interactive /home/local/directory remote:container
      +
      rclone sync --interactive /home/local/directory remote:container

      Configuration from an OpenStack credentials file

      An OpenStack credentials file typically looks something something like this (without the comments)

      -
      export OS_AUTH_URL=https://a.provider.net/v2.0
      -export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
      -export OS_TENANT_NAME="1234567890123456"
      -export OS_USERNAME="123abc567xy"
      -echo "Please enter your OpenStack Password: "
      -read -sr OS_PASSWORD_INPUT
      -export OS_PASSWORD=$OS_PASSWORD_INPUT
      -export OS_REGION_NAME="SBG1"
      -if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi
      +
      export OS_AUTH_URL=https://a.provider.net/v2.0
      +export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
      +export OS_TENANT_NAME="1234567890123456"
      +export OS_USERNAME="123abc567xy"
      +echo "Please enter your OpenStack Password: "
      +read -sr OS_PASSWORD_INPUT
      +export OS_PASSWORD=$OS_PASSWORD_INPUT
      +export OS_REGION_NAME="SBG1"
      +if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

      The config file needs to look something like this where $OS_USERNAME represents the value of the OS_USERNAME variable - 123abc567xy in the example above.

      -
      [remote]
      -type = swift
      -user = $OS_USERNAME
      -key = $OS_PASSWORD
      -auth = $OS_AUTH_URL
      -tenant = $OS_TENANT_NAME
      +
      [remote]
      +type = swift
      +user = $OS_USERNAME
      +key = $OS_PASSWORD
      +auth = $OS_AUTH_URL
      +tenant = $OS_TENANT_NAME

      Note that you may (or may not) need to set region too - try without first.

      Configuration from the @@ -50323,10 +55339,11 @@ OpenStack installation.

      config file

      You can use rclone with swift without a config file, if desired, like this:

      -
      source openstack-credentials-file
      -export RCLONE_CONFIG_MYREMOTE_TYPE=swift
      -export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
      -rclone lsd myremote:
      +
      source openstack-credentials-file
      +export RCLONE_CONFIG_MYREMOTE_TYPE=swift
      +export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
      +rclone lsd myremote:

      --fast-list

      This remote supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the

      Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

      -

      Standard options

      + + +

      Standard options

      Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

      --swift-env-auth

      @@ -50630,7 +55649,7 @@ provider.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)).

    --swift-leave-parts-on-error

    @@ -50789,7 +55808,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.

    @@ -50824,21 +55844,21 @@ state within the OVH control panel.

    objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:

    -

    2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)

    +
    2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)

    Rclone will wait for the time specified then retry the copy.

    pCloud

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -50883,8 +55903,8 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

    Note if you are using remote config with rclone authorize while your pcloud server is the EU region, you will need to set the hostname in 'Edit advanced config', otherwise you might get a token error.

    @@ -50893,13 +55913,14 @@ the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your pCloud

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your pCloud

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to a pCloud directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    pCloud allows modification times to be set on objects accurate to 1 @@ -50953,14 +55974,24 @@ correct root to use itself.

    However you can set this to restrict rclone to a specific folder hierarchy.

    In order to do this you will have to find the Folder ID -of the directory you wish rclone to display. This will be the -folder field of the URL when you open the relevant folder -in the pCloud web interface.

    -

    So if the folder you want rclone to use has a URL which looks like -https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid -in the browser, then you use 5xxxxxxxx8 as the -root_folder_id in the config.

    -

    Standard options

    +of the directory you wish rclone to display. This can be accomplished by +executing the rclone lsf command using a basic +configuration setup that does not include the +root_folder_id parameter.

    +

    The command will enumerate available directories, allowing you to +locate the appropriate Folder ID for subsequent use.

    +

    Example:

    +
    $ rclone lsf --dirs-only -Fip --csv TestPcloud:
    +dxxxxxxxx2,My Music/
    +dxxxxxxxx3,My Pictures/
    +dxxxxxxxx4,My Videos/
    +

    So if the folder you want rclone to use your is "My Music/", then use +the returned id from rclone lsf command (ex. +dxxxxxxxx2) as the root_folder_id variable +value in the config file.

    + + +

    Standard options

    Here are the Standard options specific to pcloud (Pcloud).

    --pcloud-client-id

    OAuth Client Id.

    @@ -50982,7 +56013,7 @@ in the browser, then you use 5xxxxxxxx8 as the
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to pcloud (Pcloud).

    --pcloud-token

    OAuth Access Token as a JSON blob.

    @@ -51103,17 +56134,18 @@ obscure.

  • Type: string
  • Required: false
  • +

    PikPak

    PikPak is a private cloud drive.

    Paths are specified as remote:path, and may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    Here is an example of making a remote for PikPak.

    First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -51167,7 +56199,9 @@ hashes
     uploading objects, but it does not support changing only the
     modification time

    The MD5 hash algorithm is supported.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to pikpak (PikPak).

    --pikpak-user

    Pikpak username.

    @@ -51190,7 +56224,7 @@ obscure.

  • Type: string
  • Required: true
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to pikpak (PikPak).

    --pikpak-device-id

    Device ID used for authorization.

    @@ -51340,8 +56374,8 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,LeftSpace,Righ

    Backend commands

    Here are the commands specific to the pikpak backend.

    -

    Run them with

    -
    rclone backend COMMAND remote:
    +

    Run them with:

    +
    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command @@ -51349,31 +56383,33 @@ for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    addurl

    -

    Add offline download task for url

    -
    rclone backend addurl remote: [options] [<arguments>+]
    +

    Add offline download task for url.

    +
    rclone backend addurl remote: [options] [<arguments>+]

    This command adds offline download task for url.

    -

    Usage:

    -
    rclone backend addurl pikpak:dirpath url
    +

    Usage example:

    +
    rclone backend addurl pikpak:dirpath url

    Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, download will fallback to default 'My Pack' folder.

    decompress

    -

    Request decompress of a file/files in a folder

    -
    rclone backend decompress remote: [options] [<arguments>+]
    +

    Request decompress of a file/files in a folder.

    +
    rclone backend decompress remote: [options] [<arguments>+]

    This command requests decompress of file/files in a folder.

    -

    Usage:

    -
    rclone backend decompress pikpak:dirpath {filename} -o password=password
    +

    Usage examples:

    +
    rclone backend decompress pikpak:dirpath {filename} -o password=password
     rclone backend decompress pikpak:dirpath {filename} -o delete-src-file

    An optional argument 'filename' can be specified for a file located in 'pikpak:dirpath'. You may want to pass '-o password=password' for a password-protected files. Also, pass '-o delete-src-file' to delete source files after decompression finished.

    Result:

    -
    {
    -    "Decompressed": 17,
    -    "SourceDeleted": 0,
    -    "Errors": 0
    -}
    -

    Limitations

    +
    {
    +    "Decompressed": 17,
    +    "SourceDeleted": 0,
    +    "Errors": 0
    +}
    + +

    Limitations

    Hashes may be empty

    PikPak supports MD5 hash, but sometimes given empty especially for user-uploaded files.

    @@ -51391,18 +56427,19 @@ subscriptions.

    An overview of the filesystem's features and limitations is available in the filesystem guide on pixeldrain.

    -

    Usage with account

    +

    Usage with account

    To use the personal filesystem you will need a pixeldrain account and either the Prepaid plan or one of the Patreon-based subscriptions. After registering and subscribing, your personal filesystem will be available -at this link: https://pixeldrain.com/d/me.

    +at this link: https://pixeldrain.com/d/me.

    Go to the API keys page on your account and generate a new API key for rclone. Then run rclone config and use the API key to create a new backend.

    Example:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     d) Delete remote
     c) Copy remote
    @@ -51463,7 +56500,7 @@ c) Copy remote
     s) Set configuration password
     q) Quit config
     e/n/d/r/c/s/q> q
    -

    Usage without account

    +

    Usage without account

    It is possible to gain read-only access to publicly shared directories through rclone. For this you only need a directory ID. The directory ID can be found in the URL of a shared directory, the URL will @@ -51475,7 +56512,9 @@ filesystem can also be listed with the lsf command:

    directory and their public IDs.

    Enter this directory ID in the rclone config and you will be able to access the directory.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to pixeldrain (Pixeldrain Filesystem).

    --pixeldrain-api-key

    @@ -51499,7 +56538,7 @@ directory ID to use a shared directory.

  • Type: string
  • Default: "me"
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to pixeldrain (Pixeldrain Filesystem).

    --pixeldrain-api-url

    @@ -51522,7 +56561,7 @@ testing purposes.

  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    Pixeldrain supports file modes and creation times.

    Here are the possible system metadata items for the pixeldrain backend.

    @@ -51569,20 +56608,21 @@ backend.

    See the metadata docs for more info.

    +

    premiumize.me

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for premiumize.me involves getting a token from premiumize.me which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -51620,21 +56660,22 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> 

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

    Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your premiumize.me

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your premiumize.me

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to an premiumize.me directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    premiumize.me does not support modification times or hashes, @@ -51670,7 +56711,9 @@ replaced:

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to premiumizeme (premiumize.me).

    --premiumizeme-client-id

    @@ -51703,7 +56746,7 @@ can't be used in JSON strings.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to premiumizeme (premiumize.me).

    --premiumizeme-token

    @@ -51768,7 +56811,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".

    premiumize.me file names can't have the \ or @@ -51797,9 +56841,9 @@ rclone forum if you find an incompatibility.

    Configurations

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -51841,14 +56885,15 @@ y/e/d> y

    NOTE: The Proton Drive encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your Proton Drive

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your Proton Drive

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to an Proton Drive directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    Proton Drive Bridge does not support updating modification times @@ -51877,7 +56922,9 @@ changed on the drive, is yet to be implemented, so updates from other clients won’t be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to protondrive (Proton Drive).

    --protondrive-username

    @@ -51913,7 +56960,23 @@ with two-factor authentication

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    --protondrive-otp-secret-key

    +

    The OTP secret key

    +

    The value can also be provided with +--protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567

    +

    The OTP secret key of your proton drive account if the account is set +up with two-factor authentication

    +

    NB Input to this must be obscured - see rclone +obscure.

    +

    Properties:

    +
      +
    • Config: otp_secret_key
    • +
    • Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY
    • +
    • Type: string
    • +
    • Required: false
    • +
    +

    Advanced options

    Here are the Advanced options specific to protondrive (Proton Drive).

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    This backend uses the Proton-API-Bridge, which is based on

    Paths are specified as remote:path

    put.io paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    The initial setup for put.io involves getting a token from put.io which you need to do in your browser. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -52151,8 +57215,8 @@ s) Set configuration password
     q) Quit config
     e/n/d/r/c/s/q> q

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

    Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if using web browser to automatically authenticate. This only runs from the moment it opens your browser to @@ -52162,11 +57226,11 @@ unblock it temporarily if you are running a host firewall, or use manual mode.

    You can then use it like this,

    List directories in top level of your put.io

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your put.io

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to a put.io directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Restricted filename characters

    In addition to the

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to putio (Put.io).

    --putio-client-id

    OAuth Client Id.

    @@ -52214,7 +57280,7 @@ can't be used in JSON strings.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to putio (Put.io).

    --putio-token

    OAuth Access Token as a JSON blob.

    @@ -52277,7 +57343,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    put.io has rate limiting. When you hit a limit, rclone automatically retries after waiting the amount of time requested by the server.

    If you want to avoid ever hitting these limits, you may use the @@ -52305,9 +57372,9 @@ rclone forum if you find an incompatibility.

    Configurations

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -52349,14 +57416,15 @@ y/e/d> y

    NOTE: The Proton Drive encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your Proton Drive

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your Proton Drive

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to an Proton Drive directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    Proton Drive Bridge does not support updating modification times @@ -52385,7 +57453,9 @@ changed on the drive, is yet to be implemented, so updates from other clients won’t be reflected in the cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to protondrive (Proton Drive).

    --protondrive-username

    @@ -52421,7 +57491,23 @@ with two-factor authentication

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    --protondrive-otp-secret-key

    +

    The OTP secret key

    +

    The value can also be provided with +--protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567

    +

    The OTP secret key of your proton drive account if the account is set +up with two-factor authentication

    +

    NB Input to this must be obscured - see rclone +obscure.

    +

    Properties:

    +
      +
    • Config: otp_secret_key
    • +
    • Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY
    • +
    • Type: string
    • +
    • Required: false
    • +
    +

    Advanced options

    Here are the Advanced options specific to protondrive (Proton Drive).

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    This backend uses the Proton-API-Bridge, which is based on

    Seafile

    This is a backend for the Seafile storage service: - It works -with both the free community edition or the professional edition. - -Seafile versions 6.x, 7.x, 8.x and 9.x are all supported. - Encrypted -libraries are also supported. - It supports 2FA enabled users - Using a -Library API Token is not supported

    -

    Configuration

    -

    There are two distinct modes you can setup your remote: - you point -your remote to the root of the server, meaning you -don't specify a library during the configuration: Paths are specified as -remote:library. You may put subdirectories in too, e.g. -remote:library/path/to/dir. - you point your remote to a -specific library during the configuration: Paths are specified as -remote:path/to/dir. This is the recommended mode -when using encrypted libraries. (This mode is possibly -slightly faster than the root mode)

    +href="https://www.seafile.com/">Seafile storage service:

    +
      +
    • It works with both the free community edition or the professional +edition.
    • +
    • Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
    • +
    • Encrypted libraries are also supported.
    • +
    • It supports 2FA enabled users
    • +
    • Using a Library API Token is not supported
    • +
    +

    Configuration

    +

    There are two distinct modes you can setup your remote:

    +
      +
    • you point your remote to the root of the server, +meaning you don't specify a library during the configuration: Paths are +specified as remote:library. You may put subdirectories in +too, e.g. remote:library/path/to/dir.
    • +
    • you point your remote to a specific library during the +configuration: Paths are specified as remote:path/to/dir. +This is the recommended mode when using encrypted +libraries. (This mode is possibly slightly faster than the +root mode)
    • +

    Configuration in root mode

    Here is an example of making a seafile configuration for a user with no two-factor authentication. First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -52685,21 +57779,21 @@ y/e/d> y

    This remote is called seafile. It's pointing to the root of your seafile server and can now be used like this:

    See all libraries

    -
    rclone lsd seafile:
    +
    rclone lsd seafile:

    Create a new library

    -
    rclone mkdir seafile:library
    +
    rclone mkdir seafile:library

    List the contents of a library

    -
    rclone ls seafile:library
    +
    rclone ls seafile:library

    Sync /home/local/directory to the remote library, deleting any excess files in the library.

    -
    rclone sync --interactive /home/local/directory seafile:library
    +
    rclone sync --interactive /home/local/directory seafile:library

    Configuration in library mode

    Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -52772,14 +57866,14 @@ because we only need the password to authenticate you once.

    root of the remote is pointing at the root of the library My Library:

    See all files in the library:

    -
    rclone lsd seafile:
    +
    rclone lsd seafile:

    Create a new directory inside the library

    -
    rclone mkdir seafile:directory
    +
    rclone mkdir seafile:directory

    List the contents of a directory

    -
    rclone ls seafile:directory
    +
    rclone ls seafile:directory

    Sync /home/local/directory to the remote library, deleting any excess files in the library.

    -
    rclone sync --interactive /home/local/directory seafile:
    +
    rclone sync --interactive /home/local/directory seafile:

    --fast-list

    Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the

    Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:

    -
    rclone link seafile:seafile-tutorial.doc
    +
    $ rclone link seafile:seafile-tutorial.doc
     http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
     

    or if run on a directory you will get:

    -
    rclone link seafile:dir
    +
    $ rclone link seafile:dir
     http://my.seafile.server/d/9ea2455f6f55478bbb0d/

    Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will @@ -52836,15 +57930,22 @@ get the exact same link.

    Compatibility

    It has been actively developed using the seafile docker image -of these versions: - 6.3.4 community edition - 7.0.5 community edition - -7.1.3 community edition - 9.0.10 community edition

    +of these versions:

    +
      +
    • 6.3.4 community edition
    • +
    • 7.0.5 community edition
    • +
    • 7.1.3 community edition
    • +
    • 9.0.10 community edition
    • +

    Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.

    Each new version of rclone is automatically tested against the latest docker image of the seafile community server.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to seafile (seafile).

    --seafile-url

    URL of seafile host to connect to.

    @@ -52925,7 +58026,7 @@ obscure.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to seafile (seafile).

    --seafile-create-library

    Should rclone create a library if it doesn't exist.

    @@ -52956,16 +58057,20 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • +

    SFTP

    SFTP is the Secure (or SSH) File Transfer Protocol.

    The SFTP backend can be used with a number of different providers:

    + +
    • Hetzner Storage Box
    • rsync.net
    +

    SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.

    Paths are specified as remote:path. If the path does not @@ -52982,11 +58087,11 @@ users to OMIT the leading /.

    Note that by default rclone will try to execute shell commands on the server, see shell access considerations.

    -

    Configuration

    +

    Configuration

    Here is an example of making an SFTP configuration. First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -53034,19 +58139,19 @@ y/e/d> y

    This remote is called remote and can now be used like this:

    See all directories in the home directory

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    See all directories in the root directory

    -
    rclone lsd remote:/
    +
    rclone lsd remote:/

    Make a new directory

    -
    rclone mkdir remote:path/to/directory
    +
    rclone mkdir remote:path/to/directory

    List the contents of a directory

    -
    rclone ls remote:path/to/directory
    +
    rclone ls remote:path/to/directory

    Sync /home/local/directory to the remote directory, deleting any excess files in the directory.

    -
    rclone sync --interactive /home/local/directory remote:directory
    +
    rclone sync --interactive /home/local/directory remote:directory

    Mount the remote path /srv/www-data/ to the local path /mnt/www-data

    -
    rclone mount remote:/srv/www-data/ /mnt/www-data
    +
    rclone mount remote:/srv/www-data/ /mnt/www-data

    SSH Authentication

    The SFTP remote supports three authentication methods:

      @@ -53060,11 +58165,11 @@ encrypted files are supported.

      The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line -('' or '') separating lines. i.e.

      -
      key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
      +('' or '') separating lines. I.e.

      +
      key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----

      This will generate it correctly for key_pem for use in the config:

      -
      awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
      +
      awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa

      If you don't specify pass, key_file, or key_pem or ask_password then rclone will attempt to contact an ssh-agent. You can also specify @@ -53089,16 +58194,17 @@ provide the path to the user certificate public key file in key, typically saved as /home/$USER/.ssh/id_rsa.pub. Setting this path in pubkey_file will not work.

      Example:

      -
      [remote]
      -type = sftp
      -host = example.com
      -user = sftpuser
      -key_file = ~/id_rsa
      -pubkey_file = ~/id_rsa-cert.pub
      +
      [remote]
      +type = sftp
      +host = example.com
      +user = sftpuser
      +key_file = ~/id_rsa
      +pubkey_file = ~/id_rsa-cert.pub

      If you concatenate a cert with a private key then you can specify the merged file in both places.

      Note: the cert must come first in the file. e.g.

      -
      cat id_rsa-cert.pub id_rsa > merged_key
      +
      cat id_rsa-cert.pub id_rsa > merged_key

      Host key validation

      By default rclone will not check the server's host key for validation. This can allow an attacker to replace a server with their @@ -53109,14 +58215,15 @@ be turned on by enabling the known_hosts_file option. This can point to the file maintained by OpenSSH or can point to a unique file.

      e.g. using the OpenSSH known_hosts file:

      -
      [remote]
      -type = sftp
      -host = example.com
      -user = sftpuser
      -pass = 
      -known_hosts_file = ~/.ssh/known_hosts
      +
      [remote]
      +type = sftp
      +host = example.com
      +user = sftpuser
      +pass = 
      +known_hosts_file = ~/.ssh/known_hosts

      Alternatively you can create your own known hosts file like this:

      -
      ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts
      +
      ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts

      There are some limitations:

      • rclone will not manage this file for you. If @@ -53128,11 +58235,11 @@ the known_hosts file must be the

        If the host key provided by the server does not match the one in the file (or is missing) then the connection will be aborted and an error returned such as

        -
        NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch
        +
        NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch

        or

        -
        NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key is unknown
        +
        NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key is unknown

        If you see an error such as

        -
        NewFs: couldn't connect SSH: ssh: handshake failed: ssh: no authorities for hostname: example.com:22
        +
        NewFs: couldn't connect SSH: ssh: handshake failed: ssh: no authorities for hostname: example.com:22

        then it is likely the server has presented a CA signed host certificate and you will need to add the appropriate @cert-authority entry.

        @@ -53142,9 +58249,9 @@ certificate and you will need to add the appropriate

        Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, e.g.

        -
        eval `ssh-agent -s` && ssh-add -A
        +
        eval `ssh-agent -s` && ssh-add -A

        And then at the end of the session

        -
        eval `ssh-agent -k`
        +
        eval `ssh-agent -k`

        These commands can be used in scripts of course.

        Shell access

        Some functionality of the SFTP backend relies on remote shell access, @@ -53284,7 +58391,9 @@ remote's PATH), then this will be used instead. If the server shell is PowerShell, probably with a Windows OpenSSH server, rclone will use a built-in shell command (see shell access). If none of the above is applicable, about will fail.

        -

        Standard options

        + + +

        Standard options

        Here are the Standard options specific to sftp (SSH/SFTP).

        --sftp-host

        SSH host to connect to.

        @@ -53479,7 +58588,7 @@ connection for every hash it calculates.

      • Type: SpaceSepList
      • Default:
      -

      Advanced options

      +

      Advanced options

      Here are the Advanced options specific to sftp (SSH/SFTP).

      --sftp-known-hosts-file

      Optional path to known_hosts file.

      @@ -53938,7 +59047,8 @@ as the source and the destination will be the same file.

    • Type: string
    • Required: false
    -

    Limitations

    + +

    Limitations

    On some SFTP servers (e.g. Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. You can either use --sftp-path-override or @@ -53985,7 +59095,7 @@ entered when you started to share on Windows. On smbd, it's the section title in smb.conf (usually in /etc/samba/) file. You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:).

    -

    You can't access to the shared printers from rclone, obviously.

    +

    You can't access the shared printers from rclone, obviously.

    You can't use Anonymous access for logging in. You have to use the guest user with an empty password instead. The rclone client tries to avoid 8.3 names when uploading files by encoding @@ -53994,12 +59104,12 @@ href="https://rclone.org/local/#paths-on-windows">the local backend on Windows can access SMB servers using UNC paths, by \\server\share. This doesn't apply to non-Windows OSes, such as Linux and macOS.

    -

    Configuration

    +

    Configuration

    Here is an example of making a SMB configuration.

    First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process.

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -54069,7 +59179,9 @@ y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
     y/e/d> d
    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to smb (SMB / CIFS).

    --smb-host

    SMB server hostname to connect to.

    @@ -54147,7 +59259,7 @@ KRB5_CONFIG and KRB5CCNAME environment variables.

  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to smb (SMB / CIFS).

    --smb-idle-timeout

    Max time before closing idle connections.

    @@ -54218,6 +59330,7 @@ Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,Rig
  • Type: string
  • Required: false
  • +

    Storj

    Storj is redefining the cloud to support the future of data—sustainably and economically. Storj leverages @@ -54338,17 +59451,20 @@ upload gateway -

    Configuration

    -

    To make a new Storj configuration you need one of the following: * -Access Grant that someone else shared with you. * Configuration +

    To make a new Storj configuration you need one of the following:

    +

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    Setup with access grant

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -54387,7 +59503,7 @@ d) Delete this remote
     y/e/d> y

    Setup with API key and passphrase

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one\?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -54440,7 +59556,9 @@ y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to storj (Storj Decentralized Cloud Storage).

    --storj-provider

    @@ -54522,7 +59640,7 @@ passphrase.
  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to storj (Storj Decentralized Cloud Storage).

    --storj-description

    @@ -54534,6 +59652,7 @@ Cloud Storage).

  • Type: string
  • Required: false
  • +

    Usage

    Paths are specified as remote:bucket (or remote: for the lsf command.) You may put @@ -54542,79 +59661,80 @@ subdirectories in too, e.g. remote:bucket/path/to/dir.

    Create a new bucket

    Use the mkdir command to create new bucket, e.g. bucket.

    -
    rclone mkdir remote:bucket
    +
    rclone mkdir remote:bucket

    List all buckets

    Use the lsf command to list all buckets.

    -
    rclone lsf remote:
    +
    rclone lsf remote:

    Note the colon (:) character at the end of the command line.

    Delete a bucket

    Use the rmdir command to delete an empty bucket.

    -
    rclone rmdir remote:bucket
    +
    rclone rmdir remote:bucket

    Use the purge command to delete a non-empty bucket with all its content.

    -
    rclone purge remote:bucket
    +
    rclone purge remote:bucket

    Upload objects

    Use the copy command to upload an object.

    -
    rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/
    +
    rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/

    The --progress flag is for displaying progress information. Remove it if you don't need this information.

    Use a folder in the local path to upload all its objects.

    -
    rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/
    +
    rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/

    Only modified files will be copied.

    List objects

    Use the ls command to list recursively all objects in a bucket.

    -
    rclone ls remote:bucket
    +
    rclone ls remote:bucket

    Add the folder to the remote path to list recursively all objects in this folder.

    -
    rclone ls remote:bucket/path/to/dir/
    +
rclone ls remote:bucket/path/to/dir/

    Use the lsf command to list non-recursively all objects in a bucket or a folder.

    -
    rclone lsf remote:bucket/path/to/dir/
    +
    rclone lsf remote:bucket/path/to/dir/

    Download objects

    Use the copy command to download an object.

    -
    rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/
    +
    rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/

    The --progress flag is for displaying progress information. Remove it if you don't need this information.

    Use a folder in the remote path to download all its objects.

    -
    rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/
    +
    rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/

    Delete objects

    Use the deletefile command to delete a single object.

    -
    rclone deletefile remote:bucket/path/to/dir/file.ext
    +
    rclone deletefile remote:bucket/path/to/dir/file.ext

Use the delete command to delete all objects in a folder.

    -
    rclone delete remote:bucket/path/to/dir/
    +
    rclone delete remote:bucket/path/to/dir/

    Use the size command to print the total size of objects in a bucket or a folder.

    -
    rclone size remote:bucket/path/to/dir/
    +
    rclone size remote:bucket/path/to/dir/

    Sync two Locations

    Use the sync command to sync the source to the destination, changing the destination only, deleting any excess files.

    -
    rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/
    +
    rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/

    The --progress flag is for displaying progress information. Remove it if you don't need this information.

    Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.
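
For example, a trial run of the sync above might look like this (a sketch; substitute your own paths):

rclone sync --dry-run /home/local/directory/ remote:bucket/path/to/dir/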

    The sync can be done also from Storj to the local file system.

    -
    rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/
    +
    rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/

    Or between two Storj buckets.

    -
    rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/
    +
    rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/

    Or even between another cloud storage and Storj.

    -
    rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
    -

    Limitations

    +
    rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/
    +

    Limitations

    rclone about is not supported by the rclone Storj backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Known issues

    If you get errors like too many open files this usually happens when the default ulimit for system max open files @@ -54636,15 +59756,15 @@ operating system manual.
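
As a sketch, on Linux you can raise the limit for the current shell before invoking rclone (65536 is just an example value):

ulimit -n 65536
rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/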

    SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.

    -

    Configuration

    +

    Configuration

    The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -54695,13 +59815,14 @@ d) Delete this remote
     y/e/d> y

    Note that the config asks for your email and password but doesn't store them, it only uses them to get the initial token.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories (sync folders) in top level of your SugarSync

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your SugarSync folder "Test"

    -
    rclone ls remote:Test
    +
    rclone ls remote:Test

To copy a local directory to a SugarSync folder called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    @@ -54728,7 +59849,9 @@ default.

    However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
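
For example, on the command line (a sketch; the remote name is illustrative):

rclone delete --sugarsync-hard-delete remote:dir

or as an entry in the config file:

[remote]
type = sugarsync
hard_delete = true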

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to sugarsync (Sugarsync).

    --sugarsync-app-id

    Sugarsync App ID.

    @@ -54771,7 +59894,7 @@ files.

  • Type: bool
  • Default: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to sugarsync (Sugarsync).

    --sugarsync-refresh-token

    Sugarsync refresh token.

    @@ -54854,26 +59977,27 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Uloz.to

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    The initial setup for Uloz.to involves filling in the user credentials. rclone config walks you through it.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -54920,13 +60044,14 @@ y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List folders in root level folder:

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your root folder:

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local folder to a Uloz.to folder called backup:

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    User credentials

    The only reliable method is to authenticate the user using username and password. Uloz.to offers an API key as well, but it's reserved for @@ -54996,7 +60121,9 @@ in the remote path. For example, if your remote's root_folder_slug corresponds to /foo/bar, remote:baz/qux will refer to ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to ulozto (Uloz.to).

    --ulozto-app-token

    The application token identifying the app. An app API key can be @@ -55030,7 +60157,7 @@ obscure.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to ulozto (Uloz.to).

    --ulozto-root-folder-slug

    If set, rclone will use this folder as the root folder for all @@ -55072,7 +60199,8 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

Uloz.to file names can't contain the \ character. rclone maps this to and from an identical-looking Unicode equivalent (U+FF3C FULLWIDTH REVERSE SOLIDUS).

    @@ -55088,7 +60216,7 @@ determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote.

    See List of backends that do not support rclone about and rclone about

    +href="https://rclone.org/commands/rclone_about/">rclone about.

    Uptobox

    This is a Backend for Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional cloud storage provider and @@ -55096,15 +60224,15 @@ therefore not suitable for long term storage.

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    To configure an Uptobox backend you'll need your personal api token. You'll find it in your account -settings

    +settings.

    Here is an example of how to make a remote called remote with the default setup. First run:

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    Current remotes:
    +
    Current remotes:
     
     Name                 Type
     ====                 ====
    @@ -55145,14 +60273,15 @@ api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
     y) Yes this is OK (default)
     e) Edit this remote
     d) Delete this remote
    -y/e/d> 
    -

    Once configured you can then use rclone like this,

    +y/e/d>
    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your Uptobox

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your Uptobox

    -
    rclone ls remote:
    +
    rclone ls remote:

    To copy a local directory to an Uptobox directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    Uptobox supports neither modified times nor checksums. All timestamps @@ -55187,7 +60316,9 @@ replaced:

    Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to uptobox (Uptobox).

    --uptobox-access-token

    Your access token.

    @@ -55199,7 +60330,7 @@ can't be used in XML strings.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to uptobox (Uptobox).

    --uptobox-private

    Set to make uploaded files private

    @@ -55231,7 +60362,8 @@ Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    Uptobox will delete inactive files that have not been accessed in 60 days.

    rclone about is not supported by this backend an @@ -55267,12 +60399,12 @@ named backup with the remotes segments. Invoking rclone mkdir backup:../desktop is exactly the same as invoking rclone mkdir mydrive:private/backup/../desktop.

    -

    Configuration

    +

    Configuration

    Here is an example of how to make a union called remote for local folders. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -55325,16 +60457,16 @@ c) Copy remote
     s) Set configuration password
     q) Quit config
     e/n/d/r/c/s/q> q
    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this:

    List directories in top level in remote1:dir1, remote2:dir2 and remote3:dir3

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in remote1:dir1, remote2:dir2 and remote3:dir3

    -
    rclone ls remote:
    +
    rclone ls remote:

    Copy another local directory to the union directory called source, which will be placed into remote3:dir3

    -
    rclone copy C:\source remote:source
    +
    rclone copy C:\source remote:source

    Behavior / Policies

The behavior of the union backend is inspired by trapexit/mergerfs. All @@ -55546,12 +60678,13 @@ upstream.

    Writeback

    The tag :writeback on an upstream remote can be used to make a simple cache system like this:

    -
    [union]
    -type = union
    -action_policy = all
    -create_policy = all
    -search_policy = ff
    -upstreams = /local:writeback remote:dir
    +
    [union]
    +type = union
    +action_policy = all
    +create_policy = all
    +search_policy = ff
    +upstreams = /local:writeback remote:dir

    When files are opened for read, if the file is in remote:dir but not /local then rclone will copy the file entirely into /local before returning a @@ -55566,7 +60699,9 @@ there should only be one :writeback tag.

    Rclone does not manage the :writeback remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to union (Union merges the contents of several upstream fs).

    --union-upstreams

    @@ -55617,7 +60752,7 @@ dir" upstreamb:', etc.

  • Type: int
  • Default: 120
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to union (Union merges the contents of several upstream fs).

    --union-min-free-space

    @@ -55640,24 +60775,25 @@ considered for use in lfs or eplfs policies.

  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    Any metadata supported by the underlying remote is read and written.

    See the metadata docs for more info.

    +

    WebDAV

    Paths are specified as remote:path

    Paths may be as deep as required, e.g. remote:directory/subdirectory.

    -

    Configuration

    +

    Configuration

    To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.

    Here is an example of how to make a remote called remote. First run:

    -
     rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     q) Quit config
    @@ -55719,13 +60855,14 @@ y) Yes this is OK
     e) Edit this remote
     d) Delete this remote
     y/e/d> y
    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    List directories in top level of your WebDAV

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    List all the files in your WebDAV

    -
    rclone ls remote:
    +
    rclone ls remote:

To copy a local directory to a WebDAV directory called backup

    -
    rclone copy /home/source remote:backup
    +
    rclone copy /home/source remote:backup

    Modification times and hashes

    Plain WebDAV does not support modified times. However when used with @@ -55736,7 +60873,9 @@ Fastmail Files, ownCloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of ownCloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to webdav (WebDAV).

    --webdav-url

    URL of http host to connect to.

    @@ -55826,7 +60965,7 @@ obscure.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to webdav (WebDAV).

    --webdav-bearer-token-command

    Command to run to get a bearer token.

    @@ -55946,6 +61085,7 @@ from the webdav server then you can try this option.

  • Type: string
  • Required: false
  • +

    Provider notes

    See below for notes on specific providers.

    Fastmail Files

    @@ -56004,12 +61144,13 @@ and use your normal account email and password for user and pass. If you have 2FA enabled, you have to generate an app password. Set the vendor to sharepoint.

    Your config file should look like this:

    -
    [sharepoint]
    -type = webdav
    -url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
    -vendor = sharepoint
    -user = YourEmailAddress
    -pass = encryptedpassword
    +
    [sharepoint]
    +type = webdav
    +url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
    +vendor = sharepoint
    +user = YourEmailAddress
    +pass = encryptedpassword

    Sharepoint with NTLM Authentication

    Use this option in case your (hosted) Sharepoint is not tied to @@ -56017,20 +61158,23 @@ OneDrive accounts and uses NTLM authentication.

    To get the url configuration, similarly to the above, first navigate to the desired directory in your browser to get the URL, then strip everything after the name of the opened directory.

    -

    Example: If the URL is: -https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx

    -

    The configuration to use would be: -https://example.sharepoint.com/sites/12345/Documents

    +

    Example: If the URL is: https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx

    +

    The configuration to use would be: https://example.sharepoint.com/sites/12345/Documents

    Set the vendor to sharepoint-ntlm.

NTLM uses a domain and user name combination for authentication; set user to DOMAIN\username.

    Your config file should look like this:

    -
    [sharepoint]
    -type = webdav
    -url = https://[YOUR-DOMAIN]/some-path-to/Documents
    -vendor = sharepoint-ntlm
    -user = DOMAIN\user
    -pass = encryptedpassword
    +
    [sharepoint]
    +type = webdav
    +url = https://[YOUR-DOMAIN]/some-path-to/Documents
    +vendor = sharepoint-ntlm
    +user = DOMAIN\user
    +pass = encryptedpassword

    Required Flags for SharePoint

    As SharePoint does some special things with uploaded documents, you @@ -56040,7 +61184,7 @@ if a file has been changed since the upload / which file is newer.

    .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents:

    -
    --ignore-size --ignore-checksum --update
    +
    --ignore-size --ignore-checksum --update

    Rclone

    Use this option if you are hosting remotes over WebDAV provided by rclone. Read rclone serve @@ -56060,13 +61204,14 @@ access tokens.

    username or password, instead enter your Macaroon as the bearer_token.

    The config will end up looking something like this.

    -
    [dcache]
    -type = webdav
    -url = https://dcache...
    -vendor = other
    -user =
    -pass =
    -bearer_token = your-macaroon
    +
    [dcache]
    +type = webdav
    +url = https://dcache...
    +vendor = other
    +user =
    +pass =
    +bearer_token = your-macaroon

    There is a script that obtains a Macaroon from a dCache WebDAV endpoint, and creates an @@ -56086,7 +61231,7 @@ installed and configured, an access token is obtained by running the oidc-token command. The following example shows a (shortened) access token obtained from the XDC OIDC Provider.

    -
    paul@celebrimbor:~$ oidc-token XDC
    +
    paul@celebrimbor:~$ oidc-token XDC
     eyJraWQ[...]QFXDt0
     paul@celebrimbor:~$

    Note Before the oidc-token command will @@ -56106,19 +61251,20 @@ the advanced config and enter the command to get a bearer token (e.g.,

    The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.

    -
    [dcache]
    -type = webdav
    -url = https://dcache.example.org/
    -vendor = other
    -bearer_token_command = oidc-token XDC
    +
    [dcache]
    +type = webdav
    +url = https://dcache.example.org/
    +vendor = other
    +bearer_token_command = oidc-token XDC

    Yandex Disk

    Yandex Disk is a cloud storage solution created by Yandex.

    -

    Configuration

    +

    Configuration

    Here is an example of making a yandex configuration. First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     n/s> n
    @@ -56158,23 +61304,24 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> y

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and it may require you to unblock it temporarily if you are running a host firewall.
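
As a sketch, on a Linux machine running ufw you could temporarily allow the port and remove the rule once the token has been collected (adapt this to whatever firewall you use):

sudo ufw allow 53682/tcp
rclone config
sudo ufw delete allow 53682/tcp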

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    See top level directories

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Make a new directory

    -
    rclone mkdir remote:directory
    +
    rclone mkdir remote:directory

    List the contents of a directory

    -
    rclone ls remote:directory
    +
    rclone ls remote:directory

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    -
    rclone sync --interactive /home/local/directory remote:directory
    +
    rclone sync --interactive /home/local/directory remote:directory

    Yandex paths may be as deep as required, e.g. remote:directory/subdirectory.

    Modification times and @@ -56200,7 +61347,9 @@ restricted characters set are replaced.

    Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to yandex (Yandex Disk).

    --yandex-client-id

    OAuth Client Id.

    @@ -56222,7 +61371,7 @@ can't be used in JSON strings.

  • Type: string
  • Required: false
  • -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to yandex (Yandex Disk).

    --yandex-token

    OAuth Access Token as a JSON blob.

    @@ -56304,7 +61453,8 @@ client. May help with upload performance.

  • Type: string
  • Required: false
  • -

    Limitations

    + +

    Limitations

    When uploading very large files (bigger than about 5 GiB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) @@ -56319,16 +61469,16 @@ a timeout of 2 * 30 = 60m, that is
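--timeout 60m.

For example, a sketch of copying a single file of about 30 GiB with the increased timeout (paths are illustrative):

rclone copy --timeout 60m /path/to/bigfile.bin remote:directory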

    Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. Token generation will work without a mail account, but Rclone won't be able to complete any actions.

    -
    [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
    +
    [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.

    Zoho Workdrive

    Zoho WorkDrive is a cloud storage solution created by Zoho.

    -

    Configuration

    +

    Configuration

    Here is an example of making a zoho configuration. First run

    -
    rclone config
    +
    rclone config

    This will guide you through an interactive setup process:

    -
    No remotes found, make a new one?
    +
    No remotes found, make a new one?
     n) New remote
     s) Set configuration password
     n/s> n
    @@ -56386,24 +61536,25 @@ e) Edit this remote
     d) Delete this remote
     y/e/d> 

    See the remote setup -docs for how to set it up on a machine with no Internet browser -available.

    +docs for how to set it up on a machine without an internet-connected +web browser available.

    Rclone runs a webserver on your local computer to collect the authorization token from Zoho Workdrive. This is only from the moment your browser is opened until the token is returned. The webserver runs on http://127.0.0.1:53682/. If local port 53682 is protected by a firewall you may need to temporarily unblock the firewall to complete authorization.

    -

    Once configured you can then use rclone like this,

    +

    Once configured you can then use rclone like this +(replace remote with the name you gave your remote):

    See top level directories

    -
    rclone lsd remote:
    +
    rclone lsd remote:

    Make a new directory

    -
    rclone mkdir remote:directory
    +
    rclone mkdir remote:directory

    List the contents of a directory

    -
    rclone ls remote:directory
    +
    rclone ls remote:directory

    Sync /home/local/directory to the remote path, deleting any excess files in the path.

    -
    rclone sync --interactive /home/local/directory remote:directory
    +
    rclone sync --interactive /home/local/directory remote:directory

Zoho paths may be as deep as required, e.g. remote:directory/subdirectory.

    Modification times and @@ -56419,7 +61570,9 @@ characters

    Only control characters and invalid UTF-8 are replaced. In addition most Unicode full-width characters are not supported at all and will be removed from filenames during upload.

    -

    Standard options

    + + +

    Standard options

    Here are the Standard options specific to zoho (Zoho).

    --zoho-client-id

    OAuth Client Id.

    @@ -56480,7 +61633,7 @@ browser.

    -

    Advanced options

    +

    Advanced options

    Here are the Advanced options specific to zoho (Zoho).

    --zoho-token

    OAuth Access Token as a JSON blob.

    @@ -56552,6 +61705,7 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • +

    Setting up your own client_id

    For Zoho we advise you to set up your own client_id. To do so you @@ -56569,15 +61723,15 @@ enable it in other regions.

    Local Filesystem

    Local paths are specified as normal filesystem paths, e.g. /path/to/wherever, so

    -
    rclone sync --interactive /home/source /tmp/destination
    +
    rclone sync --interactive /home/source /tmp/destination

    Will sync /home/source to /tmp/destination.

    -

    Configuration

    +

    Configuration

For consistency's sake one can also configure a remote of type local in the config file, and access the local filesystem using rclone remote paths, e.g. remote:path/to/wherever, but it is probably easier not to.
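
Should you want to anyway, a minimal sketch of such a config entry (the remote name is illustrative):

[mylocal]
type = local

after which something like rclone ls mylocal:/path/to/wherever works as expected.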

    -

    Modification times

    +

    Modification times

Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on macOS.

    @@ -56594,7 +61748,7 @@ will be replaced with a quoted representation of the invalid bytes. The name gro\xdf will be transferred as gro‛DF. rclone will emit a debug message in this case (use -v to see), e.g.

    -
    Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
    +
    Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"

    Restricted characters

    With the local backend, restrictions on the characters that are usable in file or directory names depend on the operating system. To @@ -56903,13 +62057,15 @@ drivers like EncFS. To disable UNC conversion globally, add this to your .rclone.conf file:

    -
    [local]
    -nounc = true
    +
    [local]
    +nounc = true

    If you want to selectively disable UNC, you can add it to a separate entry like this:

    -
    [nounc]
    -type = local
    -nounc = true
    +
    [nounc]
    +type = local
    +nounc = true

    And use rclone like this:

    rclone copy c:\src nounc:z:\dst

    This will use UNC paths on c:\src but not on @@ -56925,7 +62081,7 @@ directory. Note that this flag is incompatible with --links / -l.

    This flag applies to all commands.

    For example, supposing you have a directory structure like this

    -
    $ tree /tmp/a
    +
    $ tree /tmp/a
     /tmp/a
     ├── b -> ../b
     ├── expected -> ../expected
    @@ -56934,11 +62090,11 @@ directory. Note that this flag is incompatible with --links
         └── three

    Then you can see the difference with and without the flag like this

    -
    $ rclone ls /tmp/a
    +
    $ rclone ls /tmp/a
             6 one
             6 two/three

    and

    -
    $ rclone -L ls /tmp/a
    +
    $ rclone -L ls /tmp/a
          4174 expected
             6 one
             6 two/three
    @@ -56954,32 +62110,32 @@ local storage, and store them as text files, with a
     example).

    This flag applies to all commands.

    For example, supposing you have a directory structure like this

    -
    $ tree /tmp/a
    +
    $ tree /tmp/a
     /tmp/a
     ├── file1 -> ./file4
     └── file2 -> /home/user/file3

    Copying the entire directory with '-l'

    -
    $ rclone copy -l /tmp/a/ remote:/tmp/a/
    +
    rclone copy -l /tmp/a/ remote:/tmp/a/

    The remote files are created with a .rclonelink suffix

    -
    $ rclone ls remote:/tmp/a
    +
    $ rclone ls remote:/tmp/a
            5 file1.rclonelink
           14 file2.rclonelink

    The remote files will contain the target of the symbolic links

    -
    $ rclone cat remote:/tmp/a/file1.rclonelink
    +
    $ rclone cat remote:/tmp/a/file1.rclonelink
     ./file4
     
     $ rclone cat remote:/tmp/a/file2.rclonelink
     /home/user/file3

    Copying them back with '-l'

    -
    $ rclone copy -l remote:/tmp/a/ /tmp/b/
    +
    $ rclone copy -l remote:/tmp/a/ /tmp/b/
     
     $ tree /tmp/b
     /tmp/b
     ├── file1 -> ./file4
     └── file2 -> /home/user/file3

    However, if copied back without '-l'

    -
    $ rclone copyto remote:/tmp/a/ /tmp/b/
    +
    $ rclone copyto remote:/tmp/a/ /tmp/b/
     
     $ tree /tmp/b
     /tmp/b
    @@ -56987,7 +62143,7 @@ $ tree /tmp/b
     └── file2.rclonelink

    If you want to copy a single file with -l then you must use the .rclonelink suffix.

    -
    $ rclone copy -l remote:/tmp/a/file1.rclonelink /tmp/c
    +
    $ rclone copy -l remote:/tmp/a/file1.rclonelink /tmp/c
     
     $ tree /tmp/c
     /tmp/c
    @@ -57004,7 +62160,7 @@ filesystems with --one-file-system
     this tells rclone to stay in the filesystem specified by the root and
     not to recurse into different file systems.

    For example if you have a directory hierarchy like this

    -
    root
    +
    root
     ├── disk1     - disk1 mounted on the root
     │   └── file3 - stored on disk1
     ├── disk2     - disk2 mounted on the root
    @@ -57012,11 +62168,11 @@ not to recurse into different file systems.

    ├── file1 - stored on the root disk └── file2 - stored on the root disk

    Using rclone --one-file-system copy root remote: will -only copy file1 and file2. Eg

    -
    $ rclone -q --one-file-system ls root
    +only copy file1 and file2. E.g.

    +
    $ rclone -q --one-file-system ls root
             0 file1
             0 file2
    -
    $ rclone -q ls root
    +
    $ rclone -q ls root
             0 disk1/file3
             0 disk2/file4
             0 file1
    @@ -57027,7 +62183,9 @@ mount to the same device as being on the same filesystem.

    NB This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored.

    -

    Advanced options

    + + +

    Advanced options

    Here are the Advanced options specific to local (Local Disk).

    --local-nounc

    Disable UNC (long path names) conversion on Windows.

    @@ -57075,6 +62233,18 @@ points, as you explicitly acknowledge that they should be skipped.

  • Type: bool
  • Default: false
  • +

    --skip-specials

    +

    Don't warn about skipped pipes, sockets and device objects.

    +

    This flag disables warning messages on skipped pipes, sockets and +device objects, as you explicitly acknowledge that they should be +skipped.

    +

    Properties:

    +
      +
    • Config: skip_specials
    • +
    • Env Var: RCLONE_LOCAL_SKIP_SPECIALS
    • +
    • Type: bool
    • +
    • Default: false
    • +
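
For example, a sketch of syncing a tree that contains sockets or device nodes without the warnings (paths are illustrative):

rclone sync --skip-specials /var/run remote:backup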

    Assume the Stat size of links is zero (and read them instead) (deprecated).

    @@ -57317,7 +62487,7 @@ section in the overview for more info.

  • Type: string
  • Required: false
  • -

    Metadata

    +

    Metadata

    Depending on which OS is in use the local backend may return only some of the system metadata. Setting system metadata is supported on all OSes but setting user metadata is only supported on linux, freebsd, @@ -57402,8 +62572,8 @@ backend.

    for more info.

    Backend commands

    Here are the commands specific to the local backend.

    -

    Run them with

    -
    rclone backend COMMAND remote:
    +

    Run them with:

    +
    rclone backend COMMAND remote:

    The help below will explain what arguments each command takes.

    See the backend command @@ -57411,17 +62581,354 @@ for more info on how to pass options and arguments.

    These can be run on a running backend using the rc command backend/command.

    noop

    -

    A null operation for testing backend commands

    -
    rclone backend noop remote: [options] [<arguments>+]
    +

    A null operation for testing backend commands.

    +
    rclone backend noop remote: [options] [<arguments>+]

    This is a test command which has some options you can try to change the output.

    Options:

      -
    • "echo": echo the input arguments
    • -
    • "error": return an error based on option value
    • +
    • "echo": Echo the input arguments.
    • +
    • "error": Return an error based on option value.
    +

    Changelog

    +

    v1.72.0 - 2025-11-21

    +

    See +commits

    +
      +
    • New backends +
        +
      • Archive backend to read archives on cloud +storage. (Nick Craig-Wood)
      • +
    • +
    • New S3 providers +
    • +
    • New commands +
    • +
    • New Features +
        +
• backends: many backends have had a paged listing +(ListP) interface added +
          +
        • this enables progress when listing large directories and reduced +memory usage
        • +
      • +
      • build +
          +
        • Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 +(dependabot[bot])
        • +
        • Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, +reddaisyy, dulanting, Oleksandr Redko)
        • +
        • Update all dependencies (Nick Craig-Wood)
        • +
        • Enable support for aix/ppc64 (Lakshmi-Surekha)
        • +
      • +
      • check: Improved reporting of differences in sizes and contents +(albertony)
      • +
      • copyurl: Added --url to read URLs from CSV file +(S-Pegg1, dougal)
      • +
      • docs: +
          +
        • markdown linting (albertony)
        • +
        • fixes (albertony, Andrew Gunnerson, anon-pradip, Claudius Ellsel, +dougal, iTrooz, Jean-Christophe Cura, Joseph Brownlee, kapitainsky, Matt +LaPaglia, n4n5, Nick Craig-Wood, nielash, SublimePeace, Ted Robertson, +vastonus)
        • +
      • +
      • fs: remove unnecessary Seek call on log file (Aneesh Agrawal)
      • +
      • hashsum: Improved output format when listing algorithms +(albertony)
      • +
      • lib/http: Cleanup indentation and other whitespace in http serve +template (albertony)
      • +
      • lsf: Add support for unix and unixnano +time formats (Motte)
      • +
      • oauthutil: Improved debug logs from token refresh (albertony)
      • +
      • rc +
          +
        • Add job/batch for +sending batches of rc commands to run concurrently (Nick +Craig-Wood)
        • +
        • Add runningIds and finishedIds to job/list (n4n5)
        • +
        • Add osVersion, osKernel and +osArch to core/version (Nick +Craig-Wood)
        • +
        • Make sure fatal errors run via the rc don't crash rclone (Nick +Craig-Wood)
        • +
        • Add executeId to job statuses in job/list (Nikolay +Kiryanov)
        • +
• config/unlock: rename parameter to +configPassword, accepting the old name as well (Nick Craig-Wood)
        • +
      • +
      • serve http: Download folders as zip (dougal)
      • +
    • +
    • Bug Fixes +
        +
      • build +
          +
        • Fix tls: failed to verify certificate: x509: negative serial number +(Nick Craig-Wood)
        • +
      • +
      • march +
          +
        • Fix --no-traverse being very slow (Nick +Craig-Wood)
        • +
      • +
      • serve s3: Fix log output to remove the EXTRA messages (iTrooz)
      • +
    • +
    • Mount +
        +
      • Windows: improve error message on missing WinFSP (divinity76)
      • +
    • +
    • Local +
        +
      • Add --skip-specials to ignore special files (Adam +Dinwoodie)
      • +
    • +
    • Azure Blob +
        +
      • Add ListP interface (dougal)
      • +
    • +
    • Azurefiles +
        +
      • Add ListP interface (Nick Craig-Wood)
      • +
    • +
    • B2 +
        +
      • Add ListP interface (dougal)
      • +
      • Add Server-Side encryption support (fries1234)
      • +
      • Fix "expected a FileSseMode but found: ''" (dougal)
      • +
      • Allow individual old versions to be deleted with +--b2-versions (dougal)
      • +
    • +
    • Box +
        +
      • Add ListP interface (Nick Craig-Wood)
      • +
      • Allow configuration with config file contents (Dominik Sander)
      • +
    • +
    • Compress +
        +
      • Add zstd compression (Alex)
      • +
    • +
    • Drive +
        +
      • Add ListP interface (Nick Craig-Wood)
      • +
    • +
    • Dropbox +
        +
      • Add ListP interface (Nick Craig-Wood)
      • +
      • Fix error moving just created objects (Nick Craig-Wood)
      • +
    • +
    • FTP +
        +
      • Fix SOCKS proxy support (dougal)
      • +
      • Fix transfers from servers that return 250 ok messages +(jijamik)
      • +
    • +
    • Google Cloud Storage +
        +
      • Add ListP interface (dougal)
      • +
      • Fix --gcs-storage-class to work with server side copy +for objects (Riaz Arbi)
      • +
    • +
    • HTTP +
        +
      • Add basic metadata and provide it via serve (Oleg Kunitsyn)
      • +
    • +
    • Jottacloud +
        +
      • Add support for Let's Go Cloud (from MediaMarkt) as a whitelabel +service (albertony)
      • +
      • Add support for MediaMarkt Cloud as a whitelabel service +(albertony)
      • +
      • Added support for traditional oauth authentication also for the main +service (albertony)
      • +
      • Abort attempts to run unsupported rclone authorize command +(albertony)
      • +
      • Improved token refresh handling (albertony)
      • +
      • Fix legacy authentication (albertony)
      • +
      • Fix authentication for whitelabel services from Elkjøp subsidiaries +(albertony)
      • +
    • +
    • Mega +
        +
      • Implement 2FA login (iTrooz)
      • +
    • +
    • Memory +
        +
      • Add ListP interface (dougal)
      • +
    • +
    • Onedrive +
        +
      • Add ListP interface (Nick Craig-Wood)
      • +
    • +
    • Oracle Object Storage +
        +
      • Add ListP interface (dougal)
      • +
    • +
    • Pcloud +
        +
      • Add ListP interface (Nick Craig-Wood)
      • +
    • +
    • Proton Drive +
        +
      • Automated 2FA login with OTP secret key (Microscotch)
      • +
    • +
    • S3 +
        +
      • Make it easier to add new S3 providers (dougal)
      • +
      • Add --s3-use-data-integrity-protections quirk to fix +BadDigest error in Alibaba, Tencent (hunshcn)
      • +
      • Add support for --upload-header, If-Match +and If-None-Match (Sean Turner)
      • +
      • Fix single file copying behavior with low permission (hunshcn)
      • +
    • +
    • SFTP +
        +
      • Fix zombie SSH processes with --sftp-ssh (Copilot)
      • +
    • +
    • Smb +
        +
      • Optimize smb mount performance by avoiding stat checks during +initialization (Sudipto Baral)
      • +
    • +
    • Swift +
        +
      • Add ListP interface (dougal)
      • +
• If storage_policy isn't set, use the root container's policy (Andrew +Ruthven)
      • +
      • Report disk usage in segment containers (Andrew Ruthven)
      • +
    • +
    • Ulozto +
        +
      • Implement the About functionality (Lukas Krejci)
      • +
      • Fix downloads returning HTML error page (aliaj1)
      • +
    • +
    • WebDAV +
        +
      • Optimize bearer token fetching with singleflight (hunshcn)
      • +
      • Add ListP interface (Nick Craig-Wood)
      • +
      • Use SpaceSepList to parse bearer token command (hunshcn)
      • +
      • Add Access-Control-Max-Age header for CORS preflight +caching (viocha)
      • +
      • Fix out of memory with sharepoint-ntlm when uploading large file +(Nick Craig-Wood)
      • +
    • +
    +

    v1.71.2 - 2025-10-20

    +

    See +commits

    +
      +
    • Bug Fixes +
        +
      • build +
          +
        • update Go to 1.25.3
        • +
        • Update Docker image Alpine version to fix CVE-2025-9230
        • +
      • +
      • bisync: Fix race when CaptureOutput is used concurrently (Nick +Craig-Wood)
      • +
      • doc fixes (albertony, dougal, iTrooz, Matt LaPaglia, Nick +Craig-Wood)
      • +
      • index: Add missing providers (dougal)
      • +
• serve http: Fix logging URL on start (dougal)
      • +
    • +
    • Azurefiles +
        +
      • Fix server side copy not waiting for completion (Vikas +Bhansali)
      • +
    • +
    • B2 +
        +
      • Fix 1TB+ uploads (dougal)
      • +
    • +
    • Google Cloud Storage +
        +
      • Add region us-east5 (Dulani Woods)
      • +
    • +
    • Mega +
        +
      • Fix 402 payment required errors (Nick Craig-Wood)
      • +
    • +
    • Pikpak +
        +
      • Fix unnecessary retries by using URL expire parameter (Youfu +Zhang)
      • +
    • +
    +

    v1.71.1 - 2025-09-24

    +

    See +commits

    +
      +
    • Bug Fixes +
        +
      • bisync: Fix error handling for renamed conflicts (nielash)
      • +
      • march: Fix deadlock when using --fast-list on syncs (Nick +Craig-Wood)
      • +
      • operations: Fix partial name collisions for non --inplace copies +(Nick Craig-Wood)
      • +
      • pacer: Fix deadlock with --max-connections (Nick Craig-Wood)
      • +
      • doc fixes (albertony, anon-pradip, Claudius Ellsel, dougal, +Jean-Christophe Cura, Nick Craig-Wood, nielash)
      • +
    • +
    • Mount +
        +
      • Do not log successful unmount as an error (Tilman Vogel)
      • +
    • +
    • VFS +
        +
      • Fix SIGHUP killing serve instead of flushing directory caches +(dougal)
      • +
    • +
    • Local +
        +
      • Fix rmdir "Access is denied" on windows (nielash)
      • +
    • +
    • Box +
        +
      • Fix about after change in API return (Nick Craig-Wood)
      • +
    • +
    • Combine +
        +
      • Propagate SlowHash feature (skbeh)
      • +
    • +
    • Drive +
        +
      • Update making your own client ID instructions (Ed Craig-Wood)
      • +
    • +
    • Internet Archive +
        +
      • Fix server side copy files with spaces (Nick Craig-Wood)
      • +
    • +

    v1.71.0 - 2025-08-22

    See @@ -68707,7 +74214,7 @@ installations

  • Project started
  • Bugs and Limitations

    -

    Limitations

    +

    Limitations

    Directory timestamps aren't preserved on some backends

    As of v1.66, rclone supports syncing directory modtimes, @@ -68773,8 +74280,7 @@ href="https://rclone.org/docs/#configure">config help docs.

    href="https://rclone.org/docs/#backend-path-to-dir">on the fly remotes, you can create an empty config file to get rid of this notice, for example:

    -
    rclone config touch
    +
    rclone config touch

    Can rclone sync directly from drive to s3

    Rclone can sync between two remote cloud storage systems just @@ -68783,21 +74289,18 @@ fine.

    the node running rclone would need to have lots of bandwidth.

    The syncs would be incremental (on a file by file basis).

    e.g.

    -
    rclone sync --interactive drive:Folder s3:bucket
    +
    rclone sync --interactive drive:Folder s3:bucket

    Using rclone from multiple locations at the same time

You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, e.g.

    -
    Server A> rclone sync --interactive /tmp/whatever remote:ServerA
    -Server B> rclone sync --interactive /tmp/whatever remote:ServerB
    +
    Server A> rclone sync --interactive /tmp/whatever remote:ServerA
    +Server B> rclone sync --interactive /tmp/whatever remote:ServerB

    If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, e.g.

    -
    Server A> rclone copy /tmp/whatever remote:Backup
    -Server B> rclone copy /tmp/whatever remote:Backup
    +
    Server A> rclone copy /tmp/whatever remote:Backup
    +Server B> rclone copy /tmp/whatever remote:Backup

    The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (e.g. Drive) may make duplicates.

    @@ -68841,25 +74344,22 @@ applications may use http_proxy but another one HTTP_PROXY. The Go libraries used by rclone will try both variations, but you may wish to set all possibilities. So, on Linux, you may end up with code similar to

    -
    export http_proxy=http://proxyserver:12345
    -export https_proxy=$http_proxy
    -export HTTP_PROXY=$http_proxy
    -export HTTPS_PROXY=$http_proxy
    +
    export http_proxy=http://proxyserver:12345
    +export https_proxy=$http_proxy
    +export HTTP_PROXY=$http_proxy
    +export HTTPS_PROXY=$http_proxy

    Note: If the proxy server requires a username and password, then use

    -
    export http_proxy=http://username:password@proxyserver:12345
    -export https_proxy=$http_proxy
    -export HTTP_PROXY=$http_proxy
    -export HTTPS_PROXY=$http_proxy
    +
    export http_proxy=http://username:password@proxyserver:12345
    +export https_proxy=$http_proxy
    +export HTTP_PROXY=$http_proxy
    +export HTTPS_PROXY=$http_proxy

The NO_PROXY environment variable allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com".

    e.g.

    -
    export no_proxy=localhost,127.0.0.0/8,my.host.name
    -export NO_PROXY=$no_proxy
    +
    export no_proxy=localhost,127.0.0.0/8,my.host.name
    +export NO_PROXY=$no_proxy

    Note that the FTP backend does not support ftp_proxy yet.

    You can use the command line argument --http-proxy to @@ -68882,17 +74382,15 @@ occur on outdated systems, where rclone can't verify the server with the SSL root certificates.

    Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.

    -
    "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
    -"/etc/pki/tls/certs/ca-bundle.crt",   // Fedora/RHEL
    -"/etc/ssl/ca-bundle.pem",             // OpenSUSE
    -"/etc/pki/tls/cacert.pem",            // OpenELEC
    +
    "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
    +"/etc/pki/tls/certs/ca-bundle.crt",   // Fedora/RHEL
    +"/etc/ssl/ca-bundle.pem",             // OpenSUSE
    +"/etc/pki/tls/cacert.pem",            // OpenELEC

So doing something like this should fix the problem. It also sets the time, which is important for SSL to work properly.

    -
    mkdir -p /etc/ssl/certs/
    -curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
    -ntpclient -s -h pool.ntp.org
    +
    mkdir -p /etc/ssl/certs/
    +curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
    +ntpclient -s -h pool.ntp.org

    The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned in the x509 package, provide an @@ -68900,16 +74398,14 @@ additional way to provide the SSL root certificates on Unix systems other than macOS.

    Note that you may need to add the --insecure option to the curl command line if it doesn't work without.

    -
    curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
    +
    curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

    On macOS, you can install ca-certificates with Homebrew, and specify the SSL root certificates with the --ca-cert flag.

    -
    brew install ca-certificates
    -find $(brew --prefix)/etc/ca-certificates -type f
    +
    brew install ca-certificates
    +find $(brew --prefix)/etc/ca-certificates -type f

    Rclone gives Failed to load config file: function not implemented error

    @@ -68928,10 +74424,10 @@ formats

    some.domain.com no such host

    This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g.

    -
    # both should print a long list of possible IP addresses
    -dig www.googleapis.com          # resolve using your default DNS
    -dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
    +
    # both should print a long list of possible IP addresses
    +dig www.googleapis.com          # resolve using your default DNS
    +dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server

    If you are using systemd-resolved (default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which causes not all domains to be resolved properly.

    @@ -68954,7 +74450,8 @@ yyyy/mm/dd hh:mm:ss Fatal error: config failed to refresh token: failed to start with opening the port on the host.

A simple solution may be restarting the Host Network Service with, e.g., PowerShell

    -
    Restart-Service hns
    +
    Restart-Service hns

    The total size reported in the stats for a sync is wrong and keeps @@ -69039,9 +74536,9 @@ THE SOFTWARE.

    class="email">nick@craig-wood.com

    Contributors

    -


    +

    Contact the rclone project

    Forum

    Forum for questions and general discussion:

    Business support

    For business support or sponsorship enquiries please see:

    GitHub repository

    The project's repository is located at:

    There you can file bug reports or contribute with pull requests.

    Twitter

    @@ -71192,7 +76790,8 @@ data-cites="njcw">@njcw

    Or if all else fails or you want to ask something private or confidential

    Please don't email requests for help to this address - those are better directed to the forum unless you'd like to sign up for business diff --git a/MANUAL.md b/MANUAL.md index 997c15c0d..06bc59a49 100644 --- a/MANUAL.md +++ b/MANUAL.md @@ -1,6 +1,6 @@ % rclone(1) User Manual % Nick Craig-Wood -% Aug 22, 2025 +% Nov 21, 2025 # NAME @@ -15,6 +15,7 @@ Usage: Available commands: about Get quota information from the remote. + archive Perform an action on an archive. authorize Remote authorization. backend Run a backend-specific command. bisync Perform bidirectional synchronization between two paths. @@ -76,6 +77,7 @@ Use "rclone help backends" for a list of supported services. ``` # Rclone syncs your files to cloud storage + rclone logo - [About rclone](#about) @@ -148,21 +150,24 @@ Rclone helps you: - Mirror cloud data to other cloud services or locally - Migrate data to the cloud, or between cloud storage vendors - Mount multiple, encrypted, cached or diverse cloud storage as a disk -- Analyse and account for data held on cloud storage using [lsf](https://rclone.org/commands/rclone_lsf/), [ljson](https://rclone.org/commands/rclone_lsjson/), [size](https://rclone.org/commands/rclone_size/), [ncdu](https://rclone.org/commands/rclone_ncdu/) -- [Union](https://rclone.org/union/) file systems together to present multiple local and/or cloud file systems as one +- Analyse and account for data held on cloud storage using [lsf](https://rclone.org/commands/rclone_lsf/), + [ljson](https://rclone.org/commands/rclone_lsjson/), [size](https://rclone.org/commands/rclone_size/), [ncdu](https://rclone.org/commands/rclone_ncdu/) +- [Union](https://rclone.org/union/) file systems together to present multiple local and/or cloud + file systems as one ## Features {#features} - Transfers - - MD5, SHA1 hashes are checked at all times for file integrity - - Timestamps are preserved on files - - Operations can be restarted at any time - - Can be to and from network, e.g. two different cloud providers - - Can use multi-threaded downloads to local disk + - MD5, SHA1 hashes are checked at all times for file integrity + - Timestamps are preserved on files + - Operations can be restarted at any time + - Can be to and from network, e.g. two different cloud providers + - Can use multi-threaded downloads to local disk - [Copy](https://rclone.org/commands/rclone_copy/) new or changed files to cloud storage - [Sync](https://rclone.org/commands/rclone_sync/) (one way) to make a directory identical - [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync bidirectionally -- [Move](https://rclone.org/commands/rclone_move/) files to cloud storage deleting the local after verification +- [Move](https://rclone.org/commands/rclone_move/) files to cloud storage deleting the local after + verification - [Check](https://rclone.org/commands/rclone_check/) hashes and for missing/extra files - [Mount](https://rclone.org/commands/rclone_mount/) your cloud storage as a network disk - [Serve](https://rclone.org/commands/rclone_serve/) local or remote files over [HTTP](https://rclone.org/commands/rclone_serve_http/)/[WebDav](https://rclone.org/commands/rclone_serve_webdav/)/[FTP](https://rclone.org/commands/rclone_serve_ftp/)/[SFTP](https://rclone.org/commands/rclone_serve_sftp/)/[DLNA](https://rclone.org/commands/rclone_serve_dlna/) @@ -173,6 +178,9 @@ Rclone helps you: (There are many others, built on standard protocols such as WebDAV or S3, that work out of the box.) 
+ + + - 1Fichier - Akamai Netstorage @@ -186,6 +194,7 @@ WebDAV or S3, that work out of the box.) - Citrix ShareFile - Cloudflare R2 - Cloudinary +- Cubbit DS3 - DigitalOcean Spaces - Digi Storage - Dreamhost @@ -194,6 +203,7 @@ WebDAV or S3, that work out of the box.) - Exaba - Fastmail Files - FileLu Cloud Storage +- FileLu S5 (S3-Compatible Object Storage) - Files.com - FlashBlade - FTP @@ -202,15 +212,18 @@ WebDAV or S3, that work out of the box.) - Google Drive - Google Photos - HDFS +- Hetzner Object Storage - Hetzner Storage Box - HiDrive - HTTP +- Huawei OBS - iCloud Drive - ImageKit - Internet Archive - Jottacloud - IBM COS S3 - IDrive e2 +- Intercolo Object Storage - IONOS Cloud - Koofr - Leviia Object Storage @@ -247,16 +260,21 @@ WebDAV or S3, that work out of the box.) - QingStor - Qiniu Cloud Object Storage (Kodo) - Quatrix by Maytech +- Rabata Cloud Storage +- RackCorp Object Storage - Rackspace Cloud Files +- Rclone Serve S3 - rsync.net - Scaleway - Seafile - Seagate Lyve Cloud - SeaweedFS - Selectel +- Servercore Object Storage - SFTP - Sia - SMB / CIFS +- Spectra Logic - StackPath - Storj - Synology @@ -272,11 +290,17 @@ WebDAV or S3, that work out of the box.) - The local filesystem + + ## Virtual providers These backends adapt or modify other storage providers: + + + - Alias: Rename existing remotes +- Archive: Read archive files - Cache: Cache remotes (DEPRECATED) - Chunker: Split large files - Combine: Combine multiple remotes into a directory tree @@ -285,13 +309,14 @@ These backends adapt or modify other storage providers: - Hasher: Hash files - Union: Join multiple remotes to work together + ## Links - * [Home page](https://rclone.org/) - * [GitHub project page for source and bug tracker](https://github.com/rclone/rclone) - * [Rclone Forum](https://forum.rclone.org) - * [Downloads](https://rclone.org/downloads/) +- [Home page](https://rclone.org/) +- [GitHub project page for source and bug tracker](https://github.com/rclone/rclone) +- [Rclone Forum](https://forum.rclone.org) +- [Downloads](https://rclone.org/downloads/) # Install @@ -319,13 +344,13 @@ signatures on the release. To install rclone on Linux/macOS/BSD systems, run: -```sh +```console sudo -v ; curl https://rclone.org/install.sh | sudo bash ``` For beta installation, run: -```sh +```console sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta ``` @@ -338,7 +363,7 @@ won't re-download if not needed. Fetch and unpack -```sh +```console curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip unzip rclone-current-linux-amd64.zip cd rclone-*-linux-amd64 @@ -346,7 +371,7 @@ cd rclone-*-linux-amd64 Copy binary file -```sh +```console sudo cp rclone /usr/bin/ sudo chown root:root /usr/bin/rclone sudo chmod 755 /usr/bin/rclone @@ -354,7 +379,7 @@ sudo chmod 755 /usr/bin/rclone Install manpage -```sh +```console sudo mkdir -p /usr/local/share/man/man1 sudo cp rclone.1 /usr/local/share/man/man1/ sudo mandb @@ -362,7 +387,7 @@ sudo mandb Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. -```sh +```console rclone config ``` @@ -370,7 +395,7 @@ rclone config ### Installation with brew {#macos-brew} -```sh +```console brew install rclone ``` @@ -388,7 +413,7 @@ developers so it may be out of date. Its current version is as below. On macOS, rclone can also be installed via [MacPorts](https://www.macports.org): -```sh +```console sudo port install rclone ``` @@ -406,19 +431,19 @@ notarized it is enough to download with `curl`. 
Download the latest version of rclone. -```sh +```console cd && curl -O https://downloads.rclone.org/rclone-current-osx-amd64.zip ``` Unzip the download and cd to the extracted folder. -```sh +```console unzip -a rclone-current-osx-amd64.zip && cd rclone-*-osx-amd64 ``` Move rclone to your $PATH. You will be prompted for your password. -```sh +```console sudo mkdir -p /usr/local/bin sudo mv rclone /usr/local/bin/ ``` @@ -427,13 +452,13 @@ sudo mv rclone /usr/local/bin/ Remove the leftover files. -```sh +```console cd .. && rm -rf rclone-*-osx-amd64 rclone-current-osx-amd64.zip ``` Run `rclone config` to setup. See [rclone config docs](https://rclone.org/docs/) for more details. -```sh +```console rclone config ``` @@ -443,14 +468,14 @@ When downloading a binary with a web browser, the browser will set the macOS gatekeeper quarantine attribute. Starting from Catalina, when attempting to run `rclone`, a pop-up will appear saying: -```sh +```text "rclone" cannot be opened because the developer cannot be verified. macOS cannot verify that this app is free from malware. ``` The simplest fix is to run -```sh +```console xattr -d com.apple.quarantine rclone ``` @@ -564,7 +589,7 @@ The `:latest` tag will always point to the latest stable release. You can use the `:beta` tag to get the latest build from master. You can also use version tags, e.g. `:1.49.1`, `:1.49` or `:1`. -```sh +```console $ docker pull rclone/rclone:latest latest: Pulling from rclone/rclone Digest: sha256:0e0ced72671989bb837fea8e88578b3fc48371aa45d209663683e24cfdaa0e11 @@ -647,7 +672,7 @@ kill %1 Make sure you have [Snapd installed](https://snapcraft.io/docs/installing-snapd) -```sh +```console sudo snap install rclone ``` @@ -674,7 +699,7 @@ Go version 1.24 or newer is required, the latest release is recommended. You can get it from your package manager, or download it from [golang.org/dl](https://golang.org/dl/). Then you can run the following: -```sh +```console git clone https://github.com/rclone/rclone.git cd rclone go build @@ -688,7 +713,7 @@ in the same folder. As an initial check you can now run `./rclone version` Note that on macOS and Windows the [mount](https://rclone.org/commands/rclone_mount/) command will not be available unless you specify an additional build tag `cmount`. -```sh +```console go build -tags cmount ``` @@ -714,7 +739,7 @@ You may add arguments `-ldflags -s` to omit symbol table and debug information, making the executable file smaller, and `-trimpath` to remove references to local file system paths. The official rclone releases are built with both of these. -```sh +```console go build -trimpath -ldflags -s -tags cmount ``` @@ -725,7 +750,7 @@ or `fs.VersionSuffix` (to keep default number but customize the suffix). This can be done from the build command, by adding to the `-ldflags` argument value as shown below. -```sh +```console go build -trimpath -ldflags "-s -X github.com/rclone/rclone/fs.Version=v9.9.9-test" -tags cmount ``` @@ -736,7 +761,7 @@ It generates a Windows resource system object file, with extension .syso, e.g. `resource_windows_amd64.syso`, that will be automatically picked up by future build commands. -```sh +```console go run bin/resource_windows.go ``` @@ -748,7 +773,7 @@ override this version variable in the build command as described above, you need to do that also when generating the resource file, or else it will still use the value from the source. 
-```sh +```console go run bin/resource_windows.go -version v9.9.9-test ``` @@ -758,13 +783,13 @@ followed by additional commit details, embeds version information binary resourc on Windows, and copies the resulting rclone executable into your GOPATH bin folder (`$(go env GOPATH)/bin`, which corresponds to `~/go/bin/rclone` by default). -```sh +```console make ``` To include mount command on macOS and Windows with Makefile build: -```sh +```console make GOTAGS=cmount ``` @@ -781,7 +806,7 @@ The source will be stored it in the Go module cache, and the resulting executable will be in your GOPATH bin folder (`$(go env GOPATH)/bin`, which corresponds to `~/go/bin/rclone` by default). -```sh +```console go install github.com/rclone/rclone@latest ``` @@ -801,7 +826,7 @@ Instructions your local roles-directory 2. add the role to the hosts you want rclone installed to: - ```yml + ```yaml - hosts: rclone-hosts roles: - rclone @@ -928,7 +953,7 @@ Example of a PowerShell command that creates a Windows service for mounting some `remote:/files` as drive letter `X:`, for *all* users (service will be running as the local system account): -```pwsh +```powershell New-Service -Name Rclone -BinaryPathName 'c:\rclone\rclone.exe mount remote:/files X: --config c:\rclone\config\rclone.conf --log-file c:\rclone\logs\mount.txt' ``` @@ -993,7 +1018,7 @@ file and choose its location.) The easiest way to make the config is to run rclone with the config option: -```sh +```console rclone config ``` @@ -1002,6 +1027,7 @@ See the following for detailed instructions for - [1Fichier](https://rclone.org/fichier/) - [Akamai Netstorage](https://rclone.org/netstorage/) - [Alias](https://rclone.org/alias/) +- [Archive](https://rclone.org/archive/) - [Amazon S3](https://rclone.org/s3/) - [Backblaze B2](https://rclone.org/b2/) - [Box](https://rclone.org/box/) @@ -1070,11 +1096,11 @@ Rclone syncs a directory tree from one storage system to another. Its syntax is like this -```sh +```console rclone subcommand [options] ``` -A `subcommand` is a the rclone operation required, (e.g. `sync`, +A `subcommand` is an rclone operation required (e.g. `sync`, `copy`, `ls`). An `option` is a single letter flag (e.g. `-v`) or a group of single @@ -1085,7 +1111,7 @@ used before the `subcommand`. Anything after a `--` option will not be interpreted as an option so if you need to add a parameter which starts with a `-` then put a `--` on its own first, eg -```sh +```console rclone lsf -- -directory-starting-with-dash ``` @@ -1106,7 +1132,7 @@ learning rclone to avoid accidental data loss. rclone uses a system of subcommands. For example -```sh +```console rclone ls remote:path # lists a remote rclone copy /local/path remote:path # copies /local/path to the remote rclone sync --interactive /local/path remote:path # syncs /local/path to the remote @@ -1122,7 +1148,6 @@ Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. - ``` rclone config [flags] ``` @@ -1137,6 +1162,9 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone config create](https://rclone.org/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config delete](https://rclone.org/commands/rclone_config_delete/) - Delete an existing remote. 
@@ -1151,10 +1179,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li * [rclone config reconnect](https://rclone.org/commands/rclone_config_reconnect/) - Re-authenticates user with remote. * [rclone config redacted](https://rclone.org/commands/rclone_config_redacted/) - Print redacted (decrypted) config file, or the redacted config for a single remote. * [rclone config show](https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. +* [rclone config string](https://rclone.org/commands/rclone_config_string/) - Print connection string for a single remote. * [rclone config touch](https://rclone.org/commands/rclone_config_touch/) - Ensure configuration file exists. * [rclone config update](https://rclone.org/commands/rclone_config_update/) - Update options in an existing remote. * [rclone config userinfo](https://rclone.org/commands/rclone_config_userinfo/) - Prints info about logged in user of remote. + + + # rclone copy Copy files from source to dest, skipping identical files. @@ -1180,22 +1212,30 @@ go there. For example - rclone copy source:sourcepath dest:destpath +```sh +rclone copy source:sourcepath dest:destpath +``` Let's say there are two files in sourcepath - sourcepath/one.txt - sourcepath/two.txt +```text +sourcepath/one.txt +sourcepath/two.txt +``` This copies them to - destpath/one.txt - destpath/two.txt +```text +destpath/one.txt +destpath/two.txt +``` Not to - destpath/sourcepath/one.txt - destpath/sourcepath/two.txt +```text +destpath/sourcepath/one.txt +destpath/sourcepath/two.txt +``` If you are familiar with `rsync`, rclone always works as if you had written a trailing `/` - meaning "copy the contents of this directory". @@ -1211,27 +1251,30 @@ For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this: - rclone copy --max-age 24h --no-traverse /path/to/src remote: - +```sh +rclone copy --max-age 24h --no-traverse /path/to/src remote: +``` Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the `--metadata` flag. Note that the modification time and metadata for the root directory -will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +will **not** be synced. See [issue #7652](https://github.com/rclone/rclone/issues/7652) for more info. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. -**Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without copying anything. +**Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without +copying anything. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. 
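+
+For instance, a minimal sketch recording these reports during a copy (the
+output file names here are illustrative):
+
+```sh
+rclone copy source:path dest:path \
+    --differ differ.txt \
+    --missing-on-dst missing-on-dst.txt \
+    --missing-on-src missing-on-src.txt
+```
+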
The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -1264,9 +1307,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). ``` rclone copy source:path dest:path [flags] @@ -1292,7 +1333,7 @@ rclone copy source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -1302,7 +1343,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -1343,7 +1384,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -1353,7 +1394,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -1383,15 +1424,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone sync Make source and dest identical, modifying destination only. @@ -1409,7 +1456,9 @@ want to delete files from destination, use the **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`i` flag. - rclone sync --interactive SOURCE remote:DESTINATION +```sh +rclone sync --interactive SOURCE remote:DESTINATION +``` Files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that @@ -1426,7 +1475,7 @@ If dest:path doesn't exist, it is created and the source:path contents go there. It is not possible to sync overlapping remotes. However, you may exclude -the destination from the sync with a filter rule or by putting an +the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory. @@ -1435,20 +1484,23 @@ the backend supports it. 
If metadata syncing is required then use the `--metadata` flag. Note that the modification time and metadata for the root directory -will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +will **not** be synced. See for more info. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics -**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. -See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info. +**Note**: Use the `rclone dedupe` command to deal with "Duplicate +object/directory found in source/destination - ignoring" errors. +See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) +for more info. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -1481,9 +1533,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). ``` rclone sync source:path dest:path [flags] @@ -1509,7 +1559,7 @@ rclone sync source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. 
@@ -1519,7 +1569,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -1560,7 +1610,7 @@ Flags for anything which can copy a file Flags used for sync commands -``` +```text --backup-dir string Make backups into hierarchy based in DIR --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring @@ -1580,7 +1630,7 @@ Flags used for sync commands Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -1590,7 +1640,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -1620,15 +1670,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone move Move files from source to dest. @@ -1665,7 +1721,7 @@ the backend supports it. If metadata syncing is required then use the `--metadata` flag. Note that the modification time and metadata for the root directory -will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +will **not** be synced. See for more info. **Important**: Since this can cause data loss, test first with the @@ -1673,12 +1729,13 @@ for more info. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -1711,9 +1768,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). 
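+
+For example, a cautious sketch using the flags noted above (`remote:backup`
+is an illustrative destination):
+
+```sh
+rclone move --dry-run /path/to/src remote:backup
+rclone move -P /path/to/src remote:backup
+```
+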
``` rclone move source:path dest:path [flags] @@ -1740,7 +1795,7 @@ rclone move source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -1750,7 +1805,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -1791,7 +1846,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -1801,7 +1856,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -1831,15 +1886,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone delete Remove the files in path. @@ -1853,19 +1914,23 @@ obeys include/exclude filters so can be used to selectively delete files. alone. If you want to delete a directory and all of its contents use the [purge](https://rclone.org/commands/rclone_purge/) command. -If you supply the `--rmdirs` flag, it will remove all empty directories along with it. -You can also use the separate command [rmdir](https://rclone.org/commands/rclone_rmdir/) or -[rmdirs](https://rclone.org/commands/rclone_rmdirs/) to delete empty directories only. +If you supply the `--rmdirs` flag, it will remove all empty directories along +with it. You can also use the separate command [rmdir](https://rclone.org/commands/rclone_rmdir/) +or [rmdirs](https://rclone.org/commands/rclone_rmdirs/) to delete empty directories only. For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either): - rclone --min-size 100M lsl remote:path - rclone --dry-run --min-size 100M delete remote:path +```sh +rclone --min-size 100M lsl remote:path +rclone --dry-run --min-size 100M delete remote:path +``` Then proceed with the actual delete: - rclone --min-size 100M delete remote:path +```sh +rclone --min-size 100M delete remote:path +``` That reads "delete everything with a minimum size of 100 MiB", hence delete all files bigger than 100 MiB. @@ -1873,7 +1938,6 @@ delete all files bigger than 100 MiB. 
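+
+To also remove any directories left empty by the delete, the `--rmdirs` flag
+described above can be combined with the same filter (a sketch):
+
+```sh
+rclone --min-size 100M delete --rmdirs remote:path
+```
+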
**Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. - ``` rclone delete remote:path [flags] ``` @@ -1892,7 +1956,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -1902,7 +1966,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -1932,15 +1996,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone purge Remove the path and all of its contents. @@ -1953,13 +2023,13 @@ include/exclude filters - everything will be removed. Use the delete files. To delete empty directories only, use command [rmdir](https://rclone.org/commands/rclone_rmdir/) or [rmdirs](https://rclone.org/commands/rclone_rmdirs/). -The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will -implement this command directly, in which case `--checkers` will be ignored. +The concurrency of this operation is controlled by the `--checkers` global flag. +However, some backends will implement this command directly, in which +case `--checkers` will be ignored. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. - ``` rclone purge remote:path [flags] ``` @@ -1977,7 +2047,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -1985,8 +2055,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone mkdir Make the path if it doesn't already exist. @@ -2008,7 +2084,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -2016,8 +2092,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone rmdir Remove the empty directory at path. @@ -2031,7 +2113,6 @@ with option `--rmdirs`) to do that. To delete a path and any objects in it, use [purge](https://rclone.org/commands/rclone_purge/) command. 
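+
+E.g. a minimal sketch (`remote:path/empty-dir` is a hypothetical path, which
+must already be empty):
+
+```sh
+rclone rmdir remote:path/empty-dir
+```
+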
- ``` rclone rmdir remote:path [flags] ``` @@ -2049,7 +2130,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -2057,8 +2138,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone check Checks the files in the source and destination match. @@ -2108,7 +2195,6 @@ you what happened to it. These are reminiscent of diff files. The default number of parallel checks is 8. See the [--checkers](https://rclone.org/docs/#checkers-int) option for more information. - ``` rclone check source:path dest:path [flags] ``` @@ -2135,7 +2221,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags used for check commands -``` +```text --max-backlog int Maximum number of objects in sync or check backlog (default 10000) ``` @@ -2143,7 +2229,7 @@ Flags used for check commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -2173,15 +2259,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone ls List the objects in the path with size and path. @@ -2191,24 +2283,25 @@ List the objects in the path with size and path. Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. -Eg - - $ rclone ls swift:bucket - 60295 bevajer5jef - 90613 canole - 94467 diwogej7 - 37600 fubuwic +E.g. +```console +$ rclone ls swift:bucket + 60295 bevajer5jef + 90613 canole + 94467 diwogej7 + 37600 fubuwic +``` Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -2216,13 +2309,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. 
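+
+E.g. to stop the recursion at the top level, as noted above:
+
+```sh
+rclone ls --max-depth 1 remote:path
+```
+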
Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone ls remote:path [flags] ``` @@ -2240,7 +2333,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -2270,15 +2363,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone lsd List all directories/containers/buckets in the path. @@ -2291,31 +2390,34 @@ recurse by default. Use the `-R` flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name -of the directory, Eg +of the directory, E.g. - $ rclone lsd swift: - 494000 2018-04-26 08:43:20 10000 10000files - 65 2018-04-26 08:43:20 1 1File +```console +$ rclone lsd swift: + 494000 2018-04-26 08:43:20 10000 10000files + 65 2018-04-26 08:43:20 1 1File +``` Or - $ rclone lsd drive:test - -1 2016-10-17 17:41:53 -1 1000files - -1 2017-01-03 14:40:54 -1 2500files - -1 2017-07-08 14:39:28 -1 4000files +```console +$ rclone lsd drive:test + -1 2016-10-17 17:41:53 -1 1000files + -1 2017-01-03 14:40:54 -1 2500files + -1 2017-07-08 14:39:28 -1 4000files +``` If you just want the directory names use `rclone lsf --dirs-only`. - Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -2323,13 +2425,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). 
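+
+E.g. a sketch of the `lsf --dirs-only` alternative mentioned above, reusing
+the `drive:test` remote from the example:
+
+```sh
+rclone lsf --dirs-only drive:test
+```
+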
- ``` rclone lsd remote:path [flags] ``` @@ -2348,7 +2450,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -2378,15 +2480,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone lsl List the objects in path with modification time, size and path. @@ -2396,24 +2504,25 @@ List the objects in path with modification time, size and path. Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. -Eg - - $ rclone lsl swift:bucket - 60295 2016-06-25 18:55:41.062626927 bevajer5jef - 90613 2016-06-25 18:55:43.302607074 canole - 94467 2016-06-25 18:55:43.046609333 diwogej7 - 37600 2016-06-25 18:55:40.814629136 fubuwic +E.g. +```console +$ rclone lsl swift:bucket + 60295 2016-06-25 18:55:41.062626927 bevajer5jef + 90613 2016-06-25 18:55:43.302607074 canole + 94467 2016-06-25 18:55:43.046609333 diwogej7 + 37600 2016-06-25 18:55:40.814629136 fubuwic +``` Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -2421,13 +2530,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). 
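+
+Since any of the filtering options apply, a hedged sketch restricting the
+listing to text files:
+
+```sh
+rclone lsl --include "*.txt" remote:path
+```
+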
- ``` rclone lsl remote:path [flags] ``` @@ -2445,7 +2554,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -2475,15 +2584,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone md5sum Produces an md5sum file for all the objects in the path. @@ -2507,7 +2622,6 @@ by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path). - ``` rclone md5sum remote:path [flags] ``` @@ -2529,7 +2643,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -2559,15 +2673,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone sha1sum Produces an sha1sum file for all the objects in the path. @@ -2594,7 +2714,6 @@ as a relative path). This command can also hash data received on STDIN, if not passing a remote:path. - ``` rclone sha1sum remote:path [flags] ``` @@ -2616,7 +2735,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -2646,15 +2765,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone size Prints the total size and number of objects in remote:path. @@ -2679,7 +2804,6 @@ Rclone will then show a notice in the log indicating how many such files were encountered, and count them in as empty files in the output of the size command. 
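+
+For example - the `--json` flag is an assumption here, as it is not shown in
+this section, but current releases use it for machine-readable output:
+
+```sh
+rclone size remote:path
+rclone size --json remote:path
+```
+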
- ``` rclone size remote:path [flags] ``` @@ -2698,7 +2822,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -2728,15 +2852,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone version Show the version number. @@ -2749,15 +2879,17 @@ build tags and the type of executable (static or dynamic). For example: - $ rclone version - rclone v1.55.0 - - os/version: ubuntu 18.04 (64 bit) - - os/kernel: 4.15.0-136-generic (x86_64) - - os/type: linux - - os/arch: amd64 - - go/version: go1.16 - - go/linking: static - - go/tags: none +```console +$ rclone version +rclone v1.55.0 +- os/version: ubuntu 18.04 (64 bit) +- os/kernel: 4.15.0-136-generic (x86_64) +- os/type: linux +- os/arch: amd64 +- go/version: go1.16 +- go/linking: static +- go/tags: none +``` Note: before rclone version 1.55 the os/type and os/arch lines were merged, and the "go/version" line was tagged as "go version". @@ -2765,25 +2897,28 @@ Note: before rclone version 1.55 the os/type and os/arch lines were merged, If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta. - $ rclone version --check - yours: 1.42.0.6 - latest: 1.42 (released 2018-06-16) - beta: 1.42.0.5 (released 2018-06-17) +```console +$ rclone version --check +yours: 1.42.0.6 +latest: 1.42 (released 2018-06-16) +beta: 1.42.0.5 (released 2018-06-17) +``` Or - $ rclone version --check - yours: 1.41 - latest: 1.42 (released 2018-06-16) - upgrade: https://downloads.rclone.org/v1.42 - beta: 1.42.0.5 (released 2018-06-17) - upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 +```console +$ rclone version --check +yours: 1.41 +latest: 1.42 (released 2018-06-16) + upgrade: https://downloads.rclone.org/v1.42 +beta: 1.42.0.5 (released 2018-06-17) + upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 +``` If you supply the --deps flag then rclone will print a list of all the packages it depends on and their versions along with some other information about the build. - ``` rclone version [flags] ``` @@ -2800,8 +2935,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone cleanup Clean up the remote if possible. @@ -2811,7 +2952,6 @@ Clean up the remote if possible. Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. 
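+
+E.g. a cautious sketch using the `--dry-run` flag listed below to trial the
+operation first:
+
+```sh
+rclone --dry-run cleanup remote:
+rclone cleanup remote:
+```
+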
- ``` rclone cleanup remote:path [flags] ``` @@ -2829,7 +2969,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -2837,8 +2977,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone dedupe Interactively find duplicate filenames and delete/rename them. @@ -2865,14 +3011,15 @@ directories have been merged. Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all but one identical file it finds without -confirmation. This means that for most duplicated files the `dedupe` command will not be interactive. +confirmation. This means that for most duplicated files the +`dedupe` command will not be interactive. `dedupe` considers files to be identical if they have the -same file path and the same hash. If the backend does not support hashes (e.g. crypt wrapping -Google Drive) then they will never be found to be identical. If you -use the `--size-only` flag then files will be considered -identical if they have the same size (any hash will be ignored). This -can be useful on crypt backends which do not support hashes. +same file path and the same hash. If the backend does not support +hashes (e.g. crypt wrapping Google Drive) then they will never be found +to be identical. If you use the `--size-only` flag then files +will be considered identical if they have the same size (any hash will be +ignored). This can be useful on crypt backends which do not support hashes. Next rclone will resolve the remaining duplicates. Exactly which action is taken depends on the dedupe mode. By default, rclone will @@ -2885,71 +3032,82 @@ Here is an example run. Before - with duplicates - $ rclone lsl drive:dupes - 6048320 2016-03-05 16:23:16.798000000 one.txt - 6048320 2016-03-05 16:23:11.775000000 one.txt - 564374 2016-03-05 16:23:06.731000000 one.txt - 6048320 2016-03-05 16:18:26.092000000 one.txt - 6048320 2016-03-05 16:22:46.185000000 two.txt - 1744073 2016-03-05 16:22:38.104000000 two.txt - 564374 2016-03-05 16:22:52.118000000 two.txt +```console +$ rclone lsl drive:dupes + 6048320 2016-03-05 16:23:16.798000000 one.txt + 6048320 2016-03-05 16:23:11.775000000 one.txt + 564374 2016-03-05 16:23:06.731000000 one.txt + 6048320 2016-03-05 16:18:26.092000000 one.txt + 6048320 2016-03-05 16:22:46.185000000 two.txt + 1744073 2016-03-05 16:22:38.104000000 two.txt + 564374 2016-03-05 16:22:52.118000000 two.txt +``` Now the `dedupe` session - $ rclone dedupe drive:dupes - 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. 
- one.txt: Found 4 files with duplicate names - one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") - one.txt: 2 duplicates remain - 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 - 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 - s) Skip and do nothing - k) Keep just one (choose which in next step) - r) Rename all to be different (by changing file.jpg to file-1.jpg) - s/k/r> k - Enter the number of the file to keep> 1 - one.txt: Deleted 1 extra copies - two.txt: Found 3 files with duplicate names - two.txt: 3 duplicates remain - 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 - 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 - 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 - s) Skip and do nothing - k) Keep just one (choose which in next step) - r) Rename all to be different (by changing file.jpg to file-1.jpg) - s/k/r> r - two-1.txt: renamed from: two.txt - two-2.txt: renamed from: two.txt - two-3.txt: renamed from: two.txt +```console +$ rclone dedupe drive:dupes +2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. +one.txt: Found 4 files with duplicate names +one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") +one.txt: 2 duplicates remain + 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 + 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 +s) Skip and do nothing +k) Keep just one (choose which in next step) +r) Rename all to be different (by changing file.jpg to file-1.jpg) +s/k/r> k +Enter the number of the file to keep> 1 +one.txt: Deleted 1 extra copies +two.txt: Found 3 files with duplicate names +two.txt: 3 duplicates remain + 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 + 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 + 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 +s) Skip and do nothing +k) Keep just one (choose which in next step) +r) Rename all to be different (by changing file.jpg to file-1.jpg) +s/k/r> r +two-1.txt: renamed from: two.txt +two-2.txt: renamed from: two.txt +two-3.txt: renamed from: two.txt +``` The result being - $ rclone lsl drive:dupes - 6048320 2016-03-05 16:23:16.798000000 one.txt - 564374 2016-03-05 16:22:52.118000000 two-1.txt - 6048320 2016-03-05 16:22:46.185000000 two-2.txt - 1744073 2016-03-05 16:22:38.104000000 two-3.txt +```console +$ rclone lsl drive:dupes + 6048320 2016-03-05 16:23:16.798000000 one.txt + 564374 2016-03-05 16:22:52.118000000 two-1.txt + 6048320 2016-03-05 16:22:46.185000000 two-2.txt + 1744073 2016-03-05 16:22:38.104000000 two-3.txt +``` -Dedupe can be run non interactively using the `--dedupe-mode` flag or by using an extra parameter with the same value +Dedupe can be run non interactively using the `--dedupe-mode` flag +or by using an extra parameter with the same value - * `--dedupe-mode interactive` - interactive as above. - * `--dedupe-mode skip` - removes identical files then skips anything left. - * `--dedupe-mode first` - removes identical files then keeps the first one. - * `--dedupe-mode newest` - removes identical files then keeps the newest one. - * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. 
- * `--dedupe-mode largest` - removes identical files then keeps the largest one. - * `--dedupe-mode smallest` - removes identical files then keeps the smallest one. - * `--dedupe-mode rename` - removes identical files then renames the rest to be different. - * `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing. +- `--dedupe-mode interactive` - interactive as above. +- `--dedupe-mode skip` - removes identical files then skips anything left. +- `--dedupe-mode first` - removes identical files then keeps the first one. +- `--dedupe-mode newest` - removes identical files then keeps the newest one. +- `--dedupe-mode oldest` - removes identical files then keeps the oldest one. +- `--dedupe-mode largest` - removes identical files then keeps the largest one. +- `--dedupe-mode smallest` - removes identical files then keeps the smallest one. +- `--dedupe-mode rename` - removes identical files then renames the rest to be different. +- `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing. -For example, to rename all the identically named photos in your Google Photos directory, do +For example, to rename all the identically named photos in your Google Photos +directory, do - rclone dedupe --dedupe-mode rename "drive:Google Photos" +```console +rclone dedupe --dedupe-mode rename "drive:Google Photos" +``` Or - rclone dedupe rename "drive:Google Photos" - +```console +rclone dedupe rename "drive:Google Photos" +``` ``` rclone dedupe [mode] remote:path [flags] @@ -2970,7 +3128,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -2978,8 +3136,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone about Get quota information from the remote. @@ -2991,40 +3155,46 @@ output. The output is typically used, free, quota and trash contents. E.g. Typical output from `rclone about remote:` is: - Total: 17 GiB - Used: 7.444 GiB - Free: 1.315 GiB - Trashed: 100.000 MiB - Other: 8.241 GiB +```text +Total: 17 GiB +Used: 7.444 GiB +Free: 1.315 GiB +Trashed: 100.000 MiB +Other: 8.241 GiB +``` Where the fields are: - * Total: Total size available. - * Used: Total size used. - * Free: Total space available to this user. - * Trashed: Total space used by trash. - * Other: Total amount in other storage (e.g. Gmail, Google Photos). - * Objects: Total number of objects in the storage. +- Total: Total size available. +- Used: Total size used. +- Free: Total space available to this user. +- Trashed: Total space used by trash. +- Other: Total amount in other storage (e.g. Gmail, Google Photos). +- Objects: Total number of objects in the storage. All sizes are in number of bytes. Applying a `--full` flag to the command prints the bytes in full, e.g. - Total: 18253611008 - Used: 7993453766 - Free: 1411001220 - Trashed: 104857602 - Other: 8849156022 +```text +Total: 18253611008 +Used: 7993453766 +Free: 1411001220 +Trashed: 104857602 +Other: 8849156022 +``` A `--json` flag generates conveniently machine-readable output, e.g. 
- { - "total": 18253611008, - "used": 7993453766, - "trashed": 104857602, - "other": 8849156022, - "free": 1411001220 - } +```json +{ + "total": 18253611008, + "used": 7993453766, + "trashed": 104857602, + "other": 8849156022, + "free": 1411001220 +} +``` Not all backends print all fields. Information is not included if it is not provided by a backend. Where the value is unlimited it is omitted. @@ -3032,7 +3202,6 @@ provided by a backend. Where the value is unlimited it is omitted. Some backends does not support the `rclone about` command at all, see complete list in [documentation](https://rclone.org/overview/#optional-features). - ``` rclone about remote: [flags] ``` @@ -3049,8 +3218,313 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + +# rclone archive + +Perform an action on an archive. + +## Synopsis + +Perform an action on an archive. Requires the use of a +subcommand to specify the protocol, e.g. + + rclone archive list remote:file.zip + +Each subcommand has its own options which you can see in their help. + +See [rclone archive create](https://rclone.org/commands/rclone_archive_create/) for the +archive formats supported. + + +``` +rclone archive [opts] [] [flags] +``` + +## Options + +``` + -h, --help help for archive +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +## See Also + + + + +* [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. +* [rclone archive create](https://rclone.org/commands/rclone_archive_create/) - Archive source file(s) to destination. +* [rclone archive extract](https://rclone.org/commands/rclone_archive_extract/) - Extract archives from source to destination. +* [rclone archive list](https://rclone.org/commands/rclone_archive_list/) - List archive contents from source. + + + + +# rclone archive create + +Archive source file(s) to destination. + +## Synopsis + + +Creates an archive from the files in source:path and saves the archive to +dest:path. If dest:path is missing, it will write to the console. + +The valid formats for the `--format` flag are listed below. If +`--format` is not set rclone will guess it from the extension of dest:path. + +| Format | Extensions | +|:-------|:-----------| +| zip | .zip | +| tar | .tar | +| tar.gz | .tar.gz, .tgz, .taz | +| tar.bz2| .tar.bz2, .tb2, .tbz, .tbz2, .tz2 | +| tar.lz | .tar.lz | +| tar.lz4| .tar.lz4 | +| tar.xz | .tar.xz, .txz | +| tar.zst| .tar.zst, .tzst | +| tar.br | .tar.br | +| tar.sz | .tar.sz | +| tar.mz | .tar.mz | + +The `--prefix` and `--full-path` flags control the prefix for the files +in the archive. + +If the flag `--full-path` is set then the files will have the full source +path as the prefix. + +If the flag `--prefix=` is set then the files will have +`` as prefix. It's possible to create invalid file names with +`--prefix=` so use with caution. Flag `--prefix` has +priority over `--full-path`. 
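+
+For instance, a sketch forcing the format when the destination extension is
+not in the table above (`remote:backup.out` is an illustrative name):
+
+```sh
+rclone archive create --format tar.gz /sourcedir remote:backup.out
+```
+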
+ +Given a directory `/sourcedir` with the following: + + file1.txt + dir1/file2.txt + +Running the command `rclone archive create /sourcedir /dest.tar.gz` +will make an archive with the contents: + + file1.txt + dir1/ + dir1/file2.txt + +Running the command `rclone archive create --full-path /sourcedir /dest.tar.gz` +will make an archive with the contents: + + sourcedir/file1.txt + sourcedir/dir1/ + sourcedir/dir1/file2.txt + +Running the command `rclone archive create --prefix=my_new_path /sourcedir /dest.tar.gz` +will make an archive with the contents: + + my_new_path/file1.txt + my_new_path/dir1/ + my_new_path/dir1/file2.txt + + +``` +rclone archive create [flags] [] +``` + +## Options + +``` + --format string Create the archive with format or guess from extension. + --full-path Set prefix for files in archive to source path + -h, --help help for create + --prefix string Set prefix for files in archive to entered value or source path +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +## See Also + + + + +* [rclone archive](https://rclone.org/commands/rclone_archive/) - Perform an action on an archive. + + + + +# rclone archive extract + +Extract archives from source to destination. + +## Synopsis + + + +Extract the archive contents to a destination directory auto detecting +the format. See [rclone archive create](https://rclone.org/commands/rclone_archive_create/) +for the archive formats supported. + +For example on this archive: + +``` +$ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +``` + +You can run extract like this + +``` +$ rclone archive extract remote:archive.zip remote:extracted +``` + +Which gives this result + +``` +$ rclone tree remote:extracted +/ +├── dir +│ └── bye.txt +└── file.txt +``` + +The source or destination or both can be local or remote. + +Filters can be used to only extract certain files: + +``` +$ rclone archive extract archive.zip partial --include "bye.*" +$ rclone tree partial +/ +└── dir + └── bye.txt +``` + +The [archive backend](https://rclone.org/archive/) can also be used to extract files. It +can be used to read only mount archives also but it supports a +different set of archive formats to the archive commands. + + +``` +rclone archive extract [flags] +``` + +## Options + +``` + -h, --help help for extract +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +## See Also + + + + +* [rclone archive](https://rclone.org/commands/rclone_archive/) - Perform an action on an archive. + + + + +# rclone archive list + +List archive contents from source. + +## Synopsis + + +List the contents of an archive to the console, auto detecting the +format. See [rclone archive create](https://rclone.org/commands/rclone_archive_create/) +for the archive formats supported. 
+ +For example: + +``` +$ rclone archive list remote:archive.zip + 6 file.txt + 0 dir/ + 4 dir/bye.txt +``` + +Or with `--long` flag for more info: + +``` +$ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +``` + +Or with `--plain` flag which is useful for scripting: + +``` +$ rclone archive list --plain /path/to/archive.zip +file.txt +dir/ +dir/bye.txt +``` + +Or with `--dirs-only`: + +``` +$ rclone archive list --plain --dirs-only /path/to/archive.zip +dir/ +``` + +Or with `--files-only`: + +``` +$ rclone archive list --plain --files-only /path/to/archive.zip +file.txt +dir/bye.txt +``` + +Filters may also be used: + +``` +$ rclone archive list --long archive.zip --include "bye.*" + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +``` + +The [archive backend](https://rclone.org/archive/) can also be used to list files. It +can be used to read only mount archives also but it supports a +different set of archive formats to the archive commands. + + +``` +rclone archive list [flags] +``` + +## Options + +``` + --dirs-only Only list directories + --files-only Only list files + -h, --help help for list + --long List extra attributtes + --plain Only list file names +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +## See Also + + + + +* [rclone archive](https://rclone.org/commands/rclone_archive/) - Perform an action on an archive. + + + + # rclone authorize Remote authorization. @@ -3058,21 +3532,23 @@ Remote authorization. ## Synopsis Remote authorization. Used to authorize a remote or headless -rclone from a machine with a browser - use as instructed by -rclone config. +rclone from a machine with a browser. Use as instructed by rclone config. +See also the [remote setup documentation](/remote_setup). The command requires 1-3 arguments: - - fs name (e.g., "drive", "s3", etc.) - - Either a base64 encoded JSON blob obtained from a previous rclone config session - - Or a client_id and client_secret pair obtained from the remote service + +- Name of a backend (e.g. "drive", "s3") +- Either a base64 encoded JSON blob obtained from a previous rclone config session +- Or a client_id and client_secret pair obtained from the remote service Use --auth-no-open-browser to prevent rclone to open auth link in default browser automatically. -Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used. +Use --template to generate HTML output via a custom Go template. If a blank +string is provided as an argument to this flag, the default template is used. ``` -rclone authorize [base64_json_blob | client_id client_secret] [flags] +rclone authorize [base64_json_blob | client_id client_secret] [flags] ``` ## Options @@ -3087,8 +3563,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone backend Run a backend-specific command. @@ -3101,27 +3583,34 @@ see the backend docs for definitions. 
You can discover what commands a backend implements by using - rclone backend help remote: - rclone backend help +```console +rclone backend help remote: +rclone backend help +``` You can also discover information about the backend using (see [operations/fsinfo](https://rclone.org/rc/#operations-fsinfo) in the remote control docs for more info). - rclone backend features remote: +```console +rclone backend features remote: +``` Pass options to the backend command with -o. This should be key=value or key, e.g.: - rclone backend stats remote:path stats -o format=json -o long +```console +rclone backend stats remote:path stats -o format=json -o long +``` Pass arguments to the backend by placing them on the end of the line - rclone backend cleanup remote:path file1 file2 file3 +```console +rclone backend cleanup remote:path file1 file2 file3 +``` Note to run these commands on a running backend then see [backend/command](https://rclone.org/rc/#backend-command) in the rc docs. - ``` rclone backend remote:path [opts] [flags] ``` @@ -3141,7 +3630,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -3149,8 +3638,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone bisync Perform bidirectional synchronization between two paths. @@ -3163,18 +3658,19 @@ Perform bidirectional synchronization between two paths. bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will: + - list files on Path1 and Path2, and check for changes on each side. Changes include `New`, `Newer`, `Older`, and `Deleted` files. - Propagate changes on Path1 to Path2, and vice-versa. Bisync is considered an **advanced command**, so use with care. Make sure you have read and understood the entire [manual](https://rclone.org/bisync) -(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using, -or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/). +(especially the [Limitations](https://rclone.org/bisync/#limitations) section) +before using, or data loss can result. Questions can be asked in the +[Rclone Forum](https://forum.rclone.org/). See [full bisync description](https://rclone.org/bisync/) for details. 
- ``` rclone bisync remote1:path1 remote2:path2 [flags] ``` @@ -3216,7 +3712,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -3257,7 +3753,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -3267,7 +3763,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -3295,8 +3791,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone cat Concatenates any files and sends them to stdout. @@ -3307,15 +3809,21 @@ Sends any files to standard output. You can use it like this to output a single file - rclone cat remote:path/to/file +```sh +rclone cat remote:path/to/file +``` Or like this to output any file in dir or its subdirectories. - rclone cat remote:path/to/dir +```sh +rclone cat remote:path/to/dir +``` Or like this to output any .txt files in dir or its subdirectories. - rclone --include "*.txt" cat remote:path/to/dir +```sh +rclone --include "*.txt" cat remote:path/to/dir +``` Use the `--head` flag to print characters only at the start, `--tail` for the end and `--offset` and `--count` to print a section in the middle. @@ -3326,14 +3834,17 @@ Use the `--separator` flag to print a separator value between files. Be sure to shell-escape special characters. For example, to print a newline between files, use: -* bash: +- bash: - rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir + ```sh + rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir + ``` -* powershell: - - rclone --include "*.txt" --separator "`n" cat remote:path/to/dir +- powershell: + ```powershell + rclone --include "*.txt" --separator "`n" cat remote:path/to/dir + ``` ``` rclone cat remote:path [flags] @@ -3358,7 +3869,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -3388,15 +3899,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone checksum Checks the files in the destination against a SUM file. @@ -3440,7 +3957,6 @@ you what happened to it. These are reminiscent of diff files. The default number of parallel checks is 8. 
See the [--checkers](https://rclone.org/docs/#checkers-int) option for more information. - ``` rclone checksum sumfile dst:path [flags] ``` @@ -3466,7 +3982,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -3496,15 +4012,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone completion Output completion script for a given shell. @@ -3514,7 +4036,6 @@ Output completion script for a given shell. Generates a shell completion script for rclone. Run with `--help` to list the supported shells. - ## Options ``` @@ -3525,12 +4046,18 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone completion bash](https://rclone.org/commands/rclone_completion_bash/) - Output bash completion script for rclone. * [rclone completion fish](https://rclone.org/commands/rclone_completion_fish/) - Output fish completion script for rclone. * [rclone completion powershell](https://rclone.org/commands/rclone_completion_powershell/) - Output powershell completion script for rclone. * [rclone completion zsh](https://rclone.org/commands/rclone_completion_zsh/) - Output zsh completion script for rclone. + + + # rclone completion bash Output bash completion script for rclone. @@ -3539,17 +4066,21 @@ Output bash completion script for rclone. Generates a bash shell autocompletion script for rclone. -By default, when run without any arguments, +By default, when run without any arguments, - rclone completion bash +```console +rclone completion bash +``` the generated script will be written to - /etc/bash_completion.d/rclone +```console +/etc/bash_completion.d/rclone +``` and so rclone will probably need to be run as root, or with sudo. -If you supply a path to a file as the command line argument, then +If you supply a path to a file as the command line argument, then the generated script will be written to that file, in which case you should not need root privileges. @@ -3560,12 +4091,13 @@ can logout and login again to use the autocompletion script. Alternatively, you can source the script directly - . /path/to/my_bash_completion_scripts/rclone +```console +. /path/to/my_bash_completion_scripts/rclone +``` and the autocompletion functionality will be added to your current shell. - ``` rclone completion bash [output_file] [flags] ``` @@ -3580,8 +4112,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. + + + # rclone completion fish Output fish completion script for rclone. @@ -3593,19 +4131,22 @@ Generates a fish autocompletion script for rclone. 
This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g. - sudo rclone completion fish +```console +sudo rclone completion fish +``` Logout and login again to use the autocompletion scripts, or source them directly - . /etc/fish/completions/rclone.fish +```console +. /etc/fish/completions/rclone.fish +``` If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout. - ``` rclone completion fish [output_file] [flags] ``` @@ -3620,8 +4161,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. + + + # rclone completion powershell Output powershell completion script for rclone. @@ -3632,14 +4179,15 @@ Generate the autocompletion script for powershell. To load completions in your current shell session: - rclone completion powershell | Out-String | Invoke-Expression +```console +rclone completion powershell | Out-String | Invoke-Expression +``` To load completions for every new session, add the output of the above command to your powershell profile. If output_file is "-" or missing, then the output will be written to stdout. - ``` rclone completion powershell [output_file] [flags] ``` @@ -3654,8 +4202,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. + + + # rclone completion zsh Output zsh completion script for rclone. @@ -3667,19 +4221,22 @@ Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g. - sudo rclone completion zsh +```console +sudo rclone completion zsh +``` Logout and login again to use the autocompletion scripts, or source them directly - autoload -U compinit && compinit +```console +autoload -U compinit && compinit +``` If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout. - ``` rclone completion zsh [output_file] [flags] ``` @@ -3694,8 +4251,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone completion](https://rclone.org/commands/rclone_completion/) - Output completion script for a given shell. + + + # rclone config create Create a new remote with name, type and options. @@ -3708,13 +4271,17 @@ should be passed in pairs of `key` `value` or as `key=value`. For example, to make a swift remote of name myremote using auto config you would do: - rclone config create myremote swift env_auth true - rclone config create myremote swift env_auth=true +```sh +rclone config create myremote swift env_auth true +rclone config create myremote swift env_auth=true +``` So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this: - rclone config create mydrive drive config_is_local=false +```sh +rclone config create mydrive drive config_is_local=false +``` Note that if the config process would normally ask a question the default is taken (unless `--non-interactive` is used). Each time @@ -3742,29 +4309,29 @@ it. 
This will look something like (some irrelevant detail removed): -``` +```json { - "State": "*oauth-islocal,teamdrive,,", - "Option": { - "Name": "config_is_local", - "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", - "Default": true, - "Examples": [ - { - "Value": "true", - "Help": "Yes" - }, - { - "Value": "false", - "Help": "No" - } - ], - "Required": false, - "IsPassword": false, - "Type": "bool", - "Exclusive": true, - }, - "Error": "", + "State": "*oauth-islocal,teamdrive,,", + "Option": { + "Name": "config_is_local", + "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", + "Default": true, + "Examples": [ + { + "Value": "true", + "Help": "Yes" + }, + { + "Value": "false", + "Help": "No" + } + ], + "Required": false, + "IsPassword": false, + "Type": "bool", + "Exclusive": true, + }, + "Error": "", } ``` @@ -3787,7 +4354,9 @@ The keys of `Option` are used as follows: If `Error` is set then it should be shown to the user at the same time as the question. - rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +```sh +rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +``` Note that when using `--continue` all passwords should be passed in the clear (not obscured). Any default config values should be passed @@ -3803,7 +4372,6 @@ defaults for questions as usual. Note that `bin/config.py` in the rclone source implements this protocol as a readable demonstration. - ``` rclone config create name type [key value]* [flags] ``` @@ -3826,8 +4394,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config delete Delete an existing remote. @@ -3846,8 +4420,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config disconnect Disconnects user from remote @@ -3860,7 +4440,6 @@ This normally means revoking the oauth token. To reconnect use "rclone config reconnect". - ``` rclone config disconnect remote: [flags] ``` @@ -3875,8 +4454,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config dump Dump the config file as JSON. @@ -3895,8 +4480,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config edit Enter an interactive configuration session. @@ -3907,7 +4498,6 @@ Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. 
- ``` rclone config edit [flags] ``` @@ -3922,8 +4512,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config encryption set, remove and check the encryption for the config file @@ -3933,7 +4529,6 @@ set, remove and check the encryption for the config file This command sets, clears and checks the encryption for the config file using the subcommands below. - ## Options ``` @@ -3944,11 +4539,17 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. * [rclone config encryption check](https://rclone.org/commands/rclone_config_encryption_check/) - Check that the config file is encrypted * [rclone config encryption remove](https://rclone.org/commands/rclone_config_encryption_remove/) - Remove the config file encryption password * [rclone config encryption set](https://rclone.org/commands/rclone_config_encryption_set/) - Set or change the config file encryption password + + + # rclone config encryption check Check that the config file is encrypted @@ -3964,7 +4565,6 @@ If decryption fails it will return a non-zero exit code if using If the config file is not encrypted it will return a non zero exit code. - ``` rclone config encryption check [flags] ``` @@ -3979,8 +4579,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config encryption](https://rclone.org/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file + + + # rclone config encryption remove Remove the config file encryption password @@ -3997,7 +4603,6 @@ password. If the config was not encrypted then no error will be returned and this command will do nothing. - ``` rclone config encryption remove [flags] ``` @@ -4012,8 +4617,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config encryption](https://rclone.org/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file + + + # rclone config encryption set Set or change the config file encryption password @@ -4040,7 +4651,6 @@ encryption remove`), then set it again with this command which may be easier if you don't mind the unencrypted config file being on the disk briefly. - ``` rclone config encryption set [flags] ``` @@ -4055,8 +4665,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config encryption](https://rclone.org/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file + + + # rclone config file Show path of configuration file in use. @@ -4075,8 +4691,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config password Update password in an existing remote. @@ -4089,13 +4711,14 @@ The `password` should be passed in in clear (unobscured). 
For example, to set password of a remote of name myremote you would do: - rclone config password myremote fieldname mypassword - rclone config password myremote fieldname=mypassword +```sh +rclone config password myremote fieldname mypassword +rclone config password myremote fieldname=mypassword +``` This command is obsolete now that "config update" and "config create" both support obscuring passwords directly. - ``` rclone config password name [key value]+ [flags] ``` @@ -4110,8 +4733,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config paths Show paths used for configuration, cache, temp etc. @@ -4130,8 +4759,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config providers List in JSON format all the providers and options. @@ -4150,8 +4785,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config reconnect Re-authenticates user with remote. @@ -4164,7 +4805,6 @@ To disconnect the remote use "rclone config disconnect". This normally means going through the interactive oauth flow again. - ``` rclone config reconnect remote: [flags] ``` @@ -4179,8 +4819,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config redacted Print redacted (decrypted) config file, or the redacted config for a single remote. @@ -4197,8 +4843,6 @@ This makes the config file suitable for posting online for support. It should be double checked before posting as the redaction may not be perfect. - - ``` rclone config redacted [] [flags] ``` @@ -4213,8 +4857,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config show Print (decrypted) config file, or the config for a single remote. @@ -4233,8 +4883,64 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + +# rclone config string + +Print connection string for a single remote. + +## Synopsis + +Print a connection string for a single remote. + +The [connection strings](https://rclone.org/docs/#connection-strings) can be used +wherever a remote is needed and can be more convenient than using the +config file, especially if using the RC API. + +Backend parameters may be provided to the command also. + +Example: + +```sh +$ rclone config string s3:rclone --s3-no-check-bucket +:s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone +``` + +**NB** the strings are not quoted for use in shells (eg bash, +powershell, windows cmd). Most will work if enclosed in "double +quotes", however connection strings that contain double quotes will +require further quoting which is very shell dependent. 
+ + + +``` +rclone config string [flags] +``` + +## Options + +``` + -h, --help help for string +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. + +## See Also + + + + +* [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + + # rclone config touch Ensure configuration file exists. @@ -4253,8 +4959,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config update Update options in an existing remote. @@ -4267,13 +4979,17 @@ pairs of `key` `value` or as `key=value`. For example, to update the env_auth field of a remote of name myremote you would do: - rclone config update myremote env_auth true - rclone config update myremote env_auth=true +```sh +rclone config update myremote env_auth true +rclone config update myremote env_auth=true +``` If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus: - rclone config update myremote env_auth=true config_refresh_token=false +```sh +rclone config update myremote env_auth=true config_refresh_token=false +``` Note that if the config process would normally ask a question the default is taken (unless `--non-interactive` is used). Each time @@ -4301,29 +5017,29 @@ it. This will look something like (some irrelevant detail removed): -``` +```json { - "State": "*oauth-islocal,teamdrive,,", - "Option": { - "Name": "config_is_local", - "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", - "Default": true, - "Examples": [ - { - "Value": "true", - "Help": "Yes" - }, - { - "Value": "false", - "Help": "No" - } - ], - "Required": false, - "IsPassword": false, - "Type": "bool", - "Exclusive": true, - }, - "Error": "", + "State": "*oauth-islocal,teamdrive,,", + "Option": { + "Name": "config_is_local", + "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", + "Default": true, + "Examples": [ + { + "Value": "true", + "Help": "Yes" + }, + { + "Value": "false", + "Help": "No" + } + ], + "Required": false, + "IsPassword": false, + "Type": "bool", + "Exclusive": true, + }, + "Error": "", } ``` @@ -4346,7 +5062,9 @@ The keys of `Option` are used as follows: If `Error` is set then it should be shown to the user at the same time as the question. - rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +```sh +rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +``` Note that when using `--continue` all passwords should be passed in the clear (not obscured). Any default config values should be passed @@ -4362,7 +5080,6 @@ defaults for questions as usual. Note that `bin/config.py` in the rclone source implements this protocol as a readable demonstration. 
- ``` rclone config update name [key value]+ [flags] ``` @@ -4385,8 +5102,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone config userinfo Prints info about logged in user of remote. @@ -4396,7 +5119,6 @@ Prints info about logged in user of remote. This prints the details of the person logged in to the cloud storage system. - ``` rclone config userinfo remote: [flags] ``` @@ -4412,16 +5134,22 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone config](https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. + + + # rclone convmv Convert file and directory names in place. ## Synopsis - -convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations. +convmv supports advanced path name transformations for converting and renaming +files and directories by applying prefixes, suffixes, and other alterations. | Command | Description | |------|------| @@ -4430,10 +5158,13 @@ convmv supports advanced path name transformations for converting and renaming f | `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. | | `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. | | `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. | -| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. | +| `--name-transform regex=pattern/replacement` | Applies a regex-based transformation. | | `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. | | `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. | | `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. | +| `--name-transform truncate_keep_extension=N` | Truncates the file name to a maximum of N characters while preserving the original file extension. | +| `--name-transform truncate_bytes=N` | Truncates the file name to a maximum of N bytes (not characters). | +| `--name-transform truncate_bytes_keep_extension=N` | Truncates the file name to a maximum of N bytes (not characters) while preserving the original file extension. | | `--name-transform base64encode` | Encodes the file name in Base64. | | `--name-transform base64decode` | Decodes a Base64-encoded file name. | | `--name-transform encoder=ENCODING` | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh). | @@ -4448,211 +5179,227 @@ convmv supports advanced path name transformations for converting and renaming f | `--name-transform nfd` | Converts the file name to NFD Unicode normalization form. | | `--name-transform nfkc` | Converts the file name to NFKC Unicode normalization form. | | `--name-transform nfkd` | Converts the file name to NFKD Unicode normalization form. | -| `--name-transform command=/path/to/my/programfile names.` | Executes an external program to transform | +| `--name-transform command=/path/to/my/programfile names.` | Executes an external program to transform. 
| +Conversion modes: -Conversion modes: +```text +none +nfc +nfd +nfkc +nfkd +replace +prefix +suffix +suffix_keep_extension +trimprefix +trimsuffix +index +date +truncate +truncate_keep_extension +truncate_bytes +truncate_bytes_keep_extension +base64encode +base64decode +encoder +decoder +ISO-8859-1 +Windows-1252 +Macintosh +charmap +lowercase +uppercase +titlecase +ascii +url +regex +command ``` -none -nfc -nfd -nfkc -nfkd -replace -prefix -suffix -suffix_keep_extension -trimprefix -trimsuffix -index -date -truncate -base64encode -base64decode -encoder -decoder -ISO-8859-1 -Windows-1252 -Macintosh -charmap -lowercase -uppercase -titlecase -ascii -url -regex -command -``` -Char maps: -``` - -IBM-Code-Page-037 -IBM-Code-Page-437 -IBM-Code-Page-850 -IBM-Code-Page-852 -IBM-Code-Page-855 -Windows-Code-Page-858 -IBM-Code-Page-860 -IBM-Code-Page-862 -IBM-Code-Page-863 -IBM-Code-Page-865 -IBM-Code-Page-866 -IBM-Code-Page-1047 -IBM-Code-Page-1140 -ISO-8859-1 -ISO-8859-2 -ISO-8859-3 -ISO-8859-4 -ISO-8859-5 -ISO-8859-6 -ISO-8859-7 -ISO-8859-8 -ISO-8859-9 -ISO-8859-10 -ISO-8859-13 -ISO-8859-14 -ISO-8859-15 -ISO-8859-16 -KOI8-R -KOI8-U -Macintosh -Macintosh-Cyrillic -Windows-874 -Windows-1250 -Windows-1251 -Windows-1252 -Windows-1253 -Windows-1254 -Windows-1255 -Windows-1256 -Windows-1257 -Windows-1258 -X-User-Defined -``` -Encoding masks: -``` -Asterisk - BackQuote - BackSlash - Colon - CrLf - Ctl - Del - Dollar - Dot - DoubleQuote - Exclamation - Hash - InvalidUtf8 - LeftCrLfHtVt - LeftPeriod - LeftSpace - LeftTilde - LtGt - None - Percent - Pipe - Question - Raw - RightCrLfHtVt - RightPeriod - RightSpace - Semicolon - SingleQuote - Slash - SquareBracket -``` -Examples: +Char maps: + +```text +IBM-Code-Page-037 +IBM-Code-Page-437 +IBM-Code-Page-850 +IBM-Code-Page-852 +IBM-Code-Page-855 +Windows-Code-Page-858 +IBM-Code-Page-860 +IBM-Code-Page-862 +IBM-Code-Page-863 +IBM-Code-Page-865 +IBM-Code-Page-866 +IBM-Code-Page-1047 +IBM-Code-Page-1140 +ISO-8859-1 +ISO-8859-2 +ISO-8859-3 +ISO-8859-4 +ISO-8859-5 +ISO-8859-6 +ISO-8859-7 +ISO-8859-8 +ISO-8859-9 +ISO-8859-10 +ISO-8859-13 +ISO-8859-14 +ISO-8859-15 +ISO-8859-16 +KOI8-R +KOI8-U +Macintosh +Macintosh-Cyrillic +Windows-874 +Windows-1250 +Windows-1251 +Windows-1252 +Windows-1253 +Windows-1254 +Windows-1255 +Windows-1256 +Windows-1257 +Windows-1258 +X-User-Defined ``` + +Encoding masks: + +```text +Asterisk +BackQuote +BackSlash +Colon +CrLf +Ctl +Del +Dollar +Dot +DoubleQuote +Exclamation +Hash +InvalidUtf8 +LeftCrLfHtVt +LeftPeriod +LeftSpace +LeftTilde +LtGt +None +Percent +Pipe +Question +Raw +RightCrLfHtVt +RightPeriod +RightSpace +Semicolon +SingleQuote +Slash +SquareBracket +``` + +Examples: + +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase" // Output: STORIES/THE QUICK BROWN FOX!.TXT ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow" // Output: stories/The Slow Brown Turtle!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode" // Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0 ``` -``` +```console rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode" // Output: stories/The Quick Brown Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc" // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt ``` -``` +```console rclone 
convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd" // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii" // Output: stories/The Quick Brown Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt" // Output: stories/The Quick Brown Fox! ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_" // Output: OLD_stories/OLD_The Quick Brown Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7" // Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket" // Output: stories/The Quick Brown Fox: A Memoir [draft].txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21" // Output: stories/The Quick Brown 🦊 Fox ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo" // Output: stories/The Quick Brown Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" -// Output: stories/The Quick Brown Fox!-20250618 +// Output: stories/The Quick Brown Fox!-20251121 ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" -// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM +// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab" // Output: ababababababab/ababab ababababab ababababab ababab!abababab ``` -Multiple transformations can be used in sequence, applied in the order they are specified on the command line. +The regex command generally accepts Perl-style regular expressions, the exact +syntax is defined in the [Go regular expression reference](https://golang.org/pkg/regexp/syntax/). +The replacement string may contain capturing group variables, referencing +capturing groups using the syntax `$name` or `${name}`, where the name can +refer to a named capturing group or it can simply be the index as a number. +To insert a literal $, use $$. + +Multiple transformations can be used in sequence, applied +in the order they are specified on the command line. The `--name-transform` flag is also available in `sync`, `copy`, and `move`. -# Files vs Directories +## Files vs Directories -By default `--name-transform` will only apply to file names. The means only the leaf file name will be transformed. -However some of the transforms would be better applied to the whole path or just directories. -To choose which which part of the file path is affected some tags can be added to the `--name-transform`. +By default `--name-transform` will only apply to file names. The means only the +leaf file name will be transformed. However some of the transforms would be +better applied to the whole path or just directories. To choose which which +part of the file path is affected some tags can be added to the `--name-transform`. 
| Tag | Effect | |------|------| @@ -4660,42 +5407,58 @@ To choose which which part of the file path is affected some tags can be added t | `dir` | Only transform name of directories - these may appear anywhere in the path | | `all` | Transform the entire path for files and directories | -This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`. +This is used by adding the tag into the transform name like this: +`--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`. -For some conversions using all is more likely to be useful, for example `--name-transform all,nfc`. +For some conversions using all is more likely to be useful, for example +`--name-transform all,nfc`. -Note that `--name-transform` may not add path separators `/` to the name. This will cause an error. +Note that `--name-transform` may not add path separators `/` to the name. +This will cause an error. -# Ordering and Conflicts +## Ordering and Conflicts -* Transformations will be applied in the order specified by the user. - * If the `file` tag is in use (the default) then only the leaf name of files will be transformed. - * If the `dir` tag is in use then directories anywhere in the path will be transformed - * If the `all` tag is in use then directories and files anywhere in the path will be transformed - * Each transformation will be run one path segment at a time. - * If a transformation adds a `/` or ends up with an empty path segment then that will be an error. -* It is up to the user to put the transformations in a sensible order. - * Conflicting transformations, such as `prefix` followed by `trimprefix` or `nfc` followed by `nfd`, are possible. - * Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the -user, allowing for intentional use cases (e.g., trimming one prefix before adding another). - * Users should be aware that certain combinations may lead to unexpected results and should verify -transformations using `--dry-run` before execution. +- Transformations will be applied in the order specified by the user. + - If the `file` tag is in use (the default) then only the leaf name of files + will be transformed. + - If the `dir` tag is in use then directories anywhere in the path will be + transformed + - If the `all` tag is in use then directories and files anywhere in the path + will be transformed + - Each transformation will be run one path segment at a time. + - If a transformation adds a `/` or ends up with an empty path segment then + that will be an error. +- It is up to the user to put the transformations in a sensible order. + - Conflicting transformations, such as `prefix` followed by `trimprefix` or + `nfc` followed by `nfd`, are possible. + - Instead of enforcing mutual exclusivity, transformations are applied in + sequence as specified by the user, allowing for intentional use cases + (e.g., trimming one prefix before adding another). + - Users should be aware that certain combinations may lead to unexpected + results and should verify transformations using `--dry-run` before execution. -# Race Conditions and Non-Deterministic Behavior +## Race Conditions and Non-Deterministic Behavior -Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name. -This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these. 
-* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic. -* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results. +Some transformations, such as `replace=old:new`, may introduce conflicts where +multiple source files map to the same destination name. This can lead to race +conditions when performing concurrent transfers. It is up to the user to +anticipate these. + +- If two files from the source are transformed into the same name at the + destination, the final state may be non-deterministic. +- Running rclone check after a sync using such transformations may erroneously + report missing or differing files due to overwritten results. To minimize risks, users should: -* Carefully review transformations that may introduce conflicts. -* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations). -* Avoid transformations that cause multiple distinct source files to map to the same destination name. -* Consider disabling concurrency with `--transfers=1` if necessary. -* Certain transformations (e.g. `prefix`) will have a multiplying effect every time they are used. Avoid these when using `bisync`. - +- Carefully review transformations that may introduce conflicts. +- Use `--dry-run` to inspect changes before executing a sync (but keep in mind + that it won't show the effect of non-deterministic transformations). +- Avoid transformations that cause multiple distinct source files to map to the + same destination name. +- Consider disabling concurrency with `--transfers=1` if necessary. +- Certain transformations (e.g. `prefix`) will have a multiplying effect every + time they are used. Avoid these when using `bisync`. ``` rclone convmv dest:path --name-transform XXX [flags] @@ -4716,7 +5479,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -4757,7 +5520,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -4767,7 +5530,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -4797,15 +5560,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone copyto Copy files from source to dest, skipping identical files. @@ -4821,33 +5590,40 @@ name. 
If the source is a directory then it acts exactly like the So - rclone copyto src dst +```console +rclone copyto src dst +``` -where src and dst are rclone paths, either remote:path or -/path/to/local or C:\windows\path\if\on\windows. +where src and dst are rclone paths, either `remote:path` or +`/path/to/local` or `C:\windows\path\if\on\windows`. This will: - if src is file - copy it to dst, overwriting an existing file if it exists - if src is directory - copy it to dst, overwriting existing files if they exist - see copy command for full details +```text +if src is file + copy it to dst, overwriting an existing file if it exists +if src is directory + copy it to dst, overwriting existing files if they exist + see copy command for full details +``` This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. -*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'* +*If you are looking to copy just a byte range of a file, please see +`rclone cat --offset X --count Y`.* -**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics +**Note**: Use the `-P`/`--progress` flag to view +real-time transfer statistics. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -4880,9 +5656,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). ``` rclone copyto source:path dest:path [flags] @@ -4907,7 +5681,7 @@ rclone copyto source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. 
@@ -4917,7 +5691,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -4958,7 +5732,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -4968,7 +5742,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -4998,15 +5772,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone copyurl Copy the contents of the URL supplied content to dest:path. @@ -5025,12 +5805,23 @@ set in HTTP headers, it will be used instead of the name from the URL. With `--print-filename` in addition, the resulting file name will be printed. -Setting `--no-clobber` will prevent overwriting file on the +Setting `--no-clobber` will prevent overwriting file on the destination if there is one with the same name. Setting `--stdout` or making the output file name `-` will cause the output to be written to standard output. +Setting `--urls` allows you to input a CSV file of URLs in format: URL, +FILENAME. If `--urls` is in use then replace the URL in the arguments with the +file containing the URLs, e.g.: +```sh +rclone copyurl --urls myurls.csv remote:dir +``` +Missing filenames will be autogenerated equivalent to using `--auto-filename`. +Note that `--stdout` and `--print-filename` are incompatible with `--urls`. +This will do `--transfers` copies in parallel. Note that if `--auto-filename` +is desired for all URLs then a file with only URLs and no filename can be used. + ## Troubleshooting If you can't get `rclone copyurl` to work then here are some things you can try: @@ -5041,8 +5832,6 @@ If you can't get `rclone copyurl` to work then here are some things you can try: - `--user agent curl` - some sites have whitelists for curl's user-agent - try that - Make sure the site works with `curl` directly - - ``` rclone copyurl https://example.com dest:path [flags] ``` @@ -5056,6 +5845,7 @@ rclone copyurl https://example.com dest:path [flags] --no-clobber Prevent overwriting file with same name -p, --print-filename Print the resulting name from --auto-filename --stdout Write the output to stdout rather than a file + --urls Use a CSV file of links to process multiple URLs ``` Options shared with other commands are described next. 
@@ -5065,7 +5855,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -5073,15 +5863,21 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone cryptcheck Cryptcheck checks the integrity of an encrypted remote. ## Synopsis -Checks a remote against a [crypted](https://rclone.org/crypt/) remote. This is the equivalent +Checks a remote against an [encrypted](https://rclone.org/crypt/) remote. This is the equivalent of running rclone [check](https://rclone.org/commands/rclone_check/), but able to check the checksums of the encrypted remote. @@ -5095,14 +5891,18 @@ checksum of the file it has just encrypted. Use it like this - rclone cryptcheck /path/to/files encryptedremote:path +```console +rclone cryptcheck /path/to/files encryptedremote:path +``` You can use it like this also, but that will involve downloading all -the files in remote:path. +the files in `remote:path`. - rclone cryptcheck remote:path encryptedremote:path +```console +rclone cryptcheck remote:path encryptedremote:path +``` -After it has run it will log the status of the encryptedremote:. +After it has run it will log the status of the `encryptedremote:`. If you supply the `--one-way` flag, it will only check that files in the source match the files in the destination, not the other way @@ -5128,7 +5928,6 @@ you what happened to it. These are reminiscent of diff files. The default number of parallel checks is 8. See the [--checkers](https://rclone.org/docs/#checkers-int) option for more information. - ``` rclone cryptcheck remote:path cryptedremote:path [flags] ``` @@ -5153,7 +5952,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags used for check commands -``` +```text --max-backlog int Maximum number of objects in sync or check backlog (default 10000) ``` @@ -5161,7 +5960,7 @@ Flags used for check commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -5191,15 +5990,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone cryptdecode Cryptdecode returns unencrypted file names. @@ -5213,13 +6018,13 @@ If you supply the `--reverse` flag, it will return encrypted file names. use it like this - rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 - - rclone cryptdecode --reverse encryptedremote: filename1 filename2 - -Another way to accomplish this is by using the `rclone backend encode` (or `decode`) command. -See the documentation on the [crypt](https://rclone.org/crypt/) overlay for more info. 
+```console +rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 +rclone cryptdecode --reverse encryptedremote: filename1 filename2 +``` +Another way to accomplish this is by using the `rclone backend encode` (or `decode`) +command. See the documentation on the [crypt](https://rclone.org/crypt/) overlay for more info. ``` rclone cryptdecode encryptedremote: encryptedfilename [flags] @@ -5236,8 +6041,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone deletefile Remove a single file from remote. @@ -5245,9 +6056,8 @@ Remove a single file from remote. ## Synopsis Remove a single file from remote. Unlike `delete` it cannot be used to -remove a directory and it doesn't obey include/exclude filters - if the specified file exists, -it will always be removed. - +remove a directory and it doesn't obey include/exclude filters - if the +specified file exists, it will always be removed. ``` rclone deletefile remote:path [flags] @@ -5266,7 +6076,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -5274,8 +6084,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone gendocs Output markdown docs for rclone to the directory supplied. @@ -5300,8 +6116,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone gitannex Speaks with git-annex over stdin/stdout. @@ -5314,19 +6136,21 @@ users. [git-annex]: https://git-annex.branchable.com/ -Installation on Linux ---------------------- +## Installation on Linux 1. Skip this step if your version of git-annex is [10.20240430] or newer. Otherwise, you must create a symlink somewhere on your PATH with a particular name. This symlink helps git-annex tell rclone it wants to run the "gitannex" subcommand. - ```sh - # Create the helper symlink in "$HOME/bin". + Create the helper symlink in "$HOME/bin": + + ```console ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin" - # Verify the new symlink is on your PATH. + Verify the new symlink is on your PATH: + + ```console which git-annex-remote-rclone-builtin ``` @@ -5338,11 +6162,15 @@ Installation on Linux Start by asking git-annex to describe the remote's available configuration parameters. - ```sh - # If you skipped step 1: - git annex initremote MyRemote type=rclone --whatelse + If you skipped step 1: - # If you created a symlink in step 1: + ```console + git annex initremote MyRemote type=rclone --whatelse + ``` + + If you created a symlink in step 1: + + ```console git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse ``` @@ -5358,7 +6186,7 @@ Installation on Linux be one configured in your rclone.conf file, which can be located with `rclone config file`. - ```sh + ```console git annex initremote MyRemote \ type=external \ externaltype=rclone-builtin \ @@ -5372,13 +6200,12 @@ Installation on Linux remote**. 
This command is very new and has not been tested on many rclone backends. Caveat emptor! - ```sh + ```console git annex testremote MyRemote ``` Happy annexing! - ``` rclone gitannex [flags] ``` @@ -5393,8 +6220,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone hashsum Produces a hashsum file for all the objects in the path. @@ -5420,25 +6253,28 @@ as a relative path). Run without a hash to see the list of all supported hashes, e.g. - $ rclone hashsum - Supported hashes are: - * md5 - * sha1 - * whirlpool - * crc32 - * sha256 - * sha512 - * blake3 - * xxh3 - * xxh128 +```console +$ rclone hashsum +Supported hashes are: +- md5 +- sha1 +- whirlpool +- crc32 +- sha256 +- sha512 +- blake3 +- xxh3 +- xxh128 +``` Then - $ rclone hashsum MD5 remote:path +```console +rclone hashsum MD5 remote:path +``` Note that hash names are case insensitive and values are output in lower case. - ``` rclone hashsum [ remote:path] [flags] ``` @@ -5460,7 +6296,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -5490,15 +6326,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone link Generate public link to file/folder. @@ -5507,10 +6349,12 @@ Generate public link to file/folder. Create, retrieve or remove a public link to the given file or folder. - rclone link remote:path/to/file - rclone link remote:path/to/folder/ - rclone link --unlink remote:path/to/folder/ - rclone link --expire 1d remote:path/to/file +```console +rclone link remote:path/to/file +rclone link remote:path/to/folder/ +rclone link --unlink remote:path/to/folder/ +rclone link --expire 1d remote:path/to/file +``` If you supply the --expire flag, it will set the expiration time otherwise it will use the default (100 years). **Note** not all @@ -5523,10 +6367,9 @@ don't will just ignore it. If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will -always by default be created with the least constraints – e.g. no +always by default be created with the least constraints - e.g. no expiry, no password protection, accessible without account. - ``` rclone link remote:path [flags] ``` @@ -5543,15 +6386,20 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone listremotes List all the remotes in the config file and defined in environment variables. ## Synopsis - Lists all the available remotes from the config file, or the remotes matching an optional filter. @@ -5565,7 +6413,6 @@ Result can be filtered by a filter argument which applies to all attributes, and/or filter flags specific for each attribute. 
The values must be specified according to regular rclone filtering pattern syntax. - ``` rclone listremotes [] [flags] ``` @@ -5587,8 +6434,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone lsf List directories and objects in remote:path formatted for parsing. @@ -5600,41 +6453,47 @@ standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. -Eg +E.g. - $ rclone lsf swift:bucket - bevajer5jef - canole - diwogej7 - ferejej3gux/ - fubuwic +```console +$ rclone lsf swift:bucket +bevajer5jef +canole +diwogej7 +ferejej3gux/ +fubuwic +``` Use the `--format` option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: - p - path - s - size - t - modification time - h - hash - i - ID of object - o - Original ID of underlying object - m - MimeType of object if known - e - encrypted name - T - tier of storage if known, e.g. "Hot" or "Cool" - M - Metadata of object in JSON blob format, eg {"key":"value"} +```text +p - path +s - size +t - modification time +h - hash +i - ID of object +o - Original ID of underlying object +m - MimeType of object if known +e - encrypted name +T - tier of storage if known, e.g. "Hot" or "Cool" +M - Metadata of object in JSON blob format, eg {"key":"value"} +``` So if you wanted the path, size and modification time, you would use `--format "pst"`, or maybe `--format "tsp"` to put the path last. -Eg +E.g. - $ rclone lsf --format "tsp" swift:bucket - 2016-06-25 18:55:41;60295;bevajer5jef - 2016-06-25 18:55:43;90613;canole - 2016-06-25 18:55:43;94467;diwogej7 - 2018-04-26 08:50:45;0;ferejej3gux/ - 2016-06-25 18:55:40;37600;fubuwic +```console +$ rclone lsf --format "tsp" swift:bucket +2016-06-25 18:55:41;60295;bevajer5jef +2016-06-25 18:55:43;90613;canole +2016-06-25 18:55:43;94467;diwogej7 +2018-04-26 08:50:45;0;ferejej3gux/ +2016-06-25 18:55:40;37600;fubuwic +``` If you specify "h" in the format you will get the MD5 hash by default, use the `--hash` flag to change which hash you want. Note that this @@ -5645,16 +6504,20 @@ type. For example, to emulate the md5sum command you can use - rclone lsf -R --hash MD5 --format hp --separator " " --files-only . +```console +rclone lsf -R --hash MD5 --format hp --separator " " --files-only . +``` -Eg +E.g. - $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket - 7908e352297f0f530b84a756f188baa3 bevajer5jef - cd65ac234e6fea5925974a51cdd865cc canole - 03b5341b4f234b9d984d03ad076bae91 diwogej7 - 8fd37c3810dd660778137ac3a66cc06d fubuwic - 99713e14a4c4ff553acaf1930fad985b gixacuh7ku +```console +$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket +7908e352297f0f530b84a756f188baa3 bevajer5jef +cd65ac234e6fea5925974a51cdd865cc canole +03b5341b4f234b9d984d03ad076bae91 diwogej7 +8fd37c3810dd660778137ac3a66cc06d fubuwic +99713e14a4c4ff553acaf1930fad985b gixacuh7ku +``` (Though "rclone md5sum ." is an easier way of typing this.) @@ -5662,24 +6525,28 @@ By default the separator is ";" this can be changed with the `--separator` flag. Note that separators aren't escaped in the path so putting it last is a good strategy. -Eg +E.g. 
- $ rclone lsf --separator "," --format "tshp" swift:bucket - 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef - 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole - 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 - 2018-04-26 08:52:53,0,,ferejej3gux/ - 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic +```console +$ rclone lsf --separator "," --format "tshp" swift:bucket +2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef +2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole +2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 +2018-04-26 08:52:53,0,,ferejej3gux/ +2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic +``` You can output in CSV standard format. This will escape things in " -if they contain , +if they contain, -Eg +E.g. - $ rclone lsf --csv --files-only --format ps remote:path - test.log,22355 - test.sh,449 - "this file contains a comma, in the file name.txt",6 +```console +$ rclone lsf --csv --files-only --format ps remote:path +test.log,22355 +test.sh,449 +"this file contains a comma, in the file name.txt",6 +``` Note that the `--absolute` parameter is useful for making lists of files to pass to an rclone copy with the `--files-from-raw` flag. @@ -5687,32 +6554,38 @@ to pass to an rclone copy with the `--files-from-raw` flag. For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): - rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files - rclone copy --files-from-raw new_files /path/to/local remote:path +```console +rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files +rclone copy --files-from-raw new_files /path/to/local remote:path +``` The default time format is `'2006-01-02 15:04:05'`. -[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with the `--time-format` flag. -Examples: +[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with +the `--time-format` flag. Examples: - rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)' - rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000' - rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00' - rclone lsf remote:path --format pt --time-format RFC3339 - rclone lsf remote:path --format pt --time-format DateOnly - rclone lsf remote:path --format pt --time-format max -`--time-format max` will automatically truncate '`2006-01-02 15:04:05.000000000`' +```console +rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)' +rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000' +rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00' +rclone lsf remote:path --format pt --time-format RFC3339 +rclone lsf remote:path --format pt --time-format DateOnly +rclone lsf remote:path --format pt --time-format max +rclone lsf remote:path --format pt --time-format unix +rclone lsf remote:path --format pt --time-format unixnano +``` + +`--time-format max` will automatically truncate `2006-01-02 15:04:05.000000000` to the maximum precision supported by the remote. - Any of the filtering options can be applied to this command. 
There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -5720,13 +6593,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone lsf remote:path [flags] ``` @@ -5744,7 +6617,7 @@ rclone lsf remote:path [flags] -h, --help help for lsf -R, --recursive Recurse into the listing -s, --separator string Separator for the items in the format (default ";") - -t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --time-format string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -5754,7 +6627,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -5784,15 +6657,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone lsjson List directories and objects in the path in JSON format. @@ -5803,25 +6682,27 @@ List directories and objects in the path in JSON format. 
The output is an array of Items, where each Item looks like this: - { - "Hashes" : { - "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", - "MD5" : "b1946ac92492d2347c6235b4d2611184", - "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" - }, - "ID": "y2djkhiujf83u33", - "OrigID": "UYOJVTUW00Q1RzTDA", - "IsBucket" : false, - "IsDir" : false, - "MimeType" : "application/octet-stream", - "ModTime" : "2017-05-31T16:15:57.034468261+01:00", - "Name" : "file.txt", - "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", - "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", - "Path" : "full/path/goes/here/file.txt", - "Size" : 6, - "Tier" : "hot", - } +```json +{ + "Hashes" : { + "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", + "MD5" : "b1946ac92492d2347c6235b4d2611184", + "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" + }, + "ID": "y2djkhiujf83u33", + "OrigID": "UYOJVTUW00Q1RzTDA", + "IsBucket" : false, + "IsDir" : false, + "MimeType" : "application/octet-stream", + "ModTime" : "2017-05-31T16:15:57.034468261+01:00", + "Name" : "file.txt", + "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", + "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", + "Path" : "full/path/goes/here/file.txt", + "Size" : 6, + "Tier" : "hot", +} +``` The exact set of properties included depends on the backend: @@ -5883,11 +6764,11 @@ Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -5895,13 +6776,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone lsjson remote:path [flags] ``` @@ -5930,7 +6811,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -5960,15 +6841,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. 
+ + + # rclone mount Mount the remote as file system on a mountpoint. @@ -5978,7 +6865,7 @@ Mount the remote as file system on a mountpoint. Rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. -First set up your remote using `rclone config`. Check it works with `rclone ls` etc. +First set up your remote using `rclone config`. Check it works with `rclone ls` etc. On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. Use the `--daemon` flag @@ -5993,7 +6880,9 @@ mount, waits until success or timeout and exits with appropriate code On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount` is an **empty** **existing** directory: - rclone mount remote:path/to/files /path/to/local/mount +```console +rclone mount remote:path/to/files /path/to/local/mount +``` On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows) for details. If foreground mount is used interactively from a console window, @@ -6003,26 +6892,30 @@ used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C. The following examples will mount to an automatically assigned drive, to specific drive letter `X:`, to path `C:\path\parent\mount` (where parent directory or drive must exist, and mount must **not** exist, -and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and -the last example will mount as network share `\\cloud\remote` and map it to an +and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), +and the last example will mount as network share `\\cloud\remote` and map it to an automatically assigned drive: - rclone mount remote:path/to/files * - rclone mount remote:path/to/files X: - rclone mount remote:path/to/files C:\path\parent\mount - rclone mount remote:path/to/files \\cloud\remote +```console +rclone mount remote:path/to/files * +rclone mount remote:path/to/files X: +rclone mount remote:path/to/files C:\path\parent\mount +rclone mount remote:path/to/files \\cloud\remote +``` When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped. When running in background mode the user will have to stop the mount manually: - # Linux - fusermount -u /path/to/local/mount - #... or on some systems - fusermount3 -u /path/to/local/mount - # OS X or Linux when using nfsmount - umount /path/to/local/mount +```console +# Linux +fusermount -u /path/to/local/mount +#... or on some systems +fusermount3 -u /path/to/local/mount +# OS X or Linux when using nfsmount +umount /path/to/local/mount +``` The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually. @@ -6036,7 +6929,7 @@ at all, then 1 PiB is set as both the total and the free size. ## Installing on Windows -To run rclone mount on Windows, you will need to +To run `rclone mount on Windows`, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). [WinFsp](https://github.com/winfsp/winfsp) is an open-source @@ -6057,20 +6950,22 @@ thumbnails for image and video files on network drives. In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described -as a network share. 
If you mount an rclone remote using the default, fixed drive mode -and experience unexpected program errors, freezes or other issues, consider mounting -as a network drive instead. +as a network share. If you mount an rclone remote using the default, fixed drive +mode and experience unexpected program errors, freezes or other issues, consider +mounting as a network drive instead. When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a **nonexistent** subdirectory of an **existing** parent directory or drive. Using the special value `*` will tell rclone to -automatically assign the next available drive letter, starting with Z: and moving backward. -Examples: +automatically assign the next available drive letter, starting with Z: and moving +backward. Examples: - rclone mount remote:path/to/files * - rclone mount remote:path/to/files X: - rclone mount remote:path/to/files C:\path\parent\mount - rclone mount remote:path/to/files X: +```console +rclone mount remote:path/to/files * +rclone mount remote:path/to/files X: +rclone mount remote:path/to/files C:\path\parent\mount +rclone mount remote:path/to/files X: +``` Option `--volname` can be used to set a custom volume name for the mounted file system. The default is to use the remote name and path. @@ -6080,24 +6975,28 @@ to your mount command. Mounting to a directory path is not supported in this mode, it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter. - rclone mount remote:path/to/files X: --network-mode +```console +rclone mount remote:path/to/files X: --network-mode +``` -A volume name specified with `--volname` will be used to create the network share path. -A complete UNC path, such as `\\cloud\remote`, optionally with path +A volume name specified with `--volname` will be used to create the network share +path. A complete UNC path, such as `\\cloud\remote`, optionally with path `\\cloud\remote\madeup\path`, will be used as is. Any other string will be used as the share part, after a default prefix `\\server\`. If no volume name is specified then `\\server\share` will be used. -You must make sure the volume name is unique when you are mounting more than one drive, -or else the mount command will fail. The share name will treated as the volume label for -the mapped drive, shown in Windows Explorer etc, while the complete +You must make sure the volume name is unique when you are mounting more than one +drive, or else the mount command will fail. The share name will treated as the +volume label for the mapped drive, shown in Windows Explorer etc, while the complete `\\server\share` will be reported as the remote UNC path by `net use` etc, just like a normal network drive mapping. If you specify a full network share UNC path with `--volname`, this will implicitly set the `--network-mode` option, so the following two examples have same result: - rclone mount remote:path/to/files X: --network-mode - rclone mount remote:path/to/files X: --volname \\server\share +```console +rclone mount remote:path/to/files X: --network-mode +rclone mount remote:path/to/files X: --volname \\server\share +``` You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with `*` and use that as @@ -6105,15 +7004,16 @@ mountpoint, and instead use the UNC path specified as the volume name, as if it specified with the `--volname` option. 
This will also implicitly set the `--network-mode` option. This means the following two examples have same result: - rclone mount remote:path/to/files \\cloud\remote - rclone mount remote:path/to/files * --volname \\cloud\remote +```console +rclone mount remote:path/to/files \\cloud\remote +rclone mount remote:path/to/files * --volname \\cloud\remote +``` There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: `--fuse-flag --VolumePrefix=\server\share`. Note that the path must be with just a single backslash prefix in this case. - *Note:* In previous versions of rclone this was the only supported method. [Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) @@ -6126,11 +7026,11 @@ The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL). -The mounted filesystem will normally get three entries in its access-control list (ACL), -representing permissions for the POSIX permission scopes: Owner, group and others. -By default, the owner and group will be taken from the current user, and the built-in -group "Everyone" will be used to represent others. The user/group can be customized -with FUSE options "UserName" and "GroupName", +The mounted filesystem will normally get three entries in its access-control list +(ACL), representing permissions for the POSIX permission scopes: Owner, group and +others. By default, the owner and group will be taken from the current user, and +the built-in group "Everyone" will be used to represent others. The user/group can +be customized with FUSE options "UserName" and "GroupName", e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`. The permissions on each entry will be set according to [options](#options) `--dir-perms` and `--file-perms`, which takes a value in traditional Unix @@ -6230,58 +7130,74 @@ does not suffer from the same limitations. ## Mounting on macOS -Mounting on macOS can be done either via [built-in NFS server](https://rclone.org/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/) -(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional -FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system -which "mounts" via an NFSv4 local server. +Mounting on macOS can be done either via [built-in NFS server](https://rclone.org/commands/rclone_serve_nfs/), +[macFUSE](https://osxfuse.github.io/) (also known as osxfuse) or +[FUSE-T](https://www.fuse-t.org/).macFUSE is a traditional FUSE driver utilizing +a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which +"mounts" via an NFSv4 local server. -#### Unicode Normalization +### Unicode Normalization It is highly recommended to keep the default of `--no-unicode-normalization=false` for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). ### NFS mount -This method spins up an NFS server using [serve nfs](https://rclone.org/commands/rclone_serve_nfs/) command and mounts -it to the specified mountpoint. If you run this in background mode using |--daemon|, you will need to -send SIGTERM signal to the rclone process using |kill| command to stop the mount. 
+This method spins up an NFS server using [serve nfs](https://rclone.org/commands/rclone_serve_nfs/) +command and mounts it to the specified mountpoint. If you run this in background +mode using |--daemon|, you will need to send SIGTERM signal to the rclone process +using |kill| command to stop the mount. -Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. -This should not be set too low or you may experience errors when trying to access files. The default is 1000000, +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file +handles stored by the `nfsmount` caching handler. This should not be set too low +or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. ### macFUSE Notes -If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from -the website, rclone will locate the macFUSE libraries without any further intervention. -If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager, -the following addition steps are required. +If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) +from the website, rclone will locate the macFUSE libraries without any further intervention. +If however, macFUSE is installed using the [macports](https://www.macports.org/) +package manager, the following addition steps are required. - sudo mkdir /usr/local/lib - cd /usr/local/lib - sudo ln -s /opt/local/lib/libfuse.2.dylib +```console +sudo mkdir /usr/local/lib +cd /usr/local/lib +sudo ln -s /opt/local/lib/libfuse.2.dylib +``` ### FUSE-T Limitations, Caveats, and Notes -There are some limitations, caveats, and notes about how it works. These are current as -of FUSE-T version 1.0.14. +There are some limitations, caveats, and notes about how it works. These are +current as of FUSE-T version 1.0.14. #### ModTime update on read As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): -> File access and modification times cannot be set separately as it seems to be an -> issue with the NFS client which always modifies both. Can be reproduced with +> File access and modification times cannot be set separately as it seems to be an +> issue with the NFS client which always modifies both. Can be reproduced with > 'touch -m' and 'touch -a' commands -This means that viewing files with various tools, notably macOS Finder, will cause rlcone -to update the modification time of the file. This may make rclone upload a full new copy -of the file. - +This means that viewing files with various tools, notably macOS Finder, will cause +rlcone to update the modification time of the file. This may make rclone upload a +full new copy of the file. + #### Read Only mounts -When mounting with `--read-only`, attempts to write to files will fail *silently* as -opposed to with a clear warning as in macFUSE. +When mounting with `--read-only`, attempts to write to files will fail *silently* +as opposed to with a clear warning as in macFUSE. 
+ +# Mounting on Linux + +On newer versions of Ubuntu, you may encounter the following error when running +`rclone mount`: + +> NOTICE: mount helper error: fusermount3: mount failed: Permission denied +> CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1 +This may be due to newer [Apparmor](https://wiki.ubuntu.com/AppArmor) restrictions, +which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need to +`sudo apt install apparmor-utils` beforehand). ## Limitations @@ -6382,12 +7298,14 @@ helper you should symlink rclone binary to `/sbin/mount.rclone` and optionally rclone will detect it and translate command-line arguments appropriately. Now you can run classic mounts like this: -``` + +```console mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem ``` or create systemd mount units: -``` + +```ini # /etc/systemd/system/mnt-data.mount [Unit] Description=Mount for /mnt/data @@ -6399,7 +7317,8 @@ Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone ``` optionally accompanied by systemd automount unit -``` + +```ini # /etc/systemd/system/mnt-data.automount [Unit] Description=AutoMount for /mnt/data @@ -6411,7 +7330,8 @@ WantedBy=multi-user.target ``` or add in `/etc/fstab` a line like -``` + +```console sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0 ``` @@ -6460,8 +7380,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -6473,16 +7395,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -6513,6 +7441,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -6520,6 +7449,7 @@ find that you need one or the other or both. 
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -6567,13 +7497,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -6583,10 +7513,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -6669,9 +7599,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -6685,9 +7617,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -6725,32 +7657,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. 
+``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -6762,7 +7703,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -6772,7 +7714,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -6850,7 +7792,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -6861,7 +7805,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. 
@@ -6879,7 +7823,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -6904,8 +7848,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone mount remote:path /path/to/mountpoint [flags] ``` @@ -6976,7 +7918,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -7004,8 +7946,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone moveto Move file or directory from source to dest. @@ -7021,18 +7969,22 @@ like the [move](https://rclone.org/commands/rclone_move/) command. So - rclone moveto src dst +```console +rclone moveto src dst +``` where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows. This will: - if src is file - move it to dst, overwriting an existing file if it exists - if src is directory - move it to dst, overwriting existing files if they exist - see move command for full details +```text +if src is file + move it to dst, overwriting an existing file if it exists +if src is directory + move it to dst, overwriting existing files if they exist + see move command for full details +``` This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. src will be deleted on @@ -7043,12 +7995,13 @@ successful transfer. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -7081,9 +8034,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). 
``` rclone moveto source:path dest:path [flags] @@ -7108,7 +8059,7 @@ rclone moveto source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -7118,7 +8069,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -7159,7 +8110,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -7169,7 +8120,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -7199,15 +8150,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone ncdu Explore a remote with a text based user interface. @@ -7228,41 +8185,45 @@ structure as it goes along. You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are: - ↑,↓ or k,j to Move - →,l to enter - ←,h to return - g toggle graph - c toggle counts - a toggle average size in directory - m toggle modified time - u toggle human-readable format - n,s,C,A,M sort by name,size,count,asize,mtime - d delete file/directory - v select file/directory - V enter visual select mode - D delete selected files/directories - y copy current path to clipboard - Y display current path - ^L refresh screen (fix screen corruption) - r recalculate file sizes - ? to toggle help on and off - ESC to close the menu box - q/^c to quit +```text + ↑,↓ or k,j to Move + →,l to enter + ←,h to return + g toggle graph + c toggle counts + a toggle average size in directory + m toggle modified time + u toggle human-readable format + n,s,C,A,M sort by name,size,count,asize,mtime + d delete file/directory + v select file/directory + V enter visual select mode + D delete selected files/directories + y copy current path to clipboard + Y display current path + ^L refresh screen (fix screen corruption) + r recalculate file sizes + ? 
to toggle help on and off + ESC to close the menu box + q/^c to quit +``` Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at end of line. These flags have the following meaning: - e means this is an empty directory, i.e. contains no files (but - may contain empty subdirectories) - ~ means this is a directory where some of the files (possibly in - subdirectories) have unknown size, and therefore the directory - size may be underestimated (and average size inaccurate, as it - is average of the files with known sizes). - . means an error occurred while reading a subdirectory, and - therefore the directory size may be underestimated (and average - size inaccurate) - ! means an error occurred while reading this directory +```text +e means this is an empty directory, i.e. contains no files (but + may contain empty subdirectories) +~ means this is a directory where some of the files (possibly in + subdirectories) have unknown size, and therefore the directory + size may be underestimated (and average size inaccurate, as it + is average of the files with known sizes). +. means an error occurred while reading a subdirectory, and + therefore the directory size may be underestimated (and average + size inaccurate) +! means an error occurred while reading this directory +``` This an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for rclone remotes. It is missing lots of features at the moment @@ -7275,7 +8236,6 @@ For a non-interactive listing of the remote, see the [tree](https://rclone.org/commands/rclone_tree/) command. To just get the total size of the remote you can also use the [size](https://rclone.org/commands/rclone_size/) command. - ``` rclone ncdu remote:path [flags] ``` @@ -7293,7 +8253,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -7323,15 +8283,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone nfsmount Mount the remote as file system on a mountpoint. @@ -7341,7 +8307,7 @@ Mount the remote as file system on a mountpoint. Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. -First set up your remote using `rclone config`. Check it works with `rclone ls` etc. +First set up your remote using `rclone config`. Check it works with `rclone ls` etc. On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. 
Use the `--daemon` flag @@ -7356,7 +8322,9 @@ mount, waits until success or timeout and exits with appropriate code On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount` is an **empty** **existing** directory: - rclone nfsmount remote:path/to/files /path/to/local/mount +```console +rclone nfsmount remote:path/to/files /path/to/local/mount +``` On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows) for details. If foreground mount is used interactively from a console window, @@ -7366,26 +8334,30 @@ used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C. The following examples will mount to an automatically assigned drive, to specific drive letter `X:`, to path `C:\path\parent\mount` (where parent directory or drive must exist, and mount must **not** exist, -and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and -the last example will mount as network share `\\cloud\remote` and map it to an +and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), +and the last example will mount as network share `\\cloud\remote` and map it to an automatically assigned drive: - rclone nfsmount remote:path/to/files * - rclone nfsmount remote:path/to/files X: - rclone nfsmount remote:path/to/files C:\path\parent\mount - rclone nfsmount remote:path/to/files \\cloud\remote +```console +rclone nfsmount remote:path/to/files * +rclone nfsmount remote:path/to/files X: +rclone nfsmount remote:path/to/files C:\path\parent\mount +rclone nfsmount remote:path/to/files \\cloud\remote +``` When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped. When running in background mode the user will have to stop the mount manually: - # Linux - fusermount -u /path/to/local/mount - #... or on some systems - fusermount3 -u /path/to/local/mount - # OS X or Linux when using nfsmount - umount /path/to/local/mount +```console +# Linux +fusermount -u /path/to/local/mount +#... or on some systems +fusermount3 -u /path/to/local/mount +# OS X or Linux when using nfsmount +umount /path/to/local/mount +``` The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually. @@ -7399,7 +8371,7 @@ at all, then 1 PiB is set as both the total and the free size. ## Installing on Windows -To run rclone nfsmount on Windows, you will need to +To run `rclone nfsmount on Windows`, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). [WinFsp](https://github.com/winfsp/winfsp) is an open-source @@ -7420,20 +8392,22 @@ thumbnails for image and video files on network drives. In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described -as a network share. If you mount an rclone remote using the default, fixed drive mode -and experience unexpected program errors, freezes or other issues, consider mounting -as a network drive instead. +as a network share. If you mount an rclone remote using the default, fixed drive +mode and experience unexpected program errors, freezes or other issues, consider +mounting as a network drive instead. 
When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a **nonexistent** subdirectory of an **existing** parent directory or drive. Using the special value `*` will tell rclone to -automatically assign the next available drive letter, starting with Z: and moving backward. -Examples: +automatically assign the next available drive letter, starting with Z: and moving +backward. Examples: - rclone nfsmount remote:path/to/files * - rclone nfsmount remote:path/to/files X: - rclone nfsmount remote:path/to/files C:\path\parent\mount - rclone nfsmount remote:path/to/files X: +```console +rclone nfsmount remote:path/to/files * +rclone nfsmount remote:path/to/files X: +rclone nfsmount remote:path/to/files C:\path\parent\mount +rclone nfsmount remote:path/to/files X: +``` Option `--volname` can be used to set a custom volume name for the mounted file system. The default is to use the remote name and path. @@ -7443,24 +8417,28 @@ to your nfsmount command. Mounting to a directory path is not supported in this mode, it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter. - rclone nfsmount remote:path/to/files X: --network-mode +```console +rclone nfsmount remote:path/to/files X: --network-mode +``` -A volume name specified with `--volname` will be used to create the network share path. -A complete UNC path, such as `\\cloud\remote`, optionally with path +A volume name specified with `--volname` will be used to create the network share +path. A complete UNC path, such as `\\cloud\remote`, optionally with path `\\cloud\remote\madeup\path`, will be used as is. Any other string will be used as the share part, after a default prefix `\\server\`. If no volume name is specified then `\\server\share` will be used. -You must make sure the volume name is unique when you are mounting more than one drive, -or else the mount command will fail. The share name will treated as the volume label for -the mapped drive, shown in Windows Explorer etc, while the complete +You must make sure the volume name is unique when you are mounting more than one +drive, or else the mount command will fail. The share name will treated as the +volume label for the mapped drive, shown in Windows Explorer etc, while the complete `\\server\share` will be reported as the remote UNC path by `net use` etc, just like a normal network drive mapping. If you specify a full network share UNC path with `--volname`, this will implicitly set the `--network-mode` option, so the following two examples have same result: - rclone nfsmount remote:path/to/files X: --network-mode - rclone nfsmount remote:path/to/files X: --volname \\server\share +```console +rclone nfsmount remote:path/to/files X: --network-mode +rclone nfsmount remote:path/to/files X: --volname \\server\share +``` You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with `*` and use that as @@ -7468,15 +8446,16 @@ mountpoint, and instead use the UNC path specified as the volume name, as if it specified with the `--volname` option. This will also implicitly set the `--network-mode` option. 
This means the following two examples have same result: - rclone nfsmount remote:path/to/files \\cloud\remote - rclone nfsmount remote:path/to/files * --volname \\cloud\remote +```console +rclone nfsmount remote:path/to/files \\cloud\remote +rclone nfsmount remote:path/to/files * --volname \\cloud\remote +``` There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: `--fuse-flag --VolumePrefix=\server\share`. Note that the path must be with just a single backslash prefix in this case. - *Note:* In previous versions of rclone this was the only supported method. [Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) @@ -7489,11 +8468,11 @@ The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL). -The mounted filesystem will normally get three entries in its access-control list (ACL), -representing permissions for the POSIX permission scopes: Owner, group and others. -By default, the owner and group will be taken from the current user, and the built-in -group "Everyone" will be used to represent others. The user/group can be customized -with FUSE options "UserName" and "GroupName", +The mounted filesystem will normally get three entries in its access-control list +(ACL), representing permissions for the POSIX permission scopes: Owner, group and +others. By default, the owner and group will be taken from the current user, and +the built-in group "Everyone" will be used to represent others. The user/group can +be customized with FUSE options "UserName" and "GroupName", e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`. The permissions on each entry will be set according to [options](#options) `--dir-perms` and `--file-perms`, which takes a value in traditional Unix @@ -7593,58 +8572,74 @@ does not suffer from the same limitations. ## Mounting on macOS -Mounting on macOS can be done either via [built-in NFS server](https://rclone.org/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/) -(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional -FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system -which "mounts" via an NFSv4 local server. +Mounting on macOS can be done either via [built-in NFS server](https://rclone.org/commands/rclone_serve_nfs/), +[macFUSE](https://osxfuse.github.io/) (also known as osxfuse) or +[FUSE-T](https://www.fuse-t.org/).macFUSE is a traditional FUSE driver utilizing +a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which +"mounts" via an NFSv4 local server. -#### Unicode Normalization +### Unicode Normalization It is highly recommended to keep the default of `--no-unicode-normalization=false` for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). ### NFS mount -This method spins up an NFS server using [serve nfs](https://rclone.org/commands/rclone_serve_nfs/) command and mounts -it to the specified mountpoint. If you run this in background mode using |--daemon|, you will need to -send SIGTERM signal to the rclone process using |kill| command to stop the mount. +This method spins up an NFS server using [serve nfs](https://rclone.org/commands/rclone_serve_nfs/) +command and mounts it to the specified mountpoint. 
If you run this in background +mode using |--daemon|, you will need to send SIGTERM signal to the rclone process +using |kill| command to stop the mount. -Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. -This should not be set too low or you may experience errors when trying to access files. The default is 1000000, +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file +handles stored by the `nfsmount` caching handler. This should not be set too low +or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. ### macFUSE Notes -If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from -the website, rclone will locate the macFUSE libraries without any further intervention. -If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager, -the following addition steps are required. +If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) +from the website, rclone will locate the macFUSE libraries without any further intervention. +If however, macFUSE is installed using the [macports](https://www.macports.org/) +package manager, the following addition steps are required. - sudo mkdir /usr/local/lib - cd /usr/local/lib - sudo ln -s /opt/local/lib/libfuse.2.dylib +```console +sudo mkdir /usr/local/lib +cd /usr/local/lib +sudo ln -s /opt/local/lib/libfuse.2.dylib +``` ### FUSE-T Limitations, Caveats, and Notes -There are some limitations, caveats, and notes about how it works. These are current as -of FUSE-T version 1.0.14. +There are some limitations, caveats, and notes about how it works. These are +current as of FUSE-T version 1.0.14. #### ModTime update on read As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): -> File access and modification times cannot be set separately as it seems to be an -> issue with the NFS client which always modifies both. Can be reproduced with +> File access and modification times cannot be set separately as it seems to be an +> issue with the NFS client which always modifies both. Can be reproduced with > 'touch -m' and 'touch -a' commands -This means that viewing files with various tools, notably macOS Finder, will cause rlcone -to update the modification time of the file. This may make rclone upload a full new copy -of the file. - +This means that viewing files with various tools, notably macOS Finder, will cause +rlcone to update the modification time of the file. This may make rclone upload a +full new copy of the file. + #### Read Only mounts -When mounting with `--read-only`, attempts to write to files will fail *silently* as -opposed to with a clear warning as in macFUSE. +When mounting with `--read-only`, attempts to write to files will fail *silently* +as opposed to with a clear warning as in macFUSE. + +# Mounting on Linux + +On newer versions of Ubuntu, you may encounter the following error when running +`rclone mount`: + +> NOTICE: mount helper error: fusermount3: mount failed: Permission denied +> CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1 +This may be due to newer [Apparmor](https://wiki.ubuntu.com/AppArmor) restrictions, +which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need to +`sudo apt install apparmor-utils` beforehand). 
## Limitations @@ -7745,12 +8740,14 @@ helper you should symlink rclone binary to `/sbin/mount.rclone` and optionally rclone will detect it and translate command-line arguments appropriately. Now you can run classic mounts like this: -``` + +```console mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem ``` or create systemd mount units: -``` + +```ini # /etc/systemd/system/mnt-data.mount [Unit] Description=Mount for /mnt/data @@ -7762,7 +8759,8 @@ Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone ``` optionally accompanied by systemd automount unit -``` + +```ini # /etc/systemd/system/mnt-data.automount [Unit] Description=AutoMount for /mnt/data @@ -7774,7 +8772,8 @@ WantedBy=multi-user.target ``` or add in `/etc/fstab` a line like -``` + +```console sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0 ``` @@ -7823,8 +8822,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -7836,16 +8837,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -7876,6 +8883,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -7883,6 +8891,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -7930,13 +8939,13 @@ directly to the remote without caching anything on disk. 
This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -7946,10 +8955,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -8032,9 +9041,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -8048,9 +9059,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -8088,32 +9099,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. 
+```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -8125,7 +9145,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -8135,7 +9156,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -8213,7 +9234,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -8224,7 +9247,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -8242,7 +9265,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -8267,8 +9290,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. 
- - ``` rclone nfsmount remote:path /path/to/mountpoint [flags] ``` @@ -8344,7 +9365,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -8372,8 +9393,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone obscure Obscure password for use in the rclone config file. @@ -8383,9 +9410,8 @@ Obscure password for use in the rclone config file. In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is **not** a secure way of encrypting these -passwords as rclone can decrypt them - it is to prevent "eyedropping" -- namely someone seeing a password in the rclone config file by -accident. +passwords as rclone can decrypt them - it is to prevent "eyedropping" - +namely someone seeing a password in the rclone config file by accident. Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 @@ -8395,7 +9421,9 @@ This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline. - echo "secretpassword" | rclone obscure - +```console +echo "secretpassword" | rclone obscure - +``` If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. @@ -8418,8 +9446,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone rc Run a command against a running rclone. @@ -8428,8 +9462,8 @@ Run a command against a running rclone. This runs a command against a running rclone. Use the `--url` flag to specify an non default URL to connect on. This can be either a -":port" which is taken to mean "http://localhost:port" or a -"host:port" which is taken to mean "http://host:port" +":port" which is taken to mean or a +"host:port" which is taken to mean . A username and password can be passed in with `--user` and `--pass`. @@ -8438,10 +9472,12 @@ Note that `--rc-addr`, `--rc-user`, `--rc-pass` will be read also for The `--unix-socket` flag can be used to connect over a unix socket like this - # start server on /tmp/my.socket - rclone rcd --rc-addr unix:///tmp/my.socket - # Connect to it - rclone rc --unix-socket /tmp/my.socket core/stats +```sh +# start server on /tmp/my.socket +rclone rcd --rc-addr unix:///tmp/my.socket +# Connect to it +rclone rc --unix-socket /tmp/my.socket core/stats +``` Arguments should be passed in as parameter=value. @@ -8456,29 +9492,38 @@ options in the form `-o key=value` or `-o key`. It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings. - -o key=value -o key2 +```text +-o key=value -o key2 +``` Will place this in the "opt" value - {"key":"value", "key2","") - +```json +{"key":"value", "key2","") +``` The `-a`/`--arg` option can be used to set strings in the "arg" value. 
It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. - -a value -a value2 +```text +-a value -a value2 +``` Will place this in the "arg" value - ["value", "value2"] +```json +["value", "value2"] +``` Use `--loopback` to connect to the rclone instance running `rclone rc`. This is very useful for testing commands without having to run an rclone rc server, e.g.: - rclone rc --loopback operations/about fs=/ +```sh +rclone rc --loopback operations/about fs=/ +``` Use `rclone rc` to see a list of all possible commands. @@ -8505,8 +9550,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone rcat Copies standard input to file on remote. @@ -8515,8 +9566,10 @@ Copies standard input to file on remote. Reads from standard input (stdin) and copies it to a single remote file. - echo "hello world" | rclone rcat remote:path/to/file - ffmpeg - | rclone rcat remote:path/to/file +```console +echo "hello world" | rclone rcat remote:path/to/file +ffmpeg - | rclone rcat remote:path/to/file +``` If the remote file already exists, it will be overwritten. @@ -8561,7 +9614,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -8569,8 +9622,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone rcd Run rclone listening to remote control commands only. @@ -8618,6 +9677,8 @@ inserts leading and trailing "/" on `--rc-baseurl`, so `--rc-baseurl "rclone"`, `--rc-baseurl "/rclone"` and `--rc-baseurl "/rclone/"` are all treated identically. +`--rc-disable-zip` may be set to disable the zipping download option. + ### TLS (SSL) By default this will serve over http. If you want you can serve over @@ -8643,41 +9704,42 @@ by `--rc-addr`). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. Socket activation can be tested ad-hoc with the `systemd-socket-activate`command - systemd-socket-activate -l 8000 -- rclone serve +```console +systemd-socket-activate -l 8000 -- rclone serve +``` This will socket-activate rclone on the first connection to port 8000 over TCP. + ### Template `--rc-template` allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: -| Parameter | Description | -| :---------- | :---------- | -| .Name | The full path of a file/directory. | -| .Title | Directory listing of .Name | -| .Sort | The current sort used. This is changeable via ?sort= parameter | -| | Sort Options: namedirfirst,name,size,time (default namedirfirst) | -| .Order | The current ordering used. This is changeable via ?order= parameter | -| | Order Options: asc,desc (default asc) | -| .Query | Currently unused. | -| .Breadcrumb | Allows for creating a relative navigation | -|-- .Link | The relative to the root link of the Text. 
| -|-- .Text | The Name of the directory. | -| .Entries | Information about a specific file/directory. | -|-- .URL | The 'url' of an entry. | -|-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. | -|-- .IsDir | Boolean for if an entry is a directory or not. | -|-- .Size | Size in Bytes of the entry. | -|-- .ModTime | The UTC timestamp of an entry. | +| Parameter | Subparameter | Description | +| :---------- | :----------- | :---------- | +| .Name | | The full path of a file/directory. | +| .Title | | Directory listing of '.Name'. | +| .Sort | | The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst). | +| .Order | | The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc). | +| .Query | | Currently unused. | +| .Breadcrumb | | Allows for creating a relative navigation. | +| | .Link | The link of the Text relative to the root. | +| | .Text | The Name of the directory. | +| .Entries | | Information about a specific file/directory. | +| | .URL | The url of an entry. | +| | .Leaf | Currently same as '.URL' but intended to be just the name. | +| | .IsDir | Boolean for if an entry is a directory or not. | +| | .Size | Size in bytes of the entry. | +| | .ModTime | The UTC timestamp of an entry. | -The server also makes the following functions available so that they can be used within the -template. These functions help extend the options for dynamic rendering of HTML. They can -be used to render HTML based on specific conditions. +The server also makes the following functions available so that they can be used +within the template. These functions help extend the options for dynamic +rendering of HTML. They can be used to render HTML based on specific conditions. | Function | Description | | :---------- | :---------- | @@ -8694,8 +9756,9 @@ You can either use an htpasswd file which can take lots of users, or set a single username and password with the `--rc-user` and `--rc-pass` flags. Alternatively, you can have the reverse proxy manage authentication and use the -username provided in the configured header with `--user-from-header` (e.g., `--rc---user-from-header=x-remote-user`). -Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. +username provided in the configured header with `--user-from-header` (e.g., `--rc-user-from-header=x-remote-user`). +Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration +may lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the `--client-ca` flag passed to the server, the @@ -8707,9 +9770,11 @@ authentication. Bcrypt is recommended. To create an htpasswd file: - touch htpasswd - htpasswd -B htpasswd user - htpasswd -B htpasswd anotherUser +```console +touch htpasswd +htpasswd -B htpasswd user +htpasswd -B htpasswd anotherUser +``` The password file can be updated while rclone is running. @@ -8717,8 +9782,6 @@ Use `--rc-realm` to set the authentication realm. Use `--rc-salt` to change the password hashing salt from the default. 
- - ``` rclone rcd * [flags] ``` @@ -8736,7 +9799,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags to control the Remote Control API -``` +```text --rc Enable the remote control server --rc-addr stringArray IPaddress:Port or :Port to bind server to (default localhost:5572) --rc-allow-origin string Origin which cross-domain request (CORS) can be executed from @@ -8771,8 +9834,14 @@ Flags to control the Remote Control API ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone rmdirs Remove empty directories under the path. @@ -8798,7 +9867,6 @@ if you have thousands of empty directories consider increasing this number. To delete a path and any objects in it, use the [purge](https://rclone.org/commands/rclone_purge/) command. - ``` rclone rmdirs remote:path [flags] ``` @@ -8817,7 +9885,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -8825,8 +9893,14 @@ Important flags useful for most commands ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone selfupdate Update the rclone binary. @@ -8878,9 +9952,8 @@ command will rename the old executable to 'rclone.old.exe' upon success. Please note that this command was not available before rclone version 1.55. If it fails for you with the message `unknown command "selfupdate"` then -you will need to update manually following the install instructions located -at https://rclone.org/install/ - +you will need to update manually following the +[install documentation](https://rclone.org/install/). ``` rclone selfupdate [flags] @@ -8902,8 +9975,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone serve Serve a remote over a protocol. @@ -8913,7 +9992,16 @@ Serve a remote over a protocol. Serve a remote over a given protocol. Requires the use of a subcommand to specify the protocol, e.g. - rclone serve http remote: +```console +rclone serve http remote: +``` + +When the "--metadata" flag is enabled, the following metadata fields will be provided as headers: +- "content-disposition" +- "cache-control" +- "content-language" +- "content-encoding" +Note: The availability of these fields depends on whether the remote supports metadata. Each subcommand has its own options which you can see in their help. @@ -8932,6 +10020,9 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone serve dlna](https://rclone.org/commands/rclone_serve_dlna/) - Serve remote:path over DLNA * [rclone serve docker](https://rclone.org/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API. @@ -8943,6 +10034,9 @@ See the [global flags page](https://rclone.org/flags/) for global options not li * [rclone serve sftp](https://rclone.org/commands/rclone_serve_sftp/) - Serve the remote over SFTP. * [rclone serve webdav](https://rclone.org/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV. 
+ + + # rclone serve dlna Serve remote:path over DLNA @@ -8997,8 +10091,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -9010,16 +10106,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -9050,6 +10152,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -9057,6 +10160,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -9104,13 +10208,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -9120,10 +10224,10 @@ write will be a lot more compatible, but uses the minimal disk space. 
These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -9206,9 +10310,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -9222,9 +10328,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -9262,32 +10368,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. 
So a @@ -9299,7 +10414,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -9309,7 +10425,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -9387,7 +10503,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -9398,7 +10516,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -9416,7 +10534,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -9441,8 +10559,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone serve dlna remote:path [flags] ``` @@ -9497,7 +10613,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -9525,8 +10641,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone serve docker Serve any remote on docker's volume plugin API. @@ -9543,7 +10665,8 @@ docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example: -``` + +```console sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv ``` @@ -9593,8 +10716,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. 
+```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -9606,16 +10731,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -9646,6 +10777,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -9653,6 +10785,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -9700,13 +10833,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -9716,10 +10849,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -9802,9 +10935,11 @@ read, at the cost of an increased number of requests. 
These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -9818,9 +10953,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -9858,32 +10993,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -9895,7 +11039,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). 
The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -9905,7 +11050,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -9983,7 +11128,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -9994,7 +11141,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -10012,7 +11159,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -10037,8 +11184,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone serve docker [flags] ``` @@ -10114,7 +11259,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -10142,8 +11287,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone serve ftp Serve remote:path over FTP. @@ -10191,8 +11342,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -10204,16 +11357,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. 
Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -10244,6 +11403,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -10251,6 +11411,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -10298,13 +11459,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -10314,10 +11475,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -10400,9 +11561,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -10416,9 +11579,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. 
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -10456,32 +11619,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -10493,7 +11665,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -10503,7 +11676,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -10581,7 +11754,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. 
+```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -10592,7 +11767,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -10610,7 +11785,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -10658,41 +11833,43 @@ options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter + - `_root` - root to use for the backend And it may have this parameter + - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT -``` +```json { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } ``` @@ -10714,9 +11891,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m before it takes effect. This can be used to build general purpose proxies to any kind of -backend that rclone supports. - - +backend that rclone supports. ``` rclone serve ftp remote:path [flags] @@ -10775,7 +11950,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -10803,8 +11978,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone serve http Serve the remote over HTTP. @@ -10854,6 +12035,8 @@ inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, `--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. +`--disable-zip` may be set to disable the zipping download option. + ### TLS (SSL) By default this will serve over http. If you want you can serve over @@ -10879,41 +12062,42 @@ by `--addr`). This allows rclone to be a socket-activated service. 
It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.

Socket activation can be tested ad-hoc with the `systemd-socket-activate` command

- systemd-socket-activate -l 8000 -- rclone serve
+```console
+systemd-socket-activate -l 8000 -- rclone serve
+```

This will socket-activate rclone on the first connection to port 8000 over TCP.
+
### Template

`--template` allows a user to specify a custom markup template for HTTP
and WebDAV serve functions. The server exports the following markup
to be used within the template to serve pages:

-| Parameter | Description |
-| :---------- | :---------- |
-| .Name | The full path of a file/directory. |
-| .Title | Directory listing of .Name |
-| .Sort | The current sort used. This is changeable via ?sort= parameter |
-| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
-| .Order | The current ordering used. This is changeable via ?order= parameter |
-| | Order Options: asc,desc (default asc) |
-| .Query | Currently unused. |
-| .Breadcrumb | Allows for creating a relative navigation |
-|-- .Link | The relative to the root link of the Text. |
-|-- .Text | The Name of the directory. |
-| .Entries | Information about a specific file/directory. |
-|-- .URL | The 'url' of an entry. |
-|-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. |
-|-- .IsDir | Boolean for if an entry is a directory or not. |
-|-- .Size | Size in Bytes of the entry. |
-|-- .ModTime | The UTC timestamp of an entry. |
+| Parameter | Subparameter | Description |
+| :---------- | :----------- | :---------- |
+| .Name | | The full path of a file/directory. |
+| .Title | | Directory listing of '.Name'. |
+| .Sort | | The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst). |
+| .Order | | The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc). |
+| .Query | | Currently unused. |
+| .Breadcrumb | | Allows for creating a relative navigation. |
+| | .Link | The link of the Text relative to the root. |
+| | .Text | The Name of the directory. |
+| .Entries | | Information about a specific file/directory. |
+| | .URL | The url of an entry. |
+| | .Leaf | Currently same as '.URL' but intended to be just the name. |
+| | .IsDir | Boolean for if an entry is a directory or not. |
+| | .Size | Size in bytes of the entry. |
+| | .ModTime | The UTC timestamp of an entry. |

-The server also makes the following functions available so that they can be used within the
-template. These functions help extend the options for dynamic rendering of HTML. They can
-be used to render HTML based on specific conditions.
+The server also makes the following functions available so that they can be used
+within the template. These functions help extend the options for dynamic
+rendering of HTML. They can be used to render HTML based on specific conditions.

| Function | Description |
| :---------- | :---------- |
@@ -10930,8 +12114,9 @@ You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.

Alternatively, you can have the reverse proxy manage authentication and use the
-username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`).
-Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. +username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`). +Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration +may lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the `--client-ca` flag passed to the server, the @@ -10943,9 +12128,11 @@ authentication. Bcrypt is recommended. To create an htpasswd file: - touch htpasswd - htpasswd -B htpasswd user - htpasswd -B htpasswd anotherUser +```console +touch htpasswd +htpasswd -B htpasswd user +htpasswd -B htpasswd anotherUser +``` The password file can be updated while rclone is running. @@ -10974,8 +12161,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -10987,16 +12176,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -11027,6 +12222,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -11034,6 +12230,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -11081,13 +12278,13 @@ directly to the remote without caching anything on disk. 
This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -11097,10 +12294,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -11183,9 +12380,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -11199,9 +12398,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -11239,32 +12438,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. 
+```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -11276,7 +12484,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -11286,7 +12495,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -11364,7 +12573,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -11375,7 +12586,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -11393,7 +12604,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -11441,41 +12652,43 @@ options - it is the job of the proxy program to make a complete config. 
This config generated must have this extra parameter + - `_root` - root to use for the backend And it may have this parameter + - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT -``` +```json { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } ``` @@ -11497,9 +12710,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m before it takes effect. This can be used to build general purpose proxies to any kind of -backend that rclone supports. - - +backend that rclone supports. ``` rclone serve http remote:path [flags] @@ -11516,6 +12727,7 @@ rclone serve http remote:path [flags] --client-ca string Client certificate authority to verify clients with --dir-cache-time Duration Time to cache directory entries for (default 5m0s) --dir-perms FileMode Directory permissions (default 777) + --disable-zip Disable zip download of directories --file-perms FileMode File permissions (default 666) --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for http @@ -11568,7 +12780,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -11596,8 +12808,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone serve nfs Serve the remote as an NFS mount @@ -11605,7 +12823,7 @@ Serve the remote as an NFS mount ## Synopsis Create an NFS server that serves the given remote over the network. - + This implements an NFSv3 server to serve any rclone remote via NFS. The primary purpose for this command is to enable the [mount @@ -11659,12 +12877,16 @@ cache. To serve NFS over the network use following command: - rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full +```sh +rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full +``` This specifies a port that can be used in the mount command. To mount the server under Linux/macOS, use the following command: - - mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint + +```sh +mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint +``` Where `$PORT` is the same port number used in the `serve nfs` command and `$HOSTNAME` is the network address of the machine that `serve nfs` @@ -11699,8 +12921,10 @@ directory should be considered up to date and not refreshed from the backend. 
Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -11712,16 +12936,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -11752,6 +12982,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -11759,6 +12990,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -11806,13 +13038,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -11822,10 +13054,10 @@ write will be a lot more compatible, but uses the minimal disk space. 
These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -11908,9 +13140,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -11924,9 +13158,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -11964,32 +13198,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. 
So a @@ -12001,7 +13244,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -12011,7 +13255,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -12089,7 +13333,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -12100,7 +13346,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -12118,7 +13364,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -12143,8 +13389,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone serve nfs remote:path [flags] ``` @@ -12198,7 +13442,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -12226,8 +13470,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone serve restic Serve the remote for restic's REST API. @@ -12246,7 +13496,7 @@ The server will log errors. Use -v to see access logs. `--bwlimit` will be respected for file transfers. Use `--stats` to control the stats printing. -## Setting up rclone for use by restic ### +## Setting up rclone for use by restic First [set up a remote for your chosen cloud provider](https://rclone.org/docs/#configure). @@ -12257,7 +13507,9 @@ following instructions. Now start the rclone restic server - rclone serve restic -v remote:backup +```console +rclone serve restic -v remote:backup +``` Where you can replace "backup" in the above by whatever path in the remote you wish to use. 
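The server listens on localhost:8080 by default. If that port is already in
use, or the server needs to be reachable from other machines, a different
address can be given with `--addr` - a minimal sketch, with an illustrative
port number:

```console
rclone serve restic -v --addr localhost:8081 remote:backup
```

Remember to use the matching URL in `RESTIC_REPOSITORY` if you change the
address like this.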
@@ -12271,7 +13523,7 @@

Adding `--cache-objects=false` will cause rclone to stop caching objects
returned from the List call. Caching is normally desirable as it speeds
up downloading objects, saves transactions and uses very little memory.

-## Setting up restic to use rclone ###
+## Setting up restic to use rclone

Now you can [follow the restic
instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server)
on setting up restic.

Note that you will need restic 0.8.2 or later to interoperate with
rclone.

For the example above you will want to use "http://localhost:8080/" as
the URL for the REST server.

For example:

- $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
- $ export RESTIC_PASSWORD=yourpassword
- $ restic init
- created restic backend 8b1a4b56ae at rest:http://localhost:8080/
+```console
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
+$ export RESTIC_PASSWORD=yourpassword
+$ restic init
+created restic backend 8b1a4b56ae at rest:http://localhost:8080/

- Please note that knowledge of your password is required to access
- the repository. Losing your password means that your data is
- irrecoverably lost.
- $ restic backup /path/to/files/to/backup
- scan [/path/to/files/to/backup]
- scanned 189 directories, 312 files in 0:00
- [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
- duration: 0:00
- snapshot 45c8fdd8 saved
+Please note that knowledge of your password is required to access
+the repository. Losing your password means that your data is
+irrecoverably lost.
+$ restic backup /path/to/files/to/backup
+scan [/path/to/files/to/backup]
+scanned 189 directories, 312 files in 0:00
+[0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
+duration: 0:00
+snapshot 45c8fdd8 saved

-### Multiple repositories ####
+```
+
+### Multiple repositories

Note that you can use the endpoint to host multiple repositories. Do
this by adding a directory name or path after the URL. Note that
these **must** end with /. E.g.

- $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
- # backup user1 stuff
- $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
- # backup user2 stuff
+```console
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+# backup user1 stuff
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+# backup user2 stuff
+```

-### Private repositories ####
+### Private repositories

The `--private-repos` flag can be used to limit users to repositories
starting with a path of `//`.

@@ -12347,6 +13604,8 @@
inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
identically.

+`--disable-zip` may be set to disable the zipping download option.
+
### TLS (SSL)

By default this will serve over http. If you want you can serve over
@@ -12372,13 +13631,16 @@ by `--addr`).

This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.

Socket activation can be tested ad-hoc with the `systemd-socket-activate` command

- systemd-socket-activate -l 8000 -- rclone serve
+```console
+systemd-socket-activate -l 8000 -- rclone serve
+```

This will socket-activate rclone on the first connection to port 8000 over TCP.
+
### Authentication

By default this will serve files without needing a login.

You can either use an htpasswd file which can take lots of users, or
set a single username and password with the `--user` and `--pass` flags.
Alternatively, you can have the reverse proxy manage authentication and use the -username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`). -Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. +username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`). +Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration +may lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the `--client-ca` flag passed to the server, the @@ -12400,9 +13663,11 @@ authentication. Bcrypt is recommended. To create an htpasswd file: - touch htpasswd - htpasswd -B htpasswd user - htpasswd -B htpasswd anotherUser +```console +touch htpasswd +htpasswd -B htpasswd user +htpasswd -B htpasswd anotherUser +``` The password file can be updated while rclone is running. @@ -12410,8 +13675,6 @@ Use `--realm` to set the authentication realm. Use `--salt` to change the password hashing salt from the default. - - ``` rclone serve restic remote:path [flags] ``` @@ -12446,8 +13709,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone serve s3 Serve remote:path over s3. @@ -12489,20 +13758,20 @@ cause problems for S3 clients which rely on the Etag being the MD5. For a simple set up, to serve `remote:path` over s3, run the server like this: -``` +```console rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path ``` For example, to use a simple folder in the filesystem, run the server with a command like this: -``` +```console rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder ``` The `rclone.conf` for the server could look like this: -``` +```ini [local] type = local ``` @@ -12515,7 +13784,7 @@ will be visible as a warning in the logs. But it will run nonetheless. This will be compatible with an rclone (client) remote configuration which is defined like this: -``` +```ini [serves3] type = s3 provider = Rclone @@ -12572,21 +13841,21 @@ metadata which will be set as the modification time of the file. `serve s3` currently supports the following operations. - Bucket - - `ListBuckets` - - `CreateBucket` - - `DeleteBucket` + - `ListBuckets` + - `CreateBucket` + - `DeleteBucket` - Object - - `HeadObject` - - `ListObjects` - - `GetObject` - - `PutObject` - - `DeleteObject` - - `DeleteObjects` - - `CreateMultipartUpload` - - `CompleteMultipartUpload` - - `AbortMultipartUpload` - - `CopyObject` - - `UploadPart` + - `HeadObject` + - `ListObjects` + - `GetObject` + - `PutObject` + - `DeleteObject` + - `DeleteObjects` + - `CreateMultipartUpload` + - `CompleteMultipartUpload` + - `AbortMultipartUpload` + - `CopyObject` + - `UploadPart` Other operations will return error `Unimplemented`. @@ -12598,8 +13867,9 @@ You can either use an htpasswd file which can take lots of users, or set a single username and password with the `--user` and `--pass` flags. Alternatively, you can have the reverse proxy manage authentication and use the -username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`). -Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. 
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.

If either of the above authentication methods is not configured and client
certificates are required by the `--client-ca` flag passed to the server, the
@@ -12611,9 +13881,11 @@ authentication.

Bcrypt is recommended.

To create an htpasswd file:

- touch htpasswd
- htpasswd -B htpasswd user
- htpasswd -B htpasswd anotherUser
+```console
+touch htpasswd
+htpasswd -B htpasswd user
+htpasswd -B htpasswd anotherUser
+```

The password file can be updated while rclone is running.

@@ -12652,6 +13924,8 @@ inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
`--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
identically.

+`--disable-zip` may be set to disable the zipping download option.
+
### TLS (SSL)

By default this will serve over http. If you want you can serve over
@@ -12677,13 +13951,16 @@ by `--addr`).

This allows rclone to be a socket-activated service.
It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.

Socket activation can be tested ad-hoc with the `systemd-socket-activate` command

- systemd-socket-activate -l 8000 -- rclone serve
+```console
+systemd-socket-activate -l 8000 -- rclone serve
+```

This will socket-activate rclone on the first connection to port 8000 over TCP.
+
## VFS - Virtual File System

This command uses the VFS layer. This adapts the cloud storage objects
@@ -12705,8 +13982,10 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the VFS will appear immediately or
invalidate the cache.

+```text
 --dir-cache-time duration Time to cache directory entries for (default 5m0s)
 --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+```

However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -12718,16 +13997,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all
directory caches, regardless of how old they are. Assuming only one
rclone instance is running, you can reset the cache like this:

- kill -SIGHUP $(pidof rclone)
+```console
+kill -SIGHUP $(pidof rclone)
+```

If you configure rclone with a [remote control](/rc) then you can use
rclone rc to flush the whole directory cache:

- rclone rc vfs/forget
+```console
+rclone rc vfs/forget
+```

Or individual files or directories:

- rclone rc vfs/forget file=path/to/file dir=path/to/dir
+```console
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+```

## VFS File Buffering

@@ -12758,6 +14043,7 @@ write simultaneously to a file. See below for more details.

Note that the VFS cache is separate from the cache backend and you may
find that you need one or the other or both.

+```text
 --cache-dir string Directory rclone will use for caching.
 --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
 --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
--vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -12812,13 +14099,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -12828,10 +14115,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -12914,9 +14201,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -12930,9 +14219,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -12970,32 +14259,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. 
+``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -13007,7 +14305,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -13017,7 +14316,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -13095,7 +14394,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -13106,7 +14407,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. 
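As a concrete sketch for this serve s3 instance - assuming the flag discussed
here is `--vfs-used-is-size`, as in current rclone releases, and using the
placeholder access key pair and remote path from the examples above:

```console
rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY \
  --vfs-used-is-size --vfs-cache-mode writes remote:path
```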
@@ -13124,7 +14425,7 @@

Note that some backends won't create metadata unless you pass in the
`--metadata` flag.

For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get

-```
+```console
$ ls -l /mnt/
total 1048577
-rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G
@@ -13149,8 +14450,6 @@ If the file has no metadata it will be returned as `{}` and if there
is an error reading the metadata the error will be returned as
`{"error":"error string"}`.

-
-
```
rclone serve s3 remote:path [flags]
```

@@ -13221,7 +14520,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li

Flags for filtering directory listings

-```
+```text
 --delete-excluded Delete files on dest excluded from sync
 --exclude stringArray Exclude files matching pattern
 --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin)
@@ -13249,8 +14548,14 @@ Flags for filtering directory listings

## See Also

+
+

* [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol.

+
+

# rclone serve sftp

Serve the remote over SFTP.

@@ -13293,11 +14598,13 @@ reachable externally then supply `--addr :2022` for example.

This also supports being run with socket activation, in which case it will
listen on the first passed FD.
It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.

Socket activation can be tested ad-hoc with the `systemd-socket-activate` command:

- systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/
+```console
+systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/
+```

This will socket-activate rclone on the first connection to port 2222 over TCP.

This is compatible with the sftp backend, but it may not be with other SFTP clients.

If `--stdio` is specified, rclone will serve SFTP over stdio, which can
be used with sshd via ~/.ssh/authorized_keys, for example:

- restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
+```text
+restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ...
+```

On the client you need to set `--transfers 1` when using `--stdio`.
Otherwise multiple instances of the rclone server are started by OpenSSH
@@ -13341,8 +14650,10 @@ directory should be considered up to date and not refreshed from the
backend. Changes made through the VFS will appear immediately or
invalidate the cache.

+```text
 --dir-cache-time duration Time to cache directory entries for (default 5m0s)
 --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+```

However, changes made directly on the cloud storage by the web
interface or a different copy of rclone will only be picked up once
@@ -13354,16 +14665,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all
directory caches, regardless of how old they are. Assuming only one
rclone instance is running, you can reset the cache like this:

- kill -SIGHUP $(pidof rclone)
+```console
+kill -SIGHUP $(pidof rclone)
+```

If you configure rclone with a [remote control](/rc) then you can use
rclone rc to flush the whole directory cache:

- rclone rc vfs/forget
+```console
+rclone rc vfs/forget
+```

Or individual files or directories:

- rclone rc vfs/forget file=path/to/file dir=path/to/dir
+```console
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+```

## VFS File Buffering

@@ -13394,6 +14711,7 @@ write simultaneously to a file.
See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -13401,6 +14719,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -13448,13 +14767,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -13464,10 +14783,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -13550,9 +14869,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -13566,9 +14887,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. 
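A command line matching those numbers might look like this - a sketch only,
with a placeholder remote and the example port mentioned above:

```console
rclone serve sftp remote: --addr :2022 \
  --vfs-read-chunk-size 100M \
  --vfs-read-chunk-size-limit 500M
```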
Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -13606,32 +14927,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -13643,7 +14973,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -13653,7 +14984,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -13731,7 +15062,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -13742,7 +15075,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. 
However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -13760,7 +15093,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -13808,41 +15141,43 @@ options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter + - `_root` - root to use for the backend And it may have this parameter + - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT -``` +```json { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } ``` @@ -13864,9 +15199,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m before it takes effect. This can be used to build general purpose proxies to any kind of -backend that rclone supports. - - +backend that rclone supports. ``` rclone serve sftp remote:path [flags] @@ -13925,7 +15258,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -13953,8 +15286,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone serve webdav Serve remote:path over WebDAV. @@ -13967,7 +15306,7 @@ browser, or you can make a remote of type WebDAV to read and write it. ## WebDAV options -### --etag-hash +### --etag-hash This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object. @@ -13979,39 +15318,53 @@ to see the full list. ## Access WebDAV on Windows -WebDAV shared folder can be mapped as a drive on Windows, however the default settings prevent it. -Windows will fail to connect to the server using insecure Basic authentication. -It will not even display any login dialog. Windows requires SSL / HTTPS connection to be used with Basic. -If you try to connect via Add Network Location Wizard you will get the following error: +WebDAV shared folder can be mapped as a drive on Windows, however the default +settings prevent it. Windows will fail to connect to the server using insecure +Basic authentication. It will not even display any login dialog. Windows +requires SSL / HTTPS connection to be used with Basic. 
If you try to connect +via Add Network Location Wizard you will get the following error: "The folder you entered does not appear to be valid. Please choose another". -However, you still can connect if you set the following registry key on a client machine: -HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel to 2. -The BasicAuthLevel can be set to the following values: - 0 - Basic authentication disabled - 1 - Basic authentication enabled for SSL connections only - 2 - Basic authentication enabled for SSL connections and for non-SSL connections +However, you still can connect if you set the following registry key on a +client machine: +`HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel` +to 2. The BasicAuthLevel can be set to the following values: + +```text +0 - Basic authentication disabled +1 - Basic authentication enabled for SSL connections only +2 - Basic authentication enabled for SSL connections and for non-SSL connections +``` + If required, increase the FileSizeLimitInBytes to a higher value. Navigate to the Services interface, then restart the WebClient service. ## Access Office applications on WebDAV -Navigate to following registry HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet +Navigate to following registry +`HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet` Create a new DWORD BasicAuthLevel with value 2. - 0 - Basic authentication disabled - 1 - Basic authentication enabled for SSL connections only - 2 - Basic authentication enabled for SSL and for non-SSL connections -https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint +```text +0 - Basic authentication disabled +1 - Basic authentication enabled for SSL connections only +2 - Basic authentication enabled for SSL and for non-SSL connections +``` + + ## Serving over a unix socket You can serve the webdav on a unix socket like this: - rclone serve webdav --addr unix:///tmp/my.socket remote:path +```console +rclone serve webdav --addr unix:///tmp/my.socket remote:path +``` and connect to it like this using rclone and the webdav backend: - rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav: +```console +rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav: +``` Note that there is no authentication on http protocol - this is expected to be done by the permissions on the socket. @@ -14047,6 +15400,8 @@ inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, `--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. +`--disable-zip` may be set to disable the zipping download option. + ### TLS (SSL) By default this will serve over http. If you want you can serve over @@ -14072,41 +15427,42 @@ by `--addr`). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. Socket activation can be tested ad-hoc with the `systemd-socket-activate`command - systemd-socket-activate -l 8000 -- rclone serve +```console +systemd-socket-activate -l 8000 -- rclone serve +``` This will socket-activate rclone on the first connection to port 8000 over TCP. + ### Template `--template` allows a user to specify a custom markup template for HTTP and WebDAV serve functions. 
The server exports the following markup to be used within the template to server pages: -| Parameter | Description | -| :---------- | :---------- | -| .Name | The full path of a file/directory. | -| .Title | Directory listing of .Name | -| .Sort | The current sort used. This is changeable via ?sort= parameter | -| | Sort Options: namedirfirst,name,size,time (default namedirfirst) | -| .Order | The current ordering used. This is changeable via ?order= parameter | -| | Order Options: asc,desc (default asc) | -| .Query | Currently unused. | -| .Breadcrumb | Allows for creating a relative navigation | -|-- .Link | The relative to the root link of the Text. | -|-- .Text | The Name of the directory. | -| .Entries | Information about a specific file/directory. | -|-- .URL | The 'url' of an entry. | -|-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. | -|-- .IsDir | Boolean for if an entry is a directory or not. | -|-- .Size | Size in Bytes of the entry. | -|-- .ModTime | The UTC timestamp of an entry. | +| Parameter | Subparameter | Description | +| :---------- | :----------- | :---------- | +| .Name | | The full path of a file/directory. | +| .Title | | Directory listing of '.Name'. | +| .Sort | | The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst). | +| .Order | | The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc). | +| .Query | | Currently unused. | +| .Breadcrumb | | Allows for creating a relative navigation. | +| | .Link | The link of the Text relative to the root. | +| | .Text | The Name of the directory. | +| .Entries | | Information about a specific file/directory. | +| | .URL | The url of an entry. | +| | .Leaf | Currently same as '.URL' but intended to be just the name. | +| | .IsDir | Boolean for if an entry is a directory or not. | +| | .Size | Size in bytes of the entry. | +| | .ModTime | The UTC timestamp of an entry. | -The server also makes the following functions available so that they can be used within the -template. These functions help extend the options for dynamic rendering of HTML. They can -be used to render HTML based on specific conditions. +The server also makes the following functions available so that they can be used +within the template. These functions help extend the options for dynamic +rendering of HTML. They can be used to render HTML based on specific conditions. | Function | Description | | :---------- | :---------- | @@ -14123,8 +15479,9 @@ You can either use an htpasswd file which can take lots of users, or set a single username and password with the `--user` and `--pass` flags. Alternatively, you can have the reverse proxy manage authentication and use the -username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`). -Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. +username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`). +Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration +may lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the `--client-ca` flag passed to the server, the @@ -14136,9 +15493,11 @@ authentication. Bcrypt is recommended. 
To create an htpasswd file: - touch htpasswd - htpasswd -B htpasswd user - htpasswd -B htpasswd anotherUser +```console +touch htpasswd +htpasswd -B htpasswd user +htpasswd -B htpasswd anotherUser +``` The password file can be updated while rclone is running. @@ -14167,8 +15526,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -14180,16 +15541,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -14220,6 +15587,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -14227,6 +15595,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -14274,13 +15643,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -14290,10 +15659,10 @@ write will be a lot more compatible, but uses the minimal disk space. 
These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -14376,9 +15745,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -14392,9 +15763,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -14432,32 +15803,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. 
So a @@ -14469,7 +15849,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](https://rclone.org/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -14479,7 +15860,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -14557,7 +15938,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -14568,7 +15951,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -14586,7 +15969,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -14634,41 +16017,43 @@ options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter + - `_root` - root to use for the backend And it may have this parameter + - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT -``` +```json { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } ``` @@ -14690,9 +16075,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m before it takes effect. This can be used to build general purpose proxies to any kind of -backend that rclone supports. - - +backend that rclone supports. 
``` rclone serve webdav remote:path [flags] @@ -14763,7 +16146,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -14791,8 +16174,14 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](https://rclone.org/commands/rclone_serve/) - Serve a remote over a protocol. + + + # rclone settier Changes storage class/tier of objects in remote. @@ -14811,16 +16200,21 @@ inaccessible.true You can use it to tier single object - rclone settier Cool remote:path/file +```console +rclone settier Cool remote:path/file +``` Or use rclone filters to set tier on only specific files - rclone --include "*.txt" settier Hot remote:path/dir +```console +rclone --include "*.txt" settier Hot remote:path/dir +``` Or just provide remote directory and all files in directory will be tiered - rclone settier tier remote:path/dir - +```console +rclone settier tier remote:path/dir +``` ``` rclone settier tier remote:path [flags] @@ -14836,8 +16230,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone test Run a test command @@ -14848,14 +16248,15 @@ Rclone test is used to run test commands. Select which test command you want with the subcommand, eg - rclone test memory remote: +```console +rclone test memory remote: +``` Each subcommand has its own options which you can see in their help. **NB** Be careful running these commands, they may do strange things so reading their documentation first is recommended. - ## Options ``` @@ -14866,6 +16267,9 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone test changenotify](https://rclone.org/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in. * [rclone test histogram](https://rclone.org/commands/rclone_test_histogram/) - Makes a histogram of file name characters. @@ -14873,6 +16277,10 @@ See the [global flags page](https://rclone.org/flags/) for global options not li * [rclone test makefile](https://rclone.org/commands/rclone_test_makefile/) - Make files with random contents of the size given * [rclone test makefiles](https://rclone.org/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory * [rclone test memory](https://rclone.org/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats. +* [rclone test speed](https://rclone.org/commands/rclone_test_speed/) - Run a speed test to the remote + + + # rclone test changenotify @@ -14893,8 +16301,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + + + # rclone test histogram Makes a histogram of file name characters. @@ -14907,7 +16321,6 @@ in filenames in the remote:path specified. The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression. 
- ``` rclone test histogram [remote:path] [flags] ``` @@ -14922,8 +16335,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + + + # rclone test info Discovers file name or other limitations for paths. @@ -14935,8 +16354,7 @@ paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one. -**NB** this can create undeletable files and other hazards - use with care - +**NB** this can create undeletable files and other hazards - use with care! ``` rclone test info [remote:path]+ [flags] @@ -14961,8 +16379,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + + + # rclone test makefile Make files with random contents of the size given @@ -14987,8 +16411,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + + + # rclone test makefiles Make a random file hierarchy in a directory @@ -15021,8 +16451,14 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + + + # rclone test memory Load all the objects at remote:path into memory and report memory stats. @@ -15041,8 +16477,73 @@ See the [global flags page](https://rclone.org/flags/) for global options not li ## See Also + + + * [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + + + +# rclone test speed + +Run a speed test to the remote + +## Synopsis + +Run a speed test to the remote. + +This command runs a series of uploads and downloads to the remote, measuring +and printing the speed of each test using varying file sizes and numbers of +files. + +Test time can be innaccurate with small file caps and large files. As it +uses the results of an initial test to determine how many files to use in +each subsequent test. + +It is recommended to use -q flag for a simpler output. e.g.: + + rclone test speed remote: -q + +**NB** This command will create and delete files on the remote in a randomly +named directory which will be automatically removed on a clean exit. + +You can use the --json flag to only print the results in JSON format. + +``` +rclone test speed [flags] +``` + +## Options + +``` + --ascii Fill files with random ASCII printable bytes only + --chargen Fill files with a ASCII chargen pattern + --file-cap int Maximum number of files to use in each test (default 100) + -h, --help help for speed + --json Output only results in JSON format + --large SizeSuffix Size of large files (default 1Gi) + --medium SizeSuffix Size of medium files (default 10Mi) + --pattern Fill files with a periodic pattern + --seed int Seed for the random number generator (0 for random) (default 1) + --small SizeSuffix Size of small files (default 1Ki) + --sparse Make the files sparse (appear to be filled with ASCII 0x00) + --test-time Duration Length for each test to run (default 15s) + --zero Fill files with ASCII 0x00 +``` + +See the [global flags page](https://rclone.org/flags/) for global options not listed here. 
+ +## See Also + + + + +* [rclone test](https://rclone.org/commands/rclone_test/) - Run a test command + + + + # rclone touch Create new file or change file modification time. @@ -15070,7 +16571,6 @@ time instead of the current time. Times may be specified as one of: Note that value of `--timestamp` is in UTC. If you want local time then add the `--localtime` flag. - ``` rclone touch remote:path [flags] ``` @@ -15092,7 +16592,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -15102,7 +16602,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -15132,15 +16632,21 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + # rclone tree List the contents of the remote in a tree like fashion. @@ -15151,16 +16657,18 @@ Lists the contents of a remote in a similar way to the unix tree command. For example - $ rclone tree remote:path - / - ├── file1 - ├── file2 - ├── file3 - └── subdir - ├── file4 - └── file5 +```text +$ rclone tree remote:path +/ +├── file1 +├── file2 +├── file3 +└── subdir + ├── file4 + └── file5 - 1 directories, 5 files +1 directories, 5 files +``` You can use any of the filtering options with the tree command (e.g. `--include` and `--exclude`. You can also use `--fast-list`. @@ -15173,7 +16681,6 @@ short options as they conflict with rclone's short options. For a more interactive navigation of the remote see the [ncdu](https://rclone.org/commands/rclone_ncdu/) command. - ``` rclone tree remote:path [flags] ``` @@ -15209,7 +16716,7 @@ See the [global flags page](https://rclone.org/flags/) for global options not li Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -15239,16 +16746,22 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. + + + ## Copying single files rclone normally syncs or copies directories. However, if the source @@ -15260,7 +16773,7 @@ directory` if it isn't. For example, suppose you have a remote with a file in called `test.jpg`, then you could copy just that file like this -```sh +```console rclone copy remote:test.jpg /tmp/download ``` @@ -15268,13 +16781,13 @@ The file `test.jpg` will be placed inside `/tmp/download`. 
This is equivalent to specifying -```sh +```console rclone copy --files-from /tmp/files remote: /tmp/download ``` Where `/tmp/files` contains the single line -```sh +```console test.jpg ``` @@ -15320,25 +16833,25 @@ the command line (or in environment variables). Here are some examples: -```sh +```console rclone lsd --http-url https://pub.rclone.org :http: ``` To list all the directories in the root of `https://pub.rclone.org/`. -```sh +```console rclone lsf --http-url https://example.com :http:path/to/dir ``` To list files and directories in `https://example.com/path/to/dir/` -```sh +```console rclone copy --http-url https://example.com :http:path/to/dir /tmp/dir ``` To copy files and directories in `https://example.com/path/to/dir` to `/tmp/dir`. -```sh +```console rclone copy --sftp-host example.com :sftp:path/to/dir /tmp/dir ``` @@ -15352,7 +16865,7 @@ syntax, so instead of providing the arguments as command line parameters `--http-url https://pub.rclone.org` they are provided as part of the remote specification as a kind of connection string. -```sh +```console rclone lsd ":http,url='https://pub.rclone.org':" rclone lsf ":http,url='https://example.com':path/to/dir" rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir @@ -15363,7 +16876,7 @@ These can apply to modify existing remotes as well as create new remotes with the on the fly syntax. This example is equivalent to adding the `--drive-shared-with-me` parameter to the remote `gdrive:`. -```sh +```console rclone lsf "gdrive,shared_with_me:path/to/dir" ``` @@ -15374,13 +16887,13 @@ file shared on google drive to the normal drive which **does not work** because the `--drive-shared-with-me` flag applies to both the source and the destination. -```sh +```console rclone copy --drive-shared-with-me gdrive:shared-file.txt gdrive: ``` However using the connection string syntax, this does work. -```sh +```console rclone copy "gdrive,shared_with_me:shared-file.txt" gdrive: ``` @@ -15389,13 +16902,13 @@ backend. If for example gdriveCrypt is a crypt based on gdrive, then the following command **will not work** as intended, because `shared_with_me` is ignored by the crypt backend: -```sh +```console rclone copy "gdriveCrypt,shared_with_me:shared-file.txt" gdriveCrypt: ``` The connection strings have the following syntax -```sh +```text remote,parameter=value,parameter2=value2:path/to/dir :backend,parameter=value,parameter2=value2:path/to/dir ``` @@ -15403,7 +16916,7 @@ remote,parameter=value,parameter2=value2:path/to/dir If the `parameter` has a `:` or `,` then it must be placed in quotes `"` or `'`, so -```sh +```text remote,parameter="colon:value",parameter2="comma,value":path/to/dir :backend,parameter='colon:value',parameter2='comma,value':path/to/dir ``` @@ -15411,7 +16924,7 @@ remote,parameter="colon:value",parameter2="comma,value":path/to/dir If a quoted value needs to include that quote, then it should be doubled, so -```sh +```text remote,parameter="with""quote",parameter2='with''quote':path/to/dir ``` @@ -15422,13 +16935,13 @@ If you leave off the `=parameter` then rclone will substitute `=true` which works very well with flags. For example, to use s3 configured in the environment you could use: -```sh +```console rclone lsd :s3,env_auth: ``` Which is equivalent to -```sh +```console rclone lsd :s3,env_auth=true: ``` @@ -15440,7 +16953,7 @@ If you are a shell master then you'll know which strings are OK and which aren't, but if you aren't sure then enclose them in `"` and use `'` as the inside quote. 
This syntax works on all OSes. -```sh +```console rclone copy ":http,url='https://example.com':path/to/dir" /tmp/dir ``` @@ -15449,23 +16962,26 @@ strings in the shell (notably `\` and `$` and `"`) so if your strings contain those you can swap the roles of `"` and `'` thus. (This syntax does not work on Windows.) -```sh +```console rclone copy ':http,url="https://example.com":path/to/dir' /tmp/dir ``` +You can use [rclone config string](https://rclone.org/commands/rclone_config_string/) to +convert a remote into a connection string. + #### Connection strings, config and logging If you supply extra configuration to a backend by command line flag, environment variable or connection string then rclone will add a suffix based on the hash of the config to the name of the remote, eg -```sh +```console rclone -vv lsf --s3-chunk-size 20M s3: ``` Has the log message -```sh +```text DEBUG : s3: detected overridden config - adding "{Srj1p}" suffix to name ``` @@ -15476,13 +16992,13 @@ This should only be noticeable in the logs. This means that on the fly backends such as -```sh +```console rclone -vv lsf :s3,env_auth: ``` Will get their own names -```sh +```text DEBUG : :s3: detected overridden config - adding "{YTu53}" suffix to name ``` @@ -15616,13 +17132,13 @@ Here are some gotchas which may help users unfamiliar with the shell rules If your names have spaces or shell metacharacters (e.g. `*`, `?`, `$`, `'`, `"`, etc.) then you must quote them. Use single quotes `'` by default. -```sh +```console rclone copy 'Important files?' remote:backup ``` If you want to send a `'` you will need to use `"`, e.g. -```sh +```console rclone copy "O'Reilly Reviews" remote:backup ``` @@ -15655,13 +17171,13 @@ file or directory like this then use the full path starting with a So to sync a directory called `sync:me` to a remote called `remote:` use -```sh +```console rclone sync --interactive ./sync:me remote:path ``` or -```sh +```console rclone sync --interactive /full/path/to/sync:me remote:path ``` @@ -15676,7 +17192,7 @@ to copy them in place. Eg -```sh +```console rclone copy s3:oldbucket s3:newbucket ``` @@ -15697,7 +17213,7 @@ same. This can be used when scripting to make aged backups efficiently, e.g. -```sh +```console rclone sync --interactive remote:current-backup remote:previous-backup rclone sync --interactive /path/to/files remote:current-backup ``` @@ -15937,7 +17453,7 @@ excluded by a filter rule. For example -```sh +```console rclone sync --interactive /path/to/local remote:current --backup-dir remote:old ``` @@ -15947,7 +17463,9 @@ which would have been updated or deleted will be stored in If running rclone from a script you might want to use today's date as the directory name passed to `--backup-dir` to store the old files, or -you might want to pass `--suffix` with today's date. +you might want to pass `--suffix` with today's date. This can be done +with `--suffix $(date +%F)` in bash, and +`--suffix $(Get-Date -Format 'yyyy-MM-dd')` in PowerShell. See `--compare-dest` and `--copy-dest`. @@ -15965,7 +17483,7 @@ You can use `--bind 0.0.0.0` to force rclone to use IPv4 addresses and This option controls the bandwidth limit. For example -```sh +```text --bwlimit 10M ``` @@ -15977,7 +17495,7 @@ suffix B|K|M|G|T|P. The default is `0` which means to not limit bandwidth. 
The upload and download bandwidth can be specified separately, as `--bwlimit UP:DOWN`, so -```sh +```text --bwlimit 10M:100k ``` @@ -15985,7 +17503,7 @@ would mean limit the upload bandwidth to 10 MiB/s and the download bandwidth to 100 KiB/s. Either limit can be "off" meaning no limit, so to just limit the upload bandwidth you would use -```sh +```text --bwlimit 10M:off ``` @@ -16042,13 +17560,13 @@ be unlimited. Timeslots without `WEEKDAY` are extended to the whole week. So this example: -```sh +```text --bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off" ``` Is equivalent to this: -```sh +```text --bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off" ``` @@ -16068,14 +17586,14 @@ of a long running rclone transfer and to restore it back to the value specified with `--bwlimit` quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this: -```sh +```console kill -SIGUSR2 $(pidof rclone) ``` If you configure rclone with a [remote control](/rc) then you can use change the bwlimit dynamically: -```sh +```console rclone rc core/bwlimit rate=1M ``` @@ -16086,7 +17604,7 @@ This option controls per file bandwidth limit. For the options see the For example use this to allow no transfers to be faster than 1 MiB/s -```sh +```text --bwlimit-file 1M ``` @@ -16376,7 +17894,7 @@ time rclone started up. This disables a comma separated list of optional features. For example to disable server-side move and server-side copy use: -```sh +```text --disable move,copy ``` @@ -16384,13 +17902,13 @@ The features can be put in any case. To see a list of which features can be disabled use: -```sh +```text --disable help ``` The features a remote has can be seen in JSON format with: -```sh +```console rclone backend features remote: ``` @@ -16430,7 +17948,7 @@ support ([RFC 8622](https://tools.ietf.org/html/rfc8622)). For example, if you configured QoS on router to handle LE properly. Running: -```sh +```console rclone copy --dscp LE from:/from to:/to ``` @@ -16522,7 +18040,7 @@ This flag is supported for all HTTP based backends even those not supported by `--header-upload` and `--header-download` so may be used as a workaround for those with care. -```sh +```console rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes" ``` @@ -16531,7 +18049,7 @@ rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes" Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers. -```sh +```console rclone sync --interactive s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar" ``` @@ -16543,7 +18061,7 @@ currently supported backends. Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers. -```sh +```console rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar" ``` @@ -16721,7 +18239,7 @@ especially with `rclone sync`. For example -```sh +```console $ rclone delete --interactive /tmp/dir rclone: delete "important-file.txt"? y) Yes, this is OK (default) @@ -16811,7 +18329,7 @@ ignored. For example if the following flags are in use -```sh +```console rclone --log-file rclone.log --log-file-max-size 1M --log-file-max-backups 3 ``` @@ -16906,7 +18424,7 @@ once as administrator to create the registry key in advance. severe) than or equal to the `--log-level`. 
For example to log DEBUG to a log file but ERRORs to the event log you would use -```sh +```text --log-file rclone.log --log-level DEBUG --windows-event-log ERROR ``` @@ -17137,7 +18655,7 @@ it in `"`, if you want a literal `"` in an argument then enclose the argument in `"` and double the `"`. See [CSV encoding](https://godoc.org/encoding/csv) for more info. -```sh +```text --metadata-mapper "python bin/test_metadata_mapper.py" --metadata-mapper 'python bin/test_metadata_mapper.py "argument with a space"' --metadata-mapper 'python bin/test_metadata_mapper.py "argument with ""two"" quotes"' @@ -17166,25 +18684,25 @@ some context for the `Metadata` which may be important. ```json { - "SrcFs": "gdrive:", - "SrcFsType": "drive", - "DstFs": "newdrive:user", - "DstFsType": "onedrive", - "Remote": "test.txt", - "Size": 6, - "MimeType": "text/plain; charset=utf-8", - "ModTime": "2022-10-11T17:53:10.286745272+01:00", - "IsDir": false, - "ID": "xyz", - "Metadata": { - "btime": "2022-10-11T16:53:11Z", - "content-type": "text/plain; charset=utf-8", - "mtime": "2022-10-11T17:53:10.286745272+01:00", - "owner": "user1@domain1.com", - "permissions": "...", - "description": "my nice file", - "starred": "false" - } + "SrcFs": "gdrive:", + "SrcFsType": "drive", + "DstFs": "newdrive:user", + "DstFsType": "onedrive", + "Remote": "test.txt", + "Size": 6, + "MimeType": "text/plain; charset=utf-8", + "ModTime": "2022-10-11T17:53:10.286745272+01:00", + "IsDir": false, + "ID": "xyz", + "Metadata": { + "btime": "2022-10-11T16:53:11Z", + "content-type": "text/plain; charset=utf-8", + "mtime": "2022-10-11T17:53:10.286745272+01:00", + "owner": "user1@domain1.com", + "permissions": "...", + "description": "my nice file", + "starred": "false" + } } ``` @@ -17196,15 +18714,15 @@ the description: ```json { - "Metadata": { - "btime": "2022-10-11T16:53:11Z", - "content-type": "text/plain; charset=utf-8", - "mtime": "2022-10-11T17:53:10.286745272+01:00", - "owner": "user1@domain2.com", - "permissions": "...", - "description": "my nice file [migrated from domain1]", - "starred": "false" - } + "Metadata": { + "btime": "2022-10-11T16:53:11Z", + "content-type": "text/plain; charset=utf-8", + "mtime": "2022-10-11T17:53:10.286745272+01:00", + "owner": "user1@domain2.com", + "permissions": "...", + "description": "my nice file [migrated from domain1]", + "starred": "false" + } } ``` @@ -17508,7 +19026,7 @@ for more info. Eg -```sh +```text --password-command "echo hello" --password-command 'echo "hello with space"' --password-command 'echo "hello with ""quotes"" and space"' @@ -17713,7 +19231,7 @@ or with `--backup-dir`. See `--backup-dir` for more info. For example -```sh +```console rclone copy --interactive /path/to/local/file remote:current --suffix .bak ``` @@ -17724,7 +19242,7 @@ If using `rclone sync` with `--suffix` and without `--backup-dir` then it is recommended to put a filter rule in excluding the suffix otherwise the `sync` will delete the backup files. -```sh +```console rclone sync --interactive /path/to/local/file remote:current --suffix .bak --exclude "*.bak" ``` @@ -18107,7 +19625,7 @@ have to supply the password every time you start rclone. To add a password to your rclone configuration, execute `rclone config`. -```sh +```console $ rclone config Current remotes: @@ -18121,7 +19639,7 @@ e/n/d/s/q> Go into `s`, Set configuration password: -```sh +```text e/n/d/s/q> s Your configuration is not encrypted. If you add a password, you will protect your login information to cloud services. 
@@ -18194,7 +19712,7 @@ environment variables. The script is supplied either via One useful example of this is using the `passwordstore` application to retrieve the password: -```sh +```console export RCLONE_PASSWORD_COMMAND="pass rclone/config" ``` @@ -18240,13 +19758,13 @@ at rest or transfer. Detailed instructions for popular OSes: - Generate and store a password - ```sh + ```console security add-generic-password -a rclone -s config -w $(openssl rand -base64 40) ``` - Add the retrieval instruction to your `.zprofile` / `.profile` - ```sh + ```console export RCLONE_PASSWORD_COMMAND="/usr/bin/security find-generic-password -a rclone -s config -w" ``` @@ -18259,13 +19777,13 @@ at rest or transfer. Detailed instructions for popular OSes: - Generate and store a password - ```sh + ```console echo $(openssl rand -base64 40) | pass insert -m rclone/config ``` - Add the retrieval instruction - ```sh + ```console export RCLONE_PASSWORD_COMMAND="/usr/bin/pass rclone/config" ``` @@ -18273,13 +19791,13 @@ at rest or transfer. Detailed instructions for popular OSes: - Generate and store a password - ```pwsh + ```powershell New-Object -TypeName PSCredential -ArgumentList "rclone", (ConvertTo-SecureString -String ([System.Web.Security.Membership]::GeneratePassword(40, 10)) -AsPlainText -Force) | Export-Clixml -Path "rclone-credential.xml" ``` - Add the password retrieval instruction - ```pwsh + ```powershell [Environment]::SetEnvironmentVariable("RCLONE_PASSWORD_COMMAND", "[System.Runtime.InteropServices.Marshal]::PtrToStringAuto([System.Runtime.InteropServices.Marshal]::SecureStringToBSTR((Import-Clixml -Path "rclone-credential.xml").Password))") ``` @@ -18525,7 +20043,7 @@ so it can only contain letters, digits, or the `_` (underscore) character. For example, to configure an S3 remote named `mys3:` without a config file (using unix ways of setting environment variables): -```sh +```console $ export RCLONE_CONFIG_MYS3_TYPE=s3 $ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX $ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX @@ -18545,7 +20063,7 @@ You must write the name in uppercase in the environment variable, but as seen from example above it will be listed and can be accessed in lowercase, while you can also refer to the same remote in uppercase: -```sh +```console $ rclone lsd mys3: -1 2016-09-21 12:54:21 -1 my-bucket $ rclone lsd MYS3: @@ -18560,7 +20078,7 @@ set the access key of all remotes using S3, including myS3Crypt. Note also that now rclone has [connection strings](#connection-strings), it is probably easier to use those instead which makes the above example -```sh +```console rclone lsd :s3,access_key_id=XXX,secret_access_key=XXX: ``` @@ -18609,24 +20127,27 @@ For non backend configuration the order is as follows: The options set by environment variables can be seen with the `-vv` and `--log-level=DEBUG` flags, e.g. `rclone version -vv`. -# Configuring rclone on a remote / headless machine # +# Configuring rclone on a remote / headless machine Some of the configurations (those involving oauth2) require an -Internet connected web browser. +internet-connected web browser. -If you are trying to set rclone up on a remote or headless box with no -browser available on it (e.g. a NAS or a server in a datacenter) then -you will need to use an alternative means of configuration. There are -two ways of doing it, described below. +If you are trying to set rclone up on a remote or headless machine with no +browser available on it (e.g. 
a NAS or a server in a datacenter), then +you will need to use an alternative means of configuration. There are +three ways of doing it, described below. -## Configuring using rclone authorize ## +## Configuring using rclone authorize -On the headless box run `rclone` config but answer `N` to the `Use auto config?` question. +On the headless machine run [rclone config](/commands/rclone_config), but +answer `N` to the question `Use web browser to automatically authenticate +rclone with remote?`. -``` -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine +```text +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. y) Yes (default) n) No @@ -18638,33 +20159,35 @@ a web browser available. For more help and alternate methods see: https://rclone.org/remote_setup/ Execute the following on the machine with the web browser (same rclone version recommended): - rclone authorize "onedrive" + rclone authorize "onedrive" Then paste the result. Enter a value. config_token> ``` -Then on your main desktop machine +Then on your main desktop machine, run [rclone authorize](https://rclone.org/commands/rclone_authorize/). -``` +```text rclone authorize "onedrive" -If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... +NOTICE: Make sure your Redirect URL is set to "http://localhost:53682/" in your custom config. +NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +NOTICE: Log in and authorize rclone for access +NOTICE: Waiting for code... + Got code Paste the following into your remote machine ---> SECRET_TOKEN <---End paste ``` -Then back to the headless box, paste in the code +Then back to the headless machine, paste in the code. -``` +```text config_token> SECRET_TOKEN -------------------- [acd12] -client_id = -client_secret = +client_id = +client_secret = token = SECRET_TOKEN -------------------- y) Yes this is OK @@ -18673,47 +20196,63 @@ d) Delete this remote y/e/d> ``` -## Configuring by copying the config file ## +## Configuring by copying the config file -Rclone stores all of its config in a single configuration file. This -can easily be copied to configure a remote rclone. +Rclone stores all of its configuration in a single file. This can easily be +copied to configure a remote rclone (although some backends does not support +reusing the same configuration, consult your backend documentation to be +sure). -So first configure rclone on your desktop machine with - - rclone config - -to set up the config file. - -Find the config file by running `rclone config file`, for example +Start by running [rclone config](/commands/rclone_config) to create the +configuration file on your desktop machine. +```console +rclone config ``` + +Then locate the file by running [rclone config file](/commands/rclone_config_file). + +```console $ rclone config file Configuration file is stored at: /home/user/.rclone.conf ``` -Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and -place it in the correct place (use `rclone config file` on the remote -box to find out where). +Finally, transfer the file to the remote machine (scp, cut paste, ftp, sftp, etc.) 
+and place it in the correct location (use [rclone config file](/commands/rclone_config_file) +on the remote machine to find out where). -## Configuring using SSH Tunnel ## +## Configuring using SSH Tunnel -Linux and MacOS users can utilize SSH Tunnel to redirect the headless box port 53682 to local machine by using the following command: -``` +If you have an SSH client installed on your local machine, you can set up an +SSH tunnel to redirect the port 53682 into the headless machine by using the +following command: + +```console ssh -L localhost:53682:localhost:53682 username@remote_server ``` -Then on the headless box run `rclone config` and answer `Y` to the `Use auto config?` question. -``` -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine +Then on the headless machine run [rclone config](/commands/rclone_config) and +answer `Y` to the question `Use web browser to automatically authenticate rclone +with remote?`. + +```text +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. y) Yes (default) n) No y/n> y +NOTICE: Make sure your Redirect URL is set to "http://localhost:53682/" in your custom config. +NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +NOTICE: Log in and authorize rclone for access +NOTICE: Waiting for code... ``` -Then copy and paste the auth url `http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx` to the browser on your local machine, complete the auth and it is done. + +Finally, copy and paste the presented URL `http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx` +to the browser on your local machine, complete the auth and you are done. # Filtering, includes and excludes @@ -18860,9 +20399,9 @@ uses) to make things easy for users. However this does not provide absolute control over the matching, so for advanced users rclone also provides a regular expression syntax. -The regular expressions used are as defined in the [Go regular -expression reference](https://golang.org/pkg/regexp/syntax/). Regular -expressions should be enclosed in `{{` `}}`. They will match only the +Rclone generally accepts Perl-style regular expressions, the exact syntax +is defined in the [Go regular expression reference](https://golang.org/pkg/regexp/syntax/). +Regular expressions should be enclosed in `{{` `}}`. They will match only the last path segment if the glob doesn't start with `/` or the whole path name if it does. Note that rclone does not attempt to parse the supplied regular expression, meaning that using any regular expression @@ -19179,14 +20718,14 @@ E.g. `rclone ls remote: --include "*.{png,jpg}"` lists the files on E.g. multiple rclone copy commands can be combined with `--include` and a pattern-list. 
-```sh +```console rclone copy /vol1/A remote:A rclone copy /vol1/B remote:B ``` is equivalent to: -```sh +```console rclone copy /vol1 remote: --include "{A,B}/**" ``` @@ -19388,7 +20927,7 @@ user2/prefect Then copy these to a remote: -```sh +```console rclone copy --files-from files-from.txt /home remote:backup ``` @@ -19410,7 +20949,7 @@ Alternatively if `/` is chosen as root `files-from.txt` will be: The copy command will be: -```sh +```console rclone copy --files-from files-from.txt / remote:backup ``` @@ -19516,7 +21055,7 @@ useful for: The flag takes two parameters expressed as a fraction: -```sh +```text --hash-filter K/N ``` @@ -19535,7 +21074,7 @@ Each partition is non-overlapping, ensuring all files are covered without duplic Use `@` as `K` to randomly select a partition: -```sh +```text --hash-filter @/M ``` @@ -19565,7 +21104,7 @@ This will stay constant across retries. Assuming the current directory contains `file1.jpg` through `file9.jpg`: -```sh +```console $ rclone lsf --hash-filter 0/4 . file1.jpg file5.jpg @@ -19590,13 +21129,13 @@ file5.jpg ##### Syncing the first quarter of files -```sh +```console rclone sync --hash-filter 1/4 source:path destination:path ``` ##### Checking a random 1% of files for integrity -```sh +```console rclone check --download --hash-filter @/100 source:path destination:path ``` @@ -19612,7 +21151,7 @@ on the destination which are excluded from the command. E.g. the scope of `rclone sync --interactive A: B:` can be restricted: -```sh +```console rclone --min-size 50k --delete-excluded sync A: B: ``` @@ -19661,13 +21200,13 @@ expressions](#regexp). For example if you wished to list only local files with a mode of `100664` you could do that with: -```sh +```console rclone lsf -M --files-only --metadata-include "mode=100664" . ``` Or if you wished to show files with an `atime`, `mtime` or `btime` at a given date: -```sh +```console rclone lsf -M --files-only --metadata-include "[abm]time=2022-12-16*" . ``` @@ -19709,7 +21248,7 @@ change. Run this command in a terminal and rclone will download and then display the GUI in a web browser. -```sh +```console rclone rcd --rc-web-gui ``` @@ -19988,7 +21527,7 @@ rc` command. You can use it like this: -```sh +```console $ rclone rc rc/noop param1=one param2=two { "param1": "one", @@ -19999,14 +21538,14 @@ $ rclone rc rc/noop param1=one param2=two If the remote is running on a different URL than the default `http://localhost:5572/`, use the `--url` option to specify it: -```sh +```console rclone rc --url http://some.remote:1234/ rc/noop ``` Or, if the remote is listening on a Unix socket, use the `--unix-socket` option instead: -```sh +```console rclone rc --unix-socket /tmp/rclone.sock rc/noop ``` @@ -20019,7 +21558,7 @@ remote server. `rclone rc` also supports a `--json` flag which can be used to send more complicated input parameters. -```sh +```console $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 } }' rc/noop { "p1": [ @@ -20039,13 +21578,13 @@ If the parameter being passed is an object then it can be passed as a JSON string rather than using the `--json` flag which simplifies the command line. -```sh +```console rclone rc operations/list fs=/tmp remote=test opt='{"showHash": true}' ``` Rather than -```sh +```console rclone rc operations/list --json '{"fs": "/tmp", "remote": "test", "opt": {"showHash": true}}' ``` @@ -20060,9 +21599,9 @@ Each rc call is classified as a job and it is assigned its own id. By default jobs are executed immediately as they are created or synchronously. 
If `_async` has a true value when supplied to an rc call then it will -return immediately with a job id and the task will be run in the -background. The `job/status` call can be used to get information of -the background job. The job can be queried for up to 1 minute after +return immediately with a job id and execute id, and the task will be run in the +background. The `job/status` call can be used to get information of +the background job. The job can be queried for up to 1 minute after it has finished. It is recommended that potentially long running jobs, e.g. `sync/sync`, @@ -20072,22 +21611,29 @@ response timing out. Starting a job with the `_async` flag: -```sh +```console $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop { - "jobid": 2 + "jobid": 2, + "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7" } ``` +The `jobid` is a unique identifier for the job within this rclone instance. +The `executeId` identifies the rclone process instance and changes after +rclone restart. Together, the pair (`executeId`, `jobid`) uniquely identifies +a job across rclone restarts. + Query the status to see if the job has finished. For more information on the meaning of these return parameters see the `job/status` call. -```sh +```console $ rclone rc --json '{ "jobid":2 }' job/status { "duration": 0.000124163, "endTime": "2018-10-27T11:38:07.911245881+01:00", "error": "", + "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7", "finished": true, "id": 2, "output": { @@ -20108,17 +21654,31 @@ $ rclone rc --json '{ "jobid":2 }' job/status } ``` -`job/list` can be used to show the running or recently completed jobs +`job/list` can be used to show running or recently completed jobs along with their status -```sh +```console $ rclone rc job/list { + "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7", + "finished_ids": [ + 1 + ], "jobids": [ + 1, + 2 + ], + "running_ids": [ 2 ] } ``` +This shows: +- `executeId` - the current rclone instance ID (same for all jobs, changes after restart) +- `jobids` - array of all job IDs (both running and finished) +- `running_ids` - array of currently running job IDs +- `finished_ids` - array of finished job IDs + ### Setting config flags with _config If you wish to set config (the equivalent of the global flags) for the @@ -20127,14 +21687,14 @@ duration of an rc call only then pass in the `_config` parameter. This should be in the same format as the `main` key returned by [options/get](#options-get). -```sh +```console rclone rc --loopback options/get blocks=main ``` You can see more help on these options with this command (see [the options blocks section](#option-blocks) for more info). -```sh +```console rclone rc --loopback options/info blocks=main ``` @@ -20147,7 +21707,7 @@ parameter, you would pass this parameter in your JSON blob. If using `rclone rc` this could be passed as -```sh +```console rclone rc sync/sync ... _config='{"CheckSum": true}' ``` @@ -20174,20 +21734,20 @@ pass in the `_filter` parameter. This should be in the same format as the `filter` key returned by [options/get](#options-get). -```sh +```console rclone rc --loopback options/get blocks=filter ``` You can see more help on these options with this command (see [the options blocks section](#option-blocks) for more info). 
-```sh
+```console
 rclone rc --loopback options/info blocks=filter
 ```
 
 For example, if you wished to run a sync with these flags
 
-```sh
+```text
 --max-size 1M --max-age 42s --include "a" --include "b"
 ```
 
@@ -20199,7 +21759,7 @@ you would pass this parameter in your JSON blob.
 
 If using `rclone rc` this could be passed as
 
-```sh
+```console
 rclone rc ... _filter='{"MaxSize":"1M", "IncludeRule":["a","b"], "MaxAge":"42s"}'
 ```
 
@@ -20229,7 +21789,7 @@ value. This allows caller to group stats under their own name.
 
 Stats for specific group can be accessed by passing `group` to
 `core/stats`:
 
-```sh
+```console
 $ rclone rc --json '{ "group": "job/1" }' core/stats
 {
     "speed": 12345
@@ -20380,7 +21940,7 @@ And this is equivalent to `/tmp/dir`
 ```
 
 ## Supported commands
- 
+
 ### backend/command: Runs a backend command. {#backend-command}
 
 This takes the following parameters:
 
@@ -20592,7 +22152,7 @@ Unlocks the config file if it is locked.
 
 Parameters:
 
-- 'config_password' - password to unlock the config file
+- 'configPassword' - password to unlock the config file
 
 A good idea is to disable AskPassword before making this call
 
@@ -20890,17 +22450,20 @@ Returns the following values:
 }
 ```
 
-### core/version: Shows the current version of rclone and the go runtime. {#core-version}
+### core/version: Shows the current version of rclone, Go and the OS. {#core-version}
 
-This shows the current version of go and the go runtime:
+This shows the current versions of rclone, Go and the OS:
 
-- version - rclone version, e.g. "v1.53.0"
+- version - rclone version, e.g. "v1.71.2"
 - decomposed - version number as [major, minor, patch]
 - isGit - boolean - true if this was compiled from the git version
 - isBeta - boolean - true if this is a beta version
-- os - OS in use as according to Go
-- arch - cpu architecture in use according to Go
-- goVersion - version of Go runtime in use
+- os - OS in use according to Go GOOS (e.g. "linux")
+- osKernel - OS Kernel version (e.g. "6.8.0-86-generic (x86_64)")
+- osVersion - OS Version (e.g. "ubuntu 24.04 (64 bit)")
+- osArch - cpu architecture in use (e.g. "arm64 (ARMv8 compatible)")
+- arch - cpu architecture in use according to Go GOARCH (e.g. "arm64")
+- goVersion - version of Go runtime in use (e.g. "go1.25.0")
 - linking - type of rclone executable (static or dynamic)
 - goTags - space separated build tags or "none"
 
@@ -21010,6 +22573,67 @@ Returns
 
 **Authentication is required for this call.**
 
+### job/batch: Run a batch of rclone rc commands concurrently. {#job-batch}
+
+This takes the following parameters:
+
+- concurrency - int - do this many commands concurrently. Defaults to `--transfers` if not set.
+- inputs - a list of inputs to the commands, each with an extra `_path` parameter
+
+```json
+{
+    "_path": "rc/path",
+    "param1": "parameter for the path as documented",
+    "param2": "parameter for the path as documented, etc"
+}
+```
+
+The inputs may use `_async`, `_group`, `_config` and `_filter` as normal when using the rc.
+
+Returns:
+
+- results - a list of results from the commands, with one entry for each
+  input. A failing input gets an entry with `error`, `input`, `path` and
+  `status` fields rather than aborting the whole batch, as the example
+  below shows.
+
+For example:
+
+```console
+rclone rc job/batch --json '{
+    "inputs": [
+        {
+            "_path": "rc/noop",
+            "parameter": "OK"
+        },
+        {
+            "_path": "rc/error",
+            "parameter": "BAD"
+        }
+    ]
+}
+'
+```
+
+Gives the result:
+
+```json
+{
+    "results": [
+        {
+            "parameter": "OK"
+        },
+        {
+            "error": "arbitrary error on input map[parameter:BAD]",
+            "input": {
+                "parameter": "BAD"
+            },
+            "path": "rc/error",
+            "status": 500
+        }
+    ]
+}
+```
+
+**Authentication is required for this call.**
+
 ### job/list: Lists the IDs of the running jobs {#job-list}
 
 Parameters: None.
 
@@ -21018,6 +22642,8 @@ Results:
 
 - executeId - string id of rclone executing (change after restart)
 - jobids - array of integer job ids (starting at 1 on each restart)
+- runningIds - array of integer job ids that are running
+- finishedIds - array of integer job ids that are finished
 
 ### job/status: Reads the status of the job ID {#job-status}
 
@@ -21033,6 +22659,7 @@ Results:
 
 - error - error from the job or empty string for no error
 - finished - boolean whether the job has finished or not
 - id - as passed in above
+- executeId - rclone instance ID (changes after restart); combined with id uniquely identifies a job
 - startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00")
 - success - boolean - true for success false otherwise
 - output - output of the job as would have been returned if called synchronously
 
@@ -21081,14 +22708,18 @@ This takes the following parameters:
 
 Example:
 
-    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
-    rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
-    rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
+```console
+rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
+rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
+rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
+```
 
 The vfsOpt are as described in options/get and can be seen in the
 "vfs" section when running the command below; the mountOpt can be seen
 in the "mount" section:
 
-    rclone rc options/get
+```console
+rclone rc options/get
+```
 
 **Authentication is required for this call.**
 
@@ -21531,8 +23162,6 @@ This takes the following parameters:
 
 - fs - a remote name string e.g. "drive:"
 - remote - a path within that remote e.g. "dir"
 
-See the [settierfile](https://rclone.org/commands/rclone_settierfile/) command for more information on the above.
-
 **Authentication is required for this call.**
 
 ### operations/size: Count the number of bytes and files in remote {#operations-size}
 
@@ -21578,8 +23207,6 @@ This takes the following parameters:
 
 - remote - a path within that remote e.g. "dir"
 - each part in body represents a file to be uploaded
 
-See the [uploadfile](https://rclone.org/commands/rclone_uploadfile/) command for more information on the above.
-
 **Authentication is required for this call.**
 
 ### options/blocks: List all the option blocks {#options-blocks}
 
@@ -21757,6 +23384,11 @@ Example:
 
 This returns an error with the input as part of its error string.
 Useful for testing error handling.
 
+### rc/fatal: This returns a fatal error {#rc-fatal}
+
+This returns an error with the input as part of its error string.
+Useful for testing error handling.
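+
+As with `rc/error`, this is intended for testing how clients cope with
+failing calls; a minimal illustrative invocation (the parameters are
+arbitrary and are simply echoed back in the error string):
+
+```console
+rclone rc rc/fatal parameter=BAD
+```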
+ ### rc/list: List all the registered remote control commands {#rc-list} This lists all the registered remote control commands as a JSON map in @@ -21776,6 +23408,11 @@ check that parameter passing is working properly. **Authentication is required for this call.** +### rc/panic: This returns an error by panicking {#rc-panic} + +This returns an error with the input as part of its error string. +Useful for testing error handling. + ### serve/list: Show running servers {#serve-list} Show running servers with IDs. @@ -22047,7 +23684,7 @@ This is only useful if `--vfs-cache-mode` > off. If you call it when the `--vfs-cache-mode` is off, it will return an empty result. { - "queued": // an array of files queued for upload + "queue": // an array of files queued for upload [ { "name": "file", // string: name (full path) of the file, @@ -22167,7 +23804,7 @@ supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter must be supplied. - + ## Accessing the remote control via HTTP {#api-http} @@ -22219,7 +23856,7 @@ The response to a preflight OPTIONS request will echo the requested ### Using POST with URL parameters only -```sh +```console curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2' ``` @@ -22234,7 +23871,7 @@ Response Here is what an error response looks like: -```sh +```console curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2' ``` @@ -22250,7 +23887,7 @@ curl -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2' Note that curl doesn't return errors to the shell unless you use the `-f` option -```sh +```console $ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2' curl: (22) The requested URL returned error: 400 Bad Request $ echo $? @@ -22259,7 +23896,7 @@ $ echo $? ### Using POST with a form -```sh +```console curl --data "potato=1" --data "sausage=2" http://localhost:5572/rc/noop ``` @@ -22275,7 +23912,7 @@ Response Note that you can combine these with URL parameters too with the POST parameters taking precedence. -```sh +```console curl --data "potato=1" --data "sausage=2" "http://localhost:5572/rc/noop?rutabaga=3&sausage=4" ``` @@ -22292,7 +23929,7 @@ Response ### Using POST with a JSON blob -```sh +```console curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' http://localhost:5572/rc/noop ``` @@ -22308,7 +23945,7 @@ response This can be combined with URL parameters too if required. The JSON blob takes precedence. -```sh +```console curl -H "Content-Type: application/json" -X POST -d '{"potato":2,"sausage":1}' 'http://localhost:5572/rc/noop?rutabaga=3&potato=4' ``` @@ -22331,7 +23968,7 @@ To use these, first [install go](https://golang.org/doc/install). To profile rclone's memory use you can run: -```sh +```console go tool pprof -web http://localhost:5572/debug/pprof/heap ``` @@ -22340,7 +23977,7 @@ memory. You can also use the `-text` flag to produce a textual summary -```sh +```console $ go tool pprof -text http://localhost:5572/debug/pprof/heap Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total flat flat% sum% cum cum% @@ -22365,7 +24002,7 @@ alive which should have been garbage collected. See all active go routines using -```sh +```console curl http://localhost:5572/debug/pprof/goroutine?debug=1 ``` @@ -22424,7 +24061,7 @@ Here is an overview of the major features of each cloud storage system. 
| Google Photos | - | - | No | Yes | R | - | | HDFS | - | R/W | No | No | - | - | | HiDrive | HiDrive ¹² | R/W | No | No | - | - | -| HTTP | - | R | No | No | R | - | +| HTTP | - | R | No | No | R | R | | iCloud Drive | - | R | No | No | - | - | | Internet Archive | MD5, SHA1, CRC32 | R/W ¹¹ | No | No | - | RWU | | Jottacloud | MD5 | R/W | Yes | No | R | RW | @@ -22795,7 +24432,7 @@ and to maintain backward compatibility, its behavior has not been changed. To take a specific example, the FTP backend's default encoding is -```sh +```text --ftp-encoding "Slash,Del,Ctl,RightSpace,Dot" ``` @@ -22829,7 +24466,7 @@ To avoid this you can change the set of characters rclone should convert for the local filesystem, using command-line argument `--local-encoding`. Rclone's default behavior on Windows corresponds to -```sh +```text --local-encoding "Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot" ``` @@ -22837,7 +24474,7 @@ If you want to use fullwidth characters `:`, `*` and `?` in your filenames without rclone changing them when uploading to a remote, then set the same as the default value but without `Colon,Question,Asterisk`: -```sh +```text --local-encoding "Slash,LtGt,DoubleQuote,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot" ``` @@ -23165,7 +24802,7 @@ Flags for general networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.71.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0") ``` @@ -23385,6 +25022,8 @@ Backend-only flags (these can be set in the config file also). ``` --alias-description string Description of the remote --alias-remote string Remote or path to alias + --archive-description string Description of the remote + --archive-remote string Remote to wrap to read archives from --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name --azureblob-archive-tier-delete Delete archive tier blobs before overwriting @@ -23462,6 +25101,10 @@ Backend-only flags (these can be set in the config file also). --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket + --b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2 + --b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + --b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + --b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) @@ -23523,7 +25166,7 @@ Backend-only flags (these can be set in the config file also). 
--combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining --compress-description string Description of the remote - --compress-level int GZIP compression level (-2 to 9) (default -1) + --compress-level string GZIP (levels -2 to 9): --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress @@ -23818,6 +25461,7 @@ Backend-only flags (these can be set in the config file also). --mailru-token string OAuth Access Token as a JSON blob --mailru-token-url string Token server url --mailru-user string User name (usually email) + --mega-2fa string The 2FA code of your MEGA account if the account is set up with one --mega-debug Output more debug from Mega --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -23936,6 +25580,7 @@ Backend-only flags (these can be set in the config file also). --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-otp-secret-key string The OTP secret key (obscured) --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected --protondrive-username string The username of your proton account @@ -24018,6 +25663,7 @@ Backend-only flags (these can be set in the config file also). --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) --s3-use-arn-region If true, enables arn region support for the service + --s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset) --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) @@ -24100,6 +25746,7 @@ Backend-only flags (these can be set in the config file also). --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --skip-links Don't warn about skipped symlinks + --skip-specials Don't warn about skipped pipes, sockets and device objects --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") @@ -24229,14 +25876,14 @@ As of Docker 1.12 volumes are supported by [Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts/) included with Docker Engine and created from descriptions in [swarm compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) -files for use with _swarm stacks_ across multiple cluster nodes. +files for use with *swarm stacks* across multiple cluster nodes. 
[Docker Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/) augment the default `local` volume driver included in Docker with stateful volumes shared across containers and hosts. Unlike local volumes, your -data will _not_ be deleted when such volume is removed. Plugins can run +data will *not* be deleted when such volume is removed. Plugins can run managed by the docker daemon, as a native system service -(under systemd, _sysv_ or _upstart_) or as a standalone executable. +(under systemd, *sysv* or *upstart*) or as a standalone executable. Rclone can run as docker volume plugin in all these modes. It interacts with the local docker daemon via [plugin API](https://docs.docker.com/engine/extend/plugin_api/) and @@ -24251,39 +25898,43 @@ rclone volume with Docker engine on a standalone Ubuntu machine. Start from [installing Docker](https://docs.docker.com/engine/install/) on the host. -The _FUSE_ driver is a prerequisite for rclone mounting and should be +The *FUSE* driver is a prerequisite for rclone mounting and should be installed on host: -``` + +```console sudo apt-get -y install fuse3 ``` Create two directories required by rclone docker plugin: -``` + +```console sudo mkdir -p /var/lib/docker-plugins/rclone/config sudo mkdir -p /var/lib/docker-plugins/rclone/cache ``` Install the managed rclone docker plugin for your architecture (here `amd64`): -``` + +```console docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions docker plugin list ``` Create your [SFTP volume](https://rclone.org/sftp/#standard-options): -``` + +```console docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true ``` Note that since all options are static, you don't even have to run `rclone config` or create the `rclone.conf` file (but the `config` directory should still be present). In the simplest case you can use `localhost` -as _hostname_ and your SSH credentials as _username_ and _password_. +as *hostname* and your SSH credentials as *username* and *password*. You can also change the remote path to your home directory on the host, for example `-o path=/home/username`. - Time to create a test container and mount the volume into it: -``` + +```console docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash ``` @@ -24292,7 +25943,8 @@ the mounted SFTP remote. You can type `ls` to list the mounted directory or otherwise play with it. Type `exit` when you are done. The container will stop but the volume will stay, ready to be reused. When it's not needed anymore, remove it: -``` + +```console docker volume list docker volume remove firstvolume ``` @@ -24301,7 +25953,7 @@ Now let us try **something more elaborate**: [Google Drive](https://rclone.org/drive/) volume on multi-node Docker Swarm. You should start from installing Docker and FUSE, creating plugin -directories and installing rclone plugin on _every_ swarm node. +directories and installing rclone plugin on *every* swarm node. Then [setup the Swarm](https://docs.docker.com/engine/swarm/swarm-mode/). Google Drive volumes need an access token which can be setup via web @@ -24310,14 +25962,15 @@ plugin cannot run a browser so we will use a technique similar to the [rclone setup on a headless box](https://rclone.org/remote_setup/). Run [rclone config](https://rclone.org/commands/rclone_config_create/) -on _another_ machine equipped with _web browser_ and graphical user interface. 
+on *another* machine equipped with *web browser* and graphical user interface. Create the [Google Drive remote](https://rclone.org/drive/#standard-options). When done, transfer the resulting `rclone.conf` to the Swarm cluster and save as `/var/lib/docker-plugins/rclone/config/rclone.conf` -on _every_ node. By default this location is accessible only to the +on *every* node. By default this location is accessible only to the root user so you will need appropriate privileges. The resulting config will look like this: -``` + +```ini [gdrive] type = drive scope = drive @@ -24328,7 +25981,8 @@ token = {"access_token":...} Now create the file named `example.yml` with a swarm stack description like this: -``` + +```yaml version: '3' services: heimdall: @@ -24346,16 +26000,18 @@ volumes: ``` and run the stack: -``` + +```console docker stack deploy example -c ./example.yml ``` After a few seconds docker will spread the parsed stack description -over cluster, create the `example_heimdall` service on port _8080_, +over cluster, create the `example_heimdall` service on port *8080*, run service containers on one or more cluster nodes and request the `example_configdata` volume from rclone plugins on the node hosts. You can use the following commands to confirm results: -``` + +```console docker service ls docker service ps example_heimdall docker volume ls @@ -24372,7 +26028,8 @@ the `docker volume remove example_configdata` command on every node. Volumes can be created with [docker volume create](https://docs.docker.com/engine/reference/commandline/volume_create/). Here are a few examples: -``` + +```console docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full docker volume create vol2 -d rclone -o remote=:storj,access_grant=xxx:heimdall docker volume create vol3 -d rclone -o type=storj -o path=heimdall -o storj-access-grant=xxx -o poll-interval=0 @@ -24384,7 +26041,8 @@ name `rclone/docker-volume-rclone` because you provided the `--alias rclone` option. Volumes can be inspected as follows: -``` + +```console docker volume list docker volume inspect vol1 ``` @@ -24393,7 +26051,7 @@ docker volume inspect vol1 Rclone flags and volume options are set via the `-o` flag to the `docker volume create` command. They include backend-specific parameters -as well as mount and _VFS_ options. Also there are a few +as well as mount and *VFS* options. Also there are a few special `-o` options: `remote`, `fs`, `type`, `path`, `mount-type` and `persist`. @@ -24401,19 +26059,23 @@ special `-o` options: trailing colon and optionally with a remote path. See the full syntax in the [rclone documentation](https://rclone.org/docs/#syntax-of-remote-paths). This option can be aliased as `fs` to prevent confusion with the -_remote_ parameter of such backends as _crypt_ or _alias_. +*remote* parameter of such backends as *crypt* or *alias*. The `remote=:backend:dir/subdir` syntax can be used to create [on-the-fly (config-less) remotes](https://rclone.org/docs/#backend-path-to-dir), while the `type` and `path` options provide a simpler alternative for this. Using two split options -``` + +```text -o type=backend -o path=dir/subdir ``` + is equivalent to the combined syntax -``` + +```text -o remote=:backend:dir/subdir ``` + but is arguably easier to parameterize in scripts. The `path` part is optional. @@ -24428,7 +26090,7 @@ Boolean CLI flags without value will gain the `true` value, e.g. 
Please note that you can provide parameters only for the backend immediately referenced by the backend type of mounted `remote`. -If this is a wrapping backend like _alias, chunker or crypt_, you cannot +If this is a wrapping backend like *alias, chunker or crypt*, you cannot provide options for the referred to remote or backend. This limitation is imposed by the rclone connection string parser. The only workaround is to feed plugin with `rclone.conf` or configure plugin arguments (see below). @@ -24451,17 +26113,21 @@ In future it will allow to persist on-the-fly remotes in the plugin The `remote` value can be extended with [connection strings](https://rclone.org/docs/#connection-strings) as an alternative way to supply backend parameters. This is equivalent -to the `-o` backend options with one _syntactic difference_. +to the `-o` backend options with one *syntactic difference*. Inside connection string the backend prefix must be dropped from parameter names but in the `-o param=value` array it must be present. For instance, compare the following option array -``` + +```text -o remote=:sftp:/home -o sftp-host=localhost ``` + with equivalent connection string: -``` + +```text -o remote=:sftp,host=localhost:/home ``` + This difference exists because flag options `-o key=val` include not only backend parameters but also mount/VFS flags and possibly other settings. Also it allows to discriminate the `remote` option from the `crypt-remote` @@ -24470,11 +26136,13 @@ due to clearer value substitution. ## Using with Swarm or Compose -Both _Docker Swarm_ and _Docker Compose_ use +Both *Docker Swarm* and *Docker Compose* use [YAML](http://yaml.org/spec/1.2/spec.html)-formatted text files to describe groups (stacks) of containers, their properties, networks and volumes. -_Compose_ uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) format, -_Swarm_ uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) format. +*Compose* uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) +format, +*Swarm* uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) +format. They are mostly similar, differences are explained in the [docker documentation](https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading). @@ -24483,7 +26151,7 @@ Each of them should be named after its volume and have at least two elements, the self-explanatory `driver: rclone` value and the `driver_opts:` structure playing the same role as `-o key=val` CLI flags: -``` +```yaml volumes: volume_name_1: driver: rclone @@ -24496,6 +26164,7 @@ volumes: ``` Notice a few important details: + - YAML prefers `_` in option names instead of `-`. - YAML treats single and double quotes interchangeably. Simple strings and integers can be left unquoted. @@ -24522,6 +26191,7 @@ The plugin requires presence of two directories on the host before it can be installed. Note that plugin will **not** create them automatically. By default they must exist on host at the following locations (though you can tweak the paths): + - `/var/lib/docker-plugins/rclone/config` is reserved for the `rclone.conf` config file and **must** exist even if it's empty and the config file is not present. 
@@ -24530,14 +26200,16 @@ By default they must exist on host at the following locations You can [install managed plugin](https://docs.docker.com/engine/reference/commandline/plugin_install/) with default settings as follows: -``` + +```console docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone ``` -The `:amd64` part of the image specification after colon is called a _tag_. +The `:amd64` part of the image specification after colon is called a *tag*. Usually you will want to install the latest plugin for your architecture. In this case the tag will just name it, like `amd64` above. The following plugin architectures are currently available: + - `amd64` - `arm64` - `arm-v7` @@ -24571,7 +26243,8 @@ mount namespaces and bind-mounts into requesting user containers. You can tweak a few plugin settings after installation when it's disabled (not in use), for instance: -``` + +```console docker plugin disable rclone docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other" docker plugin enable rclone @@ -24586,10 +26259,10 @@ plan in advance. You can tweak the following settings: `args`, `config`, `cache`, `HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY` and `RCLONE_VERBOSE`. -It's _your_ task to keep plugin settings in sync across swarm cluster nodes. +It's *your* task to keep plugin settings in sync across swarm cluster nodes. `args` sets command-line arguments for the `rclone serve docker` command -(_none_ by default). Arguments should be separated by space so you will +(*none* by default). Arguments should be separated by space so you will normally want to put them in quotes on the [docker plugin set](https://docs.docker.com/engine/reference/commandline/plugin_set/) command line. Both [serve docker flags](https://rclone.org/commands/rclone_serve_docker/#options) @@ -24611,7 +26284,7 @@ at the predefined path `/data/config`. For example, if your key file is named `sftp-box1.key` on the host, the corresponding volume config option should read `-o sftp-key-file=/data/config/sftp-box1.key`. -`cache=/host/dir` sets alternative host location for the _cache_ directory. +`cache=/host/dir` sets alternative host location for the *cache* directory. The plugin will keep VFS caches here. Also it will create and maintain the `docker-plugin.state` file in this directory. When the plugin is restarted or reinstalled, it will look in this file to recreate any volumes @@ -24624,13 +26297,14 @@ failures, daemon restarts or host reboots. to `2` (debugging). Verbosity can be also tweaked via `args="-v [-v] ..."`. Since arguments are more generic, you will rarely need this setting. The plugin output by default feeds the docker daemon log on local host. -Log entries are reflected as _errors_ in the docker log but retain their +Log entries are reflected as *errors* in the docker log but retain their actual level assigned by rclone in the encapsulated message string. `HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY` customize the plugin proxy settings. -You can set custom plugin options right when you install it, _in one go_: -``` +You can set custom plugin options right when you install it, *in one go*: + +```console docker plugin remove rclone docker plugin install rclone/docker-volume-rclone:amd64 \ --alias rclone --grant-all-permissions \ @@ -24644,7 +26318,8 @@ The docker plugin volume protocol doesn't provide a way for plugins to inform the docker daemon that a volume is (un-)available. 
As a workaround you can setup a healthcheck to verify that the mount is responding, for example: -``` + +```yaml services: my_service: image: my_image @@ -24665,8 +26340,9 @@ systems. Proceed further only if you are on Linux. First, [install rclone](https://rclone.org/install/). You can just run it (type `rclone serve docker` and hit enter) for the test. -Install _FUSE_: -``` +Install *FUSE*: + +```console sudo apt-get -y install fuse ``` @@ -24675,22 +26351,25 @@ Download two systemd configuration files: and [docker-volume-rclone.socket](https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.socket). Put them to the `/etc/systemd/system/` directory: -``` + +```console cp docker-volume-plugin.service /etc/systemd/system/ cp docker-volume-plugin.socket /etc/systemd/system/ ``` -Please note that all commands in this section must be run as _root_ but +Please note that all commands in this section must be run as *root* but we omit `sudo` prefix for brevity. Now create directories required by the service: -``` + +```console mkdir -p /var/lib/docker-volumes/rclone mkdir -p /var/lib/docker-plugins/rclone/config mkdir -p /var/lib/docker-plugins/rclone/cache ``` Run the docker plugin service in the socket activated mode: -``` + +```console systemctl daemon-reload systemctl start docker-volume-rclone.service systemctl enable docker-volume-rclone.socket @@ -24699,6 +26378,7 @@ systemctl restart docker ``` Or run the service directly: + - run `systemctl daemon-reload` to let systemd pick up new config - run `systemctl enable docker-volume-rclone.service` to make the new service start automatically when you power on your machine. @@ -24715,39 +26395,50 @@ prefer socket activation. You can [see managed plugin settings](https://docs.docker.com/engine/extend/#debugging-plugins) with -``` + +```console docker plugin list docker plugin inspect rclone ``` + Note that docker (including latest 20.10.7) will not show actual values of `args`, just the defaults. Use `journalctl --unit docker` to see managed plugin output as part of -the docker daemon log. Note that docker reflects plugin lines as _errors_ +the docker daemon log. Note that docker reflects plugin lines as *errors* but their actual level can be seen from encapsulated message string. You will usually install the latest version of managed plugin for your platform. Use the following commands to print the actual installed version: -``` + +```console PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}') sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version ``` You can even use `runc` to run shell inside the plugin container: -``` + +```console sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash ``` Also you can use curl to check the plugin socket connectivity: -``` + +```console docker plugin list --no-trunc PLUGID=123abc... sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate ``` + though this is rarely needed. -If the plugin fails to work properly, and only as a last resort after you tried diagnosing with the above methods, you can try clearing the state of the plugin. **Note that all existing rclone docker volumes will probably have to be recreated.** This might be needed because a reinstall don't cleanup existing state files to allow for easy restoration, as stated above. 
-``` +If the plugin fails to work properly, and only as a last resort after you tried +diagnosing with the above methods, you can try clearing the state of the plugin. +**Note that all existing rclone docker volumes will probably have to be recreated.** +This might be needed because a reinstall don't cleanup existing state files to +allow for easy restoration, as stated above. + +```console docker plugin disable rclone # disable the plugin to ensure no interference sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state docker plugin enable rclone # re-enable the plugin afterward @@ -24755,20 +26446,22 @@ docker plugin enable rclone # re-enable the plugin afterward ## Caveats -Finally I'd like to mention a _caveat with updating volume settings_. +Finally I'd like to mention a *caveat with updating volume settings*. Docker CLI does not have a dedicated command like `docker volume update`. It may be tempting to invoke `docker volume create` with updated options on existing volume, but there is a gotcha. The command will do nothing, it won't even return an error. I hope that docker maintainers will fix this some day. In the meantime be aware that you must remove your volume before recreating it with new settings: -``` + +```console docker volume remove my_vol docker volume create my_vol -d rclone -o opt1=new_val1 ... ``` and verify that settings did update: -``` + +```console docker volume list docker volume inspect my_vol ``` @@ -24803,7 +26496,7 @@ section) before using, or data loss can result. Questions can be asked in the For example, your first command might look like this: -```bash +```console rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run ``` @@ -24812,7 +26505,7 @@ After that, remove `--resync` as well. Here is a typical run log (with timestamps removed for clarity): -```bash +```console rclone bisync /testdir/path1/ /testdir/path2/ --verbose INFO : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/" INFO : Path1 checking for diffs @@ -24858,7 +26551,7 @@ INFO : Bisync successful ## Command line syntax -```bash +```console $ rclone bisync --help Usage: rclone bisync remote1:path1 remote2:path2 [flags] @@ -24941,7 +26634,7 @@ be copied to Path1, and the process will then copy the Path1 tree to Path2. The `--resync` sequence is roughly equivalent to the following (but see [`--resync-mode`](#resync-mode) for other options): -```bash +```console rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs] rclone copy Path1 Path2 [--create-empty-src-dirs] ``` @@ -24997,7 +26690,7 @@ Shutdown](#graceful-shutdown) mode, when needed) for a very robust almost any interruption it might encounter. Consider adding something like the following: -```bash +```text --resilient --recover --max-lock 2m --conflict-resolve newer ``` @@ -25125,13 +26818,13 @@ simultaneously (or just `modtime` AND `checksum`). being `size`, `modtime`, and `checksum`. For example, if you want to compare size and checksum, but not modtime, you would do: -```bash +```text --compare size,checksum ``` Or if you want to compare all three: -```bash +```text --compare size,modtime,checksum ``` @@ -25399,7 +27092,7 @@ specified (or when two identical suffixes are specified.) i.e. 
with `--conflict-loser pathname`, all of the following would produce exactly the same result: -```bash +```text --conflict-suffix path --conflict-suffix path,path --conflict-suffix path1,path2 @@ -25414,7 +27107,7 @@ changed with the [`--suffix-keep-extension`](https://rclone.org/docs/#suffix-kee curly braces as globs. This can be helpful to track the date and/or time that each conflict was handled by bisync. For example: -```bash +```text --conflict-suffix {DateOnly}-conflict // result: myfile.txt.2006-01-02-conflict1 ``` @@ -25439,7 +27132,7 @@ conflicts with `..path1` and `..path2` (with two periods, and `path` instead of additional dots can be added by including them in the specified suffix string. For example, for behavior equivalent to the previous default, use: -```bash +```text [--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path ``` @@ -25479,13 +27172,13 @@ For example, a possible sequence could look like this: 1. Normally scheduled bisync run: - ```bash + ```console rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient ``` 2. Periodic independent integrity check (perhaps scheduled nightly or weekly): - ```bash + ```console rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt ``` @@ -25493,7 +27186,7 @@ For example, a possible sequence could look like this: If one side is more up-to-date and you want to make the other side match it, you could run: - ```bash + ```console rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v ``` @@ -25623,7 +27316,7 @@ override `--backup-dir`. Example: -```bash +```console rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case ``` @@ -25818,21 +27511,17 @@ encodings.) 
The following backends have known issues that need more investigation: - -- `TestGoFile` (`gofile`) - - [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [78 more](https://pub.rclone.org/integration-tests/current/) -- Updated: 2025-08-21-010015 - + +- `TestDropbox` (`dropbox`) + - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt) +- Updated: 2025-11-21-010037 + The following backends either have not been tested recently or have known issues that are deemed unfixable for the time being: - + +- `TestArchive` (`archive`) - `TestCache` (`cache`) - `TestFileLu` (`filelu`) - `TestFilesCom` (`filescom`) @@ -25857,7 +27546,7 @@ that are deemed unfixable for the time being: - `TestWebdavNextcloud` (`webdav`) - `TestWebdavOwncloud` (`webdav`) - `TestnStorage` (`netstorage`) - + ([more info](https://github.com/rclone/rclone/blob/master/fstest/test_all/config.yaml)) The above lists are updated for each stable release of rclone. For test results @@ -26155,7 +27844,7 @@ listings and thus not checked during the check access phase. Here are two normal runs. The first one has a newer file on the remote. The second has no deltas between local and remote. -```bash +```text 2021/05/16 00:24:38 INFO : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/" 2021/05/16 00:24:38 INFO : Path1 checking for diffs 2021/05/16 00:24:38 INFO : - Path1 File is new - file.txt @@ -26205,7 +27894,7 @@ numerous such messages in the log. Since there are no final error/warning messages on line *7*, rclone has recovered from failure after a retry, and the overall sync was successful. -```bash +```text 1: 2021/05/14 00:44:12 INFO : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:" 2: 2021/05/14 00:44:12 INFO : Path1 checking for diffs 3: 2021/05/14 00:44:12 INFO : Path2 checking for diffs @@ -26218,7 +27907,7 @@ recovered from failure after a retry, and the overall sync was successful. This log shows a *Critical failure* which requires a `--resync` to recover from. See the [Runtime Error Handling](#error-handling) section. -```bash +```text 2021/05/12 00:49:40 INFO : Google drive root '': Waiting for checks to finish 2021/05/12 00:49:40 INFO : Google drive root '': Waiting for transfers to finish 2021/05/12 00:49:40 INFO : Google drive root '': not deleting files as there were IO errors @@ -26303,7 +27992,7 @@ on Linux you can use *Cron* which is described below. The 1st example runs a sync every 5 minutes between a local directory and an OwnCloud server, with output logged to a runlog file: -```bash +```text # Minute (0-59) # Hour (0-23) # Day of Month (1-31) @@ -26320,7 +28009,7 @@ If you run `rclone bisync` as a cron job, redirect stdout/stderr to a file. The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the `>>`) and stderr (via `2>&1`) to a log file. 
-```bash
+```text
 0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1
 ```
 
@@ -26402,7 +28091,7 @@ Rerunning the test will let it pass. Consider such failures as noise.
 
 ### Test command syntax
 
-```bash
+```text
 usage: go test ./cmd/bisync [options...]
 
 Options:
@@ -26776,13 +28465,14 @@ with a public key compiled into the rclone binary.
 
 You may obtain the release signing key from:
 
 - From [KEYS](/KEYS) on this website - this file contains all past signing keys also.
-- The git repository hosted on GitHub - https://github.com/rclone/rclone/blob/master/docs/content/KEYS
+- The git repository hosted on GitHub - <https://github.com/rclone/rclone/blob/master/docs/content/KEYS>
 - `gpg --keyserver hkps://keys.openpgp.org --search nick@craig-wood.com`
 - `gpg --keyserver hkps://keyserver.ubuntu.com --search nick@craig-wood.com`
-- https://www.craig-wood.com/nick/pub/pgp-key.txt
+- <https://www.craig-wood.com/nick/pub/pgp-key.txt>
+
 After importing the key, verify that the fingerprint of one of the
-keys matches: `FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA` as this key is used for signing.
+keys matches: `FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA` as this key is used
+for signing.
 
 We recommend that you cross-check the fingerprint shown above through
 the domains listed below. By cross-checking the integrity of the
@@ -26797,9 +28487,10 @@ developers at once.
 
 ## How to verify the release
 
-In the release directory you will see the release files and some files called `MD5SUMS`, `SHA1SUMS` and `SHA256SUMS`.
+In the release directory you will see the release files and some files
+called `MD5SUMS`, `SHA1SUMS` and `SHA256SUMS`.
 
-```
+```console
 $ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
 MD5SUMS
 SHA1SUMS
@@ -26817,7 +28508,7 @@ binary files in the release directory along with a signature.
 
 For example:
 
-```
+```console
 $ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
 -----BEGIN PGP SIGNED MESSAGE-----
 Hash: SHA1
@@ -26845,11 +28536,11 @@ as these are the most secure. You could verify the other types of hash
 also for extra security. `rclone selfupdate` verifies just the
 `SHA256SUMS`.
 
-```
-$ mkdir /tmp/check
-$ cd /tmp/check
-$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
-$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .
+```console
+mkdir /tmp/check
+cd /tmp/check
+rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
+rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .
 ```
 
 ### Verify the signatures
 
 First verify the signatures on the SHA256 file.
 
 Import the key. See above for ways to verify this key is correct.
 
-```
+```console
 $ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
 gpg: key 93935E02FF3B54FA: public key "Nick Craig-Wood <nick@craig-wood.com>" imported
 gpg: Total number processed: 1
 gpg:               imported: 1
 ```
 
 Then check the signature:
 
-```
+```console
 $ gpg --verify SHA256SUMS
 gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
 gpg:                using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
@@ -26883,14 +28574,14 @@ Repeat for `MD5SUMS` and `SHA1SUMS` if desired.
 
 Now that we know the signatures on the hashes are OK we can verify the
 binaries match the hashes, completing the verification.
-``` +```console $ sha256sum -c SHA256SUMS 2>&1 | grep OK rclone-v1.63.1-windows-amd64.zip: OK ``` Or do the check with rclone -``` +```console $ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip 2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 0 2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 1 @@ -26905,7 +28596,7 @@ $ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip You can verify the signatures and hashes in one command line like this: -``` +```console $ h=$(gpg --decrypt SHA256SUMS) && echo "$h" | sha256sum - -c --ignore-missing gpg: Signature made Mon 17 Jul 2023 15:03:17 BST gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA @@ -26926,16 +28617,18 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. ## Configuration -The initial setup for 1Fichier involves getting the API key from the website which you -need to do in your browser. +The initial setup for 1Fichier involves getting the API key from the website +which you need to do in your browser. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -26972,19 +28665,27 @@ d) Delete this remote y/e/d> y ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): + List directories in top level of your 1Fichier account - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your 1Fichier account - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to a 1Fichier directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -27024,7 +28725,7 @@ name: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to fichier (1Fichier). @@ -27116,7 +28817,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -27125,13 +28826,14 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Alias The `alias` remote provides a new name for another remote. -Paths may be as deep as required or a local path, +Paths may be as deep as required or a local path, e.g. `remote:directory/subdirectory` or `/directory/subdirectory`. During the initial setup with `rclone config` you will specify the target @@ -27147,9 +28849,9 @@ Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking The empty path is not allowed as a remote. To alias the current directory use `.` instead. -The target remote can also be a [connection string](https://rclone.org/docs/#connection-strings). +The target remote can also be a [connection string](https://rclone.org/docs/#connection-strings). 
This can be used to modify the config of a remote for different uses, e.g. -the alias `myDriveTrash` with the target remote `myDrive,trashed_only:` +the alias `myDriveTrash` with the target remote `myDrive,trashed_only:` can be used to only show the trashed files in `myDrive`. ## Configuration @@ -27157,11 +28859,13 @@ can be used to only show the trashed files in `myDrive`. Here is an example of how to make an alias called `remote` for local folder. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -27204,21 +28908,28 @@ q) Quit config e/n/d/r/c/s/q> q ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level in `/mnt/storage/backup` - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in `/mnt/storage/backup` - rclone ls remote: +```console +rclone ls remote: +``` Copy another local directory to the alias directory called source - rclone copy /home/source remote:source - +```console +rclone copy /home/source remote:source +``` + ### Standard options Here are the Standard options specific to alias (Alias for an existing remote). @@ -27251,12 +28962,15 @@ Properties: - Type: string - Required: false - + # Amazon S3 Storage Providers The S3 backend can be used with a number of different providers: + + + - AWS S3 - Alibaba Cloud (Aliyun) Object Storage System (OSS) @@ -27264,13 +28978,17 @@ The S3 backend can be used with a number of different providers: - China Mobile Ecloud Elastic Object Storage (EOS) - Cloudflare R2 - Arvan Cloud Object Storage (AOS) +- Cubbit DS3 - DigitalOcean Spaces - Dreamhost - Exaba +- FileLu S5 (S3-Compatible Object Storage) - GCS +- Hetzner - Huawei OBS - IBM COS S3 - IDrive e2 +- Intercolo Object Storage - IONOS Cloud - Leviia Object Storage - Liara Object Storage @@ -27283,12 +29001,15 @@ The S3 backend can be used with a number of different providers: - Petabox - Pure Storage FlashBlade - Qiniu Cloud Object Storage (Kodo) +- Rabata Cloud Storage - RackCorp Object Storage - Rclone Serve S3 - Scaleway - Seagate Lyve Cloud - SeaweedFS - Selectel +- Servercore Object Storage +- Spectra Logic - StackPath - Storj - Synology C2 Object Storage @@ -27297,6 +29018,8 @@ The S3 backend can be used with a number of different providers: - Zata + + Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. @@ -27305,20 +29028,28 @@ you can use it like this: See all buckets - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new bucket - rclone mkdir remote:bucket +```console +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket +```console +rclone ls remote:bucket +``` Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. - rclone sync --interactive /home/local/directory remote:bucket +```console +rclone sync --interactive /home/local/directory remote:bucket +``` ## Configuration @@ -27327,12 +29058,14 @@ Most applies to the other providers as well, any differences are described [belo First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? 
n) New remote s) Set configuration password q) Quit config @@ -27553,9 +29286,12 @@ However for objects which were uploaded as multipart uploads or with server side encryption (SSE-AWS or SSE-C) the `ETag` header is no longer the MD5 sum of the data, so rclone adds an additional piece of metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in -the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually: +the same format as is required for `Content-MD5`). You can use base64 -d and +hexdump to check this value manually: - echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump +```console +echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump +``` or you can use `rclone check` to verify the hashes are OK. @@ -27585,30 +29321,30 @@ or `rclone copy`) in a few different ways, each with its own tradeoffs. - `--size-only` - - Only checks the size of files. - - Uses no extra transactions. - - If the file doesn't change size then rclone won't detect it has - changed. - - `rclone sync --size-only /path/to/source s3:bucket` + - Only checks the size of files. + - Uses no extra transactions. + - If the file doesn't change size then rclone won't detect it has + changed. + - `rclone sync --size-only /path/to/source s3:bucket` - `--checksum` - - Checks the size and MD5 checksum of files. - - Uses no extra transactions. - - The most accurate detection of changes possible. - - Will cause the source to read an MD5 checksum which, if it is a - local disk, will cause lots of disk activity. - - If the source and destination are both S3 this is the - **recommended** flag to use for maximum efficiency. - - `rclone sync --checksum /path/to/source s3:bucket` + - Checks the size and MD5 checksum of files. + - Uses no extra transactions. + - The most accurate detection of changes possible. + - Will cause the source to read an MD5 checksum which, if it is a + local disk, will cause lots of disk activity. + - If the source and destination are both S3 this is the + **recommended** flag to use for maximum efficiency. + - `rclone sync --checksum /path/to/source s3:bucket` - `--update --use-server-modtime` - - Uses no extra transactions. - - Modification time becomes the time the object was uploaded. - - For many operations this is sufficient to determine if it needs - uploading. - - Using `--update` along with `--use-server-modtime`, avoids the - extra API call and uploads files whose local modification time - is newer than the time it was last uploaded. - - Files created with timestamps in the past will be missed by the sync. - - `rclone sync --update --use-server-modtime /path/to/source s3:bucket` + - Uses no extra transactions. + - Modification time becomes the time the object was uploaded. + - For many operations this is sufficient to determine if it needs + uploading. + - Using `--update` along with `--use-server-modtime`, avoids the + extra API call and uploads files whose local modification time + is newer than the time it was last uploaded. + - Files created with timestamps in the past will be missed by the sync. + - `rclone sync --update --use-server-modtime /path/to/source s3:bucket` These flags can and should be used in combination with `--fast-list` - see below. @@ -27628,7 +29364,9 @@ individually. This takes one API call per directory. Using the memory first using a smaller number of API calls (one per 1000 objects). See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. 
- rclone sync --fast-list --checksum /path/to/source s3:bucket +```console +rclone sync --fast-list --checksum /path/to/source s3:bucket +``` `--fast-list` trades off API transactions for memory use. As a rough guide rclone uses 1k of memory per object stored, so using @@ -27641,7 +29379,9 @@ instead of through directory listings. You can do a "top-up" sync very cheaply by using `--max-age` and `--no-traverse` to copy only recent files, eg - rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket +```console +rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket +``` You'd then do a full `rclone sync` less often. @@ -27662,32 +29402,39 @@ Setting this flag increases the chance for undetected upload failures. #### Using server-side copy If you are copying objects between S3 buckets in the same region, you should -use server-side copy. -This is much faster than downloading and re-uploading the objects, as no data is transferred. - -For rclone to use server-side copy, you must use the same remote for the source and destination. +use server-side copy. This is much faster than downloading and re-uploading +the objects, as no data is transferred. - rclone copy s3:source-bucket s3:destination-bucket +For rclone to use server-side copy, you must use the same remote for the +source and destination. -When using server-side copy, the performance is limited by the rate at which rclone issues -API requests to S3. -See below for how to increase the number of API requests rclone makes. +```console +rclone copy s3:source-bucket s3:destination-bucket +``` + +When using server-side copy, the performance is limited by the rate at which +rclone issues API requests to S3. See below for how to increase the number of +API requests rclone makes. #### Increasing the rate of API requests -You can increase the rate of API requests to S3 by increasing the parallelism using `--transfers` and `--checkers` -options. +You can increase the rate of API requests to S3 by increasing the parallelism +using `--transfers` and `--checkers` options. -Rclone uses a very conservative defaults for these settings, as not all providers support high rates of requests. -Depending on your provider, you can increase significantly the number of transfers and checkers. +Rclone uses a very conservative defaults for these settings, as not all +providers support high rates of requests. Depending on your provider, you can +increase significantly the number of transfers and checkers. -For example, with AWS S3, if you can increase the number of checkers to values like 200. -If you are doing a server-side copy, you can also increase the number of transfers to 200. +For example, with AWS S3, if you can increase the number of checkers to values +like 200. If you are doing a server-side copy, you can also increase the number +of transfers to 200. - rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket - -You will need to experiment with these values to find the optimal settings for your setup. +```console +rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket +``` +You will need to experiment with these values to find the optimal settings for +your setup. ### Data integrity @@ -27802,7 +29549,7 @@ version followed by a `cleanup` of the old versions. Show current version and all the versions with `--s3-versions` flag. 
-``` +```console $ rclone -q ls s3:cleanup-test 9 one.txt @@ -27815,7 +29562,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test Retrieve an old version -``` +```console $ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp $ ls -l /tmp/one-v2016-07-04-141003-000.txt @@ -27824,7 +29571,7 @@ $ ls -l /tmp/one-v2016-07-04-141003-000.txt Clean up all the old versions and show that they've gone. -``` +```console $ rclone -q backend cleanup-hidden s3:cleanup-test $ rclone -q ls s3:cleanup-test @@ -27839,11 +29586,13 @@ $ rclone -q --s3-versions ls s3:cleanup-test When using `--s3-versions` flag rclone is relying on the file name to work out whether the objects are versions or not. Versions' names are created by inserting timestamp between file name and its extension. -``` + +```text 9 file.txt 8 file-v2023-07-17-161032-000.txt 16 file-v2023-06-15-141003-000.txt ``` + If there are real files present with the same names as versions, then behaviour of `--s3-versions` can be unpredictable. @@ -27851,8 +29600,8 @@ behaviour of `--s3-versions` can be unpredictable. If you run `rclone cleanup s3:bucket` then it will remove all pending multipart uploads older than 24 hours. You can use the `--interactive`/`i` -or `--dry-run` flag to see exactly what it will do. If you want more control over the -expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h` +or `--dry-run` flag to see exactly what it will do. If you want more control +over the expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h` to expire all uploads older than one hour. You can use `rclone backend list-multipart-uploads s3:bucket` to see the pending multipart uploads. @@ -27896,9 +29645,9 @@ The chunk sizes used in the multipart upload are specified by `--s3-chunk-size` and the number of chunks uploaded concurrently is specified by `--s3-upload-concurrency`. -Multipart uploads will use `--transfers` * `--s3-upload-concurrency` * -`--s3-chunk-size` extra memory. Single part uploads to not use extra -memory. +Multipart uploads will use extra memory equal to: `--transfers` × +`--s3-upload-concurrency` × `--s3-chunk-size`. Single part uploads do not +use extra memory. Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely @@ -27910,7 +29659,6 @@ throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory. - ### Buckets and Regions With Amazon S3 you can list buckets (`rclone lsd`) using any region, @@ -27926,23 +29674,28 @@ credentials, with and without using the environment. The different authentication methods are tried in this order: - - Directly in the rclone configuration file (`env_auth = false` in the config file): - - `access_key_id` and `secret_access_key` are required. - - `session_token` can be optionally set when using AWS STS. - - Runtime configuration (`env_auth = true` in the config file): - - Export the following environment variables before running `rclone`: - - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` - - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` - - Session Token: `AWS_SESSION_TOKEN` (optional) - - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html): - - Profile files are standard files used by AWS CLI tools - - By default it will use the profile in your home directory (e.g. 
`~/.aws/credentials` on unix based systems) file and the "default" profile, to change set these environment variables or config keys: - - `AWS_SHARED_CREDENTIALS_FILE` to control which file or the `shared_credentials_file` config key. - - `AWS_PROFILE` to control which profile to use or the `profile` config key. - - Or, run `rclone` in an ECS task with an IAM role (AWS only). - - Or, run `rclone` on an EC2 instance with an IAM role (AWS only). - - Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only). - - Or, use [process credentials](https://docs.aws.amazon.com/sdkref/latest/guide/feature-process-credentials.html) to read config from an external program. +- Directly in the rclone configuration file (`env_auth = false` in the config file): + - `access_key_id` and `secret_access_key` are required. + - `session_token` can be optionally set when using AWS STS. +- Runtime configuration (`env_auth = true` in the config file): + - Export the following environment variables before running `rclone`: + - Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY` + - Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY` + - Session Token: `AWS_SESSION_TOKEN` (optional) + - Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html): + - Profile files are standard files used by AWS CLI tools + - By default it will use the profile in your home directory (e.g. `~/.aws/credentials` + on unix based systems) file and the "default" profile, to change set these + environment variables or config keys: + - `AWS_SHARED_CREDENTIALS_FILE` to control which file or the `shared_credentials_file` + config key. + - `AWS_PROFILE` to control which profile to use or the `profile` config key. + - Or, run `rclone` in an ECS task with an IAM role (AWS only). + - Or, run `rclone` on an EC2 instance with an IAM role (AWS only). + - Or, run `rclone` in an EKS pod with an IAM role that is associated with a + service account (AWS only). + - Or, use [process credentials](https://docs.aws.amazon.com/sdkref/latest/guide/feature-process-credentials.html) + to read config from an external program. With `env_auth = true` rclone (which uses the SDK for Go v2) should support [all authentication methods](https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html) @@ -27957,44 +29710,44 @@ credentials then S3 interaction will be non-authenticated (see the When using the `sync` subcommand of `rclone` the following minimum permissions are required to be available on the bucket being written to: -* `ListBucket` -* `DeleteObject` -* `GetObject` -* `PutObject` -* `PutObjectACL` -* `CreateBucket` (unless using [s3-no-check-bucket](#s3-no-check-bucket)) +- `ListBucket` +- `DeleteObject` +- `GetObject` +- `PutObject` +- `PutObjectACL` +- `CreateBucket` (unless using [s3-no-check-bucket](#s3-no-check-bucket)) When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required. 
Example policy: -``` +```json { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::USER_SID:user/USER_NAME" - }, - "Action": [ - "s3:ListBucket", - "s3:DeleteObject", - "s3:GetObject", - "s3:PutObject", - "s3:PutObjectAcl" - ], - "Resource": [ - "arn:aws:s3:::BUCKET_NAME/*", - "arn:aws:s3:::BUCKET_NAME" - ] - }, - { - "Effect": "Allow", - "Action": "s3:ListAllMyBuckets", - "Resource": "arn:aws:s3:::*" - } - ] + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::USER_SID:user/USER_NAME" + }, + "Action": [ + "s3:ListBucket", + "s3:DeleteObject", + "s3:GetObject", + "s3:PutObject", + "s3:PutObjectAcl" + ], + "Resource": [ + "arn:aws:s3:::BUCKET_NAME/*", + "arn:aws:s3:::BUCKET_NAME" + ] + }, + { + "Effect": "Allow", + "Action": "s3:ListAllMyBuckets", + "Resource": "arn:aws:s3:::*" + } + ] } ``` @@ -28004,7 +29757,8 @@ Notes on above: that `USER_NAME` has been created. 2. The Resource entry must include both resource ARNs, as one implies the bucket and the other implies the bucket's objects. -3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included. +3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already +exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included. For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b) that will generate one or more buckets that will work with `rclone sync`. @@ -28018,11 +29772,14 @@ create checksum errors. ### Glacier and Glacier Deep Archive -You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html). +You can upload objects using the glacier storage class or transition them to +glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html). The bucket can still be synced or copied into normally, but if rclone tries to access data from the glacier storage class you will see an error like below. - 2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file +```text +2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file +``` In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html) the object(s) in question before accessing object contents. @@ -28035,16 +29792,18 @@ Vault API, so rclone cannot directly access Glacier Vaults. According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-permission): -> If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header. - -As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section, -small files that are not uploaded as multipart, use a different tag, causing the upload to fail. -A simple solution is to set the `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart. +> If you configure a default retention period on a bucket, requests to upload +objects in such a bucket must include the Content-MD5 header. 
+As mentioned in the [Modification times and hashes](#modification-times-and-hashes) +section, small files that are not uploaded as multipart, use a different tag, causing +the upload to fail. A simple solution is to set the `--s3-upload-cutoff 0` and force +all the files to be uploaded as multipart. + ### Standard options -Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu, Zata and others). +Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other). #### --s3-provider @@ -28057,84 +29816,98 @@ Properties: - Type: string - Required: false - Examples: - - "AWS" - - Amazon Web Services (AWS) S3 - - "Alibaba" - - Alibaba Cloud Object Storage System (OSS) formerly Aliyun - - "ArvanCloud" - - Arvan Cloud Object Storage (AOS) - - "Ceph" - - Ceph Object Storage - - "ChinaMobile" - - China Mobile Ecloud Elastic Object Storage (EOS) - - "Cloudflare" - - Cloudflare R2 Storage - - "DigitalOcean" - - DigitalOcean Spaces - - "Dreamhost" - - Dreamhost DreamObjects - - "Exaba" - - Exaba Object Storage - - "FlashBlade" - - Pure Storage FlashBlade Object Storage - - "GCS" - - Google Cloud Storage - - "HuaweiOBS" - - Huawei Object Storage Service - - "IBMCOS" - - IBM COS S3 - - "IDrive" - - IDrive e2 - - "IONOS" - - IONOS Cloud - - "LyveCloud" - - Seagate Lyve Cloud - - "Leviia" - - Leviia Object Storage - - "Liara" - - Liara Object Storage - - "Linode" - - Linode Object Storage - - "Magalu" - - Magalu Object Storage - - "Mega" - - MEGA S4 Object Storage - - "Minio" - - Minio Object Storage - - "Netease" - - Netease Object Storage (NOS) - - "Outscale" - - OUTSCALE Object Storage (OOS) - - "OVHcloud" - - OVHcloud Object Storage - - "Petabox" - - Petabox Object Storage - - "RackCorp" - - RackCorp Object Storage - - "Rclone" - - Rclone S3 Server - - "Scaleway" - - Scaleway Object Storage - - "SeaweedFS" - - SeaweedFS S3 - - "Selectel" - - Selectel Object Storage - - "StackPath" - - StackPath Object Storage - - "Storj" - - Storj (S3 Compatible Gateway) - - "Synology" - - Synology C2 Object Storage - - "TencentCOS" - - Tencent Cloud Object Storage (COS) - - "Wasabi" - - Wasabi Object Storage - - "Qiniu" - - Qiniu Object Storage (Kodo) - - "Zata" - - Zata (S3 compatible Gateway) - - "Other" - - Any other S3 compatible provider + - "AWS" + - Amazon Web Services (AWS) S3 + - "Alibaba" + - Alibaba Cloud Object Storage System (OSS) formerly Aliyun + - "ArvanCloud" + - Arvan Cloud Object Storage (AOS) + - "Ceph" + - Ceph Object Storage + - "ChinaMobile" + - China Mobile Ecloud Elastic Object Storage (EOS) + - "Cloudflare" + - Cloudflare R2 Storage + - "Cubbit" + - Cubbit DS3 Object Storage + - "DigitalOcean" + - DigitalOcean Spaces + - "Dreamhost" + - Dreamhost DreamObjects + - "Exaba" + - Exaba 
Object Storage + - "FileLu" + - FileLu S5 (S3-Compatible Object Storage) + - "FlashBlade" + - Pure Storage FlashBlade Object Storage + - "GCS" + - Google Cloud Storage + - "Hetzner" + - Hetzner Object Storage + - "HuaweiOBS" + - Huawei Object Storage Service + - "IBMCOS" + - IBM COS S3 + - "IDrive" + - IDrive e2 + - "Intercolo" + - Intercolo Object Storage + - "IONOS" + - IONOS Cloud + - "Leviia" + - Leviia Object Storage + - "Liara" + - Liara Object Storage + - "Linode" + - Linode Object Storage + - "LyveCloud" + - Seagate Lyve Cloud + - "Magalu" + - Magalu Object Storage + - "Mega" + - MEGA S4 Object Storage + - "Minio" + - Minio Object Storage + - "Netease" + - Netease Object Storage (NOS) + - "Outscale" + - OUTSCALE Object Storage (OOS) + - "OVHcloud" + - OVHcloud Object Storage + - "Petabox" + - Petabox Object Storage + - "Qiniu" + - Qiniu Object Storage (Kodo) + - "Rabata" + - Rabata Cloud Storage + - "RackCorp" + - RackCorp Object Storage + - "Rclone" + - Rclone S3 Server + - "Scaleway" + - Scaleway Object Storage + - "SeaweedFS" + - SeaweedFS S3 + - "Selectel" + - Selectel Object Storage + - "Servercore" + - Servercore Object Storage + - "SpectraLogic" + - Spectra Logic Black Pearl + - "StackPath" + - StackPath Object Storage + - "Storj" + - Storj (S3 Compatible Gateway) + - "Synology" + - Synology C2 Object Storage + - "TencentCOS" + - Tencent Cloud Object Storage (COS) + - "Wasabi" + - Wasabi Object Storage + - "Zata" + - Zata (S3 compatible Gateway) + - "Other" + - Any other S3 compatible provider #### --s3-env-auth @@ -28149,10 +29922,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter AWS credentials in the next step. - - "true" - - Get AWS credentials from the environment (env vars or IAM). + - "false" + - Enter AWS credentials in the next step. + - "true" + - Get AWS credentials from the environment (env vars or IAM). #### --s3-access-key-id @@ -28184,174 +29957,1701 @@ Properties: Region to connect to. +Leave blank if you are using an S3 clone and you don't have a region. + Properties: - Config: region - Env Var: RCLONE_S3_REGION -- Provider: AWS +- Provider: AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "us-east-1" - - The default endpoint - a good choice if you are unsure. - - US Region, Northern Virginia, or Pacific Northwest. - - Leave location constraint empty. - - "us-east-2" - - US East (Ohio) Region. - - Needs location constraint us-east-2. - - "us-west-1" - - US West (Northern California) Region. - - Needs location constraint us-west-1. - - "us-west-2" - - US West (Oregon) Region. - - Needs location constraint us-west-2. - - "ca-central-1" - - Canada (Central) Region. - - Needs location constraint ca-central-1. - - "eu-west-1" - - EU (Ireland) Region. - - Needs location constraint EU or eu-west-1. - - "eu-west-2" - - EU (London) Region. - - Needs location constraint eu-west-2. - - "eu-west-3" - - EU (Paris) Region. - - Needs location constraint eu-west-3. - - "eu-north-1" - - EU (Stockholm) Region. - - Needs location constraint eu-north-1. - - "eu-south-1" - - EU (Milan) Region. - - Needs location constraint eu-south-1. - - "eu-central-1" - - EU (Frankfurt) Region. - - Needs location constraint eu-central-1. - - "ap-southeast-1" - - Asia Pacific (Singapore) Region. 
- - Needs location constraint ap-southeast-1. - - "ap-southeast-2" - - Asia Pacific (Sydney) Region. - - Needs location constraint ap-southeast-2. - - "ap-northeast-1" - - Asia Pacific (Tokyo) Region. - - Needs location constraint ap-northeast-1. - - "ap-northeast-2" - - Asia Pacific (Seoul). - - Needs location constraint ap-northeast-2. - - "ap-northeast-3" - - Asia Pacific (Osaka-Local). - - Needs location constraint ap-northeast-3. - - "ap-south-1" - - Asia Pacific (Mumbai). - - Needs location constraint ap-south-1. - - "ap-east-1" - - Asia Pacific (Hong Kong) Region. - - Needs location constraint ap-east-1. - - "sa-east-1" - - South America (Sao Paulo) Region. - - Needs location constraint sa-east-1. - - "il-central-1" - - Israel (Tel Aviv) Region. - - Needs location constraint il-central-1. - - "me-south-1" - - Middle East (Bahrain) Region. - - Needs location constraint me-south-1. - - "af-south-1" - - Africa (Cape Town) Region. - - Needs location constraint af-south-1. - - "cn-north-1" - - China (Beijing) Region. - - Needs location constraint cn-north-1. - - "cn-northwest-1" - - China (Ningxia) Region. - - Needs location constraint cn-northwest-1. - - "us-gov-east-1" - - AWS GovCloud (US-East) Region. - - Needs location constraint us-gov-east-1. - - "us-gov-west-1" - - AWS GovCloud (US) Region. - - Needs location constraint us-gov-west-1. + - "us-east-1" + - The default endpoint - a good choice if you are unsure. + - US Region, Northern Virginia, or Pacific Northwest. + - Leave location constraint empty. + - Provider: AWS + - "us-east-2" + - US East (Ohio) Region. + - Needs location constraint us-east-2. + - Provider: AWS + - "us-west-1" + - US West (Northern California) Region. + - Needs location constraint us-west-1. + - Provider: AWS + - "us-west-2" + - US West (Oregon) Region. + - Needs location constraint us-west-2. + - Provider: AWS + - "ca-central-1" + - Canada (Central) Region. + - Needs location constraint ca-central-1. + - Provider: AWS + - "eu-west-1" + - EU (Ireland) Region. + - Needs location constraint EU or eu-west-1. + - Provider: AWS + - "eu-west-2" + - EU (London) Region. + - Needs location constraint eu-west-2. + - Provider: AWS + - "eu-west-3" + - EU (Paris) Region. + - Needs location constraint eu-west-3. + - Provider: AWS + - "eu-north-1" + - EU (Stockholm) Region. + - Needs location constraint eu-north-1. + - Provider: AWS + - "eu-south-1" + - EU (Milan) Region. + - Needs location constraint eu-south-1. + - Provider: AWS + - "eu-central-1" + - EU (Frankfurt) Region. + - Needs location constraint eu-central-1. + - Provider: AWS + - "ap-southeast-1" + - Asia Pacific (Singapore) Region. + - Needs location constraint ap-southeast-1. + - Provider: AWS + - "ap-southeast-2" + - Asia Pacific (Sydney) Region. + - Needs location constraint ap-southeast-2. + - Provider: AWS + - "ap-northeast-1" + - Asia Pacific (Tokyo) Region. + - Needs location constraint ap-northeast-1. + - Provider: AWS + - "ap-northeast-2" + - Asia Pacific (Seoul). + - Needs location constraint ap-northeast-2. + - Provider: AWS + - "ap-northeast-3" + - Asia Pacific (Osaka-Local). + - Needs location constraint ap-northeast-3. + - Provider: AWS + - "ap-south-1" + - Asia Pacific (Mumbai). + - Needs location constraint ap-south-1. + - Provider: AWS + - "ap-east-1" + - Asia Pacific (Hong Kong) Region. + - Needs location constraint ap-east-1. + - Provider: AWS + - "sa-east-1" + - South America (Sao Paulo) Region. + - Needs location constraint sa-east-1. 
+ - Provider: AWS + - "il-central-1" + - Israel (Tel Aviv) Region. + - Needs location constraint il-central-1. + - Provider: AWS + - "me-south-1" + - Middle East (Bahrain) Region. + - Needs location constraint me-south-1. + - Provider: AWS + - "af-south-1" + - Africa (Cape Town) Region. + - Needs location constraint af-south-1. + - Provider: AWS + - "cn-north-1" + - China (Beijing) Region. + - Needs location constraint cn-north-1. + - Provider: AWS + - "cn-northwest-1" + - China (Ningxia) Region. + - Needs location constraint cn-northwest-1. + - Provider: AWS + - "us-gov-east-1" + - AWS GovCloud (US-East) Region. + - Needs location constraint us-gov-east-1. + - Provider: AWS + - "us-gov-west-1" + - AWS GovCloud (US) Region. + - Needs location constraint us-gov-west-1. + - Provider: AWS + - "" + - Use this if unsure. + - Will use v4 signatures and an empty region. + - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other + - "other-v2-signature" + - Use this only if v4 signatures don't work. + - E.g. pre Jewel/v10 CEPH. + - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other + - "auto" + - R2 buckets are automatically distributed across Cloudflare's data centers for low latency. + - Provider: Cloudflare + - "eu-west-1" + - Europe West + - Provider: Cubbit + - "global" + - Global + - Provider: FileLu + - "us-east" + - North America (US-East) + - Provider: FileLu + - "eu-central" + - Europe (EU-Central) + - Provider: FileLu + - "ap-southeast" + - Asia Pacific (AP-Southeast) + - Provider: FileLu + - "me-central" + - Middle East (ME-Central) + - Provider: FileLu + - "hel1" + - Helsinki + - Provider: Hetzner + - "fsn1" + - Falkenstein + - Provider: Hetzner + - "nbg1" + - Nuremberg + - Provider: Hetzner + - "af-south-1" + - AF-Johannesburg + - Provider: HuaweiOBS + - "ap-southeast-2" + - AP-Bangkok + - Provider: HuaweiOBS + - "ap-southeast-3" + - AP-Singapore + - Provider: HuaweiOBS + - "cn-east-3" + - CN East-Shanghai1 + - Provider: HuaweiOBS + - "cn-east-2" + - CN East-Shanghai2 + - Provider: HuaweiOBS + - "cn-north-1" + - CN North-Beijing1 + - Provider: HuaweiOBS + - "cn-north-4" + - CN North-Beijing4 + - Provider: HuaweiOBS + - "cn-south-1" + - CN South-Guangzhou + - Provider: HuaweiOBS + - "ap-southeast-1" + - CN-Hong Kong + - Provider: HuaweiOBS + - "sa-argentina-1" + - LA-Buenos Aires1 + - Provider: HuaweiOBS + - "sa-peru-1" + - LA-Lima1 + - Provider: HuaweiOBS + - "na-mexico-1" + - LA-Mexico City1 + - Provider: HuaweiOBS + - "sa-chile-1" + - LA-Santiago2 + - Provider: HuaweiOBS + - "sa-brazil-1" + - LA-Sao Paulo1 + - Provider: HuaweiOBS + - "ru-northwest-2" + - RU-Moscow2 + - Provider: HuaweiOBS + - "de-fra" + - Frankfurt, Germany + - Provider: Intercolo + - "de" + - Frankfurt, Germany + - Provider: IONOS,OVHcloud + - "eu-central-2" + - Berlin, Germany + - Provider: IONOS + - "eu-south-2" + - Logrono, Spain + - Provider: IONOS + - "eu-west-2" + - Paris, France + - Provider: Outscale + - "us-east-2" + - New Jersey, USA + - Provider: Outscale + - "us-west-1" + - California, USA + - Provider: Outscale + - "cloudgouv-eu-west-1" + - SecNumCloud, Paris, France + - Provider: Outscale + - "ap-northeast-1" + - Tokyo, Japan + - Provider: Outscale + - "gra" + - Gravelines, France + - Provider: OVHcloud + - "rbx" + - Roubaix, France + - Provider: OVHcloud + - "sbg" + - Strasbourg, France + - Provider: OVHcloud + - "eu-west-par" + - Paris, France (3AZ) + - Provider: 
OVHcloud + - "uk" + - London, United Kingdom + - Provider: OVHcloud + - "waw" + - Warsaw, Poland + - Provider: OVHcloud + - "bhs" + - Beauharnois, Canada + - Provider: OVHcloud + - "ca-east-tor" + - Toronto, Canada + - Provider: OVHcloud + - "sgp" + - Singapore + - Provider: OVHcloud + - "ap-southeast-syd" + - Sydney, Australia + - Provider: OVHcloud + - "ap-south-mum" + - Mumbai, India + - Provider: OVHcloud + - "us-east-va" + - Vint Hill, Virginia, USA + - Provider: OVHcloud + - "us-west-or" + - Hillsboro, Oregon, USA + - Provider: OVHcloud + - "rbx-archive" + - Roubaix, France (Cold Archive) + - Provider: OVHcloud + - "us-east-1" + - US East (N. Virginia) + - Provider: Petabox,Rabata + - "eu-central-1" + - Europe (Frankfurt) + - Provider: Petabox + - "ap-southeast-1" + - Asia Pacific (Singapore) + - Provider: Petabox + - "me-south-1" + - Middle East (Bahrain) + - Provider: Petabox + - "sa-east-1" + - South America (São Paulo) + - Provider: Petabox + - "cn-east-1" + - The default endpoint - a good choice if you are unsure. + - East China Region 1. + - Needs location constraint cn-east-1. + - Provider: Qiniu + - "cn-east-2" + - East China Region 2. + - Needs location constraint cn-east-2. + - Provider: Qiniu + - "cn-north-1" + - North China Region 1. + - Needs location constraint cn-north-1. + - Provider: Qiniu + - "cn-south-1" + - South China Region 1. + - Needs location constraint cn-south-1. + - Provider: Qiniu + - "us-north-1" + - North America Region. + - Needs location constraint us-north-1. + - Provider: Qiniu + - "ap-southeast-1" + - Southeast Asia Region 1. + - Needs location constraint ap-southeast-1. + - Provider: Qiniu + - "ap-northeast-1" + - Northeast Asia Region 1. + - Needs location constraint ap-northeast-1. + - Provider: Qiniu + - "eu-west-1" + - EU (Ireland) + - Provider: Rabata + - "eu-west-2" + - EU (London) + - Provider: Rabata + - "global" + - Global CDN (All locations) Region + - Provider: RackCorp + - "au" + - Australia (All states) + - Provider: RackCorp + - "au-nsw" + - NSW (Australia) Region + - Provider: RackCorp + - "au-qld" + - QLD (Australia) Region + - Provider: RackCorp + - "au-vic" + - VIC (Australia) Region + - Provider: RackCorp + - "au-wa" + - Perth (Australia) Region + - Provider: RackCorp + - "ph" + - Manila (Philippines) Region + - Provider: RackCorp + - "th" + - Bangkok (Thailand) Region + - Provider: RackCorp + - "hk" + - HK (Hong Kong) Region + - Provider: RackCorp + - "mn" + - Ulaanbaatar (Mongolia) Region + - Provider: RackCorp + - "kg" + - Bishkek (Kyrgyzstan) Region + - Provider: RackCorp + - "id" + - Jakarta (Indonesia) Region + - Provider: RackCorp + - "jp" + - Tokyo (Japan) Region + - Provider: RackCorp + - "sg" + - SG (Singapore) Region + - Provider: RackCorp + - "de" + - Frankfurt (Germany) Region + - Provider: RackCorp + - "us" + - USA (AnyCast) Region + - Provider: RackCorp + - "us-east-1" + - New York (USA) Region + - Provider: RackCorp + - "us-west-1" + - Freemont (USA) Region + - Provider: RackCorp + - "nz" + - Auckland (New Zealand) Region + - Provider: RackCorp + - "nl-ams" + - Amsterdam, The Netherlands + - Provider: Scaleway + - "fr-par" + - Paris, France + - Provider: Scaleway + - "pl-waw" + - Warsaw, Poland + - Provider: Scaleway + - "ru-1" + - St. 
Petersburg + - Provider: Selectel,Servercore + - "gis-1" + - Moscow + - Provider: Servercore + - "ru-7" + - Moscow + - Provider: Servercore + - "uz-2" + - Tashkent, Uzbekistan + - Provider: Servercore + - "kz-1" + - Almaty, Kazakhstan + - Provider: Servercore + - "eu-001" + - Europe Region 1 + - Provider: Synology + - "eu-002" + - Europe Region 2 + - Provider: Synology + - "us-001" + - US Region 1 + - Provider: Synology + - "us-002" + - US Region 2 + - Provider: Synology + - "tw-001" + - Asia (Taiwan) + - Provider: Synology + - "us-east-1" + - Indore, Madhya Pradesh, India + - Provider: Zata #### --s3-endpoint Endpoint for S3 API. -Leave blank if using AWS to use the default endpoint for the region. +Required when using an S3 clone. Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT -- Provider: AWS +- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false +- Examples: + - "oss-accelerate.aliyuncs.com" + - Global Accelerate + - Provider: Alibaba + - "oss-accelerate-overseas.aliyuncs.com" + - Global Accelerate (outside mainland China) + - Provider: Alibaba + - "oss-cn-hangzhou.aliyuncs.com" + - East China 1 (Hangzhou) + - Provider: Alibaba + - "oss-cn-shanghai.aliyuncs.com" + - East China 2 (Shanghai) + - Provider: Alibaba + - "oss-cn-qingdao.aliyuncs.com" + - North China 1 (Qingdao) + - Provider: Alibaba + - "oss-cn-beijing.aliyuncs.com" + - North China 2 (Beijing) + - Provider: Alibaba + - "oss-cn-zhangjiakou.aliyuncs.com" + - North China 3 (Zhangjiakou) + - Provider: Alibaba + - "oss-cn-huhehaote.aliyuncs.com" + - North China 5 (Hohhot) + - Provider: Alibaba + - "oss-cn-wulanchabu.aliyuncs.com" + - North China 6 (Ulanqab) + - Provider: Alibaba + - "oss-cn-shenzhen.aliyuncs.com" + - South China 1 (Shenzhen) + - Provider: Alibaba + - "oss-cn-heyuan.aliyuncs.com" + - South China 2 (Heyuan) + - Provider: Alibaba + - "oss-cn-guangzhou.aliyuncs.com" + - South China 3 (Guangzhou) + - Provider: Alibaba + - "oss-cn-chengdu.aliyuncs.com" + - West China 1 (Chengdu) + - Provider: Alibaba + - "oss-cn-hongkong.aliyuncs.com" + - Hong Kong (Hong Kong) + - Provider: Alibaba + - "oss-us-west-1.aliyuncs.com" + - US West 1 (Silicon Valley) + - Provider: Alibaba + - "oss-us-east-1.aliyuncs.com" + - US East 1 (Virginia) + - Provider: Alibaba + - "oss-ap-southeast-1.aliyuncs.com" + - Southeast Asia Southeast 1 (Singapore) + - Provider: Alibaba + - "oss-ap-southeast-2.aliyuncs.com" + - Asia Pacific Southeast 2 (Sydney) + - Provider: Alibaba + - "oss-ap-southeast-3.aliyuncs.com" + - Southeast Asia Southeast 3 (Kuala Lumpur) + - Provider: Alibaba + - "oss-ap-southeast-5.aliyuncs.com" + - Asia Pacific Southeast 5 (Jakarta) + - Provider: Alibaba + - "oss-ap-northeast-1.aliyuncs.com" + - Asia Pacific Northeast 1 (Japan) + - Provider: Alibaba + - "oss-ap-south-1.aliyuncs.com" + - Asia Pacific South 1 (Mumbai) + - Provider: Alibaba + - "oss-eu-central-1.aliyuncs.com" + - Central Europe 1 (Frankfurt) + - Provider: Alibaba + - "oss-eu-west-1.aliyuncs.com" + - West Europe (London) + - Provider: Alibaba + - "oss-me-east-1.aliyuncs.com" + - Middle East 1 (Dubai) + - Provider: Alibaba + - "s3.ir-thr-at1.arvanstorage.ir" + - The default endpoint - a good choice if 
you are unsure. + - Tehran Iran (Simin) + - Provider: ArvanCloud + - "s3.ir-tbz-sh1.arvanstorage.ir" + - Tabriz Iran (Shahriar) + - Provider: ArvanCloud + - "eos-wuxi-1.cmecloud.cn" + - The default endpoint - a good choice if you are unsure. + - East China (Suzhou) + - Provider: ChinaMobile + - "eos-jinan-1.cmecloud.cn" + - East China (Jinan) + - Provider: ChinaMobile + - "eos-ningbo-1.cmecloud.cn" + - East China (Hangzhou) + - Provider: ChinaMobile + - "eos-shanghai-1.cmecloud.cn" + - East China (Shanghai-1) + - Provider: ChinaMobile + - "eos-zhengzhou-1.cmecloud.cn" + - Central China (Zhengzhou) + - Provider: ChinaMobile + - "eos-hunan-1.cmecloud.cn" + - Central China (Changsha-1) + - Provider: ChinaMobile + - "eos-zhuzhou-1.cmecloud.cn" + - Central China (Changsha-2) + - Provider: ChinaMobile + - "eos-guangzhou-1.cmecloud.cn" + - South China (Guangzhou-2) + - Provider: ChinaMobile + - "eos-dongguan-1.cmecloud.cn" + - South China (Guangzhou-3) + - Provider: ChinaMobile + - "eos-beijing-1.cmecloud.cn" + - North China (Beijing-1) + - Provider: ChinaMobile + - "eos-beijing-2.cmecloud.cn" + - North China (Beijing-2) + - Provider: ChinaMobile + - "eos-beijing-4.cmecloud.cn" + - North China (Beijing-3) + - Provider: ChinaMobile + - "eos-huhehaote-1.cmecloud.cn" + - North China (Huhehaote) + - Provider: ChinaMobile + - "eos-chengdu-1.cmecloud.cn" + - Southwest China (Chengdu) + - Provider: ChinaMobile + - "eos-chongqing-1.cmecloud.cn" + - Southwest China (Chongqing) + - Provider: ChinaMobile + - "eos-guiyang-1.cmecloud.cn" + - Southwest China (Guiyang) + - Provider: ChinaMobile + - "eos-xian-1.cmecloud.cn" + - Nouthwest China (Xian) + - Provider: ChinaMobile + - "eos-yunnan.cmecloud.cn" + - Yunnan China (Kunming) + - Provider: ChinaMobile + - "eos-yunnan-2.cmecloud.cn" + - Yunnan China (Kunming-2) + - Provider: ChinaMobile + - "eos-tianjin-1.cmecloud.cn" + - Tianjin China (Tianjin) + - Provider: ChinaMobile + - "eos-jilin-1.cmecloud.cn" + - Jilin China (Changchun) + - Provider: ChinaMobile + - "eos-hubei-1.cmecloud.cn" + - Hubei China (Xiangyan) + - Provider: ChinaMobile + - "eos-jiangxi-1.cmecloud.cn" + - Jiangxi China (Nanchang) + - Provider: ChinaMobile + - "eos-gansu-1.cmecloud.cn" + - Gansu China (Lanzhou) + - Provider: ChinaMobile + - "eos-shanxi-1.cmecloud.cn" + - Shanxi China (Taiyuan) + - Provider: ChinaMobile + - "eos-liaoning-1.cmecloud.cn" + - Liaoning China (Shenyang) + - Provider: ChinaMobile + - "eos-hebei-1.cmecloud.cn" + - Hebei China (Shijiazhuang) + - Provider: ChinaMobile + - "eos-fujian-1.cmecloud.cn" + - Fujian China (Xiamen) + - Provider: ChinaMobile + - "eos-guangxi-1.cmecloud.cn" + - Guangxi China (Nanning) + - Provider: ChinaMobile + - "eos-anhui-1.cmecloud.cn" + - Anhui China (Huainan) + - Provider: ChinaMobile + - "s3.cubbit.eu" + - Cubbit DS3 Object Storage endpoint + - Provider: Cubbit + - "syd1.digitaloceanspaces.com" + - DigitalOcean Spaces Sydney 1 + - Provider: DigitalOcean + - "sfo3.digitaloceanspaces.com" + - DigitalOcean Spaces San Francisco 3 + - Provider: DigitalOcean + - "sfo2.digitaloceanspaces.com" + - DigitalOcean Spaces San Francisco 2 + - Provider: DigitalOcean + - "fra1.digitaloceanspaces.com" + - DigitalOcean Spaces Frankfurt 1 + - Provider: DigitalOcean + - "nyc3.digitaloceanspaces.com" + - DigitalOcean Spaces New York 3 + - Provider: DigitalOcean + - "ams3.digitaloceanspaces.com" + - DigitalOcean Spaces Amsterdam 3 + - Provider: DigitalOcean + - "sgp1.digitaloceanspaces.com" + - DigitalOcean Spaces Singapore 1 + - Provider: DigitalOcean + - 
"lon1.digitaloceanspaces.com" + - DigitalOcean Spaces London 1 + - Provider: DigitalOcean + - "tor1.digitaloceanspaces.com" + - DigitalOcean Spaces Toronto 1 + - Provider: DigitalOcean + - "blr1.digitaloceanspaces.com" + - DigitalOcean Spaces Bangalore 1 + - Provider: DigitalOcean + - "objects-us-east-1.dream.io" + - Dream Objects endpoint + - Provider: Dreamhost + - "s5lu.com" + - Global FileLu S5 endpoint + - Provider: FileLu + - "us.s5lu.com" + - North America (US-East) region endpoint + - Provider: FileLu + - "eu.s5lu.com" + - Europe (EU-Central) region endpoint + - Provider: FileLu + - "ap.s5lu.com" + - Asia Pacific (AP-Southeast) region endpoint + - Provider: FileLu + - "me.s5lu.com" + - Middle East (ME-Central) region endpoint + - Provider: FileLu + - "https://storage.googleapis.com" + - Google Cloud Storage endpoint + - Provider: GCS + - "hel1.your-objectstorage.com" + - Helsinki + - Provider: Hetzner + - "fsn1.your-objectstorage.com" + - Falkenstein + - Provider: Hetzner + - "nbg1.your-objectstorage.com" + - Nuremberg + - Provider: Hetzner + - "obs.af-south-1.myhuaweicloud.com" + - AF-Johannesburg + - Provider: HuaweiOBS + - "obs.ap-southeast-2.myhuaweicloud.com" + - AP-Bangkok + - Provider: HuaweiOBS + - "obs.ap-southeast-3.myhuaweicloud.com" + - AP-Singapore + - Provider: HuaweiOBS + - "obs.cn-east-3.myhuaweicloud.com" + - CN East-Shanghai1 + - Provider: HuaweiOBS + - "obs.cn-east-2.myhuaweicloud.com" + - CN East-Shanghai2 + - Provider: HuaweiOBS + - "obs.cn-north-1.myhuaweicloud.com" + - CN North-Beijing1 + - Provider: HuaweiOBS + - "obs.cn-north-4.myhuaweicloud.com" + - CN North-Beijing4 + - Provider: HuaweiOBS + - "obs.cn-south-1.myhuaweicloud.com" + - CN South-Guangzhou + - Provider: HuaweiOBS + - "obs.ap-southeast-1.myhuaweicloud.com" + - CN-Hong Kong + - Provider: HuaweiOBS + - "obs.sa-argentina-1.myhuaweicloud.com" + - LA-Buenos Aires1 + - Provider: HuaweiOBS + - "obs.sa-peru-1.myhuaweicloud.com" + - LA-Lima1 + - Provider: HuaweiOBS + - "obs.na-mexico-1.myhuaweicloud.com" + - LA-Mexico City1 + - Provider: HuaweiOBS + - "obs.sa-chile-1.myhuaweicloud.com" + - LA-Santiago2 + - Provider: HuaweiOBS + - "obs.sa-brazil-1.myhuaweicloud.com" + - LA-Sao Paulo1 + - Provider: HuaweiOBS + - "obs.ru-northwest-2.myhuaweicloud.com" + - RU-Moscow2 + - Provider: HuaweiOBS + - "s3.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Endpoint + - Provider: IBMCOS + - "s3.dal.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Dallas Endpoint + - Provider: IBMCOS + - "s3.wdc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Washington DC Endpoint + - Provider: IBMCOS + - "s3.sjc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region San Jose Endpoint + - Provider: IBMCOS + - "s3.private.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Private Endpoint + - Provider: IBMCOS + - "s3.private.dal.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Dallas Private Endpoint + - Provider: IBMCOS + - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Washington DC Private Endpoint + - Provider: IBMCOS + - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region San Jose Private Endpoint + - Provider: IBMCOS + - "s3.us-east.cloud-object-storage.appdomain.cloud" + - US Region East Endpoint + - Provider: IBMCOS + - "s3.private.us-east.cloud-object-storage.appdomain.cloud" + - US Region East Private Endpoint + - Provider: IBMCOS + - "s3.us-south.cloud-object-storage.appdomain.cloud" + - US Region South 
Endpoint + - Provider: IBMCOS + - "s3.private.us-south.cloud-object-storage.appdomain.cloud" + - US Region South Private Endpoint + - Provider: IBMCOS + - "s3.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Endpoint + - Provider: IBMCOS + - "s3.fra.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Frankfurt Endpoint + - Provider: IBMCOS + - "s3.mil.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Milan Endpoint + - Provider: IBMCOS + - "s3.ams.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Amsterdam Endpoint + - Provider: IBMCOS + - "s3.private.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Private Endpoint + - Provider: IBMCOS + - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Frankfurt Private Endpoint + - Provider: IBMCOS + - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Milan Private Endpoint + - Provider: IBMCOS + - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Amsterdam Private Endpoint + - Provider: IBMCOS + - "s3.eu-gb.cloud-object-storage.appdomain.cloud" + - Great Britain Endpoint + - Provider: IBMCOS + - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud" + - Great Britain Private Endpoint + - Provider: IBMCOS + - "s3.eu-de.cloud-object-storage.appdomain.cloud" + - EU Region DE Endpoint + - Provider: IBMCOS + - "s3.private.eu-de.cloud-object-storage.appdomain.cloud" + - EU Region DE Private Endpoint + - Provider: IBMCOS + - "s3.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Endpoint + - Provider: IBMCOS + - "s3.tok.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Tokyo Endpoint + - Provider: IBMCOS + - "s3.hkg.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Hong Kong Endpoint + - Provider: IBMCOS + - "s3.seo.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Seoul Endpoint + - Provider: IBMCOS + - "s3.private.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Private Endpoint + - Provider: IBMCOS + - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Tokyo Private Endpoint + - Provider: IBMCOS + - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Hong Kong Private Endpoint + - Provider: IBMCOS + - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Seoul Private Endpoint + - Provider: IBMCOS + - "s3.jp-tok.cloud-object-storage.appdomain.cloud" + - APAC Region Japan Endpoint + - Provider: IBMCOS + - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud" + - APAC Region Japan Private Endpoint + - Provider: IBMCOS + - "s3.au-syd.cloud-object-storage.appdomain.cloud" + - APAC Region Australia Endpoint + - Provider: IBMCOS + - "s3.private.au-syd.cloud-object-storage.appdomain.cloud" + - APAC Region Australia Private Endpoint + - Provider: IBMCOS + - "s3.ams03.cloud-object-storage.appdomain.cloud" + - Amsterdam Single Site Endpoint + - Provider: IBMCOS + - "s3.private.ams03.cloud-object-storage.appdomain.cloud" + - Amsterdam Single Site Private Endpoint + - Provider: IBMCOS + - "s3.che01.cloud-object-storage.appdomain.cloud" + - Chennai Single Site Endpoint + - Provider: IBMCOS + - "s3.private.che01.cloud-object-storage.appdomain.cloud" + - Chennai Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mel01.cloud-object-storage.appdomain.cloud" + - Melbourne Single Site Endpoint + - Provider: IBMCOS + - 
"s3.private.mel01.cloud-object-storage.appdomain.cloud" + - Melbourne Single Site Private Endpoint + - Provider: IBMCOS + - "s3.osl01.cloud-object-storage.appdomain.cloud" + - Oslo Single Site Endpoint + - Provider: IBMCOS + - "s3.private.osl01.cloud-object-storage.appdomain.cloud" + - Oslo Single Site Private Endpoint + - Provider: IBMCOS + - "s3.tor01.cloud-object-storage.appdomain.cloud" + - Toronto Single Site Endpoint + - Provider: IBMCOS + - "s3.private.tor01.cloud-object-storage.appdomain.cloud" + - Toronto Single Site Private Endpoint + - Provider: IBMCOS + - "s3.seo01.cloud-object-storage.appdomain.cloud" + - Seoul Single Site Endpoint + - Provider: IBMCOS + - "s3.private.seo01.cloud-object-storage.appdomain.cloud" + - Seoul Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mon01.cloud-object-storage.appdomain.cloud" + - Montreal Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mon01.cloud-object-storage.appdomain.cloud" + - Montreal Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mex01.cloud-object-storage.appdomain.cloud" + - Mexico Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mex01.cloud-object-storage.appdomain.cloud" + - Mexico Single Site Private Endpoint + - Provider: IBMCOS + - "s3.sjc04.cloud-object-storage.appdomain.cloud" + - San Jose Single Site Endpoint + - Provider: IBMCOS + - "s3.private.sjc04.cloud-object-storage.appdomain.cloud" + - San Jose Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mil01.cloud-object-storage.appdomain.cloud" + - Milan Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mil01.cloud-object-storage.appdomain.cloud" + - Milan Single Site Private Endpoint + - Provider: IBMCOS + - "s3.hkg02.cloud-object-storage.appdomain.cloud" + - Hong Kong Single Site Endpoint + - Provider: IBMCOS + - "s3.private.hkg02.cloud-object-storage.appdomain.cloud" + - Hong Kong Single Site Private Endpoint + - Provider: IBMCOS + - "s3.par01.cloud-object-storage.appdomain.cloud" + - Paris Single Site Endpoint + - Provider: IBMCOS + - "s3.private.par01.cloud-object-storage.appdomain.cloud" + - Paris Single Site Private Endpoint + - Provider: IBMCOS + - "s3.sng01.cloud-object-storage.appdomain.cloud" + - Singapore Single Site Endpoint + - Provider: IBMCOS + - "s3.private.sng01.cloud-object-storage.appdomain.cloud" + - Singapore Single Site Private Endpoint + - Provider: IBMCOS + - "de-fra.i3storage.com" + - Frankfurt, Germany + - Provider: Intercolo + - "s3-eu-central-1.ionoscloud.com" + - Frankfurt, Germany + - Provider: IONOS + - "s3-eu-central-2.ionoscloud.com" + - Berlin, Germany + - Provider: IONOS + - "s3-eu-south-2.ionoscloud.com" + - Logrono, Spain + - Provider: IONOS + - "s3.leviia.com" + - The default endpoint + - Leviia + - Provider: Leviia + - "storage.iran.liara.space" + - The default endpoint + - Iran + - Provider: Liara + - "nl-ams-1.linodeobjects.com" + - Amsterdam (Netherlands), nl-ams-1 + - Provider: Linode + - "us-southeast-1.linodeobjects.com" + - Atlanta, GA (USA), us-southeast-1 + - Provider: Linode + - "in-maa-1.linodeobjects.com" + - Chennai (India), in-maa-1 + - Provider: Linode + - "us-ord-1.linodeobjects.com" + - Chicago, IL (USA), us-ord-1 + - Provider: Linode + - "eu-central-1.linodeobjects.com" + - Frankfurt (Germany), eu-central-1 + - Provider: Linode + - "id-cgk-1.linodeobjects.com" + - Jakarta (Indonesia), id-cgk-1 + - Provider: Linode + - "gb-lon-1.linodeobjects.com" + - London 2 (Great Britain), gb-lon-1 + - Provider: Linode + - "us-lax-1.linodeobjects.com" + - Los Angeles, CA 
(USA), us-lax-1 + - Provider: Linode + - "es-mad-1.linodeobjects.com" + - Madrid (Spain), es-mad-1 + - Provider: Linode + - "au-mel-1.linodeobjects.com" + - Melbourne (Australia), au-mel-1 + - Provider: Linode + - "us-mia-1.linodeobjects.com" + - Miami, FL (USA), us-mia-1 + - Provider: Linode + - "it-mil-1.linodeobjects.com" + - Milan (Italy), it-mil-1 + - Provider: Linode + - "us-east-1.linodeobjects.com" + - Newark, NJ (USA), us-east-1 + - Provider: Linode + - "jp-osa-1.linodeobjects.com" + - Osaka (Japan), jp-osa-1 + - Provider: Linode + - "fr-par-1.linodeobjects.com" + - Paris (France), fr-par-1 + - Provider: Linode + - "br-gru-1.linodeobjects.com" + - São Paulo (Brazil), br-gru-1 + - Provider: Linode + - "us-sea-1.linodeobjects.com" + - Seattle, WA (USA), us-sea-1 + - Provider: Linode + - "ap-south-1.linodeobjects.com" + - Singapore, ap-south-1 + - Provider: Linode + - "sg-sin-1.linodeobjects.com" + - Singapore 2, sg-sin-1 + - Provider: Linode + - "se-sto-1.linodeobjects.com" + - Stockholm (Sweden), se-sto-1 + - Provider: Linode + - "us-iad-1.linodeobjects.com" + - Washington, DC, (USA), us-iad-1 + - Provider: Linode + - "s3.us-west-1.{account_name}.lyve.seagate.com" + - US West 1 - California + - Provider: LyveCloud + - "s3.eu-west-1.{account_name}.lyve.seagate.com" + - EU West 1 - Ireland + - Provider: LyveCloud + - "br-se1.magaluobjects.com" + - São Paulo, SP (BR), br-se1 + - Provider: Magalu + - "br-ne1.magaluobjects.com" + - Fortaleza, CE (BR), br-ne1 + - Provider: Magalu + - "s3.eu-central-1.s4.mega.io" + - Mega S4 eu-central-1 (Amsterdam) + - Provider: Mega + - "s3.eu-central-2.s4.mega.io" + - Mega S4 eu-central-2 (Bettembourg) + - Provider: Mega + - "s3.ca-central-1.s4.mega.io" + - Mega S4 ca-central-1 (Montreal) + - Provider: Mega + - "s3.ca-west-1.s4.mega.io" + - Mega S4 ca-west-1 (Vancouver) + - Provider: Mega + - "oos.eu-west-2.outscale.com" + - Outscale EU West 2 (Paris) + - Provider: Outscale + - "oos.us-east-2.outscale.com" + - Outscale US east 2 (New Jersey) + - Provider: Outscale + - "oos.us-west-1.outscale.com" + - Outscale EU West 1 (California) + - Provider: Outscale + - "oos.cloudgouv-eu-west-1.outscale.com" + - Outscale SecNumCloud (Paris) + - Provider: Outscale + - "oos.ap-northeast-1.outscale.com" + - Outscale AP Northeast 1 (Japan) + - Provider: Outscale + - "s3.gra.io.cloud.ovh.net" + - OVHcloud Gravelines, France + - Provider: OVHcloud + - "s3.rbx.io.cloud.ovh.net" + - OVHcloud Roubaix, France + - Provider: OVHcloud + - "s3.sbg.io.cloud.ovh.net" + - OVHcloud Strasbourg, France + - Provider: OVHcloud + - "s3.eu-west-par.io.cloud.ovh.net" + - OVHcloud Paris, France (3AZ) + - Provider: OVHcloud + - "s3.de.io.cloud.ovh.net" + - OVHcloud Frankfurt, Germany + - Provider: OVHcloud + - "s3.uk.io.cloud.ovh.net" + - OVHcloud London, United Kingdom + - Provider: OVHcloud + - "s3.waw.io.cloud.ovh.net" + - OVHcloud Warsaw, Poland + - Provider: OVHcloud + - "s3.bhs.io.cloud.ovh.net" + - OVHcloud Beauharnois, Canada + - Provider: OVHcloud + - "s3.ca-east-tor.io.cloud.ovh.net" + - OVHcloud Toronto, Canada + - Provider: OVHcloud + - "s3.sgp.io.cloud.ovh.net" + - OVHcloud Singapore + - Provider: OVHcloud + - "s3.ap-southeast-syd.io.cloud.ovh.net" + - OVHcloud Sydney, Australia + - Provider: OVHcloud + - "s3.ap-south-mum.io.cloud.ovh.net" + - OVHcloud Mumbai, India + - Provider: OVHcloud + - "s3.us-east-va.io.cloud.ovh.us" + - OVHcloud Vint Hill, Virginia, USA + - Provider: OVHcloud + - "s3.us-west-or.io.cloud.ovh.us" + - OVHcloud Hillsboro, Oregon, USA + - Provider: OVHcloud 
+ - "s3.rbx-archive.io.cloud.ovh.net" + - OVHcloud Roubaix, France (Cold Archive) + - Provider: OVHcloud + - "s3.petabox.io" + - US East (N. Virginia) + - Provider: Petabox + - "s3.us-east-1.petabox.io" + - US East (N. Virginia) + - Provider: Petabox + - "s3.eu-central-1.petabox.io" + - Europe (Frankfurt) + - Provider: Petabox + - "s3.ap-southeast-1.petabox.io" + - Asia Pacific (Singapore) + - Provider: Petabox + - "s3.me-south-1.petabox.io" + - Middle East (Bahrain) + - Provider: Petabox + - "s3.sa-east-1.petabox.io" + - South America (São Paulo) + - Provider: Petabox + - "s3-cn-east-1.qiniucs.com" + - East China Endpoint 1 + - Provider: Qiniu + - "s3-cn-east-2.qiniucs.com" + - East China Endpoint 2 + - Provider: Qiniu + - "s3-cn-north-1.qiniucs.com" + - North China Endpoint 1 + - Provider: Qiniu + - "s3-cn-south-1.qiniucs.com" + - South China Endpoint 1 + - Provider: Qiniu + - "s3-us-north-1.qiniucs.com" + - North America Endpoint 1 + - Provider: Qiniu + - "s3-ap-southeast-1.qiniucs.com" + - Southeast Asia Endpoint 1 + - Provider: Qiniu + - "s3-ap-northeast-1.qiniucs.com" + - Northeast Asia Endpoint 1 + - Provider: Qiniu + - "s3.us-east-1.rabata.io" + - US East (N. Virginia) + - Provider: Rabata + - "s3.eu-west-1.rabata.io" + - EU West (Ireland) + - Provider: Rabata + - "s3.eu-west-2.rabata.io" + - EU West (London) + - Provider: Rabata + - "s3.rackcorp.com" + - Global (AnyCast) Endpoint + - Provider: RackCorp + - "au.s3.rackcorp.com" + - Australia (Anycast) Endpoint + - Provider: RackCorp + - "au-nsw.s3.rackcorp.com" + - Sydney (Australia) Endpoint + - Provider: RackCorp + - "au-qld.s3.rackcorp.com" + - Brisbane (Australia) Endpoint + - Provider: RackCorp + - "au-vic.s3.rackcorp.com" + - Melbourne (Australia) Endpoint + - Provider: RackCorp + - "au-wa.s3.rackcorp.com" + - Perth (Australia) Endpoint + - Provider: RackCorp + - "ph.s3.rackcorp.com" + - Manila (Philippines) Endpoint + - Provider: RackCorp + - "th.s3.rackcorp.com" + - Bangkok (Thailand) Endpoint + - Provider: RackCorp + - "hk.s3.rackcorp.com" + - HK (Hong Kong) Endpoint + - Provider: RackCorp + - "mn.s3.rackcorp.com" + - Ulaanbaatar (Mongolia) Endpoint + - Provider: RackCorp + - "kg.s3.rackcorp.com" + - Bishkek (Kyrgyzstan) Endpoint + - Provider: RackCorp + - "id.s3.rackcorp.com" + - Jakarta (Indonesia) Endpoint + - Provider: RackCorp + - "jp.s3.rackcorp.com" + - Tokyo (Japan) Endpoint + - Provider: RackCorp + - "sg.s3.rackcorp.com" + - SG (Singapore) Endpoint + - Provider: RackCorp + - "de.s3.rackcorp.com" + - Frankfurt (Germany) Endpoint + - Provider: RackCorp + - "us.s3.rackcorp.com" + - USA (AnyCast) Endpoint + - Provider: RackCorp + - "us-east-1.s3.rackcorp.com" + - New York (USA) Endpoint + - Provider: RackCorp + - "us-west-1.s3.rackcorp.com" + - Freemont (USA) Endpoint + - Provider: RackCorp + - "nz.s3.rackcorp.com" + - Auckland (New Zealand) Endpoint + - Provider: RackCorp + - "s3.nl-ams.scw.cloud" + - Amsterdam Endpoint + - Provider: Scaleway + - "s3.fr-par.scw.cloud" + - Paris Endpoint + - Provider: Scaleway + - "s3.pl-waw.scw.cloud" + - Warsaw Endpoint + - Provider: Scaleway + - "localhost:8333" + - SeaweedFS S3 localhost + - Provider: SeaweedFS + - "s3.ru-1.storage.selcloud.ru" + - Saint Petersburg + - Provider: Selectel,Servercore + - "s3.gis-1.storage.selcloud.ru" + - Moscow + - Provider: Servercore + - "s3.ru-7.storage.selcloud.ru" + - Moscow + - Provider: Servercore + - "s3.uz-2.srvstorage.uz" + - Tashkent, Uzbekistan + - Provider: Servercore + - "s3.kz-1.srvstorage.kz" + - Almaty, Kazakhstan + - Provider: 
Servercore + - "s3.us-east-2.stackpathstorage.com" + - US East Endpoint + - Provider: StackPath + - "s3.us-west-1.stackpathstorage.com" + - US West Endpoint + - Provider: StackPath + - "s3.eu-central-1.stackpathstorage.com" + - EU Endpoint + - Provider: StackPath + - "gateway.storjshare.io" + - Global Hosted Gateway + - Provider: Storj + - "eu-001.s3.synologyc2.net" + - EU Endpoint 1 + - Provider: Synology + - "eu-002.s3.synologyc2.net" + - EU Endpoint 2 + - Provider: Synology + - "us-001.s3.synologyc2.net" + - US Endpoint 1 + - Provider: Synology + - "us-002.s3.synologyc2.net" + - US Endpoint 2 + - Provider: Synology + - "tw-001.s3.synologyc2.net" + - TW Endpoint 1 + - Provider: Synology + - "cos.ap-beijing.myqcloud.com" + - Beijing Region + - Provider: TencentCOS + - "cos.ap-nanjing.myqcloud.com" + - Nanjing Region + - Provider: TencentCOS + - "cos.ap-shanghai.myqcloud.com" + - Shanghai Region + - Provider: TencentCOS + - "cos.ap-guangzhou.myqcloud.com" + - Guangzhou Region + - Provider: TencentCOS + - "cos.ap-chengdu.myqcloud.com" + - Chengdu Region + - Provider: TencentCOS + - "cos.ap-chongqing.myqcloud.com" + - Chongqing Region + - Provider: TencentCOS + - "cos.ap-hongkong.myqcloud.com" + - Hong Kong (China) Region + - Provider: TencentCOS + - "cos.ap-singapore.myqcloud.com" + - Singapore Region + - Provider: TencentCOS + - "cos.ap-mumbai.myqcloud.com" + - Mumbai Region + - Provider: TencentCOS + - "cos.ap-seoul.myqcloud.com" + - Seoul Region + - Provider: TencentCOS + - "cos.ap-bangkok.myqcloud.com" + - Bangkok Region + - Provider: TencentCOS + - "cos.ap-tokyo.myqcloud.com" + - Tokyo Region + - Provider: TencentCOS + - "cos.na-siliconvalley.myqcloud.com" + - Silicon Valley Region + - Provider: TencentCOS + - "cos.na-ashburn.myqcloud.com" + - Virginia Region + - Provider: TencentCOS + - "cos.na-toronto.myqcloud.com" + - Toronto Region + - Provider: TencentCOS + - "cos.eu-frankfurt.myqcloud.com" + - Frankfurt Region + - Provider: TencentCOS + - "cos.eu-moscow.myqcloud.com" + - Moscow Region + - Provider: TencentCOS + - "cos.accelerate.myqcloud.com" + - Use Tencent COS Accelerate Endpoint + - Provider: TencentCOS + - "s3.wasabisys.com" + - Wasabi US East 1 (N. Virginia) + - Provider: Wasabi + - "s3.us-east-2.wasabisys.com" + - Wasabi US East 2 (N. 
Virginia) + - Provider: Wasabi + - "s3.us-central-1.wasabisys.com" + - Wasabi US Central 1 (Texas) + - Provider: Wasabi + - "s3.us-west-1.wasabisys.com" + - Wasabi US West 1 (Oregon) + - Provider: Wasabi + - "s3.ca-central-1.wasabisys.com" + - Wasabi CA Central 1 (Toronto) + - Provider: Wasabi + - "s3.eu-central-1.wasabisys.com" + - Wasabi EU Central 1 (Amsterdam) + - Provider: Wasabi + - "s3.eu-central-2.wasabisys.com" + - Wasabi EU Central 2 (Frankfurt) + - Provider: Wasabi + - "s3.eu-west-1.wasabisys.com" + - Wasabi EU West 1 (London) + - Provider: Wasabi + - "s3.eu-west-2.wasabisys.com" + - Wasabi EU West 2 (Paris) + - Provider: Wasabi + - "s3.eu-south-1.wasabisys.com" + - Wasabi EU South 1 (Milan) + - Provider: Wasabi + - "s3.ap-northeast-1.wasabisys.com" + - Wasabi AP Northeast 1 (Tokyo) endpoint + - Provider: Wasabi + - "s3.ap-northeast-2.wasabisys.com" + - Wasabi AP Northeast 2 (Osaka) endpoint + - Provider: Wasabi + - "s3.ap-southeast-1.wasabisys.com" + - Wasabi AP Southeast 1 (Singapore) + - Provider: Wasabi + - "s3.ap-southeast-2.wasabisys.com" + - Wasabi AP Southeast 2 (Sydney) + - Provider: Wasabi + - "idr01.zata.ai" + - South Asia Endpoint + - Provider: Zata #### --s3-location-constraint Location constraint - must be set to match the Region. -Used when creating buckets only. +Leave blank if not sure. Used when creating buckets only. Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT -- Provider: AWS +- Provider: AWS,ArvanCloud,Ceph,ChinaMobile,DigitalOcean,Dreamhost,Exaba,GCS,Hetzner,IBMCOS,LyveCloud,Minio,Netease,Qiniu,Rabata,RackCorp,SeaweedFS,Synology,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "" - - Empty for US Region, Northern Virginia, or Pacific Northwest - - "us-east-2" - - US East (Ohio) Region - - "us-west-1" - - US West (Northern California) Region - - "us-west-2" - - US West (Oregon) Region - - "ca-central-1" - - Canada (Central) Region - - "eu-west-1" - - EU (Ireland) Region - - "eu-west-2" - - EU (London) Region - - "eu-west-3" - - EU (Paris) Region - - "eu-north-1" - - EU (Stockholm) Region - - "eu-south-1" - - EU (Milan) Region - - "EU" - - EU Region - - "ap-southeast-1" - - Asia Pacific (Singapore) Region - - "ap-southeast-2" - - Asia Pacific (Sydney) Region - - "ap-northeast-1" - - Asia Pacific (Tokyo) Region - - "ap-northeast-2" - - Asia Pacific (Seoul) Region - - "ap-northeast-3" - - Asia Pacific (Osaka-Local) Region - - "ap-south-1" - - Asia Pacific (Mumbai) Region - - "ap-east-1" - - Asia Pacific (Hong Kong) Region - - "sa-east-1" - - South America (Sao Paulo) Region - - "il-central-1" - - Israel (Tel Aviv) Region - - "me-south-1" - - Middle East (Bahrain) Region - - "af-south-1" - - Africa (Cape Town) Region - - "cn-north-1" - - China (Beijing) Region - - "cn-northwest-1" - - China (Ningxia) Region - - "us-gov-east-1" - - AWS GovCloud (US-East) Region - - "us-gov-west-1" - - AWS GovCloud (US) Region + - "" + - Empty for US Region, Northern Virginia, or Pacific Northwest + - Provider: AWS + - "us-east-2" + - US East (Ohio) Region + - Provider: AWS + - "us-west-1" + - US West (Northern California) Region + - Provider: AWS + - "us-west-2" + - US West (Oregon) Region + - Provider: AWS + - "ca-central-1" + - Canada (Central) Region + - Provider: AWS + - "eu-west-1" + - EU (Ireland) Region + - Provider: AWS + - "eu-west-2" + - EU (London) Region + - Provider: AWS + - "eu-west-3" + - EU (Paris) Region + - Provider: AWS + - "eu-north-1" + - EU (Stockholm) Region + - Provider: AWS + - "eu-south-1" + - EU 
(Milan) Region + - Provider: AWS + - "EU" + - EU Region + - Provider: AWS + - "ap-southeast-1" + - Asia Pacific (Singapore) Region + - Provider: AWS + - "ap-southeast-2" + - Asia Pacific (Sydney) Region + - Provider: AWS + - "ap-northeast-1" + - Asia Pacific (Tokyo) Region + - Provider: AWS + - "ap-northeast-2" + - Asia Pacific (Seoul) Region + - Provider: AWS + - "ap-northeast-3" + - Asia Pacific (Osaka-Local) Region + - Provider: AWS + - "ap-south-1" + - Asia Pacific (Mumbai) Region + - Provider: AWS + - "ap-east-1" + - Asia Pacific (Hong Kong) Region + - Provider: AWS + - "sa-east-1" + - South America (Sao Paulo) Region + - Provider: AWS + - "il-central-1" + - Israel (Tel Aviv) Region + - Provider: AWS + - "me-south-1" + - Middle East (Bahrain) Region + - Provider: AWS + - "af-south-1" + - Africa (Cape Town) Region + - Provider: AWS + - "cn-north-1" + - China (Beijing) Region + - Provider: AWS + - "cn-northwest-1" + - China (Ningxia) Region + - Provider: AWS + - "us-gov-east-1" + - AWS GovCloud (US-East) Region + - Provider: AWS + - "us-gov-west-1" + - AWS GovCloud (US) Region + - Provider: AWS + - "ir-thr-at1" + - Tehran Iran (Simin) + - Provider: ArvanCloud + - "ir-tbz-sh1" + - Tabriz Iran (Shahriar) + - Provider: ArvanCloud + - "wuxi1" + - East China (Suzhou) + - Provider: ChinaMobile + - "jinan1" + - East China (Jinan) + - Provider: ChinaMobile + - "ningbo1" + - East China (Hangzhou) + - Provider: ChinaMobile + - "shanghai1" + - East China (Shanghai-1) + - Provider: ChinaMobile + - "zhengzhou1" + - Central China (Zhengzhou) + - Provider: ChinaMobile + - "hunan1" + - Central China (Changsha-1) + - Provider: ChinaMobile + - "zhuzhou1" + - Central China (Changsha-2) + - Provider: ChinaMobile + - "guangzhou1" + - South China (Guangzhou-2) + - Provider: ChinaMobile + - "dongguan1" + - South China (Guangzhou-3) + - Provider: ChinaMobile + - "beijing1" + - North China (Beijing-1) + - Provider: ChinaMobile + - "beijing2" + - North China (Beijing-2) + - Provider: ChinaMobile + - "beijing4" + - North China (Beijing-3) + - Provider: ChinaMobile + - "huhehaote1" + - North China (Huhehaote) + - Provider: ChinaMobile + - "chengdu1" + - Southwest China (Chengdu) + - Provider: ChinaMobile + - "chongqing1" + - Southwest China (Chongqing) + - Provider: ChinaMobile + - "guiyang1" + - Southwest China (Guiyang) + - Provider: ChinaMobile + - "xian1" + - Northwest China (Xian) + - Provider: ChinaMobile + - "yunnan" + - Yunnan China (Kunming) + - Provider: ChinaMobile + - "yunnan2" + - Yunnan China (Kunming-2) + - Provider: ChinaMobile + - "tianjin1" + - Tianjin China (Tianjin) + - Provider: ChinaMobile + - "jilin1" + - Jilin China (Changchun) + - Provider: ChinaMobile + - "hubei1" + - Hubei China (Xiangyan) + - Provider: ChinaMobile + - "jiangxi1" + - Jiangxi China (Nanchang) + - Provider: ChinaMobile + - "gansu1" + - Gansu China (Lanzhou) + - Provider: ChinaMobile + - "shanxi1" + - Shanxi China (Taiyuan) + - Provider: ChinaMobile + - "liaoning1" + - Liaoning China (Shenyang) + - Provider: ChinaMobile + - "hebei1" + - Hebei China (Shijiazhuang) + - Provider: ChinaMobile + - "fujian1" + - Fujian China (Xiamen) + - Provider: ChinaMobile + - "guangxi1" + - Guangxi China (Nanning) + - Provider: ChinaMobile + - "anhui1" + - Anhui China (Huainan) + - Provider: ChinaMobile + - "us-standard" + - US Cross Region Standard + - Provider: IBMCOS + - "us-vault" + - US Cross Region Vault + - Provider: IBMCOS + - "us-cold" + - US Cross Region Cold + - Provider: IBMCOS + - "us-flex" + - US Cross Region Flex + - Provider: 
IBMCOS + - "us-east-standard" + - US East Region Standard + - Provider: IBMCOS + - "us-east-vault" + - US East Region Vault + - Provider: IBMCOS + - "us-east-cold" + - US East Region Cold + - Provider: IBMCOS + - "us-east-flex" + - US East Region Flex + - Provider: IBMCOS + - "us-south-standard" + - US South Region Standard + - Provider: IBMCOS + - "us-south-vault" + - US South Region Vault + - Provider: IBMCOS + - "us-south-cold" + - US South Region Cold + - Provider: IBMCOS + - "us-south-flex" + - US South Region Flex + - Provider: IBMCOS + - "eu-standard" + - EU Cross Region Standard + - Provider: IBMCOS + - "eu-vault" + - EU Cross Region Vault + - Provider: IBMCOS + - "eu-cold" + - EU Cross Region Cold + - Provider: IBMCOS + - "eu-flex" + - EU Cross Region Flex + - Provider: IBMCOS + - "eu-gb-standard" + - Great Britain Standard + - Provider: IBMCOS + - "eu-gb-vault" + - Great Britain Vault + - Provider: IBMCOS + - "eu-gb-cold" + - Great Britain Cold + - Provider: IBMCOS + - "eu-gb-flex" + - Great Britain Flex + - Provider: IBMCOS + - "ap-standard" + - APAC Standard + - Provider: IBMCOS + - "ap-vault" + - APAC Vault + - Provider: IBMCOS + - "ap-cold" + - APAC Cold + - Provider: IBMCOS + - "ap-flex" + - APAC Flex + - Provider: IBMCOS + - "mel01-standard" + - Melbourne Standard + - Provider: IBMCOS + - "mel01-vault" + - Melbourne Vault + - Provider: IBMCOS + - "mel01-cold" + - Melbourne Cold + - Provider: IBMCOS + - "mel01-flex" + - Melbourne Flex + - Provider: IBMCOS + - "tor01-standard" + - Toronto Standard + - Provider: IBMCOS + - "tor01-vault" + - Toronto Vault + - Provider: IBMCOS + - "tor01-cold" + - Toronto Cold + - Provider: IBMCOS + - "tor01-flex" + - Toronto Flex + - Provider: IBMCOS + - "cn-east-1" + - East China Region 1 + - Provider: Qiniu + - "cn-east-2" + - East China Region 2 + - Provider: Qiniu + - "cn-north-1" + - North China Region 1 + - Provider: Qiniu + - "cn-south-1" + - South China Region 1 + - Provider: Qiniu + - "us-north-1" + - North America Region 1 + - Provider: Qiniu + - "ap-southeast-1" + - Southeast Asia Region 1 + - Provider: Qiniu + - "ap-northeast-1" + - Northeast Asia Region 1 + - Provider: Qiniu + - "us-east-1" + - US East (N. 
Virginia) + - Provider: Rabata + - "eu-west-1" + - EU (Ireland) + - Provider: Rabata + - "eu-west-2" + - EU (London) + - Provider: Rabata + - "global" + - Global CDN Region + - Provider: RackCorp + - "au" + - Australia (All locations) + - Provider: RackCorp + - "au-nsw" + - NSW (Australia) Region + - Provider: RackCorp + - "au-qld" + - QLD (Australia) Region + - Provider: RackCorp + - "au-vic" + - VIC (Australia) Region + - Provider: RackCorp + - "au-wa" + - Perth (Australia) Region + - Provider: RackCorp + - "ph" + - Manila (Philippines) Region + - Provider: RackCorp + - "th" + - Bangkok (Thailand) Region + - Provider: RackCorp + - "hk" + - HK (Hong Kong) Region + - Provider: RackCorp + - "mn" + - Ulaanbaatar (Mongolia) Region + - Provider: RackCorp + - "kg" + - Bishkek (Kyrgyzstan) Region + - Provider: RackCorp + - "id" + - Jakarta (Indonesia) Region + - Provider: RackCorp + - "jp" + - Tokyo (Japan) Region + - Provider: RackCorp + - "sg" + - SG (Singapore) Region + - Provider: RackCorp + - "de" + - Frankfurt (Germany) Region + - Provider: RackCorp + - "us" + - USA (AnyCast) Region + - Provider: RackCorp + - "us-east-1" + - New York (USA) Region + - Provider: RackCorp + - "us-west-1" + - Fremont (USA) Region + - Provider: RackCorp + - "nz" + - Auckland (New Zealand) Region + - Provider: RackCorp #### --s3-acl @@ -28372,50 +31672,61 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega +- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "default" - - Owner gets Full_CONTROL. - - No one else has access rights (default). - - "private" - - Owner gets FULL_CONTROL. - - No one else has access rights (default). - - "public-read" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ access. - - "public-read-write" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ and WRITE access. - - Granting this on a bucket is generally not recommended. - - "authenticated-read" - - Owner gets FULL_CONTROL. - - The AuthenticatedUsers group gets READ access. - - "bucket-owner-read" - - Object owner gets FULL_CONTROL. - - Bucket owner gets READ access. - - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - - "bucket-owner-full-control" - - Both the object owner and the bucket owner get FULL_CONTROL over the object. - - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - - "private" - - Owner gets FULL_CONTROL. - - No one else has access rights (default). - - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS. - - "public-read" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ access. - - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS. - - "public-read-write" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ and WRITE access. - - This acl is available on IBM Cloud (Infra), On-Premise IBM COS. - - "authenticated-read" - - Owner gets FULL_CONTROL. - - The AuthenticatedUsers group gets READ access. - - Not supported on Buckets. - - This acl is available on IBM Cloud (Infra) and On-Premise IBM COS. + - "private" + - Owner gets FULL_CONTROL. + - No one else has access rights (default). 
+ - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other + - "public-read" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ access. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "public-read-write" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ and WRITE access. + - Granting this on a bucket is generally not recommended. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "authenticated-read" + - Owner gets FULL_CONTROL. + - The AuthenticatedUsers group gets READ access. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "bucket-owner-read" + - Object owner gets FULL_CONTROL. + - Bucket owner gets READ access. + - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "bucket-owner-full-control" + - Both the object owner and the bucket owner get FULL_CONTROL over the object. + - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "private" + - Owner gets FULL_CONTROL. + - No one else has access rights (default). + - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS. + - Provider: IBMCOS + - "public-read" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ access. + - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS. + - Provider: IBMCOS + - "public-read-write" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ and WRITE access. + - This acl is available on IBM Cloud (Infra), On-Premise IBM COS. + - Provider: IBMCOS + - "authenticated-read" + - Owner gets FULL_CONTROL. + - The AuthenticatedUsers group gets READ access. + - Not supported on Buckets. + - This acl is available on IBM Cloud (Infra) and On-Premise IBM COS. + - Provider: IBMCOS + - "default" + - Owner gets Full_CONTROL. + - No one else has access rights (default). 
+ - Provider: TencentCOS #### --s3-server-side-encryption @@ -28429,12 +31740,15 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "AES256" - - AES256 - - "aws:kms" - - aws:kms + - "" + - None + - Provider: AWS,Ceph,ChinaMobile,Minio + - "AES256" + - AES256 + - Provider: AWS,Ceph,ChinaMobile,Minio + - "aws:kms" + - aws:kms + - Provider: AWS,Ceph,Minio #### --s3-sse-kms-key-id @@ -28448,10 +31762,10 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "arn:aws:kms:us-east-1:*" - - arn:aws:kms:* + - "" + - None + - "arn:aws:kms:us-east-1:*" + - arn:aws:kms:* #### --s3-storage-class @@ -28461,28 +31775,70 @@ Properties: - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS -- Provider: AWS +- Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,Scaleway,TencentCOS - Type: string - Required: false - Examples: - - "" - - Default - - "STANDARD" - - Standard storage class - - "REDUCED_REDUNDANCY" - - Reduced redundancy storage class - - "STANDARD_IA" - - Standard Infrequent Access storage class - - "ONEZONE_IA" - - One Zone Infrequent Access storage class - - "GLACIER" - - Glacier Flexible Retrieval storage class - - "DEEP_ARCHIVE" - - Glacier Deep Archive storage class - - "INTELLIGENT_TIERING" - - Intelligent-Tiering storage class - - "GLACIER_IR" - - Glacier Instant Retrieval storage class + - "" + - Default + - Provider: AWS,Alibaba,ChinaMobile,TencentCOS + - "STANDARD" + - Standard storage class + - Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,TencentCOS + - "REDUCED_REDUNDANCY" + - Reduced redundancy storage class + - Provider: AWS + - "STANDARD_IA" + - Standard Infrequent Access storage class + - Provider: AWS + - "ONEZONE_IA" + - One Zone Infrequent Access storage class + - Provider: AWS + - "GLACIER" + - Glacier Flexible Retrieval storage class + - Provider: AWS + - "DEEP_ARCHIVE" + - Glacier Deep Archive storage class + - Provider: AWS + - "INTELLIGENT_TIERING" + - Intelligent-Tiering storage class + - Provider: AWS + - "GLACIER_IR" + - Glacier Instant Retrieval storage class + - Provider: AWS,Magalu + - "GLACIER" + - Archive storage mode + - Provider: Alibaba,ChinaMobile,Qiniu + - "STANDARD_IA" + - Infrequent access storage mode + - Provider: Alibaba,ChinaMobile,TencentCOS + - "LINE" + - Infrequent access storage mode + - Provider: Qiniu + - "DEEP_ARCHIVE" + - Deep archive storage mode + - Provider: Qiniu + - "" + - Default. + - Provider: Scaleway + - "STANDARD" + - The Standard class for any upload. + - Suitable for on-demand content like streaming or CDN. + - Available in all regions. + - Provider: Scaleway + - "GLACIER" + - Archived storage. + - Prices are lower, but it needs to be restored first to be accessed. + - Available in FR-PAR and NL-AMS regions. + - Provider: Scaleway + - "ONEZONE_IA" + - One Zone - Infrequent Access. + - A good choice for storing secondary backup copies or easily re-creatable data. + - Available in the FR-PAR region only. 
+ - Provider: Scaleway + - "ARCHIVE" + - Archive storage mode + - Provider: TencentCOS #### --s3-ibm-api-key @@ -28510,7 +31866,7 @@ Properties: ### Advanced options -Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu, Zata and others). +Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other). #### --s3-bucket-acl @@ -28529,23 +31885,23 @@ Properties: - Config: bucket_acl - Env Var: RCLONE_S3_BUCKET_ACL -- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade +- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "private" - - Owner gets FULL_CONTROL. - - No one else has access rights (default). - - "public-read" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ access. - - "public-read-write" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ and WRITE access. - - Granting this on a bucket is generally not recommended. - - "authenticated-read" - - Owner gets FULL_CONTROL. - - The AuthenticatedUsers group gets READ access. + - "private" + - Owner gets FULL_CONTROL. + - No one else has access rights (default). + - "public-read" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ access. + - "public-read-write" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ and WRITE access. + - Granting this on a bucket is generally not recommended. + - "authenticated-read" + - Owner gets FULL_CONTROL. + - The AuthenticatedUsers group gets READ access. #### --s3-requester-pays @@ -28571,10 +31927,10 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "AES256" - - AES256 + - "" + - None + - "AES256" + - AES256 #### --s3-sse-customer-key @@ -28590,8 +31946,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --s3-sse-customer-key-base64 @@ -28607,8 +31963,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --s3-sse-customer-key-md5 @@ -28625,8 +31981,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --s3-upload-cutoff @@ -29156,6 +32512,19 @@ Properties: - Type: bool - Default: false +#### --s3-use-data-integrity-protections + +If true use AWS S3 data integrity protections. 
+
+See [AWS Docs on Data Integrity Protections](https://docs.aws.amazon.com/sdkref/latest/guide/feature-dataintegrity.html).
+
+Properties:
+
+- Config: use_data_integrity_protections
+- Env Var: RCLONE_S3_USE_DATA_INTEGRITY_PROTECTIONS
+- Type: Tristate
+- Default: unset
+
#### --s3-versions

Include old versions in directory listings.

@@ -29492,9 +32861,11 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.

Here are the commands specific to the s3 backend.

-Run them with
+Run them with:

-    rclone backend COMMAND remote:
+```console
+rclone backend COMMAND remote:
+```

The help below will explain what arguments each command takes.

These can be run on a running backend using the rc command
[backend/command](https://rclone.org/rc/#backend-command).

### restore

-Restore objects from GLACIER or INTELLIGENT-TIERING archive tier
+Restore objects from GLACIER or INTELLIGENT-TIERING archive tier.

-    rclone backend restore remote: [options] [<arguments>+]
+```console
+rclone backend restore remote: [options] [<arguments>+]
+```

-This command can be used to restore one or more objects from GLACIER to normal storage
-or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier.
+This command can be used to restore one or more objects from GLACIER to normal
+storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier
+to the Frequent Access tier.

-Usage Examples:
+Usage examples:

-    rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
-    rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
-    rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
-    rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
+```console
+rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
+rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
+rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
+rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
+```

-This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
+This command also obeys the filters. Test first with --interactive/-i or --dry-run
+flags.

-    rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
+```console
+rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
+```

-All the objects shown will be marked for restore, then
+All the objects shown will be marked for restore, then:

-    rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
+```console
+rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
+```

It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
if not.

-    [
-        {
-            "Status": "OK",
-            "Remote": "test.txt"
-        },
-        {
-            "Status": "OK",
-            "Remote": "test/file4.txt"
-        }
-    ]
-
-
+```json
+[
+  {
+    "Status": "OK",
+    "Remote": "test.txt"
+  },
+  {
+    "Status": "OK",
+    "Remote": "test/file4.txt"
+  }
+]
+```

Options:

- "description": The optional description for the job.
-- "lifetime": Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING storage
+- "lifetime": Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING
+storage.
- "priority": Priority of restore: Standard|Expedited|Bulk
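+
+As a concrete end-to-end sketch (the bucket and path here are illustrative,
+not taken from the manual), a typical workflow is to trial the restore first,
+run it for real, and then poll it with the `restore-status` command described
+below:
+
+```console
+rclone --dry-run backend restore s3:mybucket/archive -o priority=Bulk -o lifetime=3
+rclone backend restore s3:mybucket/archive -o priority=Bulk -o lifetime=3
+rclone backend restore-status s3:mybucket/archive
+```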
- "priority": Priority of restore: Standard|Expedited|Bulk ### restore-status -Show the restore status for objects being restored from GLACIER or INTELLIGENT-TIERING storage +Show the status for objects being restored from GLACIER or INTELLIGENT-TIERING. - rclone backend restore-status remote: [options] [+] +```console +rclone backend restore-status remote: [options] [+] +``` -This command can be used to show the status for objects being restored from GLACIER to normal storage -or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier. +This command can be used to show the status for objects being restored from +GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep +Archive Access tier to the Frequent Access tier. -Usage Examples: +Usage examples: - rclone backend restore-status s3:bucket/path/to/object - rclone backend restore-status s3:bucket/path/to/directory - rclone backend restore-status -o all s3:bucket/path/to/directory +```console +rclone backend restore-status s3:bucket/path/to/object +rclone backend restore-status s3:bucket/path/to/directory +rclone backend restore-status -o all s3:bucket/path/to/directory +``` This command does not obey the filters. -It returns a list of status dictionaries. +It returns a list of status dictionaries: - [ - { - "Remote": "file.txt", - "VersionID": null, - "RestoreStatus": { - "IsRestoreInProgress": true, - "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" - }, - "StorageClass": "GLACIER" +```json +[ + { + "Remote": "file.txt", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": true, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" }, - { - "Remote": "test.pdf", - "VersionID": null, - "RestoreStatus": { - "IsRestoreInProgress": false, - "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" - }, - "StorageClass": "DEEP_ARCHIVE" + "StorageClass": "GLACIER" + }, + { + "Remote": "test.pdf", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": false, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" }, - { - "Remote": "test.gz", - "VersionID": null, - "RestoreStatus": { - "IsRestoreInProgress": true, - "RestoreExpiryDate": "null" - }, - "StorageClass": "INTELLIGENT_TIERING" - } - ] - + "StorageClass": "DEEP_ARCHIVE" + }, + { + "Remote": "test.gz", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": true, + "RestoreExpiryDate": "null" + }, + "StorageClass": "INTELLIGENT_TIERING" + } +] +``` Options: -- "all": if set then show all objects, not just ones with restore status +- "all": If set then show all objects, not just ones with restore status. ### list-multipart-uploads -List the unfinished multipart uploads +List the unfinished multipart uploads. - rclone backend list-multipart-uploads remote: [options] [+] +```console +rclone backend list-multipart-uploads remote: [options] [+] +``` This command lists the unfinished multipart uploads in JSON format. - rclone backend list-multipart s3:bucket/path/to/object +Usage examples: + +```console +rclone backend list-multipart s3:bucket/path/to/object +``` It returns a dictionary of buckets with values as lists of unfinished multipart uploads. @@ -29621,98 +33015,117 @@ multipart uploads. You can call it with no bucket in which case it lists all bucket, with a bucket or with a bucket and path. 
- { - "rclone": [ +```json +{ + "rclone": [ { - "Initiated": "2020-06-26T14:20:36Z", - "Initiator": { - "DisplayName": "XXX", - "ID": "arn:aws:iam::XXX:user/XXX" - }, - "Key": "KEY", - "Owner": { - "DisplayName": null, - "ID": "XXX" - }, - "StorageClass": "STANDARD", - "UploadId": "XXX" + "Initiated": "2020-06-26T14:20:36Z", + "Initiator": { + "DisplayName": "XXX", + "ID": "arn:aws:iam::XXX:user/XXX" + }, + "Key": "KEY", + "Owner": { + "DisplayName": null, + "ID": "XXX" + }, + "StorageClass": "STANDARD", + "UploadId": "XXX" } - ], - "rclone-1000files": [], - "rclone-dst": [] - } - - + ], + "rclone-1000files": [], + "rclone-dst": [] +} +``` ### cleanup Remove unfinished multipart uploads. - rclone backend cleanup remote: [options] [+] +```console +rclone backend cleanup remote: [options] [+] +``` This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours. -Note that you can use --interactive/-i or --dry-run with this command to see what -it would do. +Note that you can use --interactive/-i or --dry-run with this command to see +what it would do. - rclone backend cleanup s3:bucket/path/to/object - rclone backend cleanup -o max-age=7w s3:bucket/path/to/object +Usage examples: + +```console +rclone backend cleanup s3:bucket/path/to/object +rclone backend cleanup -o max-age=7w s3:bucket/path/to/object +``` Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. - Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. ### cleanup-hidden Remove old versions of files. - rclone backend cleanup-hidden remote: [options] [+] +```console +rclone backend cleanup-hidden remote: [options] [+] +``` This command removes any old hidden versions of files on a versions enabled bucket. -Note that you can use --interactive/-i or --dry-run with this command to see what -it would do. +Note that you can use --interactive/-i or --dry-run with this command to see +what it would do. - rclone backend cleanup-hidden s3:bucket/path/to/dir +Usage example: +```console +rclone backend cleanup-hidden s3:bucket/path/to/dir +``` ### versioning Set/get versioning support for a bucket. - rclone backend versioning remote: [options] [+] +```console +rclone backend versioning remote: [options] [+] +``` This command sets versioning support if a parameter is passed and then returns the current versioning status for the bucket supplied. - rclone backend versioning s3:bucket # read status only - rclone backend versioning s3:bucket Enabled - rclone backend versioning s3:bucket Suspended +Usage examples: -It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning -has been enabled the status can't be set back to "Unversioned". +```console +rclone backend versioning s3:bucket # read status only +rclone backend versioning s3:bucket Enabled +rclone backend versioning s3:bucket Suspended +``` +It may return "Enabled", "Suspended" or "Unversioned". Note that once +versioning has been enabled the status can't be set back to "Unversioned". ### set Set command for updating the config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` This set command can be used to update the config parameters for a running s3 backend. 
-Usage Examples: +Usage examples: - rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X +```console +rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X +``` The option keys are named as they are in the config file. @@ -29722,8 +33135,7 @@ will default to those currently in use. It doesn't return anything. - - + ### Anonymous access to public buckets {#anonymous-access} @@ -29731,7 +33143,7 @@ If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. Your config should end up looking like this: -``` +```ini [anons3] type = s3 provider = AWS @@ -29739,19 +33151,24 @@ provider = AWS Then use it as normal with the name of the public bucket, e.g. - rclone lsd anons3:1000genomes +```console +rclone lsd anons3:1000genomes +``` You will be able to list and copy data but not upload it. You can also do this entirely on the command line - rclone lsd :s3,provider=AWS:1000genomes +```console +rclone lsd :s3,provider=AWS:1000genomes +``` ## Providers ### AWS S3 -This is the provider used as main example and described in the [configuration](#configuration) section above. +This is the provider used as main example and described in the [configuration](#configuration) +section above. ### AWS Directory Buckets @@ -29784,7 +33201,8 @@ does not support query parameter based authentication. With rclone v1.59 or later setting `upload_cutoff` should not be necessary. eg. -``` + +```ini [snowball] type = s3 provider = Other @@ -29799,12 +33217,14 @@ upload_cutoff = 0 Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/) configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -29906,15 +33326,17 @@ y/e/d> y ### ArvanCloud {#arvan-cloud} -[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud Object Storage goes beyond the limited traditional file storage. -It gives you access to backup and archived files and allows sharing. -Files like profile image in the app, images sent by users or scanned documents can be stored securely and easily in our Object Storage service. +[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud +Object Storage goes beyond the limited traditional file storage. +It gives you access to backup and archived files and allows sharing. +Files like profile image in the app, images sent by users or scanned documents +can be stored securely and easily in our Object Storage service. ArvanCloud provides an S3 interface which can be configured for use with rclone like this. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -29997,7 +33419,7 @@ y/e/d> y This will leave the config file looking like this. 
-``` +```ini [ArvanCloud] type = s3 provider = ArvanCloud @@ -30022,8 +33444,7 @@ To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config: - -``` +```ini [ceph] type = s3 provider = Ceph @@ -30052,7 +33473,7 @@ only write `/` in the secret access key. Eg the dump from Ceph looks something like this (irrelevant keys removed). -``` +```json { "user_id": "xxx", "display_name": "xxxx", @@ -30074,12 +33495,14 @@ use the secret key as `xxxxxx/xxxx` it will work fine. Here is an example of making an [China Mobile Ecloud Elastic Object Storage (EOS)](https:///ecloud.10086.cn/home/product-introduction/eos/) configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30328,7 +33751,9 @@ services. Here is an example of making a Cloudflare R2 configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. @@ -30336,8 +33761,8 @@ Note that all buckets are private, and all are stored in the same "auto" region. It is necessary to use Cloudflare workers to share the content of a bucket publicly. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30405,7 +33830,7 @@ y/e/d> y This will leave your config looking something like: -``` +```ini [r2] type = s3 provider = Cloudflare @@ -30431,17 +33856,72 @@ does. If this is causing a problem then upload the files with A consequence of this is that `Content-Encoding: gzip` will never appear in the metadata on Cloudflare. +### Cubbit DS3 {#Cubbit} + +[Cubbit Object Storage](https://www.cubbit.io/ds3-cloud) is a geo-distributed +cloud object storage platform. + +To connect to Cubbit DS3 you will need an access key and secret key pair. You +can follow this [guide](https://docs.cubbit.io/getting-started/quickstart#api-keys) +to retrieve these keys. They will be needed when prompted by `rclone config`. + +Default region will correspond to `eu-west-1` and the endpoint has to be specified +as `s3.cubbit.eu`. + +Going through the whole process of creating a new remote by running `rclone config`, +each prompt should be answered as shown below: + +```console +name> cubbit-ds3 (or any name you like) +Storage> s3 +provider> Cubbit +env_auth> false +access_key_id> YOUR_ACCESS_KEY +secret_access_key> YOUR_SECRET_KEY +region> eu-west-1 (or leave empty) +endpoint> s3.cubbit.eu +acl> +``` + +The resulting configuration file should look like: + +```ini +[cubbit-ds3] +type = s3 +provider = Cubbit +access_key_id = ACCESS_KEY +secret_access_key = SECRET_KEY +region = eu-west-1 +endpoint = s3.cubbit.eu +``` + +You can then start using Cubbit DS3 with rclone. For example, to create a new +bucket and copy files into it, you can run: + +```console +rclone mkdir cubbit-ds3:my-bucket +rclone copy /path/to/files cubbit-ds3:my-bucket +``` + ### DigitalOcean Spaces -[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean. 
+[Spaces](https://www.digitalocean.com/products/object-storage/) is an +[S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) +object storage service from cloud provider DigitalOcean. -To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`. +To connect to DigitalOcean Spaces you will need an access key and secret key. +These can be retrieved on the [Applications & API](https://cloud.digitalocean.com/settings/api/tokens) +page of the DigitalOcean control panel. They will be needed when prompted by +`rclone config` for your `access_key_id` and `secret_access_key`. -When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings. +When prompted for a `region` or `location_constraint`, press enter to use the +default value. The region must be included in the `endpoint` setting (e.g. +`nyc3.digitaloceanspaces.com`). The default values can be used for other settings. -Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below: +Going through the whole process of creating a new remote by running `rclone config`, +each prompt should be answered as shown below: -``` +```console Storage> s3 env_auth> 1 access_key_id> YOUR_ACCESS_KEY @@ -30455,7 +33935,7 @@ storage_class> The resulting configuration file should look like: -``` +```ini [spaces] type = s3 provider = DigitalOcean @@ -30472,7 +33952,7 @@ storage_class = Once configured, you can create a new Space and begin copying files. For example: -``` +```console rclone mkdir spaces:my-new-space rclone copy /path/to/files spaces:my-new-space ``` @@ -30486,7 +33966,7 @@ To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config: -``` +```ini [dreamobjects] type = s3 provider = DreamHost @@ -30523,8 +34003,8 @@ if you need more help. An `rclone config` walkthrough might look like this but details may vary depending exactly on how you have set up the container. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30570,7 +34050,7 @@ y/n> n And the config generated will end up looking like this: -``` +```ini [exaba] type = s3 provider = Exaba @@ -30581,11 +34061,14 @@ endpoint = http://127.0.0.1:9000/ ### Google Cloud Storage -[GoogleCloudStorage](https://cloud.google.com/storage/docs) is an [S3-interoperable](https://cloud.google.com/storage/docs/interoperability) object storage service from Google Cloud Platform. +[GoogleCloudStorage](https://cloud.google.com/storage/docs) is an +[S3-interoperable](https://cloud.google.com/storage/docs/interoperability) object +storage service from Google Cloud Platform. -To connect to Google Cloud Storage you will need an access key and secret key. These can be retrieved by creating an [HMAC key](https://cloud.google.com/storage/docs/authentication/managing-hmackeys). +To connect to Google Cloud Storage you will need an access key and secret key. 
+These can be retrieved by creating an [HMAC key](https://cloud.google.com/storage/docs/authentication/managing-hmackeys). -``` +```ini [gs] type = s3 provider = GCS @@ -30594,18 +34077,170 @@ secret_access_key = your_secret_key endpoint = https://storage.googleapis.com ``` -**Note** that `--s3-versions` does not work with GCS when it needs to do directory paging. Rclone will return the error: +**Note** that `--s3-versions` does not work with GCS when it needs to do +directory paging. Rclone will return the error: - s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker +```text +s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker +``` This is Google bug [#312292516](https://issuetracker.google.com/u/0/issues/312292516). +### Hetzner Object Storage {#hetzner} + +Here is an example of making a [Hetzner Object Storage](https://www.hetzner.com/storage/object-storage/) +configuration. First run: + +```console +rclone config +``` + +This will guide you through an interactive setup process. + +```text +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> my-hetzner +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others + \ (s3) +[snip] +Storage> s3 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / Hetzner Object Storage + \ (Hetzner) +[snip] +provider> Hetzner +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_KEY +Option region. +Region to connect to. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Helsinki + \ (hel1) + 2 / Falkenstein + \ (fsn1) + 3 / Nuremberg + \ (nbg1) +region> +Option endpoint. +Endpoint for Hetzner Object Storage +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Helsinki + \ (hel1.your-objectstorage.com) + 2 / Falkenstein + \ (fsn1.your-objectstorage.com) + 3 / Nuremberg + \ (nbg1.your-objectstorage.com) +endpoint> +Option location_constraint. +Location constraint - must be set to match the Region. +Leave blank if not sure. Used when creating buckets only. +Enter a value. Press Enter to leave empty. +location_constraint> +Option acl. 
+Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +If the acl is an empty string then no X-Amz-Acl: header is added and +the default (private) will be used. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) +acl> +Edit advanced config? +y) Yes +n) No (default) +y/n> +Configuration complete. +Options: +- type: s3 +- provider: Hetzner +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_KEY +Keep this "my-hetzner" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> +Current remotes: + +Name Type +==== ==== +my-hetzner s3 + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> +``` + +This will leave the config file looking like this. + +```ini +[my-hetzner] +type = s3 +provider = Hetzner +access_key_id = ACCESS_KEY +secret_access_key = SECRET_KEY +region = hel1 +endpoint = hel1.your-objectstorage.com +acl = private +``` + ### Huawei OBS {#huawei-obs} -Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere. +Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use +cloud storage that lets you store virtually any volume of unstructured data in +any format and access it from anywhere. -OBS provides an S3 interface, you can copy and modify the following configuration and add it to your rclone configuration file. -``` +OBS provides an S3 interface, you can copy and modify the following configuration +and add it to your rclone configuration file. + +```ini [obs] type = s3 provider = HuaweiOBS @@ -30617,8 +34252,9 @@ acl = private ``` Or you can also configure via the interactive command line: -``` -No remotes found, make a new one? + +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30730,208 +34366,237 @@ e/n/d/r/c/s/q> q ### IBM COS (S3) -Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage) +Information stored with IBM Cloud Object Storage is encrypted and dispersed across +multiple geographic locations, and accessed through an implementation of the S3 API. +This service makes use of the distributed storage technologies provided by IBM’s +Cloud Object Storage System (formerly Cleversafe). For more information visit: + To configure access to IBM COS S3, follow the steps below: 1. Run rclone config and select n for a new remote. 
-``` - 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n -``` + + ```text + 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` 2. Enter the name for the configuration -``` - name> -``` + + ```text + name> + ``` 3. Select "s3" storage. -``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ "s3" -[snip] -Storage> s3 -``` + + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" + [snip] + Storage> s3 + ``` 4. Select IBM COS as the S3 Storage Provider. -``` -Choose the S3 provider. -Choose a number from below, or type in your own value - 1 / Choose this option to configure Storage to AWS S3 - \ "AWS" - 2 / Choose this option to configure Storage to Ceph Systems - \ "Ceph" - 3 / Choose this option to configure Storage to Dreamhost - \ "Dreamhost" - 4 / Choose this option to the configure Storage to IBM COS S3 - \ "IBMCOS" - 5 / Choose this option to the configure Storage to Minio - \ "Minio" - Provider>4 -``` + + ```text + Choose the S3 provider. + Choose a number from below, or type in your own value + 1 / Choose this option to configure Storage to AWS S3 + \ "AWS" + 2 / Choose this option to configure Storage to Ceph Systems + \ "Ceph" + 3 / Choose this option to configure Storage to Dreamhost + \ "Dreamhost" + 4 / Choose this option to the configure Storage to IBM COS S3 + \ "IBMCOS" + 5 / Choose this option to the configure Storage to Minio + \ "Minio" + Provider>4 + ``` 5. Enter the Access Key and Secret. -``` - AWS Access Key ID - leave blank for anonymous access or runtime credentials. - access_key_id> <> - AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. - secret_access_key> <> -``` -6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an endpoint address. -``` - Endpoint for IBM COS S3 API. - Specify if using an IBM COS On Premise. 
- Choose a number from below, or type in your own value - 1 / US Cross Region Endpoint - \ "s3-api.us-geo.objectstorage.softlayer.net" - 2 / US Cross Region Dallas Endpoint - \ "s3-api.dal.us-geo.objectstorage.softlayer.net" - 3 / US Cross Region Washington DC Endpoint - \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" - 4 / US Cross Region San Jose Endpoint - \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" - 5 / US Cross Region Private Endpoint - \ "s3-api.us-geo.objectstorage.service.networklayer.com" - 6 / US Cross Region Dallas Private Endpoint - \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" - 7 / US Cross Region Washington DC Private Endpoint - \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" - 8 / US Cross Region San Jose Private Endpoint - \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" - 9 / US Region East Endpoint - \ "s3.us-east.objectstorage.softlayer.net" - 10 / US Region East Private Endpoint - \ "s3.us-east.objectstorage.service.networklayer.com" - 11 / US Region South Endpoint -[snip] - 34 / Toronto Single Site Private Endpoint - \ "s3.tor01.objectstorage.service.networklayer.com" - endpoint>1 -``` + ```text + AWS Access Key ID - leave blank for anonymous access or runtime credentials. + access_key_id> <> + AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. + secret_access_key> <> + ``` +6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the option +below. For On Premise IBM COS, enter an endpoint address. -7. Specify a IBM COS Location Constraint. The location constraint must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter -``` - 1 / US Cross Region Standard - \ "us-standard" - 2 / US Cross Region Vault - \ "us-vault" - 3 / US Cross Region Cold - \ "us-cold" - 4 / US Cross Region Flex - \ "us-flex" - 5 / US East Region Standard - \ "us-east-standard" - 6 / US East Region Vault - \ "us-east-vault" - 7 / US East Region Cold - \ "us-east-cold" - 8 / US East Region Flex - \ "us-east-flex" - 9 / US South Region Standard - \ "us-south-standard" - 10 / US South Region Vault - \ "us-south-vault" -[snip] - 32 / Toronto Flex - \ "tor01-flex" -location_constraint>1 -``` + ```text + Endpoint for IBM COS S3 API. + Specify if using an IBM COS On Premise. + Choose a number from below, or type in your own value + 1 / US Cross Region Endpoint + \ "s3-api.us-geo.objectstorage.softlayer.net" + 2 / US Cross Region Dallas Endpoint + \ "s3-api.dal.us-geo.objectstorage.softlayer.net" + 3 / US Cross Region Washington DC Endpoint + \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" + 4 / US Cross Region San Jose Endpoint + \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" + 5 / US Cross Region Private Endpoint + \ "s3-api.us-geo.objectstorage.service.networklayer.com" + 6 / US Cross Region Dallas Private Endpoint + \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" + 7 / US Cross Region Washington DC Private Endpoint + \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" + 8 / US Cross Region San Jose Private Endpoint + \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" + 9 / US Region East Endpoint + \ "s3.us-east.objectstorage.softlayer.net" + 10 / US Region East Private Endpoint + \ "s3.us-east.objectstorage.service.networklayer.com" + 11 / US Region South Endpoint + [snip] + 34 / Toronto Single Site Private Endpoint + \ "s3.tor01.objectstorage.service.networklayer.com" + endpoint>1 + ``` -8. 
Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs. -``` -Canned ACL used when creating buckets and/or storing objects in S3. -For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS - \ "private" - 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS - \ "public-read" - 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS - \ "public-read-write" - 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS - \ "authenticated-read" -acl> 1 -``` +7. Specify a IBM COS Location Constraint. The location constraint must match +endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection +from this list, hit enter -9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this -``` - [xxx] - type = s3 - Provider = IBMCOS - access_key_id = xxx - secret_access_key = yyy - endpoint = s3-api.us-geo.objectstorage.softlayer.net - location_constraint = us-standard - acl = private -``` + ```text + 1 / US Cross Region Standard + \ "us-standard" + 2 / US Cross Region Vault + \ "us-vault" + 3 / US Cross Region Cold + \ "us-cold" + 4 / US Cross Region Flex + \ "us-flex" + 5 / US East Region Standard + \ "us-east-standard" + 6 / US East Region Vault + \ "us-east-vault" + 7 / US East Region Cold + \ "us-east-cold" + 8 / US East Region Flex + \ "us-east-flex" + 9 / US South Region Standard + \ "us-south-standard" + 10 / US South Region Vault + \ "us-south-vault" + [snip] + 32 / Toronto Flex + \ "tor01-flex" + location_constraint>1 + ``` + +8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". +IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the +canned ACLs. + + ```text + Canned ACL used when creating buckets and/or storing objects in S3. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS + \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS + \ "public-read" + 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS + \ "public-read-write" + 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS + \ "authenticated-read" + acl> 1 + ``` + +9. Review the displayed configuration and accept to save the "remote" then quit. 
+The config file should look like this + + ```ini + [xxx] + type = s3 + Provider = IBMCOS + access_key_id = xxx + secret_access_key = yyy + endpoint = s3-api.us-geo.objectstorage.softlayer.net + location_constraint = us-standard + acl = private + ``` 10. Execute rclone commands -``` - 1) Create a bucket. - rclone mkdir IBM-COS-XREGION:newbucket - 2) List available buckets. - rclone lsd IBM-COS-XREGION: - -1 2017-11-08 21:16:22 -1 test - -1 2018-02-14 20:16:39 -1 newbucket - 3) List contents of a bucket. - rclone ls IBM-COS-XREGION:newbucket - 18685952 test.exe - 4) Copy a file from local to remote. - rclone copy /Users/file.txt IBM-COS-XREGION:newbucket - 5) Copy a file from remote to local. - rclone copy IBM-COS-XREGION:newbucket/file.txt . - 6) Delete a file on remote. - rclone delete IBM-COS-XREGION:newbucket/file.txt -``` -#### IBM IAM authentication -If using IBM IAM authentication with IBM API KEY you need to fill in these additional parameters + ```text + 1) Create a bucket. + rclone mkdir IBM-COS-XREGION:newbucket + 2) List available buckets. + rclone lsd IBM-COS-XREGION: + -1 2017-11-08 21:16:22 -1 test + -1 2018-02-14 20:16:39 -1 newbucket + 3) List contents of a bucket. + rclone ls IBM-COS-XREGION:newbucket + 18685952 test.exe + 4) Copy a file from local to remote. + rclone copy /Users/file.txt IBM-COS-XREGION:newbucket + 5) Copy a file from remote to local. + rclone copy IBM-COS-XREGION:newbucket/file.txt . + 6) Delete a file on remote. + rclone delete IBM-COS-XREGION:newbucket/file.txt + ``` + +#### IBM IAM authentication + +If using IBM IAM authentication with IBM API KEY you need to fill in these +additional parameters + 1. Select false for env_auth 2. Leave `access_key_id` and `secret_access_key` blank -3. Paste your `ibm_api_key` -``` -Option ibm_api_key. -IBM API Key to be used to obtain IAM token -Enter a value of type string. Press Enter for the default (1). -ibm_api_key> -``` +3. Paste your `ibm_api_key` + + ```text + Option ibm_api_key. + IBM API Key to be used to obtain IAM token + Enter a value of type string. Press Enter for the default (1). + ibm_api_key> + ``` + 4. Paste your `ibm_resource_instance_id` -``` -Option ibm_resource_instance_id. -IBM service instance id -Enter a value of type string. Press Enter for the default (2). -ibm_resource_instance_id> -``` + + ```text + Option ibm_resource_instance_id. + IBM service instance id + Enter a value of type string. Press Enter for the default (2). + ibm_resource_instance_id> + ``` + 5. In advanced settings type true for `v2_auth` -``` -Option v2_auth. -If true use v2 authentication. -If this is false (the default) then rclone will use v4 authentication. -If it is set then rclone will use v2 authentication. -Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. -Enter a boolean value (true or false). Press Enter for the default (true). -v2_auth> -``` + + ```text + Option v2_auth. + If true use v2 authentication. + If this is false (the default) then rclone will use v4 authentication. + If it is set then rclone will use v2 authentication. + Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. + Enter a boolean value (true or false). Press Enter for the default (true). + v2_auth> + ``` ### IDrive e2 {#idrive-e2} Here is an example of making an [IDrive e2](https://www.idrive.com/e2/) configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? 
+```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -31031,21 +34696,158 @@ d) Delete this remote y/e/d> y ``` +### Intercolo Object Storage {#intercolo} + +[Intercolo Object Storage](https://intercolo.de/object-storage) offers +GDPR-compliant, transparently priced, S3-compatible +cloud storage hosted in Frankfurt, Germany. + +Here's an example of making a configuration for Intercolo. + +First run: + +```console +rclone config +``` + +This will guide you through an interactive setup process. + +```text +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> intercolo + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] + xx / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +xx / Intercolo Object Storage + \ (Intercolo) +[snip] +provider> Intercolo + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> false + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_KEY + +Option region. +Region where your bucket will be created and your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Frankfurt, Germany + \ (de-fra) +region> 1 + +Option endpoint. +Endpoint for Intercolo Object Storage. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Frankfurt, Germany + \ (de-fra.i3storage.com) +endpoint> 1 + +Option acl. +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn't copy the ACL from the source but rather writes a fresh one. +If the acl is an empty string then no X-Amz-Acl: header is added and +the default (private) will be used. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + [snip] +acl> + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: Intercolo +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_KEY +- region: de-fra +- endpoint: de-fra.i3storage.com +Keep this "intercolo" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This will leave the config file looking like this. 
+ +```ini +[intercolo] +type = s3 +provider = Intercolo +access_key_id = ACCESS_KEY +secret_access_key = SECRET_KEY +region = de-fra +endpoint = de-fra.i3storage.com +``` + ### IONOS Cloud {#ionos} -[IONOS S3 Object Storage](https://cloud.ionos.com/storage/object-storage) is a service offered by IONOS for storing and accessing unstructured data. -To connect to the service, you will need an access key and a secret key. These can be found in the [Data Center Designer](https://dcd.ionos.com/), by selecting **Manager resources** > **Object Storage Key Manager**. +[IONOS S3 Object Storage](https://cloud.ionos.com/storage/object-storage) is a +service offered by IONOS for storing and accessing unstructured data. +To connect to the service, you will need an access key and a secret key. These +can be found in the [Data Center Designer](https://dcd.ionos.com/), by +selecting **Manager resources** > **Object Storage Key Manager**. +Here is an example of a configuration. First, run `rclone config`. This will +walk you through an interactive setup process. Type `n` to add the new remote, +and then enter a name: -Here is an example of a configuration. First, run `rclone config`. This will walk you through an interactive setup process. Type `n` to add the new remote, and then enter a name: - -``` +```text Enter name for new remote. name> ionos-fra ``` Type `s3` to choose the connection type: -``` + +```text Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. @@ -31057,7 +34859,8 @@ Storage> s3 ``` Type `IONOS`: -``` + +```text Option provider. Choose your S3 provider. Choose a number from below, or type in your own value. @@ -31070,7 +34873,8 @@ provider> IONOS ``` Press Enter to choose the default option `Enter AWS credentials in the next step`: -``` + +```text Option env_auth. Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. @@ -31083,8 +34887,11 @@ Press Enter for the default (false). env_auth> ``` -Enter your Access Key and Secret key. These can be retrieved in the [Data Center Designer](https://dcd.ionos.com/), click on the menu “Manager resources” / "Object Storage Key Manager". -``` +Enter your Access Key and Secret key. These can be retrieved in the +[Data Center Designer](https://dcd.ionos.com/), click on the menu +"Manager resources" / "Object Storage Key Manager". + +```text Option access_key_id. AWS Access Key ID. Leave blank for anonymous access or runtime credentials. @@ -31099,7 +34906,8 @@ secret_access_key> YOUR_SECRET_KEY ``` Choose the region where your bucket is located: -``` + +```text Option region. Region where your bucket will be created and your data stored. Choose a number from below, or type in your own value. @@ -31114,7 +34922,8 @@ region> 2 ``` Choose the endpoint from the same region: -``` + +```text Option endpoint. Endpoint for IONOS S3 Object Storage. Specify the endpoint from the same region. @@ -31130,7 +34939,8 @@ endpoint> 1 ``` Press Enter to choose the default option or choose the desired ACL setting: -``` + +```text Option acl. Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. @@ -31148,7 +34958,8 @@ acl> ``` Press Enter to skip the advanced config: -``` + +```text Edit advanced config? 
y) Yes n) No (default) @@ -31156,7 +34967,8 @@ y/n> ``` Press Enter to save the configuration, and then `q` to quit the configuration process: -``` + +```text Configuration complete. Options: - type: s3 @@ -31173,155 +34985,169 @@ y/e/d> y Done! Now you can try some commands (for macOS, use `./rclone` instead of `rclone`). -1) Create a bucket (the name must be unique within the whole IONOS S3) -``` -rclone mkdir ionos-fra:my-bucket -``` -2) List available buckets -``` -rclone lsd ionos-fra: -``` -4) Copy a file from local to remote -``` -rclone copy /Users/file.txt ionos-fra:my-bucket -``` -3) List contents of a bucket -``` -rclone ls ionos-fra:my-bucket -``` -5) Copy a file from remote to local -``` -rclone copy ionos-fra:my-bucket/file.txt -``` +1) Create a bucket (the name must be unique within the whole IONOS S3) + + ```console + rclone mkdir ionos-fra:my-bucket + ``` + +2) List available buckets + + ```console + rclone lsd ionos-fra: + ``` + +3) Copy a file from local to remote + + ```console + rclone copy /Users/file.txt ionos-fra:my-bucket + ``` + +4) List contents of a bucket + + ```console + rclone ls ionos-fra:my-bucket + ``` + +5) Copy a file from remote to local + + ```console + rclone copy ionos-fra:my-bucket/file.txt + ``` ### Leviia Cloud Object Storage {#leviia} -[Leviia Object Storage](https://www.leviia.com/object-storage/), backup and secure your data in a 100% French cloud, independent of GAFAM.. +[Leviia Object Storage](https://www.leviia.com/object-storage/), backup and secure +your data in a 100% French cloud, independent of GAFAM.. To configure access to Leviia, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. -``` -rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` + ```text + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` 2. Give the name of the configuration. For example, name it 'leviia'. -``` -name> leviia -``` + ```text + name> leviia + ``` 3. Select `s3` storage. -``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ (s3) -[snip] -Storage> s3 -``` + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + [snip] + Storage> s3 + ``` 4. Select `Leviia` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" -[snip] -15 / Leviia Object Storage - \ (Leviia) -[snip] -provider> Leviia -``` + + ```text + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 15 / Leviia Object Storage + \ (Leviia) + [snip] + provider> Leviia + ``` 5. Enter your SecretId and SecretKey of Leviia. -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). 
-access_key_id> ZnIx.xxxxxxxxxxxxxxx -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` + ```text + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> ZnIx.xxxxxxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx + ``` 6. Select endpoint for Leviia. -``` - / The default endpoint - 1 | Leviia. - \ (s3.leviia.com) -[snip] -endpoint> 1 -``` + ```text + / The default endpoint + 1 | Leviia. + \ (s3.leviia.com) + [snip] + endpoint> 1 + ``` + 7. Choose acl. -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) -[snip] -acl> 1 -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[leviia] -- type: s3 -- provider: Leviia -- access_key_id: ZnIx.xxxxxxx -- secret_access_key: xxxxxxxx -- endpoint: s3.leviia.com -- acl: private --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -Current remotes: - -Name Type -==== ==== -leviia s3 -``` + ```text + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + [snip] + acl> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [leviia] + - type: s3 + - provider: Leviia + - access_key_id: ZnIx.xxxxxxx + - secret_access_key: xxxxxxxx + - endpoint: s3.leviia.com + - acl: private + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + leviia s3 + ``` ### Liara {#liara-cloud} Here is an example of making a [Liara Object Storage](https://liara.ir/landing/object-storage) configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -31397,7 +35223,7 @@ y/e/d> y This will leave the config file looking like this. 
-``` +```ini [Liara] type = s3 provider = Liara @@ -31417,12 +35243,14 @@ storage_class = Here is an example of making a [Linode Object Storage](https://www.linode.com/products/object-storage/) configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -31558,7 +35386,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [linode] type = s3 provider = Linode @@ -31572,12 +35400,14 @@ endpoint = eu-central-1.linodeobjects.com Here is an example of making a [Magalu Object Storage](https://magalu.cloud/object-storage/) configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -31675,7 +35505,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [magalu] type = s3 provider = Magalu @@ -31693,12 +35523,14 @@ included in existing Pro plans. Here is an example of making a configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -31785,7 +35617,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [megas4] type = s3 provider = Mega @@ -31796,15 +35628,17 @@ endpoint = s3.eu-central-1.s4.mega.io ### Minio -[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops. +[Minio](https://minio.io/) is an object storage server built for cloud application +developers and devops. -It is very easy to install and provides an S3 compatible server which can be used by rclone. +It is very easy to install and provides an S3 compatible server which can be used +by rclone. To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide). When it configures itself Minio will print something like this -``` +```text Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000 AccessKey: USWUXHGYZQYFYFFIT3RE SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 @@ -31830,7 +35664,7 @@ Drive Capacity: 26 GiB Free, 165 GiB Total These details need to go into `rclone config` like this. Note that it is important to put the region in as stated above. -``` +```text env_auth> 1 access_key_id> USWUXHGYZQYFYFFIT3RE secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 @@ -31842,7 +35676,7 @@ server_side_encryption> Which makes the config file look like this -``` +```ini [minio] type = s3 provider = Minio @@ -31857,7 +35691,7 @@ server_side_encryption = So once set up, for example, to copy files into a bucket -``` +```console rclone copy /path/to/files minio:bucket ``` @@ -31869,11 +35703,15 @@ setting the provider `Netease`. This will automatically set ### Outscale -[OUTSCALE Object Storage (OOS)](https://en.outscale.com/storage/outscale-object-storage/) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. For more information about OOS, see the [official documentation](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html). 
+[OUTSCALE Object Storage (OOS)](https://en.outscale.com/storage/outscale-object-storage/) +is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, +a brand of Dassault Systèmes. For more information about OOS, see the +[official documentation](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html). -Here is an example of an OOS configuration that you can paste into your rclone configuration file: +Here is an example of an OOS configuration that you can paste into your rclone +configuration file: -``` +```ini [outscale] type = s3 provider = Outscale @@ -31887,20 +35725,20 @@ acl = private You can also run `rclone config` to go through the interactive setup process: -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config n/s/q> n ``` -``` +```text Enter name for new remote. name> outscale ``` -``` +```text Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. @@ -31911,7 +35749,7 @@ Choose a number from below, or type in your own value. Storage> outscale ``` -``` +```text Option provider. Choose your S3 provider. Choose a number from below, or type in your own value. @@ -31923,7 +35761,7 @@ XX / OUTSCALE Object Storage (OOS) provider> Outscale ``` -``` +```text Option env_auth. Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. @@ -31936,7 +35774,7 @@ Press Enter for the default (false). env_auth> ``` -``` +```text Option access_key_id. AWS Access Key ID. Leave blank for anonymous access or runtime credentials. @@ -31944,7 +35782,7 @@ Enter a value. Press Enter to leave empty. access_key_id> ABCDEFGHIJ0123456789 ``` -``` +```text Option secret_access_key. AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials. @@ -31952,7 +35790,7 @@ Enter a value. Press Enter to leave empty. secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ``` -``` +```text Option region. Region where your bucket will be created and your data stored. Choose a number from below, or type in your own value. @@ -31970,7 +35808,7 @@ Press Enter to leave empty. region> 1 ``` -``` +```text Option endpoint. Endpoint for S3 API. Required when using an S3 clone. @@ -31989,7 +35827,7 @@ Press Enter to leave empty. endpoint> 1 ``` -``` +```text Option acl. Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. @@ -32007,14 +35845,14 @@ Press Enter to leave empty. acl> 1 ``` -``` +```text Edit advanced config? y) Yes n) No (default) y/n> n ``` -``` +```text Configuration complete. Options: - type: s3 @@ -32032,14 +35870,16 @@ y/e/d> y ### OVHcloud {#ovhcloud} [OVHcloud Object Storage](https://www.ovhcloud.com/en-ie/public-cloud/object-storage/) -is an S3-compatible general-purpose object storage platform available in all OVHcloud regions. -To use the platform, you will need an access key and secret key. To know more about it and how -to interact with the platform, take a look at the [documentation](https://ovh.to/8stqhuo). +is an S3-compatible general-purpose object storage platform available in all +OVHcloud regions. To use the platform, you will need an access key and secret key. +To know more about it and how to interact with the platform, take a look at the +[documentation](https://ovh.to/8stqhuo). 
-Here is an example of making an OVHcloud Object Storage configuration with `rclone config`: +Here is an example of making an OVHcloud Object Storage configuration with +`rclone config`: -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -32217,7 +36057,7 @@ y/e/d> y Your configuration file should now look like this: -``` +```ini [ovhcloud-rbx] type = s3 provider = OVHcloud @@ -32228,20 +36068,19 @@ endpoint = s3.rbx.io.cloud.ovh.net acl = private ``` - ### Petabox Here is an example of making a [Petabox](https://petabox.io/) configuration. First run: -```bash +```console rclone config ``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -32379,7 +36218,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [My Petabox Storage] type = s3 provider = Petabox @@ -32391,13 +36230,15 @@ endpoint = s3.petabox.io ### Pure Storage FlashBlade -[Pure Storage FlashBlade](https://www.purestorage.com/products/unstructured-data-storage.html) is a high performance S3-compatible object store. +[Pure Storage FlashBlade](https://www.purestorage.com/products/unstructured-data-storage.html) +is a high performance S3-compatible object store. FlashBlade supports most modern S3 features including: - ListObjectsV2 - Multipart uploads with AWS-compatible ETags -- Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer support (Purity//FB 4.4.2+) +- Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer support + (Purity//FB 4.4.2+) - Object versioning and lifecycle management - Virtual hosted-style requests (requires DNS configuration) @@ -32405,11 +36246,13 @@ To configure rclone for Pure Storage FlashBlade: First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -32488,7 +36331,7 @@ y/e/d> y This results in the following configuration being stored in `~/.config/rclone/rclone.conf`: -``` +```ini [flashblade] type = s3 provider = FlashBlade @@ -32497,217 +36340,468 @@ secret_access_key = SECRET_ACCESS_KEY endpoint = https://s3.flashblade.example.com ``` -Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests, -ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a -FlashBlade data VIP. For example, if your endpoint is `https://s3.flashblade.example.com`, +Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style +requests, ensure proper DNS configuration: subdomains of the endpoint hostname should +resolve to a FlashBlade data VIP. For example, if your endpoint is `https://s3.flashblade.example.com`, then `bucket-name.s3.flashblade.example.com` should also resolve to the data VIP. ### Qiniu Cloud Object Storage (Kodo) {#qiniu} -[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo), a completely independent-researched core technology which is proven by repeated customer experience has occupied absolute leading market leader position. Kodo can be widely applied to mass data management. 
+[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo), a +completely independent-researched core technology which is proven by repeated +customer experience has occupied absolute leading market leader position. Kodo +can be widely applied to mass data management. To configure access to Qiniu Kodo, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. + ```text + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` + +2. Give the name of the configuration. For example, name it 'qiniu'. + + ```text + name> qiniu + ``` + +3. Select `s3` storage. + + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + [snip] + Storage> s3 + ``` + +4. Select `Qiniu` provider. + + ```text + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 22 / Qiniu Object Storage (Kodo) + \ (Qiniu) + [snip] + provider> Qiniu + ``` + +5. Enter your SecretId and SecretKey of Qiniu Kodo. + + ```text + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> AKIDxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx + ``` + +6. Select endpoint for Qiniu Kodo. This is the standard endpoint for different region. + + ```text + / The default endpoint - a good choice if you are unsure. + 1 | East China Region 1. + | Needs location constraint cn-east-1. + \ (cn-east-1) + / East China Region 2. + 2 | Needs location constraint cn-east-2. + \ (cn-east-2) + / North China Region 1. + 3 | Needs location constraint cn-north-1. + \ (cn-north-1) + / South China Region 1. + 4 | Needs location constraint cn-south-1. + \ (cn-south-1) + / North America Region. + 5 | Needs location constraint us-north-1. + \ (us-north-1) + / Southeast Asia Region 1. + 6 | Needs location constraint ap-southeast-1. + \ (ap-southeast-1) + / Northeast Asia Region 1. + 7 | Needs location constraint ap-northeast-1. + \ (ap-northeast-1) + [snip] + endpoint> 1 + + Option endpoint. + Endpoint for Qiniu Object Storage. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / East China Endpoint 1 + \ (s3-cn-east-1.qiniucs.com) + 2 / East China Endpoint 2 + \ (s3-cn-east-2.qiniucs.com) + 3 / North China Endpoint 1 + \ (s3-cn-north-1.qiniucs.com) + 4 / South China Endpoint 1 + \ (s3-cn-south-1.qiniucs.com) + 5 / North America Endpoint 1 + \ (s3-us-north-1.qiniucs.com) + 6 / Southeast Asia Endpoint 1 + \ (s3-ap-southeast-1.qiniucs.com) + 7 / Northeast Asia Endpoint 1 + \ (s3-ap-northeast-1.qiniucs.com) + endpoint> 1 + + Option location_constraint. + Location constraint - must be set to match the Region. + Used when creating buckets only. + Choose a number from below, or type in your own value. 
+ Press Enter to leave empty. + 1 / East China Region 1 + \ (cn-east-1) + 2 / East China Region 2 + \ (cn-east-2) + 3 / North China Region 1 + \ (cn-north-1) + 4 / South China Region 1 + \ (cn-south-1) + 5 / North America Region 1 + \ (us-north-1) + 6 / Southeast Asia Region 1 + \ (ap-southeast-1) + 7 / Northeast Asia Region 1 + \ (ap-northeast-1) + location_constraint> 1 + ``` + +7. Choose acl and storage class. + + ```text + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + [snip] + acl> 2 + The storage class to use when storing new objects in Tencent COS. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Standard storage class + \ (STANDARD) + 2 / Infrequent access storage mode + \ (LINE) + 3 / Archive storage mode + \ (GLACIER) + 4 / Deep archive storage mode + \ (DEEP_ARCHIVE) + [snip] + storage_class> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [qiniu] + - type: s3 + - provider: Qiniu + - access_key_id: xxx + - secret_access_key: xxx + - region: cn-east-1 + - endpoint: s3-cn-east-1.qiniucs.com + - location_constraint: cn-east-1 + - acl: public-read + - storage_class: STANDARD + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + qiniu s3 + ``` + +### FileLu S5 {#filelu-s5} + +[FileLu S5 Object Storage](https://s5lu.com) is an S3-compatible object storage +system. It provides multiple region options (Global, US-East, EU-Central, +AP-Southeast, and ME-Central) while using a single endpoint (`s5lu.com`). +FileLu S5 is designed for scalability, security, and simplicity, with predictable +pricing and no hidden charges for data transfers or API requests. + +Here is an example of making a configuration. First run: + +```console +rclone config ``` + +This will guide you through an interactive setup process. + +```text +No remotes found, make a new one\? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> s5lu + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS,... FileLu, ... + \ (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / FileLu S5 Object Storage + \ (FileLu) +[snip] +provider> FileLu + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. 
+access_key_id> XXX + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> XXX + +Option endpoint. +Endpoint for S3 API. +Required when using an S3 clone. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Global + \ (global) + 2 / North America (US-East) + \ (us-east) + 3 / Europe (EU-Central) + \ (eu-central) + 4 / Asia Pacific (AP-Southeast) + \ (ap-southeast) + 5 / Middle East (ME-Central) + \ (me-central) +region> 1 + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: FileLu +- access_key_id: XXX +- secret_access_key: XXX +- endpoint: s5lu.com +Keep this "s5lu" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +This will leave the config file looking like this. + +```ini +[s5lu] +type = s3 +provider = FileLu +access_key_id = XXX +secret_access_key = XXX +endpoint = s5lu.com +``` + +### Rabata {#Rabata} + +[Rabata](https://rabata.io) is an S3-compatible secure cloud storage service that +offers flat, transparent pricing (no API request fees) while supporting standard +S3 APIs. It is suitable for backup, application storage,media workflows, and +archive use cases. + +Server side copy is not implemented with Rabata, also meaning modification time +of objects cannot be updated. + +Rclone config: + +```text rclone config No remotes found, make a new one? n) New remote s) Set configuration password q) Quit config n/s/q> n -``` -2. Give the name of the configuration. For example, name it 'qiniu'. +Enter name for new remote. +name> Rabata -``` -name> qiniu -``` - -3. Select `s3` storage. - -``` -Choose a number from below, or type in your own value +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. [snip] XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 -``` -4. Select `Qiniu` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. [snip] -22 / Qiniu Object Storage (Kodo) - \ (Qiniu) +XX / Rabata Cloud Storage + \ (Rabata) [snip] -provider> Qiniu -``` +provider> Rabata -5. Enter your SecretId and SecretKey of Qiniu Kodo. - -``` +Option env_auth. Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> + +Option access_key_id. AWS Access Key ID. Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> AKIDxxxxxxxxxx -AWS Secret Access Key (password) +Enter a value. Press Enter to leave empty. 
+access_key_id> ACCESS_KEY_ID + +Option secret_access_key. +AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_ACCESS_KEY -6. Select endpoint for Qiniu Kodo. This is the standard endpoint for different region. - -``` - / The default endpoint - a good choice if you are unsure. - 1 | East China Region 1. - | Needs location constraint cn-east-1. - \ (cn-east-1) - / East China Region 2. - 2 | Needs location constraint cn-east-2. - \ (cn-east-2) - / North China Region 1. - 3 | Needs location constraint cn-north-1. - \ (cn-north-1) - / South China Region 1. - 4 | Needs location constraint cn-south-1. - \ (cn-south-1) - / North America Region. - 5 | Needs location constraint us-north-1. - \ (us-north-1) - / Southeast Asia Region 1. - 6 | Needs location constraint ap-southeast-1. - \ (ap-southeast-1) - / Northeast Asia Region 1. - 7 | Needs location constraint ap-northeast-1. - \ (ap-northeast-1) -[snip] -endpoint> 1 +Option region. +Region where your bucket will be created and your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / US East (N. Virginia) + \ (us-east-1) + 2 / EU (Ireland) + \ (eu-west-1) + 3 / EU (London) + \ (eu-west-2) +region> 3 Option endpoint. -Endpoint for Qiniu Object Storage. +Endpoint for Rabata Object Storage. Choose a number from below, or type in your own value. Press Enter to leave empty. - 1 / East China Endpoint 1 - \ (s3-cn-east-1.qiniucs.com) - 2 / East China Endpoint 2 - \ (s3-cn-east-2.qiniucs.com) - 3 / North China Endpoint 1 - \ (s3-cn-north-1.qiniucs.com) - 4 / South China Endpoint 1 - \ (s3-cn-south-1.qiniucs.com) - 5 / North America Endpoint 1 - \ (s3-us-north-1.qiniucs.com) - 6 / Southeast Asia Endpoint 1 - \ (s3-ap-southeast-1.qiniucs.com) - 7 / Northeast Asia Endpoint 1 - \ (s3-ap-northeast-1.qiniucs.com) -endpoint> 1 + 1 / US East (N. Virginia) + \ (s3.us-east-1.rabata.io) + 2 / EU West (Ireland) + \ (s3.eu-west-1.rabata.io) + 3 / EU West (London) + \ (s3.eu-west-2.rabata.io) +endpoint> 3 Option location_constraint. -Location constraint - must be set to match the Region. -Used when creating buckets only. +location where your bucket will be created and your data stored. Choose a number from below, or type in your own value. Press Enter to leave empty. - 1 / East China Region 1 - \ (cn-east-1) - 2 / East China Region 2 - \ (cn-east-2) - 3 / North China Region 1 - \ (cn-north-1) - 4 / South China Region 1 - \ (cn-south-1) - 5 / North America Region 1 - \ (us-north-1) - 6 / Southeast Asia Region 1 - \ (ap-southeast-1) - 7 / Northeast Asia Region 1 - \ (ap-northeast-1) -location_constraint> 1 -``` + 1 / US East (N. Virginia) + \ (us-east-1) + 2 / EU (Ireland) + \ (eu-west-1) + 3 / EU (London) + \ (eu-west-2) +location_constraint> 3 -7. Choose acl and storage class. - -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) -[snip] -acl> 2 -The storage class to use when storing new objects in Tencent COS. -Enter a string value. 
Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Standard storage class - \ (STANDARD) - 2 / Infrequent access storage mode - \ (LINE) - 3 / Archive storage mode - \ (GLACIER) - 4 / Deep archive storage mode - \ (DEEP_ARCHIVE) -[snip] -storage_class> 1 -Edit advanced config? (y/n) +Edit advanced config? y) Yes n) No (default) y/n> n -Remote config --------------------- -[qiniu] + +Configuration complete. +Options: - type: s3 -- provider: Qiniu -- access_key_id: xxx -- secret_access_key: xxx -- region: cn-east-1 -- endpoint: s3-cn-east-1.qiniucs.com -- location_constraint: cn-east-1 -- acl: public-read -- storage_class: STANDARD --------------------- +- provider: Rabata +- access_key_id: ACCESS_KEY_ID +- secret_access_key: SECRET_ACCESS_KEY +- region: eu-west-2 +- endpoint: s3.eu-west-2.rabata.io +- location_constraint: eu-west-2 +Keep this "rabata" remote? y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y + Current remotes: Name Type ==== ==== -qiniu s3 +rabata s3 ``` ### RackCorp {#RackCorp} -[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 compatible object storage platform from your friendly cloud provider RackCorp. -The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty. +[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 +compatible object storage platform from your friendly cloud provider RackCorp. +The service is fast, reliable, well priced and located in many strategic +locations unserviced by others, to ensure you can maintain data sovereignty. -Before you can use RackCorp Object Storage, you'll need to "[sign up](https://www.rackcorp.com/signup)" for an account on our "[portal](https://portal.rackcorp.com)". -Next you can create an `access key`, a `secret key` and `buckets`, in your location of choice with ease. -These details are required for the next steps of configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`. +Before you can use RackCorp Object Storage, you'll need to +[sign up](https://www.rackcorp.com/signup) for an account on our [portal](https://portal.rackcorp.com). +Next you can create an `access key`, a `secret key` and `buckets`, in your +location of choice with ease. These details are required for the next steps of +configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`. Your config should end up looking a bit like this: -``` +```ini [RCS3-demo-config] type = s3 provider = RackCorp @@ -32726,13 +36820,13 @@ Rclone can serve any remote over the S3 protocol. For details see the For example, to serve `remote:path` over s3, run the server like this: -``` +```console rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path ``` This will be compatible with an rclone remote which is defined like this: -``` +```ini [serves3] type = s3 provider = Rclone @@ -32747,12 +36841,15 @@ Note that setting `use_multipart_uploads = false` is to work around ### Scaleway -[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. -Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool. 
+[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform +allows you to store anything from backups, logs and web assets to documents and photos. +Files can be dropped from the Scaleway console or transferred through our API and +CLI or using any S3-compatible tool. -Scaleway provides an S3 interface which can be configured for use with rclone like this: +Scaleway provides an S3 interface which can be configured for use with rclone +like this: -``` +```ini [scaleway] type = s3 provider = Scaleway @@ -32768,19 +36865,25 @@ chunk_size = 5M copy_cutoff = 5M ``` -[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`. -So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above) +[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the +low-cost S3 Glacier alternative from Scaleway and it works the same way as on +S3 by accepting the "GLACIER" `storage_class`. So you can configure your remote +with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. +Don't forget that in this state you can't read files back after, you will need +to restore them to "STANDARD" storage_class first before being able to read +them (see "restore" section above) ### Seagate Lyve Cloud {#lyve} -[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3 -compatible object storage platform from [Seagate](https://seagate.com/) intended for enterprise use. +[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is +an S3 compatible object storage platform from [Seagate](https://seagate.com/) +intended for enterprise use. Here is a config run through for a remote called `remote` - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first. -``` +```console $ rclone config No remotes found, make a new one? n) New remote @@ -32792,7 +36895,7 @@ name> remote Choose `s3` backend -``` +```text Type of storage to configure. Choose a number from below, or type in your own value. [snip] @@ -32804,7 +36907,7 @@ Storage> s3 Choose `LyveCloud` as S3 provider -``` +```text Choose your S3 provider. Choose a number from below, or type in your own value. Press Enter to leave empty. @@ -32815,9 +36918,10 @@ XX / Seagate Lyve Cloud provider> LyveCloud ``` -Take the default (just press enter) to enter access key and secret in the config file. +Take the default (just press enter) to enter access key and secret in the +config file. -``` +```text Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own boolean value (true or false). @@ -32829,14 +36933,14 @@ Press Enter for the default (false). env_auth> ``` -``` +```text AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a value. Press Enter to leave empty. access_key_id> XXX ``` -``` +```text AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials. Enter a value. Press Enter to leave empty. 
@@ -32845,7 +36949,7 @@ secret_access_key> YYY Leave region blank -``` +```text Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. Choose a number from below, or type in your own value. @@ -32861,7 +36965,7 @@ region> Enter your Lyve Cloud endpoint. This field cannot be kept empty. -``` +```text Endpoint for Lyve Cloud S3 API. Required when using an S3 clone. Please type in your LyveCloud endpoint. @@ -32874,7 +36978,7 @@ endpoint> s3.us-west-1.global.lyve.seagate.com Leave location constraint blank -``` +```text Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only. Enter a value. Press Enter to leave empty. @@ -32883,7 +36987,7 @@ location_constraint> Choose default ACL (`private`). -``` +```text Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl @@ -32900,7 +37004,7 @@ acl> And the config file should end up looking like this: -``` +```ini [remote] type = s3 provider = LyveCloud @@ -32911,14 +37015,16 @@ endpoint = s3.us-east-1.lyvecloud.seagate.com ### SeaweedFS -[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for -blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store. -It has an S3 compatible object storage interface. SeaweedFS can also act as a -[gateway to remote S3 compatible object store](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage) -to cache data and metadata with asynchronous write back, for fast local speed and minimize access cost. +[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage +system for blobs, objects, files, and data lake, with O(1) disk seek and a +scalable file metadata store. It has an S3 compatible object storage interface. +SeaweedFS can also act as a [gateway to remote S3 compatible object store](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage) +to cache data and metadata with asynchronous write back, for fast local speed +and minimize access cost. Assuming the SeaweedFS are configured with `weed shell` as such: -``` + +```text > s3.bucket.create -name foo > s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply { @@ -32943,10 +37049,10 @@ Assuming the SeaweedFS are configured with `weed shell` as such: } ``` -To use rclone with SeaweedFS, above configuration should end up with something like this in -your config: +To use rclone with SeaweedFS, above configuration should end up with something +like this in your config: -``` +```ini [seaweedfs_s3] type = s3 provider = SeaweedFS @@ -32957,7 +37063,7 @@ endpoint = localhost:8333 So once set up, for example to copy files into a bucket -``` +```console rclone copy /path/to/files seaweedfs_s3:foo ``` @@ -32980,8 +37086,8 @@ the recommended default), not "path style". You can use `rclone config` to make a new provider like this -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? 
n) New remote s) Set configuration password q) Quit config @@ -33070,7 +37176,7 @@ y/e/d> y And your config should end up looking like this: -``` +```ini [selectel] type = s3 provider = Selectel @@ -33080,6 +37186,217 @@ region = ru-1 endpoint = s3.ru-1.storage.selcloud.ru ``` +### Servercore {#servercore} + +[Servercore Object Storage](https://servercore.com/services/object-storage/) is an S3 +compatible object storage system that provides scalable and secure storage +solutions for businesses of all sizes. + +rclone config example: + +```text +No remotes found, make a new one\? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> servercore + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including ..., Servercore, ... + \ (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / Servercore Object Storage + \ (Servercore) +[snip] +provider> Servercore + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) +env_auth> 1 + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_ACCESS_KEY + +Option region. +Region where your is data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / St. Petersburg + \ (ru-1) + 2 / Moscow + \ (gis-1) + 3 / Moscow + \ (ru-7) + 4 / Tashkent, Uzbekistan + \ (uz-2) + 5 / Almaty, Kazakhstan + \ (kz-1) +region> 1 + +Option endpoint. +Endpoint for Servercore Object Storage. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Saint Petersburg + \ (s3.ru-1.storage.selcloud.ru) + 2 / Moscow + \ (s3.gis-1.storage.selcloud.ru) + 3 / Moscow + \ (s3.ru-7.storage.selcloud.ru) + 4 / Tashkent, Uzbekistan + \ (s3.uz-2.srvstorage.uz) + 5 / Almaty, Kazakhstan + \ (s3.kz-1.srvstorage.kz) +endpoint> 1 + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: Servercore +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_ACCESS_KEY +- region: ru-1 +- endpoint: s3.ru-1.storage.selcloud.ru +Keep this "servercore" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` + +### Spectra Logic {#spectralogic} + +[Spectra Logic](https://www.spectralogic.com/blackpearl-nearline-object-gateway) +is an on-prem S3-compatible object storage gateway that exposes local object +storage and policy-tiers data to Spectra tape and public clouds under a single +namespace for backup and archiving. + +The S3 compatible gateway is configured using `rclone config` with a +type of `s3` and with a provider name of `SpectraLogic`. 
Here is an example
+run of the configurator.
+
+```text
+No remotes found, make a new one?
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+
+Enter name for new remote.
+name> spectralogic
+
+Option Storage.
+Type of storage to configure.
+Choose a number from below, or type in your own value.
+[snip]
+XX / Amazon S3 Compliant Storage Providers including ..., SpectraLogic, ...
+   \ (s3)
+[snip]
+Storage> s3
+
+Option provider.
+Choose your S3 provider.
+Choose a number from below, or type in your own value.
+Press Enter to leave empty.
+[snip]
+XX / SpectraLogic BlackPearl
+   \ (SpectraLogic)
+[snip]
+provider> SpectraLogic
+
+Option env_auth.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+Only applies if access_key_id and secret_access_key is blank.
+Choose a number from below, or type in your own boolean value (true or false).
+Press Enter for the default (false).
+ 1 / Enter AWS credentials in the next step.
+   \ (false)
+ 2 / Get AWS credentials from the environment (env vars or IAM).
+   \ (true)
+env_auth> 1
+
+Option access_key_id.
+AWS Access Key ID.
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+access_key_id> ACCESS_KEY
+
+Option secret_access_key.
+AWS Secret Access Key (password).
+Leave blank for anonymous access or runtime credentials.
+Enter a value. Press Enter to leave empty.
+secret_access_key> SECRET_ACCESS_KEY
+
+Option endpoint.
+Endpoint for S3 API.
+Required when using an S3 clone.
+Enter a value. Press Enter to leave empty.
+endpoint> https://bp.example.com
+
+Edit advanced config?
+y) Yes
+n) No (default)
+y/n> n
+
+Configuration complete.
+Options:
+- type: s3
+- provider: SpectraLogic
+- access_key_id: ACCESS_KEY
+- secret_access_key: SECRET_ACCESS_KEY
+- endpoint: https://bp.example.com
+Keep this "spectralogic" remote?
+y) Yes this is OK (default)
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+And your config should end up looking like this:
+
+```ini
+[spectralogic]
+type = s3
+provider = SpectraLogic
+access_key_id = ACCESS_KEY
+secret_access_key = SECRET_ACCESS_KEY
+endpoint = https://bp.example.com
+```
+
### Storj
 
Storj is a decentralized cloud storage which can be used through its
@@ -33089,7 +37406,7 @@ The S3 compatible gateway is configured using `rclone config` with a
type of `s3` and with a provider name of `Storj`. Here is an example
run of the configurator.
 
-```
+```text
Type of storage to configure.
Storage> s3
Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). 
@@ -33151,11 +37468,14 @@ This has the following consequences: - Using `rclone rcat` will fail as the metadata doesn't match after upload - Uploading files with `rclone mount` will fail for the same reason - - This can worked around by using `--vfs-cache-mode writes` or `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large + - This can worked around by using `--vfs-cache-mode writes` or + `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large - Files uploaded via a multipart upload won't have their modtimes - - This will mean that `rclone sync` will likely keep trying to upload files bigger than `--s3-upload-cutoff` - - This can be worked around with `--checksum` or `--size-only` or setting `--s3-upload-cutoff` large - - The maximum value for `--s3-upload-cutoff` is 5GiB though + - This will mean that `rclone sync` will likely keep trying to upload + files bigger than `--s3-upload-cutoff` + - This can be worked around with `--checksum` or `--size-only` or + setting `--s3-upload-cutoff` large + - The maximum value for `--s3-upload-cutoff` is 5GiB though One general purpose workaround is to set `--s3-upload-cutoff 5G`. This means that rclone will upload files smaller than 5GiB as single parts. @@ -33183,7 +37503,9 @@ For more detailed comparison please check the documentation of the ### Synology C2 Object Storage {#synology-c2} -[Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) provides a secure, S3-compatible, and cost-effective cloud storage solution without API request, download fees, and deletion penalty. +[Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) +provides a secure, S3-compatible, and cost-effective cloud storage solution +without API request, download fees, and deletion penalty. The S3 compatible gateway is configured using `rclone config` with a type of `s3` and with a provider name of `Synology`. Here is an example @@ -33191,14 +37513,14 @@ run of the configurator. First run: -``` +```console rclone config ``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -33316,130 +37638,133 @@ y/e/d> y ### Tencent COS {#tencent-cos} -[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost. +[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) +is a distributed storage service offered by Tencent Cloud for unstructured data. +It is secure, stable, massive, convenient, low-delay and low-cost. To configure access to Tencent COS, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. -``` -rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` + ```text + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` 2. Give the name of the configuration. For example, name it 'cos'. -``` -name> cos -``` + ```text + name> cos + ``` 3. Select `s3` storage. -``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... 
- \ "s3" -[snip] -Storage> s3 -``` + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" + [snip] + Storage> s3 + ``` 4. Select `TencentCOS` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" -[snip] -11 / Tencent Cloud Object Storage (COS) - \ "TencentCOS" -[snip] -provider> TencentCOS -``` + + ```text + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 11 / Tencent Cloud Object Storage (COS) + \ "TencentCOS" + [snip] + provider> TencentCOS + ``` 5. Enter your SecretId and SecretKey of Tencent Cloud. -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> AKIDxxxxxxxxxx -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` + ```text + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> AKIDxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx + ``` 6. Select endpoint for Tencent COS. This is the standard endpoint for different region. -``` - 1 / Beijing Region. - \ "cos.ap-beijing.myqcloud.com" - 2 / Nanjing Region. - \ "cos.ap-nanjing.myqcloud.com" - 3 / Shanghai Region. - \ "cos.ap-shanghai.myqcloud.com" - 4 / Guangzhou Region. - \ "cos.ap-guangzhou.myqcloud.com" -[snip] -endpoint> 4 -``` + ```text + 1 / Beijing Region. + \ "cos.ap-beijing.myqcloud.com" + 2 / Nanjing Region. + \ "cos.ap-nanjing.myqcloud.com" + 3 / Shanghai Region. + \ "cos.ap-shanghai.myqcloud.com" + 4 / Guangzhou Region. + \ "cos.ap-guangzhou.myqcloud.com" + [snip] + endpoint> 4 + ``` 7. Choose acl and storage class. -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Owner gets Full_CONTROL. No one else has access rights (default). - \ "default" -[snip] -acl> 1 -The storage class to use when storing new objects in Tencent COS. -Enter a string value. Press Enter for the default (""). 
-Choose a number from below, or type in your own value - 1 / Default - \ "" -[snip] -storage_class> 1 -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[cos] -type = s3 -provider = TencentCOS -env_auth = false -access_key_id = xxx -secret_access_key = xxx -endpoint = cos.ap-guangzhou.myqcloud.com -acl = default --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -Current remotes: - -Name Type -==== ==== -cos s3 -``` + ```text + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Owner gets Full_CONTROL. No one else has access rights (default). + \ "default" + [snip] + acl> 1 + The storage class to use when storing new objects in Tencent COS. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Default + \ "" + [snip] + storage_class> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [cos] + type = s3 + provider = TencentCOS + env_auth = false + access_key_id = xxx + secret_access_key = xxx + endpoint = cos.ap-guangzhou.myqcloud.com + acl = default + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + cos s3 + ``` ### Wasabi @@ -33451,8 +37776,8 @@ reliable, and secure data storage infrastructure at minimal cost. Wasabi provides an S3 interface which can be configured for use with rclone like this. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -33539,7 +37864,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [wasabi] type = s3 provider = Wasabi @@ -33556,15 +37881,17 @@ storage_class = ### Zata Object Storage {#Zata} -[Zata Object Storage](https://zata.ai/) provides a secure, S3-compatible cloud storage solution designed for scalability and performance, ideal for a variety of data storage needs. +[Zata Object Storage](https://zata.ai/) provides a secure, S3-compatible cloud +storage solution designed for scalability and performance, ideal for a variety +of data storage needs. First run: -``` +```console rclone config ``` -``` +```text This will guide you through an interactive setup process: e) Edit existing remote @@ -33693,10 +38020,11 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> +``` -``` This will leave the config file looking like this. -``` + +```ini [my zata storage] type = s3 provider = Zata @@ -33704,7 +38032,6 @@ access_key_id = xxx secret_access_key = xxx region = us-east-1 endpoint = idr01.zata.ai - ``` ## Memory usage {#memory} @@ -33736,7 +38063,289 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). 
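+
+If you are unsure whether a particular remote supports `about`, you
+can ask it to dump its optional features. As a minimal sketch
+(assuming a configured remote named `s3remote`), this prints the
+remote's feature flags as JSON, including an `About` entry:
+
+```console
+rclone backend features s3remote:
+```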
+
+# Archive
+
+The Archive backend allows read-only access to the content of archive
+files on cloud storage without downloading the complete archive. This
+means you could mount a large archive file and use only the parts of
+it your application requires, rather than having to extract it.
+
+The archive files are recognised by their extension.
+
+| Archive  | Extension |
+| -------- | --------- |
+| Zip      | `.zip`    |
+| Squashfs | `.sqfs`   |
+
+The supported archive file types are cloud friendly - a single file
+can be found and downloaded without downloading the whole archive.
+
+If you just want to create, list or extract archives and don't want to
+mount them, then you may find the `rclone archive` commands more
+convenient.
+
+- [rclone archive create](https://rclone.org/commands/rclone_archive_create/)
+- [rclone archive list](https://rclone.org/commands/rclone_archive_list/)
+- [rclone archive extract](https://rclone.org/commands/rclone_archive_extract/)
+
+These commands support a wider range of non cloud friendly archives
+(but not squashfs), but they can't be used for `rclone mount` or any
+other rclone commands (e.g. `rclone check`).
+
+## Configuration
+
+This backend is best used without configuration.
+
+Use it by putting the string `:archive:` in front of another remote,
+say `remote:dir`, to make `:archive:remote:dir`.
+
+Any archives in `remote:dir` will become directories and any files may
+be read out of them individually.
+
+For example:
+
+```console
+$ rclone lsf s3:rclone/dir
+100files.sqfs
+100files.zip
+```
+
+Note that `100files.zip` and `100files.sqfs` are now directories:
+
+```console
+$ rclone lsf :archive:s3:rclone/dir
+100files.sqfs/
+100files.zip/
+```
+
+Which we can look inside:
+
+```console
+$ rclone lsf :archive:s3:rclone/dir/100files.zip/
+cofofiy5jun
+gigi
+hevupaz5z
+kacak/
+kozemof/
+lamapaq4
+qejahen
+quhenen2rey
+soboves8
+vibat/
+wose
+xade
+zilupot
+```
+
+Files not in an archive can be read and written as normal. Files in an
+archive can only be read.
+
+The archive backend can also be used in a configuration file. Use the
+`remote` option to point at the archive:
+
+```ini
+[remote]
+type = archive
+remote = s3:rclone/dir/100files.zip
+```
+
+Gives:
+
+```console
+$ rclone lsf remote:
+cofofiy5jun
+gigi
+hevupaz5z
+kacak/
+...
+```
+
+## Modification times
+
+Modification times are preserved with an accuracy depending on the
+archive type.
+
+```console
+$ rclone lsl --max-depth 1 :archive:s3:rclone/dir/100files.zip
+       12 2025-10-27 14:39:20.000000000 cofofiy5jun
+       81 2025-10-27 14:39:20.000000000 gigi
+       58 2025-10-27 14:39:20.000000000 hevupaz5z
+        6 2025-10-27 14:39:20.000000000 lamapaq4
+       43 2025-10-27 14:39:20.000000000 qejahen
+       66 2025-10-27 14:39:20.000000000 quhenen2rey
+       95 2025-10-27 14:39:20.000000000 soboves8
+       71 2025-10-27 14:39:20.000000000 wose
+       76 2025-10-27 14:39:20.000000000 xade
+       15 2025-10-27 14:39:20.000000000 zilupot
+```
+
+For `zip` and `squashfs` files the accuracy is 1s.
+
+## Hashes
+
+Which hash is supported depends on the archive type. Zip files use
+CRC32; Squashfs doesn't support any hashes. For example:
+
+```console
+$ rclone hashsum crc32 :archive:s3:rclone/dir/100files.zip/
+b2288554 cofofiy5jun
+a87e62b6 wose
+f90f630b xade
+c7d0ef29 gigi
+f1c64740 soboves8
+cb7b4a5d quhenen2rey
+5115242b kozemof/fonaxo
+afeabd9a qejahen
+71202402 kozemof/fijubey5di
+bd99e512 kozemof/napux
+...
+```
+
+Hashes will be checked when the file is read from the archive and used
+as part of syncing if possible. 
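+
+For example, you can verify files extracted to local disk against the
+archive using these hashes. This is a minimal sketch reusing the
+example paths above:
+
+```console
+rclone check :archive:s3:rclone/dir/100files.zip/ /tmp/100files
+```
+
+The same check happens automatically when files are copied out of the
+archive as part of a sync: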
+
+```console
+$ rclone copy -vv :archive:s3:rclone/dir/100files.zip /tmp/100files
+...
+2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk: crc32 = abd05cc8 OK
+2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk.aeb661dc.partial: renamed to: kacak/turovat5c/yuyuquk
+2025/10/27 14:56:44 INFO : kacak/turovat5c/yuyuquk: Copied (new)
+...
+```
+
+## Zip
+
+The [Zip file format](https://en.wikipedia.org/wiki/ZIP_(file_format))
+is a widely used archive format that bundles one or more files and
+folders into a single file, primarily for easier storage or
+transmission. It typically uses compression (most commonly the DEFLATE
+algorithm) to reduce the overall size of the archived content. Zip
+files are supported natively by most modern operating systems.
+
+Rclone does not support the following advanced features of Zip files:
+
+- Splitting large archives into smaller parts
+- Password protection
+- Zstd compression
+
+## Squashfs
+
+Squashfs is a compressed, read-only file system format primarily used
+in Linux-based systems. It's designed to compress entire file systems
+(including files, directories, and metadata) into a single archive
+file, which can then be mounted and read directly, appearing as a
+normal directory structure. Because it's read-only and highly
+compressed, Squashfs is ideal for live CDs/USBs, embedded devices with
+limited storage, and software package distribution, as it saves space
+and ensures the integrity of the original files.
+
+Rclone supports the following squashfs compression formats:
+
+- `Gzip`
+- `Lzma`
+- `Xz`
+- `Zstd`
+
+These are not yet working:
+
+- `Lzo` - Not yet supported
+- `Lz4` - Broken with "error decompressing: lz4: bad magic number"
+
+Rclone works fastest with large squashfs block sizes. For example:
+
+```console
+mksquashfs 100files 100files.sqfs -comp zstd -b 1M
+```
+
+## Limitations
+
+Files in the archive backend are read only. It isn't possible to
+create archives with the archive backend yet. However, you **can** create
+archives with [rclone archive create](https://rclone.org/commands/rclone_archive_create/).
+
+Only `.zip` and `.sqfs` archives are supported as these are the only
+common archiving formats which make it easy to read directory listings
+from the archive without downloading the whole archive.
+
+Internally the archive backend uses the VFS to access files. It isn't
+possible to configure the internal VFS yet, which might be useful.
+
+## Archive Formats
+
+Here's a table rating common archive formats on their cloud
+optimization, which is based on their ability to access a single file
+without reading the entire archive.
+
+This capability depends on whether the format has a central **index**
+(or "table of contents") that a program can read first to find the
+exact location of a specific file.
+
+| Format | Extensions | Cloud Optimized | Explanation |
+| :--- | :--- | :--- | :--- |
+| **ZIP** | `.zip` | **Excellent** | **Zip files have an index** (the "central directory") stored at the *end* of the file. A program can seek to the end, read the index to find a file's location and size, and then seek directly to that file's data to extract it. |
+| **SquashFS** | `.squashfs`, `.sqfs`, `.sfs` | **Excellent** | This is a compressed read-only *filesystem image*, not just an archive. It is **specifically designed for random access**. It uses metadata and index tables to allow the system to find and decompress individual files or data blocks on demand. 
| +| **ISO Image** | `.iso` | **Excellent** | Like SquashFS, this is a *filesystem image* (for optical media). It contains a filesystem (like ISO 9660 or UDF) with a **table of contents at a known location**, allowing for direct access to any file without reading the whole disk. | +| **RAR** | `.rar` | **Good** | RAR supports "non-solid" and "solid" modes. In the common **non-solid** mode, files are compressed separately, and an index allows for easy single-file extraction (like ZIP). In "solid" mode, this rating would be "Very Poor." | +| **7z** | `.7z` | **Poor** | By default, 7z uses "solid" archives to maximize compression. This compresses files as one continuous stream. To extract a file from the middle, all preceding files must be decompressed first. (If explicitly created as "non-solid," its rating would be "Excellent"). | +| **tar** | `.tar` | **Poor** | "Tape Archive" is a *streaming* format with **no central index**. To find a file, you must read the archive from the beginning, checking each file header one by one until you find the one you want. This is slow but doesn't require decompressing data. | +| **Gzipped Tar** | `.tar.gz`, `.tgz` | **Very Poor** | This is a `tar` file (already "Poor") compressed with `gzip` as a **single, non-seekable stream**. You cannot seek. To get *any* file, you must decompress the *entire* archive from the beginning up to that file. | +| **Bzipped/XZ Tar** | `.tar.bz2`, `.tar.xz` | **Very Poor** | This is the same principle as `tar.gz`. The entire archive is one large compressed block, making random access impossible. | + +## Ideas for improvements + +It would be possible to add ISO support fairly easily as the library we use ([go-diskfs](https://github.com/diskfs/go-diskfs/)) supports it. We could also add `ext4` and `fat32` the same way, however in my experience these are not very common as files so probably not worth it. Go-diskfs can also read partitions which we could potentially take advantage of. + +It would be possible to add write support, but this would only be for creating new archives, not for updating existing archives. + + +### Standard options + +Here are the Standard options specific to archive (Read archives). + +#### --archive-remote + +Remote to wrap to read archives from. + +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", +"myremote:bucket" or "myremote:". + +If this is left empty, then the archive backend will use the root as +the remote. + +This means that you can use :archive:remote:path and it will be +equivalent to setting remote="remote:path". + + +Properties: + +- Config: remote +- Env Var: RCLONE_ARCHIVE_REMOTE +- Type: string +- Required: false + +### Advanced options + +Here are the Advanced options specific to archive (Read archives). + +#### --archive-description + +Description of the remote. + +Properties: + +- Config: description +- Env Var: RCLONE_ARCHIVE_DESCRIPTION +- Type: string +- Required: false + +### Metadata + +Any metadata supported by the underlying remote is read and written. + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + + # Backblaze B2 @@ -33749,7 +38358,9 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Here is an example of making a b2 configuration. First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. 
To authenticate you will either need your Account ID (a short hex number) and Master @@ -33757,8 +38368,8 @@ Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote q) Quit config n/q> n @@ -33794,20 +38405,28 @@ This remote is called `remote` and can now be used like this See all buckets - rclone lsd remote: +```console +rclone lsd remote: +``` Create a new bucket - rclone mkdir remote:bucket +```console +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket +```console +rclone ls remote:bucket +``` Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. - rclone sync --interactive /home/local/directory remote:bucket +```console +rclone sync --interactive /home/local/directory remote:bucket +``` ### Application Keys @@ -33821,7 +38440,7 @@ Follow Backblaze's docs to create an Application Key with the required permission and add the `applicationKeyId` as the `account` and the `Application Key` itself as the `key`. -Note that you must put the _applicationKeyId_ as the `account` – you +Note that you must put the *applicationKeyId* as the `account` – you can't use the master Account ID. If you try then B2 will return 401 errors. @@ -33915,8 +38534,8 @@ You may opt in to a "hard delete" of files with the `--b2-hard-delete` flag which permanently removes files on deletion instead of hiding them. -Old versions of files, where available, are visible using the -`--b2-versions` flag. +Old versions of files, where available, are visible using the +`--b2-versions` flag. These can be deleted as required with `delete`. It is also possible to view a bucket as it was at a certain point in time, using the `--b2-version-at` flag. This will show the file versions as they @@ -33953,7 +38572,7 @@ version followed by a `cleanup` of the old versions. Show current version and all the versions with `--b2-versions` flag. -``` +```console $ rclone -q ls b2:cleanup-test 9 one.txt @@ -33966,7 +38585,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test Retrieve an old version -``` +```console $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp $ ls -l /tmp/one-v2016-07-04-141003-000.txt @@ -33975,7 +38594,7 @@ $ ls -l /tmp/one-v2016-07-04-141003-000.txt Clean up all the old versions and show that they've gone. -``` +```console $ rclone -q cleanup b2:cleanup-test $ rclone -q ls b2:cleanup-test @@ -33990,11 +38609,13 @@ $ rclone -q --b2-versions ls b2:cleanup-test When using `--b2-versions` flag rclone is relying on the file name to work out whether the objects are versions or not. Versions' names are created by inserting timestamp between file name and its extension. -``` + +```console 9 file.txt 8 file-v2023-07-17-161032-000.txt 16 file-v2023-06-15-141003-000.txt ``` + If there are real files present with the same names as versions, then behaviour of `--b2-versions` can be unpredictable. @@ -34004,7 +38625,7 @@ It is useful to know how many requests are sent to the server in different scena All copy commands send the following 4 requests: -``` +```text /b2api/v1/b2_authorize_account /b2api/v1/b2_create_bucket /b2api/v1/b2_list_buckets @@ -34021,7 +38642,7 @@ require any files to be uploaded, no more requests will be sent. 
Uploading files that do not require chunking, will send 2 requests per file upload: -``` +```text /b2api/v1/b2_get_upload_url /b2api/v1/b2_upload_file/ ``` @@ -34029,7 +38650,7 @@ file upload: Uploading files requiring chunking, will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk: -``` +```text /b2api/v1/b2_start_large_file /b2api/v1/b2_get_upload_part_url /b2api/v1/b2_upload_part/ @@ -34043,14 +38664,14 @@ rclone will show and act on older versions of files. For example Listing without `--b2-versions` -``` +```console $ rclone -q ls b2:cleanup-test 9 one.txt ``` And with -``` +```console $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 one-v2016-07-04-141032-000.txt @@ -34070,7 +38691,7 @@ permitted, so you can't upload files or delete them. Rclone supports generating file share links for private B2 buckets. They can either be for a file for example: -``` +```console ./rclone link B2:bucket/path/to/file.txt https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx @@ -34078,7 +38699,7 @@ https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx or if run on a directory you will get: -``` +```console ./rclone link B2:bucket/path https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx ``` @@ -34086,14 +38707,14 @@ https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx you can then use the authorization token (the part of the url from the `?Authorization=` on) on any file path under that directory. For example: -``` +```text https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx ``` - + ### Standard options Here are the Standard options specific to b2 (Backblaze B2). @@ -34389,6 +39010,71 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --b2-sse-customer-algorithm + +If using SSE-C, the server-side encryption algorithm used when storing this object in B2. + +Properties: + +- Config: sse_customer_algorithm +- Env Var: RCLONE_B2_SSE_CUSTOMER_ALGORITHM +- Type: string +- Required: false +- Examples: + - "" + - None + - "AES256" + - Advanced Encryption Standard (256 bits key length) + +#### --b2-sse-customer-key + +To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + +Alternatively you can provide --sse-customer-key-base64. + +Properties: + +- Config: sse_customer_key +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY +- Type: string +- Required: false +- Examples: + - "" + - None + +#### --b2-sse-customer-key-base64 + +To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + +Alternatively you can provide --sse-customer-key. + +Properties: + +- Config: sse_customer_key_base64 +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_BASE64 +- Type: string +- Required: false +- Examples: + - "" + - None + +#### --b2-sse-customer-key-md5 + +If using SSE-C you may provide the secret encryption key MD5 checksum (optional). + +If you leave it blank, this is calculated automatically from the sse_customer_key provided. + + +Properties: + +- Config: sse_customer_key_md5 +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_MD5 +- Type: string +- Required: false +- Examples: + - "" + - None + #### --b2-description Description of the remote. 
@@ -34404,9 +39090,11 @@ Properties: Here are the commands specific to the b2 backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -34418,35 +39106,41 @@ These can be run on a running backend using the rc command ### lifecycle -Read or set the lifecycle for a bucket +Read or set the lifecycle for a bucket. - rclone backend lifecycle remote: [options] [+] +```console +rclone backend lifecycle remote: [options] [+] +``` This command can be used to read or set the lifecycle for a bucket. -Usage Examples: - To show the current lifecycle rules: - rclone backend lifecycle b2:bucket +```console +rclone backend lifecycle b2:bucket +``` This will dump something like this showing the lifecycle rules. - [ - { - "daysFromHidingToDeleting": 1, - "daysFromUploadingToHiding": null, - "daysFromStartingToCancelingUnfinishedLargeFiles": null, - "fileNamePrefix": "" - } - ] +```json +[ + { + "daysFromHidingToDeleting": 1, + "daysFromUploadingToHiding": null, + "daysFromStartingToCancelingUnfinishedLargeFiles": null, + "fileNamePrefix": "" + } +] +``` -If there are no lifecycle rules (the default) then it will just return []. +If there are no lifecycle rules (the default) then it will just return `[]`. To reset the current lifecycle rules: - rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30 - rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1 +```console +rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30 +rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1 +``` This will run and then print the new lifecycle rules as above. @@ -34458,22 +39152,27 @@ the daysFromHidingToDeleting to 1 day. You can enable hard_delete in the config also which will mean deletions won't cause versions but overwrites will still cause versions to be made. - rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1 - -See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules +```console +rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1 +``` +See: Options: -- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off. -- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days -- "daysFromUploadingToHiding": This many days after uploading a file is hidden +- "daysFromHidingToDeleting": After a file has been hidden for this many days +it is deleted. 0 is off. +- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished +large file versions after this many days. +- "daysFromUploadingToHiding": This many days after uploading a file is hidden. ### cleanup Remove unfinished large file uploads. - rclone backend cleanup remote: [options] [+] +```console +rclone backend cleanup remote: [options] [+] +``` This command removes unfinished large file uploads of age greater than max-age, which defaults to 24 hours. @@ -34481,31 +39180,35 @@ max-age, which defaults to 24 hours. Note that you can use --interactive/-i or --dry-run with this command to see what it would do. 
- rclone backend cleanup b2:bucket/path/to/object - rclone backend cleanup -o max-age=7w b2:bucket/path/to/object +```console +rclone backend cleanup b2:bucket/path/to/object +rclone backend cleanup -o max-age=7w b2:bucket/path/to/object +``` Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. - Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. ### cleanup-hidden Remove old versions of files. - rclone backend cleanup-hidden remote: [options] [+] +```console +rclone backend cleanup-hidden remote: [options] [+] +``` This command removes any old hidden versions of files. Note that you can use --interactive/-i or --dry-run with this command to see what it would do. - rclone backend cleanup-hidden b2:bucket/path/to/dir - - +```console +rclone backend cleanup-hidden b2:bucket/path/to/dir +``` + ## Limitations @@ -34514,7 +39217,8 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Box @@ -34530,11 +39234,13 @@ to use JWT authentication. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -34594,7 +39300,7 @@ y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens @@ -34602,19 +39308,26 @@ your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Box - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Box - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Box directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Using rclone with an Enterprise account with SSO @@ -34635,9 +39348,9 @@ According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section This means that if you - * Don't use the box remote for 60 days - * Copy the config file with a box refresh token in and use it in two places - * Get an error on a token refresh +- Don't use the box remote for 60 days +- Copy the config file with a box refresh token in and use it in two places +- Get an error on a token refresh then rclone will return an error which includes the text `Invalid refresh token`. @@ -34650,7 +39363,7 @@ did the authentication on. Here is how to do it. 
-``` +```console $ rclone config Current remotes: @@ -34754,8 +39467,8 @@ either be actually deleted from Box or moved to the trash. Emptying the trash is supported via the rclone however cleanup command however this deletes every trashed file and folder individually so it -may take a very long time. -Emptying the trash via the WebUI does not have this limitation +may take a very long time. +Emptying the trash via the WebUI does not have this limitation so it is advised to empty the trash via the WebUI. ### Root folder ID @@ -34780,7 +39493,7 @@ So if the folder you want rclone to use has a URL which looks like in the browser, then you use `11xxxxxxxxx8` as the `root_folder_id` in the config. - + ### Standard options Here are the Standard options specific to box (Box). @@ -34826,6 +39539,19 @@ Properties: - Type: string - Required: false +#### --box-config-credentials + +Box App config.json contents. + +Leave blank normally. + +Properties: + +- Config: config_credentials +- Env Var: RCLONE_BOX_CONFIG_CREDENTIALS +- Type: string +- Required: false + #### --box-access-token Box App Primary Access Token @@ -34850,10 +39576,10 @@ Properties: - Type: string - Default: "user" - Examples: - - "user" - - Rclone should act on behalf of a user. - - "enterprise" - - Rclone should act on behalf of a service account. + - "user" + - Rclone should act on behalf of a user. + - "enterprise" + - Rclone should act on behalf of a service account. ### Advanced options @@ -35012,7 +39738,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -35025,14 +39751,16 @@ Reverse Solidus). Box only supports filenames up to 255 characters in length. -Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) that sometimes reduce the speed of rclone. +Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) +that sometimes reduce the speed of rclone. `rclone about` is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). ## Get your own Box App ID @@ -35085,11 +39813,13 @@ with `cache`. Here is an example of how to make a remote called `test-cache`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? 
n) New remote r) Rename remote @@ -35169,19 +39899,25 @@ You can then use it like this, List directories in top level of your drive - rclone lsd test-cache: +```console +rclone lsd test-cache: +``` List all the files in your drive - rclone ls test-cache: +```console +rclone ls test-cache: +``` To start a cached mount - rclone mount --allow-other test-cache: /var/tmp/test-cache +```console +rclone mount --allow-other test-cache: /var/tmp/test-cache +``` -### Write Features ### +### Write Features -### Offline uploading ### +### Offline uploading In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a @@ -35206,7 +39942,7 @@ Uploads will be stored in a queue and be processed based on the order they were The queue and the temporary storage is persistent across restarts but can be cleared on startup with the `--cache-db-purge` flag. -### Write Support ### +### Write Support Writes are supported through `cache`. One caveat is that a mounted cache remote does not add any retry or fallback @@ -35217,9 +39953,9 @@ One special case is covered with `cache-writes` which will cache the file data at the same time as the upload when it is enabled making it available from the cache store immediately once the upload is finished. -### Read Features ### +### Read Features -#### Multiple connections #### +#### Multiple connections To counter the high latency between a local PC where rclone is running and cloud providers, the cache remote can split multiple requests to the @@ -35231,7 +39967,7 @@ This is similar to buffering when media files are played online. Rclone will stay around the current marker but always try its best to stay ahead and prepare the data before. -#### Plex Integration #### +#### Plex Integration There is a direct integration with Plex which allows cache to detect during reading if the file is in playback or not. This helps cache to adapt how it queries @@ -35250,9 +39986,11 @@ How to enable? Run `rclone config` and add all the Plex options (endpoint, usern and password) in your remote and it will be automatically enabled. Affected settings: -- `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times -##### Certificate Validation ##### +- `cache-workers`: *Configured value* during confirmed playback or *1* all the + other times + +##### Certificate Validation When the Plex server is configured to only accept secure connections, it is possible to use `.plex.direct` URLs to ensure certificate validation succeeds. @@ -35267,60 +40005,63 @@ have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`. To get the `server-hash` part, the easiest way is to visit -https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token + This page will list all the available Plex servers for your account with at least one `.plex.direct` link for each. Copy one URL and replace the IP address with the desired address. This can be used as the `plex_url` value. -### Known issues ### +### Known issues -#### Mount and --dir-cache-time #### +#### Mount and --dir-cache-time ---dir-cache-time controls the first layer of directory caching which works at the mount layer. -Being an independent caching mechanism from the `cache` backend, it will manage its own entries -based on the configured time. +--dir-cache-time controls the first layer of directory caching which works at +the mount layer. 
Being an independent caching mechanism from the `cache` backend, +it will manage its own entries based on the configured time. -To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct -one, try to set `--dir-cache-time` to a lower time than `--cache-info-age`. Default values are -already configured in this way. +To avoid getting in a scenario where dir cache has obsolete data and cache would +have the correct one, try to set `--dir-cache-time` to a lower time than +`--cache-info-age`. Default values are already configured in this way. -#### Windows support - Experimental #### +#### Windows support - Experimental -There are a couple of issues with Windows `mount` functionality that still require some investigations. -It should be considered as experimental thus far as fixes come in for this OS. +There are a couple of issues with Windows `mount` functionality that still +require some investigations. It should be considered as experimental thus far +as fixes come in for this OS. Most of the issues seem to be related to the difference between filesystems on Linux flavors and Windows as cache is heavily dependent on them. Any reports or feedback on how cache behaves on this OS is greatly appreciated. - -- https://github.com/rclone/rclone/issues/1935 -- https://github.com/rclone/rclone/issues/1907 -- https://github.com/rclone/rclone/issues/1834 -#### Risk of throttling #### +- [Issue #1935](https://github.com/rclone/rclone/issues/1935) +- [Issue #1907](https://github.com/rclone/rclone/issues/1907) +- [Issue #1834](https://github.com/rclone/rclone/issues/1834) + +#### Risk of throttling Future iterations of the cache backend will make use of the pooling functionality of the cloud provider to synchronize and at the same time make writing through it -more tolerant to failures. +more tolerant to failures. There are a couple of enhancements in track to add these but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts. Some recommendations: + - don't use a very small interval for entry information (`--cache-info-age`) -- while writes aren't yet optimised, you can still write through `cache` which gives you the advantage -of adding the file in the cache at the same time if configured to do so. +- while writes aren't yet optimised, you can still write through `cache` which + gives you the advantage of adding the file in the cache at the same time if + configured to do so. Future enhancements: -- https://github.com/rclone/rclone/issues/1937 -- https://github.com/rclone/rclone/issues/1936 +- [Issue #1937](https://github.com/rclone/rclone/issues/1937) +- [Issue #1936](https://github.com/rclone/rclone/issues/1936) -#### cache and crypt #### +#### cache and crypt One common scenario is to keep your data encrypted in the cloud provider using the `crypt` remote. `crypt` uses a similar technique to wrap around @@ -35335,32 +40076,38 @@ which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: **cloud remote** -> **cache** -> **crypt** -#### absolute remote paths #### +#### absolute remote paths -`cache` can not differentiate between relative and absolute paths for the wrapped remote. 
-Any path given in the `remote` config setting and on the command line will be passed to -the wrapped remote as is, but for storing the chunks on disk the path will be made -relative by removing any leading `/` character. +`cache` can not differentiate between relative and absolute paths for the wrapped +remote. Any path given in the `remote` config setting and on the command line will +be passed to the wrapped remote as is, but for storing the chunks on disk the path +will be made relative by removing any leading `/` character. -This behavior is irrelevant for most backend types, but there are backends where a leading `/` -changes the effective directory, e.g. in the `sftp` backend paths starting with a `/` are -relative to the root of the SSH server and paths without are relative to the user home directory. -As a result `sftp:bin` and `sftp:/bin` will share the same cache folder, even if they represent -a different directory on the SSH server. +This behavior is irrelevant for most backend types, but there are backends where +a leading `/` changes the effective directory, e.g. in the `sftp` backend paths +starting with a `/` are relative to the root of the SSH server and paths without +are relative to the user home directory. As a result `sftp:bin` and `sftp:/bin` +will share the same cache folder, even if they represent a different directory +on the SSH server. -### Cache and Remote Control (--rc) ### -Cache supports the new `--rc` mode in rclone and can be remote controlled through the following end points: -By default, the listener is disabled if you do not add the flag. +### Cache and Remote Control (--rc) + +Cache supports the new `--rc` mode in rclone and can be remote controlled +through the following end points: By default, the listener is disabled if +you do not add the flag. ### rc cache/expire + Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt. Params: - - **remote** = path to remote **(required)** - - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_ +- **remote** = path to remote **(required)** +- **withData** = true/false to delete cached data (chunks) as + well *(optional, false by default)* + ### Standard options Here are the Standard options specific to cache (Cache a remote). @@ -35429,12 +40176,12 @@ Properties: - Type: SizeSuffix - Default: 5Mi - Examples: - - "1M" - - 1 MiB - - "5M" - - 5 MiB - - "10M" - - 10 MiB + - "1M" + - 1 MiB + - "5M" + - 5 MiB + - "10M" + - 10 MiB #### --cache-info-age @@ -35449,12 +40196,12 @@ Properties: - Type: Duration - Default: 6h0m0s - Examples: - - "1h" - - 1 hour - - "24h" - - 24 hours - - "48h" - - 48 hours + - "1h" + - 1 hour + - "24h" + - 24 hours + - "48h" + - 48 hours #### --cache-chunk-total-size @@ -35470,12 +40217,12 @@ Properties: - Type: SizeSuffix - Default: 10Gi - Examples: - - "500M" - - 500 MiB - - "1G" - - 1 GiB - - "10G" - - 10 GiB + - "500M" + - 500 MiB + - "1G" + - 1 GiB + - "10G" + - 10 GiB ### Advanced options @@ -35733,9 +40480,11 @@ Properties: Here are the commands specific to the cache backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -35749,9 +40498,11 @@ These can be run on a running backend using the rc command Print stats on the cache backend in JSON format. 
- rclone backend stats remote: [options] [+] - +```console +rclone backend stats remote: [options] [+] +``` + # Chunker @@ -35774,8 +40525,8 @@ then you should probably put the bucket in the remote `s3:bucket`. Now configure `chunker` using `rclone config`. We will call this one `overlay` to separate it from the `remote` itself. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -35840,16 +40591,15 @@ So if you use a remote of `/path/to/secret/files` then rclone will chunk stuff in that directory. If you use a remote of `name` then rclone will put files in a directory called `name` in the current directory. - ### Chunking When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file -to the wrapped remote (however, see caveat below). If a file is large, chunker will transparently cut -data in pieces with temporary names and stream them one by one, on the fly. -Each data chunk will contain the specified number of bytes, except for the -last one which may have less data. If file size is unknown in advance -(this is called a streaming upload), chunker will internally create +to the wrapped remote (however, see caveat below). If a file is large, chunker +will transparently cut data in pieces with temporary names and stream them one +by one, on the fly. Each data chunk will contain the specified number of bytes, +except for the last one which may have less data. If file size is unknown in +advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process. When upload completes, temporary chunk files are finally renamed. @@ -35877,14 +40627,13 @@ proceed with current command. You can set the `--chunker-fail-hard` flag to have commands abort with error message in such cases. -**Caveat**: As it is now, chunker will always create a temporary file in the +**Caveat**: As it is now, chunker will always create a temporary file in the backend and then rename it, even if the file is below the chunk threshold. This will result in unnecessary API calls and can severely restrict throughput -when handling transfers primarily composed of small files on some backends (e.g. Box). -A workaround to this issue is to use chunker only for files above the chunk threshold -via `--min-size` and then perform a separate call without chunker on the remaining -files. - +when handling transfers primarily composed of small files on some backends +(e.g. Box). A workaround to this issue is to use chunker only for files above +the chunk threshold via `--min-size` and then perform a separate call without +chunker on the remaining files. #### Chunk names @@ -35913,7 +40662,6 @@ non-chunked files. When using `norename` transactions, chunk names will additionally have a unique file version suffix. For example, `BIG_FILE_NAME.rclone_chunk.001_bp562k`. - ### Metadata Besides data chunks chunker will by default create metadata object for @@ -35947,7 +40695,6 @@ base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially missing last chunk) than format with metadata enabled. - ### Hashsums Chunker supports hashsums only when a compatible metadata is present. @@ -35991,7 +40738,6 @@ hashsums at destination. 
Beware of consequences: the `sync` command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found. - ### Modification times Chunker stores modification times using the wrapped remote so support @@ -36002,7 +40748,6 @@ modification time of the metadata object on the wrapped remote. If file is chunked but metadata format is `none` then chunker will use modification time of the first data chunk. - ### Migrations The idiomatic way to migrate to a different chunk size, hash type, transaction @@ -36031,7 +40776,6 @@ somewhere using the chunker remote and purge the original directory. The `copy` command will copy only active chunks while the `purge` will remove everything including garbage. - ### Caveats and Limitations Chunker requires wrapped remote to support server-side `move` (or `copy` + @@ -36068,7 +40812,7 @@ to keep rclone up-to-date to avoid data corruption. Changing `transactions` is dangerous and requires explicit migration. - + ### Standard options Here are the Standard options specific to chunker (Transparently chunk/split large files). @@ -36111,22 +40855,22 @@ Properties: - Type: string - Default: "md5" - Examples: - - "none" - - Pass any hash supported by wrapped remote for non-chunked files. - - Return nothing otherwise. - - "md5" - - MD5 for composite files. - - "sha1" - - SHA1 for composite files. - - "md5all" - - MD5 for all files. - - "sha1all" - - SHA1 for all files. - - "md5quick" - - Copying a file to chunker will request MD5 from the source. - - Falling back to SHA1 if unsupported. - - "sha1quick" - - Similar to "md5quick" but prefers SHA1 over MD5. + - "none" + - Pass any hash supported by wrapped remote for non-chunked files. + - Return nothing otherwise. + - "md5" + - MD5 for composite files. + - "sha1" + - SHA1 for composite files. + - "md5all" + - MD5 for all files. + - "sha1all" + - SHA1 for all files. + - "md5quick" + - Copying a file to chunker will request MD5 from the source. + - Falling back to SHA1 if unsupported. + - "sha1quick" + - Similar to "md5quick" but prefers SHA1 over MD5. ### Advanced options @@ -36176,13 +40920,13 @@ Properties: - Type: string - Default: "simplejson" - Examples: - - "none" - - Do not use metadata files at all. - - Requires hash type "none". - - "simplejson" - - Simple JSON supports hash sums and chunk validation. - - - - It has the following fields: ver, size, nchunks, md5, sha1. + - "none" + - Do not use metadata files at all. + - Requires hash type "none". + - "simplejson" + - Simple JSON supports hash sums and chunk validation. + - + - It has the following fields: ver, size, nchunks, md5, sha1. #### --chunker-fail-hard @@ -36195,10 +40939,10 @@ Properties: - Type: bool - Default: false - Examples: - - "true" - - Report errors and abort current command. - - "false" - - Warn user, skip incomplete file and proceed. + - "true" + - Report errors and abort current command. + - "false" + - Warn user, skip incomplete file and proceed. #### --chunker-transactions @@ -36211,19 +40955,19 @@ Properties: - Type: string - Default: "rename" - Examples: - - "rename" - - Rename temporary files after a successful transaction. - - "norename" - - Leave temporary file names and write transaction ID to metadata file. - - Metadata is required for no rename transactions (meta format cannot be "none"). 
- - If you are using norename transactions you should be careful not to downgrade Rclone - - as older versions of Rclone don't support this transaction style and will misinterpret - - files manipulated by norename transactions. - - This method is EXPERIMENTAL, don't use on production systems. - - "auto" - - Rename or norename will be used depending on capabilities of the backend. - - If meta format is set to "none", rename transactions will always be used. - - This method is EXPERIMENTAL, don't use on production systems. + - "rename" + - Rename temporary files after a successful transaction. + - "norename" + - Leave temporary file names and write transaction ID to metadata file. + - Metadata is required for no rename transactions (meta format cannot be "none"). + - If you are using norename transactions you should be careful not to downgrade Rclone + - as older versions of Rclone don't support this transaction style and will misinterpret + - files manipulated by norename transactions. + - This method is EXPERIMENTAL, don't use on production systems. + - "auto" + - Rename or norename will be used depending on capabilities of the backend. + - If meta format is set to "none", rename transactions will always be used. + - This method is EXPERIMENTAL, don't use on production systems. #### --chunker-description @@ -36236,7 +40980,7 @@ Properties: - Type: string - Required: false - + # Cloudinary @@ -36245,11 +40989,16 @@ This is a backend for the [Cloudinary](https://cloudinary.com/) platform ## About Cloudinary [Cloudinary](https://cloudinary.com/) is an image and video API platform. -Trusted by 1.5 million developers and 10,000 enterprise and hyper-growth companies as a critical part of their tech stack to deliver visually engaging experiences. +Trusted by 1.5 million developers and 10,000 enterprise and hyper-growth +companies as a critical part of their tech stack to deliver visually engaging +experiences. ## Accounts & Pricing -To use this backend, you need to [create a free account](https://cloudinary.com/users/register_free) on Cloudinary. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://cloudinary.com/pricing). +To use this backend, you need to [create a free account](https://cloudinary.com/users/register_free) +on Cloudinary. Start with a free plan with generous usage limits. Then, as your +requirements grow, upgrade to a plan that best fits your needs. +See [the pricing details](https://cloudinary.com/pricing). ## Securing Your Credentials @@ -36259,13 +41008,17 @@ Please refer to the [docs](https://rclone.org/docs/#configuration-encryption-che Here is an example of making a Cloudinary configuration. -First, create a [cloudinary.com](https://cloudinary.com/users/register_free) account and choose a plan. +First, create a [cloudinary.com](https://cloudinary.com/users/register_free) +account and choose a plan. -You will need to log in and get the `API Key` and `API Secret` for your account from the developer section. +You will need to log in and get the `API Key` and `API Secret` for your account +from the developer section. Now run -`rclone config` +```console +rclone config +``` Follow the interactive setup process: @@ -36338,21 +41091,27 @@ y/e/d> y List directories in the top level of your Media Library -`rclone lsd cloudinary-media-library:` +```console +rclone lsd cloudinary-media-library: +``` Make a new directory. 
-`rclone mkdir cloudinary-media-library:directory` +```console +rclone mkdir cloudinary-media-library:directory +``` List the contents of a directory. -`rclone ls cloudinary-media-library:directory` +```console +rclone ls cloudinary-media-library:directory +``` ### Modified time and hashes Cloudinary stores md5 and timestamps for any successful Put automatically and read-only. - + ### Standard options Here are the Standard options specific to cloudinary (Cloudinary). @@ -36473,11 +41232,12 @@ Properties: - Type: string - Required: false - + # Citrix ShareFile -[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer service aimed as business. +[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer +service aimed as business. ## Configuration @@ -36487,11 +41247,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -36552,7 +41314,7 @@ y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Citrix ShareFile. This only runs from the moment it opens @@ -36560,19 +41322,26 @@ your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this it may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your ShareFile - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your ShareFile - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an ShareFile directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` Paths may be as deep as required, e.g. `remote:directory/subdirectory`. @@ -36620,7 +41389,7 @@ name: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to sharefile (Citrix Sharefile). @@ -36665,16 +41434,16 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Access the Personal Folders (default). - - "favorites" - - Access the Favorites folder. - - "allshared" - - Access all the shared folders. - - "connectors" - - Access all the individual connectors. - - "top" - - Access the home, favorites, and shared folders as well as the connectors. + - "" + - Access the Personal Folders (default). + - "favorites" + - Access the Favorites folder. + - "allshared" + - Access all the shared folders. + - "connectors" + - Access all the individual connectors. + - "top" + - Access the home, favorites, and shared folders as well as the connectors. 
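As a quick way to experiment with these values without editing your saved config, you can override `root_folder_id` on the fly with a connection string (assuming a ShareFile remote named `remote`):

```console
rclone lsd remote,root_folder_id=allshared:
```

This lists the top level of all the shared folders instead of the default Personal Folders.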
### Advanced options @@ -36800,7 +41569,7 @@ Properties: - Type: string - Required: false - + ## Limitations Note that ShareFile is case insensitive so you can't have a file called @@ -36813,7 +41582,8 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Crypt @@ -36842,11 +41612,11 @@ will just give you the encrypted (scrambled) format, and anything you upload will *not* become encrypted. The encryption is a secret-key encryption (also called symmetric key encryption) -algorithm, where a password (or pass phrase) is used to generate real encryption key. -The password can be supplied by user, or you may chose to let rclone -generate one. It will be stored in the configuration file, in a lightly obscured form. -If you are in an environment where you are not able to keep your configuration -secured, you should add +algorithm, where a password (or pass phrase) is used to generate real encryption +key. The password can be supplied by user, or you may chose to let rclone +generate one. It will be stored in the configuration file, in a lightly obscured +form. If you are in an environment where you are not able to keep your +configuration secured, you should add [configuration encryption](https://rclone.org/docs/#configuration-encryption) as protection. As long as you have this configuration file, you will be able to decrypt your data. Without the configuration file, as long as you remember @@ -36858,9 +41628,9 @@ See below for guidance to [changing password](#changing-password). Encryption uses [cryptographic salt](https://en.wikipedia.org/wiki/Salt_(cryptography)), to permute the encryption key so that the same string may be encrypted in different ways. When configuring the crypt remote it is optional to enter a salt, -or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string. -Normally in cryptography, the salt is stored together with the encrypted content, -and do not have to be memorized by the user. This is not the case in rclone, +or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique +string. Normally in cryptography, the salt is stored together with the encrypted +content, and do not have to be memorized by the user. This is not the case in rclone, because rclone does not store any additional information on the remotes. Use of custom salt is effectively a second password that must be memorized. @@ -36897,8 +41667,8 @@ anything you write will be unencrypted. To avoid issues it is best to configure a dedicated path for encrypted content, and access it exclusively through a crypt remote. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -36987,7 +41757,8 @@ y/e/d> **Important** The crypt password stored in `rclone.conf` is lightly obscured. That only protects it from cursory inspection. It is not -secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption) of `rclone.conf` is specified. 
+secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption) +of `rclone.conf` is specified. A long passphrase is recommended, or `rclone config` can generate a random one. @@ -37002,8 +41773,8 @@ due to the different salt. Rclone does not encrypt - * file length - this can be calculated within 16 bytes - * modification time - used for syncing +- file length - this can be calculated within 16 bytes +- modification time - used for syncing ### Specifying the remote @@ -37055,6 +41826,7 @@ is to re-upload everything via a crypt remote configured with your new password. Depending on the size of your data, your bandwidth, storage quota etc, there are different approaches you can take: + - If you have everything in a different location, for example on your local system, you could remove all of the prior encrypted files, change the password for your configured crypt remote (or delete and re-create the crypt configuration), @@ -37083,7 +41855,7 @@ details, and a tool you can use to check if you are affected. Create the following file structure using "standard" file name encryption. -``` +```text plaintext/ ├── file0.txt ├── file1.txt @@ -37096,7 +41868,7 @@ plaintext/ Copy these to the remote, and list them -``` +```console $ rclone -q copy plaintext secret: $ rclone -q ls secret: 7 file1.txt @@ -37108,7 +41880,7 @@ $ rclone -q ls secret: The crypt remote looks like -``` +```console $ rclone -q ls remote:path 55 hagjclgavj2mbiqm6u6cnjjqcg 54 v05749mltvv1tf4onltun46gls @@ -37119,7 +41891,7 @@ $ rclone -q ls remote:path The directory structure is preserved -``` +```console $ rclone -q ls secret:subdir 8 file2.txt 9 file3.txt @@ -37130,7 +41902,7 @@ Without file name encryption `.bin` extensions are added to underlying names. This prevents the cloud provider attempting to interpret file content. -``` +```console $ rclone -q ls remote:path 54 file0.txt.bin 57 subdir/file3.txt.bin @@ -37143,18 +41915,18 @@ $ rclone -q ls remote:path Off - * doesn't hide file names or directory structure - * allows for longer file names (~246 characters) - * can use sub paths and copy single files +- doesn't hide file names or directory structure +- allows for longer file names (~246 characters) +- can use sub paths and copy single files Standard - * file names encrypted - * file names can't be as long (~143 characters) - * can use sub paths and copy single files - * directory structure visible - * identical files names will have identical uploaded names - * can use shortcuts to shorten the directory recursion +- file names encrypted +- file names can't be as long (~143 characters) +- can use sub paths and copy single files +- directory structure visible +- identical files names will have identical uploaded names +- can use shortcuts to shorten the directory recursion Obfuscation @@ -37173,11 +41945,11 @@ equivalents. Obfuscation cannot be relied upon for strong protection. 
- * file names very lightly obfuscated - * file names can be longer than standard encryption - * can use sub paths and copy single files - * directory structure visible - * identical files names will have identical uploaded names +- file names very lightly obfuscated +- file names can be longer than standard encryption +- can use sub paths and copy single files +- directory structure visible +- identical files names will have identical uploaded names Cloud storage systems have limits on file name length and total path length which rclone is more likely to breach using @@ -37191,7 +41963,7 @@ For cloud storage systems with case sensitive file names (e.g. Google Drive), `base64` can be used to reduce file name length. For cloud storage systems using UTF-16 to store file names internally (e.g. OneDrive, Dropbox, Box), `base32768` can be used to drastically reduce -file name length. +file name length. An alternative, future rclone file name encryption mode may tolerate backend provider path length limits. @@ -37215,7 +41987,6 @@ Example: `1/12/123.txt` is encrypted to `1/12/qgm4avr35m5loi1th53ato71v0` - ### Modification times and hashes Crypt stores modification times using the underlying remote so support @@ -37228,7 +41999,7 @@ Use the `rclone cryptcheck` command to check the integrity of an encrypted remote instead of `rclone check` which can't check the checksums properly. - + ### Standard options Here are the Standard options specific to crypt (Encrypt/Decrypt a remote). @@ -37258,14 +42029,14 @@ Properties: - Type: string - Default: "standard" - Examples: - - "standard" - - Encrypt the filenames. - - See the docs for the details. - - "obfuscate" - - Very simple filename obfuscation. - - "off" - - Don't encrypt the file names. - - Adds a ".bin", or "suffix" extension only. + - "standard" + - Encrypt the filenames. + - See the docs for the details. + - "obfuscate" + - Very simple filename obfuscation. + - "off" + - Don't encrypt the file names. + - Adds a ".bin", or "suffix" extension only. #### --crypt-directory-name-encryption @@ -37280,10 +42051,10 @@ Properties: - Type: bool - Default: true - Examples: - - "true" - - Encrypt directory names. - - "false" - - Don't encrypt directory names, leave them intact. + - "true" + - Encrypt directory names. + - "false" + - Don't encrypt directory names, leave them intact. #### --crypt-password @@ -37370,10 +42141,10 @@ Properties: - Type: bool - Default: false - Examples: - - "true" - - Don't encrypt file data, leave it unencrypted. - - "false" - - Encrypt file data. + - "true" + - Don't encrypt file data, leave it unencrypted. + - "false" + - Encrypt file data. #### --crypt-pass-bad-blocks @@ -37421,13 +42192,13 @@ Properties: - Type: string - Default: "base32" - Examples: - - "base32" - - Encode using base32. Suitable for all remote. - - "base64" - - Encode using base64. Suitable for case sensitive remote. - - "base32768" - - Encode using base32768. Suitable if your remote counts UTF-16 or - - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox) + - "base32" + - Encode using base32. Suitable for all remote. + - "base64" + - Encode using base64. Suitable for case sensitive remote. + - "base32768" + - Encode using base32768. Suitable if your remote counts UTF-16 or + - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox) #### --crypt-suffix @@ -37464,9 +42235,11 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info. Here are the commands specific to the crypt backend. 
-Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -37478,36 +42251,42 @@ These can be run on a running backend using the rc command ### encode -Encode the given filename(s) +Encode the given filename(s). - rclone backend encode remote: [options] [+] +```console +rclone backend encode remote: [options] [+] +``` This encodes the filenames given as arguments returning a list of strings of the encoded results. -Usage Example: - - rclone backend encode crypt: file1 [file2...] - rclone rc backend/command command=encode fs=crypt: file1 [file2...] +Usage examples: +```console +rclone backend encode crypt: file1 [file2...] +rclone rc backend/command command=encode fs=crypt: file1 [file2...] +``` ### decode -Decode the given filename(s) +Decode the given filename(s). - rclone backend decode remote: [options] [+] +```console +rclone backend decode remote: [options] [+] +``` This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid. -Usage Example: - - rclone backend decode crypt: encryptedfile1 [encryptedfile2...] - rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] - +Usage examples: +```console +rclone backend decode crypt: encryptedfile1 [encryptedfile2...] +rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] +``` + ## Backing up an encrypted remote @@ -37517,9 +42296,9 @@ the same in the new encrypted remote. This will have the following advantages - * `rclone sync` will check the checksums while copying - * you can use `rclone check` between the encrypted remotes - * you don't decrypt and encrypt unnecessarily +- `rclone sync` will check the checksums while copying +- you can use `rclone check` between the encrypted remotes +- you don't decrypt and encrypt unnecessarily For example, let's say you have your original remote at `remote:` with the encrypted version at `eremote:` with path `remote:crypt`. You @@ -37529,11 +42308,15 @@ as `eremote:`. To sync the two remotes you would do - rclone sync --interactive remote:crypt remote2:crypt +```console +rclone sync --interactive remote:crypt remote2:crypt +``` And to check the integrity you would do - rclone check remote:crypt remote2:crypt +```console +rclone check remote:crypt remote2:crypt +``` ## File formats @@ -37544,8 +42327,8 @@ has a header and is divided into chunks. #### Header - * 8 bytes magic string `RCLONE\x00\x00` - * 24 bytes Nonce (IV) +- 8 bytes magic string `RCLONE\x00\x00` +- 24 bytes Nonce (IV) The initial nonce is generated from the operating systems crypto strong random number generator. The nonce is incremented for each @@ -37563,8 +42346,8 @@ authenticate messages. Each chunk contains: - * 16 Bytes of Poly1305 authenticator - * 1 - 65536 bytes XSalsa20 encrypted data +- 16 Bytes of Poly1305 authenticator +- 1 - 65536 bytes XSalsa20 encrypted data 64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops @@ -37577,15 +42360,15 @@ This uses a 32 byte (256 bit key) key derived from the user password. 
1 byte file will encrypt to - * 32 bytes header - * 17 bytes data chunk +- 32 bytes header +- 17 bytes data chunk 49 bytes total 1 MiB (1048576 bytes) file will encrypt to - * 32 bytes header - * 16 chunks of 65568 bytes +- 32 bytes header +- 16 chunks of 65568 bytes 1049120 bytes total (a 0.05% overhead). This is the overhead for big files. @@ -37608,8 +42391,8 @@ it on the cloud storage system. This means that - * filenames with the same name will encrypt the same - * filenames which start the same won't have a common prefix +- filenames with the same name will encrypt the same +- filenames which start the same won't have a common prefix This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV both of which are derived from the user password. @@ -37618,8 +42401,8 @@ After encryption they are written out using a modified version of standard `base32` encoding as described in RFC4648. The standard encoding is modified in two ways: - * it becomes lower case (no-one likes upper case filenames!) - * we strip the padding character `=` +- it becomes lower case (no-one likes upper case filenames!) +- we strip the padding character `=` `base32` is used rather than the more efficient `base64` so rclone can be used on case insensitive remotes (e.g. Windows, Box, Dropbox, Onedrive etc). @@ -37635,26 +42418,30 @@ then rclone uses an internal one. encrypted data. For full protection against this you should always use a salt. -## SEE ALSO +## See Also -* [rclone cryptdecode](https://rclone.org/commands/rclone_cryptdecode/) - Show forward/reverse mapping of encrypted filenames +- [rclone cryptdecode](https://rclone.org/commands/rclone_cryptdecode/) - Show forward/reverse +mapping of encrypted filenames. # Compress ## Warning -This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is -at your own risk. Please understand the risks associated with using experimental code and don't use this remote in -critical applications. +This remote is currently **experimental**. Things may break and data may be lost. +Anything you do with this remote is at your own risk. Please understand the risks +associated with using experimental code and don't use this remote in critical +applications. -The `Compress` remote adds compression to another remote. It is best used with remotes containing -many large compressible files. +The `Compress` remote adds compression to another remote. It is best used with +remotes containing many large compressible files. ## Configuration -To use this remote, all you need to do is specify another remote and a compression mode to use: +To use this remote, all you need to do is specify another remote and a +compression mode to use: -``` +```text +$ rclone config Current remotes: Name Type @@ -37662,7 +42449,6 @@ Name Type remote_to_press sometype e) Edit existing remote -$ rclone config n) New remote d) Delete remote r) Rename remote @@ -37671,59 +42457,92 @@ s) Set configuration password q) Quit config e/n/d/r/c/s/q> n name> compress + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. ... - 8 / Compress a remote - \ "compress" +12 / Compress a remote + \ (compress) ... Storage> compress -** See help for compress backend at: https://rclone.org/compress/ ** +Option remote. Remote to compress. -Enter a string value. Press Enter for the default (""). +Enter a value. remote> remote_to_press:subdir + +Option mode. Compression mode. -Enter a string value. 
Press Enter for the default ("gzip"). -Choose a number from below, or type in your own value - 1 / Gzip compression balanced for speed and compression strength. - \ "gzip" -compression_mode> gzip -Edit advanced config? (y/n) +Choose a number from below, or type in your own value of type string. +Press Enter for the default (gzip). + 1 / Standard gzip compression with fastest parameters. + \ (gzip) + 2 / Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs. + \ (zstd) +mode> gzip + +Option level. +GZIP (levels -2 to 9): +- -2 — Huffman encoding only. Only use if you know what you're doing. +- -1 (default) — recommended; equivalent to level 5. +- 0 — turns off compression. +- 1–9 — increase compression at the cost of speed. Going past 6 generally offers very little return. + +ZSTD (levels 0 to 4): +- 0 — turns off compression entirely. +- 1 — fastest compression with the lowest ratio. +- 2 (default) — good balance of speed and compression. +- 3 — better compression, but uses about 2–3x more CPU than the default. +- 4 — best possible compression ratio (highest CPU cost). + +Notes: +- Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs. +- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5). +Enter a value. +level> -1 + +Edit advanced config? y) Yes n) No (default) y/n> n -Remote config --------------------- -[compress] -type = compress -remote = remote_to_press:subdir -compression_mode = gzip --------------------- + +Configuration complete. +Options: +- type: compress +- remote: remote_to_press:subdir +- mode: gzip +- level: -1 +Keep this "compress" remote? y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y ``` -### Compression Modes +### Compression Algorithms -Currently only gzip compression is supported. It provides a decent balance between speed and size and is well -supported by other applications. Compression strength can further be configured via an advanced setting where 0 is no -compression and 9 is strongest compression. +- **GZIP** – a well-established and widely adopted algorithm that strikes a solid balance between compression speed and ratio. It supports compression levels from -2 to 9, with the default -1 (roughly equivalent to level 5) offering an effective middle ground for most scenarios. + +- **Zstandard (zstd)** – a modern, high-performance algorithm that offers precise control over the trade-off between speed and compression efficiency. Compression levels range from 0 (no compression) to 4 (maximum compression). ### File types -If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to -the compression algorithm you chose. These files are standard files that can be opened by various archive programs, +If you open a remote wrapped by compress, you will see that there are many +files with an extension corresponding to the compression algorithm you chose. +These files are standard files that can be opened by various archive programs, but they have some hidden metadata that allows them to be used by rclone. -While you may download and decompress these files at will, do **not** manually delete or rename files. Files without -correct metadata files will not be recognized by rclone. +While you may download and decompress these files at will, do **not** manually +delete or rename files. Files without correct metadata files will not be +recognized by rclone. 
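As an illustrative sketch (the file name, sizes and suffix here are made up; the real suffix format is described in the next section), listing the compress remote shows the original names, while the wrapped remote holds the compressed objects:

```console
$ rclone ls compress:
  1048576 logs.txt
$ rclone ls remote_to_press:subdir
   131072 logs.txt.AAAAAAQAAAA.gz
```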
### File names -The compressed files will be named `*.###########.gz` where `*` is the base file and the `#` part is base64 encoded -size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend. - +The compressed files will be named `*.###########.gz` where `*` is the base +file and the `#` part is base64 encoded size of the uncompressed file. The file +names should not be changed by anything other than the rclone compression backend. + ### Standard options Here are the Standard options specific to compress (Compress a remote). @@ -37750,31 +42569,40 @@ Properties: - Type: string - Default: "gzip" - Examples: - - "gzip" - - Standard gzip compression with fastest parameters. - -### Advanced options - -Here are the Advanced options specific to compress (Compress a remote). + - "gzip" + - Standard gzip compression with fastest parameters. + - "zstd" + - Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs. #### --compress-level -GZIP compression level (-2 to 9). - -Generally -1 (default, equivalent to 5) is recommended. -Levels 1 to 9 increase compression at the cost of speed. Going past 6 -generally offers very little return. - -Level -2 uses Huffman encoding only. Only use if you know what you -are doing. -Level 0 turns off compression. +GZIP (levels -2 to 9): +- -2 — Huffman encoding only. Only use if you know what you're doing. +- -1 (default) — recommended; equivalent to level 5. +- 0 — turns off compression. +- 1–9 — increase compression at the cost of speed. Going past 6 generally offers very little return. + +ZSTD (levels 0 to 4): +- 0 — turns off compression entirely. +- 1 — fastest compression with the lowest ratio. +- 2 (default) — good balance of speed and compression. +- 3 — better compression, but uses about 2–3x more CPU than the default. +- 4 — best possible compression ratio (highest CPU cost). + +Notes: +- Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs. +- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5). Properties: - Config: level - Env Var: RCLONE_COMPRESS_LEVEL -- Type: int -- Default: -1 +- Type: string +- Required: true + +### Advanced options + +Here are the Advanced options specific to compress (Compress a remote). #### --compress-ram-cache-limit @@ -37809,7 +42637,7 @@ Any metadata supported by the underlying remote is read and written. See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - + # Combine @@ -37818,7 +42646,7 @@ tree. For example you might have a remote for images on one provider: -``` +```console $ rclone tree s3:imagesbucket / ├── image1.jpg @@ -37827,7 +42655,7 @@ $ rclone tree s3:imagesbucket And a remote for files on another: -``` +```console $ rclone tree drive:important/files / ├── file1.txt @@ -37837,7 +42665,7 @@ $ rclone tree drive:important/files The `combine` backend can join these together into a synthetic directory structure like this: -``` +```console $ rclone tree combined: / ├── files @@ -37851,7 +42679,9 @@ $ rclone tree combined: You'd do this by specifying an `upstreams` parameter in the config like this - upstreams = images=s3:imagesbucket files=drive:important/files +```text +upstreams = images=s3:imagesbucket files=drive:important/files +``` During the initial setup with `rclone config` you will specify the upstreams remotes as a space separated list. The upstream remotes can @@ -37862,11 +42692,13 @@ either be a local paths or other remotes. 
Here is an example of how to make a combine called `remote` for the example above. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -37910,21 +42742,25 @@ the shared drives you have access to. Assuming your main (non shared drive) Google drive remote is called `drive:` you would run - rclone backend -o config drives drive: +```console +rclone backend -o config drives drive: +``` This would produce something like this: - [My Drive] - type = alias - remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: +```ini +[My Drive] +type = alias +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: - [Test Drive] - type = alias - remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: +[Test Drive] +type = alias +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: - [AllDrives] - type = combine - upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +[AllDrives] +type = combine +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +``` If you then add that config to your config file (find it with `rclone config file`) then you can access all the shared drives in one place @@ -37932,7 +42768,7 @@ with the `AllDrives:` remote. See [the Google Drive docs](https://rclone.org/drive/#drives) for full info. - + ### Standard options Here are the Standard options specific to combine (Combine several remotes into one). @@ -37982,13 +42818,15 @@ Any metadata supported by the underlying remote is read and written. See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - + # DOI -The DOI remote is a read only remote for reading files from digital object identifiers (DOI). +The DOI remote is a read only remote for reading files from digital object +identifiers (DOI). Currently, the DOI backend supports DOIs hosted with: + - [InvenioRDM](https://inveniosoftware.org/products/rdm/) - [Zenodo](https://zenodo.org) - [CaltechDATA](https://data.caltech.edu) @@ -38005,11 +42843,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -38043,7 +42883,7 @@ d) Delete this remote y/e/d> y ``` - + ### Standard options Here are the Standard options specific to doi (DOI datasets). @@ -38076,14 +42916,14 @@ Properties: - Type: string - Required: false - Examples: - - "auto" - - Auto-detect provider - - "zenodo" - - Zenodo - - "dataverse" - - Dataverse - - "invenio" - - Invenio + - "auto" + - Auto-detect provider + - "zenodo" + - Zenodo + - "dataverse" + - Dataverse + - "invenio" + - Invenio #### --doi-doi-resolver-api-url @@ -38115,9 +42955,11 @@ Properties: Here are the commands specific to the doi backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -38131,29 +42973,38 @@ These can be run on a running backend using the rc command Show metadata about the DOI. - rclone backend metadata remote: [options] [+] +```console +rclone backend metadata remote: [options] [+] +``` This command returns a JSON object with some information about the DOI. 
- rclone backend medatadata doi: +Usage example: + +```console +rclone backend metadata doi: +``` It returns a JSON object representing metadata about the DOI. - ### set Set command for updating the config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` This set command can be used to update the config parameters for a running doi backend. -Usage Examples: +Usage examples: - rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI +```console +rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI +``` The option keys are named as they are in the config file. @@ -38163,8 +43014,7 @@ will default to those currently in use. It doesn't return anything. - - + # Dropbox @@ -38181,11 +43031,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text n) New remote d) Delete remote q) Quit config @@ -38220,7 +43072,7 @@ y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Dropbox. This only @@ -38233,15 +43085,21 @@ You can then use it like this, List directories in top level of your dropbox - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your dropbox - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to a dropbox directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Dropbox for business @@ -38308,7 +43166,9 @@ In this mode rclone will not use upload batching. This was the default before rclone v1.55. It has the disadvantage that it is very likely to encounter `too_many_requests` errors like this - NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. +```text +NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. +``` When rclone receives these it has to wait for 15s or sometimes 300s before continuing which really slows down transfers. @@ -38377,7 +43237,7 @@ Here are some examples of how extensions are mapped: | Paper template | mydoc.papert | mydoc.papert.html | | other | mydoc | mydoc.html | -_Importing_ exportable files is not yet supported by rclone. +*Importing* exportable files is not yet supported by rclone. Here are the supported export extensions known by rclone. Note that rclone does not currently support other formats not on this list, @@ -38389,7 +43249,7 @@ of supported formats at any time. | html | HTML | HTML document | | md | Markdown | Markdown text format | - + ### Standard options Here are the Standard options specific to dropbox (Dropbox). 
@@ -38742,7 +43602,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -38756,10 +43616,9 @@ issue an error message `File name disallowed - not uploading` if it attempts to upload one of those file names, but the sync won't fail. Some errors may occur if you try to sync copyright-protected files -because Dropbox has its own [copyright detector](https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/) that -prevents this sort of file being downloaded. This will return the error `ERROR : -/path/to/your/file: Failed to copy: failed to open source object: -path/restricted_content/.` +because Dropbox has its own [copyright detector](https://techcrunch.com/2014/03/30/how-dropbox-knows-when-youre-sharing-copyrighted-stuff-without-actually-looking-at-your-stuff/) +that prevents this sort of file being downloaded. This will return the error +`ERROR : /path/to/your/file: Failed to copy: failed to open source object: path/restricted_content/.` If you have more than 10,000 files in a directory then `rclone purge dropbox:dir` will return the error `Failed to purge: There are too @@ -38769,7 +43628,8 @@ many files involved in this operation`. As a work-around do an When using `rclone link` you'll need to set `--expire` if using a non-personal account otherwise the visibility may not be correct. (Note that `--expire` isn't supported on personal accounts). See the -[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) and the +[forum discussion](https://forum.rclone.org/t/rclone-link-dropbox-permissions/23211) +and the [dropbox SDK issue](https://github.com/dropbox/dropbox-sdk-go-unofficial/issues/75). Modification times for Dropbox Paper documents are not exact, and @@ -38779,26 +43639,37 @@ or so, or use `--ignore-times` to force a full sync. ## Get your own Dropbox App ID -When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users. +When you use rclone with Dropbox in its default configuration you are using +rclone's App ID. This is shared between all the rclone users. Here is how to create your own Dropbox App ID for rclone: -1. Log into the [Dropbox App console](https://www.dropbox.com/developers/apps/create) with your Dropbox Account (It need not -to be the same account as the Dropbox you want to access) +1. Log into the [Dropbox App console](https://www.dropbox.com/developers/apps/create) +with your Dropbox Account (It need not to be the same account as the Dropbox you +want to access) 2. Choose an API => Usually this should be `Dropbox API` -3. Choose the type of access you want to use => `Full Dropbox` or `App Folder`. If you want to use Team Folders, `Full Dropbox` is required ([see here](https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/How-to-create-team-folder-inside-my-app-s-folder/m-p/601005/highlight/true#M27911)). +3. Choose the type of access you want to use => `Full Dropbox` or `App Folder`. +If you want to use Team Folders, `Full Dropbox` is required +([see here](https://www.dropboxforum.com/t5/Dropbox-API-Support-Feedback/How-to-create-team-folder-inside-my-app-s-folder/m-p/601005/highlight/true#M27911)). 4. Name your App. The app name is global, so you can't use `rclone` for example 5. Click the button `Create App` -6. Switch to the `Permissions` tab. 
Enable at least the following permissions: `account_info.read`, `files.metadata.write`, `files.content.write`, `files.content.read`, `sharing.write`. The `files.metadata.read` and `sharing.read` checkboxes will be marked too. Click `Submit` +6. Switch to the `Permissions` tab. Enable at least the following permissions: +`account_info.read`, `files.metadata.write`, `files.content.write`, `files.content.read`, +`sharing.write`. The `files.metadata.read` and `sharing.read` checkboxes will be +marked too. Click `Submit` -7. Switch to the `Settings` tab. Fill `OAuth2 - Redirect URIs` as `http://localhost:53682/` and click on `Add` +7. Switch to the `Settings` tab. Fill `OAuth2 - Redirect URIs` as `http://localhost:53682/` +and click on `Add` -8. Find the `App key` and `App secret` values on the `Settings` tab. Use these values in rclone config to add a new remote or edit an existing remote. The `App key` setting corresponds to `client_id` in rclone config, the `App secret` corresponds to `client_secret` +8. Find the `App key` and `App secret` values on the `Settings` tab. Use these +values in rclone config to add a new remote or edit an existing remote. +The `App key` setting corresponds to `client_id` in rclone config, the +`App secret` corresponds to `client_secret` # Enterprise File Fabric @@ -38815,11 +43686,13 @@ do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -38883,19 +43756,26 @@ d) Delete this remote y/e/d> y ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Enterprise File Fabric - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Enterprise File Fabric - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Enterprise File Fabric directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -38920,7 +43800,7 @@ upload an empty file as a single space with a mime type of `application/vnd.rclone.empty.file` and files with that mime type are treated as empty. -### Root folder ID ### +### Root folder ID You can set the `root_folder_id` for rclone. This is the directory (identified by its `Folder ID`) that rclone considers to be the root @@ -38936,7 +43816,7 @@ In order to do this you will have to find the `Folder ID` of the directory you wish rclone to display. These aren't displayed in the web interface, but you can use `rclone lsf` to find them, for example -``` +```console $ rclone lsf --dirs-only -Fip --csv filefabric: 120673758,Burnt PDFs/ 120673759,My Quick Uploads/ @@ -38948,7 +43828,7 @@ $ rclone lsf --dirs-only -Fip --csv filefabric: The ID for "S3 Storage" would be `120673761`. - + ### Standard options Here are the Standard options specific to filefabric (Enterprise File Fabric). 
@@ -38964,12 +43844,12 @@ Properties: - Type: string - Required: true - Examples: - - "https://storagemadeeasy.com" - - Storage Made Easy US - - "https://eu.storagemadeeasy.com" - - Storage Made Easy EU - - "https://yourfabric.smestorage.com" - - Connect to your Enterprise File Fabric + - "https://storagemadeeasy.com" + - Storage Made Easy US + - "https://eu.storagemadeeasy.com" + - Storage Made Easy EU + - "https://yourfabric.smestorage.com" + - Connect to your Enterprise File Fabric #### --filefabric-root-folder-id @@ -39081,7 +43961,7 @@ Properties: - Type: string - Required: false - + # FileLu @@ -39097,11 +43977,13 @@ device. Here is an example of how to make a remote called `filelu`. First, run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -39133,7 +44015,7 @@ A path without an initial `/` will operate in the `Rclone` directory. A path with an initial `/` will operate at the root where you can see the `Rclone` directory. -``` +```console $ rclone lsf TestFileLu:/ CCTV/ Camera/ @@ -39149,55 +44031,81 @@ Videos/ Create a new folder named `foldername` in the `Rclone` directory: - rclone mkdir filelu:foldername +```console +rclone mkdir filelu:foldername +``` Delete a folder on FileLu: - rclone rmdir filelu:/folder/path/ +```console +rclone rmdir filelu:/folder/path/ +``` Delete a file on FileLu: - rclone delete filelu:/hello.txt +```console +rclone delete filelu:/hello.txt +``` List files from your FileLu account: - rclone ls filelu: +```console +rclone ls filelu: +``` List all folders: - rclone lsd filelu: +```console +rclone lsd filelu: +``` Copy a specific file to the FileLu root: - rclone copy D:\\hello.txt filelu: +```console +rclone copy D:\hello.txt filelu: +``` Copy files from a local directory to a FileLu directory: - rclone copy D:/local-folder filelu:/remote-folder/path/ - +```console +rclone copy D:/local-folder filelu:/remote-folder/path/ +``` + Download a file from FileLu into a local directory: - rclone copy filelu:/file-path/hello.txt D:/local-folder +```console +rclone copy filelu:/file-path/hello.txt D:/local-folder +``` Move files from a local directory to a FileLu directory: - rclone move D:\\local-folder filelu:/remote-path/ +```console +rclone move D:\local-folder filelu:/remote-path/ +``` Sync files from a local directory to a FileLu directory: - rclone sync --interactive D:/local-folder filelu:/remote-path/ - +```console +rclone sync --interactive D:/local-folder filelu:/remote-path/ +``` + Mount remote to local Linux: - rclone mount filelu: /root/mnt --vfs-cache-mode full +```console +rclone mount filelu: /root/mnt --vfs-cache-mode full +``` Mount remote to local Windows: - rclone mount filelu: D:/local_mnt --vfs-cache-mode full +```console +rclone mount filelu: D:/local_mnt --vfs-cache-mode full +``` Get storage info about the FileLu account: - rclone about filelu: +```console +rclone about filelu: +``` All the other rclone commands are supported by this backend. @@ -39214,8 +44122,8 @@ millions of files, duplicate folder names or paths are quite common. FileLu supports both modification times and MD5 hashes. -FileLu only supports filenames and folder names up to 255 characters in length, where a -character is a Unicode character. +FileLu only supports filenames and folder names up to 255 characters in length, +where a character is a Unicode character. ### Duplicated Files @@ -39234,7 +44142,7 @@ key. 
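A quick way to confirm that your key is accepted is to run any simple command against the remote, e.g. (assuming a remote named `filelu`):

```console
rclone about filelu:
```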
If you are connecting to your FileLu remote for the first time and encounter an error such as: -``` +```text Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials ``` @@ -39247,7 +44155,7 @@ significant memory usage during list/sync operations. Ensure the system running `rclone` has sufficient memory and CPU to handle these operations. - + ### Standard options Here are the Standard options specific to filelu (FileLu Cloud Storage). @@ -39291,7 +44199,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -39317,85 +44225,97 @@ password. Alternatively, you can authenticate using an API Key from Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n +```text +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n - Enter name for new remote. - name> remote +Enter name for new remote. +name> remote - Option Storage. - Type of storage to configure. - Choose a number from below, or type in your own value. - [snip] - XX / Files.com - \ "filescom" - [snip] - Storage> filescom +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Files.com + \ "filescom" +[snip] +Storage> filescom - Option site. - Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com) - Enter a value. Press Enter to leave empty. - site> mysite +Option site. +Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com) +Enter a value. Press Enter to leave empty. +site> mysite - Option username. - The username used to authenticate with Files.com. - Enter a value. Press Enter to leave empty. - username> user +Option username. +The username used to authenticate with Files.com. +Enter a value. Press Enter to leave empty. +username> user - Option password. - The password used to authenticate with Files.com. - Choose an alternative below. Press Enter for the default (n). - y) Yes, type in my own password - g) Generate random password - n) No, leave this optional password blank (default) - y/g/n> y - Enter the password: - password: - Confirm the password: - password: +Option password. +The password used to authenticate with Files.com. +Choose an alternative below. Press Enter for the default (n). +y) Yes, type in my own password +g) Generate random password +n) No, leave this optional password blank (default) +y/g/n> y +Enter the password: +password: +Confirm the password: +password: - Edit advanced config? - y) Yes - n) No (default) - y/n> n +Edit advanced config? +y) Yes +n) No (default) +y/n> n - Configuration complete. - Options: - - type: filescom - - site: mysite - - username: user - - password: *** ENCRYPTED *** - Keep this "remote" remote? - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y +Configuration complete. +Options: +- type: filescom +- site: mysite +- username: user +- password: *** ENCRYPTED *** +Keep this "remote" remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +``` Once configured you can use rclone. 
See all files in the top level: - rclone lsf remote: +```console +rclone lsf remote: +``` Make a new directory in the root: - rclone mkdir remote:dir +```console +rclone mkdir remote:dir +``` Recursively List the contents: - rclone ls remote: +```console +rclone ls remote: +``` Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory. - rclone sync --interactive /home/local/directory remote:dir +```console +rclone sync --interactive /home/local/directory remote:dir +``` ### Hashes @@ -39412,7 +44332,7 @@ selecting more checksums will not affect rclone's operations. For use with rclone, selecting at least MD5 is recommended so rclone can do an end to end integrity check. - + ### Standard options Here are the Standard options specific to filescom (Files.com). @@ -39491,7 +44411,7 @@ Properties: - Type: string - Required: false - + # FTP @@ -39509,14 +44429,16 @@ a `/` it is relative to the home directory of the user. An empty path To create an FTP configuration named `remote`, run - rclone config +```console +rclone config +``` Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see [below](#anonymous-ftp). -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote r) Rename remote c) Copy remote @@ -39575,20 +44497,28 @@ y/e/d> y To see all directories in the home directory of `remote` - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new directory - rclone mkdir remote:path/to/directory +```console +rclone mkdir remote:path/to/directory +``` List the contents of a directory - rclone ls remote:path/to/directory +```console +rclone ls remote:path/to/directory +``` Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory. - rclone sync --interactive /home/local/directory remote:directory +```console +rclone sync --interactive /home/local/directory remote:directory +``` ### Anonymous FTP @@ -39603,8 +44533,10 @@ Using [on-the-fly](#backend-path-to-dir) or such servers, without requiring any configuration in advance. The following are examples of that: - rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy) - rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy): +```console +rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy) +rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy): +``` The above examples work in Linux shells and in PowerShell, but not Windows Command Prompt. They execute the [rclone obscure](https://rclone.org/commands/rclone_obscure/) @@ -39613,8 +44545,10 @@ command to create a password string in the format required by the an already obscured string representation of the same password "dummy", and therefore works even in Windows Command Prompt: - rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM - rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM: +```console +rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM +rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM: +``` ### Implicit TLS @@ -39628,7 +44562,7 @@ can be set with [`--ftp-port`](#ftp-port). 
TLS options for Implicit and Explicit TLS can be set using the following flags which are specific to the FTP backend: -``` +```text --ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) --ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32) @@ -39636,7 +44570,7 @@ following flags which are specific to the FTP backend: However any of the global TLS flags can also be used such as: -``` +```text --ca-cert stringArray CA certificate used to verify servers --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth @@ -39646,7 +44580,7 @@ However any of the global TLS flags can also be used such as: If these need to be put in the config file so they apply to just the FTP backend then use the `override` syntax, eg -``` +```text override.ca_cert = XXX override.client_cert = XXX override.client_key = XXX @@ -39675,7 +44609,7 @@ This backend's interactive configuration wizard provides a selection of sensible encoding settings for major FTP servers: ProFTPd, PureFTPd, VsFTPd. Just hit a selection number when prompted. - + ### Standard options Here are the Standard options specific to ftp (FTP). @@ -40016,12 +44950,12 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,RightSpace,Dot - Examples: - - "Asterisk,Ctl,Dot,Slash" - - ProFTPd can't handle '*' in file names - - "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket" - - PureFTPd can't handle '[]' or '*' in file names - - "Ctl,LeftPeriod,Slash" - - VsFTPd can't handle file names starting with dot + - "Asterisk,Ctl,Dot,Slash" + - ProFTPd can't handle '*' in file names + - "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket" + - PureFTPd can't handle '[]' or '*' in file names + - "Ctl,LeftPeriod,Slash" + - VsFTPd can't handle file names starting with dot #### --ftp-description @@ -40034,7 +44968,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -40052,7 +44986,8 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). The implementation of : `--dump headers`, `--dump bodies`, `--dump auth` for debugging isn't the same as @@ -40104,11 +45039,13 @@ premium account. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? 
n) New remote s) Set configuration password @@ -40147,15 +45084,20 @@ d) Delete this remote y/e/d> y ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories and files in the top level of your Gofile - rclone lsf remote: +```console +rclone lsf remote: +``` To copy a local directory to an Gofile directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -40180,7 +45122,6 @@ the following characters are also replaced: | \ | 0x5C | \ | | \| | 0x7C | | | - File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name: @@ -40217,7 +45158,7 @@ directory you wish rclone to display. You can do this with rclone -``` +```console $ rclone lsf -Fip --dirs-only remote: d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/ f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/ @@ -40226,13 +45167,13 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/ The ID to use is the part before the `;` so you could set -``` +```text root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0 ``` To restrict rclone to the `Files` directory. - + ### Standard options Here are the Standard options specific to gofile (Gofile). @@ -40320,7 +45261,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -40356,17 +45297,19 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. ## Configuration -The initial setup for google cloud storage involves getting a token from Google Cloud Storage -which you need to do in your browser. `rclone config` walks you +The initial setup for google cloud storage involves getting a token from Google +Cloud Storage which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text n) New remote d) Delete remote q) Quit config @@ -40443,7 +45386,9 @@ Choose a number from below, or type in your own value \ "us-east1" 13 / Northern Virginia. \ "us-east4" -14 / Oregon. +14 / Ohio. + \ "us-east5" +15 / Oregon. \ "us-west1" location> 12 The storage class to use when storing objects in Google Cloud Storage. @@ -40490,10 +45435,10 @@ y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically +token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this @@ -40504,20 +45449,28 @@ This remote is called `remote` and can now be used like this See all the buckets in your project - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new bucket - rclone mkdir remote:bucket +```console +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket +```console +rclone ls remote:bucket +``` Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. 
- rclone sync --interactive /home/local/directory remote:bucket +```console +rclone sync --interactive /home/local/directory remote:bucket +``` ### Service Account support @@ -40548,52 +45501,67 @@ environment variable. ### Service Account Authentication with Access Tokens -Another option for service account authentication is to use access tokens via *gcloud impersonate-service-account*. Access tokens protect security by avoiding the use of the JSON -key file, which can be breached. They also bypass oauth login flow, which is simpler -on remote VMs that lack a web browser. +Another option for service account authentication is to use access tokens via +*gcloud impersonate-service-account*. Access tokens protect security by avoiding +the use of the JSON key file, which can be breached. They also bypass oauth +login flow, which is simpler on remote VMs that lack a web browser. -If you already have a working service account, skip to step 3. +If you already have a working service account, skip to step 3. -#### 1. Create a service account using +#### 1. Create a service account using - gcloud iam service-accounts create gcs-read-only +```console +gcloud iam service-accounts create gcs-read-only +``` You can re-use an existing service account as well (like the one created above) -#### 2. Attach a Viewer (read-only) or User (read-write) role to the service account - $ PROJECT_ID=my-project - $ gcloud --verbose iam service-accounts add-iam-policy-binding \ - gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ - --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ - --role=roles/storage.objectViewer +#### 2. Attach a Viewer (read-only) or User (read-write) role to the service account -Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles: +```console +$ PROJECT_ID=my-project +$ gcloud --verbose iam service-accounts add-iam-policy-binding \ + gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ + --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ + --role=roles/storage.objectViewer +``` -* *roles/storage.objectUser* -- read-write access but no admin privileges -* *roles/storage.objectViewer* -- read-only access to objects -* *roles/storage.admin* -- create buckets & administrative roles +Use the Google Cloud console to identify a limited role. Some relevant +pre-defined roles: + +- *roles/storage.objectUser* -- read-write access but no admin privileges +- *roles/storage.objectViewer* -- read-only access to objects +- *roles/storage.admin* -- create buckets & administrative roles #### 3. Get a temporary access key for the service account - $ gcloud auth application-default print-access-token \ - --impersonate-service-account \ - gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com +```console +$ gcloud auth application-default print-access-token \ + --impersonate-service-account \ + gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com - ya29.c.c0ASRK0GbAFEewXD [truncated] +ya29.c.c0ASRK0GbAFEewXD [truncated] +``` #### 4. Update `access_token` setting -hit `CTRL-C` when you see *waiting for code*. This will save the config without doing oauth flow - rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx +hit `CTRL-C` when you see *waiting for code*. This will save the config without +doing oauth flow + +```console +rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx +``` #### 5. 
Run rclone as usual - rclone ls dev-gcs:${MY_BUCKET}/ +```console +rclone ls dev-gcs:${MY_BUCKET}/ +``` ### More Info on Service Accounts -* [Official GCS Docs](https://cloud.google.com/compute/docs/access/service-accounts) -* [Guide on Service Accounts using Key Files (less secure, but similar concepts)](https://forum.rclone.org/t/access-using-google-service-account/24822/2) +- [Official GCS Docs](https://cloud.google.com/compute/docs/access/service-accounts) +- [Guide on Service Accounts using Key Files (less secure, but similar concepts)](https://forum.rclone.org/t/access-using-google-service-account/24822/2) ### Anonymous Access @@ -40644,13 +45612,16 @@ Note that the last of these is for setting custom metadata in the form ### Modification times Google Cloud Storage stores md5sum natively. -Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time -with one-second precision as `goog-reserved-file-mtime` in file metadata. +Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores +modification time with one-second precision as `goog-reserved-file-mtime` in +file metadata. -To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries. -`mtime` uses RFC3339 format with one-nanosecond precision. -`goog-reserved-file-mtime` uses Unix timestamp format with one-second precision. -To get modification time from object metadata, rclone reads the metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object updated time. +To ensure compatibility with gsutil, rclone stores modification time in 2 +separate metadata entries. `mtime` uses RFC3339 format with one-nanosecond +precision. `goog-reserved-file-mtime` uses Unix timestamp format with one-second +precision. To get modification time from object metadata, rclone reads the +metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object +updated time. Note that rclone's default modify window is 1ns. Files uploaded by gsutil only contain timestamps with one-second precision. @@ -40670,7 +45641,7 @@ To avoid these possibly unnecessary updates, use `--modify-window 1s`. Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)). @@ -40781,24 +45752,24 @@ Properties: - Type: string - Required: false - Examples: - - "authenticatedRead" - - Object owner gets OWNER access. - - All Authenticated Users get READER access. - - "bucketOwnerFullControl" - - Object owner gets OWNER access. - - Project team owners get OWNER access. - - "bucketOwnerRead" - - Object owner gets OWNER access. - - Project team owners get READER access. - - "private" - - Object owner gets OWNER access. - - Default if left blank. - - "projectPrivate" - - Object owner gets OWNER access. - - Project team members get access according to their roles. - - "publicRead" - - Object owner gets OWNER access. - - All Users get READER access. + - "authenticatedRead" + - Object owner gets OWNER access. + - All Authenticated Users get READER access. + - "bucketOwnerFullControl" + - Object owner gets OWNER access. + - Project team owners get OWNER access. + - "bucketOwnerRead" + - Object owner gets OWNER access. + - Project team owners get READER access. + - "private" + - Object owner gets OWNER access. + - Default if left blank. 
+ - "projectPrivate" + - Object owner gets OWNER access. + - Project team members get access according to their roles. + - "publicRead" + - Object owner gets OWNER access. + - All Users get READER access. #### --gcs-bucket-acl @@ -40811,20 +45782,20 @@ Properties: - Type: string - Required: false - Examples: - - "authenticatedRead" - - Project team owners get OWNER access. - - All Authenticated Users get READER access. - - "private" - - Project team owners get OWNER access. - - Default if left blank. - - "projectPrivate" - - Project team members get access according to their roles. - - "publicRead" - - Project team owners get OWNER access. - - All Users get READER access. - - "publicReadWrite" - - Project team owners get OWNER access. - - All Users get WRITER access. + - "authenticatedRead" + - Project team owners get OWNER access. + - All Authenticated Users get READER access. + - "private" + - Project team owners get OWNER access. + - Default if left blank. + - "projectPrivate" + - Project team members get access according to their roles. + - "publicRead" + - Project team owners get OWNER access. + - All Users get READER access. + - "publicReadWrite" + - Project team owners get OWNER access. + - All Users get WRITER access. #### --gcs-bucket-policy-only @@ -40860,78 +45831,80 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Empty for default location (US) - - "asia" - - Multi-regional location for Asia - - "eu" - - Multi-regional location for Europe - - "us" - - Multi-regional location for United States - - "asia-east1" - - Taiwan - - "asia-east2" - - Hong Kong - - "asia-northeast1" - - Tokyo - - "asia-northeast2" - - Osaka - - "asia-northeast3" - - Seoul - - "asia-south1" - - Mumbai - - "asia-south2" - - Delhi - - "asia-southeast1" - - Singapore - - "asia-southeast2" - - Jakarta - - "australia-southeast1" - - Sydney - - "australia-southeast2" - - Melbourne - - "europe-north1" - - Finland - - "europe-west1" - - Belgium - - "europe-west2" - - London - - "europe-west3" - - Frankfurt - - "europe-west4" - - Netherlands - - "europe-west6" - - Zürich - - "europe-central2" - - Warsaw - - "us-central1" - - Iowa - - "us-east1" - - South Carolina - - "us-east4" - - Northern Virginia - - "us-west1" - - Oregon - - "us-west2" - - California - - "us-west3" - - Salt Lake City - - "us-west4" - - Las Vegas - - "northamerica-northeast1" - - Montréal - - "northamerica-northeast2" - - Toronto - - "southamerica-east1" - - São Paulo - - "southamerica-west1" - - Santiago - - "asia1" - - Dual region: asia-northeast1 and asia-northeast2. - - "eur4" - - Dual region: europe-north1 and europe-west4. - - "nam4" - - Dual region: us-central1 and us-east1. 
+ - "" + - Empty for default location (US) + - "asia" + - Multi-regional location for Asia + - "eu" + - Multi-regional location for Europe + - "us" + - Multi-regional location for United States + - "asia-east1" + - Taiwan + - "asia-east2" + - Hong Kong + - "asia-northeast1" + - Tokyo + - "asia-northeast2" + - Osaka + - "asia-northeast3" + - Seoul + - "asia-south1" + - Mumbai + - "asia-south2" + - Delhi + - "asia-southeast1" + - Singapore + - "asia-southeast2" + - Jakarta + - "australia-southeast1" + - Sydney + - "australia-southeast2" + - Melbourne + - "europe-north1" + - Finland + - "europe-west1" + - Belgium + - "europe-west2" + - London + - "europe-west3" + - Frankfurt + - "europe-west4" + - Netherlands + - "europe-west6" + - Zürich + - "europe-central2" + - Warsaw + - "us-central1" + - Iowa + - "us-east1" + - South Carolina + - "us-east4" + - Northern Virginia + - "us-east5" + - Ohio + - "us-west1" + - Oregon + - "us-west2" + - California + - "us-west3" + - Salt Lake City + - "us-west4" + - Las Vegas + - "northamerica-northeast1" + - Montréal + - "northamerica-northeast2" + - Toronto + - "southamerica-east1" + - São Paulo + - "southamerica-west1" + - Santiago + - "asia1" + - Dual region: asia-northeast1 and asia-northeast2. + - "eur4" + - Dual region: europe-north1 and europe-west4. + - "nam4" + - Dual region: us-central1 and us-east1. #### --gcs-storage-class @@ -40944,20 +45917,20 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Default - - "MULTI_REGIONAL" - - Multi-regional storage class - - "REGIONAL" - - Regional storage class - - "NEARLINE" - - Nearline storage class - - "COLDLINE" - - Coldline storage class - - "ARCHIVE" - - Archive storage class - - "DURABLE_REDUCED_AVAILABILITY" - - Durable reduced availability storage class + - "" + - Default + - "MULTI_REGIONAL" + - Multi-regional storage class + - "REGIONAL" + - Regional storage class + - "NEARLINE" + - Nearline storage class + - "COLDLINE" + - Coldline storage class + - "ARCHIVE" + - Archive storage class + - "DURABLE_REDUCED_AVAILABILITY" + - Durable reduced availability storage class #### --gcs-env-auth @@ -40972,10 +45945,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter credentials in the next step. - - "true" - - Get GCP IAM credentials from the environment (env vars or IAM). + - "false" + - Enter credentials in the next step. + - "true" + - Get GCP IAM credentials from the environment (env vars or IAM). ### Advanced options @@ -41133,7 +46106,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -41142,7 +46115,8 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Google Drive @@ -41158,11 +46132,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? 
n) New remote
r) Rename remote
@@ -41234,10 +46210,10 @@ y/e/d> y
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
+machine without an internet-connected web browser available.

Note that rclone runs a webserver on your local machine to collect the
-token as returned from Google if using web browser to automatically
+token as returned from Google if using a web browser to automatically
authenticate. This only runs from the moment it opens your browser to
the moment you get back the verification code. This is on
`http://127.0.0.1:53682/` and it
@@ -41248,15 +46224,21 @@ You can then use it like this,

List directories in top level of your drive

-    rclone lsd remote:
+```console
+rclone lsd remote:
+```

List all the files in your drive

-    rclone ls remote:
+```console
+rclone ls remote:
+```

To copy a local directory to a drive directory called backup

-    rclone copy /home/source remote:backup
+```console
+rclone copy /home/source remote:backup
+```

### Scopes

@@ -41308,9 +46290,9 @@ directories.

### Root folder ID

-This option has been moved to the advanced section. You can set the `root_folder_id` for rclone. This is the directory
-(identified by its `Folder ID`) that rclone considers to be the root
-of your drive.
+This option has been moved to the advanced section. You can set the
+`root_folder_id` for rclone. This is the directory (identified by its
+`Folder ID`) that rclone considers to be the root of your drive.

Normally you will leave this blank and rclone will determine the
correct root to use itself.
@@ -41358,49 +46340,51 @@ instead, or set the equivalent environment variable.

Let's say that you are the administrator of a Google Workspace. The
goal is to read or write data on an individual's Drive account, who IS
-a member of the domain. We'll call the domain **example.com**, and the
-user **foo@example.com**.
+a member of the domain. We'll call the domain **example.com**, and the
+user **foo@example.com**.

There are a few steps we need to go through to accomplish this:

##### 1. Create a service account for example.com

-  - To create a service account and obtain its credentials, go to the
-[Google Developer Console](https://console.developers.google.com).
-  - You must have a project - create one if you don't and make sure you are on the selected project.
-  - Then go to "IAM & admin" -> "Service Accounts".
-  - Use the "Create Service Account" button. Fill in "Service account name"
-and "Service account ID" with something that identifies your client.
-  - Select "Create And Continue". Step 2 and 3 are optional.
+- To create a service account and obtain its credentials, go to the
+  [Google Developer Console](https://console.developers.google.com).
+- You must have a project - create one if you don't and make sure you are
+  on the selected project.
+- Then go to "IAM & admin" -> "Service Accounts".
+- Use the "Create Service Account" button. Fill in "Service account name"
+  and "Service account ID" with something that identifies your client.
+- Select "Create And Continue". Step 2 and 3 are optional.
+- Click on the newly created service account +- Click "Keys" and then "Add Key" and then "Create new key" +- Choose type "JSON" and click create +- This will download a small JSON file that rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button. ##### 2. Allowing API access to example.com Google Drive - - Go to example.com's [Workspace Admin Console](https://admin.google.com) - - Go into "Security" (or use the search bar) - - Select "Access and data control" and then "API controls" - - Click "Manage domain-wide delegation" - - Click "Add new" - - In the "Client ID" field enter the service account's -"Client ID" - this can be found in the Developer Console under -"IAM & Admin" -> "Service Accounts", then "View Client ID" for -the newly created service account. -It is a ~21 character numerical string. - - In the next field, "OAuth Scopes", enter -`https://www.googleapis.com/auth/drive` -to grant read/write access to Google Drive specifically. -You can also use `https://www.googleapis.com/auth/drive.readonly` for read only access. - - Click "Authorise" +- Go to example.com's [Workspace Admin Console](https://admin.google.com) +- Go into "Security" (or use the search bar) +- Select "Access and data control" and then "API controls" +- Click "Manage domain-wide delegation" +- Click "Add new" +- In the "Client ID" field enter the service account's + "Client ID" - this can be found in the Developer Console under + "IAM & Admin" -> "Service Accounts", then "View Client ID" for + the newly created service account. + It is a ~21 character numerical string. +- In the next field, "OAuth Scopes", enter + `https://www.googleapis.com/auth/drive` + to grant read/write access to Google Drive specifically. + You can also use `https://www.googleapis.com/auth/drive.readonly` for read + only access. +- Click "Authorise" ##### 3. Configure rclone, assuming a new install -``` +```text rclone config n/s/q> n # New @@ -41417,20 +46401,23 @@ y/n> # Auto config, n ##### 4. Verify that it's working - - `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup` - - The arguments do: - - `-v` - verbose logging - - `--drive-impersonate foo@example.com` - this is what does +- `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup` +- The arguments do: + - `-v` - verbose logging + - `--drive-impersonate foo@example.com` - this is what does the magic, pretending to be user foo. - - `lsf` - list files in a parsing friendly way - - `gdrive:backup` - use the remote called gdrive, work in + - `lsf` - list files in a parsing friendly way + - `gdrive:backup` - use the remote called gdrive, work in the folder named backup. 
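If rclone should impersonate the same user every time, the setting can also
live in the remote definition rather than being passed as a flag on each
command. A minimal sketch of such a config section, reusing the hypothetical
names from the steps above:

```ini
[gdrive]
type = drive
scope = drive
# Service account key downloaded in step 1
service_account_file = /home/foo/myJSONfile.json
# User to impersonate, equivalent to --drive-impersonate
impersonate = foo@example.com
```

With this in place, a plain `rclone lsf gdrive:backup` behaves like the
`--drive-impersonate` example above.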
-Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead: - - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step 1 - - use rclone without specifying the `--drive-impersonate` option, like this: - `rclone -v lsf gdrive:backup` +Note: in case you configured a specific root folder on gdrive and rclone is +unable to access the contents of that folder when using `--drive-impersonate`, +do this instead: +- in the gdrive web interface, share your root folder with the user/email of the + new Service Account you created/selected at step 1 +- use rclone without specifying the `--drive-impersonate` option, like this: + `rclone -v lsf gdrive:backup` ### Shared drives (team drives) @@ -41444,7 +46431,7 @@ Drive ID if you prefer. For example: -``` +```text Configure this as a Shared Drive (Team Drive)? y) Yes n) No @@ -41481,14 +46468,18 @@ docs](https://rclone.org/docs/#fast-list) for more details. It does this by combining multiple `list` calls into a single API request. This works by combining many `'%s' in parents` filters into one expression. -To list the contents of directories a, b and c, the following requests will be send by the regular `List` function: -``` +To list the contents of directories a, b and c, the following requests will be +send by the regular `List` function: + +```text trashed=false and 'a' in parents trashed=false and 'b' in parents trashed=false and 'c' in parents ``` + These can now be combined into a single request: -``` + +```text trashed=false and ('a' in parents or 'b' in parents or 'c' in parents) ``` @@ -41497,7 +46488,8 @@ It will use the `--checkers` value to specify the number of requests to run in In tests, these batch requests were up to 20x faster than the regular method. Running the following command against different sized folders gives: -``` + +```console rclone lsjson -vv -R --checkers=6 gdrive:folder ``` @@ -41536,8 +46528,8 @@ revision of that file. Revisions follow the standard google policy which at time of writing was - * They are deleted after 30 days or 100 revisions (whatever comes first). - * They do not count towards a user storage quota. +- They are deleted after 30 days or 100 revisions (whatever comes first). +- They do not count towards a user storage quota. ### Deleting files @@ -41565,28 +46557,40 @@ For shortcuts pointing to files: - When listing a file shortcut appears as the destination file. - When downloading the contents of the destination file is downloaded. -- When updating shortcut file with a non shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut. -- When server-side moving (renaming) the shortcut is renamed, not the destination file. -- When server-side copying the shortcut is copied, not the contents of the shortcut. (unless `--drive-copy-shortcut-content` is in use in which case the contents of the shortcut gets copied). +- When updating shortcut file with a non shortcut file, the shortcut is removed + then a new file is uploaded in place of the shortcut. +- When server-side moving (renaming) the shortcut is renamed, not the destination + file. +- When server-side copying the shortcut is copied, not the contents of the shortcut. + (unless `--drive-copy-shortcut-content` is in use in which case the contents of + the shortcut gets copied). - When deleting the shortcut is deleted not the linked file. 
-- When setting the modification time, the modification time of the linked file will be set. +- When setting the modification time, the modification time of the linked file + will be set. For shortcuts pointing to folders: -- When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder appear (including any sub folders) +- When listing the shortcut appears as a folder and that folder will contain the + contents of the linked folder appear (including any sub folders) - When downloading the contents of the linked folder and sub contents are downloaded - When uploading to a shortcut folder the file will be placed in the linked folder -- When server-side moving (renaming) the shortcut is renamed, not the destination folder +- When server-side moving (renaming) the shortcut is renamed, not the destination + folder - When server-side copying the contents of the linked folder is copied, not the shortcut. -- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not the linked folder. -- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the linked folder will be deleted. +- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not + the linked folder. +- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the + linked folder will be deleted. -The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts. +The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be +used to create shortcuts. Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag or the corresponding `skip_shortcuts` configuration setting. -If you have shortcuts that lead to an infinite recursion in your drive (e.g. a shortcut pointing to a parent folder), `skip_shortcuts` might be mandatory to be able to copy the drive. +If you have shortcuts that lead to an infinite recursion in your drive (e.g. a +shortcut pointing to a parent folder), `skip_shortcuts` might be mandatory to +be able to copy the drive. ### Emptying trash @@ -41652,11 +46656,12 @@ Here are some examples for allowed and prohibited conversions. This limitation can be disabled by specifying `--drive-allow-import-name-change`. When using this flag, rclone can convert multiple files types resulting in the same document type at once, e.g. with `--drive-import-formats docx,odt,txt`, -all files having these extension would result in a document represented as a docx file. +all files having these extension would result in a document represented as a +docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the -file again or delete them when the name changes. +file again or delete them when the name changes. Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there more that are not @@ -41707,7 +46712,7 @@ Google Documents. | url | INI style link file | macOS, Windows | | webloc | macOS specific XML format | macOS | - + ### Standard options Here are the Standard options specific to drive (Google Drive). @@ -41750,20 +46755,20 @@ Properties: - Type: string - Required: false - Examples: - - "drive" - - Full access all files, excluding Application Data Folder. 
- - "drive.readonly" - - Read-only access to file metadata and file contents. - - "drive.file" - - Access to files created by rclone only. - - These are visible in the drive website. - - File authorization is revoked when the user deauthorizes the app. - - "drive.appfolder" - - Allows read and write access to the Application Data folder. - - This is not visible in the drive website. - - "drive.metadata.readonly" - - Allows read-only access to file metadata but - - does not allow any access to read or download file content. + - "drive" + - Full access all files, excluding Application Data Folder. + - "drive.readonly" + - Read-only access to file metadata and file contents. + - "drive.file" + - Access to files created by rclone only. + - These are visible in the drive website. + - File authorization is revoked when the user deauthorizes the app. + - "drive.appfolder" + - Allows read and write access to the Application Data folder. + - This is not visible in the drive website. + - "drive.metadata.readonly" + - Allows read-only access to file metadata but + - does not allow any access to read or download file content. #### --drive-service-account-file @@ -42451,16 +47456,16 @@ Properties: - Type: Bits - Default: read - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "failok" - - If writing fails log errors only, don't fail the transfer - - "read,write" - - Read and Write the value. + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "failok" + - If writing fails log errors only, don't fail the transfer + - "read,write" + - Read and Write the value. #### --drive-metadata-permissions @@ -42481,16 +47486,16 @@ Properties: - Type: Bits - Default: off - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "failok" - - If writing fails log errors only, don't fail the transfer - - "read,write" - - Read and Write the value. + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "failok" + - If writing fails log errors only, don't fail the transfer + - "read,write" + - Read and Write the value. #### --drive-metadata-labels @@ -42518,16 +47523,16 @@ Properties: - Type: Bits - Default: off - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "failok" - - If writing fails log errors only, don't fail the transfer - - "read,write" - - Read and Write the value. + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "failok" + - If writing fails log errors only, don't fail the transfer + - "read,write" + - Read and Write the value. #### --drive-encoding @@ -42555,10 +47560,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter credentials in the next step. - - "true" - - Get GCP IAM credentials from the environment (env vars or IAM). + - "false" + - Enter credentials in the next step. + - "true" + - Get GCP IAM credentials from the environment (env vars or IAM). #### --drive-description @@ -42600,9 +47605,11 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info. Here are the commands specific to the drive backend. 
-Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -42614,54 +47621,66 @@ These can be run on a running backend using the rc command ### get -Get command for fetching the drive config parameters +Get command for fetching the drive config parameters. - rclone backend get remote: [options] [+] +```console +rclone backend get remote: [options] [+] +``` -This is a get command which will be used to fetch the various drive config parameters +This is a get command which will be used to fetch the various drive config +parameters. -Usage Examples: - - rclone backend get drive: [-o service_account_file] [-o chunk_size] - rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] +Usage examples: +```console +rclone backend get drive: [-o service_account_file] [-o chunk_size] +rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] +``` Options: -- "chunk_size": show the current upload chunk size -- "service_account_file": show the current service account file +- "chunk_size": Show the current upload chunk size. +- "service_account_file": Show the current service account file. ### set -Set command for updating the drive config parameters +Set command for updating the drive config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` -This is a set command which will be used to update the various drive config parameters +This is a set command which will be used to update the various drive config +parameters. -Usage Examples: - - rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] - rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] +Usage examples: +```console +rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] +rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] +``` Options: -- "chunk_size": update the current upload chunk size -- "service_account_file": update the current service account file +- "chunk_size": Update the current upload chunk size. +- "service_account_file": Update the current service account file. ### shortcut -Create shortcuts from files or directories +Create shortcuts from files or directories. - rclone backend shortcut remote: [options] [+] +```console +rclone backend shortcut remote: [options] [+] +``` This command creates shortcuts from files or directories. -Usage: +Usage examples: - rclone backend shortcut drive: source_item destination_shortcut - rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut +```console +rclone backend shortcut drive: source_item destination_shortcut +rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut +``` In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The @@ -42673,54 +47692,61 @@ relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:". - Options: -- "target": optional target remote for the shortcut destination +- "target": Optional target remote for the shortcut destination. 
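As with the get and set commands above, shortcut creation can also be driven
over the rc API on a running rclone. A sketch using the generic
backend/command call, with the same hypothetical item names as in the
examples above (`-a` passes the positional arguments, `-o` the options):

```console
rclone rc backend/command command=shortcut fs=drive: -a source_item -a destination_shortcut -o target=drive2:
```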
### drives -List the Shared Drives available to this account +List the Shared Drives available to this account. - rclone backend drives remote: [options] [+] +```console +rclone backend drives remote: [options] [+] +``` This command lists the Shared Drives (Team Drives) available to this account. -Usage: +Usage example: - rclone backend [-o config] drives drive: +```console +rclone backend [-o config] drives drive: +``` -This will return a JSON list of objects like this +This will return a JSON list of objects like this: - [ - { - "id": "0ABCDEF-01234567890", - "kind": "drive#teamDrive", - "name": "My Drive" - }, - { - "id": "0ABCDEFabcdefghijkl", - "kind": "drive#teamDrive", - "name": "Test Drive" - } - ] +```json +[ + { + "id": "0ABCDEF-01234567890", + "kind": "drive#teamDrive", + "name": "My Drive" + }, + { + "id": "0ABCDEFabcdefghijkl", + "kind": "drive#teamDrive", + "name": "Test Drive" + } +] +``` With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive. - [My Drive] - type = alias - remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: +```ini +[My Drive] +type = alias +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: - [Test Drive] - type = alias - remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: +[Test Drive] +type = alias +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: - [AllDrives] - type = combine - upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +[AllDrives] +type = combine +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +``` Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be @@ -42728,46 +47754,55 @@ substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree. - ### untrash -Untrash files and directories +Untrash files and directories. - rclone backend untrash remote: [options] [+] +```console +rclone backend untrash remote: [options] [+] +``` This command untrashes all the files and directories in the directory passed in recursively. -Usage: +Usage example: + +```console +rclone backend untrash drive:directory +rclone backend --interactive untrash drive:directory subdir +``` This takes an optional directory to trash which make this easier to use via the API. - rclone backend untrash drive:directory - rclone backend --interactive untrash drive:directory subdir - -Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it. +Use the --interactive/-i or --dry-run flag to see what would be restored before +restoring it. Result: - { - "Untrashed": 17, - "Errors": 0 - } - +```json +{ + "Untrashed": 17, + "Errors": 0 +} +``` ### copyid -Copy files by ID +Copy files by ID. - rclone backend copyid remote: [options] [+] +```console +rclone backend copyid remote: [options] [+] +``` -This command copies files by ID +This command copies files by ID. -Usage: +Usage examples: - rclone backend copyid drive: ID path - rclone backend copyid drive: ID1 path1 ID2 path2 +```console +rclone backend copyid drive: ID path +rclone backend copyid drive: ID1 path1 ID2 path2 +``` It copies the drive file with ID given to the path (an rclone path which will be passed internally to rclone copyto). The ID and path pairs can be @@ -42780,21 +47815,25 @@ component will be used as the file name. 
If the destination is a drive backend then server-side copying will be attempted if possible. -Use the --interactive/-i or --dry-run flag to see what would be copied before copying. - +Use the --interactive/-i or --dry-run flag to see what would be copied before +copying. ### moveid -Move files by ID +Move files by ID. - rclone backend moveid remote: [options] [+] +```console +rclone backend moveid remote: [options] [+] +``` -This command moves files by ID +This command moves files by ID. -Usage: +Usage examples: - rclone backend moveid drive: ID path - rclone backend moveid drive: ID1 path1 ID2 path2 +```console +rclone backend moveid drive: ID path +rclone backend moveid drive: ID1 path1 ID2 path2 +``` It moves the drive file with ID given to the path (an rclone path which will be passed internally to rclone moveto). @@ -42808,69 +47847,84 @@ attempted if possible. Use the --interactive/-i or --dry-run flag to see what would be moved beforehand. - ### exportformats -Dump the export formats for debug purposes +Dump the export formats for debug purposes. - rclone backend exportformats remote: [options] [+] +```console +rclone backend exportformats remote: [options] [+] +``` ### importformats -Dump the import formats for debug purposes +Dump the import formats for debug purposes. - rclone backend importformats remote: [options] [+] +```console +rclone backend importformats remote: [options] [+] +``` ### query -List files using Google Drive query language +List files using Google Drive query language. - rclone backend query remote: [options] [+] +```console +rclone backend query remote: [options] [+] +``` -This command lists files based on a query +This command lists files based on a query. -Usage: +Usage example: + +```console +rclone backend query drive: query +``` - rclone backend query drive: query - The query syntax is documented at [Google Drive Search query terms and operators](https://developers.google.com/drive/api/guides/ref-search-terms). For example: - rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'" +```console +rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'" +``` If the query contains literal ' or \ characters, these need to be escaped with \ characters. 
"'" becomes "\'" and "\" becomes "\\\", for example to match a file named "foo ' \.txt": - rclone backend query drive: "name = 'foo \' \\\.txt'" +```console +rclone backend query drive: "name = 'foo \' \\\.txt'" +``` The result is a JSON array of matches, for example: - [ - { - "createdTime": "2017-06-29T19:58:28.537Z", - "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD", - "md5Checksum": "68518d16be0c6fbfab918be61d658032", - "mimeType": "text/plain", - "modifiedTime": "2024-02-02T10:40:02.874Z", - "name": "foo ' \\.txt", - "parents": [ - "0BxAe_BCDE4zkFGZpcWJGek0xbzC" - ], - "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC", - "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893", - "size": "311", - "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC" - } - ] +```json +[ + { + "createdTime": "2017-06-29T19:58:28.537Z", + "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD", + "md5Checksum": "68518d16be0c6fbfab918be61d658032", + "mimeType": "text/plain", + "modifiedTime": "2024-02-02T10:40:02.874Z", + "name": "foo ' \\.txt", + "parents": [ + "0BxAe_BCDE4zkFGZpcWJGek0xbzC" + ], + "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC", + "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893", + "size": "311", + "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC" + } +] +```console ### rescue -Rescue or delete any orphaned files +Rescue or delete any orphaned files. - rclone backend rescue remote: [options] [+] +```console +rclone backend rescue remote: [options] [+] +``` This command rescues or deletes any orphaned files or directories. @@ -42880,28 +47934,33 @@ are no longer in any folder in Google Drive. This command finds those files and either rescues them to a directory you specify or deletes them. -Usage: - This can be used in 3 ways. -First, list all orphaned files +First, list all orphaned files: - rclone backend rescue drive: +```console +rclone backend rescue drive: +``` -Second rescue all orphaned files to the directory indicated +Second rescue all orphaned files to the directory indicated: - rclone backend rescue drive: "relative/path/to/rescue/directory" +```console +rclone backend rescue drive: "relative/path/to/rescue/directory" +``` -e.g. To rescue all orphans to a directory called "Orphans" in the top level +E.g. to rescue all orphans to a directory called "Orphans" in the top level: - rclone backend rescue drive: Orphans - -Third delete all orphaned files to the trash - - rclone backend rescue drive: -o delete +```console +rclone backend rescue drive: Orphans +``` +Third delete all orphaned files to the trash: +```console +rclone backend rescue drive: -o delete +``` + ## Limitations @@ -42979,7 +48038,12 @@ second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google. -It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second so it is recommended to stay under that number as if you use more than that, it will cause rclone to rate limit and make things slower. +It is strongly recommended to use your own client ID as the default +rclone ID is heavily used. If you have multiple services running, it +is recommended to use an API key for each service. 
The default Google +quota is 10 transactions per second so it is recommended to stay under +that number as if you use more than that, it will cause rclone to rate +limit and make things slower. Here is how to create your own Google Drive client ID for rclone: @@ -42997,43 +48061,49 @@ be the same account as the Google Drive you want to access) credentials", which opens the wizard). 5. If you already configured an "Oauth Consent Screen", then skip -to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button -(near the top right corner of the right panel), then select "External" -and click on "CREATE"; on the next screen, enter an "Application name" -("rclone" is OK); enter "User Support Email" (your own email is OK); -enter "Developer Contact Email" (your own email is OK); then click on -"Save" (all other data is optional). You will also have to add [some scopes](https://developers.google.com/drive/api/guides/api-specific-auth), +to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button +(near the top right corner of the right panel), then click "Get started". +On the next screen, enter an "Application name" +("rclone" is OK); enter "User Support Email" (your own email is OK); +Next, under Audience select "External". Next enter your own contact information, +agree to terms and click "Create". You should now see rclone (or your project name) +in a box in the top left of the screen. + + (PS: if you are a GSuite user, you could also select "Internal" instead +of "External" above, but this will restrict API use to Google Workspace +users in your organisation). + + You will also have to add [some scopes](https://developers.google.com/drive/api/guides/api-specific-auth), including - - `https://www.googleapis.com/auth/docs` - - `https://www.googleapis.com/auth/drive` in order to be able to edit, -create and delete files with RClone. - - `https://www.googleapis.com/auth/drive.metadata.readonly` which you may also want to add. - - If you want to add all at once, comma separated it would be `https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly`. -6. After adding scopes, click -"Save and continue" to add test users. Be sure to add your own account to -the test users. Once you've added yourself as a test user and saved the -changes, click again on "Credentials" on the left panel to go back to -the "Credentials" screen. + - `https://www.googleapis.com/auth/docs` + - `https://www.googleapis.com/auth/drive` in order to be able to edit, + create and delete files with RClone. + - `https://www.googleapis.com/auth/drive.metadata.readonly` which you may + also want to add. - (PS: if you are a GSuite user, you could also select "Internal" instead -of "External" above, but this will restrict API use to Google Workspace -users in your organisation). + To do this, click Data Access on the left side panel, click "add or + remove scopes" and select the three above and press update or go to the + "Manually add scopes" text box (scroll down) and enter + "https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly", press add to table then update. -7. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, -then select "OAuth client ID". + You should now see the three scopes on your Data access page. Now press save + at the bottom! -8. Choose an application type of "Desktop app" and click "Create". (the default name is fine) +6. 
After adding scopes, click Audience +Scroll down and click "+ Add users". Add yourself as a test user and press save. -9. It will show you a client ID and client secret. Make a note of these. - - (If you selected "External" at Step 5 continue to Step 10. +7. Go to Overview on the left panel, click "Create OAuth client". Choose + an application type of "Desktop app" and click "Create". (the default name is fine) + +8. It will show you a client ID and client secret. Make a note of these. + (If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to - Step 11 but your destination drive must be part of the same Google Workspace.) + Step 10 but your destination drive must be part of the same Google Workspace.) -10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. - You will also want to add yourself as a test user. +9. Go to "Audience" and then click "PUBLISH APP" button and confirm. + Add yourself as a test user if you haven't already. -11. Provide the noted client ID and client secret to rclone. +10. Provide the noted client ID and client secret to rclone. Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" @@ -43049,11 +48119,12 @@ testing mode would also be sufficient. (Thanks to @balazer on github for these instructions.) -Sometimes, creation of an OAuth consent in Google API Console fails due to an error message -“The request failed because changes to one of the field of the resource is not supported”. -As a convenient workaround, the necessary Google Drive API key can be created on the -[Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) page. -Just push the Enable the Drive API button to receive the Client ID and Secret. +Sometimes, creation of an OAuth consent in Google API Console fails due to an +error message "The request failed because changes to one of the field of the +resource is not supported". As a convenient workaround, the necessary Google +Drive API key can be created on the +[Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) +page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console. # Google Photos @@ -43079,11 +48150,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -43147,10 +48220,10 @@ y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically +token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. 
This is on `http://127.0.0.1:53682/` and this @@ -43161,20 +48234,28 @@ This remote is called `remote` and can now be used like this See all the albums in your photos - rclone lsd remote:album +```console +rclone lsd remote:album +``` Make a new album - rclone mkdir remote:album/newAlbum +```console +rclone mkdir remote:album/newAlbum +``` List the contents of an album - rclone ls remote:album/newAlbum +```console +rclone ls remote:album/newAlbum +``` Sync `/home/local/images` to the Google Photos, removing any excess files in the album. - rclone sync --interactive /home/local/image remote:album/newAlbum +```console +rclone sync --interactive /home/local/image remote:album/newAlbum +``` ### Layout @@ -43191,7 +48272,7 @@ Note that all your photos and videos will appear somewhere under `media`, but they may not appear under `album` unless you've put them into albums. -``` +```text / - upload - file1.jpg @@ -43255,11 +48336,13 @@ may create new directories (albums) under `album`. If you copy files with a directory hierarchy in there then rclone will create albums with the `/` character in them. For example if you do - rclone copy /path/to/images remote:album/images +```console +rclone copy /path/to/images remote:album/images +``` and the images directory contains -``` +```text images - file1.jpg dir @@ -43272,11 +48355,11 @@ images Then rclone will create the following albums with the following files in - images - - file1.jpg + - file1.jpg - images/dir - - file2.jpg + - file2.jpg - images/dir2/dir3 - - file3.jpg + - file3.jpg This means that you can use the `album` path pretty much like a normal filesystem and it is a good target for repeated syncing. @@ -43284,7 +48367,7 @@ filesystem and it is a good target for repeated syncing. The `shared-album` directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface. - + ### Standard options Here are the Standard options specific to google photos (Google Photos). @@ -43578,7 +48661,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -43611,7 +48694,11 @@ When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115). -**The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort** +**The current google API does not allow photos to be downloaded at original +resolution. This is very important if you are, for example, relying on +"Google Photos" as a backup of your photos. You will not be able to use +rclone to redownload original images. You could use 'google takeout' +to recover the original photos as a last resort** **NB** you **can** use the [--gphotos-proxy](#gphotos-proxy) flag to use a headless browser to download images in full resolution. @@ -43698,7 +48785,7 @@ client_id stops working) then you can make your own. Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id). 
You will need these scopes instead of the drive ones detailed: -``` +```text https://www.googleapis.com/auth/photoslibrary.appendonly https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata @@ -43708,6 +48795,7 @@ https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata Hasher is a special overlay backend to create remotes which handle checksums for other remotes. It's main functions include: + - Emulate hash types unimplemented by backends - Cache checksums to help with slow hashing of large local or (S)FTP files - Warm up checksum cache from external SUM files @@ -43728,8 +48816,9 @@ Now proceed to interactive or manual configuration. ### Interactive configuration Run `rclone config`: -``` -No remotes found, make a new one? + +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -43775,7 +48864,7 @@ usually `YOURHOME/.config/rclone/rclone.conf`. Open it in your favorite text editor, find section for the base remote and create new section for hasher like in the following examples: -``` +```ini [Hasher1] type = hasher remote = myRemote:path @@ -43790,12 +48879,13 @@ max_age = 24h ``` Hasher takes basically the following parameters: -- `remote` is required, + +- `remote` is required - `hashes` is a comma separated list of supported checksums - (by default `md5,sha1`), -- `max_age` - maximum time to keep a checksum value in the cache, - `0` will disable caching completely, - `off` will cache "forever" (that is until the files get changed). + (by default `md5,sha1`) +- `max_age` - maximum time to keep a checksum value in the cache + `0` will disable caching completely + `off` will cache "forever" (that is until the files get changed) Make sure the `remote` has `:` (colon) in. If you specify the remote without a colon then rclone will use a local directory of that name. So if you use @@ -43810,9 +48900,9 @@ If you use `remote = name` literally then rclone will put files Now you can use it as `Hasher2:subdir/file` instead of base remote. Hasher will transparently update cache with new checksums when a file is fully read or overwritten, like: -``` -rclone copy External:path/file Hasher:dest/path +```console +rclone copy External:path/file Hasher:dest/path rclone cat Hasher:path/to/file > /dev/null ``` @@ -43820,16 +48910,16 @@ The way to refresh **all** cached checksums (even unsupported by the base backen for a subtree is to **re-download** all files in the subtree. For example, use `hashsum --download` using **any** supported hashsum on the command line (we just care to re-read): -``` -rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null +```console +rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null rclone backend dump Hasher:path/to/subtree ``` You can print or drop hashsum cache using custom backend commands: -``` -rclone backend dump Hasher:dir/subdir +```console +rclone backend dump Hasher:dir/subdir rclone backend drop Hasher: ``` @@ -43838,7 +48928,7 @@ rclone backend drop Hasher: Hasher supports two backend commands: generic SUM file `import` and faster but less consistent `stickyimport`. -``` +```console rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4] ``` @@ -43847,6 +48937,7 @@ can point to either a local or an `other-remote:path` text file in SUM format. 
The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill in the cache entries correspondingly. + - Paths in the SUM file are treated as relative to `hasher:dir/subdir`. - The command will **not** check that supplied values are correct. You **must know** what you are doing. @@ -43857,7 +48948,7 @@ correspondingly. `--checkers` to make it faster. Or use `stickyimport` if you don't care about fingerprints and consistency. -``` +```console rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1 ``` @@ -43870,7 +48961,7 @@ or by full re-read/re-write of the files. ## Configuration reference - + ### Standard options Here are the Standard options specific to hasher (Better checksums for other remotes). @@ -43944,9 +49035,11 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info. Here are the commands specific to the hasher backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -43958,54 +49051,73 @@ These can be run on a running backend using the rc command ### drop -Drop cache +Drop cache. - rclone backend drop remote: [options] [+] +```console +rclone backend drop remote: [options] [+] +``` Completely drop checksum cache. -Usage Example: - rclone backend drop hasher: +Usage example: + +```console +rclone backend drop hasher: +``` ### dump -Dump the database +Dump the database. - rclone backend dump remote: [options] [+] +```console +rclone backend dump remote: [options] [+] +``` -Dump cache records covered by the current remote +Dump cache records covered by the current remote. ### fulldump -Full dump of the database +Full dump of the database. - rclone backend fulldump remote: [options] [+] +```console +rclone backend fulldump remote: [options] [+] +``` -Dump all cache records in the database +Dump all cache records in the database. ### import -Import a SUM file +Import a SUM file. - rclone backend import remote: [options] [+] +```console +rclone backend import remote: [options] [+] +``` Amend hash cache from a SUM file and bind checksums to files by size/time. -Usage Example: - rclone backend import hasher:subdir md5 /path/to/sum.md5 +Usage example: + +```console +rclone backend import hasher:subdir md5 /path/to/sum.md5 +``` ### stickyimport -Perform fast import of a SUM file +Perform fast import of a SUM file. - rclone backend stickyimport remote: [options] [+] +```console +rclone backend stickyimport remote: [options] [+] +``` Fill hash cache from a SUM file without verifying file fingerprints. -Usage Example: - rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5 +Usage example: +```console +rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5 +``` + ## Implementation details (advanced) @@ -44058,8 +49170,9 @@ Databases can be shared between multiple rclone processes. # HDFS -[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a -distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework. +[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) +is a distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) +framework. Paths are specified as `remote:` or `remote:path/to/dir`. @@ -44067,11 +49180,13 @@ Paths are specified as `remote:` or `remote:path/to/dir`. 
Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -44135,15 +49250,21 @@ This remote is called `remote` and can now be used like this See all the top level directories - rclone lsd remote: +```console +rclone lsd remote: +``` List the contents of a directory - rclone ls remote:directory +```console +rclone ls remote:directory +``` Sync the remote `directory` to `/home/local/directory`, deleting any excess files. - rclone sync --interactive remote:directory /home/local/directory +```console +rclone sync --interactive remote:directory /home/local/directory +``` ### Setting up your own HDFS instance for testing @@ -44152,7 +49273,7 @@ or use the docker image from the tests: If you want to build the docker image -``` +```console git clone https://github.com/rclone/rclone.git cd rclone/fstest/testserver/images/test-hdfs docker build --rm -t rclone/test-hdfs . @@ -44160,7 +49281,7 @@ docker build --rm -t rclone/test-hdfs . Or you can just use the latest one pushed -``` +```console docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs ``` @@ -44168,15 +49289,15 @@ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:80 For this docker image the remote needs to be configured like this: -``` +```ini [remote] type = hdfs namenode = 127.0.0.1:8020 username = root ``` -You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data -uploaded will be lost.) +You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use +volumes, so all data uploaded will be lost.) ### Modification times @@ -44188,7 +49309,8 @@ No checksums are implemented. ### Usage information -You can use the `rclone about remote:` command which will display filesystem size and current usage. +You can use the `rclone about remote:` command which will display filesystem +size and current usage. ### Restricted filename characters @@ -44201,7 +49323,7 @@ the following characters are also replaced: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8). - + ### Standard options Here are the Standard options specific to hdfs (Hadoop distributed file system). @@ -44230,8 +49352,8 @@ Properties: - Type: string - Required: false - Examples: - - "root" - - Connect to hdfs as root. + - "root" + - Connect to hdfs as root. ### Advanced options @@ -44268,8 +49390,8 @@ Properties: - Type: string - Required: false - Examples: - - "privacy" - - Ensure authentication, integrity and encryption enabled. + - "privacy" + - Ensure authentication, integrity and encryption enabled. #### --hdfs-encoding @@ -44295,10 +49417,11 @@ Properties: - Type: string - Required: false - + ## Limitations +- Erasure coding not supported, see [issue #8808](https://github.com/rclone/rclone/issues/8808) - No server-side `Move` or `DirMove`. - Checksums not implemented. @@ -44316,11 +49439,13 @@ which you need to do in your browser. Here is an example of how to make a remote called `remote`. 
First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found - make a new one n) New remote s) Set configuration password @@ -44368,7 +49493,7 @@ and hence should not be shared with other persons.** See the [below section](#keeping-your-tokens-safe) for more information. See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from HiDrive. This only runs from the moment it opens @@ -44377,38 +49502,47 @@ The webserver runs on `http://127.0.0.1:53682/`. If local port `53682` is protected by a firewall you may need to temporarily unblock the firewall to complete authorization. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your HiDrive root folder - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your HiDrive filesystem - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to a HiDrive directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Keeping your tokens safe -Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text. -Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password. -Therefore you should make sure no one else can access your configuration. +Any OAuth-tokens will be stored by rclone in the remote's configuration file as +unencrypted text. Anyone can use a valid refresh-token to access your HiDrive +filesystem without knowing your password. Therefore you should make sure no one +else can access your configuration. It is possible to encrypt rclone's configuration file. -You can find information on securing your configuration file by viewing the [configuration encryption docs](https://rclone.org/docs/#configuration-encryption). +You can find information on securing your configuration file by viewing the +[configuration encryption docs](https://rclone.org/docs/#configuration-encryption). ### Invalid refresh token -As can be verified [here](https://developer.hidrive.com/basics-flows/), +As can be verified on [HiDrive's OAuth guide](https://developer.hidrive.com/basics-flows/), each `refresh_token` (for Native Applications) is valid for 60 days. If used to access HiDrivei, its validity will be automatically extended. This means that if you - * Don't use the HiDrive remote for 60 days +- Don't use the HiDrive remote for 60 days then rclone will return an error which includes a text that implies the refresh token is *invalid* or *expired*. @@ -44417,7 +49551,9 @@ To fix this you will need to authorize rclone to access your HiDrive account aga Using - rclone config reconnect remote: +```console +rclone config reconnect remote: +``` the process is very similar to the process of initial setup exemplified before. @@ -44439,7 +49575,7 @@ Therefore rclone will automatically replace these characters, if files or folders are stored or accessed with such names. You can read about how this filename encoding works in general -[here](overview/#restricted-filenames). 
+in the [main docs](https://rclone.org/overview/#restricted-filenames). Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less. @@ -44455,9 +49591,9 @@ so you may want to restrict this behaviour on systems with limited resources. You can customize this behaviour using the following options: -* `chunk_size`: size of file parts -* `upload_cutoff`: files larger or equal to this in size will use a chunked transfer -* `upload_concurrency`: number of file-parts to upload at the same time +- `chunk_size`: size of file parts +- `upload_cutoff`: files larger or equal to this in size will use a chunked transfer +- `upload_concurrency`: number of file-parts to upload at the same time See the below section about configuration options for more details. @@ -44474,9 +49610,10 @@ This works by prepending the contents of the `root_prefix` option to any paths accessed by rclone. For example, the following two ways to access the home directory are equivalent: - rclone lsd --hidrive-root-prefix="/users/test/" remote:path - - rclone lsd remote:/users/test/path +```console +rclone lsd --hidrive-root-prefix="/users/test/" remote:path +rclone lsd remote:/users/test/path +``` See the below section about configuration options for more details. @@ -44485,14 +49622,14 @@ See the below section about configuration options for more details. By default, rclone will know the number of directory members contained in a directory. For example, `rclone lsd` uses this information. -The acquisition of this information will result in additional time costs for HiDrive's API. -When dealing with large directory structures, it may be desirable to circumvent this time cost, -especially when this information is not explicitly needed. -For this, the `disable_fetching_member_count` option can be used. +The acquisition of this information will result in additional time costs for +HiDrive's API. When dealing with large directory structures, it may be +desirable to circumvent this time cost, especially when this information is not +explicitly needed. For this, the `disable_fetching_member_count` option can be used. See the below section about configuration options for more details. - + ### Standard options Here are the Standard options specific to hidrive (HiDrive). @@ -44534,10 +49671,10 @@ Properties: - Type: string - Default: "rw" - Examples: - - "rw" - - Read and write access to resources. - - "ro" - - Read-only access to resources. + - "rw" + - Read and write access to resources. + - "ro" + - Read-only access to resources. ### Advanced options @@ -44606,13 +49743,13 @@ Properties: - Type: string - Default: "user" - Examples: - - "user" - - User-level access to management permissions. - - This will be sufficient in most cases. - - "admin" - - Extensive access to management permissions. - - "owner" - - Full access to management permissions. + - "user" + - User-level access to management permissions. + - This will be sufficient in most cases. + - "admin" + - Extensive access to management permissions. + - "owner" + - Full access to management permissions. #### --hidrive-root-prefix @@ -44628,14 +49765,14 @@ Properties: - Type: string - Default: "/" - Examples: - - "/" - - The topmost directory accessible by rclone. - - This will be equivalent with "root" if rclone uses a regular HiDrive user account. - - "root" - - The topmost directory of the HiDrive user account - - "" - - This specifies that there is no root-prefix for your paths. 
- - When using this you will always need to specify paths to this remote with a valid parent e.g. "remote:/path/to/dir" or "remote:root/path/to/dir". + - "/" + - The topmost directory accessible by rclone. + - This will be equivalent with "root" if rclone uses a regular HiDrive user account. + - "root" + - The topmost directory of the HiDrive user account + - "" + - This specifies that there is no root-prefix for your paths. + - When using this you will always need to specify paths to this remote with a valid parent e.g. "remote:/path/to/dir" or "remote:root/path/to/dir". #### --hidrive-endpoint @@ -44742,7 +49879,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -44752,10 +49889,10 @@ HiDrive is able to store symbolic links (*symlinks*) by design, for example, when unpacked from a zip archive. There exists no direct mechanism to manage native symlinks in remotes. -As such this implementation has chosen to ignore any native symlinks present in the remote. -rclone will not be able to access or show any symlinks stored in the hidrive-remote. -This means symlinks cannot be individually removed, copied, or moved, -except when removing, copying, or moving the parent folder. +As such this implementation has chosen to ignore any native symlinks present in +the remote. rclone will not be able to access or show any symlinks stored in +the hidrive-remote. This means symlinks cannot be individually removed, copied, +or moved, except when removing, copying, or moving the parent folder. *This does not affect the `.rclonelink`-files that rclone uses to encode and store symbolic links.* @@ -44804,11 +49941,13 @@ To just download a single file it is easier to use Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -44857,15 +49996,21 @@ This remote is called `remote` and can now be used like this See all the top level directories - rclone lsd remote: +```console +rclone lsd remote: +``` List the contents of a directory - rclone ls remote:directory +```console +rclone ls remote:directory +``` Sync the remote `directory` to `/home/local/directory`, deleting any excess files. - rclone sync --interactive remote:directory /home/local/directory +```console +rclone sync --interactive remote:directory /home/local/directory +``` ### Read only @@ -44884,13 +50029,17 @@ No checksums are stored. Since the http remote only has one config parameter it is easy to use without a config file: - rclone lsd --http-url https://beta.rclone.org :http: +```console +rclone lsd --http-url https://beta.rclone.org :http: +``` or: - rclone lsd :http,url='https://beta.rclone.org': - +```console +rclone lsd :http,url='https://beta.rclone.org': +``` + ### Standard options Here are the Standard options specific to http (HTTP). @@ -45000,13 +50149,32 @@ Properties: - Type: string - Required: false +### Metadata + +HTTP metadata keys are case insensitive and are always returned in lower case. + +Here are the possible system metadata items for the http backend. 
+ +| Name | Help | Type | Example | Read Only | +|------|------|------|---------|-----------| +| cache-control | Cache-Control header | string | no-cache | N | +| content-disposition | Content-Disposition header | string | inline | N | +| content-disposition-filename | Filename retrieved from Content-Disposition header | string | file.txt | N | +| content-encoding | Content-Encoding header | string | gzip | N | +| content-language | Content-Language header | string | en-US | N | +| content-type | Content-Type header | string | text/plain | N | + +See the [metadata](https://rclone.org/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the http backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -45020,16 +50188,20 @@ These can be run on a running backend using the rc command Set command for updating the config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` This set command can be used to update the config parameters for a running http backend. -Usage Examples: +Usage examples: - rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=remote: -o url=https://example.com +```console +rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=remote: -o url=https://example.com +``` The option keys are named as they are in the config file. @@ -45039,8 +50211,7 @@ will default to those currently in use. It doesn't return anything. - - + ## Limitations @@ -45049,18 +50220,20 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # ImageKit + This is a backend for the [ImageKit.io](https://imagekit.io/) storage service. -#### About ImageKit -[ImageKit.io](https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web. +[ImageKit.io](https://imagekit.io/) provides real-time image and video +optimizations, transformations, and CDN delivery. Over 1,000 businesses +and 70,000 developers trust ImageKit with their images and videos on the web. - -#### Accounts & Pricing - -To use this backend, you need to [create an account](https://imagekit.io/registration/) on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans). +To use this backend, you need to [create an account](https://imagekit.io/registration/) +on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements +grow, upgrade to a plan that best fits your needs. 
See [the pricing details](https://imagekit.io/plans). ## Configuration @@ -45068,16 +50241,18 @@ Here is an example of making an imagekit configuration. Firstly create a [ImageKit.io](https://imagekit.io/) account and choose a plan. -You will need to log in and get the `publicKey` and `privateKey` for your account from the developer section. +You will need to log in and get the `publicKey` and `privateKey` for your account +from the developer section. Now run -``` + +```console rclone config ``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -45129,20 +50304,26 @@ e) Edit this remote d) Delete this remote y/e/d> y ``` + List directories in the top level of your Media Library -``` + +```console rclone lsd imagekit-media-library: ``` + Make a new directory. -``` + +```console rclone mkdir imagekit-media-library:directory ``` + List the contents of a directory. -``` + +```console rclone ls imagekit-media-library:directory ``` -### Modified time and hashes +### Modified time and hashes ImageKit does not support modification times or hashes yet. @@ -45150,7 +50331,7 @@ ImageKit does not support modification times or hashes yet. No checksums are supported. - + ### Standard options Here are the Standard options specific to imagekit (ImageKit.io). @@ -45271,26 +50452,32 @@ Here are the possible system metadata items for the imagekit backend. See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - + # iCloud Drive - ## Configuration -The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device. +The initial setup for an iCloud Drive backend involves getting a trust token/session. +This can be done by simply using the regular iCloud password, and accepting the code +prompt on another iCloud connected device. -**IMPORTANT**: At the moment an app specific password won't be accepted. Only use your regular password and 2FA. +**IMPORTANT**: At the moment an app specific password won't be accepted. Only +use your regular password and 2FA. -`rclone config` walks you through the token creation. The trust token is valid for 30 days. After which you will have to reauthenticate with `rclone reconnect` or `rclone config`. +`rclone config` walks you through the token creation. The trust token is valid +for 30 days. After which you will have to reauthenticate with `rclone reconnect` +or `rclone config`. Here is an example of how to make a remote called `iclouddrive`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -45346,21 +50533,28 @@ y/e/d> y ADP is currently unsupported and need to be disabled -On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF. +On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' +must be ON, and 'Advanced Data Protection' OFF. ## Troubleshooting ### Missing PCS cookies from the request -This means you have Advanced Data Protection (ADP) turned on. This is not supported at the moment. If you want to use rclone you will have to turn it off. See above for how to turn it off. +This means you have Advanced Data Protection (ADP) turned on. 
This is not supported +at the moment. If you want to use rclone you will have to turn it off. See above +for how to turn it off. -You will need to clear the `cookies` and the `trust_token` fields in the config. Or you can delete the remote config and start again. +You will need to clear the `cookies` and the `trust_token` fields in the config. +Or you can delete the remote config and start again. You should then run `rclone reconnect remote:`. -Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running `rclone reconnect remote:` until rclone functions properly. - +Note that changing the ADP setting may not take effect immediately - you may +need to wait a few hours or a day before you can get rclone to work - keep +clearing the config entry and running `rclone reconnect remote:` until rclone +functions properly. + ### Standard options Here are the Standard options specific to iclouddrive (iCloud Drive). @@ -45450,13 +50644,14 @@ Properties: - Type: string - Required: false - + # Internet Archive The Internet Archive backend utilizes Items on [archive.org](https://archive.org/) -Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) for the API this backend uses. +Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) +for the API this backend uses. Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`. @@ -45467,31 +50662,47 @@ Once you have made a remote, you can use it like this: Make a new item - rclone mkdir remote:item +```console +rclone mkdir remote:item +``` List the contents of a item - rclone ls remote:item +```console +rclone ls remote:item +``` Sync `/home/local/directory` to the remote item, deleting any excess files in the item. - rclone sync --interactive /home/local/directory remote:item +```console +rclone sync --interactive /home/local/directory remote:item +``` ## Notes -Because of Internet Archive's architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, all uploads/deletes will not show up immediately and takes some time to be available. -The per-item queue is enqueued to an another queue, Item Deriver Queue. [You can check the status of Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) This queue has a limit, and it may block you from uploading, or even deleting. You should avoid uploading a lot of small files for better behavior. -You can optionally wait for the server's processing to finish, by setting non-zero value to `wait_archive` key. -By making it wait, rclone can do normal file comparison. -Make sure to set a large enough value (e.g. `30m0s` for smaller files) as it can take a long time depending on server's queue. +Because of Internet Archive's architecture, it enqueues write operations (and +extra post-processings) in a per-item queue. You can check item's queue at +. Because of that, all +uploads/deletes will not show up immediately and takes some time to be available. +The per-item queue is enqueued to an another queue, Item Deriver Queue. 
+[You can check the status of Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) +This queue has a limit, and it may block you from uploading, or even deleting. +You should avoid uploading a lot of small files for better behavior. + +You can optionally wait for the server's processing to finish, by setting +non-zero value to `wait_archive` key. By making it wait, rclone can do normal +file comparison. Make sure to set a large enough value (e.g. `30m0s` for smaller +files) as it can take a long time depending on server's queue. ## About metadata + This backend supports setting, updating and reading metadata of each file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone. The following are reserved by Internet Archive: + - `name` - `source` - `size` @@ -45504,9 +50715,11 @@ The following are reserved by Internet Archive: - `summation` Trying to set values to these keys is ignored with a warning. -Only setting `mtime` is an exception. Doing so make it the identical behavior as setting ModTime. +Only setting `mtime` is an exception. Doing so make it the identical +behavior as setting ModTime. -rclone reserves all the keys starting with `rclone-`. Setting value for these keys will give you warnings, but values are set according to request. +rclone reserves all the keys starting with `rclone-`. Setting value for +these keys will give you warnings, but values are set according to request. If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, that supports one value per one key. @@ -45524,7 +50737,9 @@ changeable, as they are created by the Internet Archive automatically. These auto-created files can be excluded from the sync using [metadata filtering](https://rclone.org/filtering/#metadata). - rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata" +```console +rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata" +``` Which excludes from the sync any files which have the `source=metadata` or `format=Metadata` flags which are added to @@ -45537,12 +50752,14 @@ Most applies to the other providers as well, any differences are described [belo First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -45608,7 +50825,7 @@ d) Delete this remote y/e/d> y ``` - + ### Standard options Here are the Standard options specific to internetarchive (Internet Archive). @@ -45776,117 +50993,198 @@ Here are the possible system metadata items for the internetarchive backend. See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - + # Jottacloud -Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters -in Norway. 
In addition to the official service at [jottacloud.com](https://www.jottacloud.com/), -it also provides white-label solutions to different companies, such as: -* Telia - * Telia Cloud (cloud.telia.se) - * Telia Sky (sky.telia.no) -* Tele2 - * Tele2 Cloud (mittcloud.tele2.se) -* Onlime - * Onlime Cloud Storage (onlime.dk) -* Elkjøp (with subsidiaries): - * Elkjøp Cloud (cloud.elkjop.no) - * Elgiganten Sweden (cloud.elgiganten.se) - * Elgiganten Denmark (cloud.elgiganten.dk) - * Giganti Cloud (cloud.gigantti.fi) - * ELKO Cloud (cloud.elko.is) +Jottacloud is a cloud storage service provider from a Norwegian company, using +its own datacenters in Norway. -Most of the white-label versions are supported by this backend, although may require different -authentication setup - described below. +In addition to the official service at [jottacloud.com](https://www.jottacloud.com/), +it also provides white-label solutions to different companies. The following +are currently supported by this backend, using a different authentication setup +as described [below](#whitelabel-authentication): + +- Elkjøp (with subsidiaries): + - Elkjøp Cloud (cloud.elkjop.no) + - Elgiganten Cloud (cloud.elgiganten.dk) + - Elgiganten Cloud (cloud.elgiganten.se) + - ELKO Cloud (cloud.elko.is) + - Gigantti Cloud (cloud.gigantti.fi) +- Telia + - Telia Cloud (cloud.telia.se) + - Telia Sky (sky.telia.no) +- Tele2 + - Tele2 Cloud (mittcloud.tele2.se) +- Onlime + - Onlime (onlime.dk) +- MediaMarkt + - MediaMarkt Cloud (mediamarkt.jottacloud.com) + - Let's Go Cloud (letsgo.jotta.cloud) Paths are specified as `remote:path` Paths may be as deep as required, e.g. `remote:directory/subdirectory`. -## Authentication types +## Authentication -Some of the whitelabel versions uses a different authentication method than the official service, -and you have to choose the correct one when setting up the remote. +Authentication in Jottacloud is in general based on OAuth and OpenID Connect +(OIDC). There are different variants to choose from, depending on which service +you are using, e.g. a white-label service may only support one of them. Note +that there is no documentation to rely on, so the descriptions provided here +are based on observations and may not be accurate. -### Standard authentication +Jottacloud uses two optional OAuth security mechanisms, referred to as "Refresh +Token Rotation" and "Automatic Reuse Detection", which has some implications. +Access tokens normally have one hour expiry, after which they need to be +refreshed (rotated), an operation that requires the refresh token to be +supplied. Rclone does this automatically. This is standard OAuth. But in +Jottacloud, such a refresh operation not only creates a new access token, but +also refresh token, and invalidates the existing refresh token, the one that +was supplied. It keeps track of the history of refresh tokens, sometimes +referred to as a token family, descending from the original refresh token that +was issued after the initial authentication. This is used to detect any +attempts at reusing old refresh tokens, and trigger an immedate invalidation of +the current refresh token, and effectively the entire refresh token family. -The standard authentication method used by the official service (jottacloud.com), as well as -some of the whitelabel services, requires you to generate a single-use personal login token -from the account security settings in the service's web interface. 
Log in to your account, -go to "Settings" and then "Security", or use the direct link presented to you by rclone when -configuring the remote: . Scroll down to the section -"Personal login token", and click the "Generate" button. Note that if you are using a -whitelabel service you probably can't use the direct link, you need to find the same page in -their dedicated web interface, and also it may be in a different location than described above. +When the current refresh token has been invalidated, next time rclone tries to +perform a token refresh, it will fail with an error message something along the +lines of: -To access your account from multiple instances of rclone, you need to configure each of them -with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one -location, and copy the configuration file to a second location where you also want to run -rclone and access the same remote. Then you need to replace the token for one of them, using -the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) command, which -requires you to generate a new personal login token and supply as input. If you do not -do this, the token may easily end up being invalidated, resulting in both instances failing -with an error message something along the lines of: +```text +CRITICAL: Failed to create file system for "remote:": (...): couldn't fetch token: invalid_grant: maybe token expired? - try refreshing with "rclone config reconnect remote:" +``` - oauth2: cannot fetch token: 400 Bad Request - Response: {"error":"invalid_grant","error_description":"Stale token"} +If you run rclone with verbosity level 2 (`-vv`), you will see a debug message +with an additional error description from the OAuth response: -When this happens, you need to replace the token as described above to be able to use your -remote again. +```text +DEBUG : remote: got fatal oauth error: oauth2: "invalid_grant" "Session doesn't have required client" +``` -All personal login tokens you have taken into use will be listed in the web interface under -"My logged in devices", and from the right side of that list you can click the "X" button to -revoke individual tokens. +(The error description used to be "Stale token" instead of "Session doesn't +have required client", so you may see references to that in older descriptions +of this situation.) -### Legacy authentication +When this happens, you need to re-authenticate to be able to use your remote +again, e.g. using the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) +command as suggested in the error message. This will create an entirely new +refresh token (family). -If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not have the option -to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select -yes when the setup asks for legacy authentication and enter your username and password. -The rest of the setup is identical to the default setup. +A typical example of how you may end up in this situation, is if you create +a Jottacloud remote with rclone in one location, and then copy the +configuration file to a second location where you start using rclone to access +the same remote. Eventually there will now be a token refresh attempt with an +invalidated token, i.e. refresh token reuse, resulting in both instances +starting to fail with the "invalid_grant" error. 
It is possible to copy remote +configurations, but you must then replace the token for one of them using the +[config reconnect](https://rclone.org/commands/rclone_config_reconnect/) +command. -### Telia Cloud authentication +You can get some overview of your active tokens in your service's web user +interface, if you navigate to "Settings" and then "Security" (in which case +you end up at or similar). Down on +that page you have a section "My logged in devices". This contains a list +of entries which seemingly represents currently valid refresh tokens, or +refresh token families. From the right side of that list you can click a +button ("X") to revoke (invalidate) it, which means you will still have access +using an existing access token until that expires, but you will not be able to +perform a token refresh. Note that this entire "My logged in devices" feature +seem to behave a bit differently with different authentication variants and +with use of the different (white-label) services. -Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and -additionally uses a separate authentication flow where the username is generated internally. To setup -rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is -identical to the default setup. +### Standard -### Tele2 Cloud authentication +This is an OAuth variant designed for command-line applications. It is +primarily supported by the official service (jottacloud.com), but may also be +supported by some of the white-label services. The information necessary to be +able to perform authentication, like domain name and endpoint to connect to, +are found automatically (it is encoded into the supplied login token, described +next), so you do not need to specify which service to configure. -As Tele2-Com Hem merger was completed this authentication can be used for former Com Hem Cloud and -Tele2 Cloud customers as no support for creating a CLI token exists, and additionally uses a separate -authentication flow where the username is generated internally. To setup rclone to use Tele2 Cloud, -choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup. +When configuring a remote, you are asked to enter a single-use personal login +token, which you must manually generate from the account security settings in +the service's web interface. You do not need a web browser on the same machine +like with traditional OAuth, but need to use a web browser somewhere, and be +able to be copy the generated string into your rclone configuration session. +Log in to your service's web user interface, navigate to "Settings" and then +"Security", or, for the official service, use the direct link presented to you +by rclone when configuring the remote: . +Scroll down to the section "Personal login token", and click the "Generate" +button. Copy the presented string and paste it where rclone asks for it. Rclone +will then use this to perform an initial token request, and receive a regular +OAuth token which it stores in your remote configuration. There will then also +be a new entry in the "My logged in devices" list in the web interface, with +device name and application name "Jottacloud CLI". -### Onlime Cloud Storage authentication +Each time a new token is created this way, i.e. 
a new personal login token is +generated and traded in for an OAuth token, you get an entirely new refresh +token family, with a new entry in the "My logged in devices". You can create as +many remotes as you want, and use multiple instances of rclone on same or +different machine, as long as you configure them separately like this, and not +get your self into the refresh token reuse issue described above. -Onlime has sold access to Jottacloud proper, while providing localized support to Danish Customers, but -have recently set up their own hosting, transferring their customers from Jottacloud servers to their -own ones. +### Traditional -This, of course, necessitates using their servers for authentication, but otherwise functionality and -architecture seems equivalent to Jottacloud. +Jottacloud also supports a more traditional OAuth variant. Most of the +white-label services support this, and for many of them this is the only +alternative because they do not support personal login tokens. This method +relies on pre-defined service-specific domain names and endpoints, and rclone +need you to specify which service to configure. This also means that any +changes to existing or additions of new white-label services needs an update +in the rclone backend implementation. -To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest -of the setup is identical to the default setup. +When configuring a remote, you must interactively login to an OAuth +authorization web site, and a one-time authorization code is sent back to +rclone behind the scene, which it uses to request an OAuth token. This means +that you need to be on a machine with an internet-connected web browser. If you +need it on a machine where this is not the case, then you will have to create +the configuration on a different machine and copy it from there. The Jottacloud +backend does not support the `rclone authorize` command. See the +[remote setup docs](/remote_setup) for details. + +Jottacloud exerts some form of strict session management when authenticating +using this method. This leads to some unexpected cases of the "invalid_grant" +error described above, and effectively limits you to only use of a single +active authentication on the same machine. I.e. you can only create a single +rclone remote, and you can't even log in with the service's official desktop +client while having a rclone remote configured, or else you will eventually get +all sessions invalidated and are forced to re-authenticate. + +When you have successfully authenticated, there will be an entry in the +"My logged in devices" list in the web interface representing your session. It +will typically be listed with application name "Jottacloud for Desktop" or +similar (it depends on the white-label service configuration). + +### Legacy + +Originally Jottacloud used an OAuth variant which required your account's +username and password to be specified. When Jottacloud migrated to the newer +methods, some white-label versions (those from Elkjøp) still used this legacy +method for a long time. Currently there are no known uses of this, it is still +supported by rclone, but the support will be removed in a future version. ## Configuration -Here is an example of how to make a remote called `remote` with the default setup. First run: +Here is an example of how to make a remote called `remote` with the default setup. 
+First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password q) Quit config n/s/q> n + +Enter name for new remote. name> remote + Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. @@ -45895,60 +51193,63 @@ XX / Jottacloud \ (jottacloud) [snip] Storage> jottacloud + +Option client_id. +OAuth Client Id. +Leave blank normally. +Enter a value. Press Enter to leave empty. +client_id> + +Option client_secret. +OAuth Client Secret. +Leave blank normally. +Enter a value. Press Enter to leave empty. +client_secret> + Edit advanced config? y) Yes n) No (default) y/n> n + Option config_type. -Select authentication type. -Choose a number from below, or type in an existing string value. +Type of authentication. +Choose a number from below, or type in an existing value of type string. Press Enter for the default (standard). / Standard authentication. - 1 | Use this if you're a normal Jottacloud user. + | This is primarily supported by the official service, but may also be + | supported by some white-label services. It is designed for command-line + 1 | applications, and you will be asked to enter a single-use personal login + | token which you must manually generate from the account security settings + | in the web interface of your service. \ (standard) + / Traditional authentication. + | This is supported by the official service and all white-label services + | that rclone knows about. You will be asked which service to connect to. + 2 | It has a limitation of only a single active authentication at a time. You + | need to be on, or have access to, a machine with an internet-connected + | web browser. + \ (traditional) / Legacy authentication. - 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + 3 | This is no longer supported by any known services and not recommended + | used. You will be asked for your account's username and password. \ (legacy) - / Telia Cloud authentication. - 3 | Use this if you are using Telia Cloud. - \ (telia) - / Tele2 Cloud authentication. - 4 | Use this if you are using Tele2 Cloud. - \ (tele2) - / Onlime Cloud authentication. - 5 | Use this if you are using Onlime Cloud. - \ (onlime) config_type> 1 + +Option config_login_token. Personal login token. -Generate here: https://www.jottacloud.com/web/secure -Login Token> +Generate it from the account security settings in the web interface of your +service, for the official service on https://www.jottacloud.com/web/secure. +Enter a value. +config_login_token> + Use a non-standard device/mountpoint? Choosing no, the default, will let you access the storage used for the archive section of the official Jottacloud client. If you instead want to access the sync or the backup section, for example, you must choose yes. y) Yes n) No (default) -y/n> y -Option config_device. -The device to use. In standard setup the built-in Jotta device is used, -which contains predefined mountpoints for archive, sync etc. All other devices -are treated as backup devices by the official Jottacloud client. You may create -a new by entering a unique name. -Choose a number from below, or type in your own string value. -Press Enter for the default (DESKTOP-3H31129). - 1 > DESKTOP-3H31129 - 2 > Jotta -config_device> 2 -Option config_mountpoint. -The mountpoint to use for the built-in device Jotta. 
-The standard setup is to use the Archive mountpoint. Most other mountpoints -have very limited support in rclone and should generally be avoided. -Choose a number from below, or type in an existing string value. -Press Enter for the default (Archive). - 1 > Archive - 2 > Shared - 3 > Sync -config_mountpoint> 1 +y/n> n + Configuration complete. Options: - type: jottacloud @@ -45967,19 +51268,26 @@ d) Delete this remote y/e/d> y ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Jottacloud - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Jottacloud - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Jottacloud directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Devices and Mountpoints @@ -46060,18 +51368,21 @@ as they can't be used in XML strings. ### Deleting files -By default, rclone will send all files to the trash when deleting files. They will be permanently -deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately -by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable. -Emptying the trash is supported by the [cleanup](https://rclone.org/commands/rclone_cleanup/) command. +By default, rclone will send all files to the trash when deleting files. They +will be permanently deleted automatically after 30 days. You may bypass the +trash and permanently delete files immediately by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) +flag, or set the equivalent environment variable. Emptying the trash is +supported by the [cleanup](https://rclone.org/commands/rclone_cleanup/) command. ### Versions -Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. -Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website. +Jottacloud supports file versioning. When rclone uploads a new version of a +file it creates a new version of it. Currently rclone only supports retrieving +the current version but older versions can be accessed via the Jottacloud Website. -Versioning can be disabled by `--jottacloud-no-versions` option. This is achieved by deleting the remote file prior to uploading -a new version. If the upload the fails no version of the file will be available in the remote. +Versioning can be disabled by `--jottacloud-no-versions` option. This is +achieved by deleting the remote file prior to uploading a new version. If the +upload the fails no version of the file will be available in the remote. ### Quota information @@ -46079,7 +51390,7 @@ To view your current quota you can use the `rclone about remote:` command which will display your usage limit (unless it is unlimited) and the current usage. - + ### Standard options Here are the Standard options specific to jottacloud (Jottacloud). @@ -46262,22 +51573,24 @@ Here are the possible system metadata items for the jottacloud backend. See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - + ## Limitations Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". -There are quite a few characters that can't be in Jottacloud file names. 
Rclone will map these names to and from an identical -looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead. +There are quite a few characters that can't be in Jottacloud file names. +Rclone will map these names to and from an identical looking unicode +equivalent. For example if a file has a ? in it will be mapped to ? instead. Jottacloud only supports filenames up to 255 characters in length. ## Troubleshooting -Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove -operations to previously deleted paths to fail. Emptying the trash should help in such cases. +Jottacloud exhibits some inconsistent behaviours regarding deleted files and +folders which may cause Copy, Move and DirMove operations to previously +deleted paths to fail. Emptying the trash should help in such cases. # Koofr @@ -46294,11 +51607,13 @@ giving the password a nice name like `rclone` and clicking on generate. Here is an example of how to make a remote called `koofr`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -46360,19 +51675,25 @@ You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this: List directories in top level of your Koofr - rclone lsd koofr: +```console +rclone lsd koofr: +``` List all the files in your Koofr - rclone ls koofr: +```console +rclone ls koofr: +``` To copy a local directory to an Koofr directory called backup - rclone copy /home/source koofr:backup +```console +rclone copy /home/source koofr:backup +``` ### Restricted filename characters @@ -46386,7 +51707,7 @@ the following characters are also replaced: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in XML strings. - + ### Standard options Here are the Standard options specific to koofr (Koofr, Digi Storage and other Koofr-compatible storage providers). @@ -46402,12 +51723,12 @@ Properties: - Type: string - Required: false - Examples: - - "koofr" - - Koofr, https://app.koofr.net/ - - "digistorage" - - Digi Storage, https://storage.rcs-rds.ro/ - - "other" - - Any other Koofr API compatible storage service + - "koofr" + - Koofr, https://app.koofr.net/ + - "digistorage" + - Digi Storage, https://storage.rcs-rds.ro/ + - "other" + - Any other Koofr API compatible storage service #### --koofr-endpoint @@ -46500,7 +51821,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -46511,20 +51832,23 @@ Note that Koofr is case insensitive so you can't have a file called ### Koofr -This is the original [Koofr](https://koofr.eu) storage provider used as main example and described in the [configuration](#configuration) section above. +This is the original [Koofr](https://koofr.eu) storage provider used as main +example and described in the [configuration](#configuration) section above. ### Digi Storage -[Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud storage service run by [Digi.ro](https://www.digi.ro/) that -provides a Koofr API. 
+[Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud +storage service run by [Digi.ro](https://www.digi.ro/) that provides a Koofr API. Here is an example of how to make a remote called `ds`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -46583,15 +51907,19 @@ y/e/d> y ### Other -You may also want to use another, public or private storage provider that runs a Koofr API compatible service, by simply providing the base URL to connect to. +You may also want to use another, public or private storage provider that +runs a Koofr API compatible service, by simply providing the base URL to +connect to. -Here is an example of how to make a remote called `other`. First run: +Here is an example of how to make a remote called `other`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -46663,11 +51991,13 @@ Here is an example of making a remote for Linkbox. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -46701,7 +52031,7 @@ y/e/d> y ``` - + ### Standard options Here are the Standard options specific to linkbox (Linkbox). @@ -46732,7 +52062,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -46741,7 +52071,10 @@ as they can't be used in JSON strings. # Mail.ru Cloud -[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS. +[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a +Russian internet company [Mail.Ru Group](https://mail.ru). The official +desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows +and Mac OS. ## Features highlights @@ -46749,12 +52082,13 @@ as they can't be used in JSON strings. - Files have a `last modified time` property, directories don't - Deleted files are by default moved to the trash - Files and directories can be shared via public links -- Partial uploads or streaming are not supported, file size must be known before upload +- Partial uploads or streaming are not supported, file size must be known before + upload - Maximum file size is limited to 2G for a free account, unlimited for paid accounts - Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1 -- If a particular file is already present in storage, one can quickly submit file hash - instead of long file upload (this optimization is supported by rclone) +- If a particular file is already present in storage, one can quickly submit file + hash instead of long file upload (this optimization is supported by rclone) ## Configuration @@ -46770,16 +52104,22 @@ give an error like `oauth2: server response missing access_token`. - Go to Security / "Пароль и безопасность" - Click password for apps / "Пароли для внешних приложений" - Add the password - give it a name - eg "rclone" -- Select the permissions level. For some reason just "Full access to Cloud" (WebDav) doesn't work for Rclone currently. 
You have to select "Full access to Mail, Cloud and Calendar" (all protocols). ([thread on forum.rclone.org](https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298)) -- Copy the password and use this password below - your normal login password won't work. +- Select the permissions level. For some reason just "Full access to Cloud" + (WebDav) doesn't work for Rclone currently. You have to select "Full access + to Mail, Cloud and Calendar" (all protocols). + ([thread on forum.rclone.org](https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298)) +- Copy the password and use this password below - your normal login password + won't work. Now run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -46844,20 +52184,28 @@ You can use the configured backend as shown below: See top level directories - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new directory - rclone mkdir remote:directory +```console +rclone mkdir remote:directory +``` List the contents of a directory - rclone ls remote:directory +```console +rclone ls remote:directory +``` Sync `/home/local/directory` to the remote path, deleting any excess files in the path. - rclone sync --interactive /home/local/directory remote:directory +```console +rclone sync --interactive /home/local/directory remote:directory +``` ### Modification times and hashes @@ -46903,7 +52251,7 @@ the following characters are also replaced: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to mailru (Mail.ru Cloud). @@ -46983,10 +52331,10 @@ Properties: - Type: bool - Default: true - Examples: - - "true" - - Enable - - "false" - - Disable + - "true" + - Enable + - "false" + - Disable ### Advanced options @@ -47057,14 +52405,14 @@ Properties: - Type: string - Default: "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf" - Examples: - - "" - - Empty list completely disables speedup (put by hash). - - "*" - - All files will be attempted for speedup. - - "*.mkv,*.avi,*.mp4,*.mp3" - - Only common audio/video files will be tried for put by hash. - - "*.zip,*.gz,*.rar,*.pdf" - - Only common archives or PDF books will be tried for speedup. + - "" + - Empty list completely disables speedup (put by hash). + - "*" + - All files will be attempted for speedup. + - "*.mkv,*.avi,*.mp4,*.mp3" + - Only common audio/video files will be tried for put by hash. + - "*.zip,*.gz,*.rar,*.pdf" + - Only common archives or PDF books will be tried for speedup. #### --mailru-speedup-max-disk @@ -47079,12 +52427,12 @@ Properties: - Type: SizeSuffix - Default: 3Gi - Examples: - - "0" - - Completely disable speedup (put by hash). - - "1G" - - Files larger than 1Gb will be uploaded directly. - - "3G" - - Choose this option if you have less than 3Gb free on local disk. + - "0" + - Completely disable speedup (put by hash). + - "1G" + - Files larger than 1Gb will be uploaded directly. + - "3G" + - Choose this option if you have less than 3Gb free on local disk. 
#### --mailru-speedup-max-memory @@ -47097,12 +52445,12 @@ Properties: - Type: SizeSuffix - Default: 32Mi - Examples: - - "0" - - Preliminary hashing will always be done in a temporary disk location. - - "32M" - - Do not dedicate more than 32Mb RAM for preliminary hashing. - - "256M" - - You have at most 256Mb RAM free for hash calculations. + - "0" + - Preliminary hashing will always be done in a temporary disk location. + - "32M" + - Do not dedicate more than 32Mb RAM for preliminary hashing. + - "256M" + - You have at most 256Mb RAM free for hash calculations. #### --mailru-check-hash @@ -47115,10 +52463,10 @@ Properties: - Type: bool - Default: true - Examples: - - "true" - - Fail with error. - - "false" - - Ignore and continue. + - "true" + - Fail with error. + - "false" + - Ignore and continue. #### --mailru-user-agent @@ -47174,7 +52522,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -47196,19 +52544,25 @@ encryption. This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption. +**Note** [MEGA S4 Object Storage](/s3#mega), an S3 compatible object +store, also works with rclone and this is recommended for new projects. + Paths are specified as `remote:path` Paths may be as deep as required, e.g. `remote:directory/subdirectory`. + ## Configuration Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -47246,22 +52600,30 @@ d) Delete this remote y/e/d> y ``` -**NOTE:** The encryption keys need to have been already generated after a regular login -via the browser, otherwise attempting to use the credentials in `rclone` will fail. +**NOTE:** The encryption keys need to have been already generated after a regular +login via the browser, otherwise attempting to use the credentials in `rclone` +will fail. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Mega - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Mega - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Mega directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -47291,26 +52653,26 @@ Use `rclone dedupe` to fix duplicated files. #### Object not found -If you are connecting to your Mega remote for the first time, -to test access and synchronization, you may receive an error such as +If you are connecting to your Mega remote for the first time, +to test access and synchronization, you may receive an error such as -``` -Failed to create file system for "my-mega-remote:": +```text +Failed to create file system for "my-mega-remote:": couldn't login: Object (typically, node or user) not found ``` The diagnostic steps often recommended in the [rclone forum](https://forum.rclone.org/search?q=mega) -start with the **MEGAcmd** utility. Note that this refers to -the official C++ command from https://github.com/meganz/MEGAcmd -and not the go language built command from t3rm1n4l/megacmd -that is no longer maintained. +start with the **MEGAcmd** utility. 
Note that this refers to +the official C++ command from +and not the go language built command from t3rm1n4l/megacmd +that is no longer maintained. -Follow the instructions for installing MEGAcmd and try accessing -your remote as they recommend. You can establish whether or not -you can log in using MEGAcmd, and obtain diagnostic information -to help you, and search or work with others in the forum. +Follow the instructions for installing MEGAcmd and try accessing +your remote as they recommend. You can establish whether or not +you can log in using MEGAcmd, and obtain diagnostic information +to help you, and search or work with others in the forum. -``` +```text MEGA CMD> login me@example.com Password: Fetching nodes ... @@ -47319,12 +52681,11 @@ Login complete as me@example.com me@example.com:/$ ``` -Note that some have found issues with passwords containing special -characters. If you can not log on with rclone, but MEGAcmd logs on -just fine, then consider changing your password temporarily to +Note that some have found issues with passwords containing special +characters. If you can not log on with rclone, but MEGAcmd logs on +just fine, then consider changing your password temporarily to pure alphanumeric characters, in case that helps. - #### Repeated commands blocks access Mega remotes seem to get blocked (reject logins) under "heavy use". @@ -47371,7 +52732,7 @@ So, if rclone was working nicely and suddenly you are unable to log-in and you are sure the user and the password are correct, likely you have got the remote blocked for a while. - + ### Standard options Here are the Standard options specific to mega (Mega). @@ -47400,10 +52761,43 @@ Properties: - Type: string - Required: true +#### --mega-2fa + +The 2FA code of your MEGA account if the account is set up with one + +Properties: + +- Config: 2fa +- Env Var: RCLONE_MEGA_2FA +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to mega (Mega). +#### --mega-session-id + +Session (internal use only) + +Properties: + +- Config: session_id +- Env Var: RCLONE_MEGA_SESSION_ID +- Type: string +- Required: false + +#### --mega-master-key + +Master key (internal use only) + +Properties: + +- Config: master_key +- Env Var: RCLONE_MEGA_MASTER_KEY +- Type: string +- Required: false + #### --mega-debug Output more debug from Mega. @@ -47474,18 +52868,23 @@ Properties: - Type: string - Required: false - + ### Process `killed` -On accounts with large files or something else, memory usage can significantly increase when executing list/sync instructions. When running on cloud providers (like AWS with EC2), check if the instance type has sufficient memory/CPU to execute the commands. Use the resource monitoring tools to inspect after sending the commands. Look [at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4). +On accounts with large files or something else, memory usage can significantly +increase when executing list/sync instructions. When running on cloud providers +(like AWS with EC2), check if the instance type has sufficient memory/CPU to +execute the commands. Use the resource monitoring tools to inspect after sending +the commands. Look [at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4). 
## Limitations -This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an opensource +This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) +which is an opensource go library implementing the Mega API. There doesn't appear to be any -documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) source code -so there are likely quite a few errors still remaining in this library. +documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) +source code so there are likely quite a few errors still remaining in this library. Mega allows duplicate files which may confuse rclone. @@ -47503,8 +52902,8 @@ s3). Because it has no parameters you can just use it with the You can configure it as a remote like this with `rclone config` too if you want to: -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -47535,9 +52934,11 @@ y/e/d> y Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, e.g. - rclone mount :memory: /mnt/tmp - rclone serve webdav :memory: - rclone serve sftp :memory: +```console +rclone mount :memory: /mnt/tmp +rclone serve webdav :memory: +rclone serve sftp :memory: +``` ### Modification times and hashes @@ -47548,7 +52949,7 @@ The memory backend supports MD5 hashes and modification times accurate to 1 nS. The memory backend replaces the [default restricted characters set](https://rclone.org/overview/#restricted-characters). - + ### Advanced options Here are the Advanced options specific to memory (In memory object storage system.). @@ -47564,22 +52965,28 @@ Properties: - Type: string - Required: false - + # Akamai NetStorage Paths are specified as `remote:` You may put subdirectories in too, e.g. `remote:/path/to/dir`. -If you have a CP code you can use that as the folder after the domain such as \\/\\/\. +If you have a CP code you can use that as the folder after the domain such +as \\/\\/\. For example, this is commonly configured with or without a CP code: -* **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/` -* **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net` +- **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/` +- **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net` See all buckets - rclone lsd remote: -The initial setup for Netstorage involves getting an account and secret. Use `rclone config` to walk you through the setup process. + +```console +rclone lsd remote: +``` + +The initial setup for Netstorage involves getting an account and secret. +Use `rclone config` to walk you through the setup process. ## Configuration @@ -47587,157 +52994,218 @@ Here's an example of how to make a remote called `ns1`. 1. To begin the interactive configuration process, enter this command: -``` -rclone config -``` + ```console + rclone config + ``` 2. Type `n` to create a new remote. -``` -n) New remote -d) Delete remote -q) Quit config -e/n/d/q> n -``` + ```text + n) New remote + d) Delete remote + q) Quit config + e/n/d/q> n + ``` 3. For this example, enter `ns1` when you reach the name> prompt. -``` -name> ns1 -``` + ```text + name> ns1 + ``` 4. Enter `netstorage` as the type of storage to configure. -``` -Type of storage to configure. -Enter a string value. Press Enter for the default (""). 
-Choose a number from below, or type in your own value -XX / NetStorage - \ "netstorage" -Storage> netstorage -``` + ```text + Type of storage to configure. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + XX / NetStorage + \ "netstorage" + Storage> netstorage + ``` -5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes. +5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, +which is the default. HTTP is provided primarily for debugging purposes. + ```text + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / HTTP protocol + \ "http" + 2 / HTTPS protocol + \ "https" + protocol> 1 + ``` -``` -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / HTTP protocol - \ "http" - 2 / HTTPS protocol - \ "https" -protocol> 1 -``` +6. Specify your NetStorage host, CP code, and any necessary content paths using +this format: `///` -6. Specify your NetStorage host, CP code, and any necessary content paths using this format: `///` - -``` -Enter a string value. Press Enter for the default (""). -host> baseball-nsu.akamaihd.net/123456/content/ -``` + ```text + Enter a string value. Press Enter for the default (""). + host> baseball-nsu.akamaihd.net/123456/content/ + ``` 7. Set the netstorage account name -``` -Enter a string value. Press Enter for the default (""). -account> username -``` -8. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the `y` option to set your own password then enter your secret. + ```text + Enter a string value. Press Enter for the default (""). + account> username + ``` + +8. Set the Netstorage account secret/G2O key which will be used for authentication +purposes. Select the `y` option to set your own password then enter your secret. Note: The secret is stored in the `rclone.conf` file with hex-encoded encryption. -``` -y) Yes type in my own password -g) Generate random password -y/g> y -Enter the password: -password: -Confirm the password: -password: -``` + ```text + y) Yes type in my own password + g) Generate random password + y/g> y + Enter the password: + password: + Confirm the password: + password: + ``` 9. View the summary and confirm your remote configuration. -``` -[ns1] -type = netstorage -protocol = http -host = baseball-nsu.akamaihd.net/123456/content/ -account = username -secret = *** ENCRYPTED *** --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -``` + ```text + [ns1] + type = netstorage + protocol = http + host = baseball-nsu.akamaihd.net/123456/content/ + account = username + secret = *** ENCRYPTED *** + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + ``` This remote is called `ns1` and can now be used. ## Example operations -Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/. +Get started with rclone and NetStorage with these examples. For additional rclone +commands, visit . ### See contents of a directory in your project - rclone lsd ns1:/974012/testing/ +```console +rclone lsd ns1:/974012/testing/ +``` ### Sync the contents local with remote - rclone sync . ns1:/974012/testing/ +```console +rclone sync . 
ns1:/974012/testing/ +``` ### Upload local content to remote - rclone copy notes.txt ns1:/974012/testing/ + +```console +rclone copy notes.txt ns1:/974012/testing/ +``` ### Delete content on remote - rclone delete ns1:/974012/testing/notes.txt -### Move or copy content between CP codes. +```console +rclone delete ns1:/974012/testing/notes.txt +``` -Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes. +### Move or copy content between CP codes - rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/ +Your credentials must have access to two CP codes on the same remote. +You can't perform operations between different remotes. + +```console +rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/ +``` ## Features ### Symlink Support -The Netstorage backend changes the rclone `--links, -l` behavior. When uploading, instead of creating the .rclonelink file, use the "symlink" API in order to create the corresponding symlink on the remote. The .rclonelink file will not be created, the upload will be intercepted and only the symlink file that matches the source file name with no suffix will be created on the remote. +The Netstorage backend changes the rclone `--links, -l` behavior. When uploading, +instead of creating the .rclonelink file, use the "symlink" API in order to create +the corresponding symlink on the remote. The .rclonelink file will not be created, +the upload will be intercepted and only the symlink file that matches the source +file name with no suffix will be created on the remote. -This will effectively allow commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server, refer to "symlink" section below. +This will effectively allow commands like copy/copyto, move/moveto and sync to +upload from local to remote and download from remote to local directories with +symlinks. Due to internal rclone limitations, it is not possible to upload an +individual symlink file to any remote backend. You can always use the "backend +symlink" command to create a symlink on the NetStorage server, refer to "symlink" +section below. -Individual symlink files on the remote can be used with the commands like "cat" to print the destination name, or "delete" to delete symlink, or copy, copy/to and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink. +Individual symlink files on the remote can be used with the commands like "cat" +to print the destination name, or "delete" to delete symlink, or copy, copy/to +and move/moveto to download from the remote to local. Note: individual symlink +files on the remote should be specified including the suffix .rclonelink. -**Note**: No file with the suffix .rclonelink should ever exist on the server since it is not possible to actually upload/create a file with .rclonelink suffix with rclone, it can only exist if it is manually created through a non-rclone method on the remote. 
+**Note**: No file with the suffix .rclonelink should ever exist on the server +since it is not possible to actually upload/create a file with .rclonelink suffix +with rclone, it can only exist if it is manually created through a non-rclone +method on the remote. ### Implicit vs. Explicit Directories With NetStorage, directories can exist in one of two forms: -1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group. -2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file. +1. **Explicit Directory**. This is an actual, physical directory that you have + created in a storage group. +2. **Implicit Directory**. This refers to a directory within a path that has + not been physically created. For example, during upload of a file, nonexistent + subdirectories can be specified in the target path. NetStorage creates these + as "implicit." While the directories aren't physically created, they exist + implicitly and the noted path is connected with the uploaded file. -Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly. +Rclone will intercept all file uploads and mkdir commands for the NetStorage +remote and will explicitly issue the mkdir command for each directory in the +uploading path. This will help with the interoperability with the other Akamai +services such as SFTP and the Content Management Shell (CMShell). Rclone will +not guarantee correctness of operations with implicit directories which might +have been created as a result of using an upload API directly. ### `--fast-list` / ListR support -NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered. +NetStorage remote supports the ListR feature by using the "list" NetStorage API +action to return a lexicographical list of all objects within the specified CP +code, recursing into subdirectories as they're encountered. -* **Rclone will use the ListR method for some commands by default**. Commands such as `lsf -R` will use ListR by default. To disable this, include the `--disable listR` option to use the non-recursive method of listing objects. +- **Rclone will use the ListR method for some commands by default**. Commands +such as `lsf -R` will use ListR by default. To disable this, include the +`--disable listR` option to use the non-recursive method of listing objects. -* **Rclone will not use the ListR method for some commands**. Commands such as `sync` don't use ListR by default. To force using the ListR method, include the `--fast-list` option. +- **Rclone will not use the ListR method for some commands**. Commands such as +`sync` don't use ListR by default. To force using the ListR method, include the +`--fast-list` option. 
-There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list). In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster. +There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list). +In general, the sync command over an existing deep tree on the remote will +run faster with the "--fast-list" flag but with extra memory usage as a side effect. +It might also result in higher CPU utilization but the whole task can be completed +faster. -**Note**: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output. +**Note**: There is a known limitation that "lsf -R" will display number of files +in the directory and directory size as -1 when ListR method is used. The workaround +is to pass "--disable listR" flag if these numbers are important in the output. ### Purge -NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method. - -**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible. +NetStorage remote supports the purge feature by using the "quick-delete" +NetStorage API action. The quick-delete action is disabled by default for security +reasons and can be enabled for the account through the Akamai portal. Rclone +will first try to use quick-delete action for the purge command and if this +functionality is disabled then will fall back to a standard delete method. +**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) +for considerations when using "quick-delete". In general, using quick-delete +method will not delete the tree immediately and objects targeted for +quick-delete may still be accessible. + ### Standard options Here are the Standard options specific to netstorage (Akamai NetStorage). @@ -47799,10 +53267,10 @@ Properties: - Type: string - Default: "https" - Examples: - - "http" - - HTTP protocol - - "https" - - HTTPS protocol + - "http" + - HTTP protocol + - "https" + - HTTPS protocol #### --netstorage-description @@ -47819,9 +53287,11 @@ Properties: Here are the commands specific to the netstorage backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. 
@@ -47833,9 +53303,11 @@ These can be run on a running backend using the rc command ### du -Return disk usage information for a specified directory +Return disk usage information for a specified directory. - rclone backend du remote: [options] [+] +```console +rclone backend du remote: [options] [+] +``` The usage information returned, includes the targeted directory as well as all files stored in any sub-directories that may exist. @@ -47844,14 +53316,21 @@ files stored in any sub-directories that may exist. You can create a symbolic link in ObjectStore with the symlink action. - rclone backend symlink remote: [options] [+] +```console +rclone backend symlink remote: [options] [+] +``` The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable. -`rclone backend symlink ` +Usage example: +```console +rclone backend symlink +``` + + # Microsoft Azure Blob Storage @@ -47864,11 +53343,13 @@ command.) You may put subdirectories in too, e.g. Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -47904,20 +53385,28 @@ y/e/d> y See all containers - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new container - rclone mkdir remote:container +```console +rclone mkdir remote:container +``` List the contents of a container - rclone ls remote:container +```console +rclone ls remote:container +``` Sync `/home/local/directory` to the remote container, deleting any excess files in the container. - rclone sync --interactive /home/local/directory remote:container +```console +rclone sync --interactive /home/local/directory remote:container +``` ### --fast-list @@ -47996,26 +53485,35 @@ user with a password, depending on which environment variable are set. It reads configuration from these variables, in the following order: 1. Service principal with client secret - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. - `AZURE_CLIENT_ID`: the service principal's client ID - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets 2. Service principal with certificate - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. - `AZURE_CLIENT_ID`: the service principal's client ID - - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key. - - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file. - - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header. + - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file + including the private key. + - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the + certificate file. 
+ - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an + authentication request will include an x5c header to support subject + name / issuer based authentication. When set to "true" or "1", + authentication requests include the x5c header. 3. User with username and password - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations". - - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to + - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate + to - `AZURE_USERNAME`: a username (usually an email address) - `AZURE_PASSWORD`: the user's password 4. Workload Identity - - `AZURE_TENANT_ID`: Tenant to authenticate in. - - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to. - - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file. - - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). - + - `AZURE_TENANT_ID`: Tenant to authenticate in + - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate + to + - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file + - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint + (default: login.microsoftonline.com). ##### Env Auth: 2. Managed Service Identity Credentials @@ -48042,19 +53540,27 @@ Credentials created with the `az` tool can be picked up using `env_auth`. For example if you were to login with a service principal like this: - az login --service-principal -u XXX -p XXX --tenant XXX +```console +az login --service-principal -u XXX -p XXX --tenant XXX +``` Then you could access rclone resources like this: - rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER +```console +rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER +``` Or - rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER +```console +rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER +``` Which is analogous to using the `az` tool: - az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login +```console +az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login +``` #### Account and Shared Key @@ -48075,18 +53581,24 @@ explorer in the Azure portal. If you use a container level SAS URL, rclone operations are permitted only on a particular container, e.g. - rclone ls azureblob:container +```console +rclone ls azureblob:container +``` You can also list the single container from the root. This will only show the container specified by the SAS URL. - $ rclone lsd azureblob: - container/ +```console +$ rclone lsd azureblob: +container/ +``` Note that you can't see or access any other containers - this will fail - rclone ls azureblob:othercontainer +```console +rclone ls azureblob:othercontainer +``` Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an @@ -48094,7 +53606,8 @@ untrusted environment such as a CI build server. #### Service principal with client secret -If these variables are set, rclone will authenticate with a service principal with a client secret. +If these variables are set, rclone will authenticate with a service principal +with a client secret. - `tenant`: ID of the service principal's tenant. Also called its "directory" ID. 
- `client_id`: the service principal's client ID @@ -48105,13 +53618,18 @@ The credentials can also be placed in a file using the #### Service principal with certificate -If these variables are set, rclone will authenticate with a service principal with certificate. +If these variables are set, rclone will authenticate with a service principal +with certificate. - `tenant`: ID of the service principal's tenant. Also called its "directory" ID. - `client_id`: the service principal's client ID -- `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key. +- `client_certificate_path`: path to a PEM or PKCS12 certificate file including + the private key. - `client_certificate_password`: (optional) password for the certificate file. -- `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header. +- `client_send_certificate_chain`: (optional) Specifies whether an + authentication request will include an x5c header to support subject name / + issuer based authentication. When set to "true" or "1", authentication + requests include the x5c header. **NB** `client_certificate_password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). @@ -48146,15 +53664,18 @@ be explicitly specified using exactly one of the `msi_object_id`, If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is set, this is is equivalent to using `env_auth`. -#### Fedrated Identity Credentials +#### Fedrated Identity Credentials If these variables are set, rclone will authenticate with fedrated identity. - `tenant_id`: tenant_id to authenticate in storage - `client_id`: client ID of the application the user will authenticate to storage -- `msi_client_id`: managed identity client ID of the application the user will authenticate to +- `msi_client_id`: managed identity client ID of the application the user will + authenticate to -By default "api://AzureADTokenExchange" is used as scope for token retrieval over MSI. This token is then exchanged for actual storage token using 'tenant_id' and 'client_id'. +By default "api://AzureADTokenExchange" is used as scope for token retrieval +over MSI. This token is then exchanged for actual storage token using +'tenant_id' and 'client_id'. #### Azure CLI tool `az` {#use_az} @@ -48171,9 +53692,11 @@ Don't set `env_auth` at the same time. If you want to access resources with public anonymous access then set `account` only. You can do this without making an rclone config: - rclone lsf :azureblob,account=ACCOUNT:CONTAINER - +```console +rclone lsf :azureblob,account=ACCOUNT:CONTAINER +``` + ### Standard options Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage). @@ -48764,13 +54287,13 @@ Properties: - Type: string - Required: false - Examples: - - "" - - The container and its blobs can be accessed only with an authorized request. - - It's a default value. - - "blob" - - Blob data within this container can be read via anonymous request. - - "container" - - Allow full public read access for container and blob data. + - "" + - The container and its blobs can be accessed only with an authorized request. + - It's a default value. + - "blob" + - Blob data within this container can be read via anonymous request. + - "container" + - Allow full public read access for container and blob data. 
#### --azureblob-directory-markers @@ -48827,12 +54350,12 @@ Properties: - Type: string - Required: false - Choices: - - "" - - By default, the delete operation fails if a blob has snapshots - - "include" - - Specify 'include' to remove the root blob and all its snapshots - - "only" - - Specify 'only' to remove only the snapshots but keep the root blob. + - "" + - By default, the delete operation fails if a blob has snapshots + - "include" + - Specify 'include' to remove the root blob and all its snapshots + - "only" + - Specify 'only' to remove only the snapshots but keep the root blob. #### --azureblob-description @@ -48845,11 +54368,11 @@ Properties: - Type: string - Required: false - + ### Custom upload headers -You can set custom upload headers with the `--header-upload` flag. +You can set custom upload headers with the `--header-upload` flag. - Cache-Control - Content-Disposition @@ -48858,19 +54381,21 @@ You can set custom upload headers with the `--header-upload` flag. - Content-Type - X-MS-Tags -Eg `--header-upload "Content-Type: text/potato"` or `--header-upload "X-MS-Tags: foo=bar"` +Eg `--header-upload "Content-Type: text/potato"` or +`--header-upload "X-MS-Tags: foo=bar"`. ## Limitations MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy. -`rclone about` is not supported by the Microsoft Azure Blob storage backend. Backends without -this capability cannot determine free space for an rclone mount or -use policy `mfs` (most free space) as a member of an rclone union +`rclone about` is not supported by the Microsoft Azure Blob storage backend. +Backends without this capability cannot determine free space for an rclone +mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). ## Azure Storage Emulator Support @@ -48899,11 +54424,13 @@ e.g. `remote:path/to/dir`. Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -48973,20 +54500,28 @@ Once configured you can use rclone. See all files in the top level: - rclone lsf remote: +```console +rclone lsf remote: +``` Make a new directory in the root: - rclone mkdir remote:dir +```console +rclone mkdir remote:dir +``` Recursively List the contents: - rclone ls remote: +```console +rclone ls remote: +``` Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory. - rclone sync --interactive /home/local/directory remote:dir +```console +rclone sync --interactive /home/local/directory remote:dir +``` ### Modified time @@ -49058,26 +54593,35 @@ user with a password, depending on which environment variable are set. It reads configuration from these variables, in the following order: 1. Service principal with client secret - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. 
- `AZURE_CLIENT_ID`: the service principal's client ID - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets 2. Service principal with certificate - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. - `AZURE_CLIENT_ID`: the service principal's client ID - - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key. - - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file. - - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header. + - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file + including the private key. + - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the + certificate file. + - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an + authentication request will include an x5c header to support subject + name / issuer based authentication. When set to "true" or "1", + authentication requests include the x5c header. 3. User with username and password - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations". - - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to + - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate + to - `AZURE_USERNAME`: a username (usually an email address) - `AZURE_PASSWORD`: the user's password 4. Workload Identity - - `AZURE_TENANT_ID`: Tenant to authenticate in. - - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to. - - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file. - - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). - + - `AZURE_TENANT_ID`: Tenant to authenticate in + - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate + to + - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file + - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint + (default: login.microsoftonline.com). ##### Env Auth: 2. Managed Service Identity Credentials @@ -49104,15 +54648,21 @@ Credentials created with the `az` tool can be picked up using `env_auth`. For example if you were to login with a service principal like this: - az login --service-principal -u XXX -p XXX --tenant XXX +```console +az login --service-principal -u XXX -p XXX --tenant XXX +``` Then you could access rclone resources like this: - rclone lsf :azurefiles,env_auth,account=ACCOUNT: +```console +rclone lsf :azurefiles,env_auth,account=ACCOUNT: +``` Or - rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles: +```console +rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles: +``` #### Account and Shared Key @@ -49129,7 +54679,8 @@ To use it leave `account`, `key` and "sas_url" blank and fill in `connection_str #### Service principal with client secret -If these variables are set, rclone will authenticate with a service principal with a client secret. +If these variables are set, rclone will authenticate with a service principal +with a client secret. - `tenant`: ID of the service principal's tenant. Also called its "directory" ID. 
- `client_id`: the service principal's client ID @@ -49140,13 +54691,18 @@ The credentials can also be placed in a file using the #### Service principal with certificate -If these variables are set, rclone will authenticate with a service principal with certificate. +If these variables are set, rclone will authenticate with a service principal +with certificate. - `tenant`: ID of the service principal's tenant. Also called its "directory" ID. - `client_id`: the service principal's client ID -- `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key. +- `client_certificate_path`: path to a PEM or PKCS12 certificate file including + the private key. - `client_certificate_password`: (optional) password for the certificate file. -- `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header. +- `client_send_certificate_chain`: (optional) Specifies whether an authentication + request will include an x5c header to support subject name / issuer based + authentication. When set to "true" or "1", authentication requests include + the x5c header. **NB** `client_certificate_password` must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). @@ -49181,24 +54737,28 @@ be explicitly specified using exactly one of the `msi_object_id`, If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is set, this is is equivalent to using `env_auth`. -#### Fedrated Identity Credentials +#### Fedrated Identity Credentials If these variables are set, rclone will authenticate with fedrated identity. - `tenant_id`: tenant_id to authenticate in storage - `client_id`: client ID of the application the user will authenticate to storage -- `msi_client_id`: managed identity client ID of the application the user will authenticate to +- `msi_client_id`: managed identity client ID of the application the user will + authenticate to + +By default "api://AzureADTokenExchange" is used as scope for token retrieval +over MSI. This token is then exchanged for actual storage token using 'tenant_id' +and 'client_id'. -By default "api://AzureADTokenExchange" is used as scope for token retrieval over MSI. This token is then exchanged for actual storage token using 'tenant_id' and 'client_id'. - #### Azure CLI tool `az` {#use_az} + Set to use the [Azure CLI tool `az`](https://learn.microsoft.com/en-us/cli/azure/) as the sole means of authentication. Setting this can be useful if you wish to use the `az` CLI on a host with a System Managed Identity that you do not want to use. Don't set `env_auth` at the same time. - + ### Standard options Here are the Standard options specific to azurefiles (Microsoft Azure Files). @@ -49643,7 +55203,7 @@ Properties: - Type: string - Required: false - + ### Custom upload headers @@ -49676,11 +55236,13 @@ you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text e) Edit existing remote n) New remote d) Delete remote @@ -49756,7 +55318,7 @@ y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. 
Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it @@ -49764,61 +55326,96 @@ opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this it may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your OneDrive - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your OneDrive - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an OneDrive directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Getting your own Client ID and Key -rclone uses a default Client ID when talking to OneDrive, unless a custom `client_id` is specified in the config. -The default Client ID and Key are shared by all rclone users when performing requests. +rclone uses a default Client ID when talking to OneDrive, unless a custom +`client_id` is specified in the config. The default Client ID and Key are +shared by all rclone users when performing requests. -You may choose to create and use your own Client ID, in case the default one does not work well for you. -For example, you might see throttling. +You may choose to create and use your own Client ID, in case the default one +does not work well for you. For example, you might see throttling. #### Creating Client ID for OneDrive Personal To create your own Client ID, please follow these steps: -1. Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the `Add` menu click `App registration`. - * If you have not created an Azure account, you will be prompted to. This is free, but you need to provide a phone number, address, and credit card for identity verification. -2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use. -3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards). -4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`. -5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and `Sites.Read.All` (if custom access scopes are configured, select the permissions accordingly). Once selected click `Add permissions` at the bottom. +1. Open + and then under the `Add` menu click `App registration`. + - If you have not created an Azure account, you will be prompted to. This is free, + but you need to provide a phone number, address, and credit card for identity + verification. +2. Enter a name for your app, choose account type `Accounts in any organizational + directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts + (e.g. 
Skype, Xbox)`, + select `Web` in `Redirect URI`, then type (do not copy and paste) + `http://localhost:53682/` and click Register. Copy and keep the + `Application (client) ID` under the app name for later use. +3. Under `manage` select `Certificates & secrets`, click `New client secret`. + Enter a description (can be anything) and set `Expires` to 24 months. + Copy and keep that secret *Value* for later use (you *won't* be able to see + this value afterwards). +4. Under `manage` select `API permissions`, click `Add a permission` and select + `Microsoft Graph` then select `delegated permissions`. +5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, + `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and + `Sites.Read.All` (if custom access scopes are configured, select the + permissions accordingly). Once selected click `Add permissions` at the bottom. -Now the application is complete. Run `rclone config` to create or edit a OneDrive remote. -Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps. +Now the application is complete. Run `rclone config` to create or edit a OneDrive +remote. Supply the app ID and password as Client ID and Secret, respectively. +rclone will walk you through the remaining steps. The access_scopes option allows you to configure the permissions requested by rclone. -See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) for more information about the different scopes. +See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) +for more information about the different scopes. -The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). However, if that permission is not assigned, you need to exclude `Sites.Read.All` from your access scopes or set `disable_site_permission` option to true in the advanced options. +The `Sites.Read.All` permission is required if you need to +[search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). +However, if that permission is not assigned, you need to exclude `Sites.Read.All` +from your access scopes or set `disable_site_permission` option to true in the +advanced options. #### Creating Client ID for OneDrive Business -The steps for OneDrive Personal may or may not work for OneDrive Business, depending on the security settings of the organization. +The steps for OneDrive Personal may or may not work for OneDrive Business, +depending on the security settings of the organization. A common error is that the publisher of the App is not verified. -You may try to [verify you account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), or try to limit the App to your organization only, as shown below. +You may try to [verify you account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), +or try to limit the App to your organization only, as shown below. 1. Make sure to create the App with your business account. -2. Follow the steps above to create an App. However, we need a different account type here: `Accounts in this organizational directory only (*** - Single tenant)`. Note that you can also change the account type after creating the App. -3. 
Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) of your organization. +2. Follow the steps above to create an App. However, we need a different account + type here: `Accounts in this organizational directory only (*** - Single tenant)`. + Note that you can also change the account type after creating the App. +3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) + of your organization. 4. In the rclone config, set `auth_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize`. 5. In the rclone config, set `token_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token`. -Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86). +Note: If you have a special region, you may need a different host in step 4 and 5. +Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86). ### Using OAuth Client Credential flow @@ -49828,16 +55425,32 @@ that adopting the context of an Azure AD user account. This flow can be enabled by following the steps below: -1. Create the Enterprise App registration in the Azure AD portal and obtain a Client ID and Client Secret as described above. -2. Ensure that the application has the appropriate permissions and they are assigned as *Application Permissions* -3. Configure the remote, ensuring that *Client ID* and *Client Secret* are entered correctly. -4. In the *Advanced Config* section, enter `true` for `client_credentials` and in the `tenant` section enter the tenant ID. +1. Create the Enterprise App registration in the Azure AD portal and obtain a + Client ID and Client Secret as described above. +2. Ensure that the application has the appropriate permissions and they are + assigned as *Application Permissions* +3. Configure the remote, ensuring that *Client ID* and *Client Secret* are + entered correctly. +4. In the *Advanced Config* section, enter `true` for `client_credentials` and + in the `tenant` section enter the tenant ID. When it comes to choosing the type of the connection work with the client credentials flow. In particular the "onedrive" option does not work. You can use the "sharepoint" option or if that does not find the correct drive ID type it in manually with the "driveid" option. +To back up any user's data using this flow, grant your Azure AD +application the necessary Microsoft Graph *Application permissions* +(such as `Files.Read.All`, `Sites.Read.All` and/or `Sites.Selected`). +With these permissions, rclone can access drives across the tenant, +but it needs to know *which user or drive* you want. Supply a specific +`drive_id` corresponding to that user's OneDrive, or a SharePoint site +ID for SharePoint libraries. You can obtain a user's drive ID using +Microsoft Graph (e.g. `/users/{userUPN}/drive`) and then configure it +in rclone. Once the correct drive ID is provided, rclone will back up +that user's data using the app-only token without requiring their +credentials. + **NOTE** Assigning permissions directly to the application means that anyone with the *Client ID* and *Client Secret* can access your OneDrive files. Take care to safeguard these credentials. 
@@ -49931,7 +55544,7 @@ doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website. - + ### Standard options Here are the Standard options specific to onedrive (Microsoft OneDrive). @@ -49973,14 +55586,14 @@ Properties: - Type: string - Default: "global" - Examples: - - "global" - - Microsoft Cloud Global - - "us" - - Microsoft Cloud for US Government - - "de" - - Microsoft Cloud Germany (deprecated - try global region first). - - "cn" - - Azure and Office 365 operated by Vnet Group in China + - "global" + - Microsoft Cloud Global + - "us" + - Microsoft Cloud for US Government + - "de" + - Microsoft Cloud Germany (deprecated - try global region first). + - "cn" + - Azure and Office 365 operated by Vnet Group in China #### --onedrive-tenant @@ -50141,13 +55754,13 @@ Properties: - Type: SpaceSepList - Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access - Examples: - - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access" - - Read and write access to all resources - - "Files.Read Files.Read.All Sites.Read.All offline_access" - - Read only access to all resources - - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access" - - Read and write access to all resources, without the ability to browse SharePoint sites. - - Same as if disable_site_permission was set to true + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access" + - Read and write access to all resources + - "Files.Read Files.Read.All Sites.Read.All offline_access" + - Read only access to all resources + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access" + - Read and write access to all resources, without the ability to browse SharePoint sites. + - Same as if disable_site_permission was set to true #### --onedrive-disable-site-permission @@ -50265,13 +55878,13 @@ Properties: - Type: string - Default: "anonymous" - Examples: - - "anonymous" - - Anyone with the link has access, without needing to sign in. - - This may include people outside of your organization. - - Anonymous link support may be disabled by an administrator. - - "organization" - - Anyone signed into your organization (tenant) can use the link to get access. - - Only available in OneDrive for Business and SharePoint. + - "anonymous" + - Anyone with the link has access, without needing to sign in. + - This may include people outside of your organization. + - Anonymous link support may be disabled by an administrator. + - "organization" + - Anyone signed into your organization (tenant) can use the link to get access. + - Only available in OneDrive for Business and SharePoint. #### --onedrive-link-type @@ -50284,12 +55897,12 @@ Properties: - Type: string - Default: "view" - Examples: - - "view" - - Creates a read-only link to the item. - - "edit" - - Creates a read-write link to the item. - - "embed" - - Creates an embeddable link to the item. + - "view" + - Creates a read-only link to the item. + - "edit" + - Creates a read-write link to the item. + - "embed" + - Creates an embeddable link to the item. 
#### --onedrive-link-password @@ -50334,18 +55947,18 @@ Properties: - Type: string - Default: "auto" - Examples: - - "auto" - - Rclone chooses the best hash - - "quickxor" - - QuickXor - - "sha1" - - SHA1 - - "sha256" - - SHA256 - - "crc32" - - CRC32 - - "none" - - None - don't use any hashes + - "auto" + - Rclone chooses the best hash + - "quickxor" + - QuickXor + - "sha1" + - SHA1 + - "sha256" + - SHA256 + - "crc32" + - CRC32 + - "none" + - None - don't use any hashes #### --onedrive-av-override @@ -50423,16 +56036,16 @@ Properties: - Type: Bits - Default: off - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "read,write" - - Read and Write the value. - - "failok" - - If writing fails log errors only, don't fail the transfer + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "read,write" + - Read and Write the value. + - "failok" + - If writing fails log errors only, don't fail the transfer #### --onedrive-encoding @@ -50616,29 +56229,40 @@ Here are the possible system metadata items for the onedrive backend. See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - + ### Impersonate other users as Admin -Unlike Google Drive and impersonating any domain user via service accounts, OneDrive requires you to authenticate as an admin account, and manually setup a remote per user you wish to impersonate. +Unlike Google Drive and impersonating any domain user via service accounts, +OneDrive requires you to authenticate as an admin account, and manually setup +a remote per user you wish to impersonate. -1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user you need to "impersonate" and go to the OneDrive section. There is a heading called "Get access to files", you need to click to create the link, this creates the link of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` but also changes the permissions so you your admin user has access. +1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user + you need to "impersonate" and go to the OneDrive section. There is a heading + called "Get access to files", you need to click to create the link, this + creates the link of the format + `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` but also + changes the permissions so you your admin user has access. 2. Then in powershell run the following commands: -```console -Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force -Import-Module Microsoft.Graph.Files -Connect-MgGraph -Scopes "Files.ReadWrite.All" -# Follow the steps to allow access to your admin user -# Then run this for each user you want to impersonate to get the Drive ID -Get-MgUserDefaultDrive -UserId '{emailaddress}' -# This will give you output of the format: -# Name Id DriveType CreatedDateTime -# ---- -- --------- --------------- -# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm -``` -3. Then in rclone add a onedrive remote type, and use the `Type in driveID` with the DriveID you got in the previous step. One remote per user. 
It will then confirm the drive ID, and hopefully give you a message of `Found drive "root" of type "business"` and then include the URL of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents` + ```console + Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force + Import-Module Microsoft.Graph.Files + Connect-MgGraph -Scopes "Files.ReadWrite.All" + # Follow the steps to allow access to your admin user + # Then run this for each user you want to impersonate to get the Drive ID + Get-MgUserDefaultDrive -UserId '{emailaddress}' + # This will give you output of the format: + # Name Id DriveType CreatedDateTime + # ---- -- --------- --------------- + # OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm + ``` +3. Then in rclone add a onedrive remote type, and use the `Type in driveID` + with the DriveID you got in the previous step. One remote per user. It will + then confirm the drive ID, and hopefully give you a message of + `Found drive "root" of type "business"` and then include the URL of the format + `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents` ## Limitations @@ -50660,11 +56284,16 @@ in it will be mapped to `?` instead. ### File sizes -The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). +The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive +for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). ### Path length -The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. +The entire path, including the file name, must contain fewer than 400 +characters for OneDrive, OneDrive for Business and SharePoint Online. If you +are encrypting file and folder names with rclone, you may want to pay attention +to this limitation because the encrypted names are typically longer than the +original ones. ### Number of files @@ -50673,7 +56302,8 @@ OneDrive seems to be OK with at least 50,000 files in a folder, but at list files: UnknownError:`. See [#2707](https://github.com/rclone/rclone/issues/2707) for more info. -An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). +An official document about the limitations for different types of OneDrive can +be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). ## Versions @@ -50709,24 +56339,31 @@ command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting: -1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already) +1. 
`Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you + haven't installed this already) 2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking` -3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials) +3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` + (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will + prompt for your credentials) 4. `Set-SPOTenant -EnableMinimumVersionRequirement $False` 5. `Disconnect-SPOService` (to disconnect from the server) -*Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.* +*Below are the steps for normal users to disable versioning. If you don't see +the "No Versioning" option, make sure the above requirements are met.* User [Weropol](https://github.com/Weropol) has found a method to disable versioning on OneDrive -1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. +1. Open the settings menu by clicking on the gear symbol at the top of the + OneDrive Business page. 2. Click Site settings. -3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. +3. Once on the Site settings page, navigate to Site Administration > Site libraries + and lists. 4. Click Customize "Documents". 5. Click General Settings > Versioning Settings. 6. Under Document Version History select the option No versioning. -Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. + Note: This will disable the creation of new file versions, but will not remove + any previous versions. Your documents are safe. 7. Apply the changes by clicking OK. 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) 9. Restore the versioning settings after using rclone. (Optional) @@ -50740,20 +56377,25 @@ querying each file for versions it can be quite slow. Rclone does `--checkers` tests in parallel. The command also supports `--interactive`/`i` or `--dry-run` which is a great way to see what it would do. 
- rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir - rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir +```text +rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir +rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir +``` **NB** Onedrive personal can't currently delete versions -## Troubleshooting ## +## Troubleshooting ### Excessive throttling or blocked on SharePoint -If you experience excessive throttling or is being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"` +If you experience excessive throttling or is being blocked on SharePoint then +it may help to set the user agent explicitly with a flag like this: +`--user-agent "ISV|rclone.org|rclone/v1.55.1"` -The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) +The specific details can be found in the Microsoft document: +[Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) -### Unexpected file size/hash differences on Sharepoint #### +### Unexpected file size/hash differences on Sharepoint It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) @@ -50764,57 +56406,66 @@ report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments: -``` +```text --ignore-checksum --ignore-size ``` Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for [OneDrive](https://onedrive.live.com) and find the -affected files (which will be in the error messages/log for rclone). Simply click on -each of these files, causing OneDrive to open them on the web. This will cause each -file to be converted in place to a format that is functionally equivalent +affected files (which will be in the error messages/log for rclone). Simply click +on each of these files, causing OneDrive to open them on the web. This will cause +each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above. -### Replacing/deleting existing files on Sharepoint gets "item not found" #### +### Replacing/deleting existing files on Sharepoint gets "item not found" It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to -mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use +mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). 
+As a workaround, you may use the `--backup-dir ` command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory `rclone-backup-dir` on backend `mysharepoint`, you may use: -``` +```text --backup-dir mysharepoint:rclone-backup-dir ``` -### access\_denied (AADSTS65005) #### +### access\_denied (AADSTS65005) -``` +```text Error: access_denied Code: AADSTS65005 Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. ``` -This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins. +This means that rclone can't use the OneDrive for Business API with your account. +You can't do much about it, maybe write an email to your admins. -However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint +However, there are other ways to interact with your OneDrive account. Have a look +at the WebDAV backend: -### invalid\_grant (AADSTS50076) #### +### invalid\_grant (AADSTS50076) -``` +```text Error: invalid_grant Code: AADSTS50076 Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'. ``` -If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. +If you see the error above after enabling multi-factor authentication for your +account, you can fix it by refreshing your OAuth refresh token. To do that, run +`rclone config`, and choose to edit your OneDrive backend. Then, you don't need +to actually make any changes until you reach this question: +`Already have a token - refresh?`. For this question, answer `y` and go through +the process to refresh your token, just like the first time the backend is +configured. After this, rclone should work again for this backend. -### Invalid request when making public links #### +### Invalid request when making public links On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid request" error. A possible cause is that the organisation admin didn't allow @@ -50825,49 +56476,67 @@ permissions as an admin, take a look at the docs: ### Can not access `Shared` with me files -Shared with me files is not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround: +Shared with me files is not supported by rclone +[currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround: 1. Visit [https://onedrive.live.com](https://onedrive.live.com/) 2. 
Right click a item in `Shared`, then click `Add shortcut to My files` in the context ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png "Screenshot (Shared with me)") -3. The shortcut will appear in `My files`, you can access it with rclone, it behaves like a normal folder/file. +3. The shortcut will appear in `My files`, you can access it with rclone, it + behaves like a normal folder/file. ![in_my_files](https://i.imgur.com/0S8H3li.png "Screenshot (My Files)") ![rclone_mount](https://i.imgur.com/2Iq66sW.png "Screenshot (rclone mount)") ### Live Photos uploaded from iOS (small video clips in .heic files) -The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) -of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. -The usage and download of these uploaded Live Photos is unfortunately still work-in-progress -and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows. +The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) +of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. +The usage and download of these uploaded Live Photos is unfortunately still +work-in-progress and this introduces several issues when copying, synchronising +and mounting – both in rclone and in the native OneDrive client on Windows. -The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. -Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. -The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive. +The root cause can easily be seen if you locate one of your Live Photos in the +OneDrive web interface. Then download the photo from the web interface. You +will then see that the size of downloaded .heic file is smaller than the size +displayed in the web interface. The downloaded file is smaller because it only +contains a single frame (still photo) extracted from the Live Photo (movie) +stored in OneDrive. -The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos something like this: +The different sizes will cause `rclone copy/sync` to repeatedly recopy +unmodified photos something like this: - DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) - DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK - INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) +```text +DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) +DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK +INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) +``` -These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still-picture not the movie clip, +These recopies can be worked around by adding `--ignore-size`. Please note that +this workaround only syncs the still-picture not the movie clip, and relies on modification dates being correctly updated on all files in all situations. 
-The different sizes will also cause `rclone check` to report size errors something like this: +The different sizes will also cause `rclone check` to report size errors something +like this: - ERROR : 20230203_123826234_iOS.heic: sizes differ +```text +ERROR : 20230203_123826234_iOS.heic: sizes differ +``` These check errors can be suppressed by adding `--ignore-size`. -The different sizes will also cause `rclone mount` to fail downloading with an error something like this: +The different sizes will also cause `rclone mount` to fail downloading with an +error something like this: - ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF +```text +ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF +``` or like this when using `--cache-mode=full`: - INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +```text +INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +``` # OpenDrive @@ -50879,11 +56548,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text n) New remote d) Delete remote q) Quit config @@ -50920,15 +56591,21 @@ y/e/d> y List directories in top level of your OpenDrive - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your OpenDrive - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an OpenDrive directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -50964,11 +56641,10 @@ These only get replaced if they are the first or last character in the name: | VT | 0x0B | ␋ | | CR | 0x0D | ␍ | - Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to opendrive (OpenDrive). @@ -51039,12 +56715,12 @@ Properties: - Type: string - Default: "private" - Examples: - - "private" - - The file or folder access can be granted in a way that will allow select users to view, read or write what is absolutely essential for them. - - "public" - - The file or folder can be downloaded by anyone from a web browser. The link can be shared in any way, - - "hidden" - - The file or folder can be accessed has the same restrictions as Public if the user knows the URL of the file or folder link in order to access the contents + - "private" + - The file or folder access can be granted in a way that will allow select users to view, read or write what is absolutely essential for them. + - "public" + - The file or folder can be downloaded by anyone from a web browser. 
The link can be shared in any way, + - "hidden" + - The file or folder can be accessed has the same restrictions as Public if the user knows the URL of the file or folder link in order to access the contents #### --opendrive-description @@ -51057,7 +56733,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -51075,33 +56751,40 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Oracle Object Storage -- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) -- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) -- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) -Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in -too, e.g. `remote:bucket/path/to/dir`. +Object Storage provided by the Oracle Cloud Infrastructure (OCI). +Read more at : + +- [Oracle Object Storage Overview](https://docs.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) +- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) + +Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). +You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Sample command to transfer local artifacts to remote:bucket in oracle object storage: -`rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv` +```console +rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv +``` ## Configuration -Here is an example of making an oracle object storage configuration. `rclone config` walks you -through it. +Here is an example of making an oracle object storage configuration. `rclone config` +walks you through it. Here is an example of how to make a remote called `remote`. 
First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: - -``` +```text n) New remote d) Delete remote r) Rename remote @@ -51205,121 +56888,153 @@ y/e/d> y See all buckets - rclone lsd remote: +```console +rclone lsd remote: +``` Create a new bucket - rclone mkdir remote:bucket +```console +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket - rclone ls remote:bucket --max-depth 1 +```console +rclone ls remote:bucket +rclone ls remote:bucket --max-depth 1 +``` -## Authentication Providers +## Authentication Providers -OCI has various authentication methods. To learn more about authentication methods please refer [oci authentication -methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) +OCI has various authentication methods. To learn more about authentication methods +please refer [oci authentication methods](https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdk_authentication_methods.htm) These choices can be specified in the rclone config file. Rclone supports the following OCI authentication provider. - User Principal - Instance Principal - Resource Principal - Workload Identity - No authentication +```text +User Principal +Instance Principal +Resource Principal +Workload Identity +No authentication +``` ### User Principal Sample rclone config file for Authentication Provider User Principal: - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = user_principal_auth - config_file = /home/opc/.oci/config - config_profile = Default +```ini +[oos] +type = oracleobjectstorage +namespace = id34 +compartment = ocid1.compartment.oc1..aaba +region = us-ashburn-1 +provider = user_principal_auth +config_file = /home/opc/.oci/config +config_profile = Default +``` Advantages: -- One can use this method from any server within OCI or on-premises or from other cloud provider. + +- One can use this method from any server within OCI or on-premises or from + other cloud provider. Considerations: -- you need to configure user’s privileges / policy to allow access to object storage + +- you need to configure user’s privileges / policy to allow access to object + storage - Overhead of managing users and keys. -- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials. +- If the user is deleted, the config file will no longer work and may cause + automation regressions that use the user's credentials. -### Instance Principal +### Instance Principal -An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal. -With this approach no credentials have to be stored and managed. +An OCI compute instance can be authorized to use rclone by using it's identity +and certificates as an instance principal. With this approach no credentials +have to be stored and managed. 
Sample rclone configuration file for Authentication Provider Instance Principal: - [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf - [oos] - type = oracleobjectstorage - namespace = idfn - compartment = ocid1.compartment.oc1..aak7a - region = us-ashburn-1 - provider = instance_principal_auth +```console +[opc@rclone ~]$ cat ~/.config/rclone/rclone.conf +[oos] +type = oracleobjectstorage +namespace = idfn +compartment = ocid1.compartment.oc1..aak7a +region = us-ashburn-1 +provider = instance_principal_auth +``` Advantages: -- With instance principals, you don't need to configure user credentials and transfer/ save it to disk in your compute - instances or rotate the credentials. +- With instance principals, you don't need to configure user credentials and + transfer/ save it to disk in your compute instances or rotate the credentials. - You don’t need to deal with users and keys. -- Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault, - using kms etc. +- Greatly helps in automation as you don't have to manage access keys, user + private keys, storing them in vault, using kms etc. Considerations: -- You need to configure a dynamic group having this instance as member and add policy to read object storage to that - dynamic group. +- You need to configure a dynamic group having this instance as member and add + policy to read object storage to that dynamic group. - Everyone who has access to this machine can execute the CLI commands. -- It is applicable for oci compute instances only. It cannot be used on external instance or resources. +- It is applicable for oci compute instances only. It cannot be used on external + instance or resources. ### Resource Principal -Resource principal auth is very similar to instance principal auth but used for resources that are not -compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). -To use resource principal ensure Rclone process is started with these environment variables set in its process. +Resource principal auth is very similar to instance principal auth but used for +resources that are not compute instances such as +[serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm). +To use resource principal ensure Rclone process is started with these environment +variables set in its process. - export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 - export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 - export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem - export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token +```console +export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 +export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 +export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem +export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token +``` Sample rclone configuration file for Authentication Provider Resource Principal: - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = resource_principal_auth +```ini +[oos] +type = oracleobjectstorage +namespace = id34 +compartment = ocid1.compartment.oc1..aaba +region = us-ashburn-1 +provider = resource_principal_auth +``` ### Workload Identity -Workload Identity auth may be used when running Rclone from Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster. 
-For more details on configuring Workload Identity, see [Granting Workloads Access to OCI Resources](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm). -To use workload identity, ensure Rclone is started with these environment variables set in its process. - export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 - export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 +Workload Identity auth may be used when running Rclone from Kubernetes pod on +a Container Engine for Kubernetes (OKE) cluster. For more details on configuring +Workload Identity, see [Granting Workloads Access to OCI Resources](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm). +To use workload identity, ensure Rclone is started with these environment +variables set in its process. + +```console +export OCI_RESOURCE_PRINCIPAL_VERSION=2.2 +export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1 +``` ### No authentication Public buckets do not require any authentication mechanism to read objects. Sample rclone configuration file for No authentication: - - [oos] - type = oracleobjectstorage - namespace = id34 - compartment = ocid1.compartment.oc1..aaba - region = us-ashburn-1 - provider = no_auth + +```ini +[oos] +type = oracleobjectstorage +namespace = id34 +compartment = ocid1.compartment.oc1..aaba +region = us-ashburn-1 +provider = no_auth +``` ### Modification times and hashes @@ -51328,10 +57043,11 @@ The modification time is stored as metadata on the object as If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification if the object can be copied in a single part. -In the case the object is larger than 5Gb, the object will be uploaded rather than copied. +In the case the object is larger than 5Gb, the object will be uploaded rather than +copied. -Note that reading this from the object takes an additional `HEAD` request as the metadata -isn't returned in object listings. +Note that reading this from the object takes an additional `HEAD` request as the +metadata isn't returned in object listings. The MD5 hash algorithm is supported. @@ -51365,7 +57081,7 @@ throughput (16M would be sensible). Increasing either of these will use more memory. The default values are high enough to gain most of the possible performance without using too much memory. - + ### Standard options Here are the Standard options specific to oracleobjectstorage (Oracle Cloud Infrastructure Object Storage). @@ -51381,23 +57097,23 @@ Properties: - Type: string - Default: "env_auth" - Examples: - - "env_auth" - - automatically pickup the credentials from runtime(env), first one to provide auth wins - - "user_principal_auth" - - use an OCI user and an API key for authentication. - - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. - - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - - "instance_principal_auth" - - use instance principals to authorize an instance to make API calls. - - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - - "workload_identity_auth" - - use workload identity to grant OCI Container Engine for Kubernetes workloads policy-driven access to OCI resources using OCI Identity and Access Management (IAM). 
- - https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm - - "resource_principal_auth" - - use resource principals to make API calls - - "no_auth" - - no credentials needed, this is typically for reading public buckets + - "env_auth" + - automatically pickup the credentials from runtime(env), first one to provide auth wins + - "user_principal_auth" + - use an OCI user and an API key for authentication. + - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. + - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm + - "instance_principal_auth" + - use instance principals to authorize an instance to make API calls. + - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. + - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + - "workload_identity_auth" + - use workload identity to grant OCI Container Engine for Kubernetes workloads policy-driven access to OCI resources using OCI Identity and Access Management (IAM). + - https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm + - "resource_principal_auth" + - use resource principals to make API calls + - "no_auth" + - no credentials needed, this is typically for reading public buckets #### --oos-namespace @@ -51460,8 +57176,8 @@ Properties: - Type: string - Default: "~/.oci/config" - Examples: - - "~/.oci/config" - - oci configuration file location + - "~/.oci/config" + - oci configuration file location #### --oos-config-profile @@ -51475,8 +57191,8 @@ Properties: - Type: string - Default: "Default" - Examples: - - "Default" - - Use the default profile + - "Default" + - Use the default profile ### Advanced options @@ -51493,12 +57209,12 @@ Properties: - Type: string - Default: "Standard" - Examples: - - "Standard" - - Standard storage tier, this is the default tier - - "InfrequentAccess" - - InfrequentAccess storage tier - - "Archive" - - Archive storage tier + - "Standard" + - Standard storage tier, this is the default tier + - "InfrequentAccess" + - InfrequentAccess storage tier + - "Archive" + - Archive storage tier #### --oos-upload-cutoff @@ -51710,8 +57426,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-customer-key @@ -51727,8 +57443,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-customer-key-sha256 @@ -51743,8 +57459,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-kms-key-id @@ -51760,8 +57476,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-customer-algorithm @@ -51776,10 +57492,10 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "AES256" - - AES256 + - "" + - None + - "AES256" + - AES256 #### --oos-description @@ -51813,9 +57529,11 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info. Here are the commands specific to the oracleobjectstorage backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. 
@@ -51827,26 +57545,35 @@ These can be run on a running backend using the rc command ### rename -change the name of an object +change the name of an object. - rclone backend rename remote: [options] [+] +```console +rclone backend rename remote: [options] [+] +``` This command can be used to rename a object. -Usage Examples: - - rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name +Usage example: +```console +rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name +``` ### list-multipart-uploads -List the unfinished multipart uploads +List the unfinished multipart uploads. - rclone backend list-multipart-uploads remote: [options] [+] +```console +rclone backend list-multipart-uploads remote: [options] [+] +``` This command lists the unfinished multipart uploads in JSON format. - rclone backend list-multipart-uploads oos:bucket/path/to/object +Usage example: + +```console +rclone backend list-multipart-uploads oos:bucket/path/to/object +``` It returns a dictionary of buckets with values as lists of unfinished multipart uploads. @@ -51854,85 +57581,102 @@ multipart uploads. You can call it with no bucket in which case it lists all bucket, with a bucket or with a bucket and path. - { - "test-bucket": [ - { - "namespace": "test-namespace", - "bucket": "test-bucket", - "object": "600m.bin", - "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8", - "timeCreated": "2022-07-29T06:21:16.595Z", - "storageTier": "Standard" - } - ] - +```json +{ + "test-bucket": [ + { + "namespace": "test-namespace", + "bucket": "test-bucket", + "object": "600m.bin", + "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8", + "timeCreated": "2022-07-29T06:21:16.595Z", + "storageTier": "Standard" + } + ] +} ### cleanup Remove unfinished multipart uploads. - rclone backend cleanup remote: [options] [+] +```console +rclone backend cleanup remote: [options] [+] +``` This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours. -Note that you can use --interactive/-i or --dry-run with this command to see what -it would do. +Note that you can use --interactive/-i or --dry-run with this command to see +what it would do. - rclone backend cleanup oos:bucket/path/to/object - rclone backend cleanup -o max-age=7w oos:bucket/path/to/object +Usage examples: + +```console +rclone backend cleanup oos:bucket/path/to/object +rclone backend cleanup -o max-age=7w oos:bucket/path/to/object +``` Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. - Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. ### restore -Restore objects from Archive to Standard storage +Restore objects from Archive to Standard storage. - rclone backend restore remote: [options] [+] +```console +rclone backend restore remote: [options] [+] +``` -This command can be used to restore one or more objects from Archive to Standard storage. +This command can be used to restore one or more objects from Archive to +Standard storage. - Usage Examples: +Usage examples: - rclone backend restore oos:bucket/path/to/directory -o hours=HOURS - rclone backend restore oos:bucket -o hours=HOURS +```console +rclone backend restore oos:bucket/path/to/directory -o hours=HOURS +rclone backend restore oos:bucket -o hours=HOURS +``` This flag also obeys the filters. 
Test first with --interactive/-i or --dry-run flags - rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72 +```console +rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72 +``` -All the objects shown will be marked for restore, then +All the objects shown will be marked for restore, then: - rclone backend restore --include "*.txt" oos:bucket/path -o hours=72 +```console +rclone backend restore --include "*.txt" oos:bucket/path -o hours=72 +``` - It returns a list of status dictionaries with Object Name and Status - keys. The Status will be "RESTORED"" if it was successful or an error message - if not. - - [ - { - "Object": "test.txt" - "Status": "RESTORED", - }, - { - "Object": "test/file4.txt" - "Status": "RESTORED", - } - ] +It returns a list of status dictionaries with Object Name and Status keys. +The Status will be "RESTORED"" if it was successful or an error message if not. +```json +[ + { + "Object": "test.txt" + "Status": "RESTORED", + }, + { + "Object": "test/file4.txt" + "Status": "RESTORED", + } +] +``` Options: -- "hours": The number of hours for which this object will be restored. Default is 24 hrs. - +- "hours": The number of hours for which this object will be restored. +Default is 24 hrs. + ## Tutorials + ### [Mounting Buckets](https://rclone.org/oracleobjectstorage/tutorial_mount/) # QingStor @@ -51944,12 +57688,14 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Here is an example of making an QingStor configuration. First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote r) Rename remote c) Copy remote @@ -52011,20 +57757,28 @@ This remote is called `remote` and can now be used like this See all buckets - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new bucket - rclone mkdir remote:bucket +```console +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket +```console +rclone ls remote:bucket +``` Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. - rclone sync --interactive /home/local/directory remote:bucket +```console +rclone sync --interactive /home/local/directory remote:bucket +``` ### --fast-list @@ -52057,13 +57811,13 @@ zone`. There are two ways to supply `rclone` with a set of QingStor credentials. In order of precedence: - - Directly in the rclone configuration file (as configured by `rclone config`) - - set `access_key_id` and `secret_access_key` - - Runtime configuration: - - set `env_auth` to `true` in the config file - - Exporting the following environment variables before running `rclone` - - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY` - - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY` +- Directly in the rclone configuration file (as configured by `rclone config`) + - set `access_key_id` and `secret_access_key` +- Runtime configuration: + - set `env_auth` to `true` in the config file + - Exporting the following environment variables before running `rclone` + - Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY` + - Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY` ### Restricted filename characters @@ -52074,7 +57828,7 @@ that 0x7F is not replaced. Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. 
- + ### Standard options Here are the Standard options specific to qingstor (QingCloud Object Storage). @@ -52092,10 +57846,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter QingStor credentials in the next step. - - "true" - - Get QingStor credentials from the environment (env vars or IAM). + - "false" + - Enter QingStor credentials in the next step. + - "true" + - Get QingStor credentials from the environment (env vars or IAM). #### --qingstor-access-key-id @@ -52149,15 +57903,15 @@ Properties: - Type: string - Required: false - Examples: - - "pek3a" - - The Beijing (China) Three Zone. - - Needs location constraint pek3a. - - "sh1a" - - The Shanghai (China) First Zone. - - Needs location constraint sh1a. - - "gd2a" - - The Guangdong (China) Second Zone. - - Needs location constraint gd2a. + - "pek3a" + - The Beijing (China) Three Zone. + - Needs location constraint pek3a. + - "sh1a" + - The Shanghai (China) First Zone. + - Needs location constraint sh1a. + - "gd2a" + - The Guangdong (China) Second Zone. + - Needs location constraint gd2a. ### Advanced options @@ -52253,7 +58007,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -52262,7 +58016,8 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Quatrix @@ -52272,20 +58027,23 @@ Paths are specified as `remote:path` Paths may be as deep as required, e.g., `remote:directory/subdirectory`. -The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at `https:///profile/api-keys` -or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. +The initial setup for Quatrix involves getting an API Key from Quatrix. You can +get the API key in the user's profile at `https:///profile/api-keys` +or with the help of the API - . -See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer +See complete [Swagger documentation for Quatrix](https://docs.maytech.net/quatrix/quatrix-api/api-explorer). ## Configuration Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -52316,27 +58074,35 @@ d) Delete this remote y/e/d> y ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Quatrix - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Quatrix - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Quatrix directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### API key validity -API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account. 
-After disabling, the API Key can be enabled back. If the API Key was deleted and a new key was created, you can -update it in rclone config. The same happens if the hostname was changed. +API Key is created with no expiration date. It will be valid until you delete or +deactivate it in your account. After disabling, the API Key can be enabled back. +If the API Key was deleted and a new key was created, you can update it in rclone +config. The same happens if the hostname was changed. -``` +```console $ rclone config Current remotes: @@ -52391,25 +58157,33 @@ Quatrix does not support hashes, so you cannot use the `--checksum` flag. ### Restricted filename characters -File names in Quatrix are case sensitive and have limitations like the maximum length of a filename is 255, and the minimum length is 1. A file name cannot be equal to `.` or `..` nor contain `/` , `\` or non-printable ascii. +File names in Quatrix are case sensitive and have limitations like the maximum +length of a filename is 255, and the minimum length is 1. A file name cannot be +equal to `.` or `..` nor contain `/` , `\` or non-printable ascii. ### Transfers -For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to `--transfers` chunks at the same time (shared among all multipart uploads). -Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default, and it can be changed in the advanced configuration, so increasing `--transfers` will increase the memory use. -The chunk size has a maximum size limit, which is set to 100_000_000 bytes by default and can be changed in the advanced configuration. +For files above 50 MiB rclone will use a chunked transfer. Rclone will upload +up to `--transfers` chunks at the same time (shared among all multipart uploads). +Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by +default, and it can be changed in the advanced configuration, so increasing `--transfers` +will increase the memory use. The chunk size has a maximum size limit, which is +set to 100_000_000 bytes by default and can be changed in the advanced configuration. The size of the uploaded chunk will dynamically change depending on the upload speed. -The total memory use equals the number of transfers multiplied by the minimal chunk size. -In case there's free memory allocated for the upload (which equals the difference of `maximal_summary_chunk_size` and `minimal_chunk_size` * `transfers`), -the chunk size may increase in case of high upload speed. As well as it can decrease in case of upload speed problems. -If no free memory is available, all chunks will equal `minimal_chunk_size`. +The total memory use equals the number of transfers multiplied by the minimal +chunk size. In case there's free memory allocated for the upload (which equals +the difference of `maximal_summary_chunk_size` and `minimal_chunk_size` * `transfers`), +the chunk size may increase in case of high upload speed. As well as it can decrease +in case of upload speed problems. If no free memory is available, all chunks will +equal `minimal_chunk_size`. ### Deleting files Files you delete with rclone will end up in Trash and be stored there for 30 days. -Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account. - +Quatrix also provides an API to permanently delete files and an API to empty the +Trash so that you can remove files permanently from your account. 
+ ### Standard options Here are the Standard options specific to quatrix (Quatrix by Maytech). @@ -52519,17 +58293,20 @@ Properties: - Type: string - Required: false - + ## Storage usage -The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. -The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. -This can be fixed by freeing up the space or increasing the quota. +The storage usage in Quatrix is restricted to the account during the purchase. +You can restrict any user with a smaller storage limit. The account limit is +applied if the user has no custom storage limit. Once you've reached the limit, +the upload of files will fail. This can be fixed by freeing up the space or +increasing the quota. ## Server-side operations -Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation. +Quatrix supports server-side operations (copy and move). In case of conflict, +files are overwritten during server-side operation. # Sia @@ -52549,14 +58326,15 @@ network (e.g. a NAS). Please follow the [Get started](https://sia.tech/get-start guide and install one. rclone interacts with Sia network by talking to the Sia daemon via [HTTP API](https://sia.tech/docs/) -which is usually available on port _9980_. By default you will run the daemon +which is usually available on port *9980*. By default you will run the daemon locally on the same computer so it's safe to leave the API password blank (the API URL will be `http://127.0.0.1:9980` making external access impossible). However, if you want to access Sia daemon running on another node, for example due to memory constraints or because you want to share single daemon between several rclone and Sia-UI instances, you'll need to make a few more provisions: -- Ensure you have _Sia daemon_ installed directly or in + +- Ensure you have *Sia daemon* installed directly or in a [docker container](https://github.com/SiaFoundation/siad/pkgs/container/siad) because Sia-UI does not support this mode natively. - Run it on externally accessible port, for example provide `--api-addr :9980` @@ -52565,8 +58343,8 @@ several rclone and Sia-UI instances, you'll need to make a few more provisions: `SIA_API_PASSWORD` or text file named `apipassword` in the daemon directory. - Set rclone backend option `api_password` taking it from above locations. - Notes: + 1. If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command line `siac wallet unlock`. @@ -52586,11 +58364,13 @@ Notes: Here is an example of how to make a `sia` remote called `mySia`. First, run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? 
n) New remote s) Set configuration password @@ -52640,23 +58420,23 @@ Once configured, you can then use `rclone` like this: - List directories in top level of your Sia storage -``` -rclone lsd mySia: -``` + ```console + rclone lsd mySia: + ``` - List all the files in your Sia storage -``` -rclone ls mySia: -``` + ```console + rclone ls mySia: + ``` -- Upload a local directory to the Sia directory called _backup_ - -``` -rclone copy /home/source mySia:backup -``` +- Upload a local directory to the Sia directory called *backup* + ```console + rclone copy /home/source mySia:backup + ``` + ### Standard options Here are the Standard options specific to sia (Sia Decentralized Cloud). @@ -52731,14 +58511,14 @@ Properties: - Type: string - Required: false - + ## Limitations - Modification times not supported - Checksums not supported - `rclone about` not supported -- rclone can work only with _Siad_ or _Sia-UI_ at the moment, +- rclone can work only with *Siad* or *Sia-UI* at the moment, the **SkyNet daemon is not supported yet.** - Sia does not allow control characters or symbols like question and pound signs in file names. rclone will transparently [encode](https://rclone.org/overview/#encoding) @@ -52749,12 +58529,12 @@ Properties: Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/). Commercial implementations of that being: - * [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) - * [Memset Memstore](https://www.memset.com/cloud/storage/) - * [OVH Object Storage](https://www.ovhcloud.com/en/public-cloud/object-storage/) - * [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) - * [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/) - * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) +- [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/) +- [Memset Memstore](https://www.memset.com/cloud/storage/) +- [OVH Object Storage](https://www.ovhcloud.com/en/public-cloud/object-storage/) +- [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html) +- [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/) +- [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html) Paths are specified as `remote:container` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`. @@ -52763,12 +58543,14 @@ command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir Here is an example of making a swift configuration. First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -52864,27 +58646,35 @@ This remote is called `remote` and can now be used like this See all containers - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new container - rclone mkdir remote:container +```console +rclone mkdir remote:container +``` List the contents of a container - rclone ls remote:container +```console +rclone ls remote:container +``` Sync `/home/local/directory` to the remote container, deleting any excess files in the container. 
- rclone sync --interactive /home/local/directory remote:container +```console +rclone sync --interactive /home/local/directory remote:container +``` ### Configuration from an OpenStack credentials file An OpenStack credentials file typically looks something something like this (without the comments) -``` +```sh export OS_AUTH_URL=https://a.provider.net/v2.0 export OS_TENANT_ID=ffffffffffffffffffffffffffffffff export OS_TENANT_NAME="1234567890123456" @@ -52900,7 +58690,7 @@ The config file needs to look something like this where `$OS_USERNAME` represents the value of the `OS_USERNAME` variable - `123abc567xy` in the example above. -``` +```ini [remote] type = swift user = $OS_USERNAME @@ -52928,12 +58718,12 @@ in the docs for the swift library. ### Using an alternate authentication method If your OpenStack installation uses a non-standard authentication method -that might not be yet supported by rclone or the underlying swift library, -you can authenticate externally (e.g. calling manually the `openstack` -commands to get a token). Then, you just need to pass the two -configuration variables ``auth_token`` and ``storage_url``. -If they are both provided, the other variables are ignored. rclone will -not try to authenticate but instead assume it is already authenticated +that might not be yet supported by rclone or the underlying swift library, +you can authenticate externally (e.g. calling manually the `openstack` +commands to get a token). Then, you just need to pass the two +configuration variables ``auth_token`` and ``storage_url``. +If they are both provided, the other variables are ignored. rclone will +not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation. #### Using rclone without a config file @@ -52941,7 +58731,7 @@ and use these two variables to access the OpenStack installation. You can use rclone with swift without a config file, if desired, like this: -``` +```sh source openstack-credentials-file export RCLONE_CONFIG_MYREMOTE_TYPE=swift export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true @@ -52988,7 +58778,7 @@ The MD5 hash algorithm is supported. Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Blomp Cloud Storage, Memset Memstore, OVH)). @@ -53004,11 +58794,11 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter swift credentials in the next step. - - "true" - - Get swift credentials from environment vars. - - Leave other fields blank if using this. + - "false" + - Enter swift credentials in the next step. + - "true" + - Get swift credentials from environment vars. + - Leave other fields blank if using this. 
#### --swift-user @@ -53043,20 +58833,20 @@ Properties: - Type: string - Required: false - Examples: - - "https://auth.api.rackspacecloud.com/v1.0" - - Rackspace US - - "https://lon.auth.api.rackspacecloud.com/v1.0" - - Rackspace UK - - "https://identity.api.rackspacecloud.com/v2.0" - - Rackspace v2 - - "https://auth.storage.memset.com/v1.0" - - Memset Memstore UK - - "https://auth.storage.memset.com/v2.0" - - Memset Memstore UK v2 - - "https://auth.cloud.ovh.net/v3" - - OVH - - "https://authenticate.ain.net" - - Blomp Cloud Storage + - "https://auth.api.rackspacecloud.com/v1.0" + - Rackspace US + - "https://lon.auth.api.rackspacecloud.com/v1.0" + - Rackspace UK + - "https://identity.api.rackspacecloud.com/v2.0" + - Rackspace v2 + - "https://auth.storage.memset.com/v1.0" + - Memset Memstore UK + - "https://auth.storage.memset.com/v2.0" + - Memset Memstore UK v2 + - "https://auth.cloud.ovh.net/v3" + - OVH + - "https://authenticate.ain.net" + - Blomp Cloud Storage #### --swift-user-id @@ -53201,12 +58991,12 @@ Properties: - Type: string - Default: "public" - Examples: - - "public" - - Public (default, choose this if not sure) - - "internal" - - Internal (use internal service net) - - "admin" - - Admin + - "public" + - Public (default, choose this if not sure) + - "internal" + - Internal (use internal service net) + - "admin" + - Admin #### --swift-storage-policy @@ -53224,12 +59014,12 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Default - - "pcs" - - OVH Public Cloud Storage - - "pca" - - OVH Public Cloud Archive + - "" + - Default + - "pcs" + - OVH Public Cloud Storage + - "pca" + - OVH Public Cloud Archive ### Advanced options @@ -53422,7 +59212,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -53451,17 +59241,24 @@ setting up a swift remote. ## OVH Cloud Archive -To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`. +To use rclone with OVH cloud archive, first use `rclone config` to set up a +`swift` backend with OVH, choosing `pca` as the `storage_policy`. ### Uploading Objects -Uploading objects to OVH cloud archive is no different to object storage, you just simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel. +Uploading objects to OVH cloud archive is no different to object storage, you +just simply run the command you like (move, copy or sync) to upload the objects. +Once uploaded the objects will show in a "Frozen" state within the OVH control panel. ### Retrieving Objects -To retrieve objects use `rclone copy` as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following: +To retrieve objects use `rclone copy` as normal. If the objects are in a frozen +state then rclone will ask for them all to be unfrozen and it will wait at the +end of the output with a message like the following: -`2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)` +```text +2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s) +``` Rclone will wait for the time specified then retry the copy. @@ -53478,11 +59275,13 @@ need to do in your browser. `rclone config` walks you through it. 
Here is an example of how to make a remote called `remote`. First run:

-    rclone config
+```console
+rclone config
+```

This will guide you through an interactive setup process:

-```
+```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -53530,7 +59329,7 @@ y/e/d> y
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
+machine without an internet-connected web browser available.

Note if you are using remote config with rclone authorize while your pcloud
server is in the EU region, you will need to set the hostname in 'Edit advanced
@@ -53542,19 +59341,26 @@ your browser to the moment you get back the verification code. This is on
`http://127.0.0.1:53682/` and it may require you to unblock it temporarily
if you are running a host firewall.

-Once configured you can then use `rclone` like this,
+Once configured you can then use `rclone` like this (replace `remote` with the
+name you gave your remote):

List directories in top level of your pCloud

-    rclone lsd remote:
+```console
+rclone lsd remote:
+```

List all the files in your pCloud

-    rclone ls remote:
+```console
+rclone ls remote:
+```

To copy a local directory to a pCloud directory called backup

-    rclone copy /home/source remote:backup
+```console
+rclone copy /home/source remote:backup
+```

### Modification times and hashes

@@ -53586,10 +59392,11 @@ be used to empty the trash.

### Emptying the trash

-Due to an API limitation, the `rclone cleanup` command will only work if you
-set your username and password in the advanced options for this backend.
+Due to an API limitation, the `rclone cleanup` command will only work if you
+set your username and password in the advanced options for this backend.
Since we generally want to avoid storing user passwords in the rclone config
-file, we advise you to only set this up if you need the `rclone cleanup` command to work.
+file, we advise you to only set this up if you need the `rclone cleanup` command
+to work.

### Root folder ID

@@ -53604,16 +59411,27 @@ However you can set this to restrict rclone to a specific folder
hierarchy.

In order to do this you will have to find the `Folder ID` of the
-directory you wish rclone to display. This will be the `folder` field
-of the URL when you open the relevant folder in the pCloud web
-interface.
+directory you wish rclone to display. This can be accomplished by running
+the `rclone lsf` command with a basic configuration that does not
+include the `root_folder_id` parameter.

-So if the folder you want rclone to use has a URL which looks like
-`https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid`
-in the browser, then you use `5xxxxxxxx8` as
-the `root_folder_id` in the config.
+The command will enumerate the available directories, allowing you to locate the
+appropriate Folder ID for subsequent use.

+Example:

+```console
+$ rclone lsf --dirs-only -Fip --csv TestPcloud:
+dxxxxxxxx2,My Music/
+dxxxxxxxx3,My Pictures/
+dxxxxxxxx4,My Videos/
+```
+
+So if the folder you want rclone to use is "My Music/", use the ID returned by
+the `rclone lsf` command (e.g. `dxxxxxxxx2`) as the `root_folder_id` value
+in the config file.
+
+
### Standard options

Here are the Standard options specific to pcloud (Pcloud).
@@ -53740,10 +59558,10 @@ Properties: - Type: string - Default: "api.pcloud.com" - Examples: - - "api.pcloud.com" - - Original/US region - - "eapi.pcloud.com" - - EU region + - "api.pcloud.com" + - Original/US region + - "eapi.pcloud.com" + - EU region #### --pcloud-username @@ -53784,7 +59602,7 @@ Properties: - Type: string - Required: false - + # PikPak @@ -53798,11 +59616,13 @@ Here is an example of making a remote for PikPak. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -53860,7 +59680,7 @@ but it does not support changing only the modification time The MD5 hash algorithm is supported. - + ### Standard options Here are the Standard options specific to pikpak (PikPak). @@ -54073,9 +59893,11 @@ Properties: Here are the commands specific to the pikpak backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -54087,48 +59909,56 @@ These can be run on a running backend using the rc command ### addurl -Add offline download task for url +Add offline download task for url. - rclone backend addurl remote: [options] [+] +```console +rclone backend addurl remote: [options] [+] +``` This command adds offline download task for url. -Usage: +Usage example: - rclone backend addurl pikpak:dirpath url +```console +rclone backend addurl pikpak:dirpath url +``` -Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, +Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, download will fallback to default 'My Pack' folder. - ### decompress -Request decompress of a file/files in a folder +Request decompress of a file/files in a folder. - rclone backend decompress remote: [options] [+] +```console +rclone backend decompress remote: [options] [+] +``` This command requests decompress of file/files in a folder. -Usage: +Usage examples: - rclone backend decompress pikpak:dirpath {filename} -o password=password - rclone backend decompress pikpak:dirpath {filename} -o delete-src-file +```console +rclone backend decompress pikpak:dirpath {filename} -o password=password +rclone backend decompress pikpak:dirpath {filename} -o delete-src-file +``` -An optional argument 'filename' can be specified for a file located in -'pikpak:dirpath'. You may want to pass '-o password=password' for a -password-protected files. Also, pass '-o delete-src-file' to delete +An optional argument 'filename' can be specified for a file located in +'pikpak:dirpath'. You may want to pass '-o password=password' for a +password-protected files. Also, pass '-o delete-src-file' to delete source files after decompression finished. Result: - { - "Decompressed": 17, - "SourceDeleted": 0, - "Errors": 0 - } - - +```json +{ + "Decompressed": 17, + "SourceDeleted": 0, + "Errors": 0 +} +``` + ## Limitations @@ -54151,12 +59981,12 @@ subscriptions](https://pixeldrain.com/#pro). An overview of the filesystem's features and limitations is available in the [filesystem guide](https://pixeldrain.com/filesystem) on pixeldrain. -### Usage with account +## Usage with account To use the personal filesystem you will need a [pixeldrain account](https://pixeldrain.com/register) and either the Prepaid plan or one of the Patreon-based subscriptions. 
After registering and subscribing, your
-personal filesystem will be available at this link: https://pixeldrain.com/d/me.
+personal filesystem will be available at this link: <https://pixeldrain.com/d/me>.

Go to the [API keys page](https://pixeldrain.com/user/api_keys) on your account
and generate a new API key for rclone. Then run `rclone config` and use the API
@@ -54164,8 +59994,8 @@ key to create a new backend.

Example:

-```
-No remotes found, make a new one?
+```text
+No remotes found, make a new one?
n) New remote
d) Delete remote
c) Copy remote
@@ -54228,7 +60058,7 @@ q) Quit config
e/n/d/r/c/s/q> q
```

-### Usage without account
+## Usage without account

It is possible to gain read-only access to publicly shared directories through
rclone. For this you only need a directory ID. The directory ID can be found in
@@ -54244,7 +60074,7 @@ IDs.

Enter this directory ID in the rclone config and you will be able to access the
directory.

-
+
### Standard options

Here are the Standard options specific to pixeldrain (Pixeldrain Filesystem).
@@ -54315,7 +60145,7 @@ Here are the possible system metadata items for the pixeldrain backend.

See the [metadata](https://rclone.org/docs/#metadata) docs for more info.

-
+
# premiumize.me

Paths may be as deep as required, e.g. `remote:directory/subdirectory`.

## Configuration

-The initial setup for [premiumize.me](https://premiumize.me/) involves getting a token from premiumize.me which you
-need to do in your browser. `rclone config` walks you through it.
+The initial setup for [premiumize.me](https://premiumize.me/) involves getting a
+token from premiumize.me which you need to do in your browser. `rclone config`
+walks you through it.

Here is an example of how to make a remote called `remote`. First run:

-    rclone config
+```console
+rclone config
+```

This will guide you through an interactive setup process:

-```
+```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -54375,7 +60208,7 @@ y/e/d>
```

See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
-machine with no Internet browser available.
+machine without an internet-connected web browser available.

Note that rclone runs a webserver on your local machine to collect the
token as returned from premiumize.me. This only runs from the moment it opens
@@ -54383,19 +60216,26 @@ your browser to the moment you get back the verification code. This is on
`http://127.0.0.1:53682/` and it may require you to unblock it
temporarily if you are running a host firewall.

-Once configured you can then use `rclone` like this,
+Once configured you can then use `rclone` like this (replace `remote` with the
+name you gave your remote):

List directories in top level of your premiumize.me

-    rclone lsd remote:
+```console
+rclone lsd remote:
+```

List all the files in your premiumize.me

-    rclone ls remote:
+```console
+rclone ls remote:
+```

To copy a local directory to a premiumize.me directory called backup

-    rclone copy /home/source remote:backup
+```console
+rclone copy /home/source remote:backup
+```

### Modification times and hashes

@@ -54416,7 +60256,7 @@ the following characters are also replaced:

Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8),
as they can't be used in JSON strings.

-
+
### Standard options

Here are the Standard options specific to premiumizeme (premiumize.me).
@@ -54541,7 +60381,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -54562,8 +60402,8 @@ premiumize.me only supports filenames up to 255 characters in length. This is an rclone backend for Proton Drive which supports the file transfer features of Proton Drive using the same client-side encryption. -Due to the fact that Proton Drive doesn't publish its API documentation, this -backend is implemented with best efforts by reading the open-sourced client +Due to the fact that Proton Drive doesn't publish its API documentation, this +backend is implemented with best efforts by reading the open-sourced client source code and observing the Proton Drive traffic in the browser. **NB** This backend is currently in Beta. It is believed to be correct @@ -54580,11 +60420,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -54626,23 +60468,30 @@ d) Delete this remote y/e/d> y ``` -**NOTE:** The Proton Drive encryption keys need to have been already generated -after a regular login via the browser, otherwise attempting to use the +**NOTE:** The Proton Drive encryption keys need to have been already generated +after a regular login via the browser, otherwise attempting to use the credentials in `rclone` will fail. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Proton Drive - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Proton Drive - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Proton Drive directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -54652,13 +60501,13 @@ The SHA1 hash algorithm is supported. ### Restricted filename characters -Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and +Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)) ### Duplicated files -Proton Drive can not have two files with exactly the same name and path. If the -conflict occurs, depending on the advanced config, the file might or might not +Proton Drive can not have two files with exactly the same name and path. If the +conflict occurs, depending on the advanced config, the file might or might not be overwritten. ### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) @@ -54667,14 +60516,14 @@ Please set your mailbox password in the advanced config section. ### Caching -The cache is currently built for the case when the rclone is the only instance +The cache is currently built for the case when the rclone is the only instance performing operations to the mount point. 
The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won’t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, +API system that provides visibility of what has changed on the drive, is yet +to be implemented, so updates from other clients won’t be reflected in the +cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data. - + ### Standard options Here are the Standard options specific to protondrive (Proton Drive). @@ -54719,6 +60568,24 @@ Properties: - Type: string - Required: false +#### --protondrive-otp-secret-key + +The OTP secret key + +The value can also be provided with --protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 + +The OTP secret key of your proton drive account if the account is set up with +two-factor authentication + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). + +Properties: + +- Config: otp_secret_key +- Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to protondrive (Proton Drive). @@ -54891,31 +60758,31 @@ Properties: - Type: string - Required: false - + ## Limitations -This backend uses the -[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which -is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a +This backend uses the +[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which +is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a fork of the [official repo](https://github.com/ProtonMail/go-proton-api). -There is no official API documentation available from Proton Drive. But, thanks -to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) -and the web, iOS, and Android client codebases, we don't need to completely -reverse engineer the APIs by observing the web client traffic! +There is no official API documentation available from Proton Drive. But, thanks +to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) +and the web, iOS, and Android client codebases, we don't need to completely +reverse engineer the APIs by observing the web client traffic! -[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic -building blocks of API calls and error handling, such as 429 exponential -back-off, but it is pretty much just a barebone interface to the Proton API. -For example, the encryption and decryption of the Proton Drive file are not -provided in this library. +[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic +building blocks of API calls and error handling, such as 429 exponential +back-off, but it is pretty much just a barebone interface to the Proton API. +For example, the encryption and decryption of the Proton Drive file are not +provided in this library. -The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on -top of this quickly. This codebase handles the intricate tasks before and after -calling Proton APIs, particularly the complex encryption scheme, allowing -developers to implement features for other software on top of this codebase. 
-There are likely quite a few errors in this library, as there isn't official +The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on +top of this quickly. This codebase handles the intricate tasks before and after +calling Proton APIs, particularly the complex encryption scheme, allowing +developers to implement features for other software on top of this codebase. +There are likely quite a few errors in this library, as there isn't official documentation available. # put.io @@ -54933,11 +60800,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -54992,10 +60861,10 @@ e/n/d/r/c/s/q> q ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the -token as returned from put.io if using web browser to automatically +token as returned from put.io if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this @@ -55006,15 +60875,21 @@ You can then use it like this, List directories in top level of your put.io - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your put.io - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to a put.io directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Restricted filename characters @@ -55028,7 +60903,7 @@ the following characters are also replaced: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to putio (Put.io). @@ -55139,7 +61014,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -55158,8 +61033,8 @@ may be different for different operations, and may change over time. This is an rclone backend for Proton Drive which supports the file transfer features of Proton Drive using the same client-side encryption. -Due to the fact that Proton Drive doesn't publish its API documentation, this -backend is implemented with best efforts by reading the open-sourced client +Due to the fact that Proton Drive doesn't publish its API documentation, this +backend is implemented with best efforts by reading the open-sourced client source code and observing the Proton Drive traffic in the browser. **NB** This backend is currently in Beta. It is believed to be correct @@ -55176,11 +61051,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? 
n) New remote s) Set configuration password @@ -55222,23 +61099,30 @@ d) Delete this remote y/e/d> y ``` -**NOTE:** The Proton Drive encryption keys need to have been already generated -after a regular login via the browser, otherwise attempting to use the +**NOTE:** The Proton Drive encryption keys need to have been already generated +after a regular login via the browser, otherwise attempting to use the credentials in `rclone` will fail. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Proton Drive - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Proton Drive - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Proton Drive directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -55248,13 +61132,13 @@ The SHA1 hash algorithm is supported. ### Restricted filename characters -Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and +Invalid UTF-8 bytes will be [replaced](https://rclone.org/overview/#invalid-utf8), also left and right spaces will be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)) ### Duplicated files -Proton Drive can not have two files with exactly the same name and path. If the -conflict occurs, depending on the advanced config, the file might or might not +Proton Drive can not have two files with exactly the same name and path. If the +conflict occurs, depending on the advanced config, the file might or might not be overwritten. ### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password) @@ -55263,14 +61147,14 @@ Please set your mailbox password in the advanced config section. ### Caching -The cache is currently built for the case when the rclone is the only instance +The cache is currently built for the case when the rclone is the only instance performing operations to the mount point. The event system, which is the proton -API system that provides visibility of what has changed on the drive, is yet -to be implemented, so updates from other clients won’t be reflected in the -cache. Thus, if there are concurrent clients accessing the same mount point, +API system that provides visibility of what has changed on the drive, is yet +to be implemented, so updates from other clients won’t be reflected in the +cache. Thus, if there are concurrent clients accessing the same mount point, then we might have a problem with caching the stale data. - + ### Standard options Here are the Standard options specific to protondrive (Proton Drive). @@ -55315,6 +61199,24 @@ Properties: - Type: string - Required: false +#### --protondrive-otp-secret-key + +The OTP secret key + +The value can also be provided with --protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 + +The OTP secret key of your proton drive account if the account is set up with +two-factor authentication + +**NB** Input to this must be obscured - see [rclone obscure](https://rclone.org/commands/rclone_obscure/). 
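+
+For example, you can generate the obscured form of the placeholder secret shown
+above and paste the command's output into the config (a sketch; substitute your
+real OTP secret):
+
+```console
+rclone obscure ABCDEFGHIJKLMNOPQRSTUVWXYZ234567
+```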
+ +Properties: + +- Config: otp_secret_key +- Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to protondrive (Proton Drive). @@ -55487,36 +61389,37 @@ Properties: - Type: string - Required: false - + ## Limitations -This backend uses the -[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which -is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a +This backend uses the +[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which +is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a fork of the [official repo](https://github.com/ProtonMail/go-proton-api). -There is no official API documentation available from Proton Drive. But, thanks -to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) -and the web, iOS, and Android client codebases, we don't need to completely -reverse engineer the APIs by observing the web client traffic! +There is no official API documentation available from Proton Drive. But, thanks +to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) +and the web, iOS, and Android client codebases, we don't need to completely +reverse engineer the APIs by observing the web client traffic! -[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic -building blocks of API calls and error handling, such as 429 exponential -back-off, but it is pretty much just a barebone interface to the Proton API. -For example, the encryption and decryption of the Proton Drive file are not -provided in this library. +[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic +building blocks of API calls and error handling, such as 429 exponential +back-off, but it is pretty much just a barebone interface to the Proton API. +For example, the encryption and decryption of the Proton Drive file are not +provided in this library. -The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on -top of this quickly. This codebase handles the intricate tasks before and after -calling Proton APIs, particularly the complex encryption scheme, allowing -developers to implement features for other software on top of this codebase. -There are likely quite a few errors in this library, as there isn't official +The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on +top of this quickly. This codebase handles the intricate tasks before and after +calling Proton APIs, particularly the complex encryption scheme, allowing +developers to implement features for other software on top of this codebase. +There are likely quite a few errors in this library, as there isn't official documentation available. # Seafile This is a backend for the [Seafile](https://www.seafile.com/) storage service: + - It works with both the free community edition or the professional edition. - Seafile versions 6.x, 7.x, 8.x and 9.x are all supported. - Encrypted libraries are also supported. @@ -55526,22 +61429,28 @@ This is a backend for the [Seafile](https://www.seafile.com/) storage service: ## Configuration There are two distinct modes you can setup your remote: -- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration: -Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`. 
+ +- you point your remote to the **root of the server**, meaning you don't + specify a library during the configuration: Paths are specified as + `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`. - you point your remote to a specific library during the configuration: -Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_) + Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. + (*This mode is possibly slightly faster than the root mode*) ### Configuration in root mode -Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run +Here is an example of making a seafile configuration for a user with **no** +two-factor authentication. First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -55606,31 +61515,42 @@ d) Delete this remote y/e/d> y ``` -This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this: +This remote is called `seafile`. It's pointing to the root of your seafile +server and can now be used like this: See all libraries - rclone lsd seafile: +```console +rclone lsd seafile: +``` Create a new library - rclone mkdir seafile:library +```console +rclone mkdir seafile:library +``` List the contents of a library - rclone ls seafile:library +```console +rclone ls seafile:library +``` Sync `/home/local/directory` to the remote library, deleting any excess files in the library. - rclone sync --interactive /home/local/directory seafile:library +```console +rclone sync --interactive /home/local/directory seafile:library +``` ### Configuration in library mode -Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you: +Here's an example of a configuration in library mode with a user that has the +two-factor authentication enabled. Your 2FA code will be asked at the end of +the configuration, and will attempt to authenticate you: -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -55699,28 +61619,36 @@ d) Delete this remote y/e/d> y ``` -You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once. +You'll notice your password is blank in the configuration. It's because we only +need the password to authenticate you once. -You specified `My Library` during the configuration. The root of the remote is pointing at the -root of the library `My Library`: +You specified `My Library` during the configuration. 
The root of the remote is +pointing at the root of the library `My Library`: See all files in the library: - rclone lsd seafile: +```console +rclone lsd seafile: +``` Create a new directory inside the library - rclone mkdir seafile:directory +```console +rclone mkdir seafile:directory +``` List the contents of a directory - rclone ls seafile:directory +```console +rclone ls seafile:directory +``` Sync `/home/local/directory` to the remote library, deleting any excess files in the library. - rclone sync --interactive /home/local/directory seafile: - +```console +rclone sync --interactive /home/local/directory seafile: +``` ### --fast-list @@ -55729,7 +61657,6 @@ transactions in exchange for more memory. See the [rclone docs](https://rclone.org/docs/#fast-list) for more details. Please note this is not supported on seafile server version 6.x - ### Restricted filename characters In addition to the [default restricted characters set](https://rclone.org/overview/#restricted-characters) @@ -55749,25 +61676,27 @@ as they can't be used in JSON strings. Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory: -``` -rclone link seafile:seafile-tutorial.doc +```console +$ rclone link seafile:seafile-tutorial.doc http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ ``` or if run on a directory you will get: -``` -rclone link seafile:dir +```console +$ rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ ``` -Please note a share link is unique for each file or directory. If you run a link command on a file/dir -that has already been shared, you will get the exact same link. +Please note a share link is unique for each file or directory. If you run a link +command on a file/dir that has already been shared, you will get the exact same link. ### Compatibility -It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions: +It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) +of these versions: + - 6.3.4 community edition - 7.0.5 community edition - 7.1.3 community edition @@ -55776,9 +61705,10 @@ It has been actively developed using the [seafile docker image](https://github.c Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly. -Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server. - +Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) +of the seafile community server. + ### Standard options Here are the Standard options specific to seafile (seafile). @@ -55794,8 +61724,8 @@ Properties: - Type: string - Required: true - Examples: - - "https://cloud.seafile.com/" - - Connect to cloud.seafile.com. + - "https://cloud.seafile.com/" + - Connect to cloud.seafile.com. #### --seafile-user @@ -55910,7 +61840,7 @@ Properties: - Type: string - Required: false - + # SFTP @@ -55919,19 +61849,24 @@ Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol). The SFTP backend can be used with a number of different providers: + + + - Hetzner Storage Box - rsync.net + + SFTP runs over SSH v2 and is installed as standard with most modern SSH installations. Paths are specified as `remote:path`. 
If the path does not begin with a `/` it is relative to the home directory of the user. An empty path -`remote:` refers to the user's home directory. For example, `rclone lsd remote:` -would list the home directory of the user configured in the rclone remote config -(`i.e /home/sftpuser`). However, `rclone lsd remote:/` would list the root +`remote:` refers to the user's home directory. For example, `rclone lsd remote:` +would list the home directory of the user configured in the rclone remote config +(`i.e /home/sftpuser`). However, `rclone lsd remote:/` would list the root directory for remote machine (i.e. `/`) Note that some SFTP servers will need the leading / - Synology is a @@ -55945,12 +61880,14 @@ the server, see [shell access considerations](#shell-access-considerations). Here is an example of making an SFTP configuration. First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -56001,50 +61938,67 @@ This remote is called `remote` and can now be used like this: See all directories in the home directory - rclone lsd remote: +```console +rclone lsd remote: +``` See all directories in the root directory - rclone lsd remote:/ +```console +rclone lsd remote:/ +``` Make a new directory - rclone mkdir remote:path/to/directory +```console +rclone mkdir remote:path/to/directory +``` List the contents of a directory - rclone ls remote:path/to/directory +```console +rclone ls remote:path/to/directory +``` Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory. - rclone sync --interactive /home/local/directory remote:directory +```console +rclone sync --interactive /home/local/directory remote:directory +``` Mount the remote path `/srv/www-data/` to the local path `/mnt/www-data` - rclone mount remote:/srv/www-data/ /mnt/www-data +```console +rclone mount remote:/srv/www-data/ /mnt/www-data +``` ### SSH Authentication The SFTP remote supports three authentication methods: - * Password - * Key file, including certificate signed keys - * ssh-agent +- Password +- Key file, including certificate signed keys +- ssh-agent Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`. Only unencrypted OpenSSH or PEM encrypted files are supported. -The key file can be specified in either an external file (key_file) or contained within the -rclone config file (key_pem). If using key_pem in the config file, the entry should be on a -single line with new line ('\n' or '\r\n') separating lines. i.e. +The key file can be specified in either an external file (key_file) or contained +within the rclone config file (key_pem). If using key_pem in the config file, +the entry should be on a single line with new line ('\n' or '\r\n') separating lines. +I.e. - key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY----- +```text +key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY----- +``` This will generate it correctly for key_pem for use in the config: - awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa +```console +awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa +``` If you don't specify `pass`, `key_file`, or `key_pem` or `ask_password` then rclone will attempt to contact an ssh-agent. 
You can also specify `key_use_agent` @@ -56072,7 +62026,7 @@ typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in Example: -``` +```ini [remote] type = sftp host = example.com @@ -56086,7 +62040,7 @@ merged file in both places. Note: the cert must come first in the file. e.g. -``` +```console cat id_rsa-cert.pub id_rsa > merged_key ``` @@ -56102,7 +62056,7 @@ by `OpenSSH` or can point to a unique file. e.g. using the OpenSSH `known_hosts` file: -``` +```ini [remote] type = sftp host = example.com @@ -56113,30 +62067,36 @@ known_hosts_file = ~/.ssh/known_hosts Alternatively you can create your own known hosts file like this: -``` +```console ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts ``` There are some limitations: -* `rclone` will not _manage_ this file for you. If the key is missing or -wrong then the connection will be refused. -* If the server is set up for a certificate host key then the entry in -the `known_hosts` file _must_ be the `@cert-authority` entry for the CA +- `rclone` will not *manage* this file for you. If the key is missing or + wrong then the connection will be refused. +- If the server is set up for a certificate host key then the entry in + the `known_hosts` file *must* be the `@cert-authority` entry for the CA If the host key provided by the server does not match the one in the file (or is missing) then the connection will be aborted and an error returned such as - NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch +```text +NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch +``` or - NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key is unknown +```text +NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key is unknown +``` If you see an error such as - NewFs: couldn't connect SSH: ssh: handshake failed: ssh: no authorities for hostname: example.com:22 +```text +NewFs: couldn't connect SSH: ssh: handshake failed: ssh: no authorities for hostname: example.com:22 +``` then it is likely the server has presented a CA signed host certificate and you will need to add the appropriate `@cert-authority` entry. @@ -56150,11 +62110,15 @@ Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, e.g. - eval `ssh-agent -s` && ssh-add -A +```console +eval `ssh-agent -s` && ssh-add -A +``` And then at the end of the session - eval `ssh-agent -k` +```console +eval `ssh-agent -k` +``` These commands can be used in scripts of course. @@ -56171,7 +62135,8 @@ and if shell access is available at all. Most servers run on some version of Unix, and then a basic Unix shell can be assumed, without further distinction. Windows 10, Server 2019, and later can also run a SSH server, which is a port of OpenSSH (see official -[installation guide](https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse)). On a Windows server the shell handling is different: Although it can also +[installation guide](https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse)). +On a Windows server the shell handling is different: Although it can also be set up to use a Unix type shell, e.g. Cygwin bash, the default is to use Windows Command Prompt (cmd.exe), and PowerShell is a recommended alternative. All of these have behave differently, which rclone must handle. 
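+
+If the shell is detected or configured incorrectly, you can pin it explicitly
+in the remote's config. A minimal sketch (`shell_type` accepts the values
+listed under the `--sftp-shell-type` option below; the host name is
+illustrative):
+
+```ini
+[remote]
+type = sftp
+host = example.com
+shell_type = powershell
+```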
@@ -56301,7 +62266,7 @@ with a Windows OpenSSH server, rclone will use a built-in shell command (see [shell access](#shell-access)). If none of the above is applicable, `about` will fail. - + ### Standard options Here are the Standard options specific to sftp (SSH/SFTP). @@ -56474,10 +62439,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Use default Cipher list. - - "true" - - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. + - "false" + - Use default Cipher list. + - "true" + - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. #### --sftp-disable-hashcheck @@ -56547,8 +62512,8 @@ Properties: - Type: string - Required: false - Examples: - - "~/.ssh/known_hosts" - - Use OpenSSH's known_hosts file. + - "~/.ssh/known_hosts" + - Use OpenSSH's known_hosts file. #### --sftp-ask-password @@ -56624,14 +62589,14 @@ Properties: - Type: string - Required: false - Examples: - - "none" - - No shell access - - "unix" - - Unix shell - - "powershell" - - PowerShell - - "cmd" - - Windows Command Prompt + - "none" + - No shell access + - "unix" + - Unix shell + - "powershell" + - PowerShell + - "cmd" + - Windows Command Prompt #### --sftp-hashes @@ -57104,7 +63069,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -57146,21 +63111,27 @@ See [Hetzner's documentation for details](https://docs.hetzner.com/robot/storage SMB is [a communication protocol to share files over network](https://en.wikipedia.org/wiki/Server_Message_Block). -This relies on [go-smb2 library](https://github.com/CloudSoda/go-smb2/) for communication with SMB protocol. +This relies on [go-smb2 library](https://github.com/CloudSoda/go-smb2/) for +communication with SMB protocol. Paths are specified as `remote:sharename` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`. ## Notes -The first path segment must be the name of the share, which you entered when you started to share on Windows. On smbd, it's the section title in `smb.conf` (usually in `/etc/samba/`) file. +The first path segment must be the name of the share, which you entered when +you started to share on Windows. On smbd, it's the section title in `smb.conf` +(usually in `/etc/samba/`) file. You can find shares by querying the root if you're unsure (e.g. `rclone lsd remote:`). -You can't access to the shared printers from rclone, obviously. +You can't access the shared printers from rclone, obviously. -You can't use Anonymous access for logging in. You have to use the `guest` user with an empty password instead. -The rclone client tries to avoid 8.3 names when uploading files by encoding trailing spaces and periods. -Alternatively, [the local backend](https://rclone.org/local/#paths-on-windows) on Windows can access SMB servers using UNC paths, by `\\server\share`. This doesn't apply to non-Windows OSes, such as Linux and macOS. +You can't use Anonymous access for logging in. You have to use the `guest` user +with an empty password instead. The rclone client tries to avoid 8.3 names when +uploading files by encoding trailing spaces and periods. Alternatively, +[the local backend](https://rclone.org/local/#paths-on-windows) on Windows can access SMB servers +using UNC paths, by `\\server\share`. This doesn't apply to non-Windows OSes, +such as Linux and macOS. 
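+
+For example, on a Windows machine you can list a share's top-level directories
+with the local backend, without configuring an smb remote at all (a sketch; the
+server and share names are placeholders):
+
+```console
+rclone lsd "\\server\share"
+```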
## Configuration @@ -57168,12 +63139,14 @@ Here is an example of making a SMB configuration. First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -57245,7 +63218,7 @@ d) Delete this remote y/e/d> d ``` - + ### Standard options Here are the Standard options specific to smb (SMB / CIFS). @@ -57435,7 +63408,7 @@ Properties: - Type: string - Required: false - + # Storj @@ -57467,95 +63440,99 @@ storage nodes across the network. Side by side comparison with more details: -* Characteristics: - * *Storj backend*: Uses native RPC protocol, connects directly +- Characteristics: + - *Storj backend*: Uses native RPC protocol, connects directly to the storage nodes which hosts the data. Requires more CPU resource of encoding/decoding and has network amplification (especially during the upload), uses lots of TCP connections - * *S3 backend*: Uses S3 compatible HTTP Rest API via the shared + - *S3 backend*: Uses S3 compatible HTTP Rest API via the shared gateways. There is no network amplification, but performance depends on the shared gateways and the secret encryption key is shared with the gateway. -* Typical usage: - * *Storj backend*: Server environments and desktops with enough +- Typical usage: + - *Storj backend*: Server environments and desktops with enough resources, internet speed and connectivity - and applications where storjs client-side encryption is required. - * *S3 backend*: Desktops and similar with limited resources, + - *S3 backend*: Desktops and similar with limited resources, internet speed or connectivity. -* Security: - * *Storj backend*: __strong__. Private encryption key doesn't +- Security: + - *Storj backend*: **strong**. Private encryption key doesn't need to leave the local computer. - * *S3 backend*: __weaker__. Private encryption key is [shared + - *S3 backend*: **weaker**. Private encryption key is [shared with](https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#security-and-encryption) the authentication service of the hosted gateway, where it's stored encrypted. It can be stronger when combining with the rclone [crypt](/crypt) backend. -* Bandwidth usage (upload): - * *Storj backend*: __higher__. As data is erasure coded on the +- Bandwidth usage (upload): + - *Storj backend*: **higher**. As data is erasure coded on the client side both the original data and the parities should be uploaded. About ~2.7 times more data is required to be uploaded. Client may start to upload with even higher number of nodes (~3.7 times more) and abandon/stop the slow uploads. - * *S3 backend*: __normal__. Only the raw data is uploaded, erasure + - *S3 backend*: **normal**. Only the raw data is uploaded, erasure coding happens on the gateway. -* Bandwidth usage (download) - * *Storj backend*: __almost normal__. Only the minimal number +- Bandwidth usage (download) + - *Storj backend*: **almost normal**. Only the minimal number of data is required, but to avoid very slow data providers a few more sources are used and the slowest are ignored (max 1.2x overhead). - * *S3 backend*: __normal__. Only the raw data is downloaded, erasure coding happens on the shared gateway. -* CPU usage: - * *Storj backend*: __higher__, but more predictable. Erasure + - *S3 backend*: **normal**. Only the raw data is downloaded, erasure + coding happens on the shared gateway. 
+- CPU usage: + - *Storj backend*: **higher**, but more predictable. Erasure code and encryption/decryption happens locally which requires significant CPU usage. - * *S3 backend*: __less__. Erasure code and encryption/decryption + - *S3 backend*: **less**. Erasure code and encryption/decryption happens on shared s3 gateways (and as is, it depends on the current load on the gateways) -* TCP connection usage: - * *Storj backend*: __high__. A direct connection is required to +- TCP connection usage: + - *Storj backend*: **high**. A direct connection is required to each of the Storj nodes resulting in 110 connections on upload and 35 on download per 64 MB segment. Not all the connections are actively used (slow ones are pruned), but they are all opened. [Adjusting the max open file limit](https://rclone.org/storj/#known-issues) may be required. - * *S3 backend*: __normal__. Only one connection per download/upload + - *S3 backend*: **normal**. Only one connection per download/upload thread is required to the shared gateway. -* Overall performance: - * *Storj backend*: with enough resources (CPU and bandwidth) +- Overall performance: + - *Storj backend*: with enough resources (CPU and bandwidth) *storj* backend can provide even 2x better performance. Data is directly downloaded to / uploaded from to the client instead of the gateway. - * *S3 backend*: Can be faster on edge devices where CPU and network + - *S3 backend*: Can be faster on edge devices where CPU and network bandwidth is limited as the shared S3 compatible gateways take care about the encrypting/decryption and erasure coding and no download/upload amplification. -* Decentralization: - * *Storj backend*: __high__. Data is downloaded directly from +- Decentralization: + - *Storj backend*: **high**. Data is downloaded directly from the distributed cloud of storage providers. - * *S3 backend*: __low__. Requires a running S3 gateway (either + - *S3 backend*: **low**. Requires a running S3 gateway (either self-hosted or Storj-hosted). -* Limitations: - * *Storj backend*: `rclone checksum` is not possible without +- Limitations: + - *Storj backend*: `rclone checksum` is not possible without download, as checksum metadata is not calculated during upload - * *S3 backend*: secret encryption key is shared with the gateway + - *S3 backend*: secret encryption key is shared with the gateway ## Configuration To make a new Storj configuration you need one of the following: -* Access Grant that someone else shared with you. -* [API Key](https://documentation.storj.io/getting-started/uploading-your-first-object/create-an-api-key) -of a Storj project you are a member of. + +- Access Grant that someone else shared with you. +- [API Key](https://documentation.storj.io/getting-started/uploading-your-first-object/create-an-api-key) + of a Storj project you are a member of. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: ### Setup with access grant -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -57596,8 +63573,8 @@ y/e/d> y ### Setup with API key and passphrase -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? 
n) New remote s) Set configuration password q) Quit config @@ -57652,7 +63629,7 @@ d) Delete this remote y/e/d> y ``` - + ### Standard options Here are the Standard options specific to storj (Storj Decentralized Cloud Storage). @@ -57668,10 +63645,10 @@ Properties: - Type: string - Default: "existing" - Examples: - - "existing" - - Use an existing access grant. - - "new" - - Create a new access grant from satellite address, API key, and passphrase. + - "existing" + - Use an existing access grant. + - "new" + - Create a new access grant from satellite address, API key, and passphrase. #### --storj-access-grant @@ -57699,12 +63676,12 @@ Properties: - Type: string - Default: "us1.storj.io" - Examples: - - "us1.storj.io" - - US1 - - "eu1.storj.io" - - EU1 - - "ap1.storj.io" - - AP1 + - "us1.storj.io" + - US1 + - "eu1.storj.io" + - EU1 + - "ap1.storj.io" + - AP1 #### --storj-api-key @@ -57747,7 +63724,7 @@ Properties: - Type: string - Required: false - + ## Usage @@ -57760,13 +63737,17 @@ Once configured you can then use `rclone` like this. Use the `mkdir` command to create new bucket, e.g. `bucket`. - rclone mkdir remote:bucket +```console +rclone mkdir remote:bucket +``` ### List all buckets Use the `lsf` command to list all buckets. - rclone lsf remote: +```console +rclone lsf remote: +``` Note the colon (`:`) character at the end of the command line. @@ -57774,24 +63755,32 @@ Note the colon (`:`) character at the end of the command line. Use the `rmdir` command to delete an empty bucket. - rclone rmdir remote:bucket +```console +rclone rmdir remote:bucket +``` Use the `purge` command to delete a non-empty bucket with all its content. - rclone purge remote:bucket +```console +rclone purge remote:bucket +``` ### Upload objects Use the `copy` command to upload an object. - rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/ +```console +rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/ +``` The `--progress` flag is for displaying progress information. Remove it if you don't need this information. Use a folder in the local path to upload all its objects. - rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/ +```console +rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/ +``` Only modified files will be copied. @@ -57799,51 +63788,70 @@ Only modified files will be copied. Use the `ls` command to list recursively all objects in a bucket. - rclone ls remote:bucket +```console +rclone ls remote:bucket +``` Add the folder to the remote path to list recursively all objects in this folder. - rclone ls remote:bucket/path/to/dir/ +```console +$ rclone ls remote:bucket +/path/to/dir/ +``` Use the `lsf` command to list non-recursively all objects in a bucket or a folder. - rclone lsf remote:bucket/path/to/dir/ +```console +rclone lsf remote:bucket/path/to/dir/ +``` ### Download objects Use the `copy` command to download an object. - rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/ +```console +rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/ +``` The `--progress` flag is for displaying progress information. Remove it if you don't need this information. Use a folder in the remote path to download all its objects. 
- rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/ +```console +rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/ +``` ### Delete objects Use the `deletefile` command to delete a single object. - rclone deletefile remote:bucket/path/to/dir/file.ext +```console +rclone deletefile remote:bucket/path/to/dir/file.ext +``` Use the `delete` command to delete all object in a folder. - rclone delete remote:bucket/path/to/dir/ +```console +rclone delete remote:bucket/path/to/dir/ +``` ### Print the total size of objects Use the `size` command to print the total size of objects in a bucket or a folder. - rclone size remote:bucket/path/to/dir/ +```console +rclone size remote:bucket/path/to/dir/ +``` ### Sync two Locations Use the `sync` command to sync the source to the destination, changing the destination only, deleting any excess files. - rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/ +```console +rclone sync --interactive --progress /home/local/directory/ remote:bucket/path/to/dir/ +``` The `--progress` flag is for displaying progress information. Remove it if you don't need this information. @@ -57853,15 +63861,21 @@ to see exactly what would be copied and deleted. The sync can be done also from Storj to the local file system. - rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/ +```console +rclone sync --interactive --progress remote:bucket/path/to/dir/ /home/local/directory/ +``` Or between two Storj buckets. - rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/ +```console +rclone sync --interactive --progress remote-us:bucket/path/to/dir/ remote-europe:bucket/path/to/dir/ +``` Or even between another cloud storage and Storj. - rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/ +```console +rclone sync --interactive --progress s3:bucket/path/to/dir/ storj:bucket/path/to/dir/ +``` ## Limitations @@ -57870,13 +63884,26 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). ## Known issues -If you get errors like `too many open files` this usually happens when the default `ulimit` for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes). +If you get errors like `too many open files` this usually happens when the +default `ulimit` for system max open files is exceeded. Native Storj protocol +opens a large number of TCP connections (each of which is counted as an open +file). For a single upload stream you can expect 110 TCP connections to be +opened. For a single download stream you can expect 35. 
This batch of +connections will be opened for every 64 MiB segment and you should also +expect TCP connections to be reused. If you do many transfers you eventually +open a connection to most storage nodes (thousands of nodes). -To fix these, please raise your system limits. You can do this issuing a `ulimit -n 65536` just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. `$HOME/.bashrc`, or change the system-wide configuration, usually `/etc/sysctl.conf` and/or `/etc/security/limits.conf`, but please refer to your operating system manual. +To fix these, please raise your system limits. You can do this issuing a +`ulimit -n 65536` just before you run rclone. To change the limits more +permanently you can add this to your shell startup script, +e.g. `$HOME/.bashrc`, or change the system-wide configuration, +usually `/etc/sysctl.conf` and/or `/etc/security/limits.conf`, but please +refer to your operating system manual. # SugarSync @@ -57891,11 +63918,13 @@ can do with rclone. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -57950,19 +63979,26 @@ y/e/d> y Note that the config asks for your email and password but doesn't store them, it only uses them to get the initial token. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories (sync folders) in top level of your SugarSync - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your SugarSync folder "Test" - rclone ls remote:Test +```console +rclone ls remote:Test +``` To copy a local directory to an SugarSync folder called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` Paths are specified as `remote:path` @@ -57994,8 +64030,7 @@ However you can supply the flag `--sugarsync-hard-delete` or set the config parameter `hard_delete = true` if you would like files to be deleted straight away. - - + ### Standard options Here are the Standard options specific to sugarsync (Sugarsync). @@ -58157,7 +64192,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -58166,7 +64201,8 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Uloz.to @@ -58174,18 +64210,20 @@ Paths are specified as `remote:path` Paths may be as deep as required, e.g. `remote:directory/subdirectory`. -The initial setup for Uloz.to involves filling in the user credentials. +The initial setup for Uloz.to involves filling in the user credentials. `rclone config` walks you through it. ## Configuration Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? 
n) New remote s) Set configuration password @@ -58235,36 +64273,43 @@ d) Delete this remote y/e/d> y ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List folders in root level folder: - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your root folder: - rclone ls remote: +```console +rclone ls remote: +``` To copy a local folder to a Uloz.to folder called backup: - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### User credentials -The only reliable method is to authenticate the user using -username and password. Uloz.to offers an API key as well, but +The only reliable method is to authenticate the user using +username and password. Uloz.to offers an API key as well, but it's reserved for the use of Uloz.to's in-house application -and using it in different circumstances is unreliable. +and using it in different circumstances is unreliable. ### Modification times and hashes Uloz.to doesn't allow the user to set a custom modification time, or retrieve the hashes after upload. As a result, the integration uses a free form field the API provides to encode client-provided -timestamps and hashes. Timestamps are stored with microsecond -precision. +timestamps and hashes. Timestamps are stored with microsecond +precision. -A server calculated MD5 hash of the file is verified upon upload. +A server calculated MD5 hash of the file is verified upon upload. Afterwards, the backend only serves the client-side calculated hashes. Hashes can also be retrieved upon creating a file download link, but it's impractical for `list`-like use cases. @@ -58283,16 +64328,16 @@ as they can't be used in JSON strings. ### Transfers -All files are currently uploaded using a single HTTP request, so +All files are currently uploaded using a single HTTP request, so for uploading large files a stable connection is necessary. Rclone will -upload up to `--transfers` chunks at the same time (shared among all +upload up to `--transfers` chunks at the same time (shared among all uploads). ### Deleting files By default, files are moved to the recycle bin whereas folders are deleted immediately. Trashed files are permanently deleted after -30 days in the recycle bin. +30 days in the recycle bin. Emptying the trash is currently not implemented in rclone. @@ -58311,15 +64356,15 @@ folder you wish to use as root. This will be the last segment of the URL when you open the relevant folder in the Uloz.to web interface. -For example, for exploring a folder with URL -`https://uloz.to/fm/my-files/foobar`, `foobar` should be used as the +For example, for exploring a folder with URL +`https://uloz.to/fm/my-files/foobar`, `foobar` should be used as the root slug. -`root_folder_slug` can be used alongside a specific path in the remote -path. For example, if your remote's `root_folder_slug` corresponds to `/foo/bar`, +`root_folder_slug` can be used alongside a specific path in the remote +path. For example, if your remote's `root_folder_slug` corresponds to `/foo/bar`, `remote:baz/qux` will refer to `ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux`. - + ### Standard options Here are the Standard options specific to ulozto (Uloz.to). @@ -58412,7 +64457,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -58433,12 +64478,14 @@ exposed in the API. 
Backends without this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). # Uptobox -This is a Backend for Uptobox file storage service. Uptobox is closer to a one-click hoster than a traditional -cloud storage provider and therefore not suitable for long term storage. +This is a Backend for Uptobox file storage service. Uptobox is closer to a +one-click hoster than a traditional cloud storage provider and therefore not +suitable for long term storage. Paths are specified as `remote:path` @@ -58446,16 +64493,19 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. ## Configuration -To configure an Uptobox backend you'll need your personal api token. You'll find it in your -[account settings](https://uptobox.com/my_account) +To configure an Uptobox backend you'll need your personal api token. You'll find +it in your [account settings](https://uptobox.com/my_account). -Here is an example of how to make a remote called `remote` with the default setup. First run: +Here is an example of how to make a remote called `remote` with the default setup. +First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text Current remotes: Name Type @@ -58497,21 +64547,29 @@ api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx y) Yes this is OK (default) e) Edit this remote d) Delete this remote -y/e/d> +y/e/d> ``` -Once configured you can then use `rclone` like this, + +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your Uptobox - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your Uptobox - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an Uptobox directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -58531,7 +64589,7 @@ the following characters are also replaced: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in XML strings. - + ### Standard options Here are the Standard options specific to uptobox (Uptobox). @@ -58588,7 +64646,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -58599,7 +64657,8 @@ been seen in the uptobox web interface. # Union -The `union` backend joins several remotes together to make a single unified view of them. +The `union` backend joins several remotes together to make a single unified view +of them. During the initial setup with `rclone config` you will specify the upstream remotes as a space separated list. The upstream remotes can either be a local @@ -58611,7 +64670,8 @@ to tag the remote as **read only**, **no create** or **writeback**, e.g. - `:ro` means files will only be read from here and never written - `:nc` means new files or directories won't be created here -- `:writeback` means files found in different remotes will be written back here. See the [writeback section](#writeback) for more info. 
+- `:writeback` means files found in different remotes will be written back here. + See the [writeback section](#writeback) for more info. Subfolders can be used in upstream remotes. Assume a union remote named `backup` with the remotes `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop` @@ -58626,11 +64686,13 @@ mydrive:private/backup/../desktop`. Here is an example of how to make a union called `remote` for local folders. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -58686,23 +64748,37 @@ q) Quit config e/n/d/r/c/s/q> q ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this: List directories in top level in `remote1:dir1`, `remote2:dir2` and `remote3:dir3` - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in `remote1:dir1`, `remote2:dir2` and `remote3:dir3` - rclone ls remote: +```console +rclone ls remote: +``` -Copy another local directory to the union directory called source, which will be placed into `remote3:dir3` +Copy another local directory to the union directory called source, which will be +placed into `remote3:dir3` - rclone copy C:\source remote:source +```console +rclone copy C:\source remote:source +``` ### Behavior / Policies -The behavior of union backend is inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs). All functions are grouped into 3 categories: **action**, **create** and **search**. These functions and categories can be assigned a policy which dictates what file or directory is chosen when performing that behavior. Any policy can be assigned to a function or category though some may not be very useful in practice. For instance: **rand** (random) may be useful for file creation (create) but could lead to very odd behavior if used for `delete` if there were more than one copy of the file. +The behavior of union backend is inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs). +All functions are grouped into 3 categories: **action**, **create** and **search**. +These functions and categories can be assigned a policy which dictates what file +or directory is chosen when performing that behavior. Any policy can be assigned +to a function or category though some may not be very useful in practice. For +instance: **rand** (random) may be useful for file creation (create) but could +lead to very odd behavior if used for `delete` if there were more than one copy +of the file. ### Function / Category classifications @@ -58715,17 +64791,22 @@ The behavior of union backend is inspired by [trapexit/mergerfs](https://github. ### Path Preservation -Policies, as described below, are of two basic types. `path preserving` and `non-path preserving`. +Policies, as described below, are of two basic types. `path preserving` and +`non-path preserving`. -All policies which start with `ep` (**epff**, **eplfs**, **eplus**, **epmfs**, **eprand**) are `path preserving`. `ep` stands for `existing path`. +All policies which start with `ep` (**epff**, **eplfs**, **eplus**, **epmfs**, **eprand**) +are `path preserving`. `ep` stands for `existing path`. -A path preserving policy will only consider upstreams where the relative path being accessed already exists. +A path preserving policy will only consider upstreams where the relative path +being accessed already exists. 
-When using non-path preserving policies paths will be created in target upstreams as necessary. +When using non-path preserving policies paths will be created in target upstreams +as necessary. ### Quota Relevant Policies -Some policies rely on quota information. These policies should be used only if your upstreams support the respective quota fields. +Some policies rely on quota information. These policies should be used only if +your upstreams support the respective quota fields. | Policy | Required Field | |------------|----------------| @@ -58734,21 +64815,27 @@ Some policies rely on quota information. These policies should be used only if y | lus, eplus | Used | | lno, eplno | Objects | -To check if your upstream supports the field, run `rclone about remote: [flags]` and see if the required field exists. +To check if your upstream supports the field, run `rclone about remote: [flags]` +and see if the required field exists. ### Filters -Policies basically search upstream remotes and create a list of files / paths for functions to work on. The policy is responsible for filtering and sorting. The policy type defines the sorting but filtering is mostly uniform as described below. +Policies basically search upstream remotes and create a list of files / paths for +functions to work on. The policy is responsible for filtering and sorting. The +policy type defines the sorting but filtering is mostly uniform as described below. -* No **search** policies filter. -* All **action** policies will filter out remotes which are tagged as **read-only**. -* All **create** policies will filter out remotes which are tagged **read-only** or **no-create**. +- No **search** policies filter. +- All **action** policies will filter out remotes which are tagged as **read-only**. +- All **create** policies will filter out remotes which are tagged **read-only** + or **no-create**. If all remotes are filtered an error will be returned. ### Policy descriptions -The policies definition are inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs) but not exactly the same. Some policy definition could be different due to the much larger latency of remote file systems. +The policies definition are inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs) +but not exactly the same. Some policy definition could be different due to the +much larger latency of remote file systems. | Policy | Description | |------------------|------------------------------------------------------------| @@ -58768,13 +64855,12 @@ The policies definition are inspired by [trapexit/mergerfs](https://github.com/t | newest | Pick the file / directory with the largest mtime. | | rand (random) | Calls **all** and then randomizes. Returns only one upstream. | - ### Writeback {#writeback} The tag `:writeback` on an upstream remote can be used to make a simple cache system like this: -``` +```ini [union] type = union action_policy = all @@ -58798,7 +64884,7 @@ Rclone does not manage the `:writeback` remote in any way other than writing files back to it. So if you need to expire old files or manage the size then you will have to do this yourself. - + ### Standard options Here are the Standard options specific to union (Union merges the contents of several upstream fs). @@ -58897,7 +64983,7 @@ Any metadata supported by the underlying remote is read and written. See the [metadata](https://rclone.org/docs/#metadata) docs for more info. - + # WebDAV @@ -58913,11 +64999,13 @@ connecting to then rclone can enable extra features. 
Here is an example of how to make a remote called `remote`. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -58982,19 +65070,26 @@ d) Delete this remote y/e/d> y ``` -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): List directories in top level of your WebDAV - rclone lsd remote: +```console +rclone lsd remote: +``` List all the files in your WebDAV - rclone ls remote: +```console +rclone ls remote: +``` To copy a local directory to an WebDAV directory called backup - rclone copy /home/source remote:backup +```console +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -59007,7 +65102,7 @@ Depending on the exact version of ownCloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them. - + ### Standard options Here are the Standard options specific to webdav (WebDAV). @@ -59036,22 +65131,22 @@ Properties: - Type: string - Required: false - Examples: - - "fastmail" - - Fastmail Files - - "nextcloud" - - Nextcloud - - "owncloud" - - Owncloud 10 PHP based WebDAV server - - "infinitescale" - - ownCloud Infinite Scale - - "sharepoint" - - Sharepoint Online, authenticated by Microsoft account - - "sharepoint-ntlm" - - Sharepoint with NTLM authentication, usually self-hosted or on-premises - - "rclone" - - rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol - - "other" - - Other site/service or software + - "fastmail" + - Fastmail Files + - "nextcloud" + - Nextcloud + - "owncloud" + - Owncloud 10 PHP based WebDAV server + - "infinitescale" + - ownCloud Infinite Scale + - "sharepoint" + - Sharepoint Online, authenticated by Microsoft account + - "sharepoint-ntlm" + - Sharepoint with NTLM authentication, usually self-hosted or on-premises + - "rclone" + - rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol + - "other" + - Other site/service or software #### --webdav-user @@ -59236,7 +65331,7 @@ Properties: - Type: string - Required: false - + ## Provider notes @@ -59264,7 +65359,9 @@ ownCloud supports modified times using the `X-OC-Mtime` header. This is configured in an identical way to ownCloud. Note that Nextcloud initially did not support streaming of files (`rcat`) whereas -ownCloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365) seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19). +ownCloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365) +seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud +Server v19). ### ownCloud Infinite Scale @@ -59307,7 +65404,7 @@ Set the `vendor` to `sharepoint`. Your config file should look like this: -``` +```ini [sharepoint] type = webdav url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents @@ -59318,17 +65415,19 @@ pass = encryptedpassword ### Sharepoint with NTLM Authentication -Use this option in case your (hosted) Sharepoint is not tied to OneDrive accounts and uses NTLM authentication. +Use this option in case your (hosted) Sharepoint is not tied to OneDrive +accounts and uses NTLM authentication. 
-To get the `url` configuration, similarly to the above, first navigate to the desired directory in your browser to get the URL, -then strip everything after the name of the opened directory. +To get the `url` configuration, similarly to the above, first navigate to the +desired directory in your browser to get the URL, then strip everything after +the name of the opened directory. Example: If the URL is: -https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx + The configuration to use would be: -https://example.sharepoint.com/sites/12345/Documents + Set the `vendor` to `sharepoint-ntlm`. @@ -59337,7 +65436,7 @@ set `user` to `DOMAIN\username`. Your config file should look like this: -``` +```ini [sharepoint] type = webdav url = https://[YOUR-DOMAIN]/some-path-to/Documents @@ -59348,11 +65447,15 @@ pass = encryptedpassword #### Required Flags for SharePoint -As SharePoint does some special things with uploaded documents, you won't be able to use the documents size or the documents hash to compare if a file has been changed since the upload / which file is newer. +As SharePoint does some special things with uploaded documents, you won't be +able to use the documents size or the documents hash to compare if a file has +been changed since the upload / which file is newer. -For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents: +For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) +from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure +Rclone uses the "Last Modified" datetime property to compare your documents: -``` +```text --ignore-size --ignore-checksum --update ``` @@ -59363,7 +65466,6 @@ Read [rclone serve webdav](commands/rclone_serve_webdav/) for more details. rclone serve supports modified times using the `X-OC-Mtime` header. - ### dCache dCache is a storage system that supports many protocols and @@ -59379,7 +65481,7 @@ password, instead enter your Macaroon as the `bearer_token`. The config will end up looking something like this. -``` +```ini [dcache] type = webdav url = https://dcache... @@ -59389,8 +65491,9 @@ pass = bearer_token = your-macaroon ``` -There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that -obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. +There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) +that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config +file. Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache. @@ -59409,7 +65512,7 @@ installed and configured, an access token is obtained by running the `oidc-token` command. The following example shows a (shortened) access token obtained from the *XDC* OIDC Provider. -``` +```text paul@celebrimbor:~$ oidc-token XDC eyJraWQ[...]QFXDt0 paul@celebrimbor:~$ @@ -59433,7 +65536,7 @@ edit the advanced config and enter the command to get a bearer token The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the *XDC* OIDC Provider. -``` +```ini [dcache] type = webdav url = https://dcache.example.org/ @@ -59449,11 +65552,13 @@ bearer_token_command = oidc-token XDC Here is an example of making a yandex configuration. 
First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -59496,7 +65601,7 @@ y/e/d> y ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it @@ -59504,24 +65609,33 @@ opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this it may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): See top level directories - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new directory - rclone mkdir remote:directory +```console +rclone mkdir remote:directory +``` List the contents of a directory - rclone ls remote:directory +```console +rclone ls remote:directory +``` Sync `/home/local/directory` to the remote path, deleting any excess files in the path. - rclone sync --interactive /home/local/directory remote:directory +```console +rclone sync --interactive /home/local/directory remote:directory +``` Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`. @@ -59551,7 +65665,7 @@ are replaced. Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be used in JSON strings. - + ### Standard options Here are the Standard options specific to yandex (Yandex Disk). @@ -59684,7 +65798,7 @@ Properties: - Type: string - Required: false - + ## Limitations @@ -59700,24 +65814,29 @@ to upload a 30 GiB file set a timeout of `2 * 30 = 60m`, that is `--timeout 60m`. Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. -Token generation will work without a mail account, but Rclone won't be able to complete any actions. -``` +Token generation will work without a mail account, but Rclone won't be able to +complete any actions. + +```text [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported. ``` # Zoho Workdrive -[Zoho WorkDrive](https://www.zoho.com/workdrive/) is a cloud storage solution created by [Zoho](https://zoho.com). +[Zoho WorkDrive](https://www.zoho.com/workdrive/) is a cloud storage solution +created by [Zoho](https://zoho.com). ## Configuration Here is an example of making a zoho configuration. First run - rclone config +```console +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -59778,7 +65897,7 @@ y/e/d> ``` See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a -machine with no Internet browser available. +machine without an internet-connected web browser available. Rclone runs a webserver on your local computer to collect the authorization token from Zoho Workdrive. This is only from the moment @@ -59787,24 +65906,33 @@ The webserver runs on `http://127.0.0.1:53682/`. If local port `53682` is protected by a firewall you may need to temporarily unblock the firewall to complete authorization. 
-Once configured you can then use `rclone` like this, +Once configured you can then use `rclone` like this (replace `remote` with the +name you gave your remote): See top level directories - rclone lsd remote: +```console +rclone lsd remote: +``` Make a new directory - rclone mkdir remote:directory +```console +rclone mkdir remote:directory +``` List the contents of a directory - rclone ls remote:directory +```console +rclone ls remote:directory +``` Sync `/home/local/directory` to the remote path, deleting any excess files in the path. - rclone sync --interactive /home/local/directory remote:directory +```console +rclone sync --interactive /home/local/directory remote:directory +``` Zoho paths may be as deep as required, eg `remote:directory/subdirectory`. @@ -59822,10 +65950,10 @@ command which will display your current usage. ### Restricted filename characters Only control characters and invalid UTF-8 are replaced. In addition most -Unicode full-width characters are not supported at all and will be removed +Unicode full-width characters are not supported at all and will be removed from filenames during upload. - + ### Standard options Here are the Standard options specific to zoho (Zoho). @@ -59871,18 +65999,18 @@ Properties: - Type: string - Required: false - Examples: - - "com" - - United states / Global - - "eu" - - Europe - - "in" - - India - - "jp" - - Japan - - "com.cn" - - China - - "com.au" - - Australia + - "com" + - United states / Global + - "eu" + - Europe + - "in" + - India + - "jp" + - Japan + - "com.cn" + - China + - "com.au" + - Australia ### Advanced options @@ -59975,17 +66103,20 @@ Properties: - Type: string - Required: false - + ## Setting up your own client_id -For Zoho we advise you to set up your own client_id. To do so you have to complete the following steps. +For Zoho we advise you to set up your own client_id. To do so you have to +complete the following steps. 1. Log in to the [Zoho API Console](https://api-console.zoho.com) -2. Create a new client of type "Server-based Application". The name and website don't matter, but you must add the redirect URL `http://localhost:53682/`. +2. Create a new client of type "Server-based Application". The name and website +don't matter, but you must add the redirect URL `http://localhost:53682/`. -3. Once the client is created, you can go to the settings tab and enable it in other regions. +3. Once the client is created, you can go to the settings tab and enable it in +other regions. The client id and client secret can now be used with rclone. @@ -59993,7 +66124,9 @@ The client id and client secret can now be used with rclone. Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so - rclone sync --interactive /home/source /tmp/destination +```console +rclone sync --interactive /home/source /tmp/destination +``` Will sync `/home/source` to `/tmp/destination`. @@ -60010,7 +66143,7 @@ Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second on OS X. -### Filenames ### +### Filenames Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X. @@ -60026,7 +66159,7 @@ be replaced with a quoted representation of the invalid bytes. The name `gro\xdf` will be transferred as `gro‛DF`. `rclone` will emit a debug message in this case (use `-v` to see), e.g. 
-``` +```text Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf" ``` @@ -60102,7 +66235,7 @@ These only get replaced if they are the last character in the name: Invalid UTF-8 bytes will also be [replaced](https://rclone.org/overview/#invalid-utf8), as they can't be converted to UTF-16. -### Paths on Windows ### +### Paths on Windows On Windows there are many ways of specifying a path to a file system resource. Local paths can be absolute, like `C:\path\to\wherever`, or relative, @@ -60118,10 +66251,11 @@ so in most cases you do not have to worry about this (read more [below](#long-pa Using the same prefix `\\?\` it is also possible to specify path to volumes identified by their GUID, e.g. `\\?\Volume{b75e2c83-0000-0000-0000-602f00000000}\some\path`. -#### Long paths #### +#### Long paths Rclone handles long paths automatically, by converting all paths to -[extended-length path format](https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation), which allows paths up to 32,767 characters. +[extended-length path format](https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation), +which allows paths up to 32,767 characters. This conversion will ensure paths are absolute and prefix them with the `\\?\`. This is why you will see that your paths, for instance @@ -60132,18 +66266,19 @@ However, in rare cases this may cause problems with buggy file system drivers like [EncFS](https://github.com/rclone/rclone/issues/261). To disable UNC conversion globally, add this to your `.rclone.conf` file: -``` +```ini [local] nounc = true ``` If you want to selectively disable UNC, you can add it to a separate entry like this: -``` +```ini [nounc] type = local nounc = true ``` + And use rclone like this: `rclone copy c:\src nounc:z:\dst` @@ -60165,7 +66300,7 @@ This flag applies to all commands. For example, supposing you have a directory structure like this -``` +```console $ tree /tmp/a /tmp/a ├── b -> ../b @@ -60177,7 +66312,7 @@ $ tree /tmp/a Then you can see the difference with and without the flag like this -``` +```console $ rclone ls /tmp/a 6 one 6 two/three @@ -60185,7 +66320,7 @@ $ rclone ls /tmp/a and -``` +```console $ rclone -L ls /tmp/a 4174 expected 6 one @@ -60194,7 +66329,7 @@ $ rclone -L ls /tmp/a 6 b/one ``` -#### --local-links, --links, -l +#### --local-links, --links, -l Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows). @@ -60208,7 +66343,7 @@ This flag applies to all commands. 
For example, supposing you have a directory structure like this -``` +```console $ tree /tmp/a /tmp/a ├── file1 -> ./file4 @@ -60217,13 +66352,13 @@ $ tree /tmp/a Copying the entire directory with '-l' -``` -$ rclone copy -l /tmp/a/ remote:/tmp/a/ +```console +rclone copy -l /tmp/a/ remote:/tmp/a/ ``` The remote files are created with a `.rclonelink` suffix -``` +```console $ rclone ls remote:/tmp/a 5 file1.rclonelink 14 file2.rclonelink @@ -60231,7 +66366,7 @@ $ rclone ls remote:/tmp/a The remote files will contain the target of the symbolic links -``` +```console $ rclone cat remote:/tmp/a/file1.rclonelink ./file4 @@ -60241,7 +66376,7 @@ $ rclone cat remote:/tmp/a/file2.rclonelink Copying them back with '-l' -``` +```console $ rclone copy -l remote:/tmp/a/ /tmp/b/ $ tree /tmp/b @@ -60252,7 +66387,7 @@ $ tree /tmp/b However, if copied back without '-l' -``` +```console $ rclone copyto remote:/tmp/a/ /tmp/b/ $ tree /tmp/b @@ -60263,7 +66398,7 @@ $ tree /tmp/b If you want to copy a single file with `-l` then you must use the `.rclonelink` suffix. -``` +```console $ rclone copy -l remote:/tmp/a/file1.rclonelink /tmp/c $ tree /tmp/c @@ -60287,7 +66422,7 @@ different file systems. For example if you have a directory hierarchy like this -``` +```console root ├── disk1 - disk1 mounted on the root │   └── file3 - stored on disk1 @@ -60297,15 +66432,16 @@ root └── file2 - stored on the root disk ``` -Using `rclone --one-file-system copy root remote:` will only copy `file1` and `file2`. Eg +Using `rclone --one-file-system copy root remote:` will only copy `file1` +and `file2`. E.g. -``` +```console $ rclone -q --one-file-system ls root 0 file1 0 file2 ``` -``` +```console $ rclone -q ls root 0 disk1/file3 0 disk2/file4 @@ -60320,7 +66456,7 @@ filesystem. **NB** This flag is only available on Unix based systems. On systems where it isn't supported (e.g. Windows) it will be ignored. - + ### Advanced options Here are the Advanced options specific to local (Local Disk). @@ -60336,8 +66472,8 @@ Properties: - Type: bool - Default: false - Examples: - - "true" - - Disables long file names. + - "true" + - Disables long file names. #### --copy-links / -L @@ -60375,6 +66511,21 @@ Properties: - Type: bool - Default: false +#### --skip-specials + +Don't warn about skipped pipes, sockets and device objects. + +This flag disables warning messages on skipped pipes, sockets and +device objects, as you explicitly acknowledge that they should be +skipped. + +Properties: + +- Config: skip_specials +- Env Var: RCLONE_LOCAL_SKIP_SPECIALS +- Type: bool +- Default: false + #### --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated). @@ -60606,14 +66757,14 @@ Properties: - Type: mtime|atime|btime|ctime - Default: mtime - Examples: - - "mtime" - - The last modification time. - - "atime" - - The last access time. - - "btime" - - The creation time. - - "ctime" - - The last status change time. + - "mtime" + - The last modification time. + - "atime" + - The last access time. + - "btime" + - The creation time. + - "ctime" + - The last status change time. #### --local-hashes @@ -60681,9 +66832,11 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info. Here are the commands specific to the local backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. 
@@ -60695,24 +66848,197 @@ These can be run on a running backend using the rc command ### noop -A null operation for testing backend commands +A null operation for testing backend commands. - rclone backend noop remote: [options] [+] +```console +rclone backend noop remote: [options] [+] +``` -This is a test command which has some options -you can try to change the output. +This is a test command which has some options you can try to change the output. Options: -- "echo": echo the input arguments -- "error": return an error based on option value - +- "echo": Echo the input arguments. +- "error": Return an error based on option value. + # Changelog +## v1.72.0 - 2025-11-21 + +[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0) + +- New backends + - [Archive](/archive) backend to read archives on cloud storage. (Nick Craig-Wood) +- New S3 providers + - [Cubbit Object Storage](https://rclone.org/s3/#Cubbit) (Marco Ferretti) + - [FileLu S5 Object Storage](https://rclone.org/s3/#filelu-s5) (kingston125) + - [Hetzner Object Storage](https://rclone.org/s3/#hetzner) (spiffytech) + - [Intercolo Object Storage](https://rclone.org/s3/#intercolo) (Robin Rolf) + - [Rabata S3-compatible secure cloud storage](https://rclone.org/s3/#Rabata) (dougal) + - [Servercore Object Storage](https://rclone.org/s3/#servercore) (dougal) + - [SpectraLogic](https://rclone.org/s3/#spectralogic) (dougal) +- New commands + - [rclone archive](https://rclone.org/commands/rclone_archive/): command to create and read archive files (Fawzib Rojas) + - [rclone config string](https://rclone.org/commands/rclone_config_string/): for making connection strings (Nick Craig-Wood) + - [rclone test speed](https://rclone.org/commands/rclone_test_speed/): Add command to test a specified remotes speed (dougal) +- New Features + - backends: many backends have has a paged listing (`ListP`) interface added + - this enables progress when listing large directories and reduced memory usage + - build + - Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 (dependabot[bot]) + - Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko) + - Update all dependencies (Nick Craig-Wood) + - Enable support for `aix/ppc64` (Lakshmi-Surekha) + - check: Improved reporting of differences in sizes and contents (albertony) + - copyurl: Added `--url` to read URLs from CSV file (S-Pegg1, dougal) + - docs: + - markdown linting (albertony) + - fixes (albertony, Andrew Gunnerson, anon-pradip, Claudius Ellsel, dougal, iTrooz, Jean-Christophe Cura, Joseph Brownlee, kapitainsky, Matt LaPaglia, n4n5, Nick Craig-Wood, nielash, SublimePeace, Ted Robertson, vastonus) + - fs: remove unnecessary Seek call on log file (Aneesh Agrawal) + - hashsum: Improved output format when listing algorithms (albertony) + - lib/http: Cleanup indentation and other whitespace in http serve template (albertony) + - lsf: Add support for `unix` and `unixnano` time formats (Motte) + - oauthutil: Improved debug logs from token refresh (albertony) + - rc + - Add [job/batch](https://rclone.org/rc/#job-batch) for sending batches of rc commands to run concurrently (Nick Craig-Wood) + - Add `runningIds` and `finishedIds` to [job/list](https://rclone.org/rc/#job-list) (n4n5) + - Add `osVersion`, `osKernel` and `osArch` to [core/version](https://rclone.org/rc/#core-version) (Nick Craig-Wood) + - Make sure fatal errors run via the rc don't crash rclone (Nick Craig-Wood) + - Add `executeId` to job statuses in 
[job/list](https://rclone.org/rc/#job-list) (Nikolay Kiryanov) + - `config/unlock`: rename parameter to `configPassword` accept old as well (Nick Craig-Wood) + - serve http: Download folders as zip (dougal) +- Bug Fixes + - build + - Fix tls: failed to verify certificate: x509: negative serial number (Nick Craig-Wood) + - march + - Fix `--no-traverse` being very slow (Nick Craig-Wood) + - serve s3: Fix log output to remove the EXTRA messages (iTrooz) +- Mount + - Windows: improve error message on missing WinFSP (divinity76) +- Local + - Add `--skip-specials` to ignore special files (Adam Dinwoodie) +- Azure Blob + - Add ListP interface (dougal) +- Azurefiles + - Add ListP interface (Nick Craig-Wood) +- B2 + - Add ListP interface (dougal) + - Add Server-Side encryption support (fries1234) + - Fix "expected a FileSseMode but found: ''" (dougal) + - Allow individual old versions to be deleted with `--b2-versions` (dougal) +- Box + - Add ListP interface (Nick Craig-Wood) + - Allow configuration with config file contents (Dominik Sander) +- Compress + - Add zstd compression (Alex) +- Drive + - Add ListP interface (Nick Craig-Wood) +- Dropbox + - Add ListP interface (Nick Craig-Wood) + - Fix error moving just created objects (Nick Craig-Wood) +- FTP + - Fix SOCKS proxy support (dougal) + - Fix transfers from servers that return 250 ok messages (jijamik) +- Google Cloud Storage + - Add ListP interface (dougal) + - Fix `--gcs-storage-class` to work with server side copy for objects (Riaz Arbi) +- HTTP + - Add basic metadata and provide it via serve (Oleg Kunitsyn) +- Jottacloud + - Add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service (albertony) + - Add support for MediaMarkt Cloud as a whitelabel service (albertony) + - Added support for traditional oauth authentication also for the main service (albertony) + - Abort attempts to run unsupported rclone authorize command (albertony) + - Improved token refresh handling (albertony) + - Fix legacy authentication (albertony) + - Fix authentication for whitelabel services from Elkjøp subsidiaries (albertony) +- Mega + - Implement 2FA login (iTrooz) +- Memory + - Add ListP interface (dougal) +- Onedrive + - Add ListP interface (Nick Craig-Wood) +- Oracle Object Storage + - Add ListP interface (dougal) +- Pcloud + - Add ListP interface (Nick Craig-Wood) +- Proton Drive + - Automated 2FA login with OTP secret key (Microscotch) +- S3 + - Make it easier to add new S3 providers (dougal) + - Add `--s3-use-data-integrity-protections` quirk to fix BadDigest error in Alibaba, Tencent (hunshcn) + - Add support for `--upload-header`, `If-Match` and `If-None-Match` (Sean Turner) + - Fix single file copying behavior with low permission (hunshcn) +- SFTP + - Fix zombie SSH processes with `--sftp-ssh` (Copilot) +- Smb + - Optimize smb mount performance by avoiding stat checks during initialization (Sudipto Baral) +- Swift + - Add ListP interface (dougal) + - If storage_policy isn't set, use the root containers policy (Andrew Ruthven) + - Report disk usage in segment containers (Andrew Ruthven) +- Ulozto + - Implement the About functionality (Lukas Krejci) + - Fix downloads returning HTML error page (aliaj1) +- WebDAV + - Optimize bearer token fetching with singleflight (hunshcn) + - Add ListP interface (Nick Craig-Wood) + - Use SpaceSepList to parse bearer token command (hunshcn) + - Add `Access-Control-Max-Age` header for CORS preflight caching (viocha) + - Fix out of memory with sharepoint-ntlm when uploading large file (Nick Craig-Wood) + +## 
v1.71.2 - 2025-10-20 + +[See commits](https://github.com/rclone/rclone/compare/v1.71.1...v1.71.2) + +- Bug Fixes + - build + - update Go to 1.25.3 + - Update Docker image Alpine version to fix CVE-2025-9230 + - bisync: Fix race when CaptureOutput is used concurrently (Nick Craig-Wood) + - doc fixes (albertony, dougal, iTrooz, Matt LaPaglia, Nick Craig-Wood) + - index: Add missing providers (dougal) + - serve http: Fix: logging URL on start (dougal) +- Azurefiles + - Fix server side copy not waiting for completion (Vikas Bhansali) +- B2 + - Fix 1TB+ uploads (dougal) +- Google Cloud Storage + - Add region us-east5 (Dulani Woods) +- Mega + - Fix 402 payment required errors (Nick Craig-Wood) +- Pikpak + - Fix unnecessary retries by using URL expire parameter (Youfu Zhang) + +## v1.71.1 - 2025-09-24 + +[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.71.1) + +- Bug Fixes + - bisync: Fix error handling for renamed conflicts (nielash) + - march: Fix deadlock when using --fast-list on syncs (Nick Craig-Wood) + - operations: Fix partial name collisions for non --inplace copies (Nick Craig-Wood) + - pacer: Fix deadlock with --max-connections (Nick Craig-Wood) + - doc fixes (albertony, anon-pradip, Claudius Ellsel, dougal, Jean-Christophe Cura, Nick Craig-Wood, nielash) +- Mount + - Do not log successful unmount as an error (Tilman Vogel) +- VFS + - Fix SIGHUP killing serve instead of flushing directory caches (dougal) +- Local + - Fix rmdir "Access is denied" on windows (nielash) +- Box + - Fix about after change in API return (Nick Craig-Wood) +- Combine + - Propagate SlowHash feature (skbeh) +- Drive + - Update making your own client ID instructions (Ed Craig-Wood) +- Internet Archive + - Fix server side copy files with spaces (Nick Craig-Wood) + ## v1.71.0 - 2025-08-22 [See commits](https://github.com/rclone/rclone/compare/v1.70.0...v1.71.0) @@ -66727,7 +73053,7 @@ If you need to configure a remote, see the [config help docs](https://rclone.org If you are using rclone entirely with [on the fly remotes](https://rclone.org/docs/#backend-path-to-dir), you can create an empty config file to get rid of this notice, for example: -```sh +```console rclone config touch ``` @@ -66742,7 +73068,7 @@ The syncs would be incremental (on a file by file basis). e.g. -```sh +```console rclone sync --interactive drive:Folder s3:bucket ``` @@ -66751,7 +73077,7 @@ rclone sync --interactive drive:Folder s3:bucket You can use rclone from multiple places at the same time if you choose different subdirectory for the output, e.g. -```sh +```console Server A> rclone sync --interactive /tmp/whatever remote:ServerA Server B> rclone sync --interactive /tmp/whatever remote:ServerB ``` @@ -66759,7 +73085,7 @@ Server B> rclone sync --interactive /tmp/whatever remote:ServerB If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, e.g. -```sh +```console Server A> rclone copy /tmp/whatever remote:Backup Server B> rclone copy /tmp/whatever remote:Backup ``` @@ -66813,7 +73139,7 @@ may use `http_proxy` but another one `HTTP_PROXY`. The `Go` libraries used by `rclone` will try both variations, but you may wish to set all possibilities. 
So, on Linux, you may end up with code similar to -```sh +```console export http_proxy=http://proxyserver:12345 export https_proxy=$http_proxy export HTTP_PROXY=$http_proxy @@ -66822,7 +73148,7 @@ export HTTPS_PROXY=$http_proxy Note: If the proxy server requires a username and password, then use -```sh +```console export http_proxy=http://username:password@proxyserver:12345 export https_proxy=$http_proxy export HTTP_PROXY=$http_proxy @@ -66835,7 +73161,7 @@ For instance "foo.com" also matches "bar.foo.com". e.g. -```sh +```console export no_proxy=localhost,127.0.0.0/8,my.host.name export NO_PROXY=$no_proxy ``` @@ -66864,7 +73190,7 @@ where `rclone` can't verify the server with the SSL root certificates. Rclone (via the Go runtime) tries to load the root certificates from these places on Linux. -```sh +```text "/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc. "/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL "/etc/ssl/ca-bundle.pem", // OpenSUSE @@ -66874,7 +73200,7 @@ these places on Linux. So doing something like this should fix the problem. It also sets the time which is important for SSL to work properly. -```sh +```console mkdir -p /etc/ssl/certs/ curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt ntpclient -s -h pool.ntp.org @@ -66887,7 +73213,7 @@ provide the SSL root certificates on Unix systems other than macOS. Note that you may need to add the `--insecure` option to the `curl` command line if it doesn't work without. -```sh +```console curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt ``` @@ -66896,7 +73222,7 @@ On macOS, you can install Homebrew, and specify the SSL root certificates with the [--ca-cert](https://rclone.org/docs/#ca-cert-stringarray) flag. -```sh +```console brew install ca-certificates find $(brew --prefix)/etc/ca-certificates -type f ``` @@ -66954,7 +73280,7 @@ the port on the host. A simple solution may be restarting the Host Network Service with eg. Powershell -```pwsh +```powershell Restart-Service hns ``` @@ -67048,9 +73374,9 @@ THE SOFTWARE. 
## Contributors -{{< rem `email addresses removed from here need to be added to + - Alex Couper - Leonid Shalupov @@ -68026,7 +74352,7 @@ put them back in again.` >}} - Ross Smith II - Vikas Bhansali <64532198+vibhansa-msft@users.noreply.github.com> - Sudipto Baral -- Sam Pegg +- Sam Pegg <70067376+S-Pegg1@users.noreply.github.com> - liubingrun - Albin Parou - n4n5 <56606507+Its-Just-Nans@users.noreply.github.com> @@ -68040,6 +74366,50 @@ put them back in again.` >}} - Lucas Bremgartner - Binbin Qian - cui <523516579@qq.com> +- Tilman Vogel +- skbeh <60107333+skbeh@users.noreply.github.com> +- Claudius Ellsel +- Motte <37443982+dmotte@users.noreply.github.com> +- dougal <147946567+roucc@users.noreply.github.com> +- anon-pradip +- Robin Rolf +- Jean-Christophe Cura +- russcoss +- Matt LaPaglia +- Youfu Zhang <1315097+zhangyoufu@users.noreply.github.com> +- juejinyuxitu +- iTrooz +- Microscotch +- Andrew Ruthven +- spiffytech +- Dulani Woods +- Marco Ferretti +- hunshcn +- vastonus +- Oleksandr Redko +- reddaisyy +- viocha +- Aneesh Agrawal +- divinity76 +- Andrew Gunnerson +- Lakshmi-Surekha +- dulanting +- Adam Dinwoodie +- Lukas Krejci +- Riaz Arbi +- Fawzib Rojas +- fries1234 +- Joseph Brownlee <39440458+JellyJoe198@users.noreply.github.com> +- Ted Robertson <10043369+tredondo@users.noreply.github.com> +- SublimePeace <184005903+SublimePeace@users.noreply.github.com> +- Copilot <198982749+Copilot@users.noreply.github.com> +- Alex <64072843+A1ex3@users.noreply.github.com> +- n4n5 +- aliaj1 +- Sean Turner <30396892+seanturner026@users.noreply.github.com> +- jijamik <30904953+jijamik@users.noreply.github.com> +- Dominik Sander +- Nikolay Kiryanov # Contact the rclone project @@ -68047,20 +74417,20 @@ put them back in again.` >}} Forum for questions and general discussion: -- https://forum.rclone.org +- ## Business support For business support or sponsorship enquiries please see: -- https://rclone.com/ -- sponsorship@rclone.com +- +- ## GitHub repository The project's repository is located at: -- https://github.com/rclone/rclone +- There you can file bug reports or contribute with pull requests. @@ -68075,7 +74445,7 @@ You can also follow Nick on twitter for rclone announcements: Or if all else fails or you want to ask something private or confidential -- info@rclone.com +- Please don't email requests for help to this address - those are better directed to the forum unless you'd like to sign up for business diff --git a/MANUAL.txt b/MANUAL.txt index a530a6440..efb8fd646 100644 --- a/MANUAL.txt +++ b/MANUAL.txt @@ -1,6 +1,6 @@ rclone(1) User Manual Nick Craig-Wood -Aug 22, 2025 +Nov 21, 2025 NAME @@ -14,6 +14,7 @@ SYNOPSIS Available commands: about Get quota information from the remote. + archive Perform an action on an archive. authorize Remote authorization. backend Run a backend-specific command. bisync Perform bidirectional synchronization between two paths. @@ -173,6 +174,7 @@ S3, that work out of the box.) - Citrix ShareFile - Cloudflare R2 - Cloudinary +- Cubbit DS3 - DigitalOcean Spaces - Digi Storage - Dreamhost @@ -181,6 +183,7 @@ S3, that work out of the box.) - Exaba - Fastmail Files - FileLu Cloud Storage +- FileLu S5 (S3-Compatible Object Storage) - Files.com - FlashBlade - FTP @@ -189,15 +192,18 @@ S3, that work out of the box.) 
- Google Drive - Google Photos - HDFS +- Hetzner Object Storage - Hetzner Storage Box - HiDrive - HTTP +- Huawei OBS - iCloud Drive - ImageKit - Internet Archive - Jottacloud - IBM COS S3 - IDrive e2 +- Intercolo Object Storage - IONOS Cloud - Koofr - Leviia Object Storage @@ -234,16 +240,21 @@ S3, that work out of the box.) - QingStor - Qiniu Cloud Object Storage (Kodo) - Quatrix by Maytech +- Rabata Cloud Storage +- RackCorp Object Storage - Rackspace Cloud Files +- Rclone Serve S3 - rsync.net - Scaleway - Seafile - Seagate Lyve Cloud - SeaweedFS - Selectel +- Servercore Object Storage - SFTP - Sia - SMB / CIFS +- Spectra Logic - StackPath - Storj - Synology @@ -263,6 +274,7 @@ Virtual providers These backends adapt or modify other storage providers: - Alias: Rename existing remotes +- Archive: Read archive files - Cache: Cache remotes (DEPRECATED) - Chunker: Split large files - Combine: Combine multiple remotes into a directory tree @@ -922,6 +934,7 @@ See the following for detailed instructions for - 1Fichier - Akamai Netstorage - Alias +- Archive - Amazon S3 - Backblaze B2 - Box @@ -993,7 +1006,7 @@ Its syntax is like this rclone subcommand [options] -A subcommand is a the rclone operation required, (e.g. sync, copy, ls). +A subcommand is an rclone operation required (e.g. sync, copy, ls). An option is a single letter flag (e.g. -v) or a group of single letter flags (e.g. -Pv) or a long flag (e.g. --progress). No options are @@ -1066,6 +1079,7 @@ See Also the redacted config for a single remote. - rclone config show - Print (decrypted) config file, or the config for a single remote. +- rclone config string - Print connection string for a single remote. - rclone config touch - Ensure configuration file exists. - rclone config update - Update options in an existing remote. - rclone config userinfo - Prints info about logged in user of remote. @@ -1131,8 +1145,7 @@ backend supports it. If metadata syncing is required then use the --metadata flag. Note that the modification time and metadata for the root directory will -not be synced. See https://github.com/rclone/rclone/issues/7652 for more -info. +not be synced. See issue #7652 for more info. Note: Use the -P/--progress flag to view real-time transfer statistics. @@ -1183,7 +1196,7 @@ scenarios are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). rclone copy source:path dest:path [flags] @@ -1206,7 +1219,7 @@ Options --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) Options shared with other commands are described next. See the global flags page for global options not listed here. @@ -1392,7 +1405,7 @@ scenarios are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). 
rclone sync source:path dest:path [flags] @@ -1415,7 +1428,7 @@ Options --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) Options shared with other commands are described next. See the global flags page for global options not listed here. @@ -1608,7 +1621,7 @@ scenarios are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). rclone move source:path dest:path [flags] @@ -1632,7 +1645,7 @@ Options --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) Options shared with other commands are described next. See the global flags page for global options not listed here. @@ -2035,7 +2048,7 @@ Synopsis Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. -Eg +E.g. $ rclone ls swift:bucket 60295 bevajer5jef @@ -2126,7 +2139,7 @@ recurse by default. Use the -R flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of -the directory, Eg +the directory, E.g. $ rclone lsd swift: 494000 2018-04-26 08:43:20 10000 10000files @@ -2223,7 +2236,7 @@ Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. -Eg +E.g. $ rclone lsl swift:bucket 60295 2016-06-25 18:55:41.062626927 bevajer5jef @@ -2799,11 +2812,11 @@ Applying a --full flag to the command prints the bytes in full, e.g. A --json flag generates conveniently machine-readable output, e.g. { - "total": 18253611008, - "used": 7993453766, - "trashed": 104857602, - "other": 8849156022, - "free": 1411001220 + "total": 18253611008, + "used": 7993453766, + "trashed": 104857602, + "other": 8849156022, + "free": 1411001220 } Not all backends print all fields. Information is not included if it is @@ -2826,6 +2839,236 @@ See Also - rclone - Show help for rclone commands, flags and backends. +rclone archive + +Perform an action on an archive. + +Synopsis + +Perform an action on an archive. Requires the use of a subcommand to +specify the protocol, e.g. + + rclone archive list remote:file.zip + +Each subcommand has its own options which you can see in their help. + +See rclone archive create for the archive formats supported. + + rclone archive [opts] [] [flags] + +Options + + -h, --help help for archive + +See the global flags page for global options not listed here. 
+ +See Also + +- rclone - Show help for rclone commands, flags and backends. +- rclone archive create - Archive source file(s) to destination. +- rclone archive extract - Extract archives from source to + destination. +- rclone archive list - List archive contents from source. + +rclone archive create + +Archive source file(s) to destination. + +Synopsis + +Creates an archive from the files in source:path and saves the archive +to dest:path. If dest:path is missing, it will write to the console. + +The valid formats for the --format flag are listed below. If --format is +not set rclone will guess it from the extension of dest:path. + + Format Extensions + --------- ----------------------------------- + zip .zip + tar .tar + tar.gz .tar.gz, .tgz, .taz + tar.bz2 .tar.bz2, .tb2, .tbz, .tbz2, .tz2 + tar.lz .tar.lz + tar.lz4 .tar.lz4 + tar.xz .tar.xz, .txz + tar.zst .tar.zst, .tzst + tar.br .tar.br + tar.sz .tar.sz + tar.mz .tar.mz + +The --prefix and --full-path flags control the prefix for the files in +the archive. + +If the flag --full-path is set then the files will have the full source +path as the prefix. + +If the flag --prefix= is set then the files will have as +prefix. It's possible to create invalid file names with --prefix= +so use with caution. Flag --prefix has priority over --full-path. + +Given a directory /sourcedir with the following: + + file1.txt + dir1/file2.txt + +Running the command rclone archive create /sourcedir /dest.tar.gz will +make an archive with the contents: + + file1.txt + dir1/ + dir1/file2.txt + +Running the command +rclone archive create --full-path /sourcedir /dest.tar.gz will make an +archive with the contents: + + sourcedir/file1.txt + sourcedir/dir1/ + sourcedir/dir1/file2.txt + +Running the command +rclone archive create --prefix=my_new_path /sourcedir /dest.tar.gz will +make an archive with the contents: + + my_new_path/file1.txt + my_new_path/dir1/ + my_new_path/dir1/file2.txt + + rclone archive create [flags] [] + +Options + + --format string Create the archive with format or guess from extension. + --full-path Set prefix for files in archive to source path + -h, --help help for create + --prefix string Set prefix for files in archive to entered value or source path + +See the global flags page for global options not listed here. + +See Also + +- rclone archive - Perform an action on an archive. + +rclone archive extract + +Extract archives from source to destination. + +Synopsis + +Extract the archive contents to a destination directory auto detecting +the format. See rclone archive create for the archive formats supported. + +For example on this archive: + + $ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt + +You can run extract like this + + $ rclone archive extract remote:archive.zip remote:extracted + +Which gives this result + + $ rclone tree remote:extracted + / + ├── dir + │ └── bye.txt + └── file.txt + +The source or destination or both can be local or remote. + +Filters can be used to only extract certain files: + + $ rclone archive extract archive.zip partial --include "bye.*" + $ rclone tree partial + / + └── dir + └── bye.txt + +The archive backend can also be used to extract files. It can be used to +read only mount archives also but it supports a different set of archive +formats to the archive commands. 
+ + rclone archive extract [flags] + +Options + + -h, --help help for extract + +See the global flags page for global options not listed here. + +See Also + +- rclone archive - Perform an action on an archive. + +rclone archive list + +List archive contents from source. + +Synopsis + +List the contents of an archive to the console, auto detecting the +format. See rclone archive create for the archive formats supported. + +For example: + + $ rclone archive list remote:archive.zip + 6 file.txt + 0 dir/ + 4 dir/bye.txt + +Or with --long flag for more info: + + $ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt + +Or with --plain flag which is useful for scripting: + + $ rclone archive list --plain /path/to/archive.zip + file.txt + dir/ + dir/bye.txt + +Or with --dirs-only: + + $ rclone archive list --plain --dirs-only /path/to/archive.zip + dir/ + +Or with --files-only: + + $ rclone archive list --plain --files-only /path/to/archive.zip + file.txt + dir/bye.txt + +Filters may also be used: + + $ rclone archive list --long archive.zip --include "bye.*" + 4 2025-10-30 09:46:57.000000000 dir/bye.txt + +The archive backend can also be used to list files. It can be used to +read only mount archives also but it supports a different set of archive +formats to the archive commands. + + rclone archive list [flags] + +Options + + --dirs-only Only list directories + --files-only Only list files + -h, --help help for list + --long List extra attributtes + --plain Only list file names + +See the global flags page for global options not listed here. + +See Also + +- rclone archive - Perform an action on an archive. + rclone authorize Remote authorization. @@ -2833,12 +3076,16 @@ Remote authorization. Synopsis Remote authorization. Used to authorize a remote or headless rclone from -a machine with a browser - use as instructed by rclone config. +a machine with a browser. Use as instructed by rclone config. See also +the remote setup documentation. -The command requires 1-3 arguments: - fs name (e.g., "drive", "s3", -etc.) - Either a base64 encoded JSON blob obtained from a previous -rclone config session - Or a client_id and client_secret pair obtained -from the remote service +The command requires 1-3 arguments: + +- Name of a backend (e.g. "drive", "s3") +- Either a base64 encoded JSON blob obtained from a previous rclone + config session +- Or a client_id and client_secret pair obtained from the remote + service Use --auth-no-open-browser to prevent rclone to open auth link in default browser automatically. @@ -2847,7 +3094,7 @@ Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used. - rclone authorize [base64_json_blob | client_id client_secret] [flags] + rclone authorize [base64_json_blob | client_id client_secret] [flags] Options @@ -2926,9 +3173,11 @@ Perform bidirectional synchronization between two paths. Bisync provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On -each successive run it will: - list files on Path1 and Path2, and check -for changes on each side. Changes include New, Newer, Older, and Deleted -files. - Propagate changes on Path1 to Path2, and vice-versa. +each successive run it will: + +- list files on Path1 and Path2, and check for changes on each side. 
+ Changes include New, Newer, Older, and Deleted files. +- Propagate changes on Path1 to Path2, and vice-versa. Bisync is considered an advanced command, so use with care. Make sure you have read and understood the entire manual (especially the @@ -3459,27 +3708,27 @@ it. This will look something like (some irrelevant detail removed): { - "State": "*oauth-islocal,teamdrive,,", - "Option": { - "Name": "config_is_local", - "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", - "Default": true, - "Examples": [ - { - "Value": "true", - "Help": "Yes" - }, - { - "Value": "false", - "Help": "No" - } - ], - "Required": false, - "IsPassword": false, - "Type": "bool", - "Exclusive": true, - }, - "Error": "", + "State": "*oauth-islocal,teamdrive,,", + "Option": { + "Name": "config_is_local", + "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", + "Default": true, + "Examples": [ + { + "Value": "true", + "Help": "Yes" + }, + { + "Value": "false", + "Help": "No" + } + ], + "Required": false, + "IsPassword": false, + "Type": "bool", + "Exclusive": true, + }, + "Error": "", } The format of Option is the same as returned by rclone config providers. @@ -3886,6 +4135,42 @@ See Also - rclone config - Enter an interactive configuration session. +rclone config string + +Print connection string for a single remote. + +Synopsis + +Print a connection string for a single remote. + +The connection strings can be used wherever a remote is needed and can +be more convenient than using the config file, especially if using the +RC API. + +Backend parameters may be provided to the command also. + +Example: + + $ rclone config string s3:rclone --s3-no-check-bucket + :s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone + +NB the strings are not quoted for use in shells (eg bash, powershell, +windows cmd). Most will work if enclosed in "double quotes", however +connection strings that contain double quotes will require further +quoting which is very shell dependent. + + rclone config string [flags] + +Options + + -h, --help help for string + +See the global flags page for global options not listed here. + +See Also + +- rclone config - Enter an interactive configuration session. + rclone config touch Ensure configuration file exists. @@ -3948,27 +4233,27 @@ it. This will look something like (some irrelevant detail removed): { - "State": "*oauth-islocal,teamdrive,,", - "Option": { - "Name": "config_is_local", - "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. 
If Y failed, try N.\n", - "Default": true, - "Examples": [ - { - "Value": "true", - "Help": "Yes" - }, - { - "Value": "false", - "Help": "No" - } - ], - "Required": false, - "IsPassword": false, - "Type": "bool", - "Exclusive": true, - }, - "Error": "", + "State": "*oauth-islocal,teamdrive,,", + "Option": { + "Name": "config_is_local", + "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", + "Default": true, + "Examples": [ + { + "Value": "true", + "Help": "Yes" + }, + { + "Value": "false", + "Help": "No" + } + ], + "Required": false, + "IsPassword": false, + "Type": "bool", + "Exclusive": true, + }, + "Error": "", } The format of Option is the same as returned by rclone config providers. @@ -4078,7 +4363,7 @@ alterations. --name-transform trimsuffix=XXXX Removes XXXX if it appears at the end of the file name. - --name-transform regex=/pattern/replacement/ Applies a regex-based + --name-transform regex=pattern/replacement Applies a regex-based transformation. --name-transform replace=old:new Replaces occurrences of old with @@ -4090,6 +4375,20 @@ alterations. --name-transform truncate=N Truncates the file name to a maximum of N characters. + --name-transform truncate_keep_extension=N Truncates the file name to a + maximum of N characters while + preserving the original file + extension. + + --name-transform truncate_bytes=N Truncates the file name to a + maximum of N bytes (not + characters). + + --name-transform truncate_bytes_keep_extension=N Truncates the file name to a + maximum of N bytes (not characters) + while preserving the original file + extension. + --name-transform base64encode Encodes the file name in Base64. --name-transform base64decode Decodes a Base64-encoded file name. @@ -4131,119 +4430,121 @@ alterations. Unicode normalization form. --name-transform command=/path/to/my/programfile names. Executes an external program to - transform + transform. 
--------------------------------------------------------------------------------------------- Conversion modes: - none - nfc - nfd - nfkc - nfkd - replace - prefix - suffix - suffix_keep_extension - trimprefix - trimsuffix - index - date - truncate - base64encode - base64decode - encoder - decoder - ISO-8859-1 - Windows-1252 - Macintosh - charmap - lowercase - uppercase - titlecase - ascii - url - regex - command + none + nfc + nfd + nfkc + nfkd + replace + prefix + suffix + suffix_keep_extension + trimprefix + trimsuffix + index + date + truncate + truncate_keep_extension + truncate_bytes + truncate_bytes_keep_extension + base64encode + base64decode + encoder + decoder + ISO-8859-1 + Windows-1252 + Macintosh + charmap + lowercase + uppercase + titlecase + ascii + url + regex + command Char maps: - - IBM-Code-Page-037 - IBM-Code-Page-437 - IBM-Code-Page-850 - IBM-Code-Page-852 - IBM-Code-Page-855 - Windows-Code-Page-858 - IBM-Code-Page-860 - IBM-Code-Page-862 - IBM-Code-Page-863 - IBM-Code-Page-865 - IBM-Code-Page-866 - IBM-Code-Page-1047 - IBM-Code-Page-1140 - ISO-8859-1 - ISO-8859-2 - ISO-8859-3 - ISO-8859-4 - ISO-8859-5 - ISO-8859-6 - ISO-8859-7 - ISO-8859-8 - ISO-8859-9 - ISO-8859-10 - ISO-8859-13 - ISO-8859-14 - ISO-8859-15 - ISO-8859-16 - KOI8-R - KOI8-U - Macintosh - Macintosh-Cyrillic - Windows-874 - Windows-1250 - Windows-1251 - Windows-1252 - Windows-1253 - Windows-1254 - Windows-1255 - Windows-1256 - Windows-1257 - Windows-1258 - X-User-Defined + IBM-Code-Page-037 + IBM-Code-Page-437 + IBM-Code-Page-850 + IBM-Code-Page-852 + IBM-Code-Page-855 + Windows-Code-Page-858 + IBM-Code-Page-860 + IBM-Code-Page-862 + IBM-Code-Page-863 + IBM-Code-Page-865 + IBM-Code-Page-866 + IBM-Code-Page-1047 + IBM-Code-Page-1140 + ISO-8859-1 + ISO-8859-2 + ISO-8859-3 + ISO-8859-4 + ISO-8859-5 + ISO-8859-6 + ISO-8859-7 + ISO-8859-8 + ISO-8859-9 + ISO-8859-10 + ISO-8859-13 + ISO-8859-14 + ISO-8859-15 + ISO-8859-16 + KOI8-R + KOI8-U + Macintosh + Macintosh-Cyrillic + Windows-874 + Windows-1250 + Windows-1251 + Windows-1252 + Windows-1253 + Windows-1254 + Windows-1255 + Windows-1256 + Windows-1257 + Windows-1258 + X-User-Defined Encoding masks: - Asterisk - BackQuote - BackSlash - Colon - CrLf - Ctl - Del - Dollar - Dot - DoubleQuote - Exclamation - Hash - InvalidUtf8 - LeftCrLfHtVt - LeftPeriod - LeftSpace - LeftTilde - LtGt - None - Percent - Pipe - Question - Raw - RightCrLfHtVt - RightPeriod - RightSpace - Semicolon - SingleQuote - Slash - SquareBracket + Asterisk + BackQuote + BackSlash + Colon + CrLf + Ctl + Del + Dollar + Dot + DoubleQuote + Exclamation + Hash + InvalidUtf8 + LeftCrLfHtVt + LeftPeriod + LeftSpace + LeftTilde + LtGt + None + Percent + Pipe + Question + Raw + RightCrLfHtVt + RightPeriod + RightSpace + Semicolon + SingleQuote + Slash + SquareBracket Examples: @@ -4287,14 +4588,21 @@ Examples: // Output: stories/The Quick Brown Fox!.txt rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" - // Output: stories/The Quick Brown Fox!-20250618 + // Output: stories/The Quick Brown Fox!-20251121 rclone convmv "stories/The Quick Brown Fox!" 
--name-transform "date=-{macfriendlytime}" - // Output: stories/The Quick Brown Fox!-2025-06-18 0148PM + // Output: stories/The Quick Brown Fox!-2025-11-21 0505PM rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab" // Output: ababababababab/ababab ababababab ababababab ababab!abababab +The regex command generally accepts Perl-style regular expressions, the +exact syntax is defined in the Go regular expression reference. The +replacement string may contain capturing group variables, referencing +capturing groups using the syntax $name or ${name}, where the name can +refer to a named capturing group or it can simply be the index as a +number. To insert a literal $, use $$. + Multiple transformations can be used in sequence, applied in the order they are specified on the command line. @@ -4359,20 +4667,25 @@ Race Conditions and Non-Deterministic Behavior Some transformations, such as replace=old:new, may introduce conflicts where multiple source files map to the same destination name. This can lead to race conditions when performing concurrent transfers. It is up -to the user to anticipate these. * If two files from the source are -transformed into the same name at the destination, the final state may -be non-deterministic. * Running rclone check after a sync using such -transformations may erroneously report missing or differing files due to -overwritten results. +to the user to anticipate these. -To minimize risks, users should: * Carefully review transformations that -may introduce conflicts. * Use --dry-run to inspect changes before -executing a sync (but keep in mind that it won't show the effect of -non-deterministic transformations). * Avoid transformations that cause -multiple distinct source files to map to the same destination name. * -Consider disabling concurrency with --transfers=1 if necessary. * -Certain transformations (e.g. prefix) will have a multiplying effect -every time they are used. Avoid these when using bisync. +- If two files from the source are transformed into the same name at + the destination, the final state may be non-deterministic. +- Running rclone check after a sync using such transformations may + erroneously report missing or differing files due to overwritten + results. + +To minimize risks, users should: + +- Carefully review transformations that may introduce conflicts. +- Use --dry-run to inspect changes before executing a sync (but keep + in mind that it won't show the effect of non-deterministic + transformations). +- Avoid transformations that cause multiple distinct source files to + map to the same destination name. +- Consider disabling concurrency with --transfers=1 if necessary. +- Certain transformations (e.g. prefix) will have a multiplying effect + every time they are used. Avoid these when using bisync. rclone convmv dest:path --name-transform XXX [flags] @@ -4489,7 +4802,7 @@ So rclone copyto src dst where src and dst are rclone paths, either remote:path or /path/to/local -or C:. +or C:\windows\path\if\on\windows. This will: @@ -4504,9 +4817,9 @@ by size and modification time or MD5SUM. It doesn't delete files from the destination. If you are looking to copy just a byte range of a file, please see -'rclone cat --offset X --count Y' +rclone cat --offset X --count Y. -Note: Use the -P/--progress flag to view real-time transfer statistics +Note: Use the -P/--progress flag to view real-time transfer statistics. 
Logger Flags @@ -4552,7 +4865,7 @@ scenarios are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). rclone copyto source:path dest:path [flags] @@ -4574,7 +4887,7 @@ Options --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) Options shared with other commands are described next. See the global flags page for global options not listed here. @@ -4688,6 +5001,18 @@ there is one with the same name. Setting --stdout or making the output file name - will cause the output to be written to standard output. +Setting --urls allows you to input a CSV file of URLs in format: URL, +FILENAME. If --urls is in use then replace the URL in the arguments with +the file containing the URLs, e.g.: + + rclone copyurl --urls myurls.csv remote:dir + +Missing filenames will be autogenerated equivalent to using +--auto-filename. Note that --stdout and --print-filename are +incompatible with --urls. This will do --transfers copies in parallel. +Note that if --auto-filename is desired for all URLs then a file with +only URLs and no filename can be used. + Troubleshooting If you can't get rclone copyurl to work then here are some things you @@ -4711,6 +5036,7 @@ Options --no-clobber Prevent overwriting file with same name -p, --print-filename Print the resulting name from --auto-filename --stdout Write the output to stdout rather than a file + --urls Use a CSV file of links to process multiple URLs Options shared with other commands are described next. See the global flags page for global options not listed here. @@ -4733,7 +5059,7 @@ Cryptcheck checks the integrity of an encrypted remote. Synopsis -Checks a remote against a crypted remote. This is the equivalent of +Checks a remote against an encrypted remote. This is the equivalent of running rclone check, but able to check the checksums of the encrypted remote. @@ -4860,7 +5186,6 @@ If you supply the --reverse flag, it will return encrypted file names. use it like this rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 - rclone cryptdecode --reverse encryptedremote: filename1 filename2 Another way to accomplish this is by using the rclone backend encode (or @@ -4950,10 +5275,13 @@ Installation on Linux particular name. This symlink helps git-annex tell rclone it wants to run the "gitannex" subcommand. - # Create the helper symlink in "$HOME/bin". + Create the helper symlink in "$HOME/bin": + ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin" - # Verify the new symlink is on your PATH. + Verify the new symlink is on your PATH: + + ```console which git-annex-remote-rclone-builtin 2. Add a new remote to your git-annex repo. This new remote will @@ -4962,10 +5290,12 @@ Installation on Linux Start by asking git-annex to describe the remote's available configuration parameters. 
- # If you skipped step 1: + If you skipped step 1: + git annex initremote MyRemote type=rclone --whatelse - # If you created a symlink in step 1: + If you created a symlink in step 1: + git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse NOTE: If you're porting an existing git-annex-remote-rclone remote @@ -5033,19 +5363,19 @@ Run without a hash to see the list of all supported hashes, e.g. $ rclone hashsum Supported hashes are: - * md5 - * sha1 - * whirlpool - * crc32 - * sha256 - * sha512 - * blake3 - * xxh3 - * xxh128 + - md5 + - sha1 + - whirlpool + - crc32 + - sha256 + - sha512 + - blake3 + - xxh3 + - xxh128 Then - $ rclone hashsum MD5 remote:path + rclone hashsum MD5 remote:path Note that hash names are case insensitive and values are output in lower case. @@ -5126,7 +5456,7 @@ will just ignore it. If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by default -be created with the least constraints – e.g. no expiry, no password +be created with the least constraints - e.g. no expiry, no password protection, accessible without account. rclone link remote:path [flags] @@ -5193,7 +5523,7 @@ standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. -Eg +E.g. $ rclone lsf swift:bucket bevajer5jef @@ -5219,7 +5549,7 @@ just the path, but you can use these parameters to control the output: So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last. -Eg +E.g. $ rclone lsf --format "tsp" swift:bucket 2016-06-25 18:55:41;60295;bevajer5jef @@ -5238,7 +5568,7 @@ For example, to emulate the md5sum command you can use rclone lsf -R --hash MD5 --format hp --separator " " --files-only . -Eg +E.g. $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket 7908e352297f0f530b84a756f188baa3 bevajer5jef @@ -5253,7 +5583,7 @@ By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy. -Eg +E.g. $ rclone lsf --separator "," --format "tshp" swift:bucket 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef @@ -5263,9 +5593,9 @@ Eg 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic You can output in CSV standard format. This will escape things in " if -they contain , +they contain, -Eg +E.g. $ rclone lsf --csv --files-only --format ps remote:path test.log,22355 @@ -5290,10 +5620,12 @@ specified with the --time-format flag. Examples: rclone lsf remote:path --format pt --time-format RFC3339 rclone lsf remote:path --format pt --time-format DateOnly rclone lsf remote:path --format pt --time-format max + rclone lsf remote:path --format pt --time-format unix + rclone lsf remote:path --format pt --time-format unixnano --time-format max will automatically truncate -'2006-01-02 15:04:05.000000000' to the maximum precision supported by -the remote. +2006-01-02 15:04:05.000000000 to the maximum precision supported by the +remote. Any of the filtering options can be applied to this command. 
@@ -5332,7 +5664,7 @@ Options -h, --help help for lsf -R, --recursive Recurse into the listing -s, --separator string Separator for the items in the format (default ";") - -t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --time-format string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) Options shared with other commands are described next. See the global flags page for global options not listed here. @@ -5388,9 +5720,9 @@ The output is an array of Items, where each Item looks like this: { "Hashes" : { - "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", - "MD5" : "b1946ac92492d2347c6235b4d2611184", - "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" + "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", + "MD5" : "b1946ac92492d2347c6235b4d2611184", + "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", @@ -5814,7 +6146,7 @@ not suffer from the same limitations. Mounting on macOS Mounting on macOS can be done either via built-in NFS server, macFUSE -(also known as osxfuse) or FUSE-T. macFUSE is a traditional FUSE driver +(also known as osxfuse) or FUSE-T.macFUSE is a traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server. @@ -5870,6 +6202,17 @@ Read Only mounts When mounting with --read-only, attempts to write to files will fail silently as opposed to with a clear warning as in macFUSE. +Mounting on Linux + +On newer versions of Ubuntu, you may encounter the following error when +running rclone mount: + + NOTICE: mount helper error: fusermount3: mount failed: Permission + denied CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: + exit status 1 This may be due to newer Apparmor restrictions, which + can be disabled with sudo aa-disable /usr/bin/fusermount3 (you may + need to sudo apt install apparmor-utils beforehand). + Limitations Without the use of --vfs-cache-mode this can only write files @@ -6043,8 +6386,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -6095,13 +6438,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -6246,9 +6589,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -6303,33 +6646,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). 
- --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -6431,7 +6774,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -6442,7 +6785,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -6659,7 +7002,7 @@ scenarios are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). rclone moveto source:path dest:path [flags] @@ -6681,7 +7024,7 @@ Options --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) Options shared with other commands are described next. See the global flags page for global options not listed here. @@ -7152,7 +7495,7 @@ not suffer from the same limitations. Mounting on macOS Mounting on macOS can be done either via built-in NFS server, macFUSE -(also known as osxfuse) or FUSE-T. macFUSE is a traditional FUSE driver +(also known as osxfuse) or FUSE-T.macFUSE is a traditional FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which "mounts" via an NFSv4 local server. @@ -7208,6 +7551,17 @@ Read Only mounts When mounting with --read-only, attempts to write to files will fail silently as opposed to with a clear warning as in macFUSE. 
+Mounting on Linux + +On newer versions of Ubuntu, you may encounter the following error when +running rclone mount: + + NOTICE: mount helper error: fusermount3: mount failed: Permission + denied CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: + exit status 1 This may be due to newer Apparmor restrictions, which + can be disabled with sudo aa-disable /usr/bin/fusermount3 (you may + need to sudo apt install apparmor-utils beforehand). + Limitations Without the use of --vfs-cache-mode this can only write files @@ -7381,8 +7735,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -7433,13 +7787,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -7584,9 +7938,9 @@ cost of an increased number of requests. 
These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -7641,33 +7995,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -7769,7 +8123,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -7780,7 +8134,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. 
Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -7969,8 +8323,8 @@ Synopsis This runs a command against a running rclone. Use the --url flag to specify an non default URL to connect on. This can be either a ":port" -which is taken to mean "http://localhost:port" or a "host:port" which is -taken to mean "http://host:port" +which is taken to mean http://localhost:port or a "host:port" which is +taken to mean http://host:port. A username and password can be passed in with --user and --pass. @@ -8150,6 +8504,8 @@ and trailing "/" on --rc-baseurl, so --rc-baseurl "rclone", --rc-baseurl "/rclone" and --rc-baseurl "/rclone/" are all treated identically. +--rc-disable-zip may be set to disable the zipping download option. + TLS (SSL) By default this will serve over http. If you want you can serve over @@ -8174,64 +8530,76 @@ arguments passed by --rc-addr). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html. Socket activation can be tested ad-hoc with the systemd-socket-activatecommand - systemd-socket-activate -l 8000 -- rclone serve + systemd-socket-activate -l 8000 -- rclone serve This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Template +over TCP. + +Template --rc-template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: ----------------------------------------------------------------------- - Parameter Description - ----------------------------------- ----------------------------------- - .Name The full path of a file/directory. + Parameter Subparameter Description + ---------------------- ------------------------- ---------------------- + .Name The full path of a + file/directory. - .Title Directory listing of .Name + .Title Directory listing of + '.Name'. - .Sort The current sort used. This is - changeable via ?sort= parameter + .Sort The current sort used. + This is changeable via + '?sort=' parameter. + Possible values: + namedirfirst, name, + size, time (default + namedirfirst). - Sort Options: - namedirfirst,name,size,time - (default namedirfirst) + .Order The current ordering + used. This is + changeable via + '?order=' parameter. + Possible values: asc, + desc (default asc). - .Order The current ordering used. This is - changeable via ?order= parameter + .Query Currently unused. - Order Options: asc,desc (default - asc) + .Breadcrumb Allows for creating a + relative navigation. - .Query Currently unused. + .Link The link of the Text + relative to the root. - .Breadcrumb Allows for creating a relative - navigation + .Text The Name of the + directory. - -- .Link The relative to the root link of - the Text. + .Entries Information about a + specific + file/directory. - -- .Text The Name of the directory. + .URL The url of an entry. - .Entries Information about a specific - file/directory. + .Leaf Currently same as + '.URL' but intended to + be just the name. - -- .URL The 'url' of an entry. + .IsDir Boolean for if an + entry is a directory + or not. 
- -- .Leaf Currently same as 'URL' but - intended to be 'just' the name. + .Size Size in bytes of the + entry. - -- .IsDir Boolean for if an entry is a - directory or not. - - -- .Size Size in Bytes of the entry. - - -- .ModTime The UTC timestamp of an entry. + .ModTime The UTC timestamp of + an entry. ----------------------------------------------------------------------- The server also makes the following functions available so that they can @@ -8264,7 +8632,7 @@ a single username and password with the --rc-user and --rc-pass flags. Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with ---user-from-header (e.g., --rc---user-from-header=x-remote-user). Ensure +--user-from-header (e.g., --rc-user-from-header=x-remote-user). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. @@ -8434,8 +8802,8 @@ command will rename the old executable to 'rclone.old.exe' upon success. Please note that this command was not available before rclone version 1.55. If it fails for you with the message unknown command "selfupdate" -then you will need to update manually following the install instructions -located at https://rclone.org/install/ +then you will need to update manually following the install +documentation. rclone selfupdate [flags] @@ -8466,6 +8834,11 @@ to specify the protocol, e.g. rclone serve http remote: +When the "--metadata" flag is enabled, the following metadata fields +will be provided as headers: "content-disposition", "cache-control", +"content-language" and "content-encoding". Note: The availability of +these fields depends on whether the remote supports metadata. + Each subcommand has its own options which you can see in their help. rclone serve [opts] [flags] @@ -8543,8 +8916,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -8595,13 +8968,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -8746,9 +9119,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -8803,33 +9176,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. 
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -8931,7 +9304,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -8942,7 +9315,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -9127,8 +9500,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -9179,13 +9552,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. 
The files are stored in the user cache file area which is OS dependent but @@ -9330,9 +9703,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -9387,33 +9760,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -9515,7 +9888,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. 
- --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -9526,7 +9899,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -9715,8 +10088,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -9767,13 +10140,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -9918,9 +10291,9 @@ cost of an increased number of requests. 
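As a concrete sketch before the individual flags are described (the values here are illustrative only, not tuned recommendations, and the same VFS flags apply to both rclone mount and the serve commands):

    rclone mount remote: /mnt/remote \
        --vfs-read-chunk-size 64M \
        --vfs-read-chunk-streams 4

How the chunk size, the size limit and the number of streams interact is described below.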
These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -9975,33 +10348,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -10103,7 +10476,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -10114,7 +10487,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. 
Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -10174,37 +10547,39 @@ have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. -This config generated must have this extra parameter - _root - root to -use for the backend +This config generated must have this extra parameter -And it may have this parameter - _obscure - comma separated strings for -parameters to obscure +- _root - root to use for the backend + +And it may have this parameter + +- _obscure - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } And as an example return this on STDOUT { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } This would mean that an SFTP backend would be created on the fly for the @@ -10357,6 +10732,8 @@ and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. +--disable-zip may be set to disable the zipping download option. + TLS (SSL) By default this will serve over http. If you want you can serve over @@ -10381,64 +10758,76 @@ arguments passed by --addr). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html. Socket activation can be tested ad-hoc with the systemd-socket-activatecommand - systemd-socket-activate -l 8000 -- rclone serve + systemd-socket-activate -l 8000 -- rclone serve This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Template +over TCP. + +Template --template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: ----------------------------------------------------------------------- - Parameter Description - ----------------------------------- ----------------------------------- - .Name The full path of a file/directory. + Parameter Subparameter Description + ---------------------- ------------------------- ---------------------- + .Name The full path of a + file/directory. - .Title Directory listing of .Name + .Title Directory listing of + '.Name'. - .Sort The current sort used. This is - changeable via ?sort= parameter + .Sort The current sort used. + This is changeable via + '?sort=' parameter. 
+ Possible values: + namedirfirst, name, + size, time (default + namedirfirst). - Sort Options: - namedirfirst,name,size,time - (default namedirfirst) + .Order The current ordering + used. This is + changeable via + '?order=' parameter. + Possible values: asc, + desc (default asc). - .Order The current ordering used. This is - changeable via ?order= parameter + .Query Currently unused. - Order Options: asc,desc (default - asc) + .Breadcrumb Allows for creating a + relative navigation. - .Query Currently unused. + .Link The link of the Text + relative to the root. - .Breadcrumb Allows for creating a relative - navigation + .Text The Name of the + directory. - -- .Link The relative to the root link of - the Text. + .Entries Information about a + specific + file/directory. - -- .Text The Name of the directory. + .URL The url of an entry. - .Entries Information about a specific - file/directory. + .Leaf Currently same as + '.URL' but intended to + be just the name. - -- .URL The 'url' of an entry. + .IsDir Boolean for if an + entry is a directory + or not. - -- .Leaf Currently same as 'URL' but - intended to be 'just' the name. + .Size Size in bytes of the + entry. - -- .IsDir Boolean for if an entry is a - directory or not. - - -- .Size Size in Bytes of the entry. - - -- .ModTime The UTC timestamp of an entry. + .ModTime The UTC timestamp of + an entry. ----------------------------------------------------------------------- The server also makes the following functions available so that they can @@ -10471,9 +10860,9 @@ a single username and password with the --user and --pass flags. Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with ---user-from-header (e.g., ----user-from-header=x-remote-user). Ensure -the proxy is trusted and headers cannot be spoofed, as misconfiguration -may lead to unauthorized access. +--user-from-header (e.g., --user-from-header=x-remote-user). Ensure the +proxy is trusted and headers cannot be spoofed, as misconfiguration may +lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the --client-ca flag passed to the @@ -10517,8 +10906,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -10569,13 +10958,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. 
- --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -10720,9 +11109,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -10777,33 +11166,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). 
- --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. + --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -10905,7 +11294,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -10916,7 +11305,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -10976,37 +11365,39 @@ have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. 
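As a minimal sketch of such a proxy program (it assumes the JSON request/response exchange described just below; the fixed sftp answer and the script path are purely illustrative, and a real proxy must verify the credentials it receives):

    #!/bin/sh
    # Consume the JSON login request on stdin - a real proxy would
    # parse it and check the supplied user/pass or public key first.
    cat >/dev/null
    # Emit a complete backend config, including the required _root
    # key, on stdout.
    echo '{"type": "sftp", "_root": "", "_obscure": "pass", "user": "me", "pass": "mypassword", "host": "sftp.example.com"}'

A program like this would be supplied to the serve command via its --auth-proxy flag, e.g. --auth-proxy /path/to/proxy.sh.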
-This config generated must have this extra parameter - _root - root to -use for the backend +This config generated must have this extra parameter -And it may have this parameter - _obscure - comma separated strings for -parameters to obscure +- _root - root to use for the backend + +And it may have this parameter + +- _obscure - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } And as an example return this on STDOUT { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } This would mean that an SFTP backend would be created on the fly for the @@ -11041,6 +11432,7 @@ Options --client-ca string Client certificate authority to verify clients with --dir-cache-time Duration Time to cache directory entries for (default 5m0s) --dir-perms FileMode Directory permissions (default 777) + --disable-zip Disable zip download of directories --file-perms FileMode File permissions (default 666) --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for http @@ -11218,8 +11610,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -11270,13 +11662,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -11421,9 +11813,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -11478,33 +11870,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. 
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -11606,7 +11998,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -11617,7 +12009,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -11853,6 +12245,8 @@ and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. +--disable-zip may be set to disable the zipping download option. + TLS (SSL) By default this will serve over http. If you want you can serve over @@ -11877,15 +12271,17 @@ arguments passed by --addr). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html. Socket activation can be tested ad-hoc with the systemd-socket-activatecommand - systemd-socket-activate -l 8000 -- rclone serve + systemd-socket-activate -l 8000 -- rclone serve This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Authentication +over TCP. + +Authentication By default this will serve files without needing a login. @@ -11894,9 +12290,9 @@ a single username and password with the --user and --pass flags. Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with ---user-from-header (e.g., ----user-from-header=x-remote-user). Ensure -the proxy is trusted and headers cannot be spoofed, as misconfiguration -may lead to unauthorized access. +--user-from-header (e.g., --user-from-header=x-remote-user). Ensure the +proxy is trusted and headers cannot be spoofed, as misconfiguration may +lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the --client-ca flag passed to the @@ -12088,9 +12484,9 @@ a single username and password with the --user and --pass flags. Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with ---user-from-header (e.g., ----user-from-header=x-remote-user). Ensure -the proxy is trusted and headers cannot be spoofed, as misconfiguration -may lead to unauthorized access. +--user-from-header (e.g., --user-from-header=x-remote-user). Ensure the +proxy is trusted and headers cannot be spoofed, as misconfiguration may +lead to unauthorized access. 
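For example, to require a single login (a sketch: the username and password are placeholders, and rclone serve http stands in here for whichever serve protocol is being used, as the flags are the same):

    rclone serve http remote: --user alice --pass secret123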
If either of the above authentication methods is not configured and client certificates are required by the --client-ca flag passed to the @@ -12145,6 +12541,8 @@ and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. +--disable-zip may be set to disable the zipping download option. + TLS (SSL) By default this will serve over http. If you want you can serve over @@ -12169,15 +12567,17 @@ arguments passed by --addr). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html. Socket activation can be tested ad-hoc with the systemd-socket-activatecommand - systemd-socket-activate -l 8000 -- rclone serve + systemd-socket-activate -l 8000 -- rclone serve This will socket-activate rclone on the first connection to port 8000 -over TCP. ## VFS - Virtual File System +over TCP. + +VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something which looks much more like a disk filing @@ -12198,8 +12598,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -12250,13 +12650,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -12401,9 +12801,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -12458,33 +12858,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. 
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -12586,7 +12986,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -12597,7 +12997,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -12774,7 +13174,7 @@ reachable externally then supply --addr :2022 for example. This also supports being run with socket activation, in which case it will listen on the first passed FD. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html. Socket activation can be tested ad-hoc with the systemd-socket-activatecommand: @@ -12825,8 +13225,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -12877,13 +13277,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -13028,9 +13428,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -13085,33 +13485,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. 
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -13213,7 +13613,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -13224,7 +13624,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -13284,37 +13684,39 @@ have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. -This config generated must have this extra parameter - _root - root to -use for the backend +This config generated must have this extra parameter -And it may have this parameter - _obscure - comma separated strings for -parameters to obscure +- _root - root to use for the backend + +And it may have this parameter + +- _obscure - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } And as an example return this on STDOUT { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } This would mean that an SFTP backend would be created on the fly for the @@ -13448,20 +13850,26 @@ dialog. Windows requires SSL / HTTPS connection to be used with Basic. If you try to connect via Add Network Location Wizard you will get the following error: "The folder you entered does not appear to be valid. Please choose another". However, you still can connect if you set the -following registry key on a client machine: HKEY_LOCAL_MACHINEto 2. The -BasicAuthLevel can be set to the following values: 0 - Basic -authentication disabled 1 - Basic authentication enabled for SSL -connections only 2 - Basic authentication enabled for SSL connections -and for non-SSL connections If required, increase the -FileSizeLimitInBytes to a higher value. Navigate to the Services -interface, then restart the WebClient service. +following registry key on a client machine: +HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel +to 2. 
The BasicAuthLevel can be set to the following values: + + 0 - Basic authentication disabled + 1 - Basic authentication enabled for SSL connections only + 2 - Basic authentication enabled for SSL connections and for non-SSL connections + +If required, increase the FileSizeLimitInBytes to a higher value. +Navigate to the Services interface, then restart the WebClient service. Access Office applications on WebDAV -Navigate to following registry HKEY_CURRENT_USER[14.0/15.0/16.0] Create -a new DWORD BasicAuthLevel with value 2. 0 - Basic authentication -disabled 1 - Basic authentication enabled for SSL connections only 2 - -Basic authentication enabled for SSL and for non-SSL connections +Navigate to following registry +HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet +Create a new DWORD BasicAuthLevel with value 2. + + 0 - Basic authentication disabled + 1 - Basic authentication enabled for SSL connections only + 2 - Basic authentication enabled for SSL and for non-SSL connections https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint @@ -13510,6 +13918,8 @@ and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically. +--disable-zip may be set to disable the zipping download option. + TLS (SSL) By default this will serve over http. If you want you can serve over @@ -13534,64 +13944,76 @@ arguments passed by --addr). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html. Socket activation can be tested ad-hoc with the systemd-socket-activatecommand - systemd-socket-activate -l 8000 -- rclone serve + systemd-socket-activate -l 8000 -- rclone serve This will socket-activate rclone on the first connection to port 8000 -over TCP. ### Template +over TCP. + +Template --template allows a user to specify a custom markup template for HTTP and WebDAV serve functions. The server exports the following markup to be used within the template to server pages: ----------------------------------------------------------------------- - Parameter Description - ----------------------------------- ----------------------------------- - .Name The full path of a file/directory. + Parameter Subparameter Description + ---------------------- ------------------------- ---------------------- + .Name The full path of a + file/directory. - .Title Directory listing of .Name + .Title Directory listing of + '.Name'. - .Sort The current sort used. This is - changeable via ?sort= parameter + .Sort The current sort used. + This is changeable via + '?sort=' parameter. + Possible values: + namedirfirst, name, + size, time (default + namedirfirst). - Sort Options: - namedirfirst,name,size,time - (default namedirfirst) + .Order The current ordering + used. This is + changeable via + '?order=' parameter. + Possible values: asc, + desc (default asc). - .Order The current ordering used. This is - changeable via ?order= parameter + .Query Currently unused. - Order Options: asc,desc (default - asc) + .Breadcrumb Allows for creating a + relative navigation. - .Query Currently unused. + .Link The link of the Text + relative to the root. - .Breadcrumb Allows for creating a relative - navigation + .Text The Name of the + directory. 
- -- .Link The relative to the root link of - the Text. + .Entries Information about a + specific + file/directory. - -- .Text The Name of the directory. + .URL The url of an entry. - .Entries Information about a specific - file/directory. + .Leaf Currently same as + '.URL' but intended to + be just the name. - -- .URL The 'url' of an entry. + .IsDir Boolean for if an + entry is a directory + or not. - -- .Leaf Currently same as 'URL' but - intended to be 'just' the name. + .Size Size in bytes of the + entry. - -- .IsDir Boolean for if an entry is a - directory or not. - - -- .Size Size in Bytes of the entry. - - -- .ModTime The UTC timestamp of an entry. + .ModTime The UTC timestamp of + an entry. ----------------------------------------------------------------------- The server also makes the following functions available so that they can @@ -13624,9 +14046,9 @@ a single username and password with the --user and --pass flags. Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with ---user-from-header (e.g., ----user-from-header=x-remote-user). Ensure -the proxy is trusted and headers cannot be spoofed, as misconfiguration -may lead to unauthorized access. +--user-from-header (e.g., --user-from-header=x-remote-user). Ensure the +proxy is trusted and headers cannot be spoofed, as misconfiguration may +lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the --client-ca flag passed to the @@ -13670,8 +14092,8 @@ should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. - --dir-cache-time duration Time to cache directory entries for (default 5m0s) - --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once the directory @@ -13722,13 +14144,13 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. - --cache-dir string Directory rclone will use for caching. - --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) - --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) - --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) - --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) - --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) - --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -13873,9 +14295,9 @@ cost of an increased number of requests. These flags control the chunking: - --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) - --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) - --vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once The chunking behaves differently depending on the --vfs-read-chunk-streams parameter. @@ -13930,33 +14352,33 @@ In particular S3 and Swift benefit hugely from the --no-modtime flag (or use --use-server-modtime for a slightly different effect) as each read of the modification time takes a transaction. - --no-checksum Don't compare checksums on up/download. - --no-modtime Don't read/write the modification time (can speed things up). - --no-seek Don't allow seeking in files. - --read-only Only allow read-only access. + --no-checksum Don't compare checksums on up/download. + --no-modtime Don't read/write the modification time (can speed things up). + --no-seek Don't allow seeking in files. + --read-only Only allow read-only access. Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. - --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) - --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) When using VFS write caching (--vfs-cache-mode with value writes or full), the global flag --transfers can be set to adjust the number of parallel uploads of modified files from the cache (the related global flag --checkers has no effect on the VFS). - --transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: - --links Translate symlinks to/from regular files with a '.rclonelink' extension. - --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS + --links Translate symlinks to/from regular files with a '.rclonelink' extension. 
+ --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a file @@ -14058,7 +14480,7 @@ This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. - --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) Alternate report of used bytes @@ -14069,7 +14491,7 @@ flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to rclone size and compute the total used space itself. -WARNING. Contrary to rclone size, this flag ignores filters so that the +WARNING: Contrary to rclone size, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -14129,37 +14551,39 @@ have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. -This config generated must have this extra parameter - _root - root to -use for the backend +This config generated must have this extra parameter -And it may have this parameter - _obscure - comma separated strings for -parameters to obscure +- _root - root to use for the backend + +And it may have this parameter + +- _obscure - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } And as an example return this on STDOUT { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } This would mean that an SFTP backend would be created on the fly for the @@ -14352,6 +14776,7 @@ See Also - rclone test makefiles - Make a random file hierarchy in a directory - rclone test memory - Load all the objects at remote:path into memory and report memory stats. +- rclone test speed - Run a speed test to the remote rclone test changenotify @@ -14405,7 +14830,7 @@ paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one. -NB this can create undeletable files and other hazards - use with care +NB this can create undeletable files and other hazards - use with care! rclone test info [remote:path]+ [flags] @@ -14496,6 +14921,55 @@ See Also - rclone test - Run a test command +rclone test speed + +Run a speed test to the remote + +Synopsis + +Run a speed test to the remote. 
+ +This command runs a series of uploads and downloads to the remote, +measuring and printing the speed of each test using varying file sizes +and numbers of files. + +Test time can be innaccurate with small file caps and large files. As it +uses the results of an initial test to determine how many files to use +in each subsequent test. + +It is recommended to use -q flag for a simpler output. e.g.: + + rclone test speed remote: -q + +NB This command will create and delete files on the remote in a randomly +named directory which will be automatically removed on a clean exit. + +You can use the --json flag to only print the results in JSON format. + + rclone test speed [flags] + +Options + + --ascii Fill files with random ASCII printable bytes only + --chargen Fill files with a ASCII chargen pattern + --file-cap int Maximum number of files to use in each test (default 100) + -h, --help help for speed + --json Output only results in JSON format + --large SizeSuffix Size of large files (default 1Gi) + --medium SizeSuffix Size of medium files (default 10Mi) + --pattern Fill files with a periodic pattern + --seed int Seed for the random number generator (0 for random) (default 1) + --small SizeSuffix Size of small files (default 1Ki) + --sparse Make the files sparse (appear to be filled with ASCII 0x00) + --test-time Duration Length for each test to run (default 15s) + --zero Fill files with ASCII 0x00 + +See the global flags page for global options not listed here. + +See Also + +- rclone test - Run a test command + rclone touch Create new file or change file modification time. @@ -14847,6 +15321,9 @@ Windows.) rclone copy ':http,url="https://example.com":path/to/dir' /tmp/dir +You can use rclone config string to convert a remote into a connection +string. + Connection strings, config and logging If you supply extra configuration to a backend by command line flag, @@ -15334,7 +15811,9 @@ would have been updated or deleted will be stored in remote:old. If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir to store the old files, or you -might want to pass --suffix with today's date. +might want to pass --suffix with today's date. This can be done with +--suffix $(date +%F) in bash, and +--suffix $(Get-Date -Format 'yyyy-MM-dd') in PowerShell. See --compare-dest and --copy-dest. @@ -16501,25 +16980,25 @@ some context for the Metadata which may be important. backend docs. 
{ - "SrcFs": "gdrive:", - "SrcFsType": "drive", - "DstFs": "newdrive:user", - "DstFsType": "onedrive", - "Remote": "test.txt", - "Size": 6, - "MimeType": "text/plain; charset=utf-8", - "ModTime": "2022-10-11T17:53:10.286745272+01:00", - "IsDir": false, - "ID": "xyz", - "Metadata": { - "btime": "2022-10-11T16:53:11Z", - "content-type": "text/plain; charset=utf-8", - "mtime": "2022-10-11T17:53:10.286745272+01:00", - "owner": "user1@domain1.com", - "permissions": "...", - "description": "my nice file", - "starred": "false" - } + "SrcFs": "gdrive:", + "SrcFsType": "drive", + "DstFs": "newdrive:user", + "DstFsType": "onedrive", + "Remote": "test.txt", + "Size": 6, + "MimeType": "text/plain; charset=utf-8", + "ModTime": "2022-10-11T17:53:10.286745272+01:00", + "IsDir": false, + "ID": "xyz", + "Metadata": { + "btime": "2022-10-11T16:53:11Z", + "content-type": "text/plain; charset=utf-8", + "mtime": "2022-10-11T17:53:10.286745272+01:00", + "owner": "user1@domain1.com", + "permissions": "...", + "description": "my nice file", + "starred": "false" + } } The program should then modify the input as desired and send it to @@ -16529,15 +17008,15 @@ example we translate user names and permissions and add something to the description: { - "Metadata": { - "btime": "2022-10-11T16:53:11Z", - "content-type": "text/plain; charset=utf-8", - "mtime": "2022-10-11T17:53:10.286745272+01:00", - "owner": "user1@domain2.com", - "permissions": "...", - "description": "my nice file [migrated from domain1]", - "starred": "false" - } + "Metadata": { + "btime": "2022-10-11T16:53:11Z", + "content-type": "text/plain; charset=utf-8", + "mtime": "2022-10-11T17:53:10.286745272+01:00", + "owner": "user1@domain2.com", + "permissions": "...", + "description": "my nice file [migrated from domain1]", + "starred": "false" + } } Metadata can be removed here too. @@ -17904,22 +18383,23 @@ The options set by environment variables can be seen with the -vv and Configuring rclone on a remote / headless machine -Some of the configurations (those involving oauth2) require an Internet -connected web browser. +Some of the configurations (those involving oauth2) require an +internet-connected web browser. -If you are trying to set rclone up on a remote or headless box with no -browser available on it (e.g. a NAS or a server in a datacenter) then -you will need to use an alternative means of configuration. There are -two ways of doing it, described below. +If you are trying to set rclone up on a remote or headless machine with +no browser available on it (e.g. a NAS or a server in a datacenter), +then you will need to use an alternative means of configuration. There +are three ways of doing it, described below. Configuring using rclone authorize -On the headless box run rclone config but answer N to the -Use auto config? question. +On the headless machine run rclone config, but answer N to the question +Use web browser to automatically authenticate rclone with remote?. - Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine + Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access + If not sure try Y. If Y failed, try N. y) Yes (default) n) No @@ -17931,29 +18411,31 @@ Use auto config? question. 
For more help and alternate methods see: https://rclone.org/remote_setup/ Execute the following on the machine with the web browser (same rclone version recommended): - rclone authorize "onedrive" + rclone authorize "onedrive" Then paste the result. Enter a value. config_token> -Then on your main desktop machine +Then on your main desktop machine, run rclone authorize. rclone authorize "onedrive" - If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth - Log in and authorize rclone for access - Waiting for code... + NOTICE: Make sure your Redirect URL is set to "http://localhost:53682/" in your custom config. + NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx + NOTICE: Log in and authorize rclone for access + NOTICE: Waiting for code... + Got code Paste the following into your remote machine ---> SECRET_TOKEN <---End paste -Then back to the headless box, paste in the code +Then back to the headless machine, paste in the code. config_token> SECRET_TOKEN -------------------- [acd12] - client_id = - client_secret = + client_id = + client_secret = token = SECRET_TOKEN -------------------- y) Yes this is OK @@ -17963,46 +18445,54 @@ Then back to the headless box, paste in the code Configuring by copying the config file -Rclone stores all of its config in a single configuration file. This can -easily be copied to configure a remote rclone. +Rclone stores all of its configuration in a single file. This can easily +be copied to configure a remote rclone (although some backends does not +support reusing the same configuration, consult your backend +documentation to be sure). -So first configure rclone on your desktop machine with +Start by running rclone config to create the configuration file on your +desktop machine. rclone config -to set up the config file. - -Find the config file by running rclone config file, for example +Then locate the file by running rclone config file. $ rclone config file Configuration file is stored at: /home/user/.rclone.conf -Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and -place it in the correct place (use rclone config file on the remote box -to find out where). +Finally, transfer the file to the remote machine (scp, cut paste, ftp, +sftp, etc.) and place it in the correct location (use rclone config file +on the remote machine to find out where). Configuring using SSH Tunnel -Linux and MacOS users can utilize SSH Tunnel to redirect the headless -box port 53682 to local machine by using the following command: +If you have an SSH client installed on your local machine, you can set +up an SSH tunnel to redirect the port 53682 into the headless machine by +using the following command: ssh -L localhost:53682:localhost:53682 username@remote_server -Then on the headless box run rclone config and answer Y to the -Use auto config? question. +Then on the headless machine run rclone config and answer Y to the +question +Use web browser to automatically authenticate rclone with remote?. - Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine + Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access + If not sure try Y. If Y failed, try N. 
y) Yes (default) n) No y/n> y + NOTICE: Make sure your Redirect URL is set to "http://localhost:53682/" in your custom config. + NOTICE: If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx + NOTICE: Log in and authorize rclone for access + NOTICE: Waiting for code... -Then copy and paste the auth url -http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx to the browser on your -local machine, complete the auth and it is done. +Finally, copy and paste the presented URL +http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx to the browser +on your local machine, complete the auth and you are done. Filtering, includes and excludes @@ -18128,14 +18618,14 @@ make things easy for users. However this does not provide absolute control over the matching, so for advanced users rclone also provides a regular expression syntax. -The regular expressions used are as defined in the Go regular expression -reference. Regular expressions should be enclosed in {{ }}. They will -match only the last path segment if the glob doesn't start with / or the -whole path name if it does. Note that rclone does not attempt to parse -the supplied regular expression, meaning that using any regular -expression filter will prevent rclone from using directory filter rules, -as it will instead check every path against the supplied regular -expression(s). +Rclone generally accepts Perl-style regular expressions, the exact +syntax is defined in the Go regular expression reference. Regular +expressions should be enclosed in {{ }}. They will match only the last +path segment if the glob doesn't start with / or the whole path name if +it does. Note that rclone does not attempt to parse the supplied regular +expression, meaning that using any regular expression filter will +prevent rclone from using directory filter rules, as it will instead +check every path against the supplied regular expression(s). Here is how the {{regexp}} is transformed into an full regular expression to match the entire path: @@ -19270,10 +19760,10 @@ default jobs are executed immediately as they are created or synchronously. If _async has a true value when supplied to an rc call then it will -return immediately with a job id and the task will be run in the -background. The job/status call can be used to get information of the -background job. The job can be queried for up to 1 minute after it has -finished. +return immediately with a job id and execute id, and the task will be +run in the background. The job/status call can be used to get +information of the background job. The job can be queried for up to 1 +minute after it has finished. It is recommended that potentially long running jobs, e.g. sync/sync, sync/copy, sync/move, operations/purge are run with the _async flag to @@ -19284,9 +19774,15 @@ Starting a job with the _async flag: $ rclone rc --json '{ "p1": [1,"2",null,4], "p2": { "a":1, "b":2 }, "_async": true }' rc/noop { - "jobid": 2 + "jobid": 2, + "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7" } +The jobid is a unique identifier for the job within this rclone +instance. The executeId identifies the rclone process instance and +changes after rclone restart. Together, the pair (executeId, jobid) +uniquely identifies a job across rclone restarts. + Query the status to see if the job has finished. For more information on the meaning of these return parameters see the job/status call. @@ -19295,6 +19791,7 @@ the meaning of these return parameters see the job/status call. 
"duration": 0.000124163, "endTime": "2018-10-27T11:38:07.911245881+01:00", "error": "", + "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7", "finished": true, "id": 2, "output": { @@ -19314,15 +19811,29 @@ the meaning of these return parameters see the job/status call. "success": true } -job/list can be used to show the running or recently completed jobs +job/list can be used to show running or recently completed jobs along +with their status $ rclone rc job/list { + "executeId": "d794c33c-463e-4acf-b911-f4b23e4f40b7", + "finished_ids": [ + 1 + ], "jobids": [ + 1, + 2 + ], + "running_ids": [ 2 ] } +This shows: - executeId - the current rclone instance ID (same for all +jobs, changes after restart) - jobids - array of all job IDs (both +running and finished) - running_ids - array of currently running job +IDs - finished_ids - array of finished job IDs + Setting config flags with _config If you wish to set config (the equivalent of the global flags) for the @@ -19797,7 +20308,7 @@ Unlocks the config file if it is locked. Parameters: -- 'config_password' - password to unlock the config file +- 'configPassword' - password to unlock the config file A good idea is to disable AskPassword before making this call @@ -20092,17 +20603,20 @@ Returns the following values: ] } -core/version: Shows the current version of rclone and the go runtime. +core/version: Shows the current version of rclone, Go and the OS. -This shows the current version of go and the go runtime: +This shows the current versions of rclone, Go and the OS: -- version - rclone version, e.g. "v1.53.0" +- version - rclone version, e.g. "v1.71.2" - decomposed - version number as [major, minor, patch] - isGit - boolean - true if this was compiled from the git version - isBeta - boolean - true if this is a beta version -- os - OS in use as according to Go -- arch - cpu architecture in use according to Go -- goVersion - version of Go runtime in use +- os - OS in use as according to Go GOOS (e.g. "linux") +- osKernel - OS Kernel version (e.g. "6.8.0-86-generic (x86_64)") +- osVersion - OS Version (e.g. "ubuntu 24.04 (64 bit)") +- osArch - cpu architecture in use (e.g. "arm64 (ARMv8 compatible)") +- arch - cpu architecture in use according to Go GOARCH (e.g. "arm64") +- goVersion - version of Go runtime in use (e.g. "go1.25.0") - linking - type of rclone executable (static or dynamic) - goTags - space separated build tags or "none" @@ -20222,6 +20736,65 @@ Returns - entries - number of items in the cache Authentication is required for this call. +job/batch: Run a batch of rclone rc commands concurrently. + +This takes the following parameters: + +- concurrency - int - do this many commands concurrently. Defaults to + --transfers if not set. +- inputs - an list of inputs to the commands with an extra _path + parameter + + { + "_path": "rc/path", + "param1": "parameter for the path as documented", + "param2": "parameter for the path as documented, etc", + } + +The inputs may use _async, _group, _config and _filter as normal when +using the rc. + +Returns: + +- results - a list of results from the commands with one entry for + each in inputs. 
+ +For example: + + rclone rc job/batch --json '{ + "inputs": [ + { + "_path": "rc/noop", + "parameter": "OK" + }, + { + "_path": "rc/error", + "parameter": "BAD" + } + ] + } + ' + +Gives the result: + + { + "results": [ + { + "parameter": "OK" + }, + { + "error": "arbitrary error on input map[parameter:BAD]", + "input": { + "parameter": "BAD" + }, + "path": "rc/error", + "status": 500 + } + ] + } + +Authentication is required for this call. + job/list: Lists the IDs of the running jobs Parameters: None. @@ -20230,6 +20803,8 @@ Results: - executeId - string id of rclone executing (change after restart) - jobids - array of integer job ids (starting at 1 on each restart) +- runningIds - array of integer job ids that are running +- finishedIds - array of integer job ids that are finished job/status: Reads the status of the job ID @@ -20246,6 +20821,8 @@ Results: - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above +- executeId - rclone instance ID (changes after restart); combined + with id uniquely identifies a job - startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00") - success - boolean - true for success false otherwise @@ -20769,8 +21346,6 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the settierfile command for more information on the above. - Authentication is required for this call. operations/size: Count the number of bytes and files in remote @@ -20817,8 +21392,6 @@ This takes the following parameters: - remote - a path within that remote e.g. "dir" - each part in body represents a file to be uploaded -See the uploadfile command for more information on the above. - Authentication is required for this call. options/blocks: List all the option blocks @@ -21001,6 +21574,11 @@ rc/error: This returns an error This returns an error with the input as part of its error string. Useful for testing error handling. +rc/fatal: This returns an fatal error + +This returns an error with the input as part of its error string. Useful +for testing error handling. + rc/list: List all the registered remote control commands This lists all the registered remote control commands as a JSON map in @@ -21020,6 +21598,11 @@ check that parameter passing is working properly. Authentication is required for this call. +rc/panic: This returns an error by panicking + +This returns an error with the input as part of its error string. Useful +for testing error handling. + serve/list: Show running servers Show running servers with IDs. @@ -21288,7 +21871,7 @@ This is only useful if --vfs-cache-mode > off. If you call it when the --vfs-cache-mode is off, it will return an empty result. { - "queued": // an array of files queued for upload + "queue": // an array of files queued for upload [ { "name": "file", // string: name (full path) of the file, @@ -21653,7 +22236,7 @@ Here is an overview of the major features of each cloud storage system. HiDrive HiDrive ¹² R/W No No - - - HTTP - R No No R - + HTTP - R No No R R iCloud Drive - R No No - - @@ -22527,7 +23110,7 @@ Flags for general networking and HTTP stuff. 
--tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.71.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0") Performance @@ -22717,6 +23300,8 @@ Backend-only flags (these can be set in the config file also). --alias-description string Description of the remote --alias-remote string Remote or path to alias + --archive-description string Description of the remote + --archive-remote string Remote to wrap to read archives from --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name --azureblob-archive-tier-delete Delete archive tier blobs before overwriting @@ -22794,6 +23379,10 @@ Backend-only flags (these can be set in the config file also). --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket + --b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2 + --b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + --b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + --b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) @@ -22855,7 +23444,7 @@ Backend-only flags (these can be set in the config file also). --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining --compress-description string Description of the remote - --compress-level int GZIP compression level (-2 to 9) (default -1) + --compress-level string GZIP (levels -2 to 9): --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress @@ -23150,6 +23739,7 @@ Backend-only flags (these can be set in the config file also). --mailru-token string OAuth Access Token as a JSON blob --mailru-token-url string Token server url --mailru-user string User name (usually email) + --mega-2fa string The 2FA code of your MEGA account if the account is set up with one --mega-debug Output more debug from Mega --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -23268,6 +23858,7 @@ Backend-only flags (these can be set in the config file also). 
--protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-otp-secret-key string The OTP secret key (obscured) --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected --protondrive-username string The username of your proton account @@ -23350,6 +23941,7 @@ Backend-only flags (these can be set in the config file also). --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) --s3-use-arn-region If true, enables arn region support for the service + --s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset) --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) @@ -23432,6 +24024,7 @@ Backend-only flags (these can be set in the config file also). --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --skip-links Don't warn about skipped symlinks + --skip-specials Don't warn about skipped pipes, sockets and device objects --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") @@ -23795,17 +24388,20 @@ structure playing the same role as -o key=val CLI flags: token: '{"type": "borrower", "expires": "2021-12-31"}' poll_interval: 0 -Notice a few important details: - YAML prefers _ in option names instead -of -. - YAML treats single and double quotes interchangeably. Simple -strings and integers can be left unquoted. - Boolean values must be -quoted like 'true' or "false" because these two words are reserved by -YAML. - The filesystem string is keyed with remote (or with fs). -Normally you can omit quotes here, but if the string ends with colon, -you must quote it like remote: "storage_box:". - YAML is picky about -surrounding braces in values as this is in fact another syntax for -key/value mappings. For example, JSON access tokens usually contain -double quotes and surrounding braces, so you must put them in single -quotes. +Notice a few important details: + +- YAML prefers _ in option names instead of -. +- YAML treats single and double quotes interchangeably. Simple strings + and integers can be left unquoted. +- Boolean values must be quoted like 'true' or "false" because these + two words are reserved by YAML. +- The filesystem string is keyed with remote (or with fs). Normally + you can omit quotes here, but if the string ends with colon, you + must quote it like remote: "storage_box:". +- YAML is picky about surrounding braces in values as this is in fact + another syntax for key/value mappings. 
For example, JSON access + tokens usually contain double quotes and surrounding braces, so you + must put them in single quotes. Installing as Managed Plugin @@ -23818,11 +24414,13 @@ Rclone volume plugin requires Docker Engine >= 19.03.15 The plugin requires presence of two directories on the host before it can be installed. Note that plugin will not create them automatically. By default they must exist on host at the following locations (though -you can tweak the paths): - /var/lib/docker-plugins/rclone/config is -reserved for the rclone.conf config file and must exist even if it's -empty and the config file is not present. - -/var/lib/docker-plugins/rclone/cache holds the plugin state file as well -as optional VFS caches. +you can tweak the paths): + +- /var/lib/docker-plugins/rclone/config is reserved for the + rclone.conf config file and must exist even if it's empty and the + config file is not present. +- /var/lib/docker-plugins/rclone/cache holds the plugin state file as + well as optional VFS caches. You can install managed plugin with default settings as follows: @@ -23831,8 +24429,11 @@ You can install managed plugin with default settings as follows: The :amd64 part of the image specification after colon is called a tag. Usually you will want to install the latest plugin for your architecture. In this case the tag will just name it, like amd64 above. -The following plugin architectures are currently available: - amd64 - -arm64 - arm-v7 +The following plugin architectures are currently available: + +- amd64 +- arm64 +- arm-v7 Sometimes you might want a concrete plugin version, not the latest one. Then you should use image tag in the form :ARCHITECTURE-VERSION. For @@ -23982,14 +24583,16 @@ Run the docker plugin service in the socket activated mode: systemctl start docker-volume-rclone.socket systemctl restart docker -Or run the service directly: - run systemctl daemon-reload to let -systemd pick up new config - run -systemctl enable docker-volume-rclone.service to make the new service -start automatically when you power on your machine. - run -systemctl start docker-volume-rclone.service to start the service now. - -run systemctl restart docker to restart docker daemon and let it detect -the new plugin socket. Note that this step is not needed in managed mode -where docker knows about plugin state changes. +Or run the service directly: + +- run systemctl daemon-reload to let systemd pick up new config +- run systemctl enable docker-volume-rclone.service to make the new + service start automatically when you power on your machine. +- run systemctl start docker-volume-rclone.service to start the + service now. +- run systemctl restart docker to restart docker daemon and let it + detect the new plugin socket. Note that this step is not needed in + managed mode where docker knows about plugin state changes. The two methods are equivalent from the user perspective, but I personally prefer socket activation. @@ -25129,18 +25732,14 @@ disallowed special characters and filename encodings.) 
The following backends have known issues that need more investigation: -- TestGoFile (gofile) - - TestBisyncRemoteLocal/all_changed - - TestBisyncRemoteLocal/backupdir - - TestBisyncRemoteLocal/basic - - TestBisyncRemoteLocal/changes - - TestBisyncRemoteLocal/check_access - - 78 more -- Updated: 2025-08-21-010015 +- TestDropbox (dropbox) + - TestBisyncRemoteRemote/normalization +- Updated: 2025-11-21-010037 The following backends either have not been tested recently or have known issues that are deemed unfixable for the time being: +- TestArchive (archive) - TestCache (cache) - TestFileLu (filelu) - TestFilesCom (filescom) @@ -26118,7 +26717,7 @@ You may obtain the release signing key from: - https://www.craig-wood.com/nick/pub/pgp-key.txt After importing the key, verify that the fingerprint of one of the keys -matches: FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA as this key is used +matches: FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA ads this key is used for signing. We recommend that you cross-check the fingerprint shown above through @@ -26178,10 +26777,10 @@ appropriate to your architecture. We've also chosen the SHA256SUMS as these are the most secure. You could verify the other types of hash also for extra security. rclone selfupdate verifies just the SHA256SUMS. - $ mkdir /tmp/check - $ cd /tmp/check - $ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS . - $ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip . + mkdir /tmp/check + cd /tmp/check + rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS . + rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip . Verify the signatures @@ -26251,7 +26850,7 @@ website which you need to do in your browser. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -26290,7 +26889,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your 1Fichier account @@ -26443,7 +27043,7 @@ rclone about is not supported by the 1Fichier backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Alias @@ -26475,7 +27075,7 @@ Configuration Here is an example of how to make an alias called remote for local folder. 
First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -26520,7 +27120,8 @@ This will guide you through an interactive setup process: q) Quit config e/n/d/r/c/s/q> q -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level in /mnt/storage/backup @@ -26579,13 +27180,17 @@ The S3 backend can be used with a number of different providers: - China Mobile Ecloud Elastic Object Storage (EOS) - Cloudflare R2 - Arvan Cloud Object Storage (AOS) +- Cubbit DS3 - DigitalOcean Spaces - Dreamhost - Exaba +- FileLu S5 (S3-Compatible Object Storage) - GCS +- Hetzner - Huawei OBS - IBM COS S3 - IDrive e2 +- Intercolo Object Storage - IONOS Cloud - Leviia Object Storage - Liara Object Storage @@ -26598,12 +27203,15 @@ The S3 backend can be used with a number of different providers: - Petabox - Pure Storage FlashBlade - Qiniu Cloud Object Storage (Kodo) +- Rabata Cloud Storage - RackCorp Object Storage - Rclone Serve S3 - Scaleway - Seagate Lyve Cloud - SeaweedFS - Selectel +- Servercore Object Storage +- Spectra Logic - StackPath - Storj - Synology C2 Object Storage @@ -26646,7 +27254,7 @@ First run This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -27200,9 +27808,9 @@ The chunk sizes used in the multipart upload are specified by --s3-chunk-size and the number of chunks uploaded concurrently is specified by --s3-upload-concurrency. -Multipart uploads will use --transfers * --s3-upload-concurrency * ---s3-chunk-size extra memory. Single part uploads to not use extra -memory. +Multipart uploads will use extra memory equal to: --transfers × +--s3-upload-concurrency × --s3-chunk-size. Single part uploads do not +use extra memory. Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely @@ -27281,31 +27889,31 @@ required. Example policy: { - "Version": "2012-10-17", - "Statement": [ - { - "Effect": "Allow", - "Principal": { - "AWS": "arn:aws:iam::USER_SID:user/USER_NAME" - }, - "Action": [ - "s3:ListBucket", - "s3:DeleteObject", - "s3:GetObject", - "s3:PutObject", - "s3:PutObjectAcl" - ], - "Resource": [ - "arn:aws:s3:::BUCKET_NAME/*", - "arn:aws:s3:::BUCKET_NAME" - ] - }, - { - "Effect": "Allow", - "Action": "s3:ListAllMyBuckets", - "Resource": "arn:aws:s3:::*" - } - ] + "Version": "2012-10-17", + "Statement": [ + { + "Effect": "Allow", + "Principal": { + "AWS": "arn:aws:iam::USER_SID:user/USER_NAME" + }, + "Action": [ + "s3:ListBucket", + "s3:DeleteObject", + "s3:GetObject", + "s3:PutObject", + "s3:PutObjectAcl" + ], + "Resource": [ + "arn:aws:s3:::BUCKET_NAME/*", + "arn:aws:s3:::BUCKET_NAME" + ] + }, + { + "Effect": "Allow", + "Action": "s3:ListAllMyBuckets", + "Resource": "arn:aws:s3:::*" + } + ] } Notes on above: @@ -27359,11 +27967,12 @@ Standard options Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, -Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, -IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, -Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, -SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, -Qiniu, Zata and others). 
+Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, +GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, +Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, +OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, +Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, +TencentCOS, Wasabi, Zata, Other). --s3-provider @@ -27388,32 +27997,40 @@ Properties: - China Mobile Ecloud Elastic Object Storage (EOS) - "Cloudflare" - Cloudflare R2 Storage + - "Cubbit" + - Cubbit DS3 Object Storage - "DigitalOcean" - DigitalOcean Spaces - "Dreamhost" - Dreamhost DreamObjects - "Exaba" - Exaba Object Storage + - "FileLu" + - FileLu S5 (S3-Compatible Object Storage) - "FlashBlade" - Pure Storage FlashBlade Object Storage - "GCS" - Google Cloud Storage + - "Hetzner" + - Hetzner Object Storage - "HuaweiOBS" - Huawei Object Storage Service - "IBMCOS" - IBM COS S3 - "IDrive" - IDrive e2 + - "Intercolo" + - Intercolo Object Storage - "IONOS" - IONOS Cloud - - "LyveCloud" - - Seagate Lyve Cloud - "Leviia" - Leviia Object Storage - "Liara" - Liara Object Storage - "Linode" - Linode Object Storage + - "LyveCloud" + - Seagate Lyve Cloud - "Magalu" - Magalu Object Storage - "Mega" @@ -27428,6 +28045,10 @@ Properties: - OVHcloud Object Storage - "Petabox" - Petabox Object Storage + - "Qiniu" + - Qiniu Object Storage (Kodo) + - "Rabata" + - Rabata Cloud Storage - "RackCorp" - RackCorp Object Storage - "Rclone" @@ -27438,6 +28059,10 @@ Properties: - SeaweedFS S3 - "Selectel" - Selectel Object Storage + - "Servercore" + - Servercore Object Storage + - "SpectraLogic" + - Spectra Logic Black Pearl - "StackPath" - StackPath Object Storage - "Storj" @@ -27448,8 +28073,6 @@ Properties: - Tencent Cloud Object Storage (COS) - "Wasabi" - Wasabi Object Storage - - "Qiniu" - - Qiniu Object Storage (Kodo) - "Zata" - Zata (S3 compatible Gateway) - "Other" @@ -27504,11 +28127,14 @@ Properties: Region to connect to. +Leave blank if you are using an S3 clone and you don't have a region. + Properties: - Config: region - Env Var: RCLONE_S3_REGION -- Provider: AWS +- Provider: + AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other - Type: string - Required: false - Examples: @@ -27516,162 +28142,1692 @@ Properties: - The default endpoint - a good choice if you are unsure. - US Region, Northern Virginia, or Pacific Northwest. - Leave location constraint empty. + - Provider: AWS - "us-east-2" - US East (Ohio) Region. - Needs location constraint us-east-2. + - Provider: AWS - "us-west-1" - US West (Northern California) Region. - Needs location constraint us-west-1. + - Provider: AWS - "us-west-2" - US West (Oregon) Region. - Needs location constraint us-west-2. + - Provider: AWS - "ca-central-1" - Canada (Central) Region. - Needs location constraint ca-central-1. + - Provider: AWS - "eu-west-1" - EU (Ireland) Region. - Needs location constraint EU or eu-west-1. + - Provider: AWS - "eu-west-2" - EU (London) Region. - Needs location constraint eu-west-2. + - Provider: AWS - "eu-west-3" - EU (Paris) Region. - Needs location constraint eu-west-3. + - Provider: AWS - "eu-north-1" - EU (Stockholm) Region. - Needs location constraint eu-north-1. + - Provider: AWS - "eu-south-1" - EU (Milan) Region. - Needs location constraint eu-south-1. 
+ - Provider: AWS - "eu-central-1" - EU (Frankfurt) Region. - Needs location constraint eu-central-1. + - Provider: AWS - "ap-southeast-1" - Asia Pacific (Singapore) Region. - Needs location constraint ap-southeast-1. + - Provider: AWS - "ap-southeast-2" - Asia Pacific (Sydney) Region. - Needs location constraint ap-southeast-2. + - Provider: AWS - "ap-northeast-1" - Asia Pacific (Tokyo) Region. - Needs location constraint ap-northeast-1. + - Provider: AWS - "ap-northeast-2" - Asia Pacific (Seoul). - Needs location constraint ap-northeast-2. + - Provider: AWS - "ap-northeast-3" - Asia Pacific (Osaka-Local). - Needs location constraint ap-northeast-3. + - Provider: AWS - "ap-south-1" - Asia Pacific (Mumbai). - Needs location constraint ap-south-1. + - Provider: AWS - "ap-east-1" - Asia Pacific (Hong Kong) Region. - Needs location constraint ap-east-1. + - Provider: AWS - "sa-east-1" - South America (Sao Paulo) Region. - Needs location constraint sa-east-1. + - Provider: AWS - "il-central-1" - Israel (Tel Aviv) Region. - Needs location constraint il-central-1. + - Provider: AWS - "me-south-1" - Middle East (Bahrain) Region. - Needs location constraint me-south-1. + - Provider: AWS - "af-south-1" - Africa (Cape Town) Region. - Needs location constraint af-south-1. + - Provider: AWS - "cn-north-1" - China (Beijing) Region. - Needs location constraint cn-north-1. + - Provider: AWS - "cn-northwest-1" - China (Ningxia) Region. - Needs location constraint cn-northwest-1. + - Provider: AWS - "us-gov-east-1" - AWS GovCloud (US-East) Region. - Needs location constraint us-gov-east-1. + - Provider: AWS - "us-gov-west-1" - AWS GovCloud (US) Region. - Needs location constraint us-gov-west-1. + - Provider: AWS + - "" + - Use this if unsure. + - Will use v4 signatures and an empty region. + - Provider: + Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other + - "other-v2-signature" + - Use this only if v4 signatures don't work. + - E.g. pre Jewel/v10 CEPH. + - Provider: + Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other + - "auto" + - R2 buckets are automatically distributed across Cloudflare's + data centers for low latency. 
+ - Provider: Cloudflare + - "eu-west-1" + - Europe West + - Provider: Cubbit + - "global" + - Global + - Provider: FileLu + - "us-east" + - North America (US-East) + - Provider: FileLu + - "eu-central" + - Europe (EU-Central) + - Provider: FileLu + - "ap-southeast" + - Asia Pacific (AP-Southeast) + - Provider: FileLu + - "me-central" + - Middle East (ME-Central) + - Provider: FileLu + - "hel1" + - Helsinki + - Provider: Hetzner + - "fsn1" + - Falkenstein + - Provider: Hetzner + - "nbg1" + - Nuremberg + - Provider: Hetzner + - "af-south-1" + - AF-Johannesburg + - Provider: HuaweiOBS + - "ap-southeast-2" + - AP-Bangkok + - Provider: HuaweiOBS + - "ap-southeast-3" + - AP-Singapore + - Provider: HuaweiOBS + - "cn-east-3" + - CN East-Shanghai1 + - Provider: HuaweiOBS + - "cn-east-2" + - CN East-Shanghai2 + - Provider: HuaweiOBS + - "cn-north-1" + - CN North-Beijing1 + - Provider: HuaweiOBS + - "cn-north-4" + - CN North-Beijing4 + - Provider: HuaweiOBS + - "cn-south-1" + - CN South-Guangzhou + - Provider: HuaweiOBS + - "ap-southeast-1" + - CN-Hong Kong + - Provider: HuaweiOBS + - "sa-argentina-1" + - LA-Buenos Aires1 + - Provider: HuaweiOBS + - "sa-peru-1" + - LA-Lima1 + - Provider: HuaweiOBS + - "na-mexico-1" + - LA-Mexico City1 + - Provider: HuaweiOBS + - "sa-chile-1" + - LA-Santiago2 + - Provider: HuaweiOBS + - "sa-brazil-1" + - LA-Sao Paulo1 + - Provider: HuaweiOBS + - "ru-northwest-2" + - RU-Moscow2 + - Provider: HuaweiOBS + - "de-fra" + - Frankfurt, Germany + - Provider: Intercolo + - "de" + - Frankfurt, Germany + - Provider: IONOS,OVHcloud + - "eu-central-2" + - Berlin, Germany + - Provider: IONOS + - "eu-south-2" + - Logrono, Spain + - Provider: IONOS + - "eu-west-2" + - Paris, France + - Provider: Outscale + - "us-east-2" + - New Jersey, USA + - Provider: Outscale + - "us-west-1" + - California, USA + - Provider: Outscale + - "cloudgouv-eu-west-1" + - SecNumCloud, Paris, France + - Provider: Outscale + - "ap-northeast-1" + - Tokyo, Japan + - Provider: Outscale + - "gra" + - Gravelines, France + - Provider: OVHcloud + - "rbx" + - Roubaix, France + - Provider: OVHcloud + - "sbg" + - Strasbourg, France + - Provider: OVHcloud + - "eu-west-par" + - Paris, France (3AZ) + - Provider: OVHcloud + - "uk" + - London, United Kingdom + - Provider: OVHcloud + - "waw" + - Warsaw, Poland + - Provider: OVHcloud + - "bhs" + - Beauharnois, Canada + - Provider: OVHcloud + - "ca-east-tor" + - Toronto, Canada + - Provider: OVHcloud + - "sgp" + - Singapore + - Provider: OVHcloud + - "ap-southeast-syd" + - Sydney, Australia + - Provider: OVHcloud + - "ap-south-mum" + - Mumbai, India + - Provider: OVHcloud + - "us-east-va" + - Vint Hill, Virginia, USA + - Provider: OVHcloud + - "us-west-or" + - Hillsboro, Oregon, USA + - Provider: OVHcloud + - "rbx-archive" + - Roubaix, France (Cold Archive) + - Provider: OVHcloud + - "us-east-1" + - US East (N. Virginia) + - Provider: Petabox,Rabata + - "eu-central-1" + - Europe (Frankfurt) + - Provider: Petabox + - "ap-southeast-1" + - Asia Pacific (Singapore) + - Provider: Petabox + - "me-south-1" + - Middle East (Bahrain) + - Provider: Petabox + - "sa-east-1" + - South America (São Paulo) + - Provider: Petabox + - "cn-east-1" + - The default endpoint - a good choice if you are unsure. + - East China Region 1. + - Needs location constraint cn-east-1. + - Provider: Qiniu + - "cn-east-2" + - East China Region 2. + - Needs location constraint cn-east-2. + - Provider: Qiniu + - "cn-north-1" + - North China Region 1. + - Needs location constraint cn-north-1. 
+ - Provider: Qiniu + - "cn-south-1" + - South China Region 1. + - Needs location constraint cn-south-1. + - Provider: Qiniu + - "us-north-1" + - North America Region. + - Needs location constraint us-north-1. + - Provider: Qiniu + - "ap-southeast-1" + - Southeast Asia Region 1. + - Needs location constraint ap-southeast-1. + - Provider: Qiniu + - "ap-northeast-1" + - Northeast Asia Region 1. + - Needs location constraint ap-northeast-1. + - Provider: Qiniu + - "eu-west-1" + - EU (Ireland) + - Provider: Rabata + - "eu-west-2" + - EU (London) + - Provider: Rabata + - "global" + - Global CDN (All locations) Region + - Provider: RackCorp + - "au" + - Australia (All states) + - Provider: RackCorp + - "au-nsw" + - NSW (Australia) Region + - Provider: RackCorp + - "au-qld" + - QLD (Australia) Region + - Provider: RackCorp + - "au-vic" + - VIC (Australia) Region + - Provider: RackCorp + - "au-wa" + - Perth (Australia) Region + - Provider: RackCorp + - "ph" + - Manila (Philippines) Region + - Provider: RackCorp + - "th" + - Bangkok (Thailand) Region + - Provider: RackCorp + - "hk" + - HK (Hong Kong) Region + - Provider: RackCorp + - "mn" + - Ulaanbaatar (Mongolia) Region + - Provider: RackCorp + - "kg" + - Bishkek (Kyrgyzstan) Region + - Provider: RackCorp + - "id" + - Jakarta (Indonesia) Region + - Provider: RackCorp + - "jp" + - Tokyo (Japan) Region + - Provider: RackCorp + - "sg" + - SG (Singapore) Region + - Provider: RackCorp + - "de" + - Frankfurt (Germany) Region + - Provider: RackCorp + - "us" + - USA (AnyCast) Region + - Provider: RackCorp + - "us-east-1" + - New York (USA) Region + - Provider: RackCorp + - "us-west-1" + - Freemont (USA) Region + - Provider: RackCorp + - "nz" + - Auckland (New Zealand) Region + - Provider: RackCorp + - "nl-ams" + - Amsterdam, The Netherlands + - Provider: Scaleway + - "fr-par" + - Paris, France + - Provider: Scaleway + - "pl-waw" + - Warsaw, Poland + - Provider: Scaleway + - "ru-1" + - St. Petersburg + - Provider: Selectel,Servercore + - "gis-1" + - Moscow + - Provider: Servercore + - "ru-7" + - Moscow + - Provider: Servercore + - "uz-2" + - Tashkent, Uzbekistan + - Provider: Servercore + - "kz-1" + - Almaty, Kazakhstan + - Provider: Servercore + - "eu-001" + - Europe Region 1 + - Provider: Synology + - "eu-002" + - Europe Region 2 + - Provider: Synology + - "us-001" + - US Region 1 + - Provider: Synology + - "us-002" + - US Region 2 + - Provider: Synology + - "tw-001" + - Asia (Taiwan) + - Provider: Synology + - "us-east-1" + - Indore, Madhya Pradesh, India + - Provider: Zata --s3-endpoint Endpoint for S3 API. -Leave blank if using AWS to use the default endpoint for the region. +Required when using an S3 clone. 
Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT -- Provider: AWS +- Provider: + AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false +- Examples: + - "oss-accelerate.aliyuncs.com" + - Global Accelerate + - Provider: Alibaba + - "oss-accelerate-overseas.aliyuncs.com" + - Global Accelerate (outside mainland China) + - Provider: Alibaba + - "oss-cn-hangzhou.aliyuncs.com" + - East China 1 (Hangzhou) + - Provider: Alibaba + - "oss-cn-shanghai.aliyuncs.com" + - East China 2 (Shanghai) + - Provider: Alibaba + - "oss-cn-qingdao.aliyuncs.com" + - North China 1 (Qingdao) + - Provider: Alibaba + - "oss-cn-beijing.aliyuncs.com" + - North China 2 (Beijing) + - Provider: Alibaba + - "oss-cn-zhangjiakou.aliyuncs.com" + - North China 3 (Zhangjiakou) + - Provider: Alibaba + - "oss-cn-huhehaote.aliyuncs.com" + - North China 5 (Hohhot) + - Provider: Alibaba + - "oss-cn-wulanchabu.aliyuncs.com" + - North China 6 (Ulanqab) + - Provider: Alibaba + - "oss-cn-shenzhen.aliyuncs.com" + - South China 1 (Shenzhen) + - Provider: Alibaba + - "oss-cn-heyuan.aliyuncs.com" + - South China 2 (Heyuan) + - Provider: Alibaba + - "oss-cn-guangzhou.aliyuncs.com" + - South China 3 (Guangzhou) + - Provider: Alibaba + - "oss-cn-chengdu.aliyuncs.com" + - West China 1 (Chengdu) + - Provider: Alibaba + - "oss-cn-hongkong.aliyuncs.com" + - Hong Kong (Hong Kong) + - Provider: Alibaba + - "oss-us-west-1.aliyuncs.com" + - US West 1 (Silicon Valley) + - Provider: Alibaba + - "oss-us-east-1.aliyuncs.com" + - US East 1 (Virginia) + - Provider: Alibaba + - "oss-ap-southeast-1.aliyuncs.com" + - Southeast Asia Southeast 1 (Singapore) + - Provider: Alibaba + - "oss-ap-southeast-2.aliyuncs.com" + - Asia Pacific Southeast 2 (Sydney) + - Provider: Alibaba + - "oss-ap-southeast-3.aliyuncs.com" + - Southeast Asia Southeast 3 (Kuala Lumpur) + - Provider: Alibaba + - "oss-ap-southeast-5.aliyuncs.com" + - Asia Pacific Southeast 5 (Jakarta) + - Provider: Alibaba + - "oss-ap-northeast-1.aliyuncs.com" + - Asia Pacific Northeast 1 (Japan) + - Provider: Alibaba + - "oss-ap-south-1.aliyuncs.com" + - Asia Pacific South 1 (Mumbai) + - Provider: Alibaba + - "oss-eu-central-1.aliyuncs.com" + - Central Europe 1 (Frankfurt) + - Provider: Alibaba + - "oss-eu-west-1.aliyuncs.com" + - West Europe (London) + - Provider: Alibaba + - "oss-me-east-1.aliyuncs.com" + - Middle East 1 (Dubai) + - Provider: Alibaba + - "s3.ir-thr-at1.arvanstorage.ir" + - The default endpoint - a good choice if you are unsure. + - Tehran Iran (Simin) + - Provider: ArvanCloud + - "s3.ir-tbz-sh1.arvanstorage.ir" + - Tabriz Iran (Shahriar) + - Provider: ArvanCloud + - "eos-wuxi-1.cmecloud.cn" + - The default endpoint - a good choice if you are unsure. 
+ - East China (Suzhou) + - Provider: ChinaMobile + - "eos-jinan-1.cmecloud.cn" + - East China (Jinan) + - Provider: ChinaMobile + - "eos-ningbo-1.cmecloud.cn" + - East China (Hangzhou) + - Provider: ChinaMobile + - "eos-shanghai-1.cmecloud.cn" + - East China (Shanghai-1) + - Provider: ChinaMobile + - "eos-zhengzhou-1.cmecloud.cn" + - Central China (Zhengzhou) + - Provider: ChinaMobile + - "eos-hunan-1.cmecloud.cn" + - Central China (Changsha-1) + - Provider: ChinaMobile + - "eos-zhuzhou-1.cmecloud.cn" + - Central China (Changsha-2) + - Provider: ChinaMobile + - "eos-guangzhou-1.cmecloud.cn" + - South China (Guangzhou-2) + - Provider: ChinaMobile + - "eos-dongguan-1.cmecloud.cn" + - South China (Guangzhou-3) + - Provider: ChinaMobile + - "eos-beijing-1.cmecloud.cn" + - North China (Beijing-1) + - Provider: ChinaMobile + - "eos-beijing-2.cmecloud.cn" + - North China (Beijing-2) + - Provider: ChinaMobile + - "eos-beijing-4.cmecloud.cn" + - North China (Beijing-3) + - Provider: ChinaMobile + - "eos-huhehaote-1.cmecloud.cn" + - North China (Huhehaote) + - Provider: ChinaMobile + - "eos-chengdu-1.cmecloud.cn" + - Southwest China (Chengdu) + - Provider: ChinaMobile + - "eos-chongqing-1.cmecloud.cn" + - Southwest China (Chongqing) + - Provider: ChinaMobile + - "eos-guiyang-1.cmecloud.cn" + - Southwest China (Guiyang) + - Provider: ChinaMobile + - "eos-xian-1.cmecloud.cn" + - Nouthwest China (Xian) + - Provider: ChinaMobile + - "eos-yunnan.cmecloud.cn" + - Yunnan China (Kunming) + - Provider: ChinaMobile + - "eos-yunnan-2.cmecloud.cn" + - Yunnan China (Kunming-2) + - Provider: ChinaMobile + - "eos-tianjin-1.cmecloud.cn" + - Tianjin China (Tianjin) + - Provider: ChinaMobile + - "eos-jilin-1.cmecloud.cn" + - Jilin China (Changchun) + - Provider: ChinaMobile + - "eos-hubei-1.cmecloud.cn" + - Hubei China (Xiangyan) + - Provider: ChinaMobile + - "eos-jiangxi-1.cmecloud.cn" + - Jiangxi China (Nanchang) + - Provider: ChinaMobile + - "eos-gansu-1.cmecloud.cn" + - Gansu China (Lanzhou) + - Provider: ChinaMobile + - "eos-shanxi-1.cmecloud.cn" + - Shanxi China (Taiyuan) + - Provider: ChinaMobile + - "eos-liaoning-1.cmecloud.cn" + - Liaoning China (Shenyang) + - Provider: ChinaMobile + - "eos-hebei-1.cmecloud.cn" + - Hebei China (Shijiazhuang) + - Provider: ChinaMobile + - "eos-fujian-1.cmecloud.cn" + - Fujian China (Xiamen) + - Provider: ChinaMobile + - "eos-guangxi-1.cmecloud.cn" + - Guangxi China (Nanning) + - Provider: ChinaMobile + - "eos-anhui-1.cmecloud.cn" + - Anhui China (Huainan) + - Provider: ChinaMobile + - "s3.cubbit.eu" + - Cubbit DS3 Object Storage endpoint + - Provider: Cubbit + - "syd1.digitaloceanspaces.com" + - DigitalOcean Spaces Sydney 1 + - Provider: DigitalOcean + - "sfo3.digitaloceanspaces.com" + - DigitalOcean Spaces San Francisco 3 + - Provider: DigitalOcean + - "sfo2.digitaloceanspaces.com" + - DigitalOcean Spaces San Francisco 2 + - Provider: DigitalOcean + - "fra1.digitaloceanspaces.com" + - DigitalOcean Spaces Frankfurt 1 + - Provider: DigitalOcean + - "nyc3.digitaloceanspaces.com" + - DigitalOcean Spaces New York 3 + - Provider: DigitalOcean + - "ams3.digitaloceanspaces.com" + - DigitalOcean Spaces Amsterdam 3 + - Provider: DigitalOcean + - "sgp1.digitaloceanspaces.com" + - DigitalOcean Spaces Singapore 1 + - Provider: DigitalOcean + - "lon1.digitaloceanspaces.com" + - DigitalOcean Spaces London 1 + - Provider: DigitalOcean + - "tor1.digitaloceanspaces.com" + - DigitalOcean Spaces Toronto 1 + - Provider: DigitalOcean + - "blr1.digitaloceanspaces.com" + - DigitalOcean Spaces 
Bangalore 1 + - Provider: DigitalOcean + - "objects-us-east-1.dream.io" + - Dream Objects endpoint + - Provider: Dreamhost + - "s5lu.com" + - Global FileLu S5 endpoint + - Provider: FileLu + - "us.s5lu.com" + - North America (US-East) region endpoint + - Provider: FileLu + - "eu.s5lu.com" + - Europe (EU-Central) region endpoint + - Provider: FileLu + - "ap.s5lu.com" + - Asia Pacific (AP-Southeast) region endpoint + - Provider: FileLu + - "me.s5lu.com" + - Middle East (ME-Central) region endpoint + - Provider: FileLu + - "https://storage.googleapis.com" + - Google Cloud Storage endpoint + - Provider: GCS + - "hel1.your-objectstorage.com" + - Helsinki + - Provider: Hetzner + - "fsn1.your-objectstorage.com" + - Falkenstein + - Provider: Hetzner + - "nbg1.your-objectstorage.com" + - Nuremberg + - Provider: Hetzner + - "obs.af-south-1.myhuaweicloud.com" + - AF-Johannesburg + - Provider: HuaweiOBS + - "obs.ap-southeast-2.myhuaweicloud.com" + - AP-Bangkok + - Provider: HuaweiOBS + - "obs.ap-southeast-3.myhuaweicloud.com" + - AP-Singapore + - Provider: HuaweiOBS + - "obs.cn-east-3.myhuaweicloud.com" + - CN East-Shanghai1 + - Provider: HuaweiOBS + - "obs.cn-east-2.myhuaweicloud.com" + - CN East-Shanghai2 + - Provider: HuaweiOBS + - "obs.cn-north-1.myhuaweicloud.com" + - CN North-Beijing1 + - Provider: HuaweiOBS + - "obs.cn-north-4.myhuaweicloud.com" + - CN North-Beijing4 + - Provider: HuaweiOBS + - "obs.cn-south-1.myhuaweicloud.com" + - CN South-Guangzhou + - Provider: HuaweiOBS + - "obs.ap-southeast-1.myhuaweicloud.com" + - CN-Hong Kong + - Provider: HuaweiOBS + - "obs.sa-argentina-1.myhuaweicloud.com" + - LA-Buenos Aires1 + - Provider: HuaweiOBS + - "obs.sa-peru-1.myhuaweicloud.com" + - LA-Lima1 + - Provider: HuaweiOBS + - "obs.na-mexico-1.myhuaweicloud.com" + - LA-Mexico City1 + - Provider: HuaweiOBS + - "obs.sa-chile-1.myhuaweicloud.com" + - LA-Santiago2 + - Provider: HuaweiOBS + - "obs.sa-brazil-1.myhuaweicloud.com" + - LA-Sao Paulo1 + - Provider: HuaweiOBS + - "obs.ru-northwest-2.myhuaweicloud.com" + - RU-Moscow2 + - Provider: HuaweiOBS + - "s3.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Endpoint + - Provider: IBMCOS + - "s3.dal.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Dallas Endpoint + - Provider: IBMCOS + - "s3.wdc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Washington DC Endpoint + - Provider: IBMCOS + - "s3.sjc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region San Jose Endpoint + - Provider: IBMCOS + - "s3.private.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Private Endpoint + - Provider: IBMCOS + - "s3.private.dal.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Dallas Private Endpoint + - Provider: IBMCOS + - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Washington DC Private Endpoint + - Provider: IBMCOS + - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region San Jose Private Endpoint + - Provider: IBMCOS + - "s3.us-east.cloud-object-storage.appdomain.cloud" + - US Region East Endpoint + - Provider: IBMCOS + - "s3.private.us-east.cloud-object-storage.appdomain.cloud" + - US Region East Private Endpoint + - Provider: IBMCOS + - "s3.us-south.cloud-object-storage.appdomain.cloud" + - US Region South Endpoint + - Provider: IBMCOS + - "s3.private.us-south.cloud-object-storage.appdomain.cloud" + - US Region South Private Endpoint + - Provider: IBMCOS + - "s3.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Endpoint + - Provider: 
IBMCOS + - "s3.fra.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Frankfurt Endpoint + - Provider: IBMCOS + - "s3.mil.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Milan Endpoint + - Provider: IBMCOS + - "s3.ams.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Amsterdam Endpoint + - Provider: IBMCOS + - "s3.private.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Private Endpoint + - Provider: IBMCOS + - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Frankfurt Private Endpoint + - Provider: IBMCOS + - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Milan Private Endpoint + - Provider: IBMCOS + - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Amsterdam Private Endpoint + - Provider: IBMCOS + - "s3.eu-gb.cloud-object-storage.appdomain.cloud" + - Great Britain Endpoint + - Provider: IBMCOS + - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud" + - Great Britain Private Endpoint + - Provider: IBMCOS + - "s3.eu-de.cloud-object-storage.appdomain.cloud" + - EU Region DE Endpoint + - Provider: IBMCOS + - "s3.private.eu-de.cloud-object-storage.appdomain.cloud" + - EU Region DE Private Endpoint + - Provider: IBMCOS + - "s3.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Endpoint + - Provider: IBMCOS + - "s3.tok.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Tokyo Endpoint + - Provider: IBMCOS + - "s3.hkg.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Hong Kong Endpoint + - Provider: IBMCOS + - "s3.seo.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Seoul Endpoint + - Provider: IBMCOS + - "s3.private.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Private Endpoint + - Provider: IBMCOS + - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Tokyo Private Endpoint + - Provider: IBMCOS + - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Hong Kong Private Endpoint + - Provider: IBMCOS + - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Seoul Private Endpoint + - Provider: IBMCOS + - "s3.jp-tok.cloud-object-storage.appdomain.cloud" + - APAC Region Japan Endpoint + - Provider: IBMCOS + - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud" + - APAC Region Japan Private Endpoint + - Provider: IBMCOS + - "s3.au-syd.cloud-object-storage.appdomain.cloud" + - APAC Region Australia Endpoint + - Provider: IBMCOS + - "s3.private.au-syd.cloud-object-storage.appdomain.cloud" + - APAC Region Australia Private Endpoint + - Provider: IBMCOS + - "s3.ams03.cloud-object-storage.appdomain.cloud" + - Amsterdam Single Site Endpoint + - Provider: IBMCOS + - "s3.private.ams03.cloud-object-storage.appdomain.cloud" + - Amsterdam Single Site Private Endpoint + - Provider: IBMCOS + - "s3.che01.cloud-object-storage.appdomain.cloud" + - Chennai Single Site Endpoint + - Provider: IBMCOS + - "s3.private.che01.cloud-object-storage.appdomain.cloud" + - Chennai Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mel01.cloud-object-storage.appdomain.cloud" + - Melbourne Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mel01.cloud-object-storage.appdomain.cloud" + - Melbourne Single Site Private Endpoint + - Provider: IBMCOS + - "s3.osl01.cloud-object-storage.appdomain.cloud" + - Oslo Single Site Endpoint + - Provider: IBMCOS + - "s3.private.osl01.cloud-object-storage.appdomain.cloud" + - Oslo 
Single Site Private Endpoint + - Provider: IBMCOS + - "s3.tor01.cloud-object-storage.appdomain.cloud" + - Toronto Single Site Endpoint + - Provider: IBMCOS + - "s3.private.tor01.cloud-object-storage.appdomain.cloud" + - Toronto Single Site Private Endpoint + - Provider: IBMCOS + - "s3.seo01.cloud-object-storage.appdomain.cloud" + - Seoul Single Site Endpoint + - Provider: IBMCOS + - "s3.private.seo01.cloud-object-storage.appdomain.cloud" + - Seoul Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mon01.cloud-object-storage.appdomain.cloud" + - Montreal Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mon01.cloud-object-storage.appdomain.cloud" + - Montreal Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mex01.cloud-object-storage.appdomain.cloud" + - Mexico Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mex01.cloud-object-storage.appdomain.cloud" + - Mexico Single Site Private Endpoint + - Provider: IBMCOS + - "s3.sjc04.cloud-object-storage.appdomain.cloud" + - San Jose Single Site Endpoint + - Provider: IBMCOS + - "s3.private.sjc04.cloud-object-storage.appdomain.cloud" + - San Jose Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mil01.cloud-object-storage.appdomain.cloud" + - Milan Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mil01.cloud-object-storage.appdomain.cloud" + - Milan Single Site Private Endpoint + - Provider: IBMCOS + - "s3.hkg02.cloud-object-storage.appdomain.cloud" + - Hong Kong Single Site Endpoint + - Provider: IBMCOS + - "s3.private.hkg02.cloud-object-storage.appdomain.cloud" + - Hong Kong Single Site Private Endpoint + - Provider: IBMCOS + - "s3.par01.cloud-object-storage.appdomain.cloud" + - Paris Single Site Endpoint + - Provider: IBMCOS + - "s3.private.par01.cloud-object-storage.appdomain.cloud" + - Paris Single Site Private Endpoint + - Provider: IBMCOS + - "s3.sng01.cloud-object-storage.appdomain.cloud" + - Singapore Single Site Endpoint + - Provider: IBMCOS + - "s3.private.sng01.cloud-object-storage.appdomain.cloud" + - Singapore Single Site Private Endpoint + - Provider: IBMCOS + - "de-fra.i3storage.com" + - Frankfurt, Germany + - Provider: Intercolo + - "s3-eu-central-1.ionoscloud.com" + - Frankfurt, Germany + - Provider: IONOS + - "s3-eu-central-2.ionoscloud.com" + - Berlin, Germany + - Provider: IONOS + - "s3-eu-south-2.ionoscloud.com" + - Logrono, Spain + - Provider: IONOS + - "s3.leviia.com" + - The default endpoint + - Leviia + - Provider: Leviia + - "storage.iran.liara.space" + - The default endpoint + - Iran + - Provider: Liara + - "nl-ams-1.linodeobjects.com" + - Amsterdam (Netherlands), nl-ams-1 + - Provider: Linode + - "us-southeast-1.linodeobjects.com" + - Atlanta, GA (USA), us-southeast-1 + - Provider: Linode + - "in-maa-1.linodeobjects.com" + - Chennai (India), in-maa-1 + - Provider: Linode + - "us-ord-1.linodeobjects.com" + - Chicago, IL (USA), us-ord-1 + - Provider: Linode + - "eu-central-1.linodeobjects.com" + - Frankfurt (Germany), eu-central-1 + - Provider: Linode + - "id-cgk-1.linodeobjects.com" + - Jakarta (Indonesia), id-cgk-1 + - Provider: Linode + - "gb-lon-1.linodeobjects.com" + - London 2 (Great Britain), gb-lon-1 + - Provider: Linode + - "us-lax-1.linodeobjects.com" + - Los Angeles, CA (USA), us-lax-1 + - Provider: Linode + - "es-mad-1.linodeobjects.com" + - Madrid (Spain), es-mad-1 + - Provider: Linode + - "au-mel-1.linodeobjects.com" + - Melbourne (Australia), au-mel-1 + - Provider: Linode + - "us-mia-1.linodeobjects.com" + - Miami, FL (USA), us-mia-1 + - Provider: Linode + - 
"it-mil-1.linodeobjects.com" + - Milan (Italy), it-mil-1 + - Provider: Linode + - "us-east-1.linodeobjects.com" + - Newark, NJ (USA), us-east-1 + - Provider: Linode + - "jp-osa-1.linodeobjects.com" + - Osaka (Japan), jp-osa-1 + - Provider: Linode + - "fr-par-1.linodeobjects.com" + - Paris (France), fr-par-1 + - Provider: Linode + - "br-gru-1.linodeobjects.com" + - São Paulo (Brazil), br-gru-1 + - Provider: Linode + - "us-sea-1.linodeobjects.com" + - Seattle, WA (USA), us-sea-1 + - Provider: Linode + - "ap-south-1.linodeobjects.com" + - Singapore, ap-south-1 + - Provider: Linode + - "sg-sin-1.linodeobjects.com" + - Singapore 2, sg-sin-1 + - Provider: Linode + - "se-sto-1.linodeobjects.com" + - Stockholm (Sweden), se-sto-1 + - Provider: Linode + - "us-iad-1.linodeobjects.com" + - Washington, DC, (USA), us-iad-1 + - Provider: Linode + - "s3.us-west-1.{account_name}.lyve.seagate.com" + - US West 1 - California + - Provider: LyveCloud + - "s3.eu-west-1.{account_name}.lyve.seagate.com" + - EU West 1 - Ireland + - Provider: LyveCloud + - "br-se1.magaluobjects.com" + - São Paulo, SP (BR), br-se1 + - Provider: Magalu + - "br-ne1.magaluobjects.com" + - Fortaleza, CE (BR), br-ne1 + - Provider: Magalu + - "s3.eu-central-1.s4.mega.io" + - Mega S4 eu-central-1 (Amsterdam) + - Provider: Mega + - "s3.eu-central-2.s4.mega.io" + - Mega S4 eu-central-2 (Bettembourg) + - Provider: Mega + - "s3.ca-central-1.s4.mega.io" + - Mega S4 ca-central-1 (Montreal) + - Provider: Mega + - "s3.ca-west-1.s4.mega.io" + - Mega S4 ca-west-1 (Vancouver) + - Provider: Mega + - "oos.eu-west-2.outscale.com" + - Outscale EU West 2 (Paris) + - Provider: Outscale + - "oos.us-east-2.outscale.com" + - Outscale US east 2 (New Jersey) + - Provider: Outscale + - "oos.us-west-1.outscale.com" + - Outscale EU West 1 (California) + - Provider: Outscale + - "oos.cloudgouv-eu-west-1.outscale.com" + - Outscale SecNumCloud (Paris) + - Provider: Outscale + - "oos.ap-northeast-1.outscale.com" + - Outscale AP Northeast 1 (Japan) + - Provider: Outscale + - "s3.gra.io.cloud.ovh.net" + - OVHcloud Gravelines, France + - Provider: OVHcloud + - "s3.rbx.io.cloud.ovh.net" + - OVHcloud Roubaix, France + - Provider: OVHcloud + - "s3.sbg.io.cloud.ovh.net" + - OVHcloud Strasbourg, France + - Provider: OVHcloud + - "s3.eu-west-par.io.cloud.ovh.net" + - OVHcloud Paris, France (3AZ) + - Provider: OVHcloud + - "s3.de.io.cloud.ovh.net" + - OVHcloud Frankfurt, Germany + - Provider: OVHcloud + - "s3.uk.io.cloud.ovh.net" + - OVHcloud London, United Kingdom + - Provider: OVHcloud + - "s3.waw.io.cloud.ovh.net" + - OVHcloud Warsaw, Poland + - Provider: OVHcloud + - "s3.bhs.io.cloud.ovh.net" + - OVHcloud Beauharnois, Canada + - Provider: OVHcloud + - "s3.ca-east-tor.io.cloud.ovh.net" + - OVHcloud Toronto, Canada + - Provider: OVHcloud + - "s3.sgp.io.cloud.ovh.net" + - OVHcloud Singapore + - Provider: OVHcloud + - "s3.ap-southeast-syd.io.cloud.ovh.net" + - OVHcloud Sydney, Australia + - Provider: OVHcloud + - "s3.ap-south-mum.io.cloud.ovh.net" + - OVHcloud Mumbai, India + - Provider: OVHcloud + - "s3.us-east-va.io.cloud.ovh.us" + - OVHcloud Vint Hill, Virginia, USA + - Provider: OVHcloud + - "s3.us-west-or.io.cloud.ovh.us" + - OVHcloud Hillsboro, Oregon, USA + - Provider: OVHcloud + - "s3.rbx-archive.io.cloud.ovh.net" + - OVHcloud Roubaix, France (Cold Archive) + - Provider: OVHcloud + - "s3.petabox.io" + - US East (N. Virginia) + - Provider: Petabox + - "s3.us-east-1.petabox.io" + - US East (N. 
Virginia) + - Provider: Petabox + - "s3.eu-central-1.petabox.io" + - Europe (Frankfurt) + - Provider: Petabox + - "s3.ap-southeast-1.petabox.io" + - Asia Pacific (Singapore) + - Provider: Petabox + - "s3.me-south-1.petabox.io" + - Middle East (Bahrain) + - Provider: Petabox + - "s3.sa-east-1.petabox.io" + - South America (São Paulo) + - Provider: Petabox + - "s3-cn-east-1.qiniucs.com" + - East China Endpoint 1 + - Provider: Qiniu + - "s3-cn-east-2.qiniucs.com" + - East China Endpoint 2 + - Provider: Qiniu + - "s3-cn-north-1.qiniucs.com" + - North China Endpoint 1 + - Provider: Qiniu + - "s3-cn-south-1.qiniucs.com" + - South China Endpoint 1 + - Provider: Qiniu + - "s3-us-north-1.qiniucs.com" + - North America Endpoint 1 + - Provider: Qiniu + - "s3-ap-southeast-1.qiniucs.com" + - Southeast Asia Endpoint 1 + - Provider: Qiniu + - "s3-ap-northeast-1.qiniucs.com" + - Northeast Asia Endpoint 1 + - Provider: Qiniu + - "s3.us-east-1.rabata.io" + - US East (N. Virginia) + - Provider: Rabata + - "s3.eu-west-1.rabata.io" + - EU West (Ireland) + - Provider: Rabata + - "s3.eu-west-2.rabata.io" + - EU West (London) + - Provider: Rabata + - "s3.rackcorp.com" + - Global (AnyCast) Endpoint + - Provider: RackCorp + - "au.s3.rackcorp.com" + - Australia (Anycast) Endpoint + - Provider: RackCorp + - "au-nsw.s3.rackcorp.com" + - Sydney (Australia) Endpoint + - Provider: RackCorp + - "au-qld.s3.rackcorp.com" + - Brisbane (Australia) Endpoint + - Provider: RackCorp + - "au-vic.s3.rackcorp.com" + - Melbourne (Australia) Endpoint + - Provider: RackCorp + - "au-wa.s3.rackcorp.com" + - Perth (Australia) Endpoint + - Provider: RackCorp + - "ph.s3.rackcorp.com" + - Manila (Philippines) Endpoint + - Provider: RackCorp + - "th.s3.rackcorp.com" + - Bangkok (Thailand) Endpoint + - Provider: RackCorp + - "hk.s3.rackcorp.com" + - HK (Hong Kong) Endpoint + - Provider: RackCorp + - "mn.s3.rackcorp.com" + - Ulaanbaatar (Mongolia) Endpoint + - Provider: RackCorp + - "kg.s3.rackcorp.com" + - Bishkek (Kyrgyzstan) Endpoint + - Provider: RackCorp + - "id.s3.rackcorp.com" + - Jakarta (Indonesia) Endpoint + - Provider: RackCorp + - "jp.s3.rackcorp.com" + - Tokyo (Japan) Endpoint + - Provider: RackCorp + - "sg.s3.rackcorp.com" + - SG (Singapore) Endpoint + - Provider: RackCorp + - "de.s3.rackcorp.com" + - Frankfurt (Germany) Endpoint + - Provider: RackCorp + - "us.s3.rackcorp.com" + - USA (AnyCast) Endpoint + - Provider: RackCorp + - "us-east-1.s3.rackcorp.com" + - New York (USA) Endpoint + - Provider: RackCorp + - "us-west-1.s3.rackcorp.com" + - Freemont (USA) Endpoint + - Provider: RackCorp + - "nz.s3.rackcorp.com" + - Auckland (New Zealand) Endpoint + - Provider: RackCorp + - "s3.nl-ams.scw.cloud" + - Amsterdam Endpoint + - Provider: Scaleway + - "s3.fr-par.scw.cloud" + - Paris Endpoint + - Provider: Scaleway + - "s3.pl-waw.scw.cloud" + - Warsaw Endpoint + - Provider: Scaleway + - "localhost:8333" + - SeaweedFS S3 localhost + - Provider: SeaweedFS + - "s3.ru-1.storage.selcloud.ru" + - Saint Petersburg + - Provider: Selectel,Servercore + - "s3.gis-1.storage.selcloud.ru" + - Moscow + - Provider: Servercore + - "s3.ru-7.storage.selcloud.ru" + - Moscow + - Provider: Servercore + - "s3.uz-2.srvstorage.uz" + - Tashkent, Uzbekistan + - Provider: Servercore + - "s3.kz-1.srvstorage.kz" + - Almaty, Kazakhstan + - Provider: Servercore + - "s3.us-east-2.stackpathstorage.com" + - US East Endpoint + - Provider: StackPath + - "s3.us-west-1.stackpathstorage.com" + - US West Endpoint + - Provider: StackPath + - 
"s3.eu-central-1.stackpathstorage.com" + - EU Endpoint + - Provider: StackPath + - "gateway.storjshare.io" + - Global Hosted Gateway + - Provider: Storj + - "eu-001.s3.synologyc2.net" + - EU Endpoint 1 + - Provider: Synology + - "eu-002.s3.synologyc2.net" + - EU Endpoint 2 + - Provider: Synology + - "us-001.s3.synologyc2.net" + - US Endpoint 1 + - Provider: Synology + - "us-002.s3.synologyc2.net" + - US Endpoint 2 + - Provider: Synology + - "tw-001.s3.synologyc2.net" + - TW Endpoint 1 + - Provider: Synology + - "cos.ap-beijing.myqcloud.com" + - Beijing Region + - Provider: TencentCOS + - "cos.ap-nanjing.myqcloud.com" + - Nanjing Region + - Provider: TencentCOS + - "cos.ap-shanghai.myqcloud.com" + - Shanghai Region + - Provider: TencentCOS + - "cos.ap-guangzhou.myqcloud.com" + - Guangzhou Region + - Provider: TencentCOS + - "cos.ap-chengdu.myqcloud.com" + - Chengdu Region + - Provider: TencentCOS + - "cos.ap-chongqing.myqcloud.com" + - Chongqing Region + - Provider: TencentCOS + - "cos.ap-hongkong.myqcloud.com" + - Hong Kong (China) Region + - Provider: TencentCOS + - "cos.ap-singapore.myqcloud.com" + - Singapore Region + - Provider: TencentCOS + - "cos.ap-mumbai.myqcloud.com" + - Mumbai Region + - Provider: TencentCOS + - "cos.ap-seoul.myqcloud.com" + - Seoul Region + - Provider: TencentCOS + - "cos.ap-bangkok.myqcloud.com" + - Bangkok Region + - Provider: TencentCOS + - "cos.ap-tokyo.myqcloud.com" + - Tokyo Region + - Provider: TencentCOS + - "cos.na-siliconvalley.myqcloud.com" + - Silicon Valley Region + - Provider: TencentCOS + - "cos.na-ashburn.myqcloud.com" + - Virginia Region + - Provider: TencentCOS + - "cos.na-toronto.myqcloud.com" + - Toronto Region + - Provider: TencentCOS + - "cos.eu-frankfurt.myqcloud.com" + - Frankfurt Region + - Provider: TencentCOS + - "cos.eu-moscow.myqcloud.com" + - Moscow Region + - Provider: TencentCOS + - "cos.accelerate.myqcloud.com" + - Use Tencent COS Accelerate Endpoint + - Provider: TencentCOS + - "s3.wasabisys.com" + - Wasabi US East 1 (N. Virginia) + - Provider: Wasabi + - "s3.us-east-2.wasabisys.com" + - Wasabi US East 2 (N. Virginia) + - Provider: Wasabi + - "s3.us-central-1.wasabisys.com" + - Wasabi US Central 1 (Texas) + - Provider: Wasabi + - "s3.us-west-1.wasabisys.com" + - Wasabi US West 1 (Oregon) + - Provider: Wasabi + - "s3.ca-central-1.wasabisys.com" + - Wasabi CA Central 1 (Toronto) + - Provider: Wasabi + - "s3.eu-central-1.wasabisys.com" + - Wasabi EU Central 1 (Amsterdam) + - Provider: Wasabi + - "s3.eu-central-2.wasabisys.com" + - Wasabi EU Central 2 (Frankfurt) + - Provider: Wasabi + - "s3.eu-west-1.wasabisys.com" + - Wasabi EU West 1 (London) + - Provider: Wasabi + - "s3.eu-west-2.wasabisys.com" + - Wasabi EU West 2 (Paris) + - Provider: Wasabi + - "s3.eu-south-1.wasabisys.com" + - Wasabi EU South 1 (Milan) + - Provider: Wasabi + - "s3.ap-northeast-1.wasabisys.com" + - Wasabi AP Northeast 1 (Tokyo) endpoint + - Provider: Wasabi + - "s3.ap-northeast-2.wasabisys.com" + - Wasabi AP Northeast 2 (Osaka) endpoint + - Provider: Wasabi + - "s3.ap-southeast-1.wasabisys.com" + - Wasabi AP Southeast 1 (Singapore) + - Provider: Wasabi + - "s3.ap-southeast-2.wasabisys.com" + - Wasabi AP Southeast 2 (Sydney) + - Provider: Wasabi + - "idr01.zata.ai" + - South Asia Endpoint + - Provider: Zata --s3-location-constraint Location constraint - must be set to match the Region. -Used when creating buckets only. +Leave blank if not sure. Used when creating buckets only. 
Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT -- Provider: AWS +- Provider: + AWS,ArvanCloud,Ceph,ChinaMobile,DigitalOcean,Dreamhost,Exaba,GCS,Hetzner,IBMCOS,LyveCloud,Minio,Netease,Qiniu,Rabata,RackCorp,SeaweedFS,Synology,Wasabi,Zata,Other - Type: string - Required: false - Examples: - "" - Empty for US Region, Northern Virginia, or Pacific Northwest + - Provider: AWS - "us-east-2" - US East (Ohio) Region + - Provider: AWS - "us-west-1" - US West (Northern California) Region + - Provider: AWS - "us-west-2" - US West (Oregon) Region + - Provider: AWS - "ca-central-1" - Canada (Central) Region + - Provider: AWS - "eu-west-1" - EU (Ireland) Region + - Provider: AWS - "eu-west-2" - EU (London) Region + - Provider: AWS - "eu-west-3" - EU (Paris) Region + - Provider: AWS - "eu-north-1" - EU (Stockholm) Region + - Provider: AWS - "eu-south-1" - EU (Milan) Region + - Provider: AWS - "EU" - EU Region + - Provider: AWS - "ap-southeast-1" - Asia Pacific (Singapore) Region + - Provider: AWS - "ap-southeast-2" - Asia Pacific (Sydney) Region + - Provider: AWS - "ap-northeast-1" - Asia Pacific (Tokyo) Region + - Provider: AWS - "ap-northeast-2" - Asia Pacific (Seoul) Region + - Provider: AWS - "ap-northeast-3" - Asia Pacific (Osaka-Local) Region + - Provider: AWS - "ap-south-1" - Asia Pacific (Mumbai) Region + - Provider: AWS - "ap-east-1" - Asia Pacific (Hong Kong) Region + - Provider: AWS - "sa-east-1" - South America (Sao Paulo) Region + - Provider: AWS - "il-central-1" - Israel (Tel Aviv) Region + - Provider: AWS - "me-south-1" - Middle East (Bahrain) Region + - Provider: AWS - "af-south-1" - Africa (Cape Town) Region + - Provider: AWS - "cn-north-1" - China (Beijing) Region + - Provider: AWS - "cn-northwest-1" - China (Ningxia) Region + - Provider: AWS - "us-gov-east-1" - AWS GovCloud (US-East) Region + - Provider: AWS - "us-gov-west-1" - AWS GovCloud (US) Region + - Provider: AWS + - "ir-thr-at1" + - Tehran Iran (Simin) + - Provider: ArvanCloud + - "ir-tbz-sh1" + - Tabriz Iran (Shahriar) + - Provider: ArvanCloud + - "wuxi1" + - East China (Suzhou) + - Provider: ChinaMobile + - "jinan1" + - East China (Jinan) + - Provider: ChinaMobile + - "ningbo1" + - East China (Hangzhou) + - Provider: ChinaMobile + - "shanghai1" + - East China (Shanghai-1) + - Provider: ChinaMobile + - "zhengzhou1" + - Central China (Zhengzhou) + - Provider: ChinaMobile + - "hunan1" + - Central China (Changsha-1) + - Provider: ChinaMobile + - "zhuzhou1" + - Central China (Changsha-2) + - Provider: ChinaMobile + - "guangzhou1" + - South China (Guangzhou-2) + - Provider: ChinaMobile + - "dongguan1" + - South China (Guangzhou-3) + - Provider: ChinaMobile + - "beijing1" + - North China (Beijing-1) + - Provider: ChinaMobile + - "beijing2" + - North China (Beijing-2) + - Provider: ChinaMobile + - "beijing4" + - North China (Beijing-3) + - Provider: ChinaMobile + - "huhehaote1" + - North China (Huhehaote) + - Provider: ChinaMobile + - "chengdu1" + - Southwest China (Chengdu) + - Provider: ChinaMobile + - "chongqing1" + - Southwest China (Chongqing) + - Provider: ChinaMobile + - "guiyang1" + - Southwest China (Guiyang) + - Provider: ChinaMobile + - "xian1" + - Northwest China (Xian) + - Provider: ChinaMobile + - "yunnan" + - Yunnan China (Kunming) + - Provider: ChinaMobile + - "yunnan2" + - Yunnan China (Kunming-2) + - Provider: ChinaMobile + - "tianjin1" + - Tianjin China (Tianjin) + - Provider: ChinaMobile + - "jilin1" + - Jilin China (Changchun) + - Provider: ChinaMobile + - "hubei1" + - Hubei 
China (Xiangyan) + - Provider: ChinaMobile + - "jiangxi1" + - Jiangxi China (Nanchang) + - Provider: ChinaMobile + - "gansu1" + - Gansu China (Lanzhou) + - Provider: ChinaMobile + - "shanxi1" + - Shanxi China (Taiyuan) + - Provider: ChinaMobile + - "liaoning1" + - Liaoning China (Shenyang) + - Provider: ChinaMobile + - "hebei1" + - Hebei China (Shijiazhuang) + - Provider: ChinaMobile + - "fujian1" + - Fujian China (Xiamen) + - Provider: ChinaMobile + - "guangxi1" + - Guangxi China (Nanning) + - Provider: ChinaMobile + - "anhui1" + - Anhui China (Huainan) + - Provider: ChinaMobile + - "us-standard" + - US Cross Region Standard + - Provider: IBMCOS + - "us-vault" + - US Cross Region Vault + - Provider: IBMCOS + - "us-cold" + - US Cross Region Cold + - Provider: IBMCOS + - "us-flex" + - US Cross Region Flex + - Provider: IBMCOS + - "us-east-standard" + - US East Region Standard + - Provider: IBMCOS + - "us-east-vault" + - US East Region Vault + - Provider: IBMCOS + - "us-east-cold" + - US East Region Cold + - Provider: IBMCOS + - "us-east-flex" + - US East Region Flex + - Provider: IBMCOS + - "us-south-standard" + - US South Region Standard + - Provider: IBMCOS + - "us-south-vault" + - US South Region Vault + - Provider: IBMCOS + - "us-south-cold" + - US South Region Cold + - Provider: IBMCOS + - "us-south-flex" + - US South Region Flex + - Provider: IBMCOS + - "eu-standard" + - EU Cross Region Standard + - Provider: IBMCOS + - "eu-vault" + - EU Cross Region Vault + - Provider: IBMCOS + - "eu-cold" + - EU Cross Region Cold + - Provider: IBMCOS + - "eu-flex" + - EU Cross Region Flex + - Provider: IBMCOS + - "eu-gb-standard" + - Great Britain Standard + - Provider: IBMCOS + - "eu-gb-vault" + - Great Britain Vault + - Provider: IBMCOS + - "eu-gb-cold" + - Great Britain Cold + - Provider: IBMCOS + - "eu-gb-flex" + - Great Britain Flex + - Provider: IBMCOS + - "ap-standard" + - APAC Standard + - Provider: IBMCOS + - "ap-vault" + - APAC Vault + - Provider: IBMCOS + - "ap-cold" + - APAC Cold + - Provider: IBMCOS + - "ap-flex" + - APAC Flex + - Provider: IBMCOS + - "mel01-standard" + - Melbourne Standard + - Provider: IBMCOS + - "mel01-vault" + - Melbourne Vault + - Provider: IBMCOS + - "mel01-cold" + - Melbourne Cold + - Provider: IBMCOS + - "mel01-flex" + - Melbourne Flex + - Provider: IBMCOS + - "tor01-standard" + - Toronto Standard + - Provider: IBMCOS + - "tor01-vault" + - Toronto Vault + - Provider: IBMCOS + - "tor01-cold" + - Toronto Cold + - Provider: IBMCOS + - "tor01-flex" + - Toronto Flex + - Provider: IBMCOS + - "cn-east-1" + - East China Region 1 + - Provider: Qiniu + - "cn-east-2" + - East China Region 2 + - Provider: Qiniu + - "cn-north-1" + - North China Region 1 + - Provider: Qiniu + - "cn-south-1" + - South China Region 1 + - Provider: Qiniu + - "us-north-1" + - North America Region 1 + - Provider: Qiniu + - "ap-southeast-1" + - Southeast Asia Region 1 + - Provider: Qiniu + - "ap-northeast-1" + - Northeast Asia Region 1 + - Provider: Qiniu + - "us-east-1" + - US East (N. 
Virginia) + - Provider: Rabata + - "eu-west-1" + - EU (Ireland) + - Provider: Rabata + - "eu-west-2" + - EU (London) + - Provider: Rabata + - "global" + - Global CDN Region + - Provider: RackCorp + - "au" + - Australia (All locations) + - Provider: RackCorp + - "au-nsw" + - NSW (Australia) Region + - Provider: RackCorp + - "au-qld" + - QLD (Australia) Region + - Provider: RackCorp + - "au-vic" + - VIC (Australia) Region + - Provider: RackCorp + - "au-wa" + - Perth (Australia) Region + - Provider: RackCorp + - "ph" + - Manila (Philippines) Region + - Provider: RackCorp + - "th" + - Bangkok (Thailand) Region + - Provider: RackCorp + - "hk" + - HK (Hong Kong) Region + - Provider: RackCorp + - "mn" + - Ulaanbaatar (Mongolia) Region + - Provider: RackCorp + - "kg" + - Bishkek (Kyrgyzstan) Region + - Provider: RackCorp + - "id" + - Jakarta (Indonesia) Region + - Provider: RackCorp + - "jp" + - Tokyo (Japan) Region + - Provider: RackCorp + - "sg" + - SG (Singapore) Region + - Provider: RackCorp + - "de" + - Frankfurt (Germany) Region + - Provider: RackCorp + - "us" + - USA (AnyCast) Region + - Provider: RackCorp + - "us-east-1" + - New York (USA) Region + - Provider: RackCorp + - "us-west-1" + - Fremont (USA) Region + - Provider: RackCorp + - "nz" + - Auckland (New Zealand) Region + - Provider: RackCorp --s3-acl @@ -27693,57 +29849,75 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega +- Provider: + AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "default" - - Owner gets Full_CONTROL. - - No one else has access rights (default). - "private" - Owner gets FULL_CONTROL. - No one else has access rights (default). + - Provider: + AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other - "public-read" - Owner gets FULL_CONTROL. - The AllUsers group gets READ access. + - Provider: + AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - "public-read-write" - Owner gets FULL_CONTROL. - The AllUsers group gets READ and WRITE access. - Granting this on a bucket is generally not recommended. + - Provider: + AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - "authenticated-read" - Owner gets FULL_CONTROL. - The AuthenticatedUsers group gets READ access. + - Provider: + AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - "bucket-owner-read" - Object owner gets FULL_CONTROL. 
- Bucket owner gets READ access. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + - Provider: + AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - "bucket-owner-full-control" - Both the object owner and the bucket owner get FULL_CONTROL over the object. - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + - Provider: + AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - "private" - Owner gets FULL_CONTROL. - No one else has access rights (default). - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS. + - Provider: IBMCOS - "public-read" - Owner gets FULL_CONTROL. - The AllUsers group gets READ access. - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS. + - Provider: IBMCOS - "public-read-write" - Owner gets FULL_CONTROL. - The AllUsers group gets READ and WRITE access. - This acl is available on IBM Cloud (Infra), On-Premise IBM COS. + - Provider: IBMCOS - "authenticated-read" - Owner gets FULL_CONTROL. - The AuthenticatedUsers group gets READ access. - Not supported on Buckets. - This acl is available on IBM Cloud (Infra) and On-Premise IBM COS. + - Provider: IBMCOS + - "default" + - Owner gets Full_CONTROL. + - No one else has access rights (default). + - Provider: TencentCOS --s3-server-side-encryption @@ -27760,10 +29934,13 @@ Properties: - Examples: - "" - None + - Provider: AWS,Ceph,ChinaMobile,Minio - "AES256" - AES256 + - Provider: AWS,Ceph,ChinaMobile,Minio - "aws:kms" - aws:kms + - Provider: AWS,Ceph,Minio --s3-sse-kms-key-id @@ -27790,28 +29967,74 @@ Properties: - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS -- Provider: AWS +- Provider: + AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,Scaleway,TencentCOS - Type: string - Required: false - Examples: - "" - Default + - Provider: AWS,Alibaba,ChinaMobile,TencentCOS - "STANDARD" - Standard storage class + - Provider: + AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,TencentCOS - "REDUCED_REDUNDANCY" - Reduced redundancy storage class + - Provider: AWS - "STANDARD_IA" - Standard Infrequent Access storage class + - Provider: AWS - "ONEZONE_IA" - One Zone Infrequent Access storage class + - Provider: AWS - "GLACIER" - Glacier Flexible Retrieval storage class + - Provider: AWS - "DEEP_ARCHIVE" - Glacier Deep Archive storage class + - Provider: AWS - "INTELLIGENT_TIERING" - Intelligent-Tiering storage class + - Provider: AWS - "GLACIER_IR" - Glacier Instant Retrieval storage class + - Provider: AWS,Magalu + - "GLACIER" + - Archive storage mode + - Provider: Alibaba,ChinaMobile,Qiniu + - "STANDARD_IA" + - Infrequent access storage mode + - Provider: Alibaba,ChinaMobile,TencentCOS + - "LINE" + - Infrequent access storage mode + - Provider: Qiniu + - "DEEP_ARCHIVE" + - Deep archive storage mode + - Provider: Qiniu + - "" + - Default. + - Provider: Scaleway + - "STANDARD" + - The Standard class for any upload. + - Suitable for on-demand content like streaming or CDN. + - Available in all regions. + - Provider: Scaleway + - "GLACIER" + - Archived storage. 
+ - Prices are lower, but it needs to be restored first to be + accessed. + - Available in FR-PAR and NL-AMS regions. + - Provider: Scaleway + - "ONEZONE_IA" + - One Zone - Infrequent Access. + - A good choice for storing secondary backup copies or easily + re-creatable data. + - Available in the FR-PAR region only. + - Provider: Scaleway + - "ARCHIVE" + - Archive storage mode + - Provider: TencentCOS --s3-ibm-api-key @@ -27841,11 +30064,12 @@ Advanced options Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, -Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, -IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, -Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, -SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, -Qiniu, Zata and others). +Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, +GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, +Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, +OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, +Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, +TencentCOS, Wasabi, Zata, Other). --s3-bucket-acl @@ -27864,7 +30088,8 @@ Properties: - Config: bucket_acl - Env Var: RCLONE_S3_BUCKET_ACL -- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade +- Provider: + AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false - Examples: @@ -28486,6 +30711,19 @@ Properties: - Type: bool - Default: false +--s3-use-data-integrity-protections + +If true use AWS S3 data integrity protections. + +See AWS Docs on Data Integrity Protections + +Properties: + +- Config: use_data_integrity_protections +- Env Var: RCLONE_S3_USE_DATA_INTEGRITY_PROTECTIONS +- Type: Tristate +- Default: unset + --s3-versions Include old versions in directory listings. @@ -28831,7 +31069,7 @@ Backend commands Here are the commands specific to the s3 backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -28845,7 +31083,7 @@ backend/command. restore -Restore objects from GLACIER or INTELLIGENT-TIERING archive tier +Restore objects from GLACIER or INTELLIGENT-TIERING archive tier. rclone backend restore remote: [options] [+] @@ -28853,7 +31091,7 @@ This command can be used to restore one or more objects from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier. -Usage Examples: +Usage examples: rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS @@ -28861,11 +31099,11 @@ Usage Examples: rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY This flag also obeys the filters. Test first with --interactive/-i or ---dry-run flags +--dry-run flags. 
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 -All the objects shown will be marked for restore, then +All the objects shown will be marked for restore, then: rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 @@ -28887,13 +31125,13 @@ Options: - "description": The optional description for the job. - "lifetime": Lifetime of the active copy in days, ignored for - INTELLIGENT-TIERING storage + INTELLIGENT-TIERING storage. - "priority": Priority of restore: Standard|Expedited|Bulk restore-status -Show the restore status for objects being restored from GLACIER or -INTELLIGENT-TIERING storage +Show the status for objects being restored from GLACIER or +INTELLIGENT-TIERING. rclone backend restore-status remote: [options] [+] @@ -28901,7 +31139,7 @@ This command can be used to show the status for objects being restored from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier. -Usage Examples: +Usage examples: rclone backend restore-status s3:bucket/path/to/object rclone backend restore-status s3:bucket/path/to/directory @@ -28909,7 +31147,7 @@ Usage Examples: This command does not obey the filters. -It returns a list of status dictionaries. +It returns a list of status dictionaries: [ { @@ -28943,17 +31181,19 @@ It returns a list of status dictionaries. Options: -- "all": if set then show all objects, not just ones with restore - status +- "all": If set then show all objects, not just ones with restore + status. list-multipart-uploads -List the unfinished multipart uploads +List the unfinished multipart uploads. rclone backend list-multipart-uploads remote: [options] [+] This command lists the unfinished multipart uploads in JSON format. +Usage examples: + rclone backend list-multipart s3:bucket/path/to/object It returns a dictionary of buckets with values as lists of unfinished @@ -28963,24 +31203,24 @@ You can call it with no bucket in which case it lists all bucket, with a bucket or with a bucket and path. { - "rclone": [ - { - "Initiated": "2020-06-26T14:20:36Z", - "Initiator": { - "DisplayName": "XXX", - "ID": "arn:aws:iam::XXX:user/XXX" - }, - "Key": "KEY", - "Owner": { - "DisplayName": null, - "ID": "XXX" - }, - "StorageClass": "STANDARD", - "UploadId": "XXX" - } - ], - "rclone-1000files": [], - "rclone-dst": [] + "rclone": [ + { + "Initiated": "2020-06-26T14:20:36Z", + "Initiator": { + "DisplayName": "XXX", + "ID": "arn:aws:iam::XXX:user/XXX" + }, + "Key": "KEY", + "Owner": { + "DisplayName": null, + "ID": "XXX" + }, + "StorageClass": "STANDARD", + "UploadId": "XXX" + } + ], + "rclone-1000files": [], + "rclone-dst": [] } cleanup @@ -28995,6 +31235,8 @@ max-age which defaults to 24 hours. Note that you can use --interactive/-i or --dry-run with this command to see what it would do. +Usage examples: + rclone backend cleanup s3:bucket/path/to/object rclone backend cleanup -o max-age=7w s3:bucket/path/to/object @@ -29002,7 +31244,7 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. cleanup-hidden @@ -29016,6 +31258,8 @@ enabled bucket. Note that you can use --interactive/-i or --dry-run with this command to see what it would do. +Usage example: + rclone backend cleanup-hidden s3:bucket/path/to/dir versioning @@ -29027,6 +31271,8 @@ Set/get versioning support for a bucket. 
This command sets versioning support if a parameter is passed and then returns the current versioning status for the bucket supplied. +Usage examples: + rclone backend versioning s3:bucket # read status only rclone backend versioning s3:bucket Enabled rclone backend versioning s3:bucket Suspended @@ -29044,7 +31290,7 @@ Set command for updating the config parameters. This set command can be used to update the config parameters for a running s3 backend. -Usage Examples: +Usage examples: rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] @@ -29133,7 +31379,7 @@ configuration. First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -29243,7 +31489,7 @@ Object Storage service. ArvanCloud provides an S3 interface which can be configured for use with rclone like this. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -29398,7 +31644,7 @@ Storage (EOS) configuration. First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -29653,7 +31899,7 @@ Note that all buckets are private, and all are stored in the same "auto" region. It is necessary to use Cloudflare workers to share the content of a bucket publicly. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -29743,13 +31989,54 @@ does. If this is causing a problem then upload the files with A consequence of this is that Content-Encoding: gzip will never appear in the metadata on Cloudflare. +Cubbit DS3 + +Cubbit Object Storage is a geo-distributed cloud object storage +platform. + +To connect to Cubbit DS3 you will need an access key and secret key +pair. You can follow this guide to retrieve these keys. They will be +needed when prompted by rclone config. + +Default region will correspond to eu-west-1 and the endpoint has to be +specified as s3.cubbit.eu. + +Going through the whole process of creating a new remote by running +rclone config, each prompt should be answered as shown below: + + name> cubbit-ds3 (or any name you like) + Storage> s3 + provider> Cubbit + env_auth> false + access_key_id> YOUR_ACCESS_KEY + secret_access_key> YOUR_SECRET_KEY + region> eu-west-1 (or leave empty) + endpoint> s3.cubbit.eu + acl> + +The resulting configuration file should look like: + + [cubbit-ds3] + type = s3 + provider = Cubbit + access_key_id = ACCESS_KEY + secret_access_key = SECRET_KEY + region = eu-west-1 + endpoint = s3.cubbit.eu + +You can then start using Cubbit DS3 with rclone. For example, to create +a new bucket and copy files into it, you can run: + + rclone mkdir cubbit-ds3:my-bucket + rclone copy /path/to/files cubbit-ds3:my-bucket + DigitalOcean Spaces Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean. To connect to DigitalOcean Spaces you will need an access key and secret -key. These can be retrieved on the "Applications & API" page of the +key. These can be retrieved on the Applications & API page of the DigitalOcean control panel. They will be needed when prompted by rclone config for your access_key_id and secret_access_key. 
@@ -29831,7 +32118,7 @@ You can also join the exaba support slack if you need more help. An rclone config walkthrough might look like this but details may vary depending exactly on how you have set up the container. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -29905,6 +32192,145 @@ directory paging. Rclone will return the error: This is Google bug #312292516. +Hetzner Object Storage + +Here is an example of making a Hetzner Object Storage configuration. +First run: + + rclone config + +This will guide you through an interactive setup process. + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + + Enter name for new remote. + name> my-hetzner + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others + \ (s3) + [snip] + Storage> s3 + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + [snip] + XX / Hetzner Object Storage + \ (Hetzner) + [snip] + provider> Hetzner + Option env_auth. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + access_key_id> ACCESS_KEY + Option secret_access_key. + AWS Secret Access Key (password). + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + secret_access_key> SECRET_KEY + Option region. + Region to connect to. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / Helsinki + \ (hel1) + 2 / Falkenstein + \ (fsn1) + 3 / Nuremberg + \ (nbg1) + region> + Option endpoint. + Endpoint for Hetzner Object Storage + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / Helsinki + \ (hel1.your-objectstorage.com) + 2 / Falkenstein + \ (fsn1.your-objectstorage.com) + 3 / Nuremberg + \ (nbg1.your-objectstorage.com) + endpoint> + Option location_constraint. + Location constraint - must be set to match the Region. + Leave blank if not sure. Used when creating buckets only. + Enter a value. Press Enter to leave empty. + location_constraint> + Option acl. + Canned ACL used when creating buckets and storing or copying objects. + This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. 
+ If the acl is an empty string then no X-Amz-Acl: header is added and + the default (private) will be used. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + acl> + Edit advanced config? + y) Yes + n) No (default) + y/n> + Configuration complete. + Options: + - type: s3 + - provider: Hetzner + - access_key_id: ACCESS_KEY + - secret_access_key: SECRET_KEY + Keep this "my-hetzner" remote? + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> + Current remotes: + + Name Type + ==== ==== + my-hetzner s3 + + e) Edit existing remote + n) New remote + d) Delete remote + r) Rename remote + c) Copy remote + s) Set configuration password + q) Quit config + e/n/d/r/c/s/q> + +This will leave the config file looking like this. + + [my-hetzner] + type = s3 + provider = Hetzner + access_key_id = ACCESS_KEY + secret_access_key = SECRET_KEY + region = hel1 + endpoint = hel1.your-objectstorage.com + acl = private + Huawei OBS Object Storage Service (OBS) provides stable, secure, efficient, and @@ -29925,7 +32351,7 @@ configuration and add it to your rclone configuration file. Or you can also configure via the interactive command line: - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30041,7 +32467,7 @@ dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: -(http://www.ibm.com/cloud/object-storage) +http://www.ibm.com/cloud/object-storage To configure access to IBM COS S3, follow the steps below: @@ -30060,28 +32486,28 @@ To configure access to IBM COS S3, follow the steps below: 3. Select "s3" storage. - Choose a number from below, or type in your own value - [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ "s3" - [snip] - Storage> s3 + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" + [snip] + Storage> s3 4. Select IBM COS as the S3 Storage Provider. - Choose the S3 provider. - Choose a number from below, or type in your own value - 1 / Choose this option to configure Storage to AWS S3 - \ "AWS" - 2 / Choose this option to configure Storage to Ceph Systems - \ "Ceph" - 3 / Choose this option to configure Storage to Dreamhost - \ "Dreamhost" - 4 / Choose this option to the configure Storage to IBM COS S3 - \ "IBMCOS" - 5 / Choose this option to the configure Storage to Minio - \ "Minio" - Provider>4 + Choose the S3 provider. + Choose a number from below, or type in your own value + 1 / Choose this option to configure Storage to AWS S3 + \ "AWS" + 2 / Choose this option to configure Storage to Ceph Systems + \ "Ceph" + 3 / Choose this option to configure Storage to Dreamhost + \ "Dreamhost" + 4 / Choose this option to the configure Storage to IBM COS S3 + \ "IBMCOS" + 5 / Choose this option to the configure Storage to Minio + \ "Minio" + Provider>4 5. Enter the Access Key and Secret. 
@@ -30117,7 +32543,7 @@ To configure access to IBM COS S3, follow the steps below: 10 / US Region East Private Endpoint \ "s3.us-east.objectstorage.service.networklayer.com" 11 / US Region South Endpoint - [snip] + [snip] 34 / Toronto Single Site Private Endpoint \ "s3.tor01.objectstorage.service.networklayer.com" endpoint>1 @@ -30146,27 +32572,27 @@ To configure access to IBM COS S3, follow the steps below: \ "us-south-standard" 10 / US South Region Vault \ "us-south-vault" - [snip] + [snip] 32 / Toronto Flex \ "tor01-flex" - location_constraint>1 + location_constraint>1 8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs. - Canned ACL used when creating buckets and/or storing objects in S3. - For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl - Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS - \ "private" - 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS - \ "public-read" - 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS - \ "public-read-write" - 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS - \ "authenticated-read" - acl> 1 + Canned ACL used when creating buckets and/or storing objects in S3. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS + \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS + \ "public-read" + 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS + \ "public-read-write" + 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS + \ "authenticated-read" + acl> 1 9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this @@ -30182,49 +32608,54 @@ To configure access to IBM COS S3, follow the steps below: 10. Execute rclone commands - 1) Create a bucket. - rclone mkdir IBM-COS-XREGION:newbucket - 2) List available buckets. - rclone lsd IBM-COS-XREGION: - -1 2017-11-08 21:16:22 -1 test - -1 2018-02-14 20:16:39 -1 newbucket - 3) List contents of a bucket. - rclone ls IBM-COS-XREGION:newbucket - 18685952 test.exe - 4) Copy a file from local to remote. - rclone copy /Users/file.txt IBM-COS-XREGION:newbucket - 5) Copy a file from remote to local. - rclone copy IBM-COS-XREGION:newbucket/file.txt . - 6) Delete a file on remote. - rclone delete IBM-COS-XREGION:newbucket/file.txt + 1) Create a bucket. + rclone mkdir IBM-COS-XREGION:newbucket + 2) List available buckets. 
+ rclone lsd IBM-COS-XREGION: + -1 2017-11-08 21:16:22 -1 test + -1 2018-02-14 20:16:39 -1 newbucket + 3) List contents of a bucket. + rclone ls IBM-COS-XREGION:newbucket + 18685952 test.exe + 4) Copy a file from local to remote. + rclone copy /Users/file.txt IBM-COS-XREGION:newbucket + 5) Copy a file from remote to local. + rclone copy IBM-COS-XREGION:newbucket/file.txt . + 6) Delete a file on remote. + rclone delete IBM-COS-XREGION:newbucket/file.txt IBM IAM authentication If using IBM IAM authentication with IBM API KEY you need to fill in -these additional parameters 1. Select false for env_auth 2. Leave -access_key_id and secret_access_key blank 3. Paste your ibm_api_key +these additional parameters - Option ibm_api_key. - IBM API Key to be used to obtain IAM token - Enter a value of type string. Press Enter for the default (1). - ibm_api_key> +1. Select false for env_auth + +2. Leave access_key_id and secret_access_key blank + +3. Paste your ibm_api_key + + Option ibm_api_key. + IBM API Key to be used to obtain IAM token + Enter a value of type string. Press Enter for the default (1). + ibm_api_key> 4. Paste your ibm_resource_instance_id - Option ibm_resource_instance_id. - IBM service instance id - Enter a value of type string. Press Enter for the default (2). - ibm_resource_instance_id> + Option ibm_resource_instance_id. + IBM service instance id + Enter a value of type string. Press Enter for the default (2). + ibm_resource_instance_id> 5. In advanced settings type true for v2_auth - Option v2_auth. - If true use v2 authentication. - If this is false (the default) then rclone will use v4 authentication. - If it is set then rclone will use v2 authentication. - Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. - Enter a boolean value (true or false). Press Enter for the default (true). - v2_auth> + Option v2_auth. + If true use v2 authentication. + If this is false (the default) then rclone will use v4 authentication. + If it is set then rclone will use v2 authentication. + Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. + Enter a boolean value (true or false). Press Enter for the default (true). + v2_auth> IDrive e2 @@ -30234,7 +32665,7 @@ Here is an example of making an IDrive e2 configuration. First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30333,6 +32764,131 @@ This will guide you through an interactive setup process. d) Delete this remote y/e/d> y +Intercolo Object Storage + +Intercolo Object Storage offers GDPR-compliant, transparently priced, +S3-compatible cloud storage hosted in Frankfurt, Germany. + +Here's an example of making a configuration for Intercolo. + +First run: + + rclone config + +This will guide you through an interactive setup process. + + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + + Enter name for new remote. + name> intercolo + + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. + [snip] + xx / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + [snip] + Storage> s3 + + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + [snip] + xx / Intercolo Object Storage + \ (Intercolo) + [snip] + provider> Intercolo + + Option env_auth. 
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> false + + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + access_key_id> ACCESS_KEY + + Option secret_access_key. + AWS Secret Access Key (password). + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + secret_access_key> SECRET_KEY + + Option region. + Region where your bucket will be created and your data stored. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / Frankfurt, Germany + \ (de-fra) + region> 1 + + Option endpoint. + Endpoint for Intercolo Object Storage. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / Frankfurt, Germany + \ (de-fra.i3storage.com) + endpoint> 1 + + Option acl. + Canned ACL used when creating buckets and storing or copying objects. + This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + If the acl is an empty string then no X-Amz-Acl: header is added and + the default (private) will be used. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + [snip] + acl> + + Edit advanced config? + y) Yes + n) No (default) + y/n> n + + Configuration complete. + Options: + - type: s3 + - provider: Intercolo + - access_key_id: ACCESS_KEY + - secret_access_key: SECRET_KEY + - region: de-fra + - endpoint: de-fra.i3storage.com + Keep this "intercolo" remote? + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + +This will leave the config file looking like this. + + [intercolo] + type = s3 + provider = Intercolo + access_key_id = ACCESS_KEY + secret_access_key = SECRET_KEY + region = de-fra + endpoint = de-fra.i3storage.com + IONOS Cloud IONOS S3 Object Storage is a service offered by IONOS for storing and @@ -30385,7 +32941,7 @@ Enter AWS credentials in the next step: env_auth> Enter your Access Key and Secret key. These can be retrieved in the Data -Center Designer, click on the menu “Manager resources” / "Object Storage +Center Designer, click on the menu "Manager resources" / "Object Storage Key Manager". Option access_key_id. @@ -30475,23 +33031,23 @@ rclone). 
1) Create a bucket (the name must be unique within the whole IONOS S3) - rclone mkdir ionos-fra:my-bucket + rclone mkdir ionos-fra:my-bucket 2) List available buckets - rclone lsd ionos-fra: + rclone lsd ionos-fra: -4) Copy a file from local to remote +3) Copy a file from local to remote - rclone copy /Users/file.txt ionos-fra:my-bucket + rclone copy /Users/file.txt ionos-fra:my-bucket -3) List contents of a bucket +4) List contents of a bucket - rclone ls ionos-fra:my-bucket + rclone ls ionos-fra:my-bucket 5) Copy a file from remote to local - rclone copy ionos-fra:my-bucket/file.txt + rclone copy ionos-fra:my-bucket/file.txt Leviia Cloud Object Storage @@ -30502,102 +33058,102 @@ To configure access to Leviia, follow the steps below: 1. Run rclone config and select n for a new remote. - rclone config - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n 2. Give the name of the configuration. For example, name it 'leviia'. - name> leviia + name> leviia 3. Select s3 storage. - Choose a number from below, or type in your own value - [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ (s3) - [snip] - Storage> s3 + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + [snip] + Storage> s3 4. Select Leviia provider. - Choose a number from below, or type in your own value - 1 / Amazon Web Services (AWS) S3 - \ "AWS" - [snip] - 15 / Leviia Object Storage - \ (Leviia) - [snip] - provider> Leviia + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 15 / Leviia Object Storage + \ (Leviia) + [snip] + provider> Leviia 5. Enter your SecretId and SecretKey of Leviia. - Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - Only applies if access_key_id and secret_access_key is blank. - Enter a boolean value (true or false). Press Enter for the default ("false"). - Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" - env_auth> 1 - AWS Access Key ID. - Leave blank for anonymous access or runtime credentials. - Enter a string value. Press Enter for the default (""). - access_key_id> ZnIx.xxxxxxxxxxxxxxx - AWS Secret Access Key (password) - Leave blank for anonymous access or runtime credentials. - Enter a string value. Press Enter for the default (""). - secret_access_key> xxxxxxxxxxx + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> ZnIx.xxxxxxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx 6. 
Select endpoint for Leviia. - / The default endpoint - 1 | Leviia. - \ (s3.leviia.com) - [snip] - endpoint> 1 + / The default endpoint + 1 | Leviia. + \ (s3.leviia.com) + [snip] + endpoint> 1 7. Choose acl. - Note that this ACL is applied when server-side copying objects as S3 - doesn't copy the ACL from the source but rather writes a fresh one. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) - [snip] - acl> 1 - Edit advanced config? (y/n) - y) Yes - n) No (default) - y/n> n - Remote config - -------------------- - [leviia] - - type: s3 - - provider: Leviia - - access_key_id: ZnIx.xxxxxxx - - secret_access_key: xxxxxxxx - - endpoint: s3.leviia.com - - acl: private - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y - Current remotes: + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + [snip] + acl> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [leviia] + - type: s3 + - provider: Leviia + - access_key_id: ZnIx.xxxxxxx + - secret_access_key: xxxxxxxx + - endpoint: s3.leviia.com + - acl: private + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: - Name Type - ==== ==== - leviia s3 + Name Type + ==== ==== + leviia s3 Liara @@ -30608,7 +33164,7 @@ run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -30705,7 +33261,7 @@ First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30856,7 +33412,7 @@ First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -30972,7 +33528,7 @@ Here is an example of making a configuration. First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -31154,7 +33710,7 @@ rclone configuration file: You can also run rclone config to go through the interactive setup process: - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -31282,7 +33838,7 @@ how to interact with the platform, take a look at the documentation. Here is an example of making an OVHcloud Object Storage configuration with rclone config: - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -31476,7 +34032,7 @@ Here is an example of making a Petabox configuration. 
First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -31746,6 +34302,302 @@ To configure access to Qiniu Kodo, follow the steps below: 1. Run rclone config and select n for a new remote. + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + +2. Give the name of the configuration. For example, name it 'qiniu'. + + name> qiniu + +3. Select s3 storage. + + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + [snip] + Storage> s3 + +4. Select Qiniu provider. + + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 22 / Qiniu Object Storage (Kodo) + \ (Qiniu) + [snip] + provider> Qiniu + +5. Enter your SecretId and SecretKey of Qiniu Kodo. + + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> AKIDxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx + +6. Select endpoint for Qiniu Kodo. This is the standard endpoint for + different region. + + / The default endpoint - a good choice if you are unsure. + 1 | East China Region 1. + | Needs location constraint cn-east-1. + \ (cn-east-1) + / East China Region 2. + 2 | Needs location constraint cn-east-2. + \ (cn-east-2) + / North China Region 1. + 3 | Needs location constraint cn-north-1. + \ (cn-north-1) + / South China Region 1. + 4 | Needs location constraint cn-south-1. + \ (cn-south-1) + / North America Region. + 5 | Needs location constraint us-north-1. + \ (us-north-1) + / Southeast Asia Region 1. + 6 | Needs location constraint ap-southeast-1. + \ (ap-southeast-1) + / Northeast Asia Region 1. + 7 | Needs location constraint ap-northeast-1. + \ (ap-northeast-1) + [snip] + endpoint> 1 + + Option endpoint. + Endpoint for Qiniu Object Storage. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / East China Endpoint 1 + \ (s3-cn-east-1.qiniucs.com) + 2 / East China Endpoint 2 + \ (s3-cn-east-2.qiniucs.com) + 3 / North China Endpoint 1 + \ (s3-cn-north-1.qiniucs.com) + 4 / South China Endpoint 1 + \ (s3-cn-south-1.qiniucs.com) + 5 / North America Endpoint 1 + \ (s3-us-north-1.qiniucs.com) + 6 / Southeast Asia Endpoint 1 + \ (s3-ap-southeast-1.qiniucs.com) + 7 / Northeast Asia Endpoint 1 + \ (s3-ap-northeast-1.qiniucs.com) + endpoint> 1 + + Option location_constraint. + Location constraint - must be set to match the Region. + Used when creating buckets only. + Choose a number from below, or type in your own value. + Press Enter to leave empty. 
+ 1 / East China Region 1 + \ (cn-east-1) + 2 / East China Region 2 + \ (cn-east-2) + 3 / North China Region 1 + \ (cn-north-1) + 4 / South China Region 1 + \ (cn-south-1) + 5 / North America Region 1 + \ (us-north-1) + 6 / Southeast Asia Region 1 + \ (ap-southeast-1) + 7 / Northeast Asia Region 1 + \ (ap-northeast-1) + location_constraint> 1 + +7. Choose acl and storage class. + + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + [snip] + acl> 2 + The storage class to use when storing new objects in Tencent COS. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Standard storage class + \ (STANDARD) + 2 / Infrequent access storage mode + \ (LINE) + 3 / Archive storage mode + \ (GLACIER) + 4 / Deep archive storage mode + \ (DEEP_ARCHIVE) + [snip] + storage_class> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [qiniu] + - type: s3 + - provider: Qiniu + - access_key_id: xxx + - secret_access_key: xxx + - region: cn-east-1 + - endpoint: s3-cn-east-1.qiniucs.com + - location_constraint: cn-east-1 + - acl: public-read + - storage_class: STANDARD + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + qiniu s3 + +FileLu S5 + +FileLu S5 Object Storage is an S3-compatible object storage system. It +provides multiple region options (Global, US-East, EU-Central, +AP-Southeast, and ME-Central) while using a single endpoint (s5lu.com). +FileLu S5 is designed for scalability, security, and simplicity, with +predictable pricing and no hidden charges for data transfers or API +requests. + +Here is an example of making a configuration. First run: + + rclone config + +This will guide you through an interactive setup process. + + No remotes found, make a new one\? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + + Enter name for new remote. + name> s5lu + + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS,... FileLu, ... + \ (s3) + [snip] + Storage> s3 + + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + [snip] + XX / FileLu S5 Object Storage + \ (FileLu) + [snip] + provider> FileLu + + Option env_auth. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> + + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + access_key_id> XXX + + Option secret_access_key. + AWS Secret Access Key (password). 
+ Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. + secret_access_key> XXX + + Option endpoint. + Endpoint for S3 API. + Required when using an S3 clone. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / Global + \ (global) + 2 / North America (US-East) + \ (us-east) + 3 / Europe (EU-Central) + \ (eu-central) + 4 / Asia Pacific (AP-Southeast) + \ (ap-southeast) + 5 / Middle East (ME-Central) + \ (me-central) + region> 1 + + Edit advanced config? + y) Yes + n) No (default) + y/n> n + + Configuration complete. + Options: + - type: s3 + - provider: FileLu + - access_key_id: XXX + - secret_access_key: XXX + - endpoint: s5lu.com + Keep this "s5lu" remote? + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + +This will leave the config file looking like this. + + [s5lu] + type = s3 + provider = FileLu + access_key_id = XXX + secret_access_key = XXX + endpoint = s5lu.com + +Rabata + +Rabata is an S3-compatible secure cloud storage service that offers +flat, transparent pricing (no API request fees) while supporting +standard S3 APIs. It is suitable for backup, application storage,media +workflows, and archive use cases. + +Server side copy is not implemented with Rabata, also meaning +modification time of objects cannot be updated. + +Rclone config: + rclone config No remotes found, make a new one? n) New remote @@ -31753,172 +34605,112 @@ To configure access to Qiniu Kodo, follow the steps below: q) Quit config n/s/q> n -2. Give the name of the configuration. For example, name it 'qiniu'. + Enter name for new remote. + name> Rabata - name> qiniu - -3. Select s3 storage. - - Choose a number from below, or type in your own value + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. [snip] XX / Amazon S3 Compliant Storage Providers including AWS, ... \ (s3) [snip] Storage> s3 -4. Select Qiniu provider. - - Choose a number from below, or type in your own value - 1 / Amazon Web Services (AWS) S3 - \ "AWS" + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. [snip] - 22 / Qiniu Object Storage (Kodo) - \ (Qiniu) + XX / Rabata Cloud Storage + \ (Rabata) [snip] - provider> Qiniu - -5. Enter your SecretId and SecretKey of Qiniu Kodo. + provider> Rabata + Option env_auth. Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. - Enter a boolean value (true or false). Press Enter for the default ("false"). - Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" - env_auth> 1 + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> + + Option access_key_id. AWS Access Key ID. Leave blank for anonymous access or runtime credentials. - Enter a string value. Press Enter for the default (""). - access_key_id> AKIDxxxxxxxxxx - AWS Secret Access Key (password) + Enter a value. Press Enter to leave empty. + access_key_id> ACCESS_KEY_ID + + Option secret_access_key. + AWS Secret Access Key (password). 
Leave blank for anonymous access or runtime credentials. - Enter a string value. Press Enter for the default (""). - secret_access_key> xxxxxxxxxxx + Enter a value. Press Enter to leave empty. + secret_access_key> SECRET_ACCESS_KEY -6. Select endpoint for Qiniu Kodo. This is the standard endpoint for - different region. - - / The default endpoint - a good choice if you are unsure. - 1 | East China Region 1. - | Needs location constraint cn-east-1. - \ (cn-east-1) - / East China Region 2. - 2 | Needs location constraint cn-east-2. - \ (cn-east-2) - / North China Region 1. - 3 | Needs location constraint cn-north-1. - \ (cn-north-1) - / South China Region 1. - 4 | Needs location constraint cn-south-1. - \ (cn-south-1) - / North America Region. - 5 | Needs location constraint us-north-1. - \ (us-north-1) - / Southeast Asia Region 1. - 6 | Needs location constraint ap-southeast-1. - \ (ap-southeast-1) - / Northeast Asia Region 1. - 7 | Needs location constraint ap-northeast-1. - \ (ap-northeast-1) - [snip] - endpoint> 1 + Option region. + Region where your bucket will be created and your data stored. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / US East (N. Virginia) + \ (us-east-1) + 2 / EU (Ireland) + \ (eu-west-1) + 3 / EU (London) + \ (eu-west-2) + region> 3 Option endpoint. - Endpoint for Qiniu Object Storage. + Endpoint for Rabata Object Storage. Choose a number from below, or type in your own value. Press Enter to leave empty. - 1 / East China Endpoint 1 - \ (s3-cn-east-1.qiniucs.com) - 2 / East China Endpoint 2 - \ (s3-cn-east-2.qiniucs.com) - 3 / North China Endpoint 1 - \ (s3-cn-north-1.qiniucs.com) - 4 / South China Endpoint 1 - \ (s3-cn-south-1.qiniucs.com) - 5 / North America Endpoint 1 - \ (s3-us-north-1.qiniucs.com) - 6 / Southeast Asia Endpoint 1 - \ (s3-ap-southeast-1.qiniucs.com) - 7 / Northeast Asia Endpoint 1 - \ (s3-ap-northeast-1.qiniucs.com) - endpoint> 1 + 1 / US East (N. Virginia) + \ (s3.us-east-1.rabata.io) + 2 / EU West (Ireland) + \ (s3.eu-west-1.rabata.io) + 3 / EU West (London) + \ (s3.eu-west-2.rabata.io) + endpoint> 3 Option location_constraint. - Location constraint - must be set to match the Region. - Used when creating buckets only. + location where your bucket will be created and your data stored. Choose a number from below, or type in your own value. Press Enter to leave empty. - 1 / East China Region 1 - \ (cn-east-1) - 2 / East China Region 2 - \ (cn-east-2) - 3 / North China Region 1 - \ (cn-north-1) - 4 / South China Region 1 - \ (cn-south-1) - 5 / North America Region 1 - \ (us-north-1) - 6 / Southeast Asia Region 1 - \ (ap-southeast-1) - 7 / Northeast Asia Region 1 - \ (ap-northeast-1) - location_constraint> 1 + 1 / US East (N. Virginia) + \ (us-east-1) + 2 / EU (Ireland) + \ (eu-west-1) + 3 / EU (London) + \ (eu-west-2) + location_constraint> 3 -7. Choose acl and storage class. - - Note that this ACL is applied when server-side copying objects as S3 - doesn't copy the ACL from the source but rather writes a fresh one. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) - [snip] - acl> 2 - The storage class to use when storing new objects in Tencent COS. - Enter a string value. Press Enter for the default (""). 
- Choose a number from below, or type in your own value - 1 / Standard storage class - \ (STANDARD) - 2 / Infrequent access storage mode - \ (LINE) - 3 / Archive storage mode - \ (GLACIER) - 4 / Deep archive storage mode - \ (DEEP_ARCHIVE) - [snip] - storage_class> 1 - Edit advanced config? (y/n) + Edit advanced config? y) Yes n) No (default) y/n> n - Remote config - -------------------- - [qiniu] + + Configuration complete. + Options: - type: s3 - - provider: Qiniu - - access_key_id: xxx - - secret_access_key: xxx - - region: cn-east-1 - - endpoint: s3-cn-east-1.qiniucs.com - - location_constraint: cn-east-1 - - acl: public-read - - storage_class: STANDARD - -------------------- + - provider: Rabata + - access_key_id: ACCESS_KEY_ID + - secret_access_key: SECRET_ACCESS_KEY + - region: eu-west-2 + - endpoint: s3.eu-west-2.rabata.io + - location_constraint: eu-west-2 + Keep this "rabata" remote? y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y + Current remotes: Name Type ==== ==== - qiniu s3 + rabata s3 RackCorp @@ -31927,8 +34719,8 @@ your friendly cloud provider RackCorp. The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty. -Before you can use RackCorp Object Storage, you'll need to "sign up" for -an account on our "portal". Next you can create an access key, a +Before you can use RackCorp Object Storage, you'll need to sign up for +an account on our portal. Next you can create an access key, a secret key and buckets, in your location of choice with ease. These details are required for the next steps of configuration, when rclone config asks for your access_key_id and secret_access_key. @@ -32182,7 +34974,7 @@ recommended default), not "path style". You can use rclone config to make a new provider like this - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -32278,6 +35070,210 @@ And your config should end up looking like this: region = ru-1 endpoint = s3.ru-1.storage.selcloud.ru +Servercore + +Servercore Object Storage is an S3 compatible object storage system that +provides scalable and secure storage solutions for businesses of all +sizes. + +rclone config example: + + No remotes found, make a new one\? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + + Enter name for new remote. + name> servercore + + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. + [snip] + XX / Amazon S3 Compliant Storage Providers including ..., Servercore, ... + \ (s3) + [snip] + Storage> s3 + + Option provider. + Choose your S3 provider. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + [snip] + XX / Servercore Object Storage + \ (Servercore) + [snip] + provider> Servercore + + Option env_auth. + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Choose a number from below, or type in your own boolean value (true or false). + Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \ (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \ (true) + env_auth> 1 + + Option access_key_id. + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a value. Press Enter to leave empty. 
+    access_key_id> ACCESS_KEY
+
+    Option secret_access_key.
+    AWS Secret Access Key (password).
+    Leave blank for anonymous access or runtime credentials.
+    Enter a value. Press Enter to leave empty.
+    secret_access_key> SECRET_ACCESS_KEY
+
+    Option region.
+    Region where your data is stored.
+    Choose a number from below, or type in your own value.
+    Press Enter to leave empty.
+    1 / St. Petersburg
+      \ (ru-1)
+    2 / Moscow
+      \ (gis-1)
+    3 / Moscow
+      \ (ru-7)
+    4 / Tashkent, Uzbekistan
+      \ (uz-2)
+    5 / Almaty, Kazakhstan
+      \ (kz-1)
+    region> 1
+
+    Option endpoint.
+    Endpoint for Servercore Object Storage.
+    Choose a number from below, or type in your own value.
+    Press Enter to leave empty.
+    1 / Saint Petersburg
+      \ (s3.ru-1.storage.selcloud.ru)
+    2 / Moscow
+      \ (s3.gis-1.storage.selcloud.ru)
+    3 / Moscow
+      \ (s3.ru-7.storage.selcloud.ru)
+    4 / Tashkent, Uzbekistan
+      \ (s3.uz-2.srvstorage.uz)
+    5 / Almaty, Kazakhstan
+      \ (s3.kz-1.srvstorage.kz)
+    endpoint> 1
+
+    Edit advanced config?
+    y) Yes
+    n) No (default)
+    y/n> n
+
+    Configuration complete.
+    Options:
+    - type: s3
+    - provider: Servercore
+    - access_key_id: ACCESS_KEY
+    - secret_access_key: SECRET_ACCESS_KEY
+    - region: ru-1
+    - endpoint: s3.ru-1.storage.selcloud.ru
+    Keep this "servercore" remote?
+    y) Yes this is OK (default)
+    e) Edit this remote
+    d) Delete this remote
+    y/e/d> y
+
+Spectra Logic
+
+Spectra Logic is an on-prem S3-compatible object storage gateway that
+exposes local object storage and, driven by policy, tiers data to
+Spectra tape and public clouds under a single namespace for backup and
+archiving.
+
+The S3-compatible gateway is configured using rclone config with a type
+of s3 and a provider name of SpectraLogic. Here is an example run of
+the configurator.
+
+    No remotes found, make a new one?
+    n) New remote
+    s) Set configuration password
+    q) Quit config
+    n/s/q> n
+
+    Enter name for new remote.
+    name> spectratest
+
+    Option Storage.
+    Type of storage to configure.
+    Choose a number from below, or type in your own value.
+    [snip]
+    XX / Amazon S3 Compliant Storage Providers including ..., SpectraLogic, ...
+       \ (s3)
+    [snip]
+    Storage> s3
+
+    Option provider.
+    Choose your S3 provider.
+    Choose a number from below, or type in your own value.
+    Press Enter to leave empty.
+    [snip]
+    XX / SpectraLogic BlackPearl
+       \ (SpectraLogic)
+    [snip]
+    provider> SpectraLogic
+
+    Option env_auth.
+    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
+    Only applies if access_key_id and secret_access_key is blank.
+    Choose a number from below, or type in your own boolean value (true or false).
+    Press Enter for the default (false).
+    1 / Enter AWS credentials in the next step.
+      \ (false)
+    2 / Get AWS credentials from the environment (env vars or IAM).
+      \ (true)
+    env_auth> 1
+
+    Option access_key_id.
+    AWS Access Key ID.
+    Leave blank for anonymous access or runtime credentials.
+    Enter a value. Press Enter to leave empty.
+    access_key_id> ACCESS_KEY
+
+    Option secret_access_key.
+    AWS Secret Access Key (password).
+    Leave blank for anonymous access or runtime credentials.
+    Enter a value. Press Enter to leave empty.
+    secret_access_key> SECRET_ACCESS_KEY
+
+    Option endpoint.
+    Endpoint for S3 API.
+    Required when using an S3 clone.
+    Enter a value. Press Enter to leave empty.
+    endpoint> https://bp.example.com
+
+    Edit advanced config?
+    y) Yes
+    n) No (default)
+    y/n> n
+
+    Configuration complete.
+ Options: + - type: s3 + - provider: SpectraLogic + - access_key_id: ACCESS_KEY + - secret_access_key: SECRET_ACCESS_KEY + - endpoint: https://bp.example.com + Keep this "spectratest" remote? + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + +And your config should end up looking like this: + + [spectratest] + type = s3 + provider = SpectraLogic + access_key_id = ACCESS_KEY + secret_access_key = SECRET_ACCESS_KEY + endpoint = https://bp.example.com + Storj Storj is a decentralized cloud storage which can be used through its @@ -32394,7 +35390,7 @@ First run: This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -32519,112 +35515,112 @@ To configure access to Tencent COS, follow the steps below: 1. Run rclone config and select n for a new remote. - rclone config - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n 2. Give the name of the configuration. For example, name it 'cos'. - name> cos + name> cos 3. Select s3 storage. - Choose a number from below, or type in your own value - [snip] - XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ "s3" - [snip] - Storage> s3 + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" + [snip] + Storage> s3 4. Select TencentCOS provider. - Choose a number from below, or type in your own value - 1 / Amazon Web Services (AWS) S3 - \ "AWS" - [snip] - 11 / Tencent Cloud Object Storage (COS) - \ "TencentCOS" - [snip] - provider> TencentCOS + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 11 / Tencent Cloud Object Storage (COS) + \ "TencentCOS" + [snip] + provider> TencentCOS 5. Enter your SecretId and SecretKey of Tencent Cloud. - Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). - Only applies if access_key_id and secret_access_key is blank. - Enter a boolean value (true or false). Press Enter for the default ("false"). - Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" - env_auth> 1 - AWS Access Key ID. - Leave blank for anonymous access or runtime credentials. - Enter a string value. Press Enter for the default (""). - access_key_id> AKIDxxxxxxxxxx - AWS Secret Access Key (password) - Leave blank for anonymous access or runtime credentials. - Enter a string value. Press Enter for the default (""). - secret_access_key> xxxxxxxxxxx + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). 
+ access_key_id> AKIDxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx 6. Select endpoint for Tencent COS. This is the standard endpoint for different region. - 1 / Beijing Region. - \ "cos.ap-beijing.myqcloud.com" - 2 / Nanjing Region. - \ "cos.ap-nanjing.myqcloud.com" - 3 / Shanghai Region. - \ "cos.ap-shanghai.myqcloud.com" - 4 / Guangzhou Region. - \ "cos.ap-guangzhou.myqcloud.com" - [snip] - endpoint> 4 + 1 / Beijing Region. + \ "cos.ap-beijing.myqcloud.com" + 2 / Nanjing Region. + \ "cos.ap-nanjing.myqcloud.com" + 3 / Shanghai Region. + \ "cos.ap-shanghai.myqcloud.com" + 4 / Guangzhou Region. + \ "cos.ap-guangzhou.myqcloud.com" + [snip] + endpoint> 4 7. Choose acl and storage class. - Note that this ACL is applied when server-side copying objects as S3 - doesn't copy the ACL from the source but rather writes a fresh one. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - 1 / Owner gets Full_CONTROL. No one else has access rights (default). - \ "default" - [snip] - acl> 1 - The storage class to use when storing new objects in Tencent COS. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - 1 / Default - \ "" - [snip] - storage_class> 1 - Edit advanced config? (y/n) - y) Yes - n) No (default) - y/n> n - Remote config - -------------------- - [cos] - type = s3 - provider = TencentCOS - env_auth = false - access_key_id = xxx - secret_access_key = xxx - endpoint = cos.ap-guangzhou.myqcloud.com - acl = default - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y - Current remotes: + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Owner gets Full_CONTROL. No one else has access rights (default). + \ "default" + [snip] + acl> 1 + The storage class to use when storing new objects in Tencent COS. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Default + \ "" + [snip] + storage_class> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [cos] + type = s3 + provider = TencentCOS + env_auth = false + access_key_id = xxx + secret_access_key = xxx + endpoint = cos.ap-guangzhou.myqcloud.com + acl = default + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: - Name Type - ==== ==== - cos s3 + Name Type + ==== ==== + cos s3 Wasabi @@ -32636,7 +35632,7 @@ storage infrastructure at minimal cost. Wasabi provides an S3 interface which can be configured for use with rclone like this. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password n/s> n @@ -32911,7 +35907,396 @@ rclone about is not supported by the S3 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. 
-See List of backends that do not support rclone about and rclone about
+See List of backends that do not support rclone about and rclone about.
+
+Archive
+
+The Archive backend allows read-only access to the content of archive
+files on cloud storage without downloading the complete archive. This
+means you could mount a large archive file and use only the parts of it
+your application requires, rather than having to extract it.
+
+The archive files are recognised by their extension.
+
+  Archive    Extension
+  ---------- -----------
+  Zip        .zip
+  Squashfs   .sqfs
+
+The supported archive file types are cloud friendly - a single file can
+be found and downloaded without downloading the whole archive.
+
+If you just want to create, list or extract archives and don't want to
+mount them then you may find the rclone archive commands more
+convenient.
+
+- rclone archive create
+- rclone archive list
+- rclone archive extract
+
+These commands support a wider range of non-cloud-friendly archives
+(but not squashfs), but they can't be used with rclone mount or any
+other rclone commands (e.g. rclone check).
+
+Configuration
+
+This backend is best used without configuration.
+
+Use it by putting the string :archive: in front of another remote, say
+remote:dir to make :archive:remote:dir.
+
+Any archives in remote:dir will become directories and any files may be
+read out of them individually.
+
+For example
+
+    $ rclone lsf s3:rclone/dir
+    100files.sqfs
+    100files.zip
+
+Note that 100files.zip and 100files.sqfs are now directories:
+
+    $ rclone lsf :archive:s3:rclone/dir
+    100files.sqfs/
+    100files.zip/
+
+Which we can look inside:
+
+    $ rclone lsf :archive:s3:rclone/dir/100files.zip/
+    cofofiy5jun
+    gigi
+    hevupaz5z
+    kacak/
+    kozemof/
+    lamapaq4
+    qejahen
+    quhenen2rey
+    soboves8
+    vibat/
+    wose
+    xade
+    zilupot
+
+Files not in an archive can be read and written as normal. Files in an
+archive can only be read.
+
+The archive backend can also be used in a configuration file. Use the
+remote variable to point to the location of the archive.
+
+    [remote]
+    type = archive
+    remote = s3:rclone/dir/100files.zip
+
+Gives
+
+    $ rclone lsf remote:
+    cofofiy5jun
+    gigi
+    hevupaz5z
+    kacak/
+    ...
+
+Modification times
+
+Modification times are preserved with an accuracy depending on the
+archive type.
+
+    $ rclone lsl --max-depth 1 :archive:s3:rclone/dir/100files.zip
+           12 2025-10-27 14:39:20.000000000 cofofiy5jun
+           81 2025-10-27 14:39:20.000000000 gigi
+           58 2025-10-27 14:39:20.000000000 hevupaz5z
+            6 2025-10-27 14:39:20.000000000 lamapaq4
+           43 2025-10-27 14:39:20.000000000 qejahen
+           66 2025-10-27 14:39:20.000000000 quhenen2rey
+           95 2025-10-27 14:39:20.000000000 soboves8
+           71 2025-10-27 14:39:20.000000000 wose
+           76 2025-10-27 14:39:20.000000000 xade
+           15 2025-10-27 14:39:20.000000000 zilupot
+
+For zip and squashfs files this is 1s.
+
+Hashes
+
+Which hash is supported depends on the archive type. Zip files use
+CRC32, while Squashfs doesn't support any hashes. For example:
+
+    $ rclone hashsum crc32 :archive:s3:rclone/dir/100files.zip/
+    b2288554  cofofiy5jun
+    a87e62b6  wose
+    f90f630b  xade
+    c7d0ef29  gigi
+    f1c64740  soboves8
+    cb7b4a5d  quhenen2rey
+    5115242b  kozemof/fonaxo
+    afeabd9a  qejahen
+    71202402  kozemof/fijubey5di
+    bd99e512  kozemof/napux
+    ...
+
+Hashes will be checked when the file is read from the archive and used
+as part of syncing if possible.
+
+    $ rclone copy -vv :archive:s3:rclone/dir/100files.zip /tmp/100files
+    ...
+ 2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk: crc32 = abd05cc8 OK + 2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk.aeb661dc.partial: renamed to: kacak/turovat5c/yuyuquk + 2025/10/27 14:56:44 INFO : kacak/turovat5c/yuyuquk: Copied (new) + ... + +Zip + +The Zip file format is a widely used archive format that bundles one or +more files and folders into a single file, primarily for easier storage +or transmission. It typically uses compression (most commonly the +DEFLATE algorithm) to reduce the overall size of the archived content. +Zip files are supported natively by most modern operating systems. + +Rclone does not support the following advanced features of Zip files: + +- Splitting large archives into smaller parts +- Password protection +- Zstd compression + +Squashfs + +Squashfs is a compressed, read-only file system format primarily used in +Linux-based systems. It's designed to compress entire file systems +(including files, directories, and metadata) into a single archive file, +which can then be mounted and read directly, appearing as a normal +directory structure. Because it's read-only and highly compressed, +Squashfs is ideal for live CDs/USBs, embedded devices with limited +storage, and software package distribution, as it saves space and +ensures the integrity of the original files. + +Rclone supports the following squashfs compression formats: + +- Gzip +- Lzma +- Xz +- Zstd + +These are not yet working: + +- Lzo - Not yet supported +- Lz4 - Broken with "error decompressing: lz4: bad magic number" + +Rclone works fastest with large squashfs block sizes. For example: + + mksquashfs 100files 100files.sqfs -comp zstd -b 1M + +Limitations + +Files in the archive backend are read only. It isn't possible to create +archives with the archive backend yet. However you can create archives +with rclone archive create. + +Only .zip and .sqfs archives are supported as these are the only common +archiving formats which make it easy to read directory listings from the +archive without downloading the whole archive. + +Internally the archive backend uses the VFS to access files. It isn't +possible to configure the internal VFS yet which might be useful. + +Archive Formats + +Here's a table rating common archive formats on their Cloud Optimization +which is based on their ability to access a single file without reading +the entire archive. + +This capability depends on whether the format has a central index (or +"table of contents") that a program can read first to find the exact +location of a specific file. + + ----------------------------------------------------------------------- + Format Extensions Cloud Optimized Explanation + ----------------- ----------------- ----------------- ----------------- + ZIP .zip Excellent Zip files have an + index (the + "central + directory") + stored at the end + of the file. A + program can seek + to the end, read + the index to find + a file's location + and size, and + then seek + directly to that + file's data to + extract it. + + SquashFS .squashfs, .sqfs, Excellent This is a + .sfs compressed + read-only + filesystem image, + not just an + archive. It is + specifically + designed for + random access. It + uses metadata and + index tables to + allow the system + to find and + decompress + individual files + or data blocks on + demand. + + ISO Image .iso Excellent Like SquashFS, + this is a + filesystem image + (for optical + media). 
It + contains a + filesystem (like + ISO 9660 or UDF) + with a table of + contents at a + known location, + allowing for + direct access to + any file without + reading the whole + disk. + + RAR .rar Good RAR supports + "non-solid" and + "solid" modes. In + the common + non-solid mode, + files are + compressed + separately, and + an index allows + for easy + single-file + extraction (like + ZIP). In "solid" + mode, this rating + would be "Very + Poor." + + 7z .7z Poor By default, 7z + uses "solid" + archives to + maximize + compression. This + compresses files + as one continuous + stream. To + extract a file + from the middle, + all preceding + files must be + decompressed + first. (If + explicitly + created as + "non-solid," its + rating would be + "Excellent"). + + tar .tar Poor "Tape Archive" is + a streaming + format with no + central index. To + find a file, you + must read the + archive from the + beginning, + checking each + file header one + by one until you + find the one you + want. This is + slow but doesn't + require + decompressing + data. + + Gzipped Tar .tar.gz, .tgz Very Poor This is a tar + file (already + "Poor") + compressed with + gzip as a single, + non-seekable + stream. You + cannot seek. To + get any file, you + must decompress + the entire + archive from the + beginning up to + that file. + + Bzipped/XZ Tar .tar.bz2, .tar.xz Very Poor This is the same + principle as + tar.gz. The + entire archive is + one large + compressed block, + making random + access + impossible. + ----------------------------------------------------------------------- + +Ideas for improvements + +It would be possible to add ISO support fairly easily as the library we +use (go-diskfs) supports it. We could also add ext4 and fat32 the same +way, however in my experience these are not very common as files so +probably not worth it. Go-diskfs can also read partitions which we could +potentially take advantage of. + +It would be possible to add write support, but this would only be for +creating new archives, not for updating existing archives. + +Standard options + +Here are the Standard options specific to archive (Read archives). + +--archive-remote + +Remote to wrap to read archives from. + +Normally should contain a ':' and a path, e.g. "myremote:path/to/dir", +"myremote:bucket" or "myremote:". + +If this is left empty, then the archive backend will use the root as the +remote. + +This means that you can use :archive:remote:path and it will be +equivalent to setting remote="remote:path". + +Properties: + +- Config: remote +- Env Var: RCLONE_ARCHIVE_REMOTE +- Type: string +- Required: false + +Advanced options + +Here are the Advanced options specific to archive (Read archives). + +--archive-description + +Description of the remote. + +Properties: + +- Config: description +- Env Var: RCLONE_ARCHIVE_DESCRIPTION +- Type: string +- Required: false + +Metadata + +Any metadata supported by the underlying remote is read and written. + +See the metadata docs for more info. Backblaze B2 @@ -32932,7 +36317,7 @@ and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote q) Quit config n/q> n @@ -33084,7 +36469,7 @@ You may opt in to a "hard delete" of files with the --b2-hard-delete flag which permanently removes files on deletion instead of hiding them. 
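For example, to permanently delete the files under a path rather than
hiding them (bucket and path here are placeholders):

    rclone delete --b2-hard-delete b2:bucket/path

Without the flag the same command only hides the files, leaving their
old versions in place.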
Old versions of files, where available, are visible using the ---b2-versions flag. +--b2-versions flag. These can be deleted as required with delete. It is also possible to view a bucket as it was at a certain point in time, using the --b2-version-at flag. This will show the file versions @@ -33532,6 +36917,75 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +--b2-sse-customer-algorithm + +If using SSE-C, the server-side encryption algorithm used when storing +this object in B2. + +Properties: + +- Config: sse_customer_algorithm +- Env Var: RCLONE_B2_SSE_CUSTOMER_ALGORITHM +- Type: string +- Required: false +- Examples: + - "" + - None + - "AES256" + - Advanced Encryption Standard (256 bits key length) + +--b2-sse-customer-key + +To use SSE-C, you may provide the secret encryption key encoded in a +UTF-8 compatible string to encrypt/decrypt your data + +Alternatively you can provide --sse-customer-key-base64. + +Properties: + +- Config: sse_customer_key +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY +- Type: string +- Required: false +- Examples: + - "" + - None + +--b2-sse-customer-key-base64 + +To use SSE-C, you may provide the secret encryption key encoded in +Base64 format to encrypt/decrypt your data + +Alternatively you can provide --sse-customer-key. + +Properties: + +- Config: sse_customer_key_base64 +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_BASE64 +- Type: string +- Required: false +- Examples: + - "" + - None + +--b2-sse-customer-key-md5 + +If using SSE-C you may provide the secret encryption key MD5 checksum +(optional). + +If you leave it blank, this is calculated automatically from the +sse_customer_key provided. + +Properties: + +- Config: sse_customer_key_md5 +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_MD5 +- Type: string +- Required: false +- Examples: + - "" + - None + --b2-description Description of the remote. @@ -33547,7 +37001,7 @@ Backend commands Here are the commands specific to the b2 backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -33561,14 +37015,12 @@ backend/command. lifecycle -Read or set the lifecycle for a bucket +Read or set the lifecycle for a bucket. rclone backend lifecycle remote: [options] [+] This command can be used to read or set the lifecycle for a bucket. -Usage Examples: - To show the current lifecycle rules: rclone backend lifecycle b2:bucket @@ -33611,9 +37063,9 @@ Options: - "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off. - "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any - unfinished large file versions after this many days + unfinished large file versions after this many days. - "daysFromUploadingToHiding": This many days after uploading a file - is hidden + is hidden. cleanup @@ -33634,7 +37086,7 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. cleanup-hidden @@ -33655,7 +37107,7 @@ rclone about is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Box @@ -33671,7 +37123,7 @@ Configuration Here is an example of how to make a remote called remote. 
First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -33732,8 +37184,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Box. This only runs from the moment it opens your @@ -33741,7 +37193,8 @@ browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Box @@ -33958,6 +37411,19 @@ Properties: - Type: string - Required: false +--box-config-credentials + +Box App config.json contents. + +Leave blank normally. + +Properties: + +- Config: config_credentials +- Env Var: RCLONE_BOX_CONFIG_CREDENTIALS +- Type: string +- Required: false + --box-access-token Box App Primary Access Token @@ -34159,7 +37625,7 @@ rclone about is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Get your own Box App ID @@ -34212,7 +37678,7 @@ configured with cache. Here is an example of how to make a remote called test-cache. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -34379,8 +37845,10 @@ How to enable? Run rclone config and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled. -Affected settings: - cache-workers: Configured value during confirmed -playback or 1 all the other times +Affected settings: + +- cache-workers: Configured value during confirmed playback or 1 all + the other times Certificate Validation @@ -34432,9 +37900,9 @@ on them. Any reports or feedback on how cache behaves on this OS is greatly appreciated. -- https://github.com/rclone/rclone/issues/1935 -- https://github.com/rclone/rclone/issues/1907 -- https://github.com/rclone/rclone/issues/1834 +- Issue #1935 +- Issue #1907 +- Issue #1834 Risk of throttling @@ -34447,15 +37915,18 @@ meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts. -Some recommendations: - don't use a very small interval for entry -information (--cache-info-age) - while writes aren't yet optimised, you -can still write through cache which gives you the advantage of adding -the file in the cache at the same time if configured to do so. +Some recommendations: + +- don't use a very small interval for entry information + (--cache-info-age) +- while writes aren't yet optimised, you can still write through cache + which gives you the advantage of adding the file in the cache at the + same time if configured to do so. 
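For example, a mount through the cache remote could raise the entry
information interval well above its default, so that listings are
re-queried from the cloud provider far less often. This is only a
sketch: mycache: is a placeholder for a configured cache remote, and
the value should be tuned to how often the source data actually
changes.

    rclone mount mycache: /mnt/media --cache-info-age 72h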
Future enhancements: -- https://github.com/rclone/rclone/issues/1937 -- https://github.com/rclone/rclone/issues/1936 +- Issue #1937 +- Issue #1936 cache and crypt @@ -34499,8 +37970,11 @@ Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt. -Params: - remote = path to remote (required) - withData = true/false to -delete cached data (chunks) as well (optional, false by default) +Params: + +- remote = path to remote (required) +- withData = true/false to delete cached data (chunks) as well + (optional, false by default) Standard options @@ -34875,7 +38349,7 @@ Backend commands Here are the commands specific to the cache backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -34915,7 +38389,7 @@ remote s3:bucket. Now configure chunker using rclone config. We will call this one overlay to separate it from the remote itself. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -35414,7 +38888,7 @@ account from the developer section. Now run -rclone config + rclone config Follow the interactive setup process: @@ -35485,15 +38959,15 @@ Follow the interactive setup process: List directories in the top level of your Media Library -rclone lsd cloudinary-media-library: + rclone lsd cloudinary-media-library: Make a new directory. -rclone mkdir cloudinary-media-library:directory + rclone mkdir cloudinary-media-library:directory List the contents of a directory. -rclone ls cloudinary-media-library:directory + rclone ls cloudinary-media-library:directory Modified time and hashes @@ -35639,7 +39113,7 @@ through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -35701,8 +39175,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Citrix ShareFile. This only runs from the moment @@ -35710,7 +39184,8 @@ it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your ShareFile @@ -35962,7 +39437,7 @@ without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Crypt @@ -36041,7 +39516,7 @@ anything you read will be in encrypted form, and anything you write will be unencrypted. To avoid issues it is best to configure a dedicated path for encrypted content, and access it exclusively through a crypt remote. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -36197,21 +39672,23 @@ previously encrypted content. 
The only possibility is to re-upload everything via a crypt remote configured with your new password. Depending on the size of your data, your bandwidth, storage quota etc, -there are different approaches you can take: - If you have everything in -a different location, for example on your local system, you could remove -all of the prior encrypted files, change the password for your -configured crypt remote (or delete and re-create the crypt -configuration), and then re-upload everything from the alternative -location. - If you have enough space on the storage system you can -create a new crypt remote pointing to a separate directory on the same -backend, and then use rclone to copy everything from the original crypt -remote to the new, effectively decrypting everything on the fly using -the old password and re-encrypting using the new password. When done, -delete the original crypt remote directory and finally the rclone crypt -configuration with the old password. All data will be streamed from the -storage system and back, so you will get half the bandwidth and be -charged twice if you have upload and download quota on the storage -system. +there are different approaches you can take: + +- If you have everything in a different location, for example on your + local system, you could remove all of the prior encrypted files, + change the password for your configured crypt remote (or delete and + re-create the crypt configuration), and then re-upload everything + from the alternative location. +- If you have enough space on the storage system you can create a new + crypt remote pointing to a separate directory on the same backend, + and then use rclone to copy everything from the original crypt + remote to the new, effectively decrypting everything on the fly + using the old password and re-encrypting using the new password. + When done, delete the original crypt remote directory and finally + the rclone crypt configuration with the old password. All data will + be streamed from the storage system and back, so you will get half + the bandwidth and be charged twice if you have upload and download + quota on the storage system. Note: A security problem related to the random password generator was fixed in rclone version 1.53.3 (released 2020-11-19). Passwords @@ -36595,7 +40072,7 @@ Backend commands Here are the commands specific to the crypt backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -36609,21 +40086,21 @@ backend/command. encode -Encode the given filename(s) +Encode the given filename(s). rclone backend encode remote: [options] [+] This encodes the filenames given as arguments returning a list of strings of the encoded results. -Usage Example: +Usage examples: rclone backend encode crypt: file1 [file2...] rclone rc backend/command command=encode fs=crypt: file1 [file2...] decode -Decode the given filename(s) +Decode the given filename(s). rclone backend decode remote: [options] [+] @@ -36631,7 +40108,7 @@ This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid. -Usage Example: +Usage examples: rclone backend decode crypt: encryptedfile1 [encryptedfile2...] rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] @@ -36762,10 +40239,10 @@ scrypt makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt. 
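Returning to the password-change procedure described earlier: once a
second crypt remote using the new password has been configured over a
separate directory on the same storage, the whole
decrypt-and-re-encrypt pass is a single copy. In this sketch oldsecret:
and newsecret: are hypothetical crypt remotes configured with the old
and new passwords respectively:

    rclone copy --progress oldsecret: newsecret:

Since crypt remotes do not expose hashes, a byte-for-byte verification
needs rclone check with the --download flag before the old encrypted
directory and the old crypt configuration are deleted.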
-SEE ALSO +See Also - rclone cryptdecode - Show forward/reverse mapping of encrypted - filenames + filenames. Compress @@ -36784,6 +40261,7 @@ Configuration To use this remote, all you need to do is specify another remote and a compression mode to use: + $ rclone config Current remotes: Name Type @@ -36791,7 +40269,6 @@ compression mode to use: remote_to_press sometype e) Edit existing remote - $ rclone config n) New remote d) Delete remote r) Rename remote @@ -36800,45 +40277,80 @@ compression mode to use: q) Quit config e/n/d/r/c/s/q> n name> compress + + Option Storage. + Type of storage to configure. + Choose a number from below, or type in your own value. ... - 8 / Compress a remote - \ "compress" + 12 / Compress a remote + \ (compress) ... Storage> compress - ** See help for compress backend at: https://rclone.org/compress/ ** + Option remote. Remote to compress. - Enter a string value. Press Enter for the default (""). + Enter a value. remote> remote_to_press:subdir + + Option mode. Compression mode. - Enter a string value. Press Enter for the default ("gzip"). - Choose a number from below, or type in your own value - 1 / Gzip compression balanced for speed and compression strength. - \ "gzip" - compression_mode> gzip - Edit advanced config? (y/n) + Choose a number from below, or type in your own value of type string. + Press Enter for the default (gzip). + 1 / Standard gzip compression with fastest parameters. + \ (gzip) + 2 / Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs. + \ (zstd) + mode> gzip + + Option level. + GZIP (levels -2 to 9): + - -2 — Huffman encoding only. Only use if you know what you're doing. + - -1 (default) — recommended; equivalent to level 5. + - 0 — turns off compression. + - 1–9 — increase compression at the cost of speed. Going past 6 generally offers very little return. + + ZSTD (levels 0 to 4): + - 0 — turns off compression entirely. + - 1 — fastest compression with the lowest ratio. + - 2 (default) — good balance of speed and compression. + - 3 — better compression, but uses about 2–3x more CPU than the default. + - 4 — best possible compression ratio (highest CPU cost). + + Notes: + - Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs. + - Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5). + Enter a value. + level> -1 + + Edit advanced config? y) Yes n) No (default) y/n> n - Remote config - -------------------- - [compress] - type = compress - remote = remote_to_press:subdir - compression_mode = gzip - -------------------- + + Configuration complete. + Options: + - type: compress + - remote: remote_to_press:subdir + - mode: gzip + - level: -1 + Keep this "compress" remote? y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y -Compression Modes +Compression Algorithms -Currently only gzip compression is supported. It provides a decent -balance between speed and size and is well supported by other -applications. Compression strength can further be configured via an -advanced setting where 0 is no compression and 9 is strongest -compression. +- GZIP – a well-established and widely adopted algorithm that strikes + a solid balance between compression speed and ratio. It supports + compression levels from -2 to 9, with the default -1 (roughly + equivalent to level 5) offering an effective middle ground for most + scenarios. 
+ +- Zstandard (zstd) – a modern, high-performance algorithm that offers + precise control over the trade-off between speed and compression + efficiency. Compression levels range from 0 (no compression) to 4 + (maximum compression). File types @@ -36885,28 +40397,37 @@ Properties: - Examples: - "gzip" - Standard gzip compression with fastest parameters. - -Advanced options - -Here are the Advanced options specific to compress (Compress a remote). + - "zstd" + - Zstandard compression — fast modern algorithm offering + adjustable speed-to-compression tradeoffs. --compress-level -GZIP compression level (-2 to 9). +GZIP (levels -2 to 9): - -2 — Huffman encoding only. Only use if you +know what you're doing. - -1 (default) — recommended; equivalent to +level 5. - 0 — turns off compression. - 1–9 — increase compression at +the cost of speed. Going past 6 generally offers very little return. -Generally -1 (default, equivalent to 5) is recommended. Levels 1 to 9 -increase compression at the cost of speed. Going past 6 generally offers -very little return. +ZSTD (levels 0 to 4): - 0 — turns off compression entirely. - 1 — +fastest compression with the lowest ratio. - 2 (default) — good balance +of speed and compression. - 3 — better compression, but uses about 2–3x +more CPU than the default. - 4 — best possible compression ratio +(highest CPU cost). -Level -2 uses Huffman encoding only. Only use if you know what you are -doing. Level 0 turns off compression. +Notes: - Choose GZIP for wide compatibility; ZSTD for better speed/ratio +tradeoffs. - Negative gzip levels: -2 = Huffman-only, -1 = default (≈ +level 5). Properties: - Config: level - Env Var: RCLONE_COMPRESS_LEVEL -- Type: int -- Default: -1 +- Type: string +- Required: true + +Advanced options + +Here are the Advanced options specific to compress (Compress a remote). --compress-ram-cache-limit @@ -36984,7 +40505,7 @@ Configuration Here is an example of how to make a combine called remote for the example above. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -37106,9 +40627,15 @@ DOI The DOI remote is a read only remote for reading files from digital object identifiers (DOI). -Currently, the DOI backend supports DOIs hosted with: - InvenioRDM - -Zenodo - CaltechDATA - Other InvenioRDM repositories - Dataverse - -Harvard Dataverse - Other Dataverse repositories +Currently, the DOI backend supports DOIs hosted with: + +- InvenioRDM + - Zenodo + - CaltechDATA + - Other InvenioRDM repositories +- Dataverse + - Harvard Dataverse + - Other Dataverse repositories Paths are specified as remote:path @@ -37118,7 +40645,7 @@ Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -37227,7 +40754,7 @@ Backend commands Here are the commands specific to the doi backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -37247,7 +40774,9 @@ Show metadata about the DOI. This command returns a JSON object with some information about the DOI. - rclone backend medatadata doi: +Usage example: + + rclone backend metadata doi: It returns a JSON object representing metadata about the DOI. @@ -37260,7 +40789,7 @@ Set command for updating the config parameters. This set command can be used to update the config parameters for a running doi backend. 
-Usage Examples: +Usage examples: rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] @@ -37289,7 +40818,7 @@ it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -37325,8 +40854,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Dropbox. This only runs from the moment it opens @@ -37921,7 +41450,7 @@ your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -37987,7 +41516,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Enterprise File Fabric @@ -38194,7 +41724,7 @@ Configuration Here is an example of how to make a remote called filelu. First, run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -38262,7 +41792,7 @@ List all folders: Copy a specific file to the FileLu root: - rclone copy D:\\hello.txt filelu: + rclone copy D:\hello.txt filelu: Copy files from a local directory to a FileLu directory: @@ -38274,7 +41804,7 @@ Download a file from FileLu into a local directory: Move files from a local directory to a FileLu directory: - rclone move D:\\local-folder filelu:/remote-path/ + rclone move D:\local-folder filelu:/remote-path/ Sync files from a local directory to a FileLu directory: @@ -38598,7 +42128,7 @@ Rclone config guides you through an interactive setup process. A minimal rclone FTP remote definition only requires host, username and password. For an anonymous FTP server, see below. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote r) Rename remote c) Copy remote @@ -39117,7 +42647,7 @@ rclone about is not supported by the FTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. The implementation of : --dump headers, --dump bodies, --dump auth for debugging isn't the same as for rclone HTTP based backends - it has less @@ -39171,7 +42701,7 @@ Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -39212,7 +42742,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories and files in the top level of your Gofile @@ -39418,7 +42949,7 @@ walks you through it. 
Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -39498,7 +43029,9 @@ This will guide you through an interactive setup process: \ "us-east1" 13 / Northern Virginia. \ "us-east4" - 14 / Oregon. + 14 / Ohio. + \ "us-east5" + 15 / Oregon. \ "us-west1" location> 12 The storage class to use when storing objects in Google Cloud Storage. @@ -39543,8 +43076,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically @@ -39606,18 +43139,18 @@ If you already have a working service account, skip to step 3. 1. Create a service account using - gcloud iam service-accounts create gcs-read-only + gcloud iam service-accounts create gcs-read-only You can re-use an existing service account as well (like the one created above) 2. Attach a Viewer (read-only) or User (read-write) role to the service account - $ PROJECT_ID=my-project - $ gcloud --verbose iam service-accounts add-iam-policy-binding \ - gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ - --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ - --role=roles/storage.objectViewer + $ PROJECT_ID=my-project + $ gcloud --verbose iam service-accounts add-iam-policy-binding \ + gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ + --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \ + --role=roles/storage.objectViewer Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles: @@ -39969,6 +43502,8 @@ Properties: - South Carolina - "us-east4" - Northern Virginia + - "us-east5" + - Ohio - "us-west1" - Oregon - "us-west2" @@ -40203,7 +43738,7 @@ Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Google Drive @@ -40220,7 +43755,7 @@ it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -40293,8 +43828,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically @@ -40415,7 +43950,7 @@ Use case - Google Workspace account and individual Drive Let's say that you are the administrator of a Google Workspace. The goal is to read or write data on an individual's Drive account, who IS a -member of the domain. We'll call the domain example.com, and the user +member of the domain. We'll call the domain , and the user foo@example.com. There's a few steps we need to go through to accomplish this: @@ -40484,10 +44019,12 @@ key" button. 
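With the service account key in hand, access on behalf of a user in
the domain is exercised with the --drive-impersonate flag. A sketch,
using foo@example.com from the example above and assuming a configured
drive remote named gdrive: containing a backup directory:

    rclone -v --drive-impersonate foo@example.com lsf gdrive:backup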
Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using ---drive-impersonate, do this instead: - in the gdrive web interface, -share your root folder with the user/email of the new Service Account -you created/selected at step 1 - use rclone without specifying the ---drive-impersonate option, like this: rclone -v lsf gdrive:backup +--drive-impersonate, do this instead: + +- in the gdrive web interface, share your root folder with the + user/email of the new Service Account you created/selected at step 1 +- use rclone without specifying the --drive-impersonate option, like + this: rclone -v lsf gdrive:backup Shared drives (team drives) @@ -41744,7 +45281,7 @@ Backend commands Here are the commands specific to the drive backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -41758,51 +45295,51 @@ backend/command. get -Get command for fetching the drive config parameters +Get command for fetching the drive config parameters. rclone backend get remote: [options] [+] This is a get command which will be used to fetch the various drive -config parameters +config parameters. -Usage Examples: +Usage examples: rclone backend get drive: [-o service_account_file] [-o chunk_size] rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] Options: -- "chunk_size": show the current upload chunk size -- "service_account_file": show the current service account file +- "chunk_size": Show the current upload chunk size. +- "service_account_file": Show the current service account file. set -Set command for updating the drive config parameters +Set command for updating the drive config parameters. rclone backend set remote: [options] [+] This is a set command which will be used to update the various drive -config parameters +config parameters. -Usage Examples: +Usage examples: rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] Options: -- "chunk_size": update the current upload chunk size -- "service_account_file": update the current service account file +- "chunk_size": Update the current upload chunk size. +- "service_account_file": Update the current service account file. shortcut -Create shortcuts from files or directories +Create shortcuts from files or directories. rclone backend shortcut remote: [options] [+] This command creates shortcuts from files or directories. -Usage: +Usage examples: rclone backend shortcut drive: source_item destination_shortcut rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut @@ -41819,22 +45356,22 @@ authenticated with "drive2:" can't read files from "drive:". Options: -- "target": optional target remote for the shortcut destination +- "target": Optional target remote for the shortcut destination. drives -List the Shared Drives available to this account +List the Shared Drives available to this account. rclone backend drives remote: [options] [+] This command lists the Shared Drives (Team Drives) available to this account. -Usage: +Usage example: rclone backend [-o config] drives drive: -This will return a JSON list of objects like this +This will return a JSON list of objects like this: [ { @@ -41873,21 +45410,21 @@ drives combined into one directory tree. untrash -Untrash files and directories +Untrash files and directories. 
rclone backend untrash remote: [options] [+] This command untrashes all the files and directories in the directory passed in recursively. -Usage: - -This takes an optional directory to trash which make this easier to use -via the API. +Usage example: rclone backend untrash drive:directory rclone backend --interactive untrash drive:directory subdir +This takes an optional directory to trash which make this easier to use +via the API. + Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it. @@ -41900,13 +45437,13 @@ Result: copyid -Copy files by ID +Copy files by ID. rclone backend copyid remote: [options] [+] -This command copies files by ID +This command copies files by ID. -Usage: +Usage examples: rclone backend copyid drive: ID path rclone backend copyid drive: ID1 path1 ID2 path2 @@ -41927,13 +45464,13 @@ before copying. moveid -Move files by ID +Move files by ID. rclone backend moveid remote: [options] [+] -This command moves files by ID +This command moves files by ID. -Usage: +Usage examples: rclone backend moveid drive: ID path rclone backend moveid drive: ID1 path1 ID2 path2 @@ -41953,25 +45490,25 @@ beforehand. exportformats -Dump the export formats for debug purposes +Dump the export formats for debug purposes. rclone backend exportformats remote: [options] [+] importformats -Dump the import formats for debug purposes +Dump the import formats for debug purposes. rclone backend importformats remote: [options] [+] query -List files using Google Drive query language +List files using Google Drive query language. rclone backend query remote: [options] [+] -This command lists files based on a query +This command lists files based on a query. -Usage: +Usage example: rclone backend query drive: query @@ -41991,27 +45528,29 @@ match a file named "foo ' .txt": The result is a JSON array of matches, for example: [ - { - "createdTime": "2017-06-29T19:58:28.537Z", - "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD", - "md5Checksum": "68518d16be0c6fbfab918be61d658032", - "mimeType": "text/plain", - "modifiedTime": "2024-02-02T10:40:02.874Z", - "name": "foo ' \\.txt", - "parents": [ - "0BxAe_BCDE4zkFGZpcWJGek0xbzC" - ], - "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC", - "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893", - "size": "311", - "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC" - } + { + "createdTime": "2017-06-29T19:58:28.537Z", + "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD", + "md5Checksum": "68518d16be0c6fbfab918be61d658032", + "mimeType": "text/plain", + "modifiedTime": "2024-02-02T10:40:02.874Z", + "name": "foo ' \\.txt", + "parents": [ + "0BxAe_BCDE4zkFGZpcWJGek0xbzC" + ], + "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC", + "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893", + "size": "311", + "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC" + } ] + ```console -rescue + ### rescue -Rescue or delete any orphaned files + Rescue or delete any orphaned files. + ```console rclone backend rescue remote: [options] [+] This command rescues or deletes any orphaned files or directories. @@ -42022,24 +45561,22 @@ are no longer in any folder in Google Drive. This command finds those files and either rescues them to a directory you specify or deletes them. -Usage: - This can be used in 3 ways. 
-First, list all orphaned files +First, list all orphaned files: rclone backend rescue drive: -Second rescue all orphaned files to the directory indicated +Second rescue all orphaned files to the directory indicated: rclone backend rescue drive: "relative/path/to/rescue/directory" -e.g. To rescue all orphans to a directory called "Orphans" in the top -level +E.g. to rescue all orphans to a directory called "Orphans" in the top +level: rclone backend rescue drive: Orphans -Third delete all orphaned files to the trash +Third delete all orphaned files to the trash: rclone backend rescue drive: -o delete @@ -42142,49 +45679,52 @@ Here is how to create your own Google Drive client ID for rclone: 5. If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button - (near the top right corner of the right panel), then select - "External" and click on "CREATE"; on the next screen, enter an - "Application name" ("rclone" is OK); enter "User Support Email" - (your own email is OK); enter "Developer Contact Email" (your own - email is OK); then click on "Save" (all other data is optional). You - will also have to add some scopes, including - -- https://www.googleapis.com/auth/docs -- https://www.googleapis.com/auth/drive in order to be able to edit, - create and delete files with RClone. -- https://www.googleapis.com/auth/drive.metadata.readonly which you - may also want to add. -- If you want to add all at once, comma separated it would be - https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly. - -6. After adding scopes, click "Save and continue" to add test users. Be - sure to add your own account to the test users. Once you've added - yourself as a test user and saved the changes, click again on - "Credentials" on the left panel to go back to the "Credentials" + (near the top right corner of the right panel), then click "Get + started". On the next screen, enter an "Application name" ("rclone" + is OK); enter "User Support Email" (your own email is OK); Next, + under Audience select "External". Next enter your own contact + information, agree to terms and click "Create". You should now see + rclone (or your project name) in a box in the top left of the screen. (PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this will restrict API use to Google Workspace users in your organisation). -7. Click on the "+ CREATE CREDENTIALS" button at the top of the screen, - then select "OAuth client ID". + You will also have to add some scopes, including -8. Choose an application type of "Desktop app" and click "Create". (the + - https://www.googleapis.com/auth/docs + - https://www.googleapis.com/auth/drive in order to be able to + edit, create and delete files with RClone. + - https://www.googleapis.com/auth/drive.metadata.readonly which + you may also want to add. + + To do this, click Data Access on the left side panel, click "add or + remove scopes" and select the three above and press update or go to + the "Manually add scopes" text box (scroll down) and enter + "https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly", + press add to table then update. + + You should now see the three scopes on your Data access page. Now + press save at the bottom! + +6. After adding scopes, click Audience Scroll down and click "+ Add + users". 
Add yourself as a test user and press save. + +7. Go to Overview on the left panel, click "Create OAuth client". + Choose an application type of "Desktop app" and click "Create". (the default name is fine) -9. It will show you a client ID and client secret. Make a note of - these. +8. It will show you a client ID and client secret. Make a note of + these. (If you selected "External" at Step 5 continue to Step 9. If + you chose "Internal" you don't need to publish and can skip straight + to Step 10 but your destination drive must be part of the same + Google Workspace.) - (If you selected "External" at Step 5 continue to Step 10. If you - chose "Internal" you don't need to publish and can skip straight to - Step 11 but your destination drive must be part of the same Google - Workspace.) +9. Go to "Audience" and then click "PUBLISH APP" button and confirm. + Add yourself as a test user if you haven't already. -10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and - confirm. You will also want to add yourself as a test user. - -11. Provide the noted client ID and client secret to rclone. +10. Provide the noted client ID and client secret to rclone. Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for @@ -42202,8 +45742,8 @@ testing mode would also be sufficient. (Thanks to @balazer on github for these instructions.) Sometimes, creation of an OAuth consent in Google API Console fails due -to an error message “The request failed because changes to one of the -field of the resource is not supported”. As a convenient workaround, the +to an error message "The request failed because changes to one of the +field of the resource is not supported". As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the @@ -42231,7 +45771,7 @@ you through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -42296,8 +45836,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically @@ -42848,10 +46388,12 @@ scopes instead of the drive ones detailed: Hasher Hasher is a special overlay backend to create remotes which handle -checksums for other remotes. It's main functions include: - Emulate hash -types unimplemented by backends - Cache checksums to help with slow -hashing of large local or (S)FTP files - Warm up checksum cache from -external SUM files +checksums for other remotes. It's main functions include: + +- Emulate hash types unimplemented by backends +- Cache checksums to help with slow hashing of large local or (S)FTP + files +- Warm up checksum cache from external SUM files Getting started @@ -42870,7 +46412,7 @@ Interactive configuration Run rclone config: - No remotes found, make a new one? + No remotes found, make a new one\? 
n) New remote s) Set configuration password q) Quit config @@ -42927,11 +46469,14 @@ hasher like in the following examples: hashes = dropbox,sha1 max_age = 24h -Hasher takes basically the following parameters: - remote is required, - -hashes is a comma separated list of supported checksums (by default -md5,sha1), - max_age - maximum time to keep a checksum value in the -cache, 0 will disable caching completely, off will cache "forever" (that -is until the files get changed). +Hasher takes basically the following parameters: + +- remote is required +- hashes is a comma separated list of supported checksums (by default + md5,sha1) +- max_age - maximum time to keep a checksum value in the cache 0 will + disable caching completely off will cache "forever" (that is until + the files get changed) Make sure the remote has : (colon) in. If you specify the remote without a colon then rclone will use a local directory of that name. So if you @@ -42948,7 +46493,6 @@ will transparently update cache with new checksums when a file is fully read or overwritten, like: rclone copy External:path/file Hasher:dest/path - rclone cat Hasher:path/to/file > /dev/null The way to refresh all cached checksums (even unsupported by the base @@ -42957,13 +46501,11 @@ example, use hashsum --download using any supported hashsum on the command line (we just care to re-read): rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null - rclone backend dump Hasher:path/to/subtree You can print or drop hashsum cache using custom backend commands: rclone backend dump Hasher:dir/subdir - rclone backend drop Hasher: Pre-Seed from a SUM File @@ -42977,14 +46519,17 @@ Instead of SHA1 it can be any hash supported by the remote. The last argument can point to either a local or an other-remote:path text file in SUM format. The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill -in the cache entries correspondingly. - Paths in the SUM file are -treated as relative to hasher:dir/subdir. - The command will not check -that supplied values are correct. You must know what you are doing. - -This is a one-time action. The SUM file will not get "attached" to the -remote. Cache entries can still be overwritten later, should the -object's fingerprint change. - The tree walk can take long depending on -the tree size. You can increase --checkers to make it faster. Or use -stickyimport if you don't care about fingerprints and consistency. +in the cache entries correspondingly. + +- Paths in the SUM file are treated as relative to hasher:dir/subdir. +- The command will not check that supplied values are correct. You + must know what you are doing. +- This is a one-time action. The SUM file will not get "attached" to + the remote. Cache entries can still be overwritten later, should the + object's fingerprint change. +- The tree walk can take long depending on the tree size. You can + increase --checkers to make it faster. Or use stickyimport if you + don't care about fingerprints and consistency. rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1 @@ -43074,7 +46619,7 @@ Backend commands Here are the commands specific to the hasher backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -43088,48 +46633,56 @@ backend/command. drop -Drop cache +Drop cache. rclone backend drop remote: [options] [+] -Completely drop checksum cache. Usage Example: rclone backend drop -hasher: +Completely drop checksum cache. 
+ +Usage example: + + rclone backend drop hasher: dump -Dump the database +Dump the database. rclone backend dump remote: [options] [+] -Dump cache records covered by the current remote +Dump cache records covered by the current remote. fulldump -Full dump of the database +Full dump of the database. rclone backend fulldump remote: [options] [+] -Dump all cache records in the database +Dump all cache records in the database. import -Import a SUM file +Import a SUM file. rclone backend import remote: [options] [+] Amend hash cache from a SUM file and bind checksums to files by -size/time. Usage Example: rclone backend import hasher:subdir md5 -/path/to/sum.md5 +size/time. + +Usage example: + + rclone backend import hasher:subdir md5 /path/to/sum.md5 stickyimport -Perform fast import of a SUM file +Perform fast import of a SUM file. rclone backend stickyimport remote: [options] [+] Fill hash cache from a SUM file without verifying file fingerprints. -Usage Example: rclone backend stickyimport hasher:subdir md5 -remote:path/to/sum.md5 + +Usage example: + + rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5 Implementation details (advanced) @@ -43194,7 +46747,7 @@ Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -43420,6 +46973,7 @@ Properties: Limitations +- Erasure coding not supported, see issue #8808 - No server-side Move or DirMove. - Checksums not implemented. @@ -43437,7 +46991,7 @@ Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -43486,8 +47040,8 @@ You should be aware that OAuth-tokens can be used to access your account and hence should not be shared with other persons. See the below section for more information. -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from HiDrive. This only runs from the moment it opens @@ -43496,7 +47050,8 @@ webserver runs on http://127.0.0.1:53682/. If local port 53682 is protected by a firewall you may need to temporarily unblock the firewall to complete authorization. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your HiDrive root folder @@ -43523,9 +47078,9 @@ configuration encryption docs. Invalid refresh token -As can be verified here, each refresh_token (for Native Applications) is -valid for 60 days. If used to access HiDrivei, its validity will be -automatically extended. +As can be verified on HiDrive's OAuth guide, each refresh_token (for +Native Applications) is valid for 60 days. If used to access HiDrivei, +its validity will be automatically extended. This means that if you @@ -43562,7 +47117,8 @@ named either of the following: . or .. Therefore rclone will automatically replace these characters, if files or folders are stored or accessed with such names. -You can read about how this filename encoding works in general here. +You can read about how this filename encoding works in general in the +main docs. 
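If the automatic replacement ever needs adjusting, it is controlled
per remote by the usual encoding option, just like on other backends.
As a sketch (Slash, Dot and InvalidUtf8 are standard encoding tokens;
choose the set that matches your data):

    rclone lsf --hidrive-encoding "Slash,Dot,InvalidUtf8" hidrive: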
Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less. @@ -43602,7 +47158,6 @@ paths accessed by rclone. For example, the following two ways to access the home directory are equivalent: rclone lsd --hidrive-root-prefix="/users/test/" remote:path - rclone lsd remote:/users/test/path See the below section about configuration options for more details. @@ -43940,7 +47495,7 @@ Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -44138,11 +47693,42 @@ Properties: - Type: string - Required: false +Metadata + +HTTP metadata keys are case insensitive and are always returned in lower +case. + +Here are the possible system metadata items for the http backend. + + ------------------------------------------------------------------------------------------------------ + Name Help Type Example Read Only + ------------------------------ --------------------- ----------- ---------------- -------------------- + cache-control Cache-Control header string no-cache N + + content-disposition Content-Disposition string inline N + header + + content-disposition-filename Filename retrieved string file.txt N + from + Content-Disposition + header + + content-encoding Content-Encoding string gzip N + header + + content-language Content-Language string en-US N + header + + content-type Content-Type header string text/plain N + ------------------------------------------------------------------------------------------------------ + +See the metadata docs for more info. + Backend commands Here are the commands specific to the http backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -44163,7 +47749,7 @@ Set command for updating the config parameters. This set command can be used to update the config parameters for a running http backend. -Usage Examples: +Usage examples: rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] @@ -44183,20 +47769,16 @@ rclone about is not supported by the HTTP backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. ImageKit This is a backend for the ImageKit.io storage service. -About ImageKit - ImageKit.io provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web. -Accounts & Pricing - To use this backend, you need to create an account on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See the pricing @@ -44467,7 +48049,7 @@ rclone reconnect or rclone config. Here is an example of how to make a remote called iclouddrive. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -44666,8 +48248,8 @@ Notes Because of Internet Archive's architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check item's -queue at https://catalogd.archive.org/history/item-name-here . 
Because -of that, all uploads/deletes will not show up immediately and takes some +queue at https://catalogd.archive.org/history/item-name-here. Because of +that, all uploads/deletes will not show up immediately and takes some time to be available. The per-item queue is enqueued to an another queue, Item Deriver Queue. You can check the status of Item Deriver Queue here. This queue has a limit, and it may block you from uploading, @@ -44686,8 +48268,18 @@ This backend supports setting, updating and reading metadata of each file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone. -The following are reserved by Internet Archive: - name - source - size - -md5 - crc32 - sha1 - format - old_version - viruscheck - summation +The following are reserved by Internet Archive: + +- name +- source +- size +- md5 +- crc32 +- sha1 +- format +- old_version +- viruscheck +- summation Trying to set values to these keys is ignored with a warning. Only setting mtime is an exception. Doing so make it the identical behavior @@ -44732,7 +48324,7 @@ First run This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -45006,101 +48598,177 @@ See the metadata docs for more info. Jottacloud Jottacloud is a cloud storage service provider from a Norwegian company, -using its own datacenters in Norway. In addition to the official service -at jottacloud.com, it also provides white-label solutions to different -companies, such as: * Telia * Telia Cloud (cloud.telia.se) * Telia Sky -(sky.telia.no) * Tele2 * Tele2 Cloud (mittcloud.tele2.se) * Onlime * -Onlime Cloud Storage (onlime.dk) * Elkjøp (with subsidiaries): * Elkjøp -Cloud (cloud.elkjop.no) * Elgiganten Sweden (cloud.elgiganten.se) * -Elgiganten Denmark (cloud.elgiganten.dk) * Giganti Cloud -(cloud.gigantti.fi) * ELKO Cloud (cloud.elko.is) +using its own datacenters in Norway. -Most of the white-label versions are supported by this backend, although -may require different authentication setup - described below. +In addition to the official service at jottacloud.com, it also provides +white-label solutions to different companies. The following are +currently supported by this backend, using a different authentication +setup as described below: + +- Elkjøp (with subsidiaries): + - Elkjøp Cloud (cloud.elkjop.no) + - Elgiganten Cloud (cloud.elgiganten.dk) + - Elgiganten Cloud (cloud.elgiganten.se) + - ELKO Cloud (cloud.elko.is) + - Gigantti Cloud (cloud.gigantti.fi) +- Telia + - Telia Cloud (cloud.telia.se) + - Telia Sky (sky.telia.no) +- Tele2 + - Tele2 Cloud (mittcloud.tele2.se) +- Onlime + - Onlime (onlime.dk) +- MediaMarkt + - MediaMarkt Cloud (mediamarkt.jottacloud.com) + - Let's Go Cloud (letsgo.jotta.cloud) Paths are specified as remote:path Paths may be as deep as required, e.g. remote:directory/subdirectory. -Authentication types +Authentication -Some of the whitelabel versions uses a different authentication method -than the official service, and you have to choose the correct one when -setting up the remote. +Authentication in Jottacloud is in general based on OAuth and OpenID +Connect (OIDC). There are different variants to choose from, depending +on which service you are using, e.g. a white-label service may only +support one of them. 
Note that there is no documentation to rely on, so +the descriptions provided here are based on observations and may not be +accurate. -Standard authentication +Jottacloud uses two optional OAuth security mechanisms, referred to as +"Refresh Token Rotation" and "Automatic Reuse Detection", which has some +implications. Access tokens normally have one hour expiry, after which +they need to be refreshed (rotated), an operation that requires the +refresh token to be supplied. Rclone does this automatically. This is +standard OAuth. But in Jottacloud, such a refresh operation not only +creates a new access token, but also refresh token, and invalidates the +existing refresh token, the one that was supplied. It keeps track of the +history of refresh tokens, sometimes referred to as a token family, +descending from the original refresh token that was issued after the +initial authentication. This is used to detect any attempts at reusing +old refresh tokens, and trigger an immedate invalidation of the current +refresh token, and effectively the entire refresh token family. -The standard authentication method used by the official service -(jottacloud.com), as well as some of the whitelabel services, requires -you to generate a single-use personal login token from the account -security settings in the service's web interface. Log in to your -account, go to "Settings" and then "Security", or use the direct link -presented to you by rclone when configuring the remote: -https://www.jottacloud.com/web/secure. Scroll down to the section -"Personal login token", and click the "Generate" button. Note that if -you are using a whitelabel service you probably can't use the direct -link, you need to find the same page in their dedicated web interface, -and also it may be in a different location than described above. +When the current refresh token has been invalidated, next time rclone +tries to perform a token refresh, it will fail with an error message +something along the lines of: -To access your account from multiple instances of rclone, you need to -configure each of them with a separate personal login token. E.g. you -create a Jottacloud remote with rclone in one location, and copy the -configuration file to a second location where you also want to run -rclone and access the same remote. Then you need to replace the token -for one of them, using the config reconnect command, which requires you -to generate a new personal login token and supply as input. If you do -not do this, the token may easily end up being invalidated, resulting in -both instances failing with an error message something along the lines -of: + CRITICAL: Failed to create file system for "remote:": (...): couldn't fetch token: invalid_grant: maybe token expired? - try refreshing with "rclone config reconnect remote:" - oauth2: cannot fetch token: 400 Bad Request - Response: {"error":"invalid_grant","error_description":"Stale token"} +If you run rclone with verbosity level 2 (-vv), you will see a debug +message with an additional error description from the OAuth response: -When this happens, you need to replace the token as described above to -be able to use your remote again. + DEBUG : remote: got fatal oauth error: oauth2: "invalid_grant" "Session doesn't have required client" -All personal login tokens you have taken into use will be listed in the -web interface under "My logged in devices", and from the right side of -that list you can click the "X" button to revoke individual tokens. 
+(The error description used to be "Stale token" instead of "Session +doesn't have required client", so you may see references to that in +older descriptions of this situation.) -Legacy authentication +When this happens, you need to re-authenticate to be able to use your +remote again, e.g. using the config reconnect command as suggested in +the error message. This will create an entirely new refresh token +(family). -If you are using one of the whitelabel versions (e.g. from Elkjøp) you -may not have the option to generate a CLI token. In this case you'll -have to use the legacy authentication. To do this select yes when the -setup asks for legacy authentication and enter your username and -password. The rest of the setup is identical to the default setup. +A typical example of how you may end up in this situation, is if you +create a Jottacloud remote with rclone in one location, and then copy +the configuration file to a second location where you start using rclone +to access the same remote. Eventually there will now be a token refresh +attempt with an invalidated token, i.e. refresh token reuse, resulting +in both instances starting to fail with the "invalid_grant" error. It is +possible to copy remote configurations, but you must then replace the +token for one of them using the config reconnect command. -Telia Cloud authentication +You can get some overview of your active tokens in your service's web +user interface, if you navigate to "Settings" and then "Security" (in +which case you end up at https://www.jottacloud.com/web/secure or +similar). Down on that page you have a section "My logged in devices". +This contains a list of entries which seemingly represents currently +valid refresh tokens, or refresh token families. From the right side of +that list you can click a button ("X") to revoke (invalidate) it, which +means you will still have access using an existing access token until +that expires, but you will not be able to perform a token refresh. Note +that this entire "My logged in devices" feature seem to behave a bit +differently with different authentication variants and with use of the +different (white-label) services. -Similar to other whitelabel versions Telia Cloud doesn't offer the -option of creating a CLI token, and additionally uses a separate -authentication flow where the username is generated internally. To setup -rclone to use Telia Cloud, choose Telia Cloud authentication in the -setup. The rest of the setup is identical to the default setup. +Standard -Tele2 Cloud authentication +This is an OAuth variant designed for command-line applications. It is +primarily supported by the official service (jottacloud.com), but may +also be supported by some of the white-label services. The information +necessary to be able to perform authentication, like domain name and +endpoint to connect to, are found automatically (it is encoded into the +supplied login token, described next), so you do not need to specify +which service to configure. -As Tele2-Com Hem merger was completed this authentication can be used -for former Com Hem Cloud and Tele2 Cloud customers as no support for -creating a CLI token exists, and additionally uses a separate -authentication flow where the username is generated internally. To setup -rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in the -setup. The rest of the setup is identical to the default setup. 
+When configuring a remote, you are asked to enter a single-use personal +login token, which you must manually generate from the account security +settings in the service's web interface. You do not need a web browser +on the same machine like with traditional OAuth, but need to use a web +browser somewhere, and be able to be copy the generated string into your +rclone configuration session. Log in to your service's web user +interface, navigate to "Settings" and then "Security", or, for the +official service, use the direct link presented to you by rclone when +configuring the remote: https://www.jottacloud.com/web/secure. Scroll +down to the section "Personal login token", and click the "Generate" +button. Copy the presented string and paste it where rclone asks for it. +Rclone will then use this to perform an initial token request, and +receive a regular OAuth token which it stores in your remote +configuration. There will then also be a new entry in the "My logged in +devices" list in the web interface, with device name and application +name "Jottacloud CLI". -Onlime Cloud Storage authentication +Each time a new token is created this way, i.e. a new personal login +token is generated and traded in for an OAuth token, you get an entirely +new refresh token family, with a new entry in the "My logged in +devices". You can create as many remotes as you want, and use multiple +instances of rclone on same or different machine, as long as you +configure them separately like this, and not get your self into the +refresh token reuse issue described above. -Onlime has sold access to Jottacloud proper, while providing localized -support to Danish Customers, but have recently set up their own hosting, -transferring their customers from Jottacloud servers to their own ones. +Traditional -This, of course, necessitates using their servers for authentication, -but otherwise functionality and architecture seems equivalent to -Jottacloud. +Jottacloud also supports a more traditional OAuth variant. Most of the +white-label services support this, and for many of them this is the only +alternative because they do not support personal login tokens. This +method relies on pre-defined service-specific domain names and +endpoints, and rclone need you to specify which service to configure. +This also means that any changes to existing or additions of new +white-label services needs an update in the rclone backend +implementation. -To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud -authentication in the setup. The rest of the setup is identical to the -default setup. +When configuring a remote, you must interactively login to an OAuth +authorization web site, and a one-time authorization code is sent back +to rclone behind the scene, which it uses to request an OAuth token. +This means that you need to be on a machine with an internet-connected +web browser. If you need it on a machine where this is not the case, +then you will have to create the configuration on a different machine +and copy it from there. The Jottacloud backend does not support the +rclone authorize command. See the remote setup docs for details. + +Jottacloud exerts some form of strict session management when +authenticating using this method. This leads to some unexpected cases of +the "invalid_grant" error described above, and effectively limits you to +only use of a single active authentication on the same machine. I.e. 
you +can only create a single rclone remote, and you can't even log in with +the service's official desktop client while having a rclone remote +configured, or else you will eventually get all sessions invalidated and +are forced to re-authenticate. + +When you have successfully authenticated, there will be an entry in the +"My logged in devices" list in the web interface representing your +session. It will typically be listed with application name "Jottacloud +for Desktop" or similar (it depends on the white-label service +configuration). + +Legacy + +Originally Jottacloud used an OAuth variant which required your +account's username and password to be specified. When Jottacloud +migrated to the newer methods, some white-label versions (those from +Elkjøp) still used this legacy method for a long time. Currently there +are no known uses of this, it is still supported by rclone, but the +support will be removed in a future version. Configuration @@ -45116,7 +48784,10 @@ This will guide you through an interactive setup process: s) Set configuration password q) Quit config n/s/q> n + + Enter name for new remote. name> remote + Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. @@ -45125,60 +48796,63 @@ This will guide you through an interactive setup process: \ (jottacloud) [snip] Storage> jottacloud + + Option client_id. + OAuth Client Id. + Leave blank normally. + Enter a value. Press Enter to leave empty. + client_id> + + Option client_secret. + OAuth Client Secret. + Leave blank normally. + Enter a value. Press Enter to leave empty. + client_secret> + Edit advanced config? y) Yes n) No (default) y/n> n + Option config_type. - Select authentication type. - Choose a number from below, or type in an existing string value. + Type of authentication. + Choose a number from below, or type in an existing value of type string. Press Enter for the default (standard). / Standard authentication. - 1 | Use this if you're a normal Jottacloud user. + | This is primarily supported by the official service, but may also be + | supported by some white-label services. It is designed for command-line + 1 | applications, and you will be asked to enter a single-use personal login + | token which you must manually generate from the account security settings + | in the web interface of your service. \ (standard) + / Traditional authentication. + | This is supported by the official service and all white-label services + | that rclone knows about. You will be asked which service to connect to. + 2 | It has a limitation of only a single active authentication at a time. You + | need to be on, or have access to, a machine with an internet-connected + | web browser. + \ (traditional) / Legacy authentication. - 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + 3 | This is no longer supported by any known services and not recommended + | used. You will be asked for your account's username and password. \ (legacy) - / Telia Cloud authentication. - 3 | Use this if you are using Telia Cloud. - \ (telia) - / Tele2 Cloud authentication. - 4 | Use this if you are using Tele2 Cloud. - \ (tele2) - / Onlime Cloud authentication. - 5 | Use this if you are using Onlime Cloud. - \ (onlime) config_type> 1 + + Option config_login_token. Personal login token. 
- Generate here: https://www.jottacloud.com/web/secure - Login Token> + Generate it from the account security settings in the web interface of your + service, for the official service on https://www.jottacloud.com/web/secure. + Enter a value. + config_login_token> + Use a non-standard device/mountpoint? Choosing no, the default, will let you access the storage used for the archive section of the official Jottacloud client. If you instead want to access the sync or the backup section, for example, you must choose yes. y) Yes n) No (default) - y/n> y - Option config_device. - The device to use. In standard setup the built-in Jotta device is used, - which contains predefined mountpoints for archive, sync etc. All other devices - are treated as backup devices by the official Jottacloud client. You may create - a new by entering a unique name. - Choose a number from below, or type in your own string value. - Press Enter for the default (DESKTOP-3H31129). - 1 > DESKTOP-3H31129 - 2 > Jotta - config_device> 2 - Option config_mountpoint. - The mountpoint to use for the built-in device Jotta. - The standard setup is to use the Archive mountpoint. Most other mountpoints - have very limited support in rclone and should generally be avoided. - Choose a number from below, or type in an existing string value. - Press Enter for the default (Archive). - 1 > Archive - 2 > Shared - 3 > Sync - config_mountpoint> 1 + y/n> n + Configuration complete. Options: - type: jottacloud @@ -45196,7 +48870,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Jottacloud @@ -45555,7 +49230,7 @@ the password a nice name like rclone and clicking on generate. Here is an example of how to make a remote called koofr. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -45619,7 +49294,7 @@ You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this: List directories in top level of your Koofr @@ -45781,7 +49456,7 @@ Koofr API. Here is an example of how to make a remote called ds. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -45848,7 +49523,7 @@ URL to connect to. Here is an example of how to make a remote called other. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -45922,7 +49597,7 @@ Here is an example of making a remote for Linkbox. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -46459,6 +50134,9 @@ files without knowledge of the key used for encryption. This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption. +Note MEGA S4 Object Storage, an S3 compatible object store, also works +with rclone and this is recommended for new projects. + Paths are specified as remote:path Paths may be as deep as required, e.g. remote:directory/subdirectory. @@ -46467,7 +50145,7 @@ Configuration Here is an example of how to make a remote called remote. 
First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -46511,7 +50189,8 @@ NOTE: The encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Mega @@ -46556,7 +50235,7 @@ Object not found If you are connecting to your Mega remote for the first time, to test access and synchronization, you may receive an error such as - Failed to create file system for "my-mega-remote:": + Failed to create file system for "my-mega-remote:": couldn't login: Object (typically, node or user) not found The diagnostic steps often recommended in the rclone forum start with @@ -46655,10 +50334,43 @@ Properties: - Type: string - Required: true +--mega-2fa + +The 2FA code of your MEGA account if the account is set up with one + +Properties: + +- Config: 2fa +- Env Var: RCLONE_MEGA_2FA +- Type: string +- Required: false + Advanced options Here are the Advanced options specific to mega (Mega). +--mega-session-id + +Session (internal use only) + +Properties: + +- Config: session_id +- Env Var: RCLONE_MEGA_SESSION_ID +- Type: string +- Required: false + +--mega-master-key + +Master key (internal use only) + +Properties: + +- Config: master_key +- Env Var: RCLONE_MEGA_MASTER_KEY +- Type: string +- Required: false + --mega-debug Output more debug from Mega. @@ -46761,7 +50473,7 @@ Configuration You can configure it as a remote like this with rclone config too if you want to: - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -46827,14 +50539,18 @@ remote:/path/to/dir. If you have a CP code you can use that as the folder after the domain such as //. -For example, this is commonly configured with or without a CP code: * -With a CP code. -[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/ * Without a -CP code. [your-domain-prefix]-nsu.akamaihd.net +For example, this is commonly configured with or without a CP code: -See all buckets rclone lsd remote: The initial setup for Netstorage -involves getting an account and secret. Use rclone config to walk you -through the setup process. +- With a CP code. + [your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/ +- Without a CP code. [your-domain-prefix]-nsu.akamaihd.net + +See all buckets + + rclone lsd remote: + +The initial setup for Netstorage involves getting an account and secret. +Use rclone config to walk you through the setup process. Configuration @@ -46842,77 +50558,77 @@ Here's an example of how to make a remote called ns1. 1. To begin the interactive configuration process, enter this command: - rclone config + rclone config 2. Type n to create a new remote. - n) New remote - d) Delete remote - q) Quit config - e/n/d/q> n + n) New remote + d) Delete remote + q) Quit config + e/n/d/q> n 3. For this example, enter ns1 when you reach the name> prompt. - name> ns1 + name> ns1 4. Enter netstorage as the type of storage to configure. - Type of storage to configure. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - XX / NetStorage - \ "netstorage" - Storage> netstorage + Type of storage to configure. + Enter a string value. Press Enter for the default (""). 
+ Choose a number from below, or type in your own value + XX / NetStorage + \ "netstorage" + Storage> netstorage 5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes. - Enter a string value. Press Enter for the default (""). - Choose a number from below, or type in your own value - 1 / HTTP protocol - \ "http" - 2 / HTTPS protocol - \ "https" - protocol> 1 + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / HTTP protocol + \ "http" + 2 / HTTPS protocol + \ "https" + protocol> 1 6. Specify your NetStorage host, CP code, and any necessary content paths using this format: /// - Enter a string value. Press Enter for the default (""). - host> baseball-nsu.akamaihd.net/123456/content/ + Enter a string value. Press Enter for the default (""). + host> baseball-nsu.akamaihd.net/123456/content/ 7. Set the netstorage account name - Enter a string value. Press Enter for the default (""). - account> username + Enter a string value. Press Enter for the default (""). + account> username 8. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the y option to set your own password then enter your secret. Note: The secret is stored in the rclone.conf file with hex-encoded encryption. - y) Yes type in my own password - g) Generate random password - y/g> y - Enter the password: - password: - Confirm the password: - password: + y) Yes type in my own password + g) Generate random password + y/g> y + Enter the password: + password: + Confirm the password: + password: 9. View the summary and confirm your remote configuration. - [ns1] - type = netstorage - protocol = http - host = baseball-nsu.akamaihd.net/123456/content/ - account = username - secret = *** ENCRYPTED *** - -------------------- - y) Yes this is OK (default) - e) Edit this remote - d) Delete this remote - y/e/d> y + [ns1] + type = netstorage + protocol = http + host = baseball-nsu.akamaihd.net/123456/content/ + account = username + secret = *** ENCRYPTED *** + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y This remote is called ns1 and can now be used. @@ -46937,7 +50653,7 @@ Delete content on remote rclone delete ns1:/974012/testing/notes.txt -Move or copy content between CP codes. +Move or copy content between CP codes Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes. @@ -47119,7 +50835,7 @@ Backend commands Here are the commands specific to the netstorage backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -47133,7 +50849,7 @@ backend/command. du -Return disk usage information for a specified directory +Return disk usage information for a specified directory. rclone backend du remote: [options] [+] @@ -47149,7 +50865,11 @@ You can create a symbolic link in ObjectStore with the symlink action. The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if -applicable. rclone backend symlink +applicable. + +Usage example: + + rclone backend symlink Microsoft Azure Blob Storage @@ -47162,7 +50882,7 @@ Configuration Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called remote. 
First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -47316,11 +51036,11 @@ It reads configuration from these variables, in the following order: - AZURE_USERNAME: a username (usually an email address) - AZURE_PASSWORD: the user's password 4. Workload Identity - - AZURE_TENANT_ID: Tenant to authenticate in. + - AZURE_TENANT_ID: Tenant to authenticate in - AZURE_CLIENT_ID: Client ID of the application the user will - authenticate to. + authenticate to - AZURE_FEDERATED_TOKEN_FILE: Path to projected service account - token file. + token file - AZURE_AUTHORITY_HOST: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). @@ -48157,7 +51877,7 @@ You can set custom upload headers with the --header-upload flag. - X-MS-Tags Eg --header-upload "Content-Type: text/potato" or ---header-upload "X-MS-Tags: foo=bar" +--header-upload "X-MS-Tags: foo=bar". Limitations @@ -48169,7 +51889,7 @@ backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Azure Storage Emulator Support @@ -48198,7 +51918,7 @@ Configuration Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -48379,11 +52099,11 @@ It reads configuration from these variables, in the following order: - AZURE_USERNAME: a username (usually an email address) - AZURE_PASSWORD: the user's password 4. Workload Identity - - AZURE_TENANT_ID: Tenant to authenticate in. + - AZURE_TENANT_ID: Tenant to authenticate in - AZURE_CLIENT_ID: Client ID of the application the user will - authenticate to. + authenticate to - AZURE_FEDERATED_TOKEN_FILE: Path to projected service account - token file. + token file - AZURE_AUTHORITY_HOST: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). @@ -48982,7 +52702,7 @@ it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -49059,8 +52779,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. This only runs from the moment it @@ -49068,7 +52788,8 @@ opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your OneDrive @@ -49102,7 +52823,7 @@ To create your own Client ID, please follow these steps: to. This is free, but you need to provide a phone number, address, and credit card for identity verification. 2. 
Enter a name for your app, choose account type - Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), + Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), select Web in Redirect URI, then type (do not copy and paste) http://localhost:53682/ and click Register. Copy and keep the Application (client) ID under the app name for later use. @@ -49178,6 +52899,17 @@ client credentials flow. In particular the "onedrive" option does not work. You can use the "sharepoint" option or if that does not find the correct drive ID type it in manually with the "driveid" option. +To back up any user's data using this flow, grant your Azure AD +application the necessary Microsoft Graph Application permissions (such +as Files.Read.All, Sites.Read.All and/or Sites.Selected). With these +permissions, rclone can access drives across the tenant, but it needs to +know which user or drive you want. Supply a specific drive_id +corresponding to that user's OneDrive, or a SharePoint site ID for +SharePoint libraries. You can obtain a user's drive ID using Microsoft +Graph (e.g. /users/{userUPN}/drive) and then configure it in rclone. +Once the correct drive ID is provided, rclone will back up that user's +data using the app-only token without requiring their credentials. + NOTE Assigning permissions directly to the application means that anyone with the Client ID and Client Secret can access your OneDrive files. Take care to safeguard these credentials. @@ -50011,18 +53743,19 @@ manually setup a remote per user you wish to impersonate. this creates the link of the format https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/ but also changes the permissions so you your admin user has access. + 2. Then in powershell run the following commands: - Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force - Import-Module Microsoft.Graph.Files - Connect-MgGraph -Scopes "Files.ReadWrite.All" - # Follow the steps to allow access to your admin user - # Then run this for each user you want to impersonate to get the Drive ID - Get-MgUserDefaultDrive -UserId '{emailaddress}' - # This will give you output of the format: - # Name Id DriveType CreatedDateTime - # ---- -- --------- --------------- - # OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm + Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force + Import-Module Microsoft.Graph.Files + Connect-MgGraph -Scopes "Files.ReadWrite.All" + # Follow the steps to allow access to your admin user + # Then run this for each user you want to impersonate to get the Drive ID + Get-MgUserDefaultDrive -UserId '{emailaddress}' + # This will give you output of the format: + # Name Id DriveType CreatedDateTime + # ---- -- --------- --------------- + # OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm 3. Then in rclone add a onedrive remote type, and use the Type in driveID with the DriveID you got in the previous step. One @@ -50293,7 +54026,7 @@ Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -50490,21 +54223,23 @@ rclone about is not supported by the OpenDrive backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. 
-See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Oracle Object Storage +Object Storage provided by the Oracle Cloud Infrastructure (OCI). Read +more at : + - Oracle Object Storage Overview - Oracle Object Storage FAQ -- Oracle Object Storage Limits -Paths are specified as remote:bucket (or remote: for the lsd command.) +Paths are specified as remote:bucket (or remote: for the lsd command). You may put subdirectories in too, e.g. remote:bucket/path/to/dir. Sample command to transfer local artifacts to remote:bucket in oracle object storage: -rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv + rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv Configuration @@ -50513,7 +54248,7 @@ rclone config walks you through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -50657,13 +54392,18 @@ Sample rclone config file for Authentication Provider User Principal: config_file = /home/opc/.oci/config config_profile = Default -Advantages: - One can use this method from any server within OCI or -on-premises or from other cloud provider. +Advantages: -Considerations: - you need to configure user’s privileges / policy to -allow access to object storage - Overhead of managing users and keys. - -If the user is deleted, the config file will no longer work and may -cause automation regressions that use the user's credentials. +- One can use this method from any server within OCI or on-premises or + from other cloud provider. + +Considerations: + +- you need to configure user’s privileges / policy to allow access to + object storage +- Overhead of managing users and keys. +- If the user is deleted, the config file will no longer work and may + cause automation regressions that use the user's credentials. Instance Principal @@ -51271,7 +55011,7 @@ Backend commands Here are the commands specific to the oracleobjectstorage backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -51285,24 +55025,26 @@ backend/command. rename -change the name of an object +change the name of an object. rclone backend rename remote: [options] [+] This command can be used to rename a object. -Usage Examples: +Usage example: rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name list-multipart-uploads -List the unfinished multipart uploads +List the unfinished multipart uploads. rclone backend list-multipart-uploads remote: [options] [+] This command lists the unfinished multipart uploads in JSON format. 
+Usage example: + rclone backend list-multipart-uploads oos:bucket/path/to/object It returns a dictionary of buckets with values as lists of unfinished @@ -51312,21 +55054,23 @@ You can call it with no bucket in which case it lists all bucket, with a bucket or with a bucket and path. { - "test-bucket": [ - { - "namespace": "test-namespace", - "bucket": "test-bucket", - "object": "600m.bin", - "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8", - "timeCreated": "2022-07-29T06:21:16.595Z", - "storageTier": "Standard" - } + "test-bucket": [ + { + "namespace": "test-namespace", + "bucket": "test-bucket", + "object": "600m.bin", + "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8", + "timeCreated": "2022-07-29T06:21:16.595Z", + "storageTier": "Standard" + } ] + } -cleanup + ### cleanup -Remove unfinished multipart uploads. + Remove unfinished multipart uploads. + ```console rclone backend cleanup remote: [options] [+] This command removes unfinished multipart uploads of age greater than @@ -51335,6 +55079,8 @@ max-age which defaults to 24 hours. Note that you can use --interactive/-i or --dry-run with this command to see what it would do. +Usage examples: + rclone backend cleanup oos:bucket/path/to/object rclone backend cleanup -o max-age=7w oos:bucket/path/to/object @@ -51342,18 +55088,18 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. restore -Restore objects from Archive to Standard storage +Restore objects from Archive to Standard storage. rclone backend restore remote: [options] [+] This command can be used to restore one or more objects from Archive to Standard storage. - Usage Examples: +Usage examples: rclone backend restore oos:bucket/path/to/directory -o hours=HOURS rclone backend restore oos:bucket -o hours=HOURS @@ -51363,13 +55109,13 @@ This flag also obeys the filters. Test first with --interactive/-i or rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72 -All the objects shown will be marked for restore, then +All the objects shown will be marked for restore, then: rclone backend restore --include "*.txt" oos:bucket/path -o hours=72 - It returns a list of status dictionaries with Object Name and Status - keys. The Status will be "RESTORED"" if it was successful or an error message - if not. +It returns a list of status dictionaries with Object Name and Status +keys. The Status will be "RESTORED"" if it was successful or an error +message if not. [ { @@ -51404,7 +55150,7 @@ Here is an example of making an QingStor configuration. First run This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote r) Rename remote c) Copy remote @@ -51715,7 +55461,7 @@ rclone about is not supported by the qingstor backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Quatrix @@ -51730,14 +55476,13 @@ You can get the API key in the user's profile at https:///profile/api-keys or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. 
-See complete Swagger documentation for Quatrix - -https://docs.maytech.net/quatrix/quatrix-api/api-explorer +See complete Swagger documentation for Quatrix. Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -51770,7 +55515,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Quatrix @@ -52025,34 +55771,41 @@ external access impossible). However, if you want to access Sia daemon running on another node, for example due to memory constraints or because you want to share single daemon between several rclone and Sia-UI instances, you'll need to make -a few more provisions: - Ensure you have Sia daemon installed directly -or in a docker container because Sia-UI does not support this mode -natively. - Run it on externally accessible port, for example provide ---api-addr :9980 and --disable-api-security arguments on the daemon -command line. - Enforce API password for the siad daemon via environment -variable SIA_API_PASSWORD or text file named apipassword in the daemon -directory. - Set rclone backend option api_password taking it from above -locations. +a few more provisions: -Notes: 1. If your wallet is locked, rclone cannot unlock it -automatically. You should either unlock it in advance by using Sia-UI or -via command line siac wallet unlock. Alternatively you can make siad -unlock your wallet automatically upon startup by running it with -environment variable SIA_WALLET_PASSWORD. 2. If siad cannot find the -SIA_API_PASSWORD variable or the apipassword file in the SIA_DIR -directory, it will generate a random password and store in the text file -named apipassword under YOUR_HOME/.sia/ directory on Unix or -C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. Remember -this when you configure password in rclone. 3. The only way to use siad -without API password is to run it on localhost with command line -argument --authorize-api=false, but this is insecure and strongly -discouraged. +- Ensure you have Sia daemon installed directly or in a docker + container because Sia-UI does not support this mode natively. +- Run it on externally accessible port, for example provide + --api-addr :9980 and --disable-api-security arguments on the daemon + command line. +- Enforce API password for the siad daemon via environment variable + SIA_API_PASSWORD or text file named apipassword in the daemon + directory. +- Set rclone backend option api_password taking it from above + locations. + +Notes: + +1. If your wallet is locked, rclone cannot unlock it automatically. You + should either unlock it in advance by using Sia-UI or via command + line siac wallet unlock. Alternatively you can make siad unlock your + wallet automatically upon startup by running it with environment + variable SIA_WALLET_PASSWORD. +2. If siad cannot find the SIA_API_PASSWORD variable or the apipassword + file in the SIA_DIR directory, it will generate a random password + and store in the text file named apipassword under YOUR_HOME/.sia/ + directory on Unix or + C:\Users\YOUR_HOME\AppData\Local\Sia\apipassword on Windows. + Remember this when you configure password in rclone. +3. 
The only way to use siad without API password is to run it on + localhost with command line argument --authorize-api=false, but this + is insecure and strongly discouraged. Configuration Here is an example of how to make a sia remote called mySia. First, run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -52104,15 +55857,15 @@ Once configured, you can then use rclone like this: - List directories in top level of your Sia storage - rclone lsd mySia: + rclone lsd mySia: - List all the files in your Sia storage - rclone ls mySia: + rclone ls mySia: - Upload a local directory to the Sia directory called backup - rclone copy /home/source mySia:backup + rclone copy /home/source mySia:backup Standard options @@ -52225,7 +55978,7 @@ Here is an example of making a swift configuration. First run This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -52915,7 +56668,7 @@ To retrieve objects use rclone copy as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following: -2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s) + 2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s) Rclone will wait for the time specified then retry the copy. @@ -52932,7 +56685,7 @@ you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -52981,8 +56734,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note if you are using remote config with rclone authorize while your pcloud server is the EU region, you will need to set the hostname in @@ -52994,7 +56747,8 @@ your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your pCloud @@ -53057,13 +56811,23 @@ However you can set this to restrict rclone to a specific folder hierarchy. In order to do this you will have to find the Folder ID of the directory -you wish rclone to display. This will be the folder field of the URL -when you open the relevant folder in the pCloud web interface. +you wish rclone to display. This can be accomplished by executing the +rclone lsf command using a basic configuration setup that does not +include the root_folder_id parameter. -So if the folder you want rclone to use has a URL which looks like -https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid -in the browser, then you use 5xxxxxxxx8 as the root_folder_id in the -config. +The command will enumerate available directories, allowing you to locate +the appropriate Folder ID for subsequent use. 
+ +Example: + + $ rclone lsf --dirs-only -Fip --csv TestPcloud: + dxxxxxxxx2,My Music/ + dxxxxxxxx3,My Pictures/ + dxxxxxxxx4,My Videos/ + +So if the folder you want rclone to use your is "My Music/", then use +the returned id from rclone lsf command (ex. dxxxxxxxx2) as the +root_folder_id variable value in the config file. Standard options @@ -53249,7 +57013,7 @@ Here is an example of making a remote for PikPak. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -53526,7 +57290,7 @@ Backend commands Here are the commands specific to the pikpak backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -53540,13 +57304,13 @@ backend/command. addurl -Add offline download task for url +Add offline download task for url. rclone backend addurl remote: [options] [+] This command adds offline download task for url. -Usage: +Usage example: rclone backend addurl pikpak:dirpath url @@ -53555,13 +57319,13 @@ will fallback to default 'My Pack' folder. decompress -Request decompress of a file/files in a folder +Request decompress of a file/files in a folder. rclone backend decompress remote: [options] [+] This command requests decompress of file/files in a folder. -Usage: +Usage examples: rclone backend decompress pikpak:dirpath {filename} -o password=password rclone backend decompress pikpak:dirpath {filename} -o delete-src-file @@ -53614,7 +57378,7 @@ backend. Example: - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote d) Delete remote c) Copy remote @@ -53789,7 +57553,7 @@ you through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -53831,8 +57595,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. This only runs from the moment it @@ -53840,7 +57604,8 @@ opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your premiumize.me @@ -54034,7 +57799,7 @@ Configurations Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -54082,7 +57847,8 @@ NOTE: The Proton Drive encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail. 
-Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Proton Drive @@ -54171,6 +57937,25 @@ Properties: - Type: string - Required: false +--protondrive-otp-secret-key + +The OTP secret key + +The value can also be provided with +--protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 + +The OTP secret key of your proton drive account if the account is set up +with two-factor authentication + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: otp_secret_key +- Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY +- Type: string +- Required: false + Advanced options Here are the Advanced options specific to protondrive (Proton Drive). @@ -54380,7 +58165,7 @@ you need to do in your browser. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -54436,8 +58221,8 @@ This will guide you through an interactive setup process: q) Quit config e/n/d/r/c/s/q> q -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if using web browser to automatically @@ -54618,7 +58403,7 @@ Configurations Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -54666,7 +58451,8 @@ NOTE: The Proton Drive encryption keys need to have been already generated after a regular login via the browser, otherwise attempting to use the credentials in rclone will fail. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Proton Drive @@ -54755,6 +58541,25 @@ Properties: - Type: string - Required: false +--protondrive-otp-secret-key + +The OTP secret key + +The value can also be provided with +--protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 + +The OTP secret key of your proton drive account if the account is set up +with two-factor authentication + +NB Input to this must be obscured - see rclone obscure. + +Properties: + +- Config: otp_secret_key +- Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY +- Type: string +- Required: false + Advanced options Here are the Advanced options specific to protondrive (Proton Drive). @@ -54952,22 +58757,27 @@ this library, as there isn't official documentation available. Seafile -This is a backend for the Seafile storage service: - It works with both -the free community edition or the professional edition. - Seafile -versions 6.x, 7.x, 8.x and 9.x are all supported. - Encrypted libraries -are also supported. - It supports 2FA enabled users - Using a Library -API Token is not supported +This is a backend for the Seafile storage service: + +- It works with both the free community edition or the professional + edition. +- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported. +- Encrypted libraries are also supported. 
+- It supports 2FA enabled users +- Using a Library API Token is not supported Configuration -There are two distinct modes you can setup your remote: - you point your -remote to the root of the server, meaning you don't specify a library -during the configuration: Paths are specified as remote:library. You may -put subdirectories in too, e.g. remote:library/path/to/dir. - you point -your remote to a specific library during the configuration: Paths are -specified as remote:path/to/dir. This is the recommended mode when using -encrypted libraries. (This mode is possibly slightly faster than the -root mode) +There are two distinct modes you can setup your remote: + +- you point your remote to the root of the server, meaning you don't + specify a library during the configuration: Paths are specified as + remote:library. You may put subdirectories in too, e.g. + remote:library/path/to/dir. +- you point your remote to a specific library during the + configuration: Paths are specified as remote:path/to/dir. This is + the recommended mode when using encrypted libraries. (This mode is + possibly slightly faster than the root mode) Configuration in root mode @@ -54980,7 +58790,7 @@ This will guide you through an interactive setup process. To authenticate you will need the URL of your server, your email (or username) and your password. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -55070,7 +58880,7 @@ Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you: - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -55186,12 +58996,12 @@ Seafile and rclone link Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory: - rclone link seafile:seafile-tutorial.doc + $ rclone link seafile:seafile-tutorial.doc http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ or if run on a directory you will get: - rclone link seafile:dir + $ rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ Please note a share link is unique for each file or directory. If you @@ -55201,8 +59011,12 @@ get the exact same link. Compatibility It has been actively developed using the seafile docker image of these -versions: - 6.3.4 community edition - 7.0.5 community edition - 7.1.3 -community edition - 9.0.10 community edition +versions: + +- 6.3.4 community edition +- 7.0.5 community edition +- 7.1.3 community edition +- 9.0.10 community edition Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly. @@ -55375,7 +59189,7 @@ Here is an example of making an SFTP configuration. First run This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -55463,7 +59277,7 @@ are supported. The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('' -or '') separating lines. i.e. +or '') separating lines. I.e. 
key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY----- @@ -56578,7 +60392,7 @@ when you started to share on Windows. On smbd, it's the section title in smb.conf (usually in /etc/samba/) file. You can find shares by querying the root if you're unsure (e.g. rclone lsd remote:). -You can't access to the shared printers from rclone, obviously. +You can't access the shared printers from rclone, obviously. You can't use Anonymous access for logging in. You have to use the guest user with an empty password instead. The rclone client tries to avoid @@ -56597,7 +60411,7 @@ First run This will guide you through an interactive setup process. - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -56958,19 +60772,20 @@ Side by side comparison with more details: Configuration -To make a new Storj configuration you need one of the following: * -Access Grant that someone else shared with you. * API Key of a Storj -project you are a member of. +To make a new Storj configuration you need one of the following: + +- Access Grant that someone else shared with you. +- API Key of a Storj project you are a member of. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: Setup with access grant - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -57010,7 +60825,7 @@ Setup with access grant Setup with API key and passphrase - No remotes found, make a new one? + No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -57217,7 +61032,8 @@ Use the ls command to list recursively all objects in a bucket. Add the folder to the remote path to list recursively all objects in this folder. - rclone ls remote:bucket/path/to/dir/ + $ rclone ls remote:bucket + /path/to/dir/ Use the lsf command to list non-recursively all objects in a bucket or a folder. @@ -57286,7 +61102,7 @@ without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Known issues @@ -57320,7 +61136,7 @@ which you can do with rclone. rclone config walks you through it. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -57377,7 +61193,8 @@ This will guide you through an interactive setup process: Note that the config asks for your email and password but doesn't store them, it only uses them to get the initial token. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories (sync folders) in top level of your SugarSync @@ -57586,7 +61403,7 @@ rclone about is not supported by the SugarSync backend. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. 
Uloz.to @@ -57601,7 +61418,7 @@ Configuration Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -57653,7 +61470,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List folders in root level folder: @@ -57848,7 +61666,7 @@ the API. Backends without this capability cannot determine free space for an rclone mount or use policy mfs (most free space) as a member of an rclone union remote. -See List of backends that do not support rclone about and rclone about +See List of backends that do not support rclone about and rclone about. Uptobox @@ -57863,7 +61681,7 @@ Paths may be as deep as required, e.g. remote:directory/subdirectory. Configuration To configure an Uptobox backend you'll need your personal api token. -You'll find it in your account settings +You'll find it in your account settings. Here is an example of how to make a remote called remote with the default setup. First run: @@ -57913,9 +61731,10 @@ This will guide you through an interactive setup process: y) Yes this is OK (default) e) Edit this remote d) Delete this remote - y/e/d> + y/e/d> -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your Uptobox @@ -58044,7 +61863,7 @@ Configuration Here is an example of how to make a union called remote for local folders. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -58102,7 +61921,7 @@ This will guide you through an interactive setup process: q) Quit config e/n/d/r/c/s/q> q -Once configured you can then use rclone like this, +Once configured you can then use rclone like this: List directories in top level in remote1:dir1, remote2:dir2 and remote3:dir3 @@ -58401,7 +62220,7 @@ connecting to then rclone can enable extra features. Here is an example of how to make a remote called remote. First run: - rclone config + rclone config This will guide you through an interactive setup process: @@ -58468,7 +62287,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): List directories in top level of your WebDAV @@ -58970,8 +62790,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> y -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. This only runs from the moment it @@ -58979,7 +62799,8 @@ opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/ and this it may require you to unblock it temporarily if you are running a host firewall. 
-Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): See top level directories @@ -59248,8 +63069,8 @@ This will guide you through an interactive setup process: d) Delete this remote y/e/d> -See the remote setup docs for how to set it up on a machine with no -Internet browser available. +See the remote setup docs for how to set it up on a machine without an +internet-connected web browser available. Rclone runs a webserver on your local computer to collect the authorization token from Zoho Workdrive. This is only from the moment @@ -59258,7 +63079,8 @@ on http://127.0.0.1:53682/. If local port 53682 is protected by a firewall you may need to temporarily unblock the firewall to complete authorization. -Once configured you can then use rclone like this, +Once configured you can then use rclone like this (replace remote with +the name you gave your remote): See top level directories @@ -59682,7 +63504,7 @@ For example, supposing you have a directory structure like this Copying the entire directory with '-l' - $ rclone copy -l /tmp/a/ remote:/tmp/a/ + rclone copy -l /tmp/a/ remote:/tmp/a/ The remote files are created with a .rclonelink suffix @@ -59750,7 +63572,7 @@ For example if you have a directory hierarchy like this └── file2 - stored on the root disk Using rclone --one-file-system copy root remote: will only copy file1 -and file2. Eg +and file2. E.g. $ rclone -q --one-file-system ls root 0 file1 @@ -59823,6 +63645,20 @@ Properties: - Type: bool - Default: false +--skip-specials + +Don't warn about skipped pipes, sockets and device objects. + +This flag disables warning messages on skipped pipes, sockets and device +objects, as you explicitly acknowledge that they should be skipped. + +Properties: + +- Config: skip_specials +- Env Var: RCLONE_LOCAL_SKIP_SPECIALS +- Type: bool +- Default: false + --local-zero-size-links Assume the Stat size of links is zero (and read them instead) @@ -60146,7 +63982,7 @@ Backend commands Here are the commands specific to the local backend. -Run them with +Run them with: rclone backend COMMAND remote: @@ -60160,7 +63996,7 @@ backend/command. noop -A null operation for testing backend commands +A null operation for testing backend commands. rclone backend noop remote: [options] [+] @@ -60169,11 +64005,223 @@ output. Options: -- "echo": echo the input arguments -- "error": return an error based on option value +- "echo": Echo the input arguments. +- "error": Return an error based on option value. Changelog +v1.72.0 - 2025-11-21 + +See commits + +- New backends + - Archive backend to read archives on cloud storage. 
(Nick
+ Craig-Wood)
+- New S3 providers
+ - Cubbit Object Storage (Marco Ferretti)
+ - FileLu S5 Object Storage (kingston125)
+ - Hetzner Object Storage (spiffytech)
+ - Intercolo Object Storage (Robin Rolf)
+ - Rabata S3-compatible secure cloud storage (dougal)
+ - Servercore Object Storage (dougal)
+ - SpectraLogic (dougal)
+- New commands
+ - rclone archive: command to create and read archive files (Fawzib
+ Rojas)
+ - rclone config string: for making connection strings (Nick
+ Craig-Wood)
+ - rclone test speed: Add command to test a specified remote's speed
+ (dougal)
+- New Features
+ - backends: many backends have had a paged listing (ListP)
+ interface added
+ - this enables progress when listing large directories and
+ reduces memory usage
+ - build
+ - Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix
+ CVE-2025-58181 (dependabot[bot])
+ - Modernize code and tests (Nick Craig-Wood, russcoss,
+ juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko)
+ - Update all dependencies (Nick Craig-Wood)
+ - Enable support for aix/ppc64 (Lakshmi-Surekha)
+ - check: Improved reporting of differences in sizes and contents
+ (albertony)
+ - copyurl: Added --url to read URLs from CSV file (S-Pegg1,
+ dougal)
+ - docs:
+ - markdown linting (albertony)
+ - fixes (albertony, Andrew Gunnerson, anon-pradip, Claudius
+ Ellsel, dougal, iTrooz, Jean-Christophe Cura, Joseph
+ Brownlee, kapitainsky, Matt LaPaglia, n4n5, Nick Craig-Wood,
+ nielash, SublimePeace, Ted Robertson, vastonus)
+ - fs: remove unnecessary Seek call on log file (Aneesh Agrawal)
+ - hashsum: Improved output format when listing algorithms
+ (albertony)
+ - lib/http: Cleanup indentation and other whitespace in http serve
+ template (albertony)
+ - lsf: Add support for unix and unixnano time formats (Motte)
+ - oauthutil: Improved debug logs from token refresh (albertony)
+ - rc
+ - Add job/batch for sending batches of rc commands to run
+ concurrently (Nick Craig-Wood)
+ - Add runningIds and finishedIds to job/list (n4n5)
+ - Add osVersion, osKernel and osArch to core/version (Nick
+ Craig-Wood)
+ - Make sure fatal errors run via the rc don't crash rclone
+ (Nick Craig-Wood)
+ - Add executeId to job statuses in job/list (Nikolay Kiryanov)
+ - config/unlock: rename parameter to configPassword, accepting
+ the old name as well (Nick Craig-Wood)
+ - serve http: Download folders as zip (dougal)
+- Bug Fixes
+ - build
+ - Fix tls: failed to verify certificate: x509: negative serial
+ number (Nick Craig-Wood)
+ - march
+ - Fix --no-traverse being very slow (Nick Craig-Wood)
+ - serve s3: Fix log output to remove the EXTRA messages (iTrooz)
+- Mount
+ - Windows: improve error message on missing WinFSP (divinity76)
+- Local
+ - Add --skip-specials to ignore special files (Adam Dinwoodie)
+- Azure Blob
+ - Add ListP interface (dougal)
+- Azurefiles
+ - Add ListP interface (Nick Craig-Wood)
+- B2
+ - Add ListP interface (dougal)
+ - Add Server-Side encryption support (fries1234)
+ - Fix "expected a FileSseMode but found: ''" (dougal)
+ - Allow individual old versions to be deleted with --b2-versions
+ (dougal)
+- Box
+ - Add ListP interface (Nick Craig-Wood)
+ - Allow configuration with config file contents (Dominik Sander)
+- Compress
+ - Add zstd compression (Alex)
+- Drive
+ - Add ListP interface (Nick Craig-Wood)
+- Dropbox
+ - Add ListP interface (Nick Craig-Wood)
+ - Fix error moving just created objects (Nick Craig-Wood)
+- FTP
+ - Fix SOCKS proxy support (dougal)
+ - Fix transfers from servers that return 250 ok messages (jijamik)
+- Google Cloud
Storage + - Add ListP interface (dougal) + - Fix --gcs-storage-class to work with server side copy for + objects (Riaz Arbi) +- HTTP + - Add basic metadata and provide it via serve (Oleg Kunitsyn) +- Jottacloud + - Add support for Let's Go Cloud (from MediaMarkt) as a whitelabel + service (albertony) + - Add support for MediaMarkt Cloud as a whitelabel service + (albertony) + - Added support for traditional oauth authentication also for the + main service (albertony) + - Abort attempts to run unsupported rclone authorize command + (albertony) + - Improved token refresh handling (albertony) + - Fix legacy authentication (albertony) + - Fix authentication for whitelabel services from Elkjøp + subsidiaries (albertony) +- Mega + - Implement 2FA login (iTrooz) +- Memory + - Add ListP interface (dougal) +- Onedrive + - Add ListP interface (Nick Craig-Wood) +- Oracle Object Storage + - Add ListP interface (dougal) +- Pcloud + - Add ListP interface (Nick Craig-Wood) +- Proton Drive + - Automated 2FA login with OTP secret key (Microscotch) +- S3 + - Make it easier to add new S3 providers (dougal) + - Add --s3-use-data-integrity-protections quirk to fix BadDigest + error in Alibaba, Tencent (hunshcn) + - Add support for --upload-header, If-Match and If-None-Match + (Sean Turner) + - Fix single file copying behavior with low permission (hunshcn) +- SFTP + - Fix zombie SSH processes with --sftp-ssh (Copilot) +- Smb + - Optimize smb mount performance by avoiding stat checks during + initialization (Sudipto Baral) +- Swift + - Add ListP interface (dougal) + - If storage_policy isn't set, use the root containers policy + (Andrew Ruthven) + - Report disk usage in segment containers (Andrew Ruthven) +- Ulozto + - Implement the About functionality (Lukas Krejci) + - Fix downloads returning HTML error page (aliaj1) +- WebDAV + - Optimize bearer token fetching with singleflight (hunshcn) + - Add ListP interface (Nick Craig-Wood) + - Use SpaceSepList to parse bearer token command (hunshcn) + - Add Access-Control-Max-Age header for CORS preflight caching + (viocha) + - Fix out of memory with sharepoint-ntlm when uploading large file + (Nick Craig-Wood) + +v1.71.2 - 2025-10-20 + +See commits + +- Bug Fixes + - build + - update Go to 1.25.3 + - Update Docker image Alpine version to fix CVE-2025-9230 + - bisync: Fix race when CaptureOutput is used concurrently (Nick + Craig-Wood) + - doc fixes (albertony, dougal, iTrooz, Matt LaPaglia, Nick + Craig-Wood) + - index: Add missing providers (dougal) + - serve http: Fix: logging URL on start (dougal) +- Azurefiles + - Fix server side copy not waiting for completion (Vikas Bhansali) +- B2 + - Fix 1TB+ uploads (dougal) +- Google Cloud Storage + - Add region us-east5 (Dulani Woods) +- Mega + - Fix 402 payment required errors (Nick Craig-Wood) +- Pikpak + - Fix unnecessary retries by using URL expire parameter (Youfu + Zhang) + +v1.71.1 - 2025-09-24 + +See commits + +- Bug Fixes + - bisync: Fix error handling for renamed conflicts (nielash) + - march: Fix deadlock when using --fast-list on syncs (Nick + Craig-Wood) + - operations: Fix partial name collisions for non --inplace copies + (Nick Craig-Wood) + - pacer: Fix deadlock with --max-connections (Nick Craig-Wood) + - doc fixes (albertony, anon-pradip, Claudius Ellsel, dougal, + Jean-Christophe Cura, Nick Craig-Wood, nielash) +- Mount + - Do not log successful unmount as an error (Tilman Vogel) +- VFS + - Fix SIGHUP killing serve instead of flushing directory caches + (dougal) +- Local + - Fix rmdir "Access is denied" on 
windows (nielash) +- Box + - Fix about after change in API return (Nick Craig-Wood) +- Combine + - Propagate SlowHash feature (skbeh) +- Drive + - Update making your own client ID instructions (Ed Craig-Wood) +- Internet Archive + - Fix server side copy files with spaces (Nick Craig-Wood) + v1.71.0 - 2025-08-22 See commits @@ -68412,10 +72460,6 @@ Authors Contributors -{{< rem -email addresses removed from here need to be added to bin/.ignore-emails to make sure update-authors.py doesn't immediately put them back in again. ->}} - - Alex Couper amcouper@gmail.com - Leonid Shalupov leonid@shalupov.com shalupov@diverse.org.ru - Shimon Doodkin helpmepro1@gmail.com @@ -69403,6 +73447,7 @@ email addresses removed from here need to be added to bin/.ignore-emails to make - Vikas Bhansali 64532198+vibhansa-msft@users.noreply.github.com - Sudipto Baral sudiptobaral.me@gmail.com - Sam Pegg samrpegg@gmail.com + 70067376+S-Pegg1@users.noreply.github.com - liubingrun liubr1@chinatelecom.cn - Albin Parou fumesover@gmail.com - n4n5 56606507+Its-Just-Nans@users.noreply.github.com @@ -69416,6 +73461,51 @@ email addresses removed from here need to be added to bin/.ignore-emails to make - Lucas Bremgartner breml@users.noreply.github.com - Binbin Qian qianbinbin@hotmail.com - cui 523516579@qq.com +- Tilman Vogel tilman.vogel@web.de +- skbeh 60107333+skbeh@users.noreply.github.com +- Claudius Ellsel claudius.ellsel@live.de +- Motte 37443982+dmotte@users.noreply.github.com +- dougal dougal.craigwood@gmail.com + 147946567+roucc@users.noreply.github.com +- anon-pradip pradipsubedi360@gmail.com +- Robin Rolf imer@imer.cc +- Jean-Christophe Cura jcaspes@gmail.com +- russcoss russcoss@outlook.com +- Matt LaPaglia mlapaglia@gmail.com +- Youfu Zhang 1315097+zhangyoufu@users.noreply.github.com +- juejinyuxitu juejinyuxitu@outlook.com +- iTrooz hey@itrooz.fr +- Microscotch github.com@microscotch.net +- Andrew Ruthven andrew@etc.gen.nz +- spiffytech git@spiffy.tech +- Dulani Woods Dulani@gmail.com +- Marco Ferretti mferretti93@gmail.com +- hunshcn hunsh.cn@gmail.com +- vastonus vastonus@outlook.com +- Oleksandr Redko oleksandr.red+github@gmail.com +- reddaisyy reddaisy@outlook.jp +- viocha viocha@qq.com +- Aneesh Agrawal aneesh@anthropic.com +- divinity76 hans@loltek.net +- Andrew Gunnerson accounts+github@chiller3.com +- Lakshmi-Surekha Lakshmi.Kovvuri@ibm.com +- dulanting dulanting@outlook.jp +- Adam Dinwoodie me-and@users.noreply.github.com +- Lukas Krejci metlos@users.noreply.github.com +- Riaz Arbi riazarbi@users.noreply.github.com +- Fawzib Rojas fawzib.rojas@gmail.com +- fries1234 fries1234@protonmail.com +- Joseph Brownlee 39440458+JellyJoe198@users.noreply.github.com +- Ted Robertson 10043369+tredondo@users.noreply.github.com +- SublimePeace 184005903+SublimePeace@users.noreply.github.com +- Copilot 198982749+Copilot@users.noreply.github.com +- Alex 64072843+A1ex3@users.noreply.github.com +- n4n5 its.just.n4n5@gmail.com +- aliaj1 ali19961@gmail.com +- Sean Turner 30396892+seanturner026@users.noreply.github.com +- jijamik 30904953+jijamik@users.noreply.github.com +- Dominik Sander git@dsander.de +- Nikolay Kiryanov nikolay@kiryanov.ru Contact the rclone project diff --git a/docs/content/archive.md b/docs/content/archive.md index f4fbff892..5bebac01f 100644 --- a/docs/content/archive.md +++ b/docs/content/archive.md @@ -237,7 +237,6 @@ It would be possible to add ISO support fairly easily as the library we use ([go It would be possible to add write support, but this would only be for creating new archives, not for 
updating existing archives. - ### Standard options Here are the Standard options specific to archive (Read archives). diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md index 05192b311..3e4a9fb27 100644 --- a/docs/content/azureblob.md +++ b/docs/content/azureblob.md @@ -959,13 +959,13 @@ Properties: - Type: string - Required: false - Examples: - - "" - - The container and its blobs can be accessed only with an authorized request. - - It's a default value. - - "blob" - - Blob data within this container can be read via anonymous request. - - "container" - - Allow full public read access for container and blob data. + - "" + - The container and its blobs can be accessed only with an authorized request. + - It's a default value. + - "blob" + - Blob data within this container can be read via anonymous request. + - "container" + - Allow full public read access for container and blob data. #### --azureblob-directory-markers @@ -1022,12 +1022,12 @@ Properties: - Type: string - Required: false - Choices: - - "" - - By default, the delete operation fails if a blob has snapshots - - "include" - - Specify 'include' to remove the root blob and all its snapshots - - "only" - - Specify 'only' to remove only the snapshots but keep the root blob. + - "" + - By default, the delete operation fails if a blob has snapshots + - "include" + - Specify 'include' to remove the root blob and all its snapshots + - "only" + - Specify 'only' to remove only the snapshots but keep the root blob. #### --azureblob-description diff --git a/docs/content/b2.md b/docs/content/b2.md index 4b23d9565..eb283dff9 100644 --- a/docs/content/b2.md +++ b/docs/content/b2.md @@ -667,6 +667,71 @@ Properties: - Type: Encoding - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +#### --b2-sse-customer-algorithm + +If using SSE-C, the server-side encryption algorithm used when storing this object in B2. + +Properties: + +- Config: sse_customer_algorithm +- Env Var: RCLONE_B2_SSE_CUSTOMER_ALGORITHM +- Type: string +- Required: false +- Examples: + - "" + - None + - "AES256" + - Advanced Encryption Standard (256 bits key length) + +#### --b2-sse-customer-key + +To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + +Alternatively you can provide --sse-customer-key-base64. + +Properties: + +- Config: sse_customer_key +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY +- Type: string +- Required: false +- Examples: + - "" + - None + +#### --b2-sse-customer-key-base64 + +To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + +Alternatively you can provide --sse-customer-key. + +Properties: + +- Config: sse_customer_key_base64 +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_BASE64 +- Type: string +- Required: false +- Examples: + - "" + - None + +#### --b2-sse-customer-key-md5 + +If using SSE-C you may provide the secret encryption key MD5 checksum (optional). + +If you leave it blank, this is calculated automatically from the sse_customer_key provided. + + +Properties: + +- Config: sse_customer_key_md5 +- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_MD5 +- Type: string +- Required: false +- Examples: + - "" + - None + #### --b2-description Description of the remote. @@ -682,9 +747,11 @@ Properties: Here are the commands specific to the b2 backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. 
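Before the individual commands, here is a minimal sketch tying together the SSE-C options documented above (mybucket is a placeholder, and the key shown is a dummy; supply your own base64-encoded 256-bit key, using only one of the two key forms):

```console
# Upload with a customer-supplied key; the same key must be
# supplied again to read the objects back
rclone copy secret.txt b2:mybucket/docs \
  --b2-sse-customer-algorithm AES256 \
  --b2-sse-customer-key-base64 "c2VjcmV0LWtleS1wbGFjZWhvbGRlcg=="
```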
@@ -696,35 +763,41 @@ These can be run on a running backend using the rc command ### lifecycle -Read or set the lifecycle for a bucket +Read or set the lifecycle for a bucket. - rclone backend lifecycle remote: [options] [+] +```console +rclone backend lifecycle remote: [options] [+] +``` This command can be used to read or set the lifecycle for a bucket. -Usage Examples: - To show the current lifecycle rules: - rclone backend lifecycle b2:bucket +```console +rclone backend lifecycle b2:bucket +``` This will dump something like this showing the lifecycle rules. - [ - { - "daysFromHidingToDeleting": 1, - "daysFromUploadingToHiding": null, - "daysFromStartingToCancelingUnfinishedLargeFiles": null, - "fileNamePrefix": "" - } - ] +```json +[ + { + "daysFromHidingToDeleting": 1, + "daysFromUploadingToHiding": null, + "daysFromStartingToCancelingUnfinishedLargeFiles": null, + "fileNamePrefix": "" + } +] +``` -If there are no lifecycle rules (the default) then it will just return []. +If there are no lifecycle rules (the default) then it will just return `[]`. To reset the current lifecycle rules: - rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30 - rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1 +```console +rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30 +rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1 +``` This will run and then print the new lifecycle rules as above. @@ -736,22 +809,27 @@ the daysFromHidingToDeleting to 1 day. You can enable hard_delete in the config also which will mean deletions won't cause versions but overwrites will still cause versions to be made. - rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1 - -See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules +```console +rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1 +``` +See: Options: -- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off. -- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days -- "daysFromUploadingToHiding": This many days after uploading a file is hidden +- "daysFromHidingToDeleting": After a file has been hidden for this many days +it is deleted. 0 is off. +- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished +large file versions after this many days. +- "daysFromUploadingToHiding": This many days after uploading a file is hidden. ### cleanup Remove unfinished large file uploads. - rclone backend cleanup remote: [options] [+] +```console +rclone backend cleanup remote: [options] [+] +``` This command removes unfinished large file uploads of age greater than max-age, which defaults to 24 hours. @@ -759,29 +837,33 @@ max-age, which defaults to 24 hours. Note that you can use --interactive/-i or --dry-run with this command to see what it would do. - rclone backend cleanup b2:bucket/path/to/object - rclone backend cleanup -o max-age=7w b2:bucket/path/to/object +```console +rclone backend cleanup b2:bucket/path/to/object +rclone backend cleanup -o max-age=7w b2:bucket/path/to/object +``` Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. - Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. ### cleanup-hidden Remove old versions of files. 
- rclone backend cleanup-hidden remote: [options] [+] +```console +rclone backend cleanup-hidden remote: [options] [+] +``` This command removes any old hidden versions of files. Note that you can use --interactive/-i or --dry-run with this command to see what it would do. - rclone backend cleanup-hidden b2:bucket/path/to/dir - +```console +rclone backend cleanup-hidden b2:bucket/path/to/dir +``` diff --git a/docs/content/bisync.md b/docs/content/bisync.md index 85f9ea387..b3405f9b8 100644 --- a/docs/content/bisync.md +++ b/docs/content/bisync.md @@ -1047,20 +1047,16 @@ encodings.) The following backends have known issues that need more investigation: -- `TestGoFile` (`gofile`) - - [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - - [78 more](https://pub.rclone.org/integration-tests/current/) -- Updated: 2025-08-21-010015 +- `TestDropbox` (`dropbox`) + - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt) +- Updated: 2025-11-21-010037 The following backends either have not been tested recently or have known issues that are deemed unfixable for the time being: +- `TestArchive` (`archive`) - `TestCache` (`cache`) - `TestFileLu` (`filelu`) - `TestFilesCom` (`filescom`) diff --git a/docs/content/box.md b/docs/content/box.md index 0d1e2f190..1519d6a81 100644 --- a/docs/content/box.md +++ b/docs/content/box.md @@ -323,6 +323,19 @@ Properties: - Type: string - Required: false +#### --box-config-credentials + +Box App config.json contents. + +Leave blank normally. + +Properties: + +- Config: config_credentials +- Env Var: RCLONE_BOX_CONFIG_CREDENTIALS +- Type: string +- Required: false + #### --box-access-token Box App Primary Access Token @@ -347,10 +360,10 @@ Properties: - Type: string - Default: "user" - Examples: - - "user" - - Rclone should act on behalf of a user. - - "enterprise" - - Rclone should act on behalf of a service account. + - "user" + - Rclone should act on behalf of a user. + - "enterprise" + - Rclone should act on behalf of a service account. 
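As a sketch of how the new config_credentials option above can be supplied non-interactively (box: is a placeholder remote name and the path is illustrative):

```console
# Pass the Box App config.json contents via the documented environment variable
export RCLONE_BOX_CONFIG_CREDENTIALS="$(cat /path/to/box-config.json)"
rclone lsd box:
```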
### Advanced options

diff --git a/docs/content/cache.md b/docs/content/cache.md
index f5a61563d..d3f1d865e 100644
--- a/docs/content/cache.md
+++ b/docs/content/cache.md
@@ -394,12 +394,12 @@ Properties:
- Type: SizeSuffix
- Default: 5Mi
- Examples:
- - "1M"
- - 1 MiB
- - "5M"
- - 5 MiB
- - "10M"
- - 10 MiB
+ - "1M"
+ - 1 MiB
+ - "5M"
+ - 5 MiB
+ - "10M"
+ - 10 MiB

#### --cache-info-age

@@ -414,12 +414,12 @@ Properties:
- Type: Duration
- Default: 6h0m0s
- Examples:
- - "1h"
- - 1 hour
- - "24h"
- - 24 hours
- - "48h"
- - 48 hours
+ - "1h"
+ - 1 hour
+ - "24h"
+ - 24 hours
+ - "48h"
+ - 48 hours

#### --cache-chunk-total-size

@@ -435,12 +435,12 @@ Properties:
- Type: SizeSuffix
- Default: 10Gi
- Examples:
- - "500M"
- - 500 MiB
- - "1G"
- - 1 GiB
- - "10G"
- - 10 GiB
+ - "500M"
+ - 500 MiB
+ - "1G"
+ - 1 GiB
+ - "10G"
+ - 10 GiB

### Advanced options

@@ -698,9 +698,11 @@ Properties:

Here are the commands specific to the cache backend.

-Run them with
+Run them with:

- rclone backend COMMAND remote:
+```console
+rclone backend COMMAND remote:
+```

The help below will explain what arguments each command takes.

@@ -714,6 +716,8 @@ These can be run on a running backend using the rc command

Print stats on the cache backend in JSON format.

- rclone backend stats remote: [options] [+]
+```console
+rclone backend stats remote: [options] [+]
+```

diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index b0b1698fa..635f3066a 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -6,6 +6,130 @@ description: "Rclone Changelog"

# Changelog

+## v1.72.0 - 2025-11-21
+
+[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)
+
+- New backends
+ - [Archive](/archive) backend to read archives on cloud storage. (Nick Craig-Wood)
+- New S3 providers
+ - [Cubbit Object Storage](/s3/#Cubbit) (Marco Ferretti)
+ - [FileLu S5 Object Storage](/s3/#filelu-s5) (kingston125)
+ - [Hetzner Object Storage](/s3/#hetzner) (spiffytech)
+ - [Intercolo Object Storage](/s3/#intercolo) (Robin Rolf)
+ - [Rabata S3-compatible secure cloud storage](/s3/#Rabata) (dougal)
+ - [Servercore Object Storage](/s3/#servercore) (dougal)
+ - [SpectraLogic](/s3/#spectralogic) (dougal)
+- New commands
+ - [rclone archive](/commands/rclone_archive/): command to create and read archive files (Fawzib Rojas)
+ - [rclone config string](/commands/rclone_config_string/): for making connection strings (Nick Craig-Wood)
+ - [rclone test speed](/commands/rclone_test_speed/): Add command to test a specified remote's speed (dougal)
+- New Features
+ - backends: many backends have had a paged listing (`ListP`) interface added
+ - this enables progress when listing large directories and reduces memory usage
+ - build
+ - Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 (dependabot[bot])
+ - Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko)
+ - Update all dependencies (Nick Craig-Wood)
+ - Enable support for `aix/ppc64` (Lakshmi-Surekha)
+ - check: Improved reporting of differences in sizes and contents (albertony)
+ - copyurl: Added `--url` to read URLs from CSV file (S-Pegg1, dougal)
+ - docs:
+ - markdown linting (albertony)
+ - fixes (albertony, Andrew Gunnerson, anon-pradip, Claudius Ellsel, dougal, iTrooz, Jean-Christophe Cura, Joseph Brownlee, kapitainsky, Matt LaPaglia, n4n5, Nick Craig-Wood, nielash, SublimePeace, Ted Robertson, vastonus)
+ - fs: remove unnecessary Seek call on log file (Aneesh Agrawal)
+ - hashsum: Improved
output format when listing algorithms (albertony)
+ - lib/http: Cleanup indentation and other whitespace in http serve template (albertony)
+ - lsf: Add support for `unix` and `unixnano` time formats (Motte)
+ - oauthutil: Improved debug logs from token refresh (albertony)
+ - rc
+ - Add [job/batch](/rc/#job-batch) for sending batches of rc commands to run concurrently (Nick Craig-Wood)
+ - Add `runningIds` and `finishedIds` to [job/list](/rc/#job-list) (n4n5)
+ - Add `osVersion`, `osKernel` and `osArch` to [core/version](/rc/#core-version) (Nick Craig-Wood)
+ - Make sure fatal errors run via the rc don't crash rclone (Nick Craig-Wood)
+ - Add `executeId` to job statuses in [job/list](/rc/#job-list) (Nikolay Kiryanov)
+ - `config/unlock`: rename parameter to `configPassword`, accepting the old name as well (Nick Craig-Wood)
+ - serve http: Download folders as zip (dougal)
+- Bug Fixes
+ - build
+ - Fix tls: failed to verify certificate: x509: negative serial number (Nick Craig-Wood)
+ - march
+ - Fix `--no-traverse` being very slow (Nick Craig-Wood)
+ - serve s3: Fix log output to remove the EXTRA messages (iTrooz)
+- Mount
+ - Windows: improve error message on missing WinFSP (divinity76)
+- Local
+ - Add `--skip-specials` to ignore special files (Adam Dinwoodie)
+- Azure Blob
+ - Add ListP interface (dougal)
+- Azurefiles
+ - Add ListP interface (Nick Craig-Wood)
+- B2
+ - Add ListP interface (dougal)
+ - Add Server-Side encryption support (fries1234)
+ - Fix "expected a FileSseMode but found: ''" (dougal)
+ - Allow individual old versions to be deleted with `--b2-versions` (dougal)
+- Box
+ - Add ListP interface (Nick Craig-Wood)
+ - Allow configuration with config file contents (Dominik Sander)
+- Compress
+ - Add zstd compression (Alex)
+- Drive
+ - Add ListP interface (Nick Craig-Wood)
+- Dropbox
+ - Add ListP interface (Nick Craig-Wood)
+ - Fix error moving just created objects (Nick Craig-Wood)
+- FTP
+ - Fix SOCKS proxy support (dougal)
+ - Fix transfers from servers that return 250 ok messages (jijamik)
+- Google Cloud Storage
+ - Add ListP interface (dougal)
+ - Fix `--gcs-storage-class` to work with server side copy for objects (Riaz Arbi)
+- HTTP
+ - Add basic metadata and provide it via serve (Oleg Kunitsyn)
+- Jottacloud
+ - Add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service (albertony)
+ - Add support for MediaMarkt Cloud as a whitelabel service (albertony)
+ - Added support for traditional oauth authentication also for the main service (albertony)
+ - Abort attempts to run unsupported rclone authorize command (albertony)
+ - Improved token refresh handling (albertony)
+ - Fix legacy authentication (albertony)
+ - Fix authentication for whitelabel services from Elkjøp subsidiaries (albertony)
+- Mega
+ - Implement 2FA login (iTrooz)
+- Memory
+ - Add ListP interface (dougal)
+- Onedrive
+ - Add ListP interface (Nick Craig-Wood)
+- Oracle Object Storage
+ - Add ListP interface (dougal)
+- Pcloud
+ - Add ListP interface (Nick Craig-Wood)
+- Proton Drive
+ - Automated 2FA login with OTP secret key (Microscotch)
+- S3
+ - Make it easier to add new S3 providers (dougal)
+ - Add `--s3-use-data-integrity-protections` quirk to fix BadDigest error in Alibaba, Tencent (hunshcn)
+ - Add support for `--upload-header`, `If-Match` and `If-None-Match` (Sean Turner)
+ - Fix single file copying behavior with low permission (hunshcn)
+- SFTP
+ - Fix zombie SSH processes with `--sftp-ssh` (Copilot)
+- Smb
+ - Optimize smb mount performance by avoiding stat checks during initialization
(Sudipto Baral) +- Swift + - Add ListP interface (dougal) + - If storage_policy isn't set, use the root containers policy (Andrew Ruthven) + - Report disk usage in segment containers (Andrew Ruthven) +- Ulozto + - Implement the About functionality (Lukas Krejci) + - Fix downloads returning HTML error page (aliaj1) +- WebDAV + - Optimize bearer token fetching with singleflight (hunshcn) + - Add ListP interface (Nick Craig-Wood) + - Use SpaceSepList to parse bearer token command (hunshcn) + - Add `Access-Control-Max-Age` header for CORS preflight caching (viocha) + - Fix out of memory with sharepoint-ntlm when uploading large file (Nick Craig-Wood) + ## v1.71.2 - 2025-10-20 [See commits](https://github.com/rclone/rclone/compare/v1.71.1...v1.71.2) diff --git a/docs/content/chunker.md b/docs/content/chunker.md index 1e291a96d..57cf1ce8d 100644 --- a/docs/content/chunker.md +++ b/docs/content/chunker.md @@ -356,22 +356,22 @@ Properties: - Type: string - Default: "md5" - Examples: - - "none" - - Pass any hash supported by wrapped remote for non-chunked files. - - Return nothing otherwise. - - "md5" - - MD5 for composite files. - - "sha1" - - SHA1 for composite files. - - "md5all" - - MD5 for all files. - - "sha1all" - - SHA1 for all files. - - "md5quick" - - Copying a file to chunker will request MD5 from the source. - - Falling back to SHA1 if unsupported. - - "sha1quick" - - Similar to "md5quick" but prefers SHA1 over MD5. + - "none" + - Pass any hash supported by wrapped remote for non-chunked files. + - Return nothing otherwise. + - "md5" + - MD5 for composite files. + - "sha1" + - SHA1 for composite files. + - "md5all" + - MD5 for all files. + - "sha1all" + - SHA1 for all files. + - "md5quick" + - Copying a file to chunker will request MD5 from the source. + - Falling back to SHA1 if unsupported. + - "sha1quick" + - Similar to "md5quick" but prefers SHA1 over MD5. ### Advanced options @@ -421,13 +421,13 @@ Properties: - Type: string - Default: "simplejson" - Examples: - - "none" - - Do not use metadata files at all. - - Requires hash type "none". - - "simplejson" - - Simple JSON supports hash sums and chunk validation. - - - - It has the following fields: ver, size, nchunks, md5, sha1. + - "none" + - Do not use metadata files at all. + - Requires hash type "none". + - "simplejson" + - Simple JSON supports hash sums and chunk validation. + - + - It has the following fields: ver, size, nchunks, md5, sha1. #### --chunker-fail-hard @@ -440,10 +440,10 @@ Properties: - Type: bool - Default: false - Examples: - - "true" - - Report errors and abort current command. - - "false" - - Warn user, skip incomplete file and proceed. + - "true" + - Report errors and abort current command. + - "false" + - Warn user, skip incomplete file and proceed. #### --chunker-transactions @@ -456,19 +456,19 @@ Properties: - Type: string - Default: "rename" - Examples: - - "rename" - - Rename temporary files after a successful transaction. - - "norename" - - Leave temporary file names and write transaction ID to metadata file. - - Metadata is required for no rename transactions (meta format cannot be "none"). - - If you are using norename transactions you should be careful not to downgrade Rclone - - as older versions of Rclone don't support this transaction style and will misinterpret - - files manipulated by norename transactions. - - This method is EXPERIMENTAL, don't use on production systems. - - "auto" - - Rename or norename will be used depending on capabilities of the backend. 
- - If meta format is set to "none", rename transactions will always be used. - - This method is EXPERIMENTAL, don't use on production systems. + - "rename" + - Rename temporary files after a successful transaction. + - "norename" + - Leave temporary file names and write transaction ID to metadata file. + - Metadata is required for no rename transactions (meta format cannot be "none"). + - If you are using norename transactions you should be careful not to downgrade Rclone + - as older versions of Rclone don't support this transaction style and will misinterpret + - files manipulated by norename transactions. + - This method is EXPERIMENTAL, don't use on production systems. + - "auto" + - Rename or norename will be used depending on capabilities of the backend. + - If meta format is set to "none", rename transactions will always be used. + - This method is EXPERIMENTAL, don't use on production systems. #### --chunker-description diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md index 3a1287eb3..8fb1e6dd3 100644 --- a/docs/content/commands/rclone.md +++ b/docs/content/commands/rclone.md @@ -15,8 +15,6 @@ mounting them, listing them in lots of different ways. See the home page (https://rclone.org/) for installation, usage, documentation, changelog and configuration walkthroughs. - - ``` rclone [flags] ``` @@ -26,6 +24,8 @@ rclone [flags] ``` --alias-description string Description of the remote --alias-remote string Remote or path to alias + --archive-description string Description of the remote + --archive-remote string Remote to wrap to read archives from --ask-password Allow prompt for password for encrypted configuration (default true) --auto-confirm If enabled, do not request console confirmation --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive @@ -105,6 +105,10 @@ rclone [flags] --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket + --b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2 + --b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + --b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + --b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) @@ -181,7 +185,7 @@ rclone [flags] --combine-upstreams SpaceSepList Upstreams for combining --compare-dest stringArray Include additional server-side paths during comparison --compress-description string Description of the remote - --compress-level int GZIP compression level (-2 to 9) (default -1) + --compress-level string GZIP (levels -2 to 9): --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress @@ -549,6 +553,7 @@ rclone [flags] --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P 
(default off) --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-transfer SizeSuffix Maximum size of data to transfer (default off) + --mega-2fa string The 2FA code of your MEGA account if the account is set up with one --mega-debug Output more debug from Mega --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -715,6 +720,7 @@ rclone [flags] --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-otp-secret-key string The OTP secret key (obscured) --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected --protondrive-username string The username of your proton account @@ -831,6 +837,7 @@ rclone [flags] --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) --s3-use-arn-region If true, enables arn region support for the service + --s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset) --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) @@ -915,6 +922,7 @@ rclone [flags] --sia-user-agent string Siad User Agent (default "Sia-Agent") --size-only Skip based on size only, not modtime or checksum --skip-links Don't warn about skipped symlinks + --skip-specials Don't warn about skipped pipes, sockets and device objects --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") @@ -1015,7 +1023,7 @@ rclone [flags] --use-json-log Use json log format --use-mmap Use mmap allocator (see docs) --use-server-modtime Use server modified time instead of object metadata - --user-agent string Set the user-agent to a specified string (default "rclone/v1.71.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0") -v, --verbose count Print lots more stuff (repeat for more) -V, --version Print the version number --webdav-auth-redirect Preserve authentication on redirect @@ -1057,7 +1065,11 @@ rclone [flags] ## See Also + + + * [rclone about](/commands/rclone_about/) - Get quota information from the remote. +* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive. * [rclone authorize](/commands/rclone_authorize/) - Remote authorization. * [rclone backend](/commands/rclone_backend/) - Run a backend-specific command. * [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths. @@ -1111,3 +1123,5 @@ rclone [flags] * [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion. 
* [rclone version](/commands/rclone_version/) - Show the version number. + + diff --git a/docs/content/commands/rclone_about.md b/docs/content/commands/rclone_about.md index 5ef4d9234..ff01b92c7 100644 --- a/docs/content/commands/rclone_about.md +++ b/docs/content/commands/rclone_about.md @@ -15,40 +15,46 @@ output. The output is typically used, free, quota and trash contents. E.g. Typical output from `rclone about remote:` is: - Total: 17 GiB - Used: 7.444 GiB - Free: 1.315 GiB - Trashed: 100.000 MiB - Other: 8.241 GiB +```text +Total: 17 GiB +Used: 7.444 GiB +Free: 1.315 GiB +Trashed: 100.000 MiB +Other: 8.241 GiB +``` Where the fields are: - * Total: Total size available. - * Used: Total size used. - * Free: Total space available to this user. - * Trashed: Total space used by trash. - * Other: Total amount in other storage (e.g. Gmail, Google Photos). - * Objects: Total number of objects in the storage. +- Total: Total size available. +- Used: Total size used. +- Free: Total space available to this user. +- Trashed: Total space used by trash. +- Other: Total amount in other storage (e.g. Gmail, Google Photos). +- Objects: Total number of objects in the storage. All sizes are in number of bytes. Applying a `--full` flag to the command prints the bytes in full, e.g. - Total: 18253611008 - Used: 7993453766 - Free: 1411001220 - Trashed: 104857602 - Other: 8849156022 +```text +Total: 18253611008 +Used: 7993453766 +Free: 1411001220 +Trashed: 104857602 +Other: 8849156022 +``` A `--json` flag generates conveniently machine-readable output, e.g. - { - "total": 18253611008, - "used": 7993453766, - "trashed": 104857602, - "other": 8849156022, - "free": 1411001220 - } +```json +{ + "total": 18253611008, + "used": 7993453766, + "trashed": 104857602, + "other": 8849156022, + "free": 1411001220 +} +``` Not all backends print all fields. Information is not included if it is not provided by a backend. Where the value is unlimited it is omitted. @@ -56,7 +62,6 @@ provided by a backend. Where the value is unlimited it is omitted. Some backends does not support the `rclone about` command at all, see complete list in [documentation](https://rclone.org/overview/#optional-features). - ``` rclone about remote: [flags] ``` @@ -73,5 +78,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_archive.md b/docs/content/commands/rclone_archive.md new file mode 100644 index 000000000..5c81f5b19 --- /dev/null +++ b/docs/content/commands/rclone_archive.md @@ -0,0 +1,47 @@ +--- +title: "rclone archive" +description: "Perform an action on an archive." +versionIntroduced: v1.72 +# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/ and as part of making a release run "make commanddocs" +--- +# rclone archive + +Perform an action on an archive. + +## Synopsis + +Perform an action on an archive. Requires the use of a +subcommand to specify the protocol, e.g. + + rclone archive list remote:file.zip + +Each subcommand has its own options which you can see in their help. + +See [rclone archive create](/commands/rclone_archive_create/) for the +archive formats supported. + + +``` +rclone archive [opts] [] [flags] +``` + +## Options + +``` + -h, --help help for archive +``` + +See the [global flags page](/flags/) for global options not listed here. 
+ +## See Also + + + + +* [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. +* [rclone archive create](/commands/rclone_archive_create/) - Archive source file(s) to destination. +* [rclone archive extract](/commands/rclone_archive_extract/) - Extract archives from source to destination. +* [rclone archive list](/commands/rclone_archive_list/) - List archive contents from source. + + + diff --git a/docs/content/commands/rclone_archive_create.md b/docs/content/commands/rclone_archive_create.md new file mode 100644 index 000000000..00561f552 --- /dev/null +++ b/docs/content/commands/rclone_archive_create.md @@ -0,0 +1,95 @@ +--- +title: "rclone archive create" +description: "Archive source file(s) to destination." +versionIntroduced: v1.72 +# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/create/ and as part of making a release run "make commanddocs" +--- +# rclone archive create + +Archive source file(s) to destination. + +## Synopsis + + +Creates an archive from the files in source:path and saves the archive to +dest:path. If dest:path is missing, it will write to the console. + +The valid formats for the `--format` flag are listed below. If +`--format` is not set rclone will guess it from the extension of dest:path. + +| Format | Extensions | +|:-------|:-----------| +| zip | .zip | +| tar | .tar | +| tar.gz | .tar.gz, .tgz, .taz | +| tar.bz2| .tar.bz2, .tb2, .tbz, .tbz2, .tz2 | +| tar.lz | .tar.lz | +| tar.lz4| .tar.lz4 | +| tar.xz | .tar.xz, .txz | +| tar.zst| .tar.zst, .tzst | +| tar.br | .tar.br | +| tar.sz | .tar.sz | +| tar.mz | .tar.mz | + +The `--prefix` and `--full-path` flags control the prefix for the files +in the archive. + +If the flag `--full-path` is set then the files will have the full source +path as the prefix. + +If the flag `--prefix=` is set then the files will have +`` as prefix. It's possible to create invalid file names with +`--prefix=` so use with caution. Flag `--prefix` has +priority over `--full-path`. + +Given a directory `/sourcedir` with the following: + + file1.txt + dir1/file2.txt + +Running the command `rclone archive create /sourcedir /dest.tar.gz` +will make an archive with the contents: + + file1.txt + dir1/ + dir1/file2.txt + +Running the command `rclone archive create --full-path /sourcedir /dest.tar.gz` +will make an archive with the contents: + + sourcedir/file1.txt + sourcedir/dir1/ + sourcedir/dir1/file2.txt + +Running the command `rclone archive create --prefix=my_new_path /sourcedir /dest.tar.gz` +will make an archive with the contents: + + my_new_path/file1.txt + my_new_path/dir1/ + my_new_path/dir1/file2.txt + + +``` +rclone archive create [flags] [] +``` + +## Options + +``` + --format string Create the archive with format or guess from extension. + --full-path Set prefix for files in archive to source path + -h, --help help for create + --prefix string Set prefix for files in archive to entered value or source path +``` + +See the [global flags page](/flags/) for global options not listed here. + +## See Also + + + + +* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive. + + + diff --git a/docs/content/commands/rclone_archive_extract.md b/docs/content/commands/rclone_archive_extract.md new file mode 100644 index 000000000..2e4d19101 --- /dev/null +++ b/docs/content/commands/rclone_archive_extract.md @@ -0,0 +1,81 @@ +--- +title: "rclone archive extract" +description: "Extract archives from source to destination." 
+versionIntroduced: v1.72 +# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/extract/ and as part of making a release run "make commanddocs" +--- +# rclone archive extract + +Extract archives from source to destination. + +## Synopsis + + + +Extract the archive contents to a destination directory auto detecting +the format. See [rclone archive create](/commands/rclone_archive_create/) +for the archive formats supported. + +For example on this archive: + +``` +$ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +``` + +You can run extract like this + +``` +$ rclone archive extract remote:archive.zip remote:extracted +``` + +Which gives this result + +``` +$ rclone tree remote:extracted +/ +├── dir +│ └── bye.txt +└── file.txt +``` + +The source or destination or both can be local or remote. + +Filters can be used to only extract certain files: + +``` +$ rclone archive extract archive.zip partial --include "bye.*" +$ rclone tree partial +/ +└── dir + └── bye.txt +``` + +The [archive backend](/archive/) can also be used to extract files. It +can be used to read only mount archives also but it supports a +different set of archive formats to the archive commands. + + +``` +rclone archive extract [flags] +``` + +## Options + +``` + -h, --help help for extract +``` + +See the [global flags page](/flags/) for global options not listed here. + +## See Also + + + + +* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive. + + + diff --git a/docs/content/commands/rclone_archive_list.md b/docs/content/commands/rclone_archive_list.md new file mode 100644 index 000000000..6ee44919f --- /dev/null +++ b/docs/content/commands/rclone_archive_list.md @@ -0,0 +1,96 @@ +--- +title: "rclone archive list" +description: "List archive contents from source." +versionIntroduced: v1.72 +# autogenerated - DO NOT EDIT, instead edit the source code in cmd/archive/list/ and as part of making a release run "make commanddocs" +--- +# rclone archive list + +List archive contents from source. + +## Synopsis + + +List the contents of an archive to the console, auto detecting the +format. See [rclone archive create](/commands/rclone_archive_create/) +for the archive formats supported. + +For example: + +``` +$ rclone archive list remote:archive.zip + 6 file.txt + 0 dir/ + 4 dir/bye.txt +``` + +Or with `--long` flag for more info: + +``` +$ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +``` + +Or with `--plain` flag which is useful for scripting: + +``` +$ rclone archive list --plain /path/to/archive.zip +file.txt +dir/ +dir/bye.txt +``` + +Or with `--dirs-only`: + +``` +$ rclone archive list --plain --dirs-only /path/to/archive.zip +dir/ +``` + +Or with `--files-only`: + +``` +$ rclone archive list --plain --files-only /path/to/archive.zip +file.txt +dir/bye.txt +``` + +Filters may also be used: + +``` +$ rclone archive list --long archive.zip --include "bye.*" + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +``` + +The [archive backend](/archive/) can also be used to list files. It +can be used to read only mount archives also but it supports a +different set of archive formats to the archive commands. 
+ + +``` +rclone archive list [flags] +``` + +## Options + +``` + --dirs-only Only list directories + --files-only Only list files + -h, --help help for list + --long List extra attributtes + --plain Only list file names +``` + +See the [global flags page](/flags/) for global options not listed here. + +## See Also + + + + +* [rclone archive](/commands/rclone_archive/) - Perform an action on an archive. + + + diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md index 9e9fbcfa1..36db78496 100644 --- a/docs/content/commands/rclone_authorize.md +++ b/docs/content/commands/rclone_authorize.md @@ -11,21 +11,23 @@ Remote authorization. ## Synopsis Remote authorization. Used to authorize a remote or headless -rclone from a machine with a browser - use as instructed by -rclone config. +rclone from a machine with a browser. Use as instructed by rclone config. +See also the [remote setup documentation](/remote_setup). The command requires 1-3 arguments: - - fs name (e.g., "drive", "s3", etc.) - - Either a base64 encoded JSON blob obtained from a previous rclone config session - - Or a client_id and client_secret pair obtained from the remote service + +- Name of a backend (e.g. "drive", "s3") +- Either a base64 encoded JSON blob obtained from a previous rclone config session +- Or a client_id and client_secret pair obtained from the remote service Use --auth-no-open-browser to prevent rclone to open auth link in default browser automatically. -Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used. +Use --template to generate HTML output via a custom Go template. If a blank +string is provided as an argument to this flag, the default template is used. ``` -rclone authorize [base64_json_blob | client_id client_secret] [flags] +rclone authorize [base64_json_blob | client_id client_secret] [flags] ``` ## Options @@ -40,5 +42,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_backend.md b/docs/content/commands/rclone_backend.md index 6708ef634..ed83f2e3d 100644 --- a/docs/content/commands/rclone_backend.md +++ b/docs/content/commands/rclone_backend.md @@ -16,27 +16,34 @@ see the backend docs for definitions. You can discover what commands a backend implements by using - rclone backend help remote: - rclone backend help +```console +rclone backend help remote: +rclone backend help +``` You can also discover information about the backend using (see [operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs for more info). - rclone backend features remote: +```console +rclone backend features remote: +``` Pass options to the backend command with -o. This should be key=value or key, e.g.: - rclone backend stats remote:path stats -o format=json -o long +```console +rclone backend stats remote:path stats -o format=json -o long +``` Pass arguments to the backend by placing them on the end of the line - rclone backend cleanup remote:path file1 file2 file3 +```console +rclone backend cleanup remote:path file1 file2 file3 +``` Note to run these commands on a running backend then see [backend/command](/rc/#backend-command) in the rc docs. - ``` rclone backend remote:path [opts] [flags] ``` @@ -56,7 +63,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -64,5 +71,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_bisync.md b/docs/content/commands/rclone_bisync.md index 970d08335..fc56fd535 100644 --- a/docs/content/commands/rclone_bisync.md +++ b/docs/content/commands/rclone_bisync.md @@ -16,18 +16,19 @@ Perform bidirectional synchronization between two paths. bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. On each successive run it will: + - list files on Path1 and Path2, and check for changes on each side. Changes include `New`, `Newer`, `Older`, and `Deleted` files. - Propagate changes on Path1 to Path2, and vice-versa. Bisync is considered an **advanced command**, so use with care. Make sure you have read and understood the entire [manual](https://rclone.org/bisync) -(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using, -or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/). +(especially the [Limitations](https://rclone.org/bisync/#limitations) section) +before using, or data loss can result. Questions can be asked in the +[Rclone Forum](https://forum.rclone.org/). See [full bisync description](https://rclone.org/bisync/) for details. - ``` rclone bisync remote1:path1 remote2:path2 [flags] ``` @@ -69,7 +70,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -110,7 +111,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -120,7 +121,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -148,5 +149,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md index 71d5a9814..3ca7f450c 100644 --- a/docs/content/commands/rclone_cat.md +++ b/docs/content/commands/rclone_cat.md @@ -14,15 +14,21 @@ Sends any files to standard output. You can use it like this to output a single file - rclone cat remote:path/to/file +```sh +rclone cat remote:path/to/file +``` Or like this to output any file in dir or its subdirectories. - rclone cat remote:path/to/dir +```sh +rclone cat remote:path/to/dir +``` Or like this to output any .txt files in dir or its subdirectories. 
- rclone --include "*.txt" cat remote:path/to/dir +```sh +rclone --include "*.txt" cat remote:path/to/dir +``` Use the `--head` flag to print characters only at the start, `--tail` for the end and `--offset` and `--count` to print a section in the middle. @@ -33,14 +39,17 @@ Use the `--separator` flag to print a separator value between files. Be sure to shell-escape special characters. For example, to print a newline between files, use: -* bash: +- bash: - rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir + ```sh + rclone --include "*.txt" --separator $'\n' cat remote:path/to/dir + ``` -* powershell: - - rclone --include "*.txt" --separator "`n" cat remote:path/to/dir +- powershell: + ```powershell + rclone --include "*.txt" --separator "`n" cat remote:path/to/dir + ``` ``` rclone cat remote:path [flags] @@ -65,7 +74,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -95,12 +104,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md index bfebdbded..a5ddf7861 100644 --- a/docs/content/commands/rclone_check.md +++ b/docs/content/commands/rclone_check.md @@ -52,7 +52,6 @@ you what happened to it. These are reminiscent of diff files. The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int) option for more information. - ``` rclone check source:path dest:path [flags] ``` @@ -79,7 +78,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags used for check commands -``` +```text --max-backlog int Maximum number of objects in sync or check backlog (default 10000) ``` @@ -87,7 +86,7 @@ Flags used for check commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -117,12 +116,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_checksum.md b/docs/content/commands/rclone_checksum.md index d14d5760c..7b090b073 100644 --- a/docs/content/commands/rclone_checksum.md +++ b/docs/content/commands/rclone_checksum.md @@ -47,7 +47,6 @@ you what happened to it. These are reminiscent of diff files. The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int) option for more information. - ``` rclone checksum sumfile dst:path [flags] ``` @@ -73,7 +72,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -103,12 +102,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md index 8502d3f72..a2dc92c17 100644 --- a/docs/content/commands/rclone_cleanup.md +++ b/docs/content/commands/rclone_cleanup.md @@ -13,7 +13,6 @@ Clean up the remote if possible. Clean up the remote if possible. Empty the trash or delete old file versions. Not supported by all remotes. - ``` rclone cleanup remote:path [flags] ``` @@ -31,7 +30,7 @@ See the [global flags page](/flags/) for global options not listed here. Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -39,5 +38,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_completion.md b/docs/content/commands/rclone_completion.md index d9b7e605d..2f43ab108 100644 --- a/docs/content/commands/rclone_completion.md +++ b/docs/content/commands/rclone_completion.md @@ -15,7 +15,6 @@ Output completion script for a given shell. Generates a shell completion script for rclone. Run with `--help` to list the supported shells. - ## Options ``` @@ -26,9 +25,14 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone completion bash](/commands/rclone_completion_bash/) - Output bash completion script for rclone. * [rclone completion fish](/commands/rclone_completion_fish/) - Output fish completion script for rclone. * [rclone completion powershell](/commands/rclone_completion_powershell/) - Output powershell completion script for rclone. * [rclone completion zsh](/commands/rclone_completion_zsh/) - Output zsh completion script for rclone. + + diff --git a/docs/content/commands/rclone_completion_bash.md b/docs/content/commands/rclone_completion_bash.md index 54af5149c..51de90f69 100644 --- a/docs/content/commands/rclone_completion_bash.md +++ b/docs/content/commands/rclone_completion_bash.md @@ -13,17 +13,21 @@ Output bash completion script for rclone. Generates a bash shell autocompletion script for rclone. -By default, when run without any arguments, +By default, when run without any arguments, - rclone completion bash +```console +rclone completion bash +``` the generated script will be written to - /etc/bash_completion.d/rclone +```console +/etc/bash_completion.d/rclone +``` and so rclone will probably need to be run as root, or with sudo. -If you supply a path to a file as the command line argument, then +If you supply a path to a file as the command line argument, then the generated script will be written to that file, in which case you should not need root privileges. 
@@ -34,12 +38,13 @@ can logout and login again to use the autocompletion script. Alternatively, you can source the script directly - . /path/to/my_bash_completion_scripts/rclone +```console +. /path/to/my_bash_completion_scripts/rclone +``` and the autocompletion functionality will be added to your current shell. - ``` rclone completion bash [output_file] [flags] ``` @@ -54,5 +59,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell. + + diff --git a/docs/content/commands/rclone_completion_fish.md b/docs/content/commands/rclone_completion_fish.md index 59dfa52ad..8dacce116 100644 --- a/docs/content/commands/rclone_completion_fish.md +++ b/docs/content/commands/rclone_completion_fish.md @@ -16,19 +16,22 @@ Generates a fish autocompletion script for rclone. This writes to /etc/fish/completions/rclone.fish by default so will probably need to be run with sudo or as root, e.g. - sudo rclone completion fish +```console +sudo rclone completion fish +``` Logout and login again to use the autocompletion scripts, or source them directly - . /etc/fish/completions/rclone.fish +```console +. /etc/fish/completions/rclone.fish +``` If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout. - ``` rclone completion fish [output_file] [flags] ``` @@ -43,5 +46,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell. + + diff --git a/docs/content/commands/rclone_completion_powershell.md b/docs/content/commands/rclone_completion_powershell.md index f872531a3..bfe73f023 100644 --- a/docs/content/commands/rclone_completion_powershell.md +++ b/docs/content/commands/rclone_completion_powershell.md @@ -15,14 +15,15 @@ Generate the autocompletion script for powershell. To load completions in your current shell session: - rclone completion powershell | Out-String | Invoke-Expression +```console +rclone completion powershell | Out-String | Invoke-Expression +``` To load completions for every new session, add the output of the above command to your powershell profile. If output_file is "-" or missing, then the output will be written to stdout. - ``` rclone completion powershell [output_file] [flags] ``` @@ -37,5 +38,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell. + + diff --git a/docs/content/commands/rclone_completion_zsh.md b/docs/content/commands/rclone_completion_zsh.md index a12f3aa84..8fa652d58 100644 --- a/docs/content/commands/rclone_completion_zsh.md +++ b/docs/content/commands/rclone_completion_zsh.md @@ -16,19 +16,22 @@ Generates a zsh autocompletion script for rclone. This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, e.g. - sudo rclone completion zsh +```console +sudo rclone completion zsh +``` Logout and login again to use the autocompletion scripts, or source them directly - autoload -U compinit && compinit +```console +autoload -U compinit && compinit +``` If you supply a command line argument the script will be written there. If output_file is "-", then the output will be written to stdout. 
- ``` rclone completion zsh [output_file] [flags] ``` @@ -43,5 +46,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone completion](/commands/rclone_completion/) - Output completion script for a given shell. + + diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md index 91b717cbe..89b5962f0 100644 --- a/docs/content/commands/rclone_config.md +++ b/docs/content/commands/rclone_config.md @@ -14,7 +14,6 @@ Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. - ``` rclone config [flags] ``` @@ -29,6 +28,9 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options. * [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote. @@ -43,7 +45,10 @@ See the [global flags page](/flags/) for global options not listed here. * [rclone config reconnect](/commands/rclone_config_reconnect/) - Re-authenticates user with remote. * [rclone config redacted](/commands/rclone_config_redacted/) - Print redacted (decrypted) config file, or the redacted config for a single remote. * [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. +* [rclone config string](/commands/rclone_config_string/) - Print connection string for a single remote. * [rclone config touch](/commands/rclone_config_touch/) - Ensure configuration file exists. * [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote. * [rclone config userinfo](/commands/rclone_config_userinfo/) - Prints info about logged in user of remote. + + diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md index 22eeb7d85..65b7ac10a 100644 --- a/docs/content/commands/rclone_config_create.md +++ b/docs/content/commands/rclone_config_create.md @@ -16,13 +16,17 @@ should be passed in pairs of `key` `value` or as `key=value`. For example, to make a swift remote of name myremote using auto config you would do: - rclone config create myremote swift env_auth true - rclone config create myremote swift env_auth=true +```sh +rclone config create myremote swift env_auth true +rclone config create myremote swift env_auth=true +``` So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this: - rclone config create mydrive drive config_is_local=false +```sh +rclone config create mydrive drive config_is_local=false +``` Note that if the config process would normally ask a question the default is taken (unless `--non-interactive` is used). Each time @@ -50,29 +54,29 @@ it. This will look something like (some irrelevant detail removed): -``` +```json { - "State": "*oauth-islocal,teamdrive,,", - "Option": { - "Name": "config_is_local", - "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. 
If Y failed, try N.\n", - "Default": true, - "Examples": [ - { - "Value": "true", - "Help": "Yes" - }, - { - "Value": "false", - "Help": "No" - } - ], - "Required": false, - "IsPassword": false, - "Type": "bool", - "Exclusive": true, - }, - "Error": "", + "State": "*oauth-islocal,teamdrive,,", + "Option": { + "Name": "config_is_local", + "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", + "Default": true, + "Examples": [ + { + "Value": "true", + "Help": "Yes" + }, + { + "Value": "false", + "Help": "No" + } + ], + "Required": false, + "IsPassword": false, + "Type": "bool", + "Exclusive": true, + }, + "Error": "", } ``` @@ -95,7 +99,9 @@ The keys of `Option` are used as follows: If `Error` is set then it should be shown to the user at the same time as the question. - rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +```sh +rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +``` Note that when using `--continue` all passwords should be passed in the clear (not obscured). Any default config values should be passed @@ -111,7 +117,6 @@ defaults for questions as usual. Note that `bin/config.py` in the rclone source implements this protocol as a readable demonstration. - ``` rclone config create name type [key value]* [flags] ``` @@ -134,5 +139,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_delete.md b/docs/content/commands/rclone_config_delete.md index 8ef2f744a..9f87e54e9 100644 --- a/docs/content/commands/rclone_config_delete.md +++ b/docs/content/commands/rclone_config_delete.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_disconnect.md b/docs/content/commands/rclone_config_disconnect.md index 044842043..9c7288788 100644 --- a/docs/content/commands/rclone_config_disconnect.md +++ b/docs/content/commands/rclone_config_disconnect.md @@ -15,7 +15,6 @@ This normally means revoking the oauth token. To reconnect use "rclone config reconnect". - ``` rclone config disconnect remote: [flags] ``` @@ -30,5 +29,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_dump.md b/docs/content/commands/rclone_config_dump.md index 7a204b3ee..dc99d31e6 100644 --- a/docs/content/commands/rclone_config_dump.md +++ b/docs/content/commands/rclone_config_dump.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. 
+ + diff --git a/docs/content/commands/rclone_config_edit.md b/docs/content/commands/rclone_config_edit.md index 0e988af6d..095074501 100644 --- a/docs/content/commands/rclone_config_edit.md +++ b/docs/content/commands/rclone_config_edit.md @@ -14,7 +14,6 @@ Enter an interactive configuration session where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration. - ``` rclone config edit [flags] ``` @@ -29,5 +28,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_encryption.md b/docs/content/commands/rclone_config_encryption.md index b7c552ee6..721973d14 100644 --- a/docs/content/commands/rclone_config_encryption.md +++ b/docs/content/commands/rclone_config_encryption.md @@ -12,7 +12,6 @@ set, remove and check the encryption for the config file This command sets, clears and checks the encryption for the config file using the subcommands below. - ## Options ``` @@ -23,8 +22,13 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. * [rclone config encryption check](/commands/rclone_config_encryption_check/) - Check that the config file is encrypted * [rclone config encryption remove](/commands/rclone_config_encryption_remove/) - Remove the config file encryption password * [rclone config encryption set](/commands/rclone_config_encryption_set/) - Set or change the config file encryption password + + diff --git a/docs/content/commands/rclone_config_encryption_check.md b/docs/content/commands/rclone_config_encryption_check.md index f64c265f6..bd0784b54 100644 --- a/docs/content/commands/rclone_config_encryption_check.md +++ b/docs/content/commands/rclone_config_encryption_check.md @@ -18,7 +18,6 @@ If decryption fails it will return a non-zero exit code if using If the config file is not encrypted it will return a non zero exit code. - ``` rclone config encryption check [flags] ``` @@ -33,5 +32,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config encryption](/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file + + diff --git a/docs/content/commands/rclone_config_encryption_remove.md b/docs/content/commands/rclone_config_encryption_remove.md index fa78458e2..a3a2134ff 100644 --- a/docs/content/commands/rclone_config_encryption_remove.md +++ b/docs/content/commands/rclone_config_encryption_remove.md @@ -19,7 +19,6 @@ password. If the config was not encrypted then no error will be returned and this command will do nothing. - ``` rclone config encryption remove [flags] ``` @@ -34,5 +33,10 @@ See the [global flags page](/flags/) for global options not listed here. 
## See Also + + + * [rclone config encryption](/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file + + diff --git a/docs/content/commands/rclone_config_encryption_set.md b/docs/content/commands/rclone_config_encryption_set.md index 780c086dc..e6c6ac488 100644 --- a/docs/content/commands/rclone_config_encryption_set.md +++ b/docs/content/commands/rclone_config_encryption_set.md @@ -29,7 +29,6 @@ encryption remove`), then set it again with this command which may be easier if you don't mind the unencrypted config file being on the disk briefly. - ``` rclone config encryption set [flags] ``` @@ -44,5 +43,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config encryption](/commands/rclone_config_encryption/) - set, remove and check the encryption for the config file + + diff --git a/docs/content/commands/rclone_config_file.md b/docs/content/commands/rclone_config_file.md index 68b4f3158..66aeb58a1 100644 --- a/docs/content/commands/rclone_config_file.md +++ b/docs/content/commands/rclone_config_file.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_password.md b/docs/content/commands/rclone_config_password.md index 6a9909ad1..10b9d1ec1 100644 --- a/docs/content/commands/rclone_config_password.md +++ b/docs/content/commands/rclone_config_password.md @@ -16,13 +16,14 @@ The `password` should be passed in in clear (unobscured). For example, to set password of a remote of name myremote you would do: - rclone config password myremote fieldname mypassword - rclone config password myremote fieldname=mypassword +```sh +rclone config password myremote fieldname mypassword +rclone config password myremote fieldname=mypassword +``` This command is obsolete now that "config update" and "config create" both support obscuring passwords directly. - ``` rclone config password name [key value]+ [flags] ``` @@ -37,5 +38,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_paths.md b/docs/content/commands/rclone_config_paths.md index 807d40259..e148865cb 100644 --- a/docs/content/commands/rclone_config_paths.md +++ b/docs/content/commands/rclone_config_paths.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_providers.md b/docs/content/commands/rclone_config_providers.md index d18c663ad..77d1cd790 100644 --- a/docs/content/commands/rclone_config_providers.md +++ b/docs/content/commands/rclone_config_providers.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. 
+ + diff --git a/docs/content/commands/rclone_config_reconnect.md b/docs/content/commands/rclone_config_reconnect.md index 0237850d8..9c83c71b7 100644 --- a/docs/content/commands/rclone_config_reconnect.md +++ b/docs/content/commands/rclone_config_reconnect.md @@ -15,7 +15,6 @@ To disconnect the remote use "rclone config disconnect". This normally means going through the interactive oauth flow again. - ``` rclone config reconnect remote: [flags] ``` @@ -30,5 +29,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_redacted.md b/docs/content/commands/rclone_config_redacted.md index e37f5d4ef..04375a8bc 100644 --- a/docs/content/commands/rclone_config_redacted.md +++ b/docs/content/commands/rclone_config_redacted.md @@ -20,8 +20,6 @@ This makes the config file suitable for posting online for support. It should be double checked before posting as the redaction may not be perfect. - - ``` rclone config redacted [] [flags] ``` @@ -36,5 +34,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_show.md b/docs/content/commands/rclone_config_show.md index eb1897105..ab3b37194 100644 --- a/docs/content/commands/rclone_config_show.md +++ b/docs/content/commands/rclone_config_show.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_string.md b/docs/content/commands/rclone_config_string.md new file mode 100644 index 000000000..7a5a05ee1 --- /dev/null +++ b/docs/content/commands/rclone_config_string.md @@ -0,0 +1,55 @@ +--- +title: "rclone config string" +description: "Print connection string for a single remote." +versionIntroduced: v1.72 +# autogenerated - DO NOT EDIT, instead edit the source code in cmd/config/string/ and as part of making a release run "make commanddocs" +--- +# rclone config string + +Print connection string for a single remote. + +## Synopsis + +Print a connection string for a single remote. + +The [connection strings](/docs/#connection-strings) can be used +wherever a remote is needed and can be more convenient than using the +config file, especially if using the RC API. + +Backend parameters may be provided to the command also. + +Example: + +```sh +$ rclone config string s3:rclone --s3-no-check-bucket +:s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone +``` + +**NB** the strings are not quoted for use in shells (eg bash, +powershell, windows cmd). Most will work if enclosed in "double +quotes", however connection strings that contain double quotes will +require further quoting which is very shell dependent. + + + +``` +rclone config string [flags] +``` + +## Options + +``` + -h, --help help for string +``` + +See the [global flags page](/flags/) for global options not listed here. + +## See Also + + + + +* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. 
+ + + diff --git a/docs/content/commands/rclone_config_touch.md b/docs/content/commands/rclone_config_touch.md index 8fd7a0028..ac2915f99 100644 --- a/docs/content/commands/rclone_config_touch.md +++ b/docs/content/commands/rclone_config_touch.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md index af9660db2..1dc6ed197 100644 --- a/docs/content/commands/rclone_config_update.md +++ b/docs/content/commands/rclone_config_update.md @@ -16,13 +16,17 @@ pairs of `key` `value` or as `key=value`. For example, to update the env_auth field of a remote of name myremote you would do: - rclone config update myremote env_auth true - rclone config update myremote env_auth=true +```sh +rclone config update myremote env_auth true +rclone config update myremote env_auth=true +``` If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus: - rclone config update myremote env_auth=true config_refresh_token=false +```sh +rclone config update myremote env_auth=true config_refresh_token=false +``` Note that if the config process would normally ask a question the default is taken (unless `--non-interactive` is used). Each time @@ -50,29 +54,29 @@ it. This will look something like (some irrelevant detail removed): -``` +```json { - "State": "*oauth-islocal,teamdrive,,", - "Option": { - "Name": "config_is_local", - "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", - "Default": true, - "Examples": [ - { - "Value": "true", - "Help": "Yes" - }, - { - "Value": "false", - "Help": "No" - } - ], - "Required": false, - "IsPassword": false, - "Type": "bool", - "Exclusive": true, - }, - "Error": "", + "State": "*oauth-islocal,teamdrive,,", + "Option": { + "Name": "config_is_local", + "Help": "Use web browser to automatically authenticate rclone with remote?\n * Say Y if the machine running rclone has a web browser you can use\n * Say N if running rclone on a (remote) machine without web browser access\nIf not sure try Y. If Y failed, try N.\n", + "Default": true, + "Examples": [ + { + "Value": "true", + "Help": "Yes" + }, + { + "Value": "false", + "Help": "No" + } + ], + "Required": false, + "IsPassword": false, + "Type": "bool", + "Exclusive": true, + }, + "Error": "", } ``` @@ -95,7 +99,9 @@ The keys of `Option` are used as follows: If `Error` is set then it should be shown to the user at the same time as the question. - rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +```sh +rclone config update name --continue --state "*oauth-islocal,teamdrive,," --result "true" +``` Note that when using `--continue` all passwords should be passed in the clear (not obscured). Any default config values should be passed @@ -111,7 +117,6 @@ defaults for questions as usual. Note that `bin/config.py` in the rclone source implements this protocol as a readable demonstration. - ``` rclone config update name [key value]+ [flags] ``` @@ -134,5 +139,10 @@ See the [global flags page](/flags/) for global options not listed here. 
## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_config_userinfo.md b/docs/content/commands/rclone_config_userinfo.md index cd6a04cdf..662e81c37 100644 --- a/docs/content/commands/rclone_config_userinfo.md +++ b/docs/content/commands/rclone_config_userinfo.md @@ -12,7 +12,6 @@ Prints info about logged in user of remote. This prints the details of the person logged in to the cloud storage system. - ``` rclone config userinfo remote: [flags] ``` @@ -28,5 +27,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone config](/commands/rclone_config/) - Enter an interactive configuration session. + + diff --git a/docs/content/commands/rclone_convmv.md b/docs/content/commands/rclone_convmv.md index 15cd7b739..04f9026ce 100644 --- a/docs/content/commands/rclone_convmv.md +++ b/docs/content/commands/rclone_convmv.md @@ -10,8 +10,8 @@ Convert file and directory names in place. ## Synopsis - -convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations. +convmv supports advanced path name transformations for converting and renaming +files and directories by applying prefixes, suffixes, and other alterations. | Command | Description | |------|------| @@ -20,10 +20,13 @@ convmv supports advanced path name transformations for converting and renaming f | `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. | | `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. | | `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. | -| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. | +| `--name-transform regex=pattern/replacement` | Applies a regex-based transformation. | | `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. | | `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. | | `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. | +| `--name-transform truncate_keep_extension=N` | Truncates the file name to a maximum of N characters while preserving the original file extension. | +| `--name-transform truncate_bytes=N` | Truncates the file name to a maximum of N bytes (not characters). | +| `--name-transform truncate_bytes_keep_extension=N` | Truncates the file name to a maximum of N bytes (not characters) while preserving the original file extension. | | `--name-transform base64encode` | Encodes the file name in Base64. | | `--name-transform base64decode` | Decodes a Base64-encoded file name. | | `--name-transform encoder=ENCODING` | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh). | @@ -38,211 +41,227 @@ convmv supports advanced path name transformations for converting and renaming f | `--name-transform nfd` | Converts the file name to NFD Unicode normalization form. | | `--name-transform nfkc` | Converts the file name to NFKC Unicode normalization form. | | `--name-transform nfkd` | Converts the file name to NFKD Unicode normalization form. 
| -| `--name-transform command=/path/to/my/programfile names.` | Executes an external program to transform | +| `--name-transform command=/path/to/my/programfile names.` | Executes an external program to transform. | +Conversion modes: -Conversion modes: +```text +none +nfc +nfd +nfkc +nfkd +replace +prefix +suffix +suffix_keep_extension +trimprefix +trimsuffix +index +date +truncate +truncate_keep_extension +truncate_bytes +truncate_bytes_keep_extension +base64encode +base64decode +encoder +decoder +ISO-8859-1 +Windows-1252 +Macintosh +charmap +lowercase +uppercase +titlecase +ascii +url +regex +command ``` -none -nfc -nfd -nfkc -nfkd -replace -prefix -suffix -suffix_keep_extension -trimprefix -trimsuffix -index -date -truncate -base64encode -base64decode -encoder -decoder -ISO-8859-1 -Windows-1252 -Macintosh -charmap -lowercase -uppercase -titlecase -ascii -url -regex -command -``` -Char maps: -``` - -IBM-Code-Page-037 -IBM-Code-Page-437 -IBM-Code-Page-850 -IBM-Code-Page-852 -IBM-Code-Page-855 -Windows-Code-Page-858 -IBM-Code-Page-860 -IBM-Code-Page-862 -IBM-Code-Page-863 -IBM-Code-Page-865 -IBM-Code-Page-866 -IBM-Code-Page-1047 -IBM-Code-Page-1140 -ISO-8859-1 -ISO-8859-2 -ISO-8859-3 -ISO-8859-4 -ISO-8859-5 -ISO-8859-6 -ISO-8859-7 -ISO-8859-8 -ISO-8859-9 -ISO-8859-10 -ISO-8859-13 -ISO-8859-14 -ISO-8859-15 -ISO-8859-16 -KOI8-R -KOI8-U -Macintosh -Macintosh-Cyrillic -Windows-874 -Windows-1250 -Windows-1251 -Windows-1252 -Windows-1253 -Windows-1254 -Windows-1255 -Windows-1256 -Windows-1257 -Windows-1258 -X-User-Defined -``` -Encoding masks: -``` -Asterisk - BackQuote - BackSlash - Colon - CrLf - Ctl - Del - Dollar - Dot - DoubleQuote - Exclamation - Hash - InvalidUtf8 - LeftCrLfHtVt - LeftPeriod - LeftSpace - LeftTilde - LtGt - None - Percent - Pipe - Question - Raw - RightCrLfHtVt - RightPeriod - RightSpace - Semicolon - SingleQuote - Slash - SquareBracket -``` -Examples: +Char maps: + +```text +IBM-Code-Page-037 +IBM-Code-Page-437 +IBM-Code-Page-850 +IBM-Code-Page-852 +IBM-Code-Page-855 +Windows-Code-Page-858 +IBM-Code-Page-860 +IBM-Code-Page-862 +IBM-Code-Page-863 +IBM-Code-Page-865 +IBM-Code-Page-866 +IBM-Code-Page-1047 +IBM-Code-Page-1140 +ISO-8859-1 +ISO-8859-2 +ISO-8859-3 +ISO-8859-4 +ISO-8859-5 +ISO-8859-6 +ISO-8859-7 +ISO-8859-8 +ISO-8859-9 +ISO-8859-10 +ISO-8859-13 +ISO-8859-14 +ISO-8859-15 +ISO-8859-16 +KOI8-R +KOI8-U +Macintosh +Macintosh-Cyrillic +Windows-874 +Windows-1250 +Windows-1251 +Windows-1252 +Windows-1253 +Windows-1254 +Windows-1255 +Windows-1256 +Windows-1257 +Windows-1258 +X-User-Defined ``` + +Encoding masks: + +```text +Asterisk +BackQuote +BackSlash +Colon +CrLf +Ctl +Del +Dollar +Dot +DoubleQuote +Exclamation +Hash +InvalidUtf8 +LeftCrLfHtVt +LeftPeriod +LeftSpace +LeftTilde +LtGt +None +Percent +Pipe +Question +Raw +RightCrLfHtVt +RightPeriod +RightSpace +Semicolon +SingleQuote +Slash +SquareBracket +``` + +Examples: + +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase" // Output: STORIES/THE QUICK BROWN FOX!.TXT ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow" // Output: stories/The Slow Brown Turtle!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode" // Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0 ``` -``` +```console rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode" // Output: stories/The Quick Brown 
Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc" // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd" // Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii" // Output: stories/The Quick Brown Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt" // Output: stories/The Quick Brown Fox! ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_" // Output: OLD_stories/OLD_The Quick Brown Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7" // Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket" // Output: stories/The Quick Brown Fox: A Memoir [draft].txt ``` -``` +```console rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21" // Output: stories/The Quick Brown 🦊 Fox ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo" // Output: stories/The Quick Brown Fox!.txt ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" -// Output: stories/The Quick Brown Fox!-20250618 +// Output: stories/The Quick Brown Fox!-20251121 ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" -// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM +// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM ``` -``` +```console rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab" // Output: ababababababab/ababab ababababab ababababab ababab!abababab ``` -Multiple transformations can be used in sequence, applied in the order they are specified on the command line. +The regex command generally accepts Perl-style regular expressions, the exact +syntax is defined in the [Go regular expression reference](https://golang.org/pkg/regexp/syntax/). +The replacement string may contain capturing group variables, referencing +capturing groups using the syntax `$name` or `${name}`, where the name can +refer to a named capturing group or it can simply be the index as a number. +To insert a literal $, use $$. + +Multiple transformations can be used in sequence, applied +in the order they are specified on the command line. The `--name-transform` flag is also available in `sync`, `copy`, and `move`. -# Files vs Directories +## Files vs Directories -By default `--name-transform` will only apply to file names. The means only the leaf file name will be transformed. -However some of the transforms would be better applied to the whole path or just directories. -To choose which which part of the file path is affected some tags can be added to the `--name-transform`. +By default `--name-transform` will only apply to file names. The means only the +leaf file name will be transformed. However some of the transforms would be +better applied to the whole path or just directories. 
To choose which which +part of the file path is affected some tags can be added to the `--name-transform`. | Tag | Effect | |------|------| @@ -250,42 +269,58 @@ To choose which which part of the file path is affected some tags can be added t | `dir` | Only transform name of directories - these may appear anywhere in the path | | `all` | Transform the entire path for files and directories | -This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`. +This is used by adding the tag into the transform name like this: +`--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`. -For some conversions using all is more likely to be useful, for example `--name-transform all,nfc`. +For some conversions using all is more likely to be useful, for example +`--name-transform all,nfc`. -Note that `--name-transform` may not add path separators `/` to the name. This will cause an error. +Note that `--name-transform` may not add path separators `/` to the name. +This will cause an error. -# Ordering and Conflicts +## Ordering and Conflicts -* Transformations will be applied in the order specified by the user. - * If the `file` tag is in use (the default) then only the leaf name of files will be transformed. - * If the `dir` tag is in use then directories anywhere in the path will be transformed - * If the `all` tag is in use then directories and files anywhere in the path will be transformed - * Each transformation will be run one path segment at a time. - * If a transformation adds a `/` or ends up with an empty path segment then that will be an error. -* It is up to the user to put the transformations in a sensible order. - * Conflicting transformations, such as `prefix` followed by `trimprefix` or `nfc` followed by `nfd`, are possible. - * Instead of enforcing mutual exclusivity, transformations are applied in sequence as specified by the -user, allowing for intentional use cases (e.g., trimming one prefix before adding another). - * Users should be aware that certain combinations may lead to unexpected results and should verify -transformations using `--dry-run` before execution. +- Transformations will be applied in the order specified by the user. + - If the `file` tag is in use (the default) then only the leaf name of files + will be transformed. + - If the `dir` tag is in use then directories anywhere in the path will be + transformed + - If the `all` tag is in use then directories and files anywhere in the path + will be transformed + - Each transformation will be run one path segment at a time. + - If a transformation adds a `/` or ends up with an empty path segment then + that will be an error. +- It is up to the user to put the transformations in a sensible order. + - Conflicting transformations, such as `prefix` followed by `trimprefix` or + `nfc` followed by `nfd`, are possible. + - Instead of enforcing mutual exclusivity, transformations are applied in + sequence as specified by the user, allowing for intentional use cases + (e.g., trimming one prefix before adding another). + - Users should be aware that certain combinations may lead to unexpected + results and should verify transformations using `--dry-run` before execution. -# Race Conditions and Non-Deterministic Behavior +## Race Conditions and Non-Deterministic Behavior -Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name. 
-This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these. -* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic. -* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results. +Some transformations, such as `replace=old:new`, may introduce conflicts where +multiple source files map to the same destination name. This can lead to race +conditions when performing concurrent transfers. It is up to the user to +anticipate these. + +- If two files from the source are transformed into the same name at the + destination, the final state may be non-deterministic. +- Running rclone check after a sync using such transformations may erroneously + report missing or differing files due to overwritten results. To minimize risks, users should: -* Carefully review transformations that may introduce conflicts. -* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations). -* Avoid transformations that cause multiple distinct source files to map to the same destination name. -* Consider disabling concurrency with `--transfers=1` if necessary. -* Certain transformations (e.g. `prefix`) will have a multiplying effect every time they are used. Avoid these when using `bisync`. - +- Carefully review transformations that may introduce conflicts. +- Use `--dry-run` to inspect changes before executing a sync (but keep in mind + that it won't show the effect of non-deterministic transformations). +- Avoid transformations that cause multiple distinct source files to map to the + same destination name. +- Consider disabling concurrency with `--transfers=1` if necessary. +- Certain transformations (e.g. `prefix`) will have a multiplying effect every + time they are used. Avoid these when using `bisync`. ``` rclone convmv dest:path --name-transform XXX [flags] @@ -306,7 +341,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -347,7 +382,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -357,7 +392,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -387,12 +422,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
+ + diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md index b143fde20..488c22479 100644 --- a/docs/content/commands/rclone_copy.md +++ b/docs/content/commands/rclone_copy.md @@ -28,22 +28,30 @@ go there. For example - rclone copy source:sourcepath dest:destpath +```sh +rclone copy source:sourcepath dest:destpath +``` Let's say there are two files in sourcepath - sourcepath/one.txt - sourcepath/two.txt +```text +sourcepath/one.txt +sourcepath/two.txt +``` This copies them to - destpath/one.txt - destpath/two.txt +```text +destpath/one.txt +destpath/two.txt +``` Not to - destpath/sourcepath/one.txt - destpath/sourcepath/two.txt +```text +destpath/sourcepath/one.txt +destpath/sourcepath/two.txt +``` If you are familiar with `rsync`, rclone always works as if you had written a trailing `/` - meaning "copy the contents of this directory". @@ -59,27 +67,30 @@ For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this: - rclone copy --max-age 24h --no-traverse /path/to/src remote: - +```sh +rclone copy --max-age 24h --no-traverse /path/to/src remote: +``` Rclone will sync the modification times of files and directories if the backend supports it. If metadata syncing is required then use the `--metadata` flag. Note that the modification time and metadata for the root directory -will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +will **not** be synced. See [issue #7652](https://github.com/rclone/rclone/issues/7652) for more info. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. -**Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without copying anything. +**Note**: Use the `--dry-run` or the `--interactive`/`-i` flag to test without +copying anything. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -112,9 +123,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). 
``` rclone copy source:path dest:path [flags] @@ -140,7 +149,7 @@ rclone copy source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -150,7 +159,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -191,7 +200,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -201,7 +210,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -231,12 +240,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md index b5979dfff..9afce1ad2 100644 --- a/docs/content/commands/rclone_copyto.md +++ b/docs/content/commands/rclone_copyto.md @@ -19,33 +19,40 @@ name. If the source is a directory then it acts exactly like the So - rclone copyto src dst +```console +rclone copyto src dst +``` -where src and dst are rclone paths, either remote:path or -/path/to/local or C:\windows\path\if\on\windows. +where src and dst are rclone paths, either `remote:path` or +`/path/to/local` or `C:\windows\path\if\on\windows`. This will: - if src is file - copy it to dst, overwriting an existing file if it exists - if src is directory - copy it to dst, overwriting existing files if they exist - see copy command for full details +```text +if src is file + copy it to dst, overwriting an existing file if it exists +if src is directory + copy it to dst, overwriting existing files if they exist + see copy command for full details +``` This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. It doesn't delete files from the destination. 
-*If you are looking to copy just a byte range of a file, please see 'rclone cat --offset X --count Y'* +*If you are looking to copy just a byte range of a file, please see +`rclone cat --offset X --count Y`.* -**Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics +**Note**: Use the `-P`/`--progress` flag to view +real-time transfer statistics. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -78,9 +85,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). ``` rclone copyto source:path dest:path [flags] @@ -105,7 +110,7 @@ rclone copyto source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -115,7 +120,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -156,7 +161,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -166,7 +171,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -196,12 +201,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
+ + diff --git a/docs/content/commands/rclone_copyurl.md b/docs/content/commands/rclone_copyurl.md index 779b6b5e1..0df388f87 100644 --- a/docs/content/commands/rclone_copyurl.md +++ b/docs/content/commands/rclone_copyurl.md @@ -22,12 +22,23 @@ set in HTTP headers, it will be used instead of the name from the URL. With `--print-filename` in addition, the resulting file name will be printed. -Setting `--no-clobber` will prevent overwriting file on the +Setting `--no-clobber` will prevent overwriting file on the destination if there is one with the same name. Setting `--stdout` or making the output file name `-` will cause the output to be written to standard output. +Setting `--urls` allows you to input a CSV file of URLs in format: URL, +FILENAME. If `--urls` is in use then replace the URL in the arguments with the +file containing the URLs, e.g.: +```sh +rclone copyurl --urls myurls.csv remote:dir +``` +Missing filenames will be autogenerated equivalent to using `--auto-filename`. +Note that `--stdout` and `--print-filename` are incompatible with `--urls`. +This will do `--transfers` copies in parallel. Note that if `--auto-filename` +is desired for all URLs then a file with only URLs and no filename can be used. + ## Troubleshooting If you can't get `rclone copyurl` to work then here are some things you can try: @@ -38,8 +49,6 @@ If you can't get `rclone copyurl` to work then here are some things you can try: - `--user agent curl` - some sites have whitelists for curl's user-agent - try that - Make sure the site works with `curl` directly - - ``` rclone copyurl https://example.com dest:path [flags] ``` @@ -53,6 +62,7 @@ rclone copyurl https://example.com dest:path [flags] --no-clobber Prevent overwriting file with same name -p, --print-filename Print the resulting name from --auto-filename --stdout Write the output to stdout rather than a file + --urls Use a CSV file of links to process multiple URLs ``` Options shared with other commands are described next. @@ -62,7 +72,7 @@ See the [global flags page](/flags/) for global options not listed here. Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -70,5 +80,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md index 3c25df3cc..3b021e63d 100644 --- a/docs/content/commands/rclone_cryptcheck.md +++ b/docs/content/commands/rclone_cryptcheck.md @@ -10,7 +10,7 @@ Cryptcheck checks the integrity of an encrypted remote. ## Synopsis -Checks a remote against a [crypted](/crypt/) remote. This is the equivalent +Checks a remote against an [encrypted](/crypt/) remote. This is the equivalent of running rclone [check](/commands/rclone_check/), but able to check the checksums of the encrypted remote. @@ -24,14 +24,18 @@ checksum of the file it has just encrypted. Use it like this - rclone cryptcheck /path/to/files encryptedremote:path +```console +rclone cryptcheck /path/to/files encryptedremote:path +``` You can use it like this also, but that will involve downloading all -the files in remote:path. +the files in `remote:path`. 
- rclone cryptcheck remote:path encryptedremote:path +```console +rclone cryptcheck remote:path encryptedremote:path +``` -After it has run it will log the status of the encryptedremote:. +After it has run it will log the status of the `encryptedremote:`. If you supply the `--one-way` flag, it will only check that files in the source match the files in the destination, not the other way @@ -57,7 +61,6 @@ you what happened to it. These are reminiscent of diff files. The default number of parallel checks is 8. See the [--checkers](/docs/#checkers-int) option for more information. - ``` rclone cryptcheck remote:path cryptedremote:path [flags] ``` @@ -82,7 +85,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags used for check commands -``` +```text --max-backlog int Maximum number of objects in sync or check backlog (default 10000) ``` @@ -90,7 +93,7 @@ Flags used for check commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -120,12 +123,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md index 42691cd70..26c265ea3 100644 --- a/docs/content/commands/rclone_cryptdecode.md +++ b/docs/content/commands/rclone_cryptdecode.md @@ -17,13 +17,13 @@ If you supply the `--reverse` flag, it will return encrypted file names. use it like this - rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 - - rclone cryptdecode --reverse encryptedremote: filename1 filename2 - -Another way to accomplish this is by using the `rclone backend encode` (or `decode`) command. -See the documentation on the [crypt](/crypt/) overlay for more info. +```console +rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 +rclone cryptdecode --reverse encryptedremote: filename1 filename2 +``` +Another way to accomplish this is by using the `rclone backend encode` (or `decode`) +command. See the documentation on the [crypt](/crypt/) overlay for more info. ``` rclone cryptdecode encryptedremote: encryptedfilename [flags] @@ -40,5 +40,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md index 477da82bb..57ad7d9b1 100644 --- a/docs/content/commands/rclone_dedupe.md +++ b/docs/content/commands/rclone_dedupe.md @@ -30,14 +30,15 @@ directories have been merged. Next, if deduping by name, for every group of duplicate file names / hashes, it will delete all but one identical file it finds without -confirmation. This means that for most duplicated files the `dedupe` command will not be interactive. +confirmation. This means that for most duplicated files the +`dedupe` command will not be interactive. `dedupe` considers files to be identical if they have the -same file path and the same hash. 
If the backend does not support hashes (e.g. crypt wrapping -Google Drive) then they will never be found to be identical. If you -use the `--size-only` flag then files will be considered -identical if they have the same size (any hash will be ignored). This -can be useful on crypt backends which do not support hashes. +same file path and the same hash. If the backend does not support +hashes (e.g. crypt wrapping Google Drive) then they will never be found +to be identical. If you use the `--size-only` flag then files +will be considered identical if they have the same size (any hash will be +ignored). This can be useful on crypt backends which do not support hashes. Next rclone will resolve the remaining duplicates. Exactly which action is taken depends on the dedupe mode. By default, rclone will @@ -50,71 +51,82 @@ Here is an example run. Before - with duplicates - $ rclone lsl drive:dupes - 6048320 2016-03-05 16:23:16.798000000 one.txt - 6048320 2016-03-05 16:23:11.775000000 one.txt - 564374 2016-03-05 16:23:06.731000000 one.txt - 6048320 2016-03-05 16:18:26.092000000 one.txt - 6048320 2016-03-05 16:22:46.185000000 two.txt - 1744073 2016-03-05 16:22:38.104000000 two.txt - 564374 2016-03-05 16:22:52.118000000 two.txt +```console +$ rclone lsl drive:dupes + 6048320 2016-03-05 16:23:16.798000000 one.txt + 6048320 2016-03-05 16:23:11.775000000 one.txt + 564374 2016-03-05 16:23:06.731000000 one.txt + 6048320 2016-03-05 16:18:26.092000000 one.txt + 6048320 2016-03-05 16:22:46.185000000 two.txt + 1744073 2016-03-05 16:22:38.104000000 two.txt + 564374 2016-03-05 16:22:52.118000000 two.txt +``` Now the `dedupe` session - $ rclone dedupe drive:dupes - 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. - one.txt: Found 4 files with duplicate names - one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") - one.txt: 2 duplicates remain - 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 - 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 - s) Skip and do nothing - k) Keep just one (choose which in next step) - r) Rename all to be different (by changing file.jpg to file-1.jpg) - s/k/r> k - Enter the number of the file to keep> 1 - one.txt: Deleted 1 extra copies - two.txt: Found 3 files with duplicate names - two.txt: 3 duplicates remain - 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 - 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 - 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 - s) Skip and do nothing - k) Keep just one (choose which in next step) - r) Rename all to be different (by changing file.jpg to file-1.jpg) - s/k/r> r - two-1.txt: renamed from: two.txt - two-2.txt: renamed from: two.txt - two-3.txt: renamed from: two.txt +```console +$ rclone dedupe drive:dupes +2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode. 
+one.txt: Found 4 files with duplicate names +one.txt: Deleting 2/3 identical duplicates (MD5 "1eedaa9fe86fd4b8632e2ac549403b36") +one.txt: 2 duplicates remain + 1: 6048320 bytes, 2016-03-05 16:23:16.798000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 + 2: 564374 bytes, 2016-03-05 16:23:06.731000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 +s) Skip and do nothing +k) Keep just one (choose which in next step) +r) Rename all to be different (by changing file.jpg to file-1.jpg) +s/k/r> k +Enter the number of the file to keep> 1 +one.txt: Deleted 1 extra copies +two.txt: Found 3 files with duplicate names +two.txt: 3 duplicates remain + 1: 564374 bytes, 2016-03-05 16:22:52.118000000, MD5 7594e7dc9fc28f727c42ee3e0749de81 + 2: 6048320 bytes, 2016-03-05 16:22:46.185000000, MD5 1eedaa9fe86fd4b8632e2ac549403b36 + 3: 1744073 bytes, 2016-03-05 16:22:38.104000000, MD5 851957f7fb6f0bc4ce76be966d336802 +s) Skip and do nothing +k) Keep just one (choose which in next step) +r) Rename all to be different (by changing file.jpg to file-1.jpg) +s/k/r> r +two-1.txt: renamed from: two.txt +two-2.txt: renamed from: two.txt +two-3.txt: renamed from: two.txt +``` The result being - $ rclone lsl drive:dupes - 6048320 2016-03-05 16:23:16.798000000 one.txt - 564374 2016-03-05 16:22:52.118000000 two-1.txt - 6048320 2016-03-05 16:22:46.185000000 two-2.txt - 1744073 2016-03-05 16:22:38.104000000 two-3.txt +```console +$ rclone lsl drive:dupes + 6048320 2016-03-05 16:23:16.798000000 one.txt + 564374 2016-03-05 16:22:52.118000000 two-1.txt + 6048320 2016-03-05 16:22:46.185000000 two-2.txt + 1744073 2016-03-05 16:22:38.104000000 two-3.txt +``` -Dedupe can be run non interactively using the `--dedupe-mode` flag or by using an extra parameter with the same value +Dedupe can be run non interactively using the `--dedupe-mode` flag +or by using an extra parameter with the same value - * `--dedupe-mode interactive` - interactive as above. - * `--dedupe-mode skip` - removes identical files then skips anything left. - * `--dedupe-mode first` - removes identical files then keeps the first one. - * `--dedupe-mode newest` - removes identical files then keeps the newest one. - * `--dedupe-mode oldest` - removes identical files then keeps the oldest one. - * `--dedupe-mode largest` - removes identical files then keeps the largest one. - * `--dedupe-mode smallest` - removes identical files then keeps the smallest one. - * `--dedupe-mode rename` - removes identical files then renames the rest to be different. - * `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing. +- `--dedupe-mode interactive` - interactive as above. +- `--dedupe-mode skip` - removes identical files then skips anything left. +- `--dedupe-mode first` - removes identical files then keeps the first one. +- `--dedupe-mode newest` - removes identical files then keeps the newest one. +- `--dedupe-mode oldest` - removes identical files then keeps the oldest one. +- `--dedupe-mode largest` - removes identical files then keeps the largest one. +- `--dedupe-mode smallest` - removes identical files then keeps the smallest one. +- `--dedupe-mode rename` - removes identical files then renames the rest to be different. +- `--dedupe-mode list` - lists duplicate dirs and files only and changes nothing. 
-For example, to rename all the identically named photos in your Google Photos directory, do +For example, to rename all the identically named photos in your Google Photos +directory, do - rclone dedupe --dedupe-mode rename "drive:Google Photos" +```console +rclone dedupe --dedupe-mode rename "drive:Google Photos" +``` Or - rclone dedupe rename "drive:Google Photos" - +```console +rclone dedupe rename "drive:Google Photos" +``` ``` rclone dedupe [mode] remote:path [flags] @@ -135,7 +147,7 @@ See the [global flags page](/flags/) for global options not listed here. Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -143,5 +155,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md index d4153f87d..063b38952 100644 --- a/docs/content/commands/rclone_delete.md +++ b/docs/content/commands/rclone_delete.md @@ -17,19 +17,23 @@ obeys include/exclude filters so can be used to selectively delete files. alone. If you want to delete a directory and all of its contents use the [purge](/commands/rclone_purge/) command. -If you supply the `--rmdirs` flag, it will remove all empty directories along with it. -You can also use the separate command [rmdir](/commands/rclone_rmdir/) or -[rmdirs](/commands/rclone_rmdirs/) to delete empty directories only. +If you supply the `--rmdirs` flag, it will remove all empty directories along +with it. You can also use the separate command [rmdir](/commands/rclone_rmdir/) +or [rmdirs](/commands/rclone_rmdirs/) to delete empty directories only. For example, to delete all files bigger than 100 MiB, you may first want to check what would be deleted (use either): - rclone --min-size 100M lsl remote:path - rclone --dry-run --min-size 100M delete remote:path +```sh +rclone --min-size 100M lsl remote:path +rclone --dry-run --min-size 100M delete remote:path +``` Then proceed with the actual delete: - rclone --min-size 100M delete remote:path +```sh +rclone --min-size 100M delete remote:path +``` That reads "delete everything with a minimum size of 100 MiB", hence delete all files bigger than 100 MiB. @@ -37,7 +41,6 @@ delete all files bigger than 100 MiB. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. - ``` rclone delete remote:path [flags] ``` @@ -56,7 +59,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -66,7 +69,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -96,12 +99,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_deletefile.md b/docs/content/commands/rclone_deletefile.md index 17fb064d1..19143176c 100644 --- a/docs/content/commands/rclone_deletefile.md +++ b/docs/content/commands/rclone_deletefile.md @@ -11,9 +11,8 @@ Remove a single file from remote. ## Synopsis Remove a single file from remote. Unlike `delete` it cannot be used to -remove a directory and it doesn't obey include/exclude filters - if the specified file exists, -it will always be removed. - +remove a directory and it doesn't obey include/exclude filters - if the +specified file exists, it will always be removed. ``` rclone deletefile remote:path [flags] @@ -32,7 +31,7 @@ See the [global flags page](/flags/) for global options not listed here. Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -40,5 +39,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md index 3b7bd9aaf..96aac6bb3 100644 --- a/docs/content/commands/rclone_gendocs.md +++ b/docs/content/commands/rclone_gendocs.md @@ -28,5 +28,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_gitannex.md b/docs/content/commands/rclone_gitannex.md index 39410238e..431d25b6e 100644 --- a/docs/content/commands/rclone_gitannex.md +++ b/docs/content/commands/rclone_gitannex.md @@ -18,19 +18,21 @@ users. [git-annex]: https://git-annex.branchable.com/ -Installation on Linux ---------------------- +## Installation on Linux 1. Skip this step if your version of git-annex is [10.20240430] or newer. Otherwise, you must create a symlink somewhere on your PATH with a particular name. This symlink helps git-annex tell rclone it wants to run the "gitannex" subcommand. - ```sh - # Create the helper symlink in "$HOME/bin". + Create the helper symlink in "$HOME/bin": + + ```console ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin" - # Verify the new symlink is on your PATH. + Verify the new symlink is on your PATH: + + ```console which git-annex-remote-rclone-builtin ``` @@ -42,11 +44,15 @@ Installation on Linux Start by asking git-annex to describe the remote's available configuration parameters. 
- ```sh - # If you skipped step 1: - git annex initremote MyRemote type=rclone --whatelse + If you skipped step 1: - # If you created a symlink in step 1: + ```console + git annex initremote MyRemote type=rclone --whatelse + ``` + + If you created a symlink in step 1: + + ```console git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse ``` @@ -62,7 +68,7 @@ Installation on Linux be one configured in your rclone.conf file, which can be located with `rclone config file`. - ```sh + ```console git annex initremote MyRemote \ type=external \ externaltype=rclone-builtin \ @@ -76,13 +82,12 @@ Installation on Linux remote**. This command is very new and has not been tested on many rclone backends. Caveat emptor! - ```sh + ```console git annex testremote MyRemote ``` Happy annexing! - ``` rclone gitannex [flags] ``` @@ -97,5 +102,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_hashsum.md b/docs/content/commands/rclone_hashsum.md index 5694bf16c..ffe477417 100644 --- a/docs/content/commands/rclone_hashsum.md +++ b/docs/content/commands/rclone_hashsum.md @@ -29,25 +29,28 @@ as a relative path). Run without a hash to see the list of all supported hashes, e.g. - $ rclone hashsum - Supported hashes are: - * md5 - * sha1 - * whirlpool - * crc32 - * sha256 - * sha512 - * blake3 - * xxh3 - * xxh128 +```console +$ rclone hashsum +Supported hashes are: +- md5 +- sha1 +- whirlpool +- crc32 +- sha256 +- sha512 +- blake3 +- xxh3 +- xxh128 +``` Then - $ rclone hashsum MD5 remote:path +```console +rclone hashsum MD5 remote:path +``` Note that hash names are case insensitive and values are output in lower case. - ``` rclone hashsum [ remote:path] [flags] ``` @@ -69,7 +72,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -99,12 +102,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_link.md b/docs/content/commands/rclone_link.md index c07e2e221..9f34bab61 100644 --- a/docs/content/commands/rclone_link.md +++ b/docs/content/commands/rclone_link.md @@ -12,10 +12,12 @@ Generate public link to file/folder. Create, retrieve or remove a public link to the given file or folder. - rclone link remote:path/to/file - rclone link remote:path/to/folder/ - rclone link --unlink remote:path/to/folder/ - rclone link --expire 1d remote:path/to/file +```console +rclone link remote:path/to/file +rclone link remote:path/to/folder/ +rclone link --unlink remote:path/to/folder/ +rclone link --expire 1d remote:path/to/file +``` If you supply the --expire flag, it will set the expiration time otherwise it will use the default (100 years). **Note** not all @@ -28,10 +30,9 @@ don't will just ignore it. If successful, the last line of the output will contain the link. 
Exact capabilities depend on the remote, but the link will -always by default be created with the least constraints – e.g. no +always by default be created with the least constraints - e.g. no expiry, no password protection, accessible without account. - ``` rclone link remote:path [flags] ``` @@ -48,5 +49,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md index 77361752b..51488704e 100644 --- a/docs/content/commands/rclone_listremotes.md +++ b/docs/content/commands/rclone_listremotes.md @@ -10,7 +10,6 @@ List all the remotes in the config file and defined in environment variables. ## Synopsis - Lists all the available remotes from the config file, or the remotes matching an optional filter. @@ -24,7 +23,6 @@ Result can be filtered by a filter argument which applies to all attributes, and/or filter flags specific for each attribute. The values must be specified according to regular rclone filtering pattern syntax. - ``` rclone listremotes [] [flags] ``` @@ -46,5 +44,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md index 30b8cfd88..122a9ab0f 100644 --- a/docs/content/commands/rclone_ls.md +++ b/docs/content/commands/rclone_ls.md @@ -12,24 +12,25 @@ List the objects in the path with size and path. Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. -Eg - - $ rclone ls swift:bucket - 60295 bevajer5jef - 90613 canole - 94467 diwogej7 - 37600 fubuwic +E.g. +```console +$ rclone ls swift:bucket + 60295 bevajer5jef + 90613 canole + 94467 diwogej7 + 37600 fubuwic +``` Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -37,13 +38,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone ls remote:path [flags] ``` @@ -61,7 +62,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -91,12 +92,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md index 0fa31360c..b30f0c851 100644 --- a/docs/content/commands/rclone_lsd.md +++ b/docs/content/commands/rclone_lsd.md @@ -15,31 +15,34 @@ recurse by default. Use the `-R` flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name -of the directory, Eg +of the directory, E.g. - $ rclone lsd swift: - 494000 2018-04-26 08:43:20 10000 10000files - 65 2018-04-26 08:43:20 1 1File +```console +$ rclone lsd swift: + 494000 2018-04-26 08:43:20 10000 10000files + 65 2018-04-26 08:43:20 1 1File +``` Or - $ rclone lsd drive:test - -1 2016-10-17 17:41:53 -1 1000files - -1 2017-01-03 14:40:54 -1 2500files - -1 2017-07-08 14:39:28 -1 4000files +```console +$ rclone lsd drive:test + -1 2016-10-17 17:41:53 -1 1000files + -1 2017-01-03 14:40:54 -1 2500files + -1 2017-07-08 14:39:28 -1 4000files +``` If you just want the directory names use `rclone lsf --dirs-only`. - Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -47,13 +50,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone lsd remote:path [flags] ``` @@ -72,7 +75,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -102,12 +105,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_lsf.md b/docs/content/commands/rclone_lsf.md index 73ca2077a..3eb2bd8e6 100644 --- a/docs/content/commands/rclone_lsf.md +++ b/docs/content/commands/rclone_lsf.md @@ -15,41 +15,47 @@ standard output in a form which is easy to parse by scripts. By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. -Eg +E.g. - $ rclone lsf swift:bucket - bevajer5jef - canole - diwogej7 - ferejej3gux/ - fubuwic +```console +$ rclone lsf swift:bucket +bevajer5jef +canole +diwogej7 +ferejej3gux/ +fubuwic +``` Use the `--format` option to control what gets listed. By default this is just the path, but you can use these parameters to control the output: - p - path - s - size - t - modification time - h - hash - i - ID of object - o - Original ID of underlying object - m - MimeType of object if known - e - encrypted name - T - tier of storage if known, e.g. "Hot" or "Cool" - M - Metadata of object in JSON blob format, eg {"key":"value"} +```text +p - path +s - size +t - modification time +h - hash +i - ID of object +o - Original ID of underlying object +m - MimeType of object if known +e - encrypted name +T - tier of storage if known, e.g. "Hot" or "Cool" +M - Metadata of object in JSON blob format, eg {"key":"value"} +``` So if you wanted the path, size and modification time, you would use `--format "pst"`, or maybe `--format "tsp"` to put the path last. -Eg +E.g. - $ rclone lsf --format "tsp" swift:bucket - 2016-06-25 18:55:41;60295;bevajer5jef - 2016-06-25 18:55:43;90613;canole - 2016-06-25 18:55:43;94467;diwogej7 - 2018-04-26 08:50:45;0;ferejej3gux/ - 2016-06-25 18:55:40;37600;fubuwic +```console +$ rclone lsf --format "tsp" swift:bucket +2016-06-25 18:55:41;60295;bevajer5jef +2016-06-25 18:55:43;90613;canole +2016-06-25 18:55:43;94467;diwogej7 +2018-04-26 08:50:45;0;ferejej3gux/ +2016-06-25 18:55:40;37600;fubuwic +``` If you specify "h" in the format you will get the MD5 hash by default, use the `--hash` flag to change which hash you want. Note that this @@ -60,16 +66,20 @@ type. For example, to emulate the md5sum command you can use - rclone lsf -R --hash MD5 --format hp --separator " " --files-only . +```console +rclone lsf -R --hash MD5 --format hp --separator " " --files-only . +``` -Eg +E.g. 
- $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket - 7908e352297f0f530b84a756f188baa3 bevajer5jef - cd65ac234e6fea5925974a51cdd865cc canole - 03b5341b4f234b9d984d03ad076bae91 diwogej7 - 8fd37c3810dd660778137ac3a66cc06d fubuwic - 99713e14a4c4ff553acaf1930fad985b gixacuh7ku +```console +$ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket +7908e352297f0f530b84a756f188baa3 bevajer5jef +cd65ac234e6fea5925974a51cdd865cc canole +03b5341b4f234b9d984d03ad076bae91 diwogej7 +8fd37c3810dd660778137ac3a66cc06d fubuwic +99713e14a4c4ff553acaf1930fad985b gixacuh7ku +``` (Though "rclone md5sum ." is an easier way of typing this.) @@ -77,24 +87,28 @@ By default the separator is ";" this can be changed with the `--separator` flag. Note that separators aren't escaped in the path so putting it last is a good strategy. -Eg +E.g. - $ rclone lsf --separator "," --format "tshp" swift:bucket - 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef - 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole - 2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 - 2018-04-26 08:52:53,0,,ferejej3gux/ - 2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic +```console +$ rclone lsf --separator "," --format "tshp" swift:bucket +2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef +2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole +2016-06-25 18:55:43,94467,03b5341b4f234b9d984d03ad076bae91,diwogej7 +2018-04-26 08:52:53,0,,ferejej3gux/ +2016-06-25 18:55:40,37600,8fd37c3810dd660778137ac3a66cc06d,fubuwic +``` You can output in CSV standard format. This will escape things in " -if they contain , +if they contain, -Eg +E.g. - $ rclone lsf --csv --files-only --format ps remote:path - test.log,22355 - test.sh,449 - "this file contains a comma, in the file name.txt",6 +```console +$ rclone lsf --csv --files-only --format ps remote:path +test.log,22355 +test.sh,449 +"this file contains a comma, in the file name.txt",6 +``` Note that the `--absolute` parameter is useful for making lists of files to pass to an rclone copy with the `--files-from-raw` flag. @@ -102,32 +116,38 @@ to pass to an rclone copy with the `--files-from-raw` flag. For example, to find all the files modified within one day and copy those only (without traversing the whole directory structure): - rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files - rclone copy --files-from-raw new_files /path/to/local remote:path +```console +rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files +rclone copy --files-from-raw new_files /path/to/local remote:path +``` The default time format is `'2006-01-02 15:04:05'`. -[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with the `--time-format` flag. -Examples: +[Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with +the `--time-format` flag. 
Examples: - rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)' - rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000' - rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00' - rclone lsf remote:path --format pt --time-format RFC3339 - rclone lsf remote:path --format pt --time-format DateOnly - rclone lsf remote:path --format pt --time-format max -`--time-format max` will automatically truncate '`2006-01-02 15:04:05.000000000`' +```console +rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)' +rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000' +rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00' +rclone lsf remote:path --format pt --time-format RFC3339 +rclone lsf remote:path --format pt --time-format DateOnly +rclone lsf remote:path --format pt --time-format max +rclone lsf remote:path --format pt --time-format unix +rclone lsf remote:path --format pt --time-format unixnano +``` + +`--time-format max` will automatically truncate `2006-01-02 15:04:05.000000000` to the maximum precision supported by the remote. - Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -135,13 +155,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone lsf remote:path [flags] ``` @@ -159,7 +179,7 @@ rclone lsf remote:path [flags] -h, --help help for lsf -R, --recursive Recurse into the listing -s, --separator string Separator for the items in the format (default ";") - -t, --time-format string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --time-format string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -169,7 +189,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -199,12 +219,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md index 952828ff4..7f19807bf 100644 --- a/docs/content/commands/rclone_lsjson.md +++ b/docs/content/commands/rclone_lsjson.md @@ -14,25 +14,27 @@ List directories and objects in the path in JSON format. The output is an array of Items, where each Item looks like this: - { - "Hashes" : { - "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", - "MD5" : "b1946ac92492d2347c6235b4d2611184", - "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" - }, - "ID": "y2djkhiujf83u33", - "OrigID": "UYOJVTUW00Q1RzTDA", - "IsBucket" : false, - "IsDir" : false, - "MimeType" : "application/octet-stream", - "ModTime" : "2017-05-31T16:15:57.034468261+01:00", - "Name" : "file.txt", - "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", - "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", - "Path" : "full/path/goes/here/file.txt", - "Size" : 6, - "Tier" : "hot", - } +```json +{ + "Hashes" : { + "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", + "MD5" : "b1946ac92492d2347c6235b4d2611184", + "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" + }, + "ID": "y2djkhiujf83u33", + "OrigID": "UYOJVTUW00Q1RzTDA", + "IsBucket" : false, + "IsDir" : false, + "MimeType" : "application/octet-stream", + "ModTime" : "2017-05-31T16:15:57.034468261+01:00", + "Name" : "file.txt", + "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", + "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", + "Path" : "full/path/goes/here/file.txt", + "Size" : 6, + "Tier" : "hot", +} +``` The exact set of properties included depends on the backend: @@ -94,11 +96,11 @@ Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -106,13 +108,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. 
Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone lsjson remote:path [flags] ``` @@ -141,7 +143,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -171,12 +173,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md index d02090071..4c02b1d54 100644 --- a/docs/content/commands/rclone_lsl.md +++ b/docs/content/commands/rclone_lsl.md @@ -13,24 +13,25 @@ List the objects in path with modification time, size and path. Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. -Eg - - $ rclone lsl swift:bucket - 60295 2016-06-25 18:55:41.062626927 bevajer5jef - 90613 2016-06-25 18:55:43.302607074 canole - 94467 2016-06-25 18:55:43.046609333 diwogej7 - 37600 2016-06-25 18:55:40.814629136 fubuwic +E.g. +```console +$ rclone lsl swift:bucket + 60295 2016-06-25 18:55:41.062626927 bevajer5jef + 90613 2016-06-25 18:55:43.302607074 canole + 94467 2016-06-25 18:55:43.046609333 diwogej7 + 37600 2016-06-25 18:55:40.814629136 fubuwic +``` Any of the filtering options can be applied to this command. There are several related list commands - * `ls` to list size and path of objects only - * `lsl` to list modification time, size and path of objects only - * `lsd` to list directories only - * `lsf` to list objects and directories in easy to parse format - * `lsjson` to list objects and directories in JSON format +- `ls` to list size and path of objects only +- `lsl` to list modification time, size and path of objects only +- `lsd` to list directories only +- `lsf` to list objects and directories in easy to parse format +- `lsjson` to list objects and directories in JSON format `ls`,`lsl`,`lsd` are designed to be human-readable. `lsf` is designed to be human and machine-readable. @@ -38,13 +39,13 @@ There are several related list commands Note that `ls` and `lsl` recurse by default - use `--max-depth 1` to stop the recursion. -The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - use `-R` to make them recurse. +The other list commands `lsd`,`lsf`,`lsjson` do not recurse by default - +use `-R` to make them recurse. Listing a nonexistent directory will produce an error except for remotes which can't have empty directories (e.g. s3, swift, or gcs - the bucket-based remotes). - ``` rclone lsl remote:path [flags] ``` @@ -62,7 +63,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -92,12 +93,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md index 435ca45b1..c8e2c75a3 100644 --- a/docs/content/commands/rclone_md5sum.md +++ b/docs/content/commands/rclone_md5sum.md @@ -27,7 +27,6 @@ by not passing a remote:path, or by passing a hyphen as remote:path when there is data to read (if not, the hyphen will be treated literally, as a relative path). - ``` rclone md5sum remote:path [flags] ``` @@ -49,7 +48,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -79,12 +78,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md index 94d6637c8..8da0409db 100644 --- a/docs/content/commands/rclone_mkdir.md +++ b/docs/content/commands/rclone_mkdir.md @@ -24,7 +24,7 @@ See the [global flags page](/flags/) for global options not listed here. Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -32,5 +32,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md index 10e3ce473..ce5af7d95 100644 --- a/docs/content/commands/rclone_mount.md +++ b/docs/content/commands/rclone_mount.md @@ -13,7 +13,7 @@ Mount the remote as file system on a mountpoint. Rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. -First set up your remote using `rclone config`. Check it works with `rclone ls` etc. +First set up your remote using `rclone config`. Check it works with `rclone ls` etc. On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. 
Use the `--daemon` flag @@ -28,7 +28,9 @@ mount, waits until success or timeout and exits with appropriate code On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount` is an **empty** **existing** directory: - rclone mount remote:path/to/files /path/to/local/mount +```console +rclone mount remote:path/to/files /path/to/local/mount +``` On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows) for details. If foreground mount is used interactively from a console window, @@ -38,26 +40,30 @@ used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C. The following examples will mount to an automatically assigned drive, to specific drive letter `X:`, to path `C:\path\parent\mount` (where parent directory or drive must exist, and mount must **not** exist, -and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and -the last example will mount as network share `\\cloud\remote` and map it to an +and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), +and the last example will mount as network share `\\cloud\remote` and map it to an automatically assigned drive: - rclone mount remote:path/to/files * - rclone mount remote:path/to/files X: - rclone mount remote:path/to/files C:\path\parent\mount - rclone mount remote:path/to/files \\cloud\remote +```console +rclone mount remote:path/to/files * +rclone mount remote:path/to/files X: +rclone mount remote:path/to/files C:\path\parent\mount +rclone mount remote:path/to/files \\cloud\remote +``` When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped. When running in background mode the user will have to stop the mount manually: - # Linux - fusermount -u /path/to/local/mount - #... or on some systems - fusermount3 -u /path/to/local/mount - # OS X or Linux when using nfsmount - umount /path/to/local/mount +```console +# Linux +fusermount -u /path/to/local/mount +#... or on some systems +fusermount3 -u /path/to/local/mount +# OS X or Linux when using nfsmount +umount /path/to/local/mount +``` The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually. @@ -71,7 +77,7 @@ at all, then 1 PiB is set as both the total and the free size. ## Installing on Windows -To run rclone mount on Windows, you will need to +To run `rclone mount on Windows`, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). [WinFsp](https://github.com/winfsp/winfsp) is an open-source @@ -92,20 +98,22 @@ thumbnails for image and video files on network drives. In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described -as a network share. If you mount an rclone remote using the default, fixed drive mode -and experience unexpected program errors, freezes or other issues, consider mounting -as a network drive instead. +as a network share. If you mount an rclone remote using the default, fixed drive +mode and experience unexpected program errors, freezes or other issues, consider +mounting as a network drive instead. When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a **nonexistent** subdirectory of an **existing** parent directory or drive. 
Using the special value `*` will tell rclone to -automatically assign the next available drive letter, starting with Z: and moving backward. -Examples: +automatically assign the next available drive letter, starting with Z: and moving +backward. Examples: - rclone mount remote:path/to/files * - rclone mount remote:path/to/files X: - rclone mount remote:path/to/files C:\path\parent\mount - rclone mount remote:path/to/files X: +```console +rclone mount remote:path/to/files * +rclone mount remote:path/to/files X: +rclone mount remote:path/to/files C:\path\parent\mount +rclone mount remote:path/to/files X: +``` Option `--volname` can be used to set a custom volume name for the mounted file system. The default is to use the remote name and path. @@ -115,24 +123,28 @@ to your mount command. Mounting to a directory path is not supported in this mode, it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter. - rclone mount remote:path/to/files X: --network-mode +```console +rclone mount remote:path/to/files X: --network-mode +``` -A volume name specified with `--volname` will be used to create the network share path. -A complete UNC path, such as `\\cloud\remote`, optionally with path +A volume name specified with `--volname` will be used to create the network share +path. A complete UNC path, such as `\\cloud\remote`, optionally with path `\\cloud\remote\madeup\path`, will be used as is. Any other string will be used as the share part, after a default prefix `\\server\`. If no volume name is specified then `\\server\share` will be used. -You must make sure the volume name is unique when you are mounting more than one drive, -or else the mount command will fail. The share name will treated as the volume label for -the mapped drive, shown in Windows Explorer etc, while the complete +You must make sure the volume name is unique when you are mounting more than one +drive, or else the mount command will fail. The share name will treated as the +volume label for the mapped drive, shown in Windows Explorer etc, while the complete `\\server\share` will be reported as the remote UNC path by `net use` etc, just like a normal network drive mapping. If you specify a full network share UNC path with `--volname`, this will implicitly set the `--network-mode` option, so the following two examples have same result: - rclone mount remote:path/to/files X: --network-mode - rclone mount remote:path/to/files X: --volname \\server\share +```console +rclone mount remote:path/to/files X: --network-mode +rclone mount remote:path/to/files X: --volname \\server\share +``` You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with `*` and use that as @@ -140,15 +152,16 @@ mountpoint, and instead use the UNC path specified as the volume name, as if it specified with the `--volname` option. This will also implicitly set the `--network-mode` option. This means the following two examples have same result: - rclone mount remote:path/to/files \\cloud\remote - rclone mount remote:path/to/files * --volname \\cloud\remote +```console +rclone mount remote:path/to/files \\cloud\remote +rclone mount remote:path/to/files * --volname \\cloud\remote +``` There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: `--fuse-flag --VolumePrefix=\server\share`. Note that the path must be with just a single backslash prefix in this case. 
- *Note:* In previous versions of rclone this was the only supported method. [Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) @@ -161,11 +174,11 @@ The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL). -The mounted filesystem will normally get three entries in its access-control list (ACL), -representing permissions for the POSIX permission scopes: Owner, group and others. -By default, the owner and group will be taken from the current user, and the built-in -group "Everyone" will be used to represent others. The user/group can be customized -with FUSE options "UserName" and "GroupName", +The mounted filesystem will normally get three entries in its access-control list +(ACL), representing permissions for the POSIX permission scopes: Owner, group and +others. By default, the owner and group will be taken from the current user, and +the built-in group "Everyone" will be used to represent others. The user/group can +be customized with FUSE options "UserName" and "GroupName", e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`. The permissions on each entry will be set according to [options](#options) `--dir-perms` and `--file-perms`, which takes a value in traditional Unix @@ -265,58 +278,74 @@ does not suffer from the same limitations. ## Mounting on macOS -Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/) -(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional -FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system -which "mounts" via an NFSv4 local server. +Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), +[macFUSE](https://osxfuse.github.io/) (also known as osxfuse) or +[FUSE-T](https://www.fuse-t.org/).macFUSE is a traditional FUSE driver utilizing +a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which +"mounts" via an NFSv4 local server. -#### Unicode Normalization +### Unicode Normalization It is highly recommended to keep the default of `--no-unicode-normalization=false` for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). ### NFS mount -This method spins up an NFS server using [serve nfs](/commands/rclone_serve_nfs/) command and mounts -it to the specified mountpoint. If you run this in background mode using |--daemon|, you will need to -send SIGTERM signal to the rclone process using |kill| command to stop the mount. +This method spins up an NFS server using [serve nfs](/commands/rclone_serve_nfs/) +command and mounts it to the specified mountpoint. If you run this in background +mode using |--daemon|, you will need to send SIGTERM signal to the rclone process +using |kill| command to stop the mount. -Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. -This should not be set too low or you may experience errors when trying to access files. The default is 1000000, +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file +handles stored by the `nfsmount` caching handler. This should not be set too low +or you may experience errors when trying to access files. 
The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. ### macFUSE Notes -If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from -the website, rclone will locate the macFUSE libraries without any further intervention. -If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager, -the following addition steps are required. +If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) +from the website, rclone will locate the macFUSE libraries without any further intervention. +If however, macFUSE is installed using the [macports](https://www.macports.org/) +package manager, the following addition steps are required. - sudo mkdir /usr/local/lib - cd /usr/local/lib - sudo ln -s /opt/local/lib/libfuse.2.dylib +```console +sudo mkdir /usr/local/lib +cd /usr/local/lib +sudo ln -s /opt/local/lib/libfuse.2.dylib +``` ### FUSE-T Limitations, Caveats, and Notes -There are some limitations, caveats, and notes about how it works. These are current as -of FUSE-T version 1.0.14. +There are some limitations, caveats, and notes about how it works. These are +current as of FUSE-T version 1.0.14. #### ModTime update on read As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): -> File access and modification times cannot be set separately as it seems to be an -> issue with the NFS client which always modifies both. Can be reproduced with +> File access and modification times cannot be set separately as it seems to be an +> issue with the NFS client which always modifies both. Can be reproduced with > 'touch -m' and 'touch -a' commands -This means that viewing files with various tools, notably macOS Finder, will cause rlcone -to update the modification time of the file. This may make rclone upload a full new copy -of the file. - +This means that viewing files with various tools, notably macOS Finder, will cause +rlcone to update the modification time of the file. This may make rclone upload a +full new copy of the file. + #### Read Only mounts -When mounting with `--read-only`, attempts to write to files will fail *silently* as -opposed to with a clear warning as in macFUSE. +When mounting with `--read-only`, attempts to write to files will fail *silently* +as opposed to with a clear warning as in macFUSE. + +# Mounting on Linux + +On newer versions of Ubuntu, you may encounter the following error when running +`rclone mount`: + +> NOTICE: mount helper error: fusermount3: mount failed: Permission denied +> CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1 +This may be due to newer [Apparmor](https://wiki.ubuntu.com/AppArmor) restrictions, +which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need to +`sudo apt install apparmor-utils` beforehand). ## Limitations @@ -417,12 +446,14 @@ helper you should symlink rclone binary to `/sbin/mount.rclone` and optionally rclone will detect it and translate command-line arguments appropriately. 
Now you can run classic mounts like this: -``` + +```console mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem ``` or create systemd mount units: -``` + +```ini # /etc/systemd/system/mnt-data.mount [Unit] Description=Mount for /mnt/data @@ -434,7 +465,8 @@ Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone ``` optionally accompanied by systemd automount unit -``` + +```ini # /etc/systemd/system/mnt-data.automount [Unit] Description=AutoMount for /mnt/data @@ -446,7 +478,8 @@ WantedBy=multi-user.target ``` or add in `/etc/fstab` a line like -``` + +```console sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0 ``` @@ -495,8 +528,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -508,16 +543,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -548,6 +589,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -555,6 +597,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -602,13 +645,13 @@ directly to the remote without caching anything on disk. 
This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -618,10 +661,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -704,9 +747,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -720,9 +765,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -760,32 +805,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. 
+```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -797,7 +851,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -807,7 +862,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -885,7 +940,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -896,7 +953,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -914,7 +971,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -939,8 +996,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone mount remote:path /path/to/mountpoint [flags] ``` @@ -1011,7 +1066,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -1039,5 +1094,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md index bcd5277da..f24f8ce35 100644 --- a/docs/content/commands/rclone_move.md +++ b/docs/content/commands/rclone_move.md @@ -40,7 +40,7 @@ the backend supports it. If metadata syncing is required then use the `--metadata` flag. Note that the modification time and metadata for the root directory -will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +will **not** be synced. See for more info. **Important**: Since this can cause data loss, test first with the @@ -48,12 +48,13 @@ for more info. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -86,9 +87,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). ``` rclone move source:path dest:path [flags] @@ -115,7 +114,7 @@ rclone move source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -125,7 +124,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -166,7 +165,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -176,7 +175,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -206,12 +205,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md index 7ae2e66d1..3441f4642 100644 --- a/docs/content/commands/rclone_moveto.md +++ b/docs/content/commands/rclone_moveto.md @@ -19,18 +19,22 @@ like the [move](/commands/rclone_move/) command. So - rclone moveto src dst +```console +rclone moveto src dst +``` where src and dst are rclone paths, either remote:path or /path/to/local or C:\windows\path\if\on\windows. This will: - if src is file - move it to dst, overwriting an existing file if it exists - if src is directory - move it to dst, overwriting existing files if they exist - see move command for full details +```text +if src is file + move it to dst, overwriting an existing file if it exists +if src is directory + move it to dst, overwriting existing files if they exist + see move command for full details +``` This doesn't transfer files that are identical on src and dst, testing by size and modification time or MD5SUM. src will be deleted on @@ -41,12 +45,13 @@ successful transfer. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -79,9 +84,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). 
``` rclone moveto source:path dest:path [flags] @@ -106,7 +109,7 @@ rclone moveto source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -116,7 +119,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -157,7 +160,7 @@ Flags for anything which can copy a file Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -167,7 +170,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -197,12 +200,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md index 08387a1c2..fa9ee1270 100644 --- a/docs/content/commands/rclone_ncdu.md +++ b/docs/content/commands/rclone_ncdu.md @@ -24,41 +24,45 @@ structure as it goes along. You can interact with the user interface using key presses, press '?' to toggle the help on and off. The supported keys are: - ↑,↓ or k,j to Move - →,l to enter - ←,h to return - g toggle graph - c toggle counts - a toggle average size in directory - m toggle modified time - u toggle human-readable format - n,s,C,A,M sort by name,size,count,asize,mtime - d delete file/directory - v select file/directory - V enter visual select mode - D delete selected files/directories - y copy current path to clipboard - Y display current path - ^L refresh screen (fix screen corruption) - r recalculate file sizes - ? to toggle help on and off - ESC to close the menu box - q/^c to quit +```text + ↑,↓ or k,j to Move + →,l to enter + ←,h to return + g toggle graph + c toggle counts + a toggle average size in directory + m toggle modified time + u toggle human-readable format + n,s,C,A,M sort by name,size,count,asize,mtime + d delete file/directory + v select file/directory + V enter visual select mode + D delete selected files/directories + y copy current path to clipboard + Y display current path + ^L refresh screen (fix screen corruption) + r recalculate file sizes + ? 
to toggle help on and off + ESC to close the menu box + q/^c to quit +``` Listed files/directories may be prefixed by a one-character flag, some of them combined with a description in brackets at end of line. These flags have the following meaning: - e means this is an empty directory, i.e. contains no files (but - may contain empty subdirectories) - ~ means this is a directory where some of the files (possibly in - subdirectories) have unknown size, and therefore the directory - size may be underestimated (and average size inaccurate, as it - is average of the files with known sizes). - . means an error occurred while reading a subdirectory, and - therefore the directory size may be underestimated (and average - size inaccurate) - ! means an error occurred while reading this directory +```text +e means this is an empty directory, i.e. contains no files (but + may contain empty subdirectories) +~ means this is a directory where some of the files (possibly in + subdirectories) have unknown size, and therefore the directory + size may be underestimated (and average size inaccurate, as it + is average of the files with known sizes). +. means an error occurred while reading a subdirectory, and + therefore the directory size may be underestimated (and average + size inaccurate) +! means an error occurred while reading this directory +``` This an homage to the [ncdu tool](https://dev.yorhel.nl/ncdu) but for rclone remotes. It is missing lots of features at the moment @@ -71,7 +75,6 @@ For a non-interactive listing of the remote, see the [tree](/commands/rclone_tree/) command. To just get the total size of the remote you can also use the [size](/commands/rclone_size/) command. - ``` rclone ncdu remote:path [flags] ``` @@ -89,7 +92,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -119,12 +122,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_nfsmount.md b/docs/content/commands/rclone_nfsmount.md index 43c349c8f..446eb613b 100644 --- a/docs/content/commands/rclone_nfsmount.md +++ b/docs/content/commands/rclone_nfsmount.md @@ -14,7 +14,7 @@ Mount the remote as file system on a mountpoint. Rclone nfsmount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE. -First set up your remote using `rclone config`. Check it works with `rclone ls` etc. +First set up your remote using `rclone config`. Check it works with `rclone ls` etc. On Linux and macOS, you can run mount in either foreground or background (aka daemon) mode. Mount runs in foreground mode by default. 
Use the `--daemon` flag @@ -29,7 +29,9 @@ mount, waits until success or timeout and exits with appropriate code On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount` is an **empty** **existing** directory: - rclone nfsmount remote:path/to/files /path/to/local/mount +```console +rclone nfsmount remote:path/to/files /path/to/local/mount +``` On Windows you can start a mount in different ways. See [below](#mounting-modes-on-windows) for details. If foreground mount is used interactively from a console window, @@ -39,26 +41,30 @@ used to work with the mount until rclone is interrupted e.g. by pressing Ctrl-C. The following examples will mount to an automatically assigned drive, to specific drive letter `X:`, to path `C:\path\parent\mount` (where parent directory or drive must exist, and mount must **not** exist, -and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), and -the last example will mount as network share `\\cloud\remote` and map it to an +and is not supported when [mounting as a network drive](#mounting-modes-on-windows)), +and the last example will mount as network share `\\cloud\remote` and map it to an automatically assigned drive: - rclone nfsmount remote:path/to/files * - rclone nfsmount remote:path/to/files X: - rclone nfsmount remote:path/to/files C:\path\parent\mount - rclone nfsmount remote:path/to/files \\cloud\remote +```console +rclone nfsmount remote:path/to/files * +rclone nfsmount remote:path/to/files X: +rclone nfsmount remote:path/to/files C:\path\parent\mount +rclone nfsmount remote:path/to/files \\cloud\remote +``` When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount should be automatically stopped. When running in background mode the user will have to stop the mount manually: - # Linux - fusermount -u /path/to/local/mount - #... or on some systems - fusermount3 -u /path/to/local/mount - # OS X or Linux when using nfsmount - umount /path/to/local/mount +```console +# Linux +fusermount -u /path/to/local/mount +#... or on some systems +fusermount3 -u /path/to/local/mount +# OS X or Linux when using nfsmount +umount /path/to/local/mount +``` The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually. @@ -72,7 +78,7 @@ at all, then 1 PiB is set as both the total and the free size. ## Installing on Windows -To run rclone nfsmount on Windows, you will need to +To run `rclone nfsmount on Windows`, you will need to download and install [WinFsp](http://www.secfs.net/winfsp/). [WinFsp](https://github.com/winfsp/winfsp) is an open-source @@ -93,20 +99,22 @@ thumbnails for image and video files on network drives. In most cases, rclone will mount the remote as a normal, fixed disk drive by default. However, you can also choose to mount it as a remote network drive, often described -as a network share. If you mount an rclone remote using the default, fixed drive mode -and experience unexpected program errors, freezes or other issues, consider mounting -as a network drive instead. +as a network share. If you mount an rclone remote using the default, fixed drive +mode and experience unexpected program errors, freezes or other issues, consider +mounting as a network drive instead. When mounting as a fixed disk drive you can either mount to an unused drive letter, or to a path representing a **nonexistent** subdirectory of an **existing** parent directory or drive. 
Using the special value `*` will tell rclone to -automatically assign the next available drive letter, starting with Z: and moving backward. -Examples: +automatically assign the next available drive letter, starting with Z: and moving +backward. Examples: - rclone nfsmount remote:path/to/files * - rclone nfsmount remote:path/to/files X: - rclone nfsmount remote:path/to/files C:\path\parent\mount - rclone nfsmount remote:path/to/files X: +```console +rclone nfsmount remote:path/to/files * +rclone nfsmount remote:path/to/files X: +rclone nfsmount remote:path/to/files C:\path\parent\mount +rclone nfsmount remote:path/to/files X: +``` Option `--volname` can be used to set a custom volume name for the mounted file system. The default is to use the remote name and path. @@ -116,24 +124,28 @@ to your nfsmount command. Mounting to a directory path is not supported in this mode, it is a limitation Windows imposes on junctions, so the remote must always be mounted to a drive letter. - rclone nfsmount remote:path/to/files X: --network-mode +```console +rclone nfsmount remote:path/to/files X: --network-mode +``` -A volume name specified with `--volname` will be used to create the network share path. -A complete UNC path, such as `\\cloud\remote`, optionally with path +A volume name specified with `--volname` will be used to create the network share +path. A complete UNC path, such as `\\cloud\remote`, optionally with path `\\cloud\remote\madeup\path`, will be used as is. Any other string will be used as the share part, after a default prefix `\\server\`. If no volume name is specified then `\\server\share` will be used. -You must make sure the volume name is unique when you are mounting more than one drive, -or else the mount command will fail. The share name will treated as the volume label for -the mapped drive, shown in Windows Explorer etc, while the complete +You must make sure the volume name is unique when you are mounting more than one +drive, or else the mount command will fail. The share name will treated as the +volume label for the mapped drive, shown in Windows Explorer etc, while the complete `\\server\share` will be reported as the remote UNC path by `net use` etc, just like a normal network drive mapping. If you specify a full network share UNC path with `--volname`, this will implicitly set the `--network-mode` option, so the following two examples have same result: - rclone nfsmount remote:path/to/files X: --network-mode - rclone nfsmount remote:path/to/files X: --volname \\server\share +```console +rclone nfsmount remote:path/to/files X: --network-mode +rclone nfsmount remote:path/to/files X: --volname \\server\share +``` You may also specify the network share UNC path as the mountpoint itself. Then rclone will automatically assign a drive letter, same as with `*` and use that as @@ -141,15 +153,16 @@ mountpoint, and instead use the UNC path specified as the volume name, as if it specified with the `--volname` option. This will also implicitly set the `--network-mode` option. This means the following two examples have same result: - rclone nfsmount remote:path/to/files \\cloud\remote - rclone nfsmount remote:path/to/files * --volname \\cloud\remote +```console +rclone nfsmount remote:path/to/files \\cloud\remote +rclone nfsmount remote:path/to/files * --volname \\cloud\remote +``` There is yet another way to enable network mode, and to set the share path, and that is to pass the "native" libfuse/WinFsp option directly: `--fuse-flag --VolumePrefix=\server\share`. 
Note that the path must be with just a single backslash prefix in this case. - *Note:* In previous versions of rclone this was the only supported method. [Read more about drive mapping](https://en.wikipedia.org/wiki/Drive_mapping) @@ -162,11 +175,11 @@ The FUSE emulation layer on Windows must convert between the POSIX-based permission model used in FUSE, and the permission model used in Windows, based on access-control lists (ACL). -The mounted filesystem will normally get three entries in its access-control list (ACL), -representing permissions for the POSIX permission scopes: Owner, group and others. -By default, the owner and group will be taken from the current user, and the built-in -group "Everyone" will be used to represent others. The user/group can be customized -with FUSE options "UserName" and "GroupName", +The mounted filesystem will normally get three entries in its access-control list +(ACL), representing permissions for the POSIX permission scopes: Owner, group and +others. By default, the owner and group will be taken from the current user, and +the built-in group "Everyone" will be used to represent others. The user/group can +be customized with FUSE options "UserName" and "GroupName", e.g. `-o UserName=user123 -o GroupName="Authenticated Users"`. The permissions on each entry will be set according to [options](#options) `--dir-perms` and `--file-perms`, which takes a value in traditional Unix @@ -266,58 +279,74 @@ does not suffer from the same limitations. ## Mounting on macOS -Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), [macFUSE](https://osxfuse.github.io/) -(also known as osxfuse) or [FUSE-T](https://www.fuse-t.org/). macFUSE is a traditional -FUSE driver utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system -which "mounts" via an NFSv4 local server. +Mounting on macOS can be done either via [built-in NFS server](/commands/rclone_serve_nfs/), +[macFUSE](https://osxfuse.github.io/) (also known as osxfuse) or +[FUSE-T](https://www.fuse-t.org/).macFUSE is a traditional FUSE driver utilizing +a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which +"mounts" via an NFSv4 local server. -#### Unicode Normalization +### Unicode Normalization It is highly recommended to keep the default of `--no-unicode-normalization=false` for all `mount` and `serve` commands on macOS. For details, see [vfs-case-sensitivity](https://rclone.org/commands/rclone_mount/#vfs-case-sensitivity). ### NFS mount -This method spins up an NFS server using [serve nfs](/commands/rclone_serve_nfs/) command and mounts -it to the specified mountpoint. If you run this in background mode using |--daemon|, you will need to -send SIGTERM signal to the rclone process using |kill| command to stop the mount. +This method spins up an NFS server using [serve nfs](/commands/rclone_serve_nfs/) +command and mounts it to the specified mountpoint. If you run this in background +mode using |--daemon|, you will need to send SIGTERM signal to the rclone process +using |kill| command to stop the mount. -Note that `--nfs-cache-handle-limit` controls the maximum number of cached file handles stored by the `nfsmount` caching handler. -This should not be set too low or you may experience errors when trying to access files. The default is 1000000, +Note that `--nfs-cache-handle-limit` controls the maximum number of cached file +handles stored by the `nfsmount` caching handler. 
This should not be set too low +or you may experience errors when trying to access files. The default is 1000000, but consider lowering this limit if the server's system resource usage causes problems. ### macFUSE Notes -If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) from -the website, rclone will locate the macFUSE libraries without any further intervention. -If however, macFUSE is installed using the [macports](https://www.macports.org/) package manager, -the following addition steps are required. +If installing macFUSE using [dmg packages](https://github.com/osxfuse/osxfuse/releases) +from the website, rclone will locate the macFUSE libraries without any further intervention. +If however, macFUSE is installed using the [macports](https://www.macports.org/) +package manager, the following addition steps are required. - sudo mkdir /usr/local/lib - cd /usr/local/lib - sudo ln -s /opt/local/lib/libfuse.2.dylib +```console +sudo mkdir /usr/local/lib +cd /usr/local/lib +sudo ln -s /opt/local/lib/libfuse.2.dylib +``` ### FUSE-T Limitations, Caveats, and Notes -There are some limitations, caveats, and notes about how it works. These are current as -of FUSE-T version 1.0.14. +There are some limitations, caveats, and notes about how it works. These are +current as of FUSE-T version 1.0.14. #### ModTime update on read As per the [FUSE-T wiki](https://github.com/macos-fuse-t/fuse-t/wiki#caveats): -> File access and modification times cannot be set separately as it seems to be an -> issue with the NFS client which always modifies both. Can be reproduced with +> File access and modification times cannot be set separately as it seems to be an +> issue with the NFS client which always modifies both. Can be reproduced with > 'touch -m' and 'touch -a' commands -This means that viewing files with various tools, notably macOS Finder, will cause rlcone -to update the modification time of the file. This may make rclone upload a full new copy -of the file. - +This means that viewing files with various tools, notably macOS Finder, will cause +rlcone to update the modification time of the file. This may make rclone upload a +full new copy of the file. + #### Read Only mounts -When mounting with `--read-only`, attempts to write to files will fail *silently* as -opposed to with a clear warning as in macFUSE. +When mounting with `--read-only`, attempts to write to files will fail *silently* +as opposed to with a clear warning as in macFUSE. + +# Mounting on Linux + +On newer versions of Ubuntu, you may encounter the following error when running +`rclone mount`: + +> NOTICE: mount helper error: fusermount3: mount failed: Permission denied +> CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1 +This may be due to newer [Apparmor](https://wiki.ubuntu.com/AppArmor) restrictions, +which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need to +`sudo apt install apparmor-utils` beforehand). ## Limitations @@ -418,12 +447,14 @@ helper you should symlink rclone binary to `/sbin/mount.rclone` and optionally rclone will detect it and translate command-line arguments appropriately. 
Now you can run classic mounts like this: -``` + +```console mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem ``` or create systemd mount units: -``` + +```ini # /etc/systemd/system/mnt-data.mount [Unit] Description=Mount for /mnt/data @@ -435,7 +466,8 @@ Options=rw,_netdev,allow_other,args2env,vfs-cache-mode=writes,config=/etc/rclone ``` optionally accompanied by systemd automount unit -``` + +```ini # /etc/systemd/system/mnt-data.automount [Unit] Description=AutoMount for /mnt/data @@ -447,7 +479,8 @@ WantedBy=multi-user.target ``` or add in `/etc/fstab` a line like -``` + +```console sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0 ``` @@ -496,8 +529,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -509,16 +544,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -549,6 +590,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -556,6 +598,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -603,13 +646,13 @@ directly to the remote without caching anything on disk. 
This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -619,10 +662,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -705,9 +748,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -721,9 +766,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -761,32 +806,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. 
+```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -798,7 +852,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -808,7 +863,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -886,7 +941,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -897,7 +954,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -915,7 +972,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -940,8 +997,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone nfsmount remote:path /path/to/mountpoint [flags] ``` @@ -1017,7 +1072,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -1045,5 +1100,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md index 07f0ddeff..81a3ed99e 100644 --- a/docs/content/commands/rclone_obscure.md +++ b/docs/content/commands/rclone_obscure.md @@ -13,9 +13,8 @@ Obscure password for use in the rclone config file. In the rclone config file, human-readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is **not** a secure way of encrypting these -passwords as rclone can decrypt them - it is to prevent "eyedropping" -- namely someone seeing a password in the rclone config file by -accident. +passwords as rclone can decrypt them - it is to prevent "eyedropping" - +namely someone seeing a password in the rclone config file by accident. Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 @@ -25,7 +24,9 @@ This command can also accept a password through STDIN instead of an argument by passing a hyphen as an argument. This will use the first line of STDIN as the password not including the trailing newline. - echo "secretpassword" | rclone obscure - +```console +echo "secretpassword" | rclone obscure - +``` If there is no data on STDIN to read, rclone obscure will default to obfuscating the hyphen itself. @@ -48,5 +49,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md index c1fded41c..6734a05f1 100644 --- a/docs/content/commands/rclone_purge.md +++ b/docs/content/commands/rclone_purge.md @@ -15,13 +15,13 @@ include/exclude filters - everything will be removed. Use the delete files. To delete empty directories only, use command [rmdir](/commands/rclone_rmdir/) or [rmdirs](/commands/rclone_rmdirs/). -The concurrency of this operation is controlled by the `--checkers` global flag. However, some backends will -implement this command directly, in which case `--checkers` will be ignored. +The concurrency of this operation is controlled by the `--checkers` global flag. +However, some backends will implement this command directly, in which +case `--checkers` will be ignored. **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`-i` flag. - ``` rclone purge remote:path [flags] ``` @@ -39,7 +39,7 @@ See the [global flags page](/flags/) for global options not listed here. Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -47,5 +47,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
+ + diff --git a/docs/content/commands/rclone_rc.md b/docs/content/commands/rclone_rc.md index cf54e8454..5f4d3c0e8 100644 --- a/docs/content/commands/rclone_rc.md +++ b/docs/content/commands/rclone_rc.md @@ -12,8 +12,8 @@ Run a command against a running rclone. This runs a command against a running rclone. Use the `--url` flag to specify an non default URL to connect on. This can be either a -":port" which is taken to mean "http://localhost:port" or a -"host:port" which is taken to mean "http://host:port" +":port" which is taken to mean or a +"host:port" which is taken to mean . A username and password can be passed in with `--user` and `--pass`. @@ -22,10 +22,12 @@ Note that `--rc-addr`, `--rc-user`, `--rc-pass` will be read also for The `--unix-socket` flag can be used to connect over a unix socket like this - # start server on /tmp/my.socket - rclone rcd --rc-addr unix:///tmp/my.socket - # Connect to it - rclone rc --unix-socket /tmp/my.socket core/stats +```sh +# start server on /tmp/my.socket +rclone rcd --rc-addr unix:///tmp/my.socket +# Connect to it +rclone rc --unix-socket /tmp/my.socket core/stats +``` Arguments should be passed in as parameter=value. @@ -40,29 +42,38 @@ options in the form `-o key=value` or `-o key`. It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings. - -o key=value -o key2 +```text +-o key=value -o key2 +``` Will place this in the "opt" value - {"key":"value", "key2","") - +```json +{"key":"value", "key2","") +``` The `-a`/`--arg` option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings. - -a value -a value2 +```text +-a value -a value2 +``` Will place this in the "arg" value - ["value", "value2"] +```json +["value", "value2"] +``` Use `--loopback` to connect to the rclone instance running `rclone rc`. This is very useful for testing commands without having to run an rclone rc server, e.g.: - rclone rc --loopback operations/about fs=/ +```sh +rclone rc --loopback operations/about fs=/ +``` Use `rclone rc` to see a list of all possible commands. @@ -89,5 +100,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md index bc96d3af6..0c44838c7 100644 --- a/docs/content/commands/rclone_rcat.md +++ b/docs/content/commands/rclone_rcat.md @@ -12,8 +12,10 @@ Copies standard input to file on remote. Reads from standard input (stdin) and copies it to a single remote file. - echo "hello world" | rclone rcat remote:path/to/file - ffmpeg - | rclone rcat remote:path/to/file +```console +echo "hello world" | rclone rcat remote:path/to/file +ffmpeg - | rclone rcat remote:path/to/file +``` If the remote file already exists, it will be overwritten. @@ -58,7 +60,7 @@ See the [global flags page](/flags/) for global options not listed here. Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -66,5 +68,10 @@ Important flags useful for most commands ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
+
+
diff --git a/docs/content/commands/rclone_rcd.md b/docs/content/commands/rclone_rcd.md
index 126727b34..ff0a83cbc 100644
--- a/docs/content/commands/rclone_rcd.md
+++ b/docs/content/commands/rclone_rcd.md
@@ -51,6 +51,8 @@ inserts leading and trailing "/" on `--rc-baseurl`, so `--rc-baseurl
 "rclone"`, `--rc-baseurl "/rclone"` and `--rc-baseurl "/rclone/"` are
 all treated identically.
 
+`--rc-disable-zip` may be set to disable the zipping download option.
+
 ### TLS (SSL)
 
 By default this will serve over http. If you want you can serve over
@@ -76,41 +78,42 @@ by `--rc-addr`).
 
 This allows rclone to be a socket-activated service.
 It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.
 
 Socket activation can be tested ad-hoc with the `systemd-socket-activate` command
 
-    systemd-socket-activate -l 8000 -- rclone serve
+```console
+systemd-socket-activate -l 8000 -- rclone serve
+```
 
 This will socket-activate rclone on the first connection to port 8000
 over TCP.
+
 ### Template
 
 `--rc-template` allows a user to specify a custom markup template for HTTP
 and WebDAV serve functions. The server exports the following markup
 to be used within the template to serve pages:
 
-| Parameter | Description |
-| :---------- | :---------- |
-| .Name | The full path of a file/directory. |
-| .Title | Directory listing of .Name |
-| .Sort | The current sort used. This is changeable via ?sort= parameter |
-| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
-| .Order | The current ordering used. This is changeable via ?order= parameter |
-| | Order Options: asc,desc (default asc) |
-| .Query | Currently unused. |
-| .Breadcrumb | Allows for creating a relative navigation |
-|-- .Link | The relative to the root link of the Text. |
-|-- .Text | The Name of the directory. |
-| .Entries | Information about a specific file/directory. |
-|-- .URL | The 'url' of an entry. |
-|-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. |
-|-- .IsDir | Boolean for if an entry is a directory or not. |
-|-- .Size | Size in Bytes of the entry. |
-|-- .ModTime | The UTC timestamp of an entry. |
+| Parameter | Subparameter | Description |
+| :---------- | :----------- | :---------- |
+| .Name | | The full path of a file/directory. |
+| .Title | | Directory listing of '.Name'. |
+| .Sort | | The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst). |
+| .Order | | The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc). |
+| .Query | | Currently unused. |
+| .Breadcrumb | | Allows for creating a relative navigation. |
+| | .Link | The link of the Text relative to the root. |
+| | .Text | The Name of the directory. |
+| .Entries | | Information about a specific file/directory. |
+| | .URL | The url of an entry. |
+| | .Leaf | Currently same as '.URL' but intended to be just the name. |
+| | .IsDir | Boolean for if an entry is a directory or not. |
+| | .Size | Size in bytes of the entry. |
+| | .ModTime | The UTC timestamp of an entry. |
 
-The server also makes the following functions available so that they can be used within the
-template. These functions help extend the options for dynamic rendering of HTML. They can
-be used to render HTML based on specific conditions.
+The server also makes the following functions available so that they can be used
+within the template. These functions help extend the options for dynamic
+rendering of HTML. They can be used to render HTML based on specific conditions.
 
 | Function | Description |
 | :---------- | :---------- |
@@ -127,8 +130,9 @@ You can either use an htpasswd file which can take lots of users, or
 set a single username and password with the `--rc-user` and `--rc-pass` flags.
 
 Alternatively, you can have the reverse proxy manage authentication and use the
-username provided in the configured header with `--user-from-header` (e.g., `--rc---user-from-header=x-remote-user`).
-Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+username provided in the configured header with `--user-from-header` (e.g., `--rc-user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.
 
 If either of the above authentication methods is not configured and client
 certificates are required by the `--client-ca` flag passed to the server, the
@@ -140,9 +144,11 @@ authentication. Bcrypt is recommended.
 
 To create an htpasswd file:
 
-    touch htpasswd
-    htpasswd -B htpasswd user
-    htpasswd -B htpasswd anotherUser
+```console
+touch htpasswd
+htpasswd -B htpasswd user
+htpasswd -B htpasswd anotherUser
+```
 
 The password file can be updated while rclone is running.
 
@@ -150,8 +156,6 @@ Use `--rc-realm` to set the authentication realm.
 
 Use `--rc-salt` to change the password hashing salt from the default.
 
-
-
 ```
 rclone rcd <path to files to serve>* [flags]
 ```
@@ -169,7 +173,7 @@ See the [global flags page](/flags/) for global options not listed here.
 
 Flags to control the Remote Control API
 
-```
+```text
       --rc                          Enable the remote control server
       --rc-addr stringArray         IPaddress:Port or :Port to bind server to (default localhost:5572)
       --rc-allow-origin string      Origin which cross-domain request (CORS) can be executed from
@@ -204,5 +208,10 @@ Flags to control the Remote Control API
 
 ## See Also
 
+
+
+
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
+
+
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index 9eb865ee1..937fbe7a8 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -16,7 +16,6 @@ with option `--rmdirs`) to do that.
 
 To delete a path and any objects in it, use
 [purge](/commands/rclone_purge/) command.
-
 ```
 rclone rmdir remote:path [flags]
 ```
@@ -34,7 +33,7 @@ See the [global flags page](/flags/) for global options not listed here.
 
 Important flags useful for most commands
 
-```
+```text
   -n, --dry-run         Do a trial run with no permanent changes
   -i, --interactive     Enable interactive mode
   -v, --verbose count   Print lots more stuff (repeat for more)
@@ -42,5 +41,10 @@ Important flags useful for most commands
 
 ## See Also
 
+
+
+
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
+
+
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
index b64d3a616..045ce718a 100644
--- a/docs/content/commands/rclone_rmdirs.md
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -29,7 +29,6 @@ if you have thousands of empty directories consider increasing this number.
 
 To delete a path and any objects in it, use the
 [purge](/commands/rclone_purge/) command.
-
 ```
 rclone rmdirs remote:path [flags]
 ```
@@ -48,7 +47,7 @@ See the [global flags page](/flags/) for global options not listed here.
 
 Important flags useful for most commands
 
-```
+```text
   -n, --dry-run         Do a trial run with no permanent changes
   -i, --interactive     Enable interactive mode
   -v, --verbose count   Print lots more stuff (repeat for more)
@@ -56,5 +55,10 @@ Important flags useful for most commands
 
 ## See Also
 
+
+
+
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
+
+
diff --git a/docs/content/commands/rclone_selfupdate.md b/docs/content/commands/rclone_selfupdate.md
index b32a1ed85..a7c9b0f05 100644
--- a/docs/content/commands/rclone_selfupdate.md
+++ b/docs/content/commands/rclone_selfupdate.md
@@ -57,9 +57,8 @@ command will rename the old executable to 'rclone.old.exe' upon success.
 
 Please note that this command was not available before rclone version 1.55.
 If it fails for you with the message `unknown command "selfupdate"` then
-you will need to update manually following the install instructions located
-at https://rclone.org/install/
-
+you will need to update manually following the
+[install documentation](https://rclone.org/install/).
 
 ```
 rclone selfupdate [flags]
 ```
@@ -81,5 +80,10 @@ See the [global flags page](/flags/) for global options not listed here.
 
 ## See Also
 
+
+
+
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
+
+
diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md
index df5c9a9de..ae6f2f192 100644
--- a/docs/content/commands/rclone_serve.md
+++ b/docs/content/commands/rclone_serve.md
@@ -13,7 +13,20 @@ Serve a remote over a protocol.
 Serve a remote over a given protocol. Requires the use of a subcommand
 to specify the protocol, e.g.
 
-    rclone serve http remote:
+```console
+rclone serve http remote:
+```
+
+When the `--metadata` flag is enabled, the following metadata fields will be
+provided as headers:
+
+- `content-disposition`
+- `cache-control`
+- `content-language`
+- `content-encoding`
+
+Note: The availability of these fields depends on whether the remote supports
+metadata.
 
 Each subcommand has its own options which you can see in their help.
 
@@ -32,6 +45,9 @@ See the [global flags page](/flags/) for global options not listed here.
 
 ## See Also
 
+
+
+
 * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends.
 * [rclone serve dlna](/commands/rclone_serve_dlna/) - Serve remote:path over DLNA
 * [rclone serve docker](/commands/rclone_serve_docker/) - Serve any remote on docker's volume plugin API.
@@ -43,3 +59,5 @@ See the [global flags page](/flags/) for global options not listed here.
 * [rclone serve sftp](/commands/rclone_serve_sftp/) - Serve the remote over SFTP.
 * [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over WebDAV.
+
+
diff --git a/docs/content/commands/rclone_serve_dlna.md b/docs/content/commands/rclone_serve_dlna.md
index a0c8b8d4d..54873a53f 100644
--- a/docs/content/commands/rclone_serve_dlna.md
+++ b/docs/content/commands/rclone_serve_dlna.md
@@ -58,8 +58,10 @@ directory should be considered up to date and not refreshed from the
 backend. Changes made through the VFS will appear immediately or
 invalidate the cache.
 
+```text
     --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes.
Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -71,16 +73,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -111,6 +119,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -118,6 +127,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -165,13 +175,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -181,10 +191,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -267,9 +277,11 @@ read, at the cost of an increased number of requests. 
These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -283,9 +295,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -323,32 +335,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -360,7 +381,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. 
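+
+For example, a minimal sketch of enabling symlink translation on a mount (the
+remote name and mountpoint here are placeholders):
+
+```console
+rclone mount remote:path /mnt/remote --vfs-links
+```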
@@ -370,7 +392,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -448,7 +470,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -459,7 +483,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -477,7 +501,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -502,8 +526,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone serve dlna remote:path [flags] ``` @@ -558,7 +580,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -586,5 +608,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. + + diff --git a/docs/content/commands/rclone_serve_docker.md b/docs/content/commands/rclone_serve_docker.md index 5f4536ba4..c0edbea83 100644 --- a/docs/content/commands/rclone_serve_docker.md +++ b/docs/content/commands/rclone_serve_docker.md @@ -20,7 +20,8 @@ docker daemon and runs the corresponding code when necessary. Docker plugins can run as a managed plugin under control of the docker daemon or as an independent native service. For testing, you can just run it directly from the command line, for example: -``` + +```console sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv ``` @@ -70,8 +71,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -83,16 +86,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. 
Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -123,6 +132,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -130,6 +140,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -177,13 +188,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -193,10 +204,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -279,9 +290,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -295,9 +308,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. 
With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -335,32 +348,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -372,7 +394,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -382,7 +405,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -460,7 +483,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. 
+```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -471,7 +496,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -489,7 +514,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -514,8 +539,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone serve docker [flags] ``` @@ -591,7 +614,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -619,5 +642,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. + + diff --git a/docs/content/commands/rclone_serve_ftp.md b/docs/content/commands/rclone_serve_ftp.md index 219b1dd79..4f2197426 100644 --- a/docs/content/commands/rclone_serve_ftp.md +++ b/docs/content/commands/rclone_serve_ftp.md @@ -51,8 +51,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -64,16 +66,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -104,6 +112,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. 
--vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -111,6 +120,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -158,13 +168,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -174,10 +184,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -260,9 +270,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -276,9 +288,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -316,32 +328,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. 
+```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -353,7 +374,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -363,7 +385,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -441,7 +463,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -452,7 +476,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. 
@@ -470,7 +494,7 @@ Note that some backends won't create metadata unless you pass in the
 For example, using `rclone mount` with `--metadata
 --vfs-metadata-extension .metadata` we get
 
-```
+```console
 $ ls -l /mnt/
 total 1048577
 -rw-rw-r-- 1 user user 1073741824 Mar  3 16:03 1G
@@ -518,41 +542,43 @@ options - it is the job of the proxy program to make a complete
 config.
 
 This config generated must have this extra parameter
+
 - `_root` - root to use for the backend
 
 And it may have this parameter
+
 - `_obscure` - comma separated strings for parameters to obscure
 
 If password authentication was used by the client, input to the proxy
 process (on STDIN) would look similar to this:
 
-```
+```json
 {
-    "user": "me",
-    "pass": "mypassword"
+  "user": "me",
+  "pass": "mypassword"
 }
 ```
 
 If public-key authentication was used by the client, input to the proxy
 process (on STDIN) would look similar to this:
 
-```
+```json
 {
-    "user": "me",
-    "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
+  "user": "me",
+  "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf"
 }
 ```
 
 And as an example return this on STDOUT
 
-```
+```json
 {
-    "type": "sftp",
-    "_root": "",
-    "_obscure": "pass",
-    "user": "me",
-    "pass": "mypassword",
-    "host": "sftp.example.com"
+  "type": "sftp",
+  "_root": "",
+  "_obscure": "pass",
+  "user": "me",
+  "pass": "mypassword",
+  "host": "sftp.example.com"
 }
 ```
 
@@ -574,9 +600,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m
 before it takes effect.
 
 This can be used to build general purpose proxies to any kind of
-backend that rclone supports.
-
-
+backend that rclone supports.
 
 ```
 rclone serve ftp remote:path [flags]
 ```
@@ -635,7 +659,7 @@ See the [global flags page](/flags/) for global options not listed here.
 
 Flags for filtering directory listings
 
-```
+```text
       --delete-excluded             Delete files on dest excluded from sync
       --exclude stringArray         Exclude files matching pattern
       --exclude-from stringArray    Read file exclude patterns from file (use - to read from stdin)
@@ -663,5 +687,10 @@ Flags for filtering directory listings
 
 ## See Also
 
+
+
+
 * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+
+
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
index 36c3de07e..7d31e05ed 100644
--- a/docs/content/commands/rclone_serve_http.md
+++ b/docs/content/commands/rclone_serve_http.md
@@ -53,6 +53,8 @@ inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`,
 `--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated
 identically.
 
+`--disable-zip` may be set to disable the zipping download option.
+
 ### TLS (SSL)
 
 By default this will serve over http. If you want you can serve over
@@ -78,41 +80,42 @@ by `--addr`).
 
 This allows rclone to be a socket-activated service.
 It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.
 
 Socket activation can be tested ad-hoc with the `systemd-socket-activate` command
 
-    systemd-socket-activate -l 8000 -- rclone serve
+```console
+systemd-socket-activate -l 8000 -- rclone serve
+```
 
 This will socket-activate rclone on the first connection to port 8000
 over TCP.
+
 ### Template
 
 `--template` allows a user to specify a custom markup template for HTTP
 and WebDAV serve functions.
 The server exports the following markup
 to be used within the template to serve pages:
 
-| Parameter | Description |
-| :---------- | :---------- |
-| .Name | The full path of a file/directory. |
-| .Title | Directory listing of .Name |
-| .Sort | The current sort used. This is changeable via ?sort= parameter |
-| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
-| .Order | The current ordering used. This is changeable via ?order= parameter |
-| | Order Options: asc,desc (default asc) |
-| .Query | Currently unused. |
-| .Breadcrumb | Allows for creating a relative navigation |
-|-- .Link | The relative to the root link of the Text. |
-|-- .Text | The Name of the directory. |
-| .Entries | Information about a specific file/directory. |
-|-- .URL | The 'url' of an entry. |
-|-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. |
-|-- .IsDir | Boolean for if an entry is a directory or not. |
-|-- .Size | Size in Bytes of the entry. |
-|-- .ModTime | The UTC timestamp of an entry. |
+| Parameter | Subparameter | Description |
+| :---------- | :----------- | :---------- |
+| .Name | | The full path of a file/directory. |
+| .Title | | Directory listing of '.Name'. |
+| .Sort | | The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst). |
+| .Order | | The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc). |
+| .Query | | Currently unused. |
+| .Breadcrumb | | Allows for creating a relative navigation. |
+| | .Link | The link of the Text relative to the root. |
+| | .Text | The Name of the directory. |
+| .Entries | | Information about a specific file/directory. |
+| | .URL | The url of an entry. |
+| | .Leaf | Currently same as '.URL' but intended to be just the name. |
+| | .IsDir | Boolean for if an entry is a directory or not. |
+| | .Size | Size in bytes of the entry. |
+| | .ModTime | The UTC timestamp of an entry. |
 
-The server also makes the following functions available so that they can be used within the
-template. These functions help extend the options for dynamic rendering of HTML. They can
-be used to render HTML based on specific conditions.
+The server also makes the following functions available so that they can be used
+within the template. These functions help extend the options for dynamic
+rendering of HTML. They can be used to render HTML based on specific conditions.
 
 | Function | Description |
 | :---------- | :---------- |
@@ -129,8 +132,9 @@ You can either use an htpasswd file which can take lots of users, or
 set a single username and password with the `--user` and `--pass` flags.
 
 Alternatively, you can have the reverse proxy manage authentication and use the
-username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`).
-Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.
 
 If either of the above authentication methods is not configured and client
 certificates are required by the `--client-ca` flag passed to the server, the
@@ -142,9 +146,11 @@ authentication. Bcrypt is recommended.
To create an htpasswd file: - touch htpasswd - htpasswd -B htpasswd user - htpasswd -B htpasswd anotherUser +```console +touch htpasswd +htpasswd -B htpasswd user +htpasswd -B htpasswd anotherUser +``` The password file can be updated while rclone is running. @@ -173,8 +179,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -186,16 +194,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -226,6 +240,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -233,6 +248,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -280,13 +296,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -296,10 +312,10 @@ write will be a lot more compatible, but uses the minimal disk space. 
These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -382,9 +398,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -398,9 +416,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -438,32 +456,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. 
So a @@ -475,7 +502,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -485,7 +513,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -563,7 +591,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -574,7 +604,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -592,7 +622,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -640,41 +670,43 @@ options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter + - `_root` - root to use for the backend And it may have this parameter + - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT -``` +```json { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } ``` @@ -696,9 +728,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m before it takes effect. This can be used to build general purpose proxies to any kind of -backend that rclone supports. - - +backend that rclone supports. 
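+
+As an illustration, a minimal auth proxy could be a shell script along these
+lines. This is a deliberately naive sketch: it ignores the supplied
+credentials and always returns the same backend config, where a real proxy
+would parse and validate the JSON on stdin:
+
+```sh
+#!/bin/sh
+# Consume the JSON request rclone writes to stdin
+# (assumes rclone closes stdin after writing the request).
+cat >/dev/null
+# Emit a fixed sftp backend config on stdout.
+cat <<'EOF'
+{
+  "type": "sftp",
+  "_root": "",
+  "_obscure": "pass",
+  "user": "me",
+  "pass": "mypassword",
+  "host": "sftp.example.com"
+}
+EOF
+```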
 ```
 rclone serve http remote:path [flags]
 ```
 
@@ -715,6 +745,7 @@ rclone serve http remote:path [flags]
       --client-ca string          Client certificate authority to verify clients with
       --dir-cache-time Duration   Time to cache directory entries for (default 5m0s)
       --dir-perms FileMode        Directory permissions (default 777)
+      --disable-zip               Disable zip download of directories
      --file-perms FileMode       File permissions (default 666)
       --gid uint32                Override the gid field set by the filesystem (not supported on Windows) (default 1000)
  -h, --help                       help for http
@@ -767,7 +798,7 @@ See the [global flags page](/flags/) for global options not listed here.
 
 Flags for filtering directory listings
 
-```
+```text
       --delete-excluded             Delete files on dest excluded from sync
       --exclude stringArray         Exclude files matching pattern
       --exclude-from stringArray    Read file exclude patterns from file (use - to read from stdin)
@@ -795,5 +826,10 @@ Flags for filtering directory listings
 
 ## See Also
 
+
+
+
 * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+
+
diff --git a/docs/content/commands/rclone_serve_nfs.md b/docs/content/commands/rclone_serve_nfs.md
index 7f5b88b37..a3930cde0 100644
--- a/docs/content/commands/rclone_serve_nfs.md
+++ b/docs/content/commands/rclone_serve_nfs.md
@@ -12,7 +12,7 @@ Serve the remote as an NFS mount
 ## Synopsis
 
 Create an NFS server that serves the given remote over the network.
- 
+
 This implements an NFSv3 server to serve any rclone remote via NFS.
 
 The primary purpose for this command is to enable the [mount
@@ -66,12 +66,16 @@ cache.
 
 To serve NFS over the network, use the following command:
 
-    rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+```console
+rclone serve nfs remote: --addr 0.0.0.0:$PORT --vfs-cache-mode=full
+```
 
 This specifies a port that can be used in the mount command. To mount the
 server under Linux/macOS, use the following command:
- 
-    mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
+
+```console
+mount -t nfs -o port=$PORT,mountport=$PORT,tcp $HOSTNAME:/ path/to/mountpoint
+```
 
 Where `$PORT` is the same port number used in the `serve nfs` command
 and `$HOSTNAME` is the network address of the machine that `serve nfs`
@@ -106,8 +110,10 @@ directory should be considered up to date and not refreshed from the
 backend. Changes made through the VFS will appear immediately or
 invalidate the cache.
 
+```text
     --dir-cache-time duration   Time to cache directory entries for (default 5m0s)
     --poll-interval duration    Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+```
 
 However, changes made directly on the cloud storage by the web
 interface or a different copy of rclone will only be picked up once
@@ -119,16 +125,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all
 directory caches, regardless of how old they are. Assuming only one
 rclone instance is running, you can reset the cache like this:
 
-    kill -SIGHUP $(pidof rclone)
+```console
+kill -SIGHUP $(pidof rclone)
+```
 
 If you configure rclone with a [remote control](/rc) then you can use
 rclone rc to flush the whole directory cache:
 
-    rclone rc vfs/forget
+```console
+rclone rc vfs/forget
+```
 
 Or individual files or directories:
 
-    rclone rc vfs/forget file=path/to/file dir=path/to/dir
+```console
+rclone rc vfs/forget file=path/to/file dir=path/to/dir
+```
 
 ## VFS File Buffering
 
@@ -159,6 +171,7 @@ write simultaneously to a file. See below for more details.
Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -166,6 +179,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -213,13 +227,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -229,10 +243,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -315,9 +329,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -331,9 +347,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. 
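+
+For example, to serve NFS reading in 64M chunks and capping chunk growth at
+512M (values chosen purely for illustration):
+
+```console
+rclone serve nfs remote: --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 512M
+```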
@@ -371,32 +387,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -408,7 +433,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -418,7 +444,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -496,7 +522,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -507,7 +535,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. 
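For example, a sketch of combining it with caching, assuming the flag described here is `--vfs-used-is-size` from the VFS options (remote name illustrative):

```console
rclone serve nfs remote: --vfs-cache-mode full --vfs-used-is-size
```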
@@ -525,7 +553,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -550,8 +578,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone serve nfs remote:path [flags] ``` @@ -605,7 +631,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -633,5 +659,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. + + diff --git a/docs/content/commands/rclone_serve_restic.md b/docs/content/commands/rclone_serve_restic.md index 6bc94f3d8..9815773b3 100644 --- a/docs/content/commands/rclone_serve_restic.md +++ b/docs/content/commands/rclone_serve_restic.md @@ -22,7 +22,7 @@ The server will log errors. Use -v to see access logs. `--bwlimit` will be respected for file transfers. Use `--stats` to control the stats printing. -## Setting up rclone for use by restic ### +## Setting up rclone for use by restic First [set up a remote for your chosen cloud provider](/docs/#configure). @@ -33,7 +33,9 @@ following instructions. Now start the rclone restic server - rclone serve restic -v remote:backup +```console +rclone serve restic -v remote:backup +``` Where you can replace "backup" in the above by whatever path in the remote you wish to use. @@ -47,7 +49,7 @@ Adding `--cache-objects=false` will cause rclone to stop caching objects returned from the List call. Caching is normally desirable as it speeds up downloading objects, saves transactions and uses very little memory. -## Setting up restic to use rclone ### +## Setting up restic to use rclone Now you can [follow the restic instructions](http://restic.readthedocs.io/en/latest/030_preparing_a_new_repo.html#rest-server) @@ -61,33 +63,38 @@ the URL for the REST server. For example: - $ export RESTIC_REPOSITORY=rest:http://localhost:8080/ - $ export RESTIC_PASSWORD=yourpassword - $ restic init - created restic backend 8b1a4b56ae at rest:http://localhost:8080/ +```console +$ export RESTIC_REPOSITORY=rest:http://localhost:8080/ +$ export RESTIC_PASSWORD=yourpassword +$ restic init +created restic backend 8b1a4b56ae at rest:http://localhost:8080/ - Please note that knowledge of your password is required to access - the repository. Losing your password means that your data is - irrecoverably lost. - $ restic backup /path/to/files/to/backup - scan [/path/to/files/to/backup] - scanned 189 directories, 312 files in 0:00 - [0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00 - duration: 0:00 - snapshot 45c8fdd8 saved +Please note that knowledge of your password is required to access +the repository. Losing your password means that your data is +irrecoverably lost. 
+$ restic backup /path/to/files/to/backup
+scan [/path/to/files/to/backup]
+scanned 189 directories, 312 files in 0:00
+[0:00] 100.00% 38.128 MiB / 38.128 MiB 501 / 501 items 0 errors ETA 0:00
+duration: 0:00
+snapshot 45c8fdd8 saved

-### Multiple repositories ####
+```
+
+### Multiple repositories

 Note that you can use the endpoint to host multiple repositories. Do
 this by adding a directory name or path after the URL. Note that
 these **must** end with /. E.g.

-    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
-    # backup user1 stuff
-    $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
-    # backup user2 stuff
+```console
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
+# backup user1 stuff
+$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
+# backup user2 stuff
+```

-### Private repositories ####
+### Private repositories

 The `--private-repos` flag can be used to limit users to repositories starting
 with a path of `/<username>/`.
@@ -123,6 +130,8 @@ inserts leading and trailing "/" on `--baseurl`, so
 `--baseurl "rclone"`, `--baseurl "/rclone"` and `--baseurl "/rclone/"` are
 all treated identically.

+`--disable-zip` may be set to disable the zipping download option.
+
 ### TLS (SSL)

 By default this will serve over http. If you want you can serve over
@@ -148,13 +157,16 @@ by `--addr`).

 This allows rclone to be a socket-activated service.
 It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.

 Socket activation can be tested ad-hoc with the `systemd-socket-activate` command

-    systemd-socket-activate -l 8000 -- rclone serve
+```console
+systemd-socket-activate -l 8000 -- rclone serve
+```

 This will socket-activate rclone on the first connection to port 8000 over TCP.
+
 ### Authentication

 By default this will serve files without needing a login.

 You can either use an htpasswd file which can take lots of users, or
 set a single username and password with the `--user` and `--pass` flags.

 Alternatively, you can have the reverse proxy manage authentication and use the
-username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`).
-Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access.
+username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`).
+Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration
+may lead to unauthorized access.

 If either of the above authentication methods is not configured and client
 certificates are required by the `--client-ca` flag passed to the server, the
@@ -176,9 +189,11 @@ authentication.

 Bcrypt is recommended.

 To create an htpasswd file:

-    touch htpasswd
-    htpasswd -B htpasswd user
-    htpasswd -B htpasswd anotherUser
+```console
+touch htpasswd
+htpasswd -B htpasswd user
+htpasswd -B htpasswd anotherUser
+```

 The password file can be updated while rclone is running.

 Use `--realm` to set the authentication realm.

@@ -186,8 +201,6 @@ Use `--realm` to set the authentication realm.

 Use `--salt` to change the password hashing salt from the default.

-
-
 ```
 rclone serve restic remote:path [flags]
 ```
@@ -222,5 +235,10 @@ See the [global flags page](/flags/) for global options not listed here.

 ## See Also

+
+
+
 * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+ + diff --git a/docs/content/commands/rclone_serve_s3.md b/docs/content/commands/rclone_serve_s3.md index 21d72f4e6..e3ae8618a 100644 --- a/docs/content/commands/rclone_serve_s3.md +++ b/docs/content/commands/rclone_serve_s3.md @@ -46,20 +46,20 @@ cause problems for S3 clients which rely on the Etag being the MD5. For a simple set up, to serve `remote:path` over s3, run the server like this: -``` +```console rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path ``` For example, to use a simple folder in the filesystem, run the server with a command like this: -``` +```console rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder ``` The `rclone.conf` for the server could look like this: -``` +```ini [local] type = local ``` @@ -72,7 +72,7 @@ will be visible as a warning in the logs. But it will run nonetheless. This will be compatible with an rclone (client) remote configuration which is defined like this: -``` +```ini [serves3] type = s3 provider = Rclone @@ -129,21 +129,21 @@ metadata which will be set as the modification time of the file. `serve s3` currently supports the following operations. - Bucket - - `ListBuckets` - - `CreateBucket` - - `DeleteBucket` + - `ListBuckets` + - `CreateBucket` + - `DeleteBucket` - Object - - `HeadObject` - - `ListObjects` - - `GetObject` - - `PutObject` - - `DeleteObject` - - `DeleteObjects` - - `CreateMultipartUpload` - - `CompleteMultipartUpload` - - `AbortMultipartUpload` - - `CopyObject` - - `UploadPart` + - `HeadObject` + - `ListObjects` + - `GetObject` + - `PutObject` + - `DeleteObject` + - `DeleteObjects` + - `CreateMultipartUpload` + - `CompleteMultipartUpload` + - `AbortMultipartUpload` + - `CopyObject` + - `UploadPart` Other operations will return error `Unimplemented`. @@ -155,8 +155,9 @@ You can either use an htpasswd file which can take lots of users, or set a single username and password with the `--user` and `--pass` flags. Alternatively, you can have the reverse proxy manage authentication and use the -username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`). -Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. +username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`). +Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration +may lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the `--client-ca` flag passed to the server, the @@ -168,9 +169,11 @@ authentication. Bcrypt is recommended. To create an htpasswd file: - touch htpasswd - htpasswd -B htpasswd user - htpasswd -B htpasswd anotherUser +```console +touch htpasswd +htpasswd -B htpasswd user +htpasswd -B htpasswd anotherUser +``` The password file can be updated while rclone is running. @@ -209,6 +212,8 @@ inserts leading and trailing "/" on `--baseurl`, so `--baseurl "rclone"`, `--baseurl "/rclone"` and `--baseurl "/rclone/"` are all treated identically. +`--disable-zip` may be set to disable the zipping download option. + ### TLS (SSL) By default this will serve over http. If you want you can serve over @@ -234,13 +239,16 @@ by `--addr`). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. 
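A minimal sketch of such unit files (the unit names, binary path and port are hypothetical; the two files are shown together for brevity):

```ini
# rclone-s3.socket (hypothetical) -- systemd listens on the port
[Socket]
ListenStream=8080

[Install]
WantedBy=sockets.target

# rclone-s3.service (hypothetical) -- started on the first connection,
# inheriting the listening socket from systemd
[Service]
ExecStart=/usr/bin/rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
```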
Socket activation can be tested ad-hoc with the `systemd-socket-activate`command - systemd-socket-activate -l 8000 -- rclone serve +```console +systemd-socket-activate -l 8000 -- rclone serve +``` This will socket-activate rclone on the first connection to port 8000 over TCP. + ## VFS - Virtual File System This command uses the VFS layer. This adapts the cloud storage objects @@ -262,8 +270,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -275,16 +285,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -315,6 +331,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -322,6 +339,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -369,13 +387,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -385,10 +403,10 @@ write will be a lot more compatible, but uses the minimal disk space. 
These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -471,9 +489,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -487,9 +507,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -527,32 +547,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. 
So a @@ -564,7 +593,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -574,7 +604,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -652,7 +682,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -663,7 +695,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -681,7 +713,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -706,8 +738,6 @@ If the file has no metadata it will be returned as `{}` and if there is an error reading the metadata the error will be returned as `{"error":"error string"}`. - - ``` rclone serve s3 remote:path [flags] ``` @@ -778,7 +808,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -806,5 +836,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. + + diff --git a/docs/content/commands/rclone_serve_sftp.md b/docs/content/commands/rclone_serve_sftp.md index 2d2c6974d..60299de99 100644 --- a/docs/content/commands/rclone_serve_sftp.md +++ b/docs/content/commands/rclone_serve_sftp.md @@ -46,11 +46,13 @@ reachable externally then supply `--addr :2022` for example. This also supports being run with socket activation, in which case it will listen on the first passed FD. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. 
Socket activation can be tested ad-hoc with the `systemd-socket-activate`command: - systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/ +```console +systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/ +``` This will socket-activate rclone on the first connection to port 2222 over TCP. @@ -60,7 +62,9 @@ sftp backend, but it may not be with other SFTP clients. If `--stdio` is specified, rclone will serve SFTP over stdio, which can be used with sshd via ~/.ssh/authorized_keys, for example: - restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ... +```text +restrict,command="rclone serve sftp --stdio ./photos" ssh-rsa ... +``` On the client you need to set `--transfers 1` when using `--stdio`. Otherwise multiple instances of the rclone server are started by OpenSSH @@ -94,8 +98,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. +```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -107,16 +113,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -147,6 +159,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -154,6 +167,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -201,13 +215,13 @@ directly to the remote without caching anything on disk. 
This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -217,10 +231,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -303,9 +317,11 @@ read, at the cost of an increased number of requests. These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -319,9 +335,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -359,32 +375,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. 
+```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -396,7 +421,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. @@ -406,7 +432,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -484,7 +510,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -495,7 +523,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -513,7 +541,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -561,41 +589,43 @@ options - it is the job of the proxy program to make a complete config. 
This config generated must have this extra parameter + - `_root` - root to use for the backend And it may have this parameter + - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT -``` +```json { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } ``` @@ -617,9 +647,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m before it takes effect. This can be used to build general purpose proxies to any kind of -backend that rclone supports. - - +backend that rclone supports. ``` rclone serve sftp remote:path [flags] @@ -678,7 +706,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -706,5 +734,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. + + diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md index 5da838fa3..5821df0da 100644 --- a/docs/content/commands/rclone_serve_webdav.md +++ b/docs/content/commands/rclone_serve_webdav.md @@ -16,7 +16,7 @@ browser, or you can make a remote of type WebDAV to read and write it. ## WebDAV options -### --etag-hash +### --etag-hash This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object. @@ -28,39 +28,53 @@ to see the full list. ## Access WebDAV on Windows -WebDAV shared folder can be mapped as a drive on Windows, however the default settings prevent it. -Windows will fail to connect to the server using insecure Basic authentication. -It will not even display any login dialog. Windows requires SSL / HTTPS connection to be used with Basic. -If you try to connect via Add Network Location Wizard you will get the following error: +WebDAV shared folder can be mapped as a drive on Windows, however the default +settings prevent it. Windows will fail to connect to the server using insecure +Basic authentication. It will not even display any login dialog. Windows +requires SSL / HTTPS connection to be used with Basic. If you try to connect +via Add Network Location Wizard you will get the following error: "The folder you entered does not appear to be valid. Please choose another". -However, you still can connect if you set the following registry key on a client machine: -HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel to 2. 
-The BasicAuthLevel can be set to the following values:
-    0 - Basic authentication disabled
-    1 - Basic authentication enabled for SSL connections only
-    2 - Basic authentication enabled for SSL connections and for non-SSL connections
+However, you can still connect if you set the following registry key on a
+client machine:
+`HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters\BasicAuthLevel`
+to 2. The BasicAuthLevel can be set to the following values:
+
+```text
+0 - Basic authentication disabled
+1 - Basic authentication enabled for SSL connections only
+2 - Basic authentication enabled for SSL connections and for non-SSL connections
+```
+
 If required, increase the FileSizeLimitInBytes to a higher value.
 Navigate to the Services interface, then restart the WebClient service.

 ## Access Office applications on WebDAV

-Navigate to following registry HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet
+Navigate to the following registry key
+`HKEY_CURRENT_USER\Software\Microsoft\Office\[14.0/15.0/16.0]\Common\Internet`
 Create a new DWORD BasicAuthLevel with value 2.

-    0 - Basic authentication disabled
-    1 - Basic authentication enabled for SSL connections only
-    2 - Basic authentication enabled for SSL and for non-SSL connections
-https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint
+```text
+0 - Basic authentication disabled
+1 - Basic authentication enabled for SSL connections only
+2 - Basic authentication enabled for SSL and for non-SSL connections
+```
+
+<https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint>

 ## Serving over a unix socket

 You can serve the webdav on a unix socket like this:

-    rclone serve webdav --addr unix:///tmp/my.socket remote:path
+```console
+rclone serve webdav --addr unix:///tmp/my.socket remote:path
+```

 and connect to it like this using rclone and the webdav backend:

-    rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
+```console
+rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
+```

 Note that there is no authentication on http protocol - this is expected to be
 done by the permissions on the socket.
@@ -96,6 +110,8 @@ inserts leading and trailing "/" on `--baseurl`, so
 `--baseurl "rclone"`, `--baseurl "/rclone"` and `--baseurl "/rclone/"` are
 all treated identically.

+`--disable-zip` may be set to disable the zipping download option.
+
 ### TLS (SSL)

 By default this will serve over http. If you want you can serve over
@@ -121,41 +137,42 @@ by `--addr`).

 This allows rclone to be a socket-activated service.
 It can be configured with .socket and .service unit files as described in
-https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html
+<https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html>.

 Socket activation can be tested ad-hoc with the `systemd-socket-activate` command

-    systemd-socket-activate -l 8000 -- rclone serve
+```console
+systemd-socket-activate -l 8000 -- rclone serve
+```

 This will socket-activate rclone on the first connection to port 8000 over TCP.
+
 ### Template

 `--template` allows a user to specify a custom markup template for HTTP
 and WebDAV serve functions. The server exports the following markup
 to be used within the template to serve pages:

-| Parameter | Description |
-| :---------- | :---------- |
-| .Name | The full path of a file/directory. |
-| .Title | Directory listing of .Name |
-| .Sort | The current sort used. This is changeable via ?sort= parameter |
-| | Sort Options: namedirfirst,name,size,time (default namedirfirst) |
-| .Order | The current ordering used.
This is changeable via ?order= parameter | -| | Order Options: asc,desc (default asc) | -| .Query | Currently unused. | -| .Breadcrumb | Allows for creating a relative navigation | -|-- .Link | The relative to the root link of the Text. | -|-- .Text | The Name of the directory. | -| .Entries | Information about a specific file/directory. | -|-- .URL | The 'url' of an entry. | -|-- .Leaf | Currently same as 'URL' but intended to be 'just' the name. | -|-- .IsDir | Boolean for if an entry is a directory or not. | -|-- .Size | Size in Bytes of the entry. | -|-- .ModTime | The UTC timestamp of an entry. | +| Parameter | Subparameter | Description | +| :---------- | :----------- | :---------- | +| .Name | | The full path of a file/directory. | +| .Title | | Directory listing of '.Name'. | +| .Sort | | The current sort used. This is changeable via '?sort=' parameter. Possible values: namedirfirst, name, size, time (default namedirfirst). | +| .Order | | The current ordering used. This is changeable via '?order=' parameter. Possible values: asc, desc (default asc). | +| .Query | | Currently unused. | +| .Breadcrumb | | Allows for creating a relative navigation. | +| | .Link | The link of the Text relative to the root. | +| | .Text | The Name of the directory. | +| .Entries | | Information about a specific file/directory. | +| | .URL | The url of an entry. | +| | .Leaf | Currently same as '.URL' but intended to be just the name. | +| | .IsDir | Boolean for if an entry is a directory or not. | +| | .Size | Size in bytes of the entry. | +| | .ModTime | The UTC timestamp of an entry. | -The server also makes the following functions available so that they can be used within the -template. These functions help extend the options for dynamic rendering of HTML. They can -be used to render HTML based on specific conditions. +The server also makes the following functions available so that they can be used +within the template. These functions help extend the options for dynamic +rendering of HTML. They can be used to render HTML based on specific conditions. | Function | Description | | :---------- | :---------- | @@ -172,8 +189,9 @@ You can either use an htpasswd file which can take lots of users, or set a single username and password with the `--user` and `--pass` flags. Alternatively, you can have the reverse proxy manage authentication and use the -username provided in the configured header with `--user-from-header` (e.g., `----user-from-header=x-remote-user`). -Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. +username provided in the configured header with `--user-from-header` (e.g., `--user-from-header=x-remote-user`). +Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration +may lead to unauthorized access. If either of the above authentication methods is not configured and client certificates are required by the `--client-ca` flag passed to the server, the @@ -185,9 +203,11 @@ authentication. Bcrypt is recommended. To create an htpasswd file: - touch htpasswd - htpasswd -B htpasswd user - htpasswd -B htpasswd anotherUser +```console +touch htpasswd +htpasswd -B htpasswd user +htpasswd -B htpasswd anotherUser +``` The password file can be updated while rclone is running. @@ -216,8 +236,10 @@ directory should be considered up to date and not refreshed from the backend. Changes made through the VFS will appear immediately or invalidate the cache. 
+```text --dir-cache-time duration Time to cache directory entries for (default 5m0s) --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) +``` However, changes made directly on the cloud storage by the web interface or a different copy of rclone will only be picked up once @@ -229,16 +251,22 @@ You can send a `SIGHUP` signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this: - kill -SIGHUP $(pidof rclone) +```console +kill -SIGHUP $(pidof rclone) +``` If you configure rclone with a [remote control](/rc) then you can use rclone rc to flush the whole directory cache: - rclone rc vfs/forget +```console +rclone rc vfs/forget +``` Or individual files or directories: - rclone rc vfs/forget file=path/to/file dir=path/to/dir +```console +rclone rc vfs/forget file=path/to/file dir=path/to/dir +``` ## VFS File Buffering @@ -269,6 +297,7 @@ write simultaneously to a file. See below for more details. Note that the VFS cache is separate from the cache backend and you may find that you need one or the other or both. +```text --cache-dir string Directory rclone will use for caching. --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) @@ -276,6 +305,7 @@ find that you need one or the other or both. --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) +``` If run with `-vv` rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but @@ -323,13 +353,13 @@ directly to the remote without caching anything on disk. This will mean some operations are not possible - * Files can't be opened for both read AND write - * Files opened for write can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files open for read with O_TRUNC will be opened write only - * Files open for write only will behave as if O_TRUNC was supplied - * Open modes O_APPEND, O_TRUNC are ignored - * If an upload fails it can't be retried +- Files can't be opened for both read AND write +- Files opened for write can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files open for read with O_TRUNC will be opened write only +- Files open for write only will behave as if O_TRUNC was supplied +- Open modes O_APPEND, O_TRUNC are ignored +- If an upload fails it can't be retried ### --vfs-cache-mode minimal @@ -339,10 +369,10 @@ write will be a lot more compatible, but uses the minimal disk space. These operations are not possible - * Files opened for write only can't be seeked - * Existing files opened for write must have O_TRUNC set - * Files opened for write only will ignore O_APPEND, O_TRUNC - * If an upload fails it can't be retried +- Files opened for write only can't be seeked +- Existing files opened for write must have O_TRUNC set +- Files opened for write only will ignore O_APPEND, O_TRUNC +- If an upload fails it can't be retried ### --vfs-cache-mode writes @@ -425,9 +455,11 @@ read, at the cost of an increased number of requests. 
These flags control the chunking: +```text --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) --vfs-read-chunk-streams int The number of parallel streams to read at once +``` The chunking behaves differently depending on the `--vfs-read-chunk-streams` parameter. @@ -441,9 +473,9 @@ value is "off", which is the default, the limit is disabled and the chunk size will grow indefinitely. With `--vfs-read-chunk-size 100M` and `--vfs-read-chunk-size-limit 0` -the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. -When `--vfs-read-chunk-size-limit 500M` is specified, the result would be -0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. +the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M +and so on. When `--vfs-read-chunk-size-limit 500M` is specified, the result would +be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on. Setting `--vfs-read-chunk-size` to `0` or "off" disables chunked reading. @@ -481,32 +513,41 @@ In particular S3 and Swift benefit hugely from the `--no-modtime` flag (or use `--use-server-modtime` for a slightly different effect) as each read of the modification time takes a transaction. +```text --no-checksum Don't compare checksums on up/download. --no-modtime Don't read/write the modification time (can speed things up). --no-seek Don't allow seeking in files. --read-only Only allow read-only access. +``` Sometimes rclone is delivered reads or writes out of order. Rather than seeking rclone will wait a short time for the in sequence read or write to come in. These flags only come into effect when not using an on disk cache file. +```text --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) +``` When using VFS write caching (`--vfs-cache-mode` with value writes or full), -the global flag `--transfers` can be set to adjust the number of parallel uploads of -modified files from the cache (the related global flag `--checkers` has no effect on the VFS). +the global flag `--transfers` can be set to adjust the number of parallel uploads +of modified files from the cache (the related global flag `--checkers` has no +effect on the VFS). +```text --transfers int Number of file transfers to run in parallel (default 4) +``` ## Symlinks By default the VFS does not support symlinks. However this may be enabled with either of the following flags: +```text --links Translate symlinks to/from regular files with a '.rclonelink' extension. --vfs-links Translate symlinks to/from regular files with a '.rclonelink' extension for the VFS +``` As most cloud storage systems do not support symlinks directly, rclone stores the symlink as a normal file with a special extension. So a @@ -518,7 +559,8 @@ Note that `--links` enables symlink translation globally in rclone - this includes any backend which supports the concept (for example the local backend). `--vfs-links` just enables it for the VFS layer. -This scheme is compatible with that used by the [local backend with the --local-links flag](/local/#symlinks-junction-points). +This scheme is compatible with that used by the +[local backend with the --local-links flag](/local/#symlinks-junction-points). The `--vfs-links` flag has been designed for `rclone mount`, `rclone nfsmount` and `rclone serve nfs`. 
@@ -528,7 +570,7 @@ It hasn't been tested with the other `rclone serve` commands yet. A limitation of the current implementation is that it expects the caller to resolve sub-symlinks. For example given this directory tree -``` +```text . ├── dir │   └── file.txt @@ -606,7 +648,9 @@ sync`. This flag allows you to manually set the statistics about the filing system. It can be useful when those statistics cannot be read correctly automatically. +```text --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) +``` ## Alternate report of used bytes @@ -617,7 +661,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to `rclone size` and compute the total used space itself. -_WARNING._ Contrary to `rclone size`, this flag ignores filters so that the +**WARNING**: Contrary to `rclone size`, this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. Use it as a last resort and only with caching. @@ -635,7 +679,7 @@ Note that some backends won't create metadata unless you pass in the For example, using `rclone mount` with `--metadata --vfs-metadata-extension .metadata` we get -``` +```console $ ls -l /mnt/ total 1048577 -rw-rw-r-- 1 user user 1073741824 Mar 3 16:03 1G @@ -683,41 +727,43 @@ options - it is the job of the proxy program to make a complete config. This config generated must have this extra parameter + - `_root` - root to use for the backend And it may have this parameter + - `_obscure` - comma separated strings for parameters to obscure If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "pass": "mypassword" + "user": "me", + "pass": "mypassword" } ``` If public-key authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: -``` +```json { - "user": "me", - "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" + "user": "me", + "public_key": "AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf" } ``` And as an example return this on STDOUT -``` +```json { - "type": "sftp", - "_root": "", - "_obscure": "pass", - "user": "me", - "pass": "mypassword", - "host": "sftp.example.com" + "type": "sftp", + "_root": "", + "_obscure": "pass", + "user": "me", + "pass": "mypassword", + "host": "sftp.example.com" } ``` @@ -739,9 +785,7 @@ password or public-key is changed the cache will need to expire (which takes 5 m before it takes effect. This can be used to build general purpose proxies to any kind of -backend that rclone supports. - - +backend that rclone supports. ``` rclone serve webdav remote:path [flags] @@ -812,7 +856,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -840,5 +884,10 @@ Flags for filtering directory listings ## See Also + + + * [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol. 
+ + diff --git a/docs/content/commands/rclone_settier.md b/docs/content/commands/rclone_settier.md index 268fd61dc..6952b40d8 100644 --- a/docs/content/commands/rclone_settier.md +++ b/docs/content/commands/rclone_settier.md @@ -22,16 +22,21 @@ inaccessible.true You can use it to tier single object - rclone settier Cool remote:path/file +```console +rclone settier Cool remote:path/file +``` Or use rclone filters to set tier on only specific files - rclone --include "*.txt" settier Hot remote:path/dir +```console +rclone --include "*.txt" settier Hot remote:path/dir +``` Or just provide remote directory and all files in directory will be tiered - rclone settier tier remote:path/dir - +```console +rclone settier tier remote:path/dir +``` ``` rclone settier tier remote:path [flags] @@ -47,5 +52,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md index cae7f22b9..99701ef99 100644 --- a/docs/content/commands/rclone_sha1sum.md +++ b/docs/content/commands/rclone_sha1sum.md @@ -30,7 +30,6 @@ as a relative path). This command can also hash data received on STDIN, if not passing a remote:path. - ``` rclone sha1sum remote:path [flags] ``` @@ -52,7 +51,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -82,12 +81,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md index c4bcc0367..f686c2985 100644 --- a/docs/content/commands/rclone_size.md +++ b/docs/content/commands/rclone_size.md @@ -28,7 +28,6 @@ Rclone will then show a notice in the log indicating how many such files were encountered, and count them in as empty files in the output of the size command. - ``` rclone size remote:path [flags] ``` @@ -47,7 +46,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -77,12 +76,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. 
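For example, a quick sketch of checking a remote's object count and total size, in human-readable and JSON form (remote name illustrative):

```console
rclone size remote:path
rclone size --json remote:path
```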
+ + diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md index b200958e0..1b38f0793 100644 --- a/docs/content/commands/rclone_sync.md +++ b/docs/content/commands/rclone_sync.md @@ -20,7 +20,9 @@ want to delete files from destination, use the **Important**: Since this can cause data loss, test first with the `--dry-run` or the `--interactive`/`i` flag. - rclone sync --interactive SOURCE remote:DESTINATION +```sh +rclone sync --interactive SOURCE remote:DESTINATION +``` Files in the destination won't be deleted if there were any errors at any point. Duplicate objects (files with the same name, on those providers that @@ -37,7 +39,7 @@ If dest:path doesn't exist, it is created and the source:path contents go there. It is not possible to sync overlapping remotes. However, you may exclude -the destination from the sync with a filter rule or by putting an +the destination from the sync with a filter rule or by putting an exclude-if-present file inside the destination directory and sync to a destination that is inside the source directory. @@ -46,20 +48,23 @@ the backend supports it. If metadata syncing is required then use the `--metadata` flag. Note that the modification time and metadata for the root directory -will **not** be synced. See https://github.com/rclone/rclone/issues/7652 +will **not** be synced. See for more info. **Note**: Use the `-P`/`--progress` flag to view real-time transfer statistics -**Note**: Use the `rclone dedupe` command to deal with "Duplicate object/directory found in source/destination - ignoring" errors. -See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info. +**Note**: Use the `rclone dedupe` command to deal with "Duplicate +object/directory found in source/destination - ignoring" errors. +See [this forum post](https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) +for more info. -# Logger Flags +## Logger Flags -The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` flags write paths, -one per line, to the file name (or stdout if it is `-`) supplied. What they write is described -in the help below. For example `--differ` will write all paths which are present -on both the source and destination but different. +The `--differ`, `--missing-on-dst`, `--missing-on-src`, `--match` and `--error` +flags write paths, one per line, to the file name (or stdout if it is `-`) +supplied. What they write is described in the help below. For example +`--differ` will write all paths which are present on both the source and +destination but different. The `--combined` flag will write a file (or stdout) which contains all file paths with a symbol and then a space and then the path to tell @@ -92,9 +97,7 @@ are not currently supported: Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each file -(which may or may not match what actually DID.) - - +(which may or may not match what actually DID). 
``` rclone sync source:path dest:path [flags] @@ -120,7 +123,7 @@ rclone sync source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default ";") - -t, --timeformat string Specify a custom time format, or 'max' for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) ``` Options shared with other commands are described next. @@ -130,7 +133,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for anything which can copy a file -``` +```text --check-first Do all the checks before starting transfers -c, --checksum Check for changes with size & checksum (if available, or fallback to size only) --compare-dest stringArray Include additional server-side paths during comparison @@ -171,7 +174,7 @@ Flags for anything which can copy a file Flags used for sync commands -``` +```text --backup-dir string Make backups into hierarchy based in DIR --delete-after When synchronizing, delete files on destination after transferring (default) --delete-before When synchronizing, delete files on destination before transferring @@ -191,7 +194,7 @@ Flags used for sync commands Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -201,7 +204,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -231,12 +234,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_test.md b/docs/content/commands/rclone_test.md index 0aaddb775..ff33a340b 100644 --- a/docs/content/commands/rclone_test.md +++ b/docs/content/commands/rclone_test.md @@ -14,14 +14,15 @@ Rclone test is used to run test commands. Select which test command you want with the subcommand, eg - rclone test memory remote: +```console +rclone test memory remote: +``` Each subcommand has its own options which you can see in their help. **NB** Be careful running these commands, they may do strange things so reading their documentation first is recommended. - ## Options ``` @@ -32,6 +33,9 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. * [rclone test changenotify](/commands/rclone_test_changenotify/) - Log any change notify requests for the remote passed in. * [rclone test histogram](/commands/rclone_test_histogram/) - Makes a histogram of file name characters. @@ -39,4 +43,7 @@ See the [global flags page](/flags/) for global options not listed here. 
* [rclone test makefile](/commands/rclone_test_makefile/) - Make files with random contents of the size given * [rclone test makefiles](/commands/rclone_test_makefiles/) - Make a random file hierarchy in a directory * [rclone test memory](/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats. +* [rclone test speed](/commands/rclone_test_speed/) - Run a speed test to the remote + + diff --git a/docs/content/commands/rclone_test_changenotify.md b/docs/content/commands/rclone_test_changenotify.md index 1efc25554..c911609bd 100644 --- a/docs/content/commands/rclone_test_changenotify.md +++ b/docs/content/commands/rclone_test_changenotify.md @@ -23,5 +23,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone test](/commands/rclone_test/) - Run a test command + + diff --git a/docs/content/commands/rclone_test_histogram.md b/docs/content/commands/rclone_test_histogram.md index b3b3088ab..efd8f780b 100644 --- a/docs/content/commands/rclone_test_histogram.md +++ b/docs/content/commands/rclone_test_histogram.md @@ -16,7 +16,6 @@ in filenames in the remote:path specified. The data doesn't contain any identifying information but is useful for the rclone developers when developing filename compression. - ``` rclone test histogram [remote:path] [flags] ``` @@ -31,5 +30,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone test](/commands/rclone_test/) - Run a test command + + diff --git a/docs/content/commands/rclone_test_info.md b/docs/content/commands/rclone_test_info.md index 2a9ccf16f..50104fd24 100644 --- a/docs/content/commands/rclone_test_info.md +++ b/docs/content/commands/rclone_test_info.md @@ -15,8 +15,7 @@ paths passed in and how long they can be. It can take some time. It will write test files into the remote:path passed in. It outputs a bit of go code for each one. -**NB** this can create undeletable files and other hazards - use with care - +**NB** this can create undeletable files and other hazards - use with care! ``` rclone test info [remote:path]+ [flags] @@ -41,5 +40,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone test](/commands/rclone_test/) - Run a test command + + diff --git a/docs/content/commands/rclone_test_makefile.md b/docs/content/commands/rclone_test_makefile.md index 82e5da0bb..543102f62 100644 --- a/docs/content/commands/rclone_test_makefile.md +++ b/docs/content/commands/rclone_test_makefile.md @@ -28,5 +28,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone test](/commands/rclone_test/) - Run a test command + + diff --git a/docs/content/commands/rclone_test_makefiles.md b/docs/content/commands/rclone_test_makefiles.md index 79fdfab83..35f554647 100644 --- a/docs/content/commands/rclone_test_makefiles.md +++ b/docs/content/commands/rclone_test_makefiles.md @@ -36,5 +36,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone test](/commands/rclone_test/) - Run a test command + + diff --git a/docs/content/commands/rclone_test_memory.md b/docs/content/commands/rclone_test_memory.md index 50b985824..c5ac71798 100644 --- a/docs/content/commands/rclone_test_memory.md +++ b/docs/content/commands/rclone_test_memory.md @@ -22,5 +22,10 @@ See the [global flags page](/flags/) for global options not listed here. 
## See Also + + + * [rclone test](/commands/rclone_test/) - Run a test command + + diff --git a/docs/content/commands/rclone_test_speed.md b/docs/content/commands/rclone_test_speed.md new file mode 100644 index 000000000..8a5cb9697 --- /dev/null +++ b/docs/content/commands/rclone_test_speed.md @@ -0,0 +1,64 @@ +--- +title: "rclone test speed" +description: "Run a speed test to the remote" +versionIntroduced: v1.72 +# autogenerated - DO NOT EDIT, instead edit the source code in cmd/test/speed/ and as part of making a release run "make commanddocs" +--- +# rclone test speed + +Run a speed test to the remote + +## Synopsis + +Run a speed test to the remote. + +This command runs a series of uploads and downloads to the remote, measuring +and printing the speed of each test using varying file sizes and numbers of +files. + +Test time can be innaccurate with small file caps and large files. As it +uses the results of an initial test to determine how many files to use in +each subsequent test. + +It is recommended to use -q flag for a simpler output. e.g.: + + rclone test speed remote: -q + +**NB** This command will create and delete files on the remote in a randomly +named directory which will be automatically removed on a clean exit. + +You can use the --json flag to only print the results in JSON format. + +``` +rclone test speed [flags] +``` + +## Options + +``` + --ascii Fill files with random ASCII printable bytes only + --chargen Fill files with a ASCII chargen pattern + --file-cap int Maximum number of files to use in each test (default 100) + -h, --help help for speed + --json Output only results in JSON format + --large SizeSuffix Size of large files (default 1Gi) + --medium SizeSuffix Size of medium files (default 10Mi) + --pattern Fill files with a periodic pattern + --seed int Seed for the random number generator (0 for random) (default 1) + --small SizeSuffix Size of small files (default 1Ki) + --sparse Make the files sparse (appear to be filled with ASCII 0x00) + --test-time Duration Length for each test to run (default 15s) + --zero Fill files with ASCII 0x00 +``` + +See the [global flags page](/flags/) for global options not listed here. + +## See Also + + + + +* [rclone test](/commands/rclone_test/) - Run a test command + + + diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md index 82b5bf4df..afbd9b1fc 100644 --- a/docs/content/commands/rclone_touch.md +++ b/docs/content/commands/rclone_touch.md @@ -31,7 +31,6 @@ time instead of the current time. Times may be specified as one of: Note that value of `--timestamp` is in UTC. If you want local time then add the `--localtime` flag. - ``` rclone touch remote:path [flags] ``` @@ -53,7 +52,7 @@ See the [global flags page](/flags/) for global options not listed here. 
Important flags useful for most commands -``` +```text -n, --dry-run Do a trial run with no permanent changes -i, --interactive Enable interactive mode -v, --verbose count Print lots more stuff (repeat for more) @@ -63,7 +62,7 @@ Important flags useful for most commands Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -93,12 +92,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md index 74bfa15fe..08000e8d8 100644 --- a/docs/content/commands/rclone_tree.md +++ b/docs/content/commands/rclone_tree.md @@ -14,16 +14,18 @@ Lists the contents of a remote in a similar way to the unix tree command. For example - $ rclone tree remote:path - / - ├── file1 - ├── file2 - ├── file3 - └── subdir - ├── file4 - └── file5 +```text +$ rclone tree remote:path +/ +├── file1 +├── file2 +├── file3 +└── subdir + ├── file4 + └── file5 - 1 directories, 5 files +1 directories, 5 files +``` You can use any of the filtering options with the tree command (e.g. `--include` and `--exclude`. You can also use `--fast-list`. @@ -36,7 +38,6 @@ short options as they conflict with rclone's short options. For a more interactive navigation of the remote see the [ncdu](/commands/rclone_ncdu/) command. - ``` rclone tree remote:path [flags] ``` @@ -72,7 +73,7 @@ See the [global flags page](/flags/) for global options not listed here. Flags for filtering directory listings -``` +```text --delete-excluded Delete files on dest excluded from sync --exclude stringArray Exclude files matching pattern --exclude-from stringArray Read file exclude patterns from file (use - to read from stdin) @@ -102,12 +103,17 @@ Flags for filtering directory listings Flags for listing directories -``` +```text --default-time Time Time to show if modtime is unknown for files and directories (default 2000-01-01T00:00:00Z) --fast-list Use recursive list if available; uses more memory but fewer transactions ``` ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md index 9aca17dd8..03c1ab263 100644 --- a/docs/content/commands/rclone_version.md +++ b/docs/content/commands/rclone_version.md @@ -16,15 +16,17 @@ build tags and the type of executable (static or dynamic). For example: - $ rclone version - rclone v1.55.0 - - os/version: ubuntu 18.04 (64 bit) - - os/kernel: 4.15.0-136-generic (x86_64) - - os/type: linux - - os/arch: amd64 - - go/version: go1.16 - - go/linking: static - - go/tags: none +```console +$ rclone version +rclone v1.55.0 +- os/version: ubuntu 18.04 (64 bit) +- os/kernel: 4.15.0-136-generic (x86_64) +- os/type: linux +- os/arch: amd64 +- go/version: go1.16 +- go/linking: static +- go/tags: none +``` Note: before rclone version 1.55 the os/type and os/arch lines were merged, and the "go/version" line was tagged as "go version". 
@@ -32,25 +34,28 @@ Note: before rclone version 1.55 the os/type and os/arch lines were merged, If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta. - $ rclone version --check - yours: 1.42.0.6 - latest: 1.42 (released 2018-06-16) - beta: 1.42.0.5 (released 2018-06-17) +```console +$ rclone version --check +yours: 1.42.0.6 +latest: 1.42 (released 2018-06-16) +beta: 1.42.0.5 (released 2018-06-17) +``` Or - $ rclone version --check - yours: 1.41 - latest: 1.42 (released 2018-06-16) - upgrade: https://downloads.rclone.org/v1.42 - beta: 1.42.0.5 (released 2018-06-17) - upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 +```console +$ rclone version --check +yours: 1.41 +latest: 1.42 (released 2018-06-16) + upgrade: https://downloads.rclone.org/v1.42 +beta: 1.42.0.5 (released 2018-06-17) + upgrade: https://beta.rclone.org/v1.42-005-g56e1e820 +``` If you supply the --deps flag then rclone will print a list of all the packages it depends on and their versions along with some other information about the build. - ``` rclone version [flags] ``` @@ -67,5 +72,10 @@ See the [global flags page](/flags/) for global options not listed here. ## See Also + + + * [rclone](/commands/rclone/) - Show help for rclone commands, flags and backends. + + diff --git a/docs/content/compress.md b/docs/content/compress.md index c09ce2579..0d854bedd 100644 --- a/docs/content/compress.md +++ b/docs/content/compress.md @@ -151,10 +151,10 @@ Properties: - Type: string - Default: "gzip" - Examples: - - "gzip" - - Standard gzip compression with fastest parameters. - - "zstd" - - Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs. + - "gzip" + - Standard gzip compression with fastest parameters. + - "zstd" + - Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs. #### --compress-level diff --git a/docs/content/crypt.md b/docs/content/crypt.md index 83be38303..0a0d2a42c 100644 --- a/docs/content/crypt.md +++ b/docs/content/crypt.md @@ -448,14 +448,14 @@ Properties: - Type: string - Default: "standard" - Examples: - - "standard" - - Encrypt the filenames. - - See the docs for the details. - - "obfuscate" - - Very simple filename obfuscation. - - "off" - - Don't encrypt the file names. - - Adds a ".bin", or "suffix" extension only. + - "standard" + - Encrypt the filenames. + - See the docs for the details. + - "obfuscate" + - Very simple filename obfuscation. + - "off" + - Don't encrypt the file names. + - Adds a ".bin", or "suffix" extension only. #### --crypt-directory-name-encryption @@ -470,10 +470,10 @@ Properties: - Type: bool - Default: true - Examples: - - "true" - - Encrypt directory names. - - "false" - - Don't encrypt directory names, leave them intact. + - "true" + - Encrypt directory names. + - "false" + - Don't encrypt directory names, leave them intact. #### --crypt-password @@ -560,10 +560,10 @@ Properties: - Type: bool - Default: false - Examples: - - "true" - - Don't encrypt file data, leave it unencrypted. - - "false" - - Encrypt file data. + - "true" + - Don't encrypt file data, leave it unencrypted. + - "false" + - Encrypt file data. #### --crypt-pass-bad-blocks @@ -611,13 +611,13 @@ Properties: - Type: string - Default: "base32" - Examples: - - "base32" - - Encode using base32. Suitable for all remote. - - "base64" - - Encode using base64. Suitable for case sensitive remote. - - "base32768" - - Encode using base32768. 
Suitable if your remote counts UTF-16 or - - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox) + - "base32" + - Encode using base32. Suitable for all remote. + - "base64" + - Encode using base64. Suitable for case sensitive remote. + - "base32768" + - Encode using base32768. Suitable if your remote counts UTF-16 or + - Unicode codepoint instead of UTF-8 byte length. (Eg. Onedrive, Dropbox) #### --crypt-suffix @@ -654,9 +654,11 @@ See the [metadata](/docs/#metadata) docs for more info. Here are the commands specific to the crypt backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -668,34 +670,40 @@ These can be run on a running backend using the rc command ### encode -Encode the given filename(s) +Encode the given filename(s). - rclone backend encode remote: [options] [+] +```console +rclone backend encode remote: [options] [+] +``` This encodes the filenames given as arguments returning a list of strings of the encoded results. -Usage Example: - - rclone backend encode crypt: file1 [file2...] - rclone rc backend/command command=encode fs=crypt: file1 [file2...] +Usage examples: +```console +rclone backend encode crypt: file1 [file2...] +rclone rc backend/command command=encode fs=crypt: file1 [file2...] +``` ### decode -Decode the given filename(s) +Decode the given filename(s). - rclone backend decode remote: [options] [+] +```console +rclone backend decode remote: [options] [+] +``` This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid. -Usage Example: - - rclone backend decode crypt: encryptedfile1 [encryptedfile2...] - rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] +Usage examples: +```console +rclone backend decode crypt: encryptedfile1 [encryptedfile2...] +rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...] +``` diff --git a/docs/content/doi.md b/docs/content/doi.md index a1778fa83..9fe179c06 100644 --- a/docs/content/doi.md +++ b/docs/content/doi.md @@ -100,14 +100,14 @@ Properties: - Type: string - Required: false - Examples: - - "auto" - - Auto-detect provider - - "zenodo" - - Zenodo - - "dataverse" - - Dataverse - - "invenio" - - Invenio + - "auto" + - Auto-detect provider + - "zenodo" + - Zenodo + - "dataverse" + - Dataverse + - "invenio" + - Invenio #### --doi-doi-resolver-api-url @@ -139,9 +139,11 @@ Properties: Here are the commands specific to the doi backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -155,29 +157,38 @@ These can be run on a running backend using the rc command Show metadata about the DOI. - rclone backend metadata remote: [options] [+] +```console +rclone backend metadata remote: [options] [+] +``` This command returns a JSON object with some information about the DOI. - rclone backend medatadata doi: +Usage example: + +```console +rclone backend metadata doi: +``` It returns a JSON object representing metadata about the DOI. - ### set Set command for updating the config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` This set command can be used to update the config parameters for a running doi backend. 
-Usage Examples: +Usage examples: - rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI +```console +rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI +``` The option keys are named as they are in the config file. @@ -187,5 +198,4 @@ will default to those currently in use. It doesn't return anything. - diff --git a/docs/content/drive.md b/docs/content/drive.md index 8de9d6858..afbd4119a 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -641,20 +641,20 @@ Properties: - Type: string - Required: false - Examples: - - "drive" - - Full access all files, excluding Application Data Folder. - - "drive.readonly" - - Read-only access to file metadata and file contents. - - "drive.file" - - Access to files created by rclone only. - - These are visible in the drive website. - - File authorization is revoked when the user deauthorizes the app. - - "drive.appfolder" - - Allows read and write access to the Application Data folder. - - This is not visible in the drive website. - - "drive.metadata.readonly" - - Allows read-only access to file metadata but - - does not allow any access to read or download file content. + - "drive" + - Full access all files, excluding Application Data Folder. + - "drive.readonly" + - Read-only access to file metadata and file contents. + - "drive.file" + - Access to files created by rclone only. + - These are visible in the drive website. + - File authorization is revoked when the user deauthorizes the app. + - "drive.appfolder" + - Allows read and write access to the Application Data folder. + - This is not visible in the drive website. + - "drive.metadata.readonly" + - Allows read-only access to file metadata but + - does not allow any access to read or download file content. #### --drive-service-account-file @@ -1342,16 +1342,16 @@ Properties: - Type: Bits - Default: read - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "failok" - - If writing fails log errors only, don't fail the transfer - - "read,write" - - Read and Write the value. + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "failok" + - If writing fails log errors only, don't fail the transfer + - "read,write" + - Read and Write the value. #### --drive-metadata-permissions @@ -1372,16 +1372,16 @@ Properties: - Type: Bits - Default: off - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "failok" - - If writing fails log errors only, don't fail the transfer - - "read,write" - - Read and Write the value. + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "failok" + - If writing fails log errors only, don't fail the transfer + - "read,write" + - Read and Write the value. 
#### --drive-metadata-labels @@ -1409,16 +1409,16 @@ Properties: - Type: Bits - Default: off - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "failok" - - If writing fails log errors only, don't fail the transfer - - "read,write" - - Read and Write the value. + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "failok" + - If writing fails log errors only, don't fail the transfer + - "read,write" + - Read and Write the value. #### --drive-encoding @@ -1446,10 +1446,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter credentials in the next step. - - "true" - - Get GCP IAM credentials from the environment (env vars or IAM). + - "false" + - Enter credentials in the next step. + - "true" + - Get GCP IAM credentials from the environment (env vars or IAM). #### --drive-description @@ -1491,9 +1491,11 @@ See the [metadata](/docs/#metadata) docs for more info. Here are the commands specific to the drive backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -1505,54 +1507,66 @@ These can be run on a running backend using the rc command ### get -Get command for fetching the drive config parameters +Get command for fetching the drive config parameters. - rclone backend get remote: [options] [+] +```console +rclone backend get remote: [options] [+] +``` -This is a get command which will be used to fetch the various drive config parameters +This is a get command which will be used to fetch the various drive config +parameters. -Usage Examples: - - rclone backend get drive: [-o service_account_file] [-o chunk_size] - rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] +Usage examples: +```console +rclone backend get drive: [-o service_account_file] [-o chunk_size] +rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size] +``` Options: -- "chunk_size": show the current upload chunk size -- "service_account_file": show the current service account file +- "chunk_size": Show the current upload chunk size. +- "service_account_file": Show the current service account file. ### set -Set command for updating the drive config parameters +Set command for updating the drive config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` -This is a set command which will be used to update the various drive config parameters +This is a set command which will be used to update the various drive config +parameters. -Usage Examples: - - rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] - rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] +Usage examples: +```console +rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] +rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864] +``` Options: -- "chunk_size": update the current upload chunk size -- "service_account_file": update the current service account file +- "chunk_size": Update the current upload chunk size. +- "service_account_file": Update the current service account file. 
### shortcut -Create shortcuts from files or directories +Create shortcuts from files or directories. - rclone backend shortcut remote: [options] [+] +```console +rclone backend shortcut remote: [options] [+] +``` This command creates shortcuts from files or directories. -Usage: +Usage examples: - rclone backend shortcut drive: source_item destination_shortcut - rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut +```console +rclone backend shortcut drive: source_item destination_shortcut +rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut +``` In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The @@ -1564,54 +1578,61 @@ relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:". - Options: -- "target": optional target remote for the shortcut destination +- "target": Optional target remote for the shortcut destination. ### drives -List the Shared Drives available to this account +List the Shared Drives available to this account. - rclone backend drives remote: [options] [+] +```console +rclone backend drives remote: [options] [+] +``` This command lists the Shared Drives (Team Drives) available to this account. -Usage: +Usage example: - rclone backend [-o config] drives drive: +```console +rclone backend [-o config] drives drive: +``` -This will return a JSON list of objects like this +This will return a JSON list of objects like this: - [ - { - "id": "0ABCDEF-01234567890", - "kind": "drive#teamDrive", - "name": "My Drive" - }, - { - "id": "0ABCDEFabcdefghijkl", - "kind": "drive#teamDrive", - "name": "Test Drive" - } - ] +```json +[ + { + "id": "0ABCDEF-01234567890", + "kind": "drive#teamDrive", + "name": "My Drive" + }, + { + "id": "0ABCDEFabcdefghijkl", + "kind": "drive#teamDrive", + "name": "Test Drive" + } +] +``` With the -o config parameter it will output the list in a format suitable for adding to a config file to make aliases for all the drives found and a combined drive. - [My Drive] - type = alias - remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: +```ini +[My Drive] +type = alias +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: - [Test Drive] - type = alias - remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: +[Test Drive] +type = alias +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: - [AllDrives] - type = combine - upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +[AllDrives] +type = combine +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +``` Adding this to the rclone config file will cause those team drives to be accessible with the aliases shown. Any illegal characters will be @@ -1619,46 +1640,55 @@ substituted with "_" and duplicate names will have numbers suffixed. It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree. - ### untrash -Untrash files and directories +Untrash files and directories. - rclone backend untrash remote: [options] [+] +```console +rclone backend untrash remote: [options] [+] +``` This command untrashes all the files and directories in the directory passed in recursively. 
-Usage: +Usage example: + +```console +rclone backend untrash drive:directory +rclone backend --interactive untrash drive:directory subdir +``` This takes an optional directory to trash which make this easier to use via the API. - rclone backend untrash drive:directory - rclone backend --interactive untrash drive:directory subdir - -Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it. +Use the --interactive/-i or --dry-run flag to see what would be restored before +restoring it. Result: - { - "Untrashed": 17, - "Errors": 0 - } - +```json +{ + "Untrashed": 17, + "Errors": 0 +} +``` ### copyid -Copy files by ID +Copy files by ID. - rclone backend copyid remote: [options] [+] +```console +rclone backend copyid remote: [options] [+] +``` -This command copies files by ID +This command copies files by ID. -Usage: +Usage examples: - rclone backend copyid drive: ID path - rclone backend copyid drive: ID1 path1 ID2 path2 +```console +rclone backend copyid drive: ID path +rclone backend copyid drive: ID1 path1 ID2 path2 +``` It copies the drive file with ID given to the path (an rclone path which will be passed internally to rclone copyto). The ID and path pairs can be @@ -1671,21 +1701,25 @@ component will be used as the file name. If the destination is a drive backend then server-side copying will be attempted if possible. -Use the --interactive/-i or --dry-run flag to see what would be copied before copying. - +Use the --interactive/-i or --dry-run flag to see what would be copied before +copying. ### moveid -Move files by ID +Move files by ID. - rclone backend moveid remote: [options] [+] +```console +rclone backend moveid remote: [options] [+] +``` -This command moves files by ID +This command moves files by ID. -Usage: +Usage examples: - rclone backend moveid drive: ID path - rclone backend moveid drive: ID1 path1 ID2 path2 +```console +rclone backend moveid drive: ID path +rclone backend moveid drive: ID1 path1 ID2 path2 +``` It moves the drive file with ID given to the path (an rclone path which will be passed internally to rclone moveto). @@ -1699,69 +1733,84 @@ attempted if possible. Use the --interactive/-i or --dry-run flag to see what would be moved beforehand. - ### exportformats -Dump the export formats for debug purposes +Dump the export formats for debug purposes. - rclone backend exportformats remote: [options] [+] +```console +rclone backend exportformats remote: [options] [+] +``` ### importformats -Dump the import formats for debug purposes +Dump the import formats for debug purposes. - rclone backend importformats remote: [options] [+] +```console +rclone backend importformats remote: [options] [+] +``` ### query -List files using Google Drive query language +List files using Google Drive query language. - rclone backend query remote: [options] [+] +```console +rclone backend query remote: [options] [+] +``` -This command lists files based on a query +This command lists files based on a query. -Usage: +Usage example: + +```console +rclone backend query drive: query +``` - rclone backend query drive: query - The query syntax is documented at [Google Drive Search query terms and operators](https://developers.google.com/drive/api/guides/ref-search-terms). 
For example: - rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'" +```console +rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'" +``` If the query contains literal ' or \ characters, these need to be escaped with \ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a file named "foo ' \.txt": - rclone backend query drive: "name = 'foo \' \\\.txt'" +```console +rclone backend query drive: "name = 'foo \' \\\.txt'" +``` The result is a JSON array of matches, for example: - [ - { - "createdTime": "2017-06-29T19:58:28.537Z", - "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD", - "md5Checksum": "68518d16be0c6fbfab918be61d658032", - "mimeType": "text/plain", - "modifiedTime": "2024-02-02T10:40:02.874Z", - "name": "foo ' \\.txt", - "parents": [ - "0BxAe_BCDE4zkFGZpcWJGek0xbzC" - ], - "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC", - "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893", - "size": "311", - "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC" - } - ] +```json +[ + { + "createdTime": "2017-06-29T19:58:28.537Z", + "id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD", + "md5Checksum": "68518d16be0c6fbfab918be61d658032", + "mimeType": "text/plain", + "modifiedTime": "2024-02-02T10:40:02.874Z", + "name": "foo ' \\.txt", + "parents": [ + "0BxAe_BCDE4zkFGZpcWJGek0xbzC" + ], + "resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC", + "sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893", + "size": "311", + "webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC" + } +] +```console ### rescue -Rescue or delete any orphaned files +Rescue or delete any orphaned files. - rclone backend rescue remote: [options] [+] +```console +rclone backend rescue remote: [options] [+] +``` This command rescues or deletes any orphaned files or directories. @@ -1771,26 +1820,31 @@ are no longer in any folder in Google Drive. This command finds those files and either rescues them to a directory you specify or deletes them. -Usage: - This can be used in 3 ways. -First, list all orphaned files +First, list all orphaned files: - rclone backend rescue drive: +```console +rclone backend rescue drive: +``` -Second rescue all orphaned files to the directory indicated +Second rescue all orphaned files to the directory indicated: - rclone backend rescue drive: "relative/path/to/rescue/directory" +```console +rclone backend rescue drive: "relative/path/to/rescue/directory" +``` -e.g. To rescue all orphans to a directory called "Orphans" in the top level +E.g. 
to rescue all orphans to a directory called "Orphans" in the top level: - rclone backend rescue drive: Orphans +```console +rclone backend rescue drive: Orphans +``` -Third delete all orphaned files to the trash - - rclone backend rescue drive: -o delete +Third delete all orphaned files to the trash: +```console +rclone backend rescue drive: -o delete +``` diff --git a/docs/content/filefabric.md b/docs/content/filefabric.md index 7e28c6c31..fffe255b1 100644 --- a/docs/content/filefabric.md +++ b/docs/content/filefabric.md @@ -177,12 +177,12 @@ Properties: - Type: string - Required: true - Examples: - - "https://storagemadeeasy.com" - - Storage Made Easy US - - "https://eu.storagemadeeasy.com" - - Storage Made Easy EU - - "https://yourfabric.smestorage.com" - - Connect to your Enterprise File Fabric + - "https://storagemadeeasy.com" + - Storage Made Easy US + - "https://eu.storagemadeeasy.com" + - Storage Made Easy EU + - "https://yourfabric.smestorage.com" + - Connect to your Enterprise File Fabric #### --filefabric-root-folder-id diff --git a/docs/content/flags.md b/docs/content/flags.md index c026a142e..cc3b11e0d 100644 --- a/docs/content/flags.md +++ b/docs/content/flags.md @@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default "rclone/v1.71.0") + --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0") ``` @@ -341,6 +341,8 @@ Backend-only flags (these can be set in the config file also). ``` --alias-description string Description of the remote --alias-remote string Remote or path to alias + --archive-description string Description of the remote + --archive-remote string Remote to wrap to read archives from --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name --azureblob-archive-tier-delete Delete archive tier blobs before overwriting @@ -418,6 +420,10 @@ Backend-only flags (these can be set in the config file also). --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket + --b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2 + --b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + --b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + --b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) @@ -479,7 +485,7 @@ Backend-only flags (these can be set in the config file also). 
--combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining --compress-description string Description of the remote - --compress-level int GZIP compression level (-2 to 9) (default -1) + --compress-level string GZIP (levels -2 to 9): --compress-mode string Compression mode (default "gzip") --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress @@ -774,6 +780,7 @@ Backend-only flags (these can be set in the config file also). --mailru-token string OAuth Access Token as a JSON blob --mailru-token-url string Token server url --mailru-user string User name (usually email) + --mega-2fa string The 2FA code of your MEGA account if the account is set up with one --mega-debug Output more debug from Mega --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -892,6 +899,7 @@ Backend-only flags (these can be set in the config file also). --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-otp-secret-key string The OTP secret key (obscured) --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected --protondrive-username string The username of your proton account @@ -974,6 +982,7 @@ Backend-only flags (these can be set in the config file also). --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) --s3-use-arn-region If true, enables arn region support for the service + --s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset) --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) @@ -1056,6 +1065,7 @@ Backend-only flags (these can be set in the config file also). 
--sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default "Sia-Agent") --skip-links Don't warn about skipped symlinks + --skip-specials Don't warn about skipped pipes, sockets and device objects --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default "WORKGROUP") diff --git a/docs/content/ftp.md b/docs/content/ftp.md index 0de99cc93..38b9bad0c 100644 --- a/docs/content/ftp.md +++ b/docs/content/ftp.md @@ -541,12 +541,12 @@ Properties: - Type: Encoding - Default: Slash,Del,Ctl,RightSpace,Dot - Examples: - - "Asterisk,Ctl,Dot,Slash" - - ProFTPd can't handle '*' in file names - - "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket" - - PureFTPd can't handle '[]' or '*' in file names - - "Ctl,LeftPeriod,Slash" - - VsFTPd can't handle file names starting with dot + - "Asterisk,Ctl,Dot,Slash" + - ProFTPd can't handle '*' in file names + - "BackSlash,Ctl,Del,Dot,RightSpace,Slash,SquareBracket" + - PureFTPd can't handle '[]' or '*' in file names + - "Ctl,LeftPeriod,Slash" + - VsFTPd can't handle file names starting with dot #### --ftp-description diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md index 8f869ef0f..902ff1f31 100644 --- a/docs/content/googlecloudstorage.md +++ b/docs/content/googlecloudstorage.md @@ -466,24 +466,24 @@ Properties: - Type: string - Required: false - Examples: - - "authenticatedRead" - - Object owner gets OWNER access. - - All Authenticated Users get READER access. - - "bucketOwnerFullControl" - - Object owner gets OWNER access. - - Project team owners get OWNER access. - - "bucketOwnerRead" - - Object owner gets OWNER access. - - Project team owners get READER access. - - "private" - - Object owner gets OWNER access. - - Default if left blank. - - "projectPrivate" - - Object owner gets OWNER access. - - Project team members get access according to their roles. - - "publicRead" - - Object owner gets OWNER access. - - All Users get READER access. + - "authenticatedRead" + - Object owner gets OWNER access. + - All Authenticated Users get READER access. + - "bucketOwnerFullControl" + - Object owner gets OWNER access. + - Project team owners get OWNER access. + - "bucketOwnerRead" + - Object owner gets OWNER access. + - Project team owners get READER access. + - "private" + - Object owner gets OWNER access. + - Default if left blank. + - "projectPrivate" + - Object owner gets OWNER access. + - Project team members get access according to their roles. + - "publicRead" + - Object owner gets OWNER access. + - All Users get READER access. #### --gcs-bucket-acl @@ -496,20 +496,20 @@ Properties: - Type: string - Required: false - Examples: - - "authenticatedRead" - - Project team owners get OWNER access. - - All Authenticated Users get READER access. - - "private" - - Project team owners get OWNER access. - - Default if left blank. - - "projectPrivate" - - Project team members get access according to their roles. - - "publicRead" - - Project team owners get OWNER access. - - All Users get READER access. - - "publicReadWrite" - - Project team owners get OWNER access. - - All Users get WRITER access. + - "authenticatedRead" + - Project team owners get OWNER access. + - All Authenticated Users get READER access. + - "private" + - Project team owners get OWNER access. 
+ - Default if left blank. + - "projectPrivate" + - Project team members get access according to their roles. + - "publicRead" + - Project team owners get OWNER access. + - All Users get READER access. + - "publicReadWrite" + - Project team owners get OWNER access. + - All Users get WRITER access. #### --gcs-bucket-policy-only @@ -545,78 +545,80 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Empty for default location (US) - - "asia" - - Multi-regional location for Asia - - "eu" - - Multi-regional location for Europe - - "us" - - Multi-regional location for United States - - "asia-east1" - - Taiwan - - "asia-east2" - - Hong Kong - - "asia-northeast1" - - Tokyo - - "asia-northeast2" - - Osaka - - "asia-northeast3" - - Seoul - - "asia-south1" - - Mumbai - - "asia-south2" - - Delhi - - "asia-southeast1" - - Singapore - - "asia-southeast2" - - Jakarta - - "australia-southeast1" - - Sydney - - "australia-southeast2" - - Melbourne - - "europe-north1" - - Finland - - "europe-west1" - - Belgium - - "europe-west2" - - London - - "europe-west3" - - Frankfurt - - "europe-west4" - - Netherlands - - "europe-west6" - - Zürich - - "europe-central2" - - Warsaw - - "us-central1" - - Iowa - - "us-east1" - - South Carolina - - "us-east4" - - Northern Virginia - - "us-west1" - - Oregon - - "us-west2" - - California - - "us-west3" - - Salt Lake City - - "us-west4" - - Las Vegas - - "northamerica-northeast1" - - Montréal - - "northamerica-northeast2" - - Toronto - - "southamerica-east1" - - São Paulo - - "southamerica-west1" - - Santiago - - "asia1" - - Dual region: asia-northeast1 and asia-northeast2. - - "eur4" - - Dual region: europe-north1 and europe-west4. - - "nam4" - - Dual region: us-central1 and us-east1. + - "" + - Empty for default location (US) + - "asia" + - Multi-regional location for Asia + - "eu" + - Multi-regional location for Europe + - "us" + - Multi-regional location for United States + - "asia-east1" + - Taiwan + - "asia-east2" + - Hong Kong + - "asia-northeast1" + - Tokyo + - "asia-northeast2" + - Osaka + - "asia-northeast3" + - Seoul + - "asia-south1" + - Mumbai + - "asia-south2" + - Delhi + - "asia-southeast1" + - Singapore + - "asia-southeast2" + - Jakarta + - "australia-southeast1" + - Sydney + - "australia-southeast2" + - Melbourne + - "europe-north1" + - Finland + - "europe-west1" + - Belgium + - "europe-west2" + - London + - "europe-west3" + - Frankfurt + - "europe-west4" + - Netherlands + - "europe-west6" + - Zürich + - "europe-central2" + - Warsaw + - "us-central1" + - Iowa + - "us-east1" + - South Carolina + - "us-east4" + - Northern Virginia + - "us-east5" + - Ohio + - "us-west1" + - Oregon + - "us-west2" + - California + - "us-west3" + - Salt Lake City + - "us-west4" + - Las Vegas + - "northamerica-northeast1" + - Montréal + - "northamerica-northeast2" + - Toronto + - "southamerica-east1" + - São Paulo + - "southamerica-west1" + - Santiago + - "asia1" + - Dual region: asia-northeast1 and asia-northeast2. + - "eur4" + - Dual region: europe-north1 and europe-west4. + - "nam4" + - Dual region: us-central1 and us-east1. 
#### --gcs-storage-class @@ -629,20 +631,20 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Default - - "MULTI_REGIONAL" - - Multi-regional storage class - - "REGIONAL" - - Regional storage class - - "NEARLINE" - - Nearline storage class - - "COLDLINE" - - Coldline storage class - - "ARCHIVE" - - Archive storage class - - "DURABLE_REDUCED_AVAILABILITY" - - Durable reduced availability storage class + - "" + - Default + - "MULTI_REGIONAL" + - Multi-regional storage class + - "REGIONAL" + - Regional storage class + - "NEARLINE" + - Nearline storage class + - "COLDLINE" + - Coldline storage class + - "ARCHIVE" + - Archive storage class + - "DURABLE_REDUCED_AVAILABILITY" + - Durable reduced availability storage class #### --gcs-env-auth @@ -657,10 +659,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter credentials in the next step. - - "true" - - Get GCP IAM credentials from the environment (env vars or IAM). + - "false" + - Enter credentials in the next step. + - "true" + - Get GCP IAM credentials from the environment (env vars or IAM). ### Advanced options diff --git a/docs/content/hasher.md b/docs/content/hasher.md index d1448fa41..ed5c933e0 100644 --- a/docs/content/hasher.md +++ b/docs/content/hasher.md @@ -249,9 +249,11 @@ See the [metadata](/docs/#metadata) docs for more info. Here are the commands specific to the hasher backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -263,52 +265,71 @@ These can be run on a running backend using the rc command ### drop -Drop cache +Drop cache. - rclone backend drop remote: [options] [+] +```console +rclone backend drop remote: [options] [+] +``` Completely drop checksum cache. -Usage Example: - rclone backend drop hasher: +Usage example: + +```console +rclone backend drop hasher: +``` ### dump -Dump the database +Dump the database. - rclone backend dump remote: [options] [+] +```console +rclone backend dump remote: [options] [+] +``` -Dump cache records covered by the current remote +Dump cache records covered by the current remote. ### fulldump -Full dump of the database +Full dump of the database. - rclone backend fulldump remote: [options] [+] +```console +rclone backend fulldump remote: [options] [+] +``` -Dump all cache records in the database +Dump all cache records in the database. ### import -Import a SUM file +Import a SUM file. - rclone backend import remote: [options] [+] +```console +rclone backend import remote: [options] [+] +``` Amend hash cache from a SUM file and bind checksums to files by size/time. -Usage Example: - rclone backend import hasher:subdir md5 /path/to/sum.md5 +Usage example: + +```console +rclone backend import hasher:subdir md5 /path/to/sum.md5 +``` ### stickyimport -Perform fast import of a SUM file +Perform fast import of a SUM file. - rclone backend stickyimport remote: [options] [+] +```console +rclone backend stickyimport remote: [options] [+] +``` Fill hash cache from a SUM file without verifying file fingerprints. 
-Usage Example: - rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5 +Usage example: + +```console +rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5 +``` diff --git a/docs/content/hdfs.md b/docs/content/hdfs.md index 1ef43c2bd..a05ff1da9 100644 --- a/docs/content/hdfs.md +++ b/docs/content/hdfs.md @@ -188,8 +188,8 @@ Properties: - Type: string - Required: false - Examples: - - "root" - - Connect to hdfs as root. + - "root" + - Connect to hdfs as root. ### Advanced options @@ -226,8 +226,8 @@ Properties: - Type: string - Required: false - Examples: - - "privacy" - - Ensure authentication, integrity and encryption enabled. + - "privacy" + - Ensure authentication, integrity and encryption enabled. #### --hdfs-encoding diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md index 02c4ea207..5355a26be 100644 --- a/docs/content/hidrive.md +++ b/docs/content/hidrive.md @@ -250,10 +250,10 @@ Properties: - Type: string - Default: "rw" - Examples: - - "rw" - - Read and write access to resources. - - "ro" - - Read-only access to resources. + - "rw" + - Read and write access to resources. + - "ro" + - Read-only access to resources. ### Advanced options @@ -322,13 +322,13 @@ Properties: - Type: string - Default: "user" - Examples: - - "user" - - User-level access to management permissions. - - This will be sufficient in most cases. - - "admin" - - Extensive access to management permissions. - - "owner" - - Full access to management permissions. + - "user" + - User-level access to management permissions. + - This will be sufficient in most cases. + - "admin" + - Extensive access to management permissions. + - "owner" + - Full access to management permissions. #### --hidrive-root-prefix @@ -344,14 +344,14 @@ Properties: - Type: string - Default: "/" - Examples: - - "/" - - The topmost directory accessible by rclone. - - This will be equivalent with "root" if rclone uses a regular HiDrive user account. - - "root" - - The topmost directory of the HiDrive user account - - "" - - This specifies that there is no root-prefix for your paths. - - When using this you will always need to specify paths to this remote with a valid parent e.g. "remote:/path/to/dir" or "remote:root/path/to/dir". + - "/" + - The topmost directory accessible by rclone. + - This will be equivalent with "root" if rclone uses a regular HiDrive user account. + - "root" + - The topmost directory of the HiDrive user account + - "" + - This specifies that there is no root-prefix for your paths. + - When using this you will always need to specify paths to this remote with a valid parent e.g. "remote:/path/to/dir" or "remote:root/path/to/dir". #### --hidrive-endpoint diff --git a/docs/content/http.md b/docs/content/http.md index f80f04fad..0c71a2b4a 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -247,13 +247,32 @@ Properties: - Type: string - Required: false +### Metadata + +HTTP metadata keys are case insensitive and are always returned in lower case. + +Here are the possible system metadata items for the http backend. 
+ +| Name | Help | Type | Example | Read Only | +|------|------|------|---------|-----------| +| cache-control | Cache-Control header | string | no-cache | N | +| content-disposition | Content-Disposition header | string | inline | N | +| content-disposition-filename | Filename retrieved from Content-Disposition header | string | file.txt | N | +| content-encoding | Content-Encoding header | string | gzip | N | +| content-language | Content-Language header | string | en-US | N | +| content-type | Content-Type header | string | text/plain | N | + +See the [metadata](/docs/#metadata) docs for more info. + ## Backend commands Here are the commands specific to the http backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -267,16 +286,20 @@ These can be run on a running backend using the rc command Set command for updating the config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` This set command can be used to update the config parameters for a running http backend. -Usage Examples: +Usage examples: - rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=remote: -o url=https://example.com +```console +rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=remote: -o url=https://example.com +``` The option keys are named as they are in the config file. @@ -286,7 +309,6 @@ will default to those currently in use. It doesn't return anything. - ## Limitations diff --git a/docs/content/koofr.md b/docs/content/koofr.md index cb506188b..7f973e85e 100644 --- a/docs/content/koofr.md +++ b/docs/content/koofr.md @@ -135,12 +135,12 @@ Properties: - Type: string - Required: false - Examples: - - "koofr" - - Koofr, https://app.koofr.net/ - - "digistorage" - - Digi Storage, https://storage.rcs-rds.ro/ - - "other" - - Any other Koofr API compatible storage service + - "koofr" + - Koofr, https://app.koofr.net/ + - "digistorage" + - Digi Storage, https://storage.rcs-rds.ro/ + - "other" + - Any other Koofr API compatible storage service #### --koofr-endpoint diff --git a/docs/content/local.md b/docs/content/local.md index 9a35f3e34..538573cfa 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -356,8 +356,8 @@ Properties: - Type: bool - Default: false - Examples: - - "true" - - Disables long file names. + - "true" + - Disables long file names. #### --copy-links / -L @@ -395,6 +395,21 @@ Properties: - Type: bool - Default: false +#### --skip-specials + +Don't warn about skipped pipes, sockets and device objects. + +This flag disables warning messages on skipped pipes, sockets and +device objects, as you explicitly acknowledge that they should be +skipped. + +Properties: + +- Config: skip_specials +- Env Var: RCLONE_LOCAL_SKIP_SPECIALS +- Type: bool +- Default: false + #### --local-zero-size-links Assume the Stat size of links is zero (and read them instead) (deprecated). @@ -626,14 +641,14 @@ Properties: - Type: mtime|atime|btime|ctime - Default: mtime - Examples: - - "mtime" - - The last modification time. - - "atime" - - The last access time. 
- - "btime" - - The creation time. - - "ctime" - - The last status change time. + - "mtime" + - The last modification time. + - "atime" + - The last access time. + - "btime" + - The creation time. + - "ctime" + - The last status change time. #### --local-hashes @@ -701,9 +716,11 @@ See the [metadata](/docs/#metadata) docs for more info. Here are the commands specific to the local backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -715,16 +732,17 @@ These can be run on a running backend using the rc command ### noop -A null operation for testing backend commands +A null operation for testing backend commands. - rclone backend noop remote: [options] [+] +```console +rclone backend noop remote: [options] [+] +``` -This is a test command which has some options -you can try to change the output. +This is a test command which has some options you can try to change the output. Options: -- "echo": echo the input arguments -- "error": return an error based on option value +- "echo": Echo the input arguments. +- "error": Return an error based on option value. diff --git a/docs/content/mailru.md b/docs/content/mailru.md index 6af9c4b4d..a232bd7e1 100644 --- a/docs/content/mailru.md +++ b/docs/content/mailru.md @@ -266,10 +266,10 @@ Properties: - Type: bool - Default: true - Examples: - - "true" - - Enable - - "false" - - Disable + - "true" + - Enable + - "false" + - Disable ### Advanced options @@ -340,14 +340,14 @@ Properties: - Type: string - Default: "*.mkv,*.avi,*.mp4,*.mp3,*.zip,*.gz,*.rar,*.pdf" - Examples: - - "" - - Empty list completely disables speedup (put by hash). - - "*" - - All files will be attempted for speedup. - - "*.mkv,*.avi,*.mp4,*.mp3" - - Only common audio/video files will be tried for put by hash. - - "*.zip,*.gz,*.rar,*.pdf" - - Only common archives or PDF books will be tried for speedup. + - "" + - Empty list completely disables speedup (put by hash). + - "*" + - All files will be attempted for speedup. + - "*.mkv,*.avi,*.mp4,*.mp3" + - Only common audio/video files will be tried for put by hash. + - "*.zip,*.gz,*.rar,*.pdf" + - Only common archives or PDF books will be tried for speedup. #### --mailru-speedup-max-disk @@ -362,12 +362,12 @@ Properties: - Type: SizeSuffix - Default: 3Gi - Examples: - - "0" - - Completely disable speedup (put by hash). - - "1G" - - Files larger than 1Gb will be uploaded directly. - - "3G" - - Choose this option if you have less than 3Gb free on local disk. + - "0" + - Completely disable speedup (put by hash). + - "1G" + - Files larger than 1Gb will be uploaded directly. + - "3G" + - Choose this option if you have less than 3Gb free on local disk. #### --mailru-speedup-max-memory @@ -380,12 +380,12 @@ Properties: - Type: SizeSuffix - Default: 32Mi - Examples: - - "0" - - Preliminary hashing will always be done in a temporary disk location. - - "32M" - - Do not dedicate more than 32Mb RAM for preliminary hashing. - - "256M" - - You have at most 256Mb RAM free for hash calculations. + - "0" + - Preliminary hashing will always be done in a temporary disk location. + - "32M" + - Do not dedicate more than 32Mb RAM for preliminary hashing. + - "256M" + - You have at most 256Mb RAM free for hash calculations. #### --mailru-check-hash @@ -398,10 +398,10 @@ Properties: - Type: bool - Default: true - Examples: - - "true" - - Fail with error. - - "false" - - Ignore and continue. + - "true" + - Fail with error. 
+ - "false" + - Ignore and continue. #### --mailru-user-agent diff --git a/docs/content/mega.md b/docs/content/mega.md index b18902569..7f31bdd9d 100644 --- a/docs/content/mega.md +++ b/docs/content/mega.md @@ -232,10 +232,43 @@ Properties: - Type: string - Required: true +#### --mega-2fa + +The 2FA code of your MEGA account if the account is set up with one + +Properties: + +- Config: 2fa +- Env Var: RCLONE_MEGA_2FA +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to mega (Mega). +#### --mega-session-id + +Session (internal use only) + +Properties: + +- Config: session_id +- Env Var: RCLONE_MEGA_SESSION_ID +- Type: string +- Required: false + +#### --mega-master-key + +Master key (internal use only) + +Properties: + +- Config: master_key +- Env Var: RCLONE_MEGA_MASTER_KEY +- Type: string +- Required: false + #### --mega-debug Output more debug from Mega. diff --git a/docs/content/netstorage.md b/docs/content/netstorage.md index 51a79e685..3f0e53f80 100644 --- a/docs/content/netstorage.md +++ b/docs/content/netstorage.md @@ -304,10 +304,10 @@ Properties: - Type: string - Default: "https" - Examples: - - "http" - - HTTP protocol - - "https" - - HTTPS protocol + - "http" + - HTTP protocol + - "https" + - HTTPS protocol #### --netstorage-description @@ -324,9 +324,11 @@ Properties: Here are the commands specific to the netstorage backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -338,9 +340,11 @@ These can be run on a running backend using the rc command ### du -Return disk usage information for a specified directory +Return disk usage information for a specified directory. - rclone backend du remote: [options] [+] +```console +rclone backend du remote: [options] [+] +``` The usage information returned, includes the targeted directory as well as all files stored in any sub-directories that may exist. @@ -349,11 +353,18 @@ files stored in any sub-directories that may exist. You can create a symbolic link in ObjectStore with the symlink action. - rclone backend symlink remote: [options] [+] +```console +rclone backend symlink remote: [options] [+] +``` The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable. -`rclone backend symlink ` + +Usage example: + +```console +rclone backend symlink +``` diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md index f4886bd63..5437a0080 100644 --- a/docs/content/onedrive.md +++ b/docs/content/onedrive.md @@ -368,14 +368,14 @@ Properties: - Type: string - Default: "global" - Examples: - - "global" - - Microsoft Cloud Global - - "us" - - Microsoft Cloud for US Government - - "de" - - Microsoft Cloud Germany (deprecated - try global region first). - - "cn" - - Azure and Office 365 operated by Vnet Group in China + - "global" + - Microsoft Cloud Global + - "us" + - Microsoft Cloud for US Government + - "de" + - Microsoft Cloud Germany (deprecated - try global region first). 
+ - "cn" + - Azure and Office 365 operated by Vnet Group in China #### --onedrive-tenant @@ -536,13 +536,13 @@ Properties: - Type: SpaceSepList - Default: Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access - Examples: - - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access" - - Read and write access to all resources - - "Files.Read Files.Read.All Sites.Read.All offline_access" - - Read only access to all resources - - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access" - - Read and write access to all resources, without the ability to browse SharePoint sites. - - Same as if disable_site_permission was set to true + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All Sites.Read.All offline_access" + - Read and write access to all resources + - "Files.Read Files.Read.All Sites.Read.All offline_access" + - Read only access to all resources + - "Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access" + - Read and write access to all resources, without the ability to browse SharePoint sites. + - Same as if disable_site_permission was set to true #### --onedrive-disable-site-permission @@ -660,13 +660,13 @@ Properties: - Type: string - Default: "anonymous" - Examples: - - "anonymous" - - Anyone with the link has access, without needing to sign in. - - This may include people outside of your organization. - - Anonymous link support may be disabled by an administrator. - - "organization" - - Anyone signed into your organization (tenant) can use the link to get access. - - Only available in OneDrive for Business and SharePoint. + - "anonymous" + - Anyone with the link has access, without needing to sign in. + - This may include people outside of your organization. + - Anonymous link support may be disabled by an administrator. + - "organization" + - Anyone signed into your organization (tenant) can use the link to get access. + - Only available in OneDrive for Business and SharePoint. #### --onedrive-link-type @@ -679,12 +679,12 @@ Properties: - Type: string - Default: "view" - Examples: - - "view" - - Creates a read-only link to the item. - - "edit" - - Creates a read-write link to the item. - - "embed" - - Creates an embeddable link to the item. + - "view" + - Creates a read-only link to the item. + - "edit" + - Creates a read-write link to the item. + - "embed" + - Creates an embeddable link to the item. #### --onedrive-link-password @@ -729,18 +729,18 @@ Properties: - Type: string - Default: "auto" - Examples: - - "auto" - - Rclone chooses the best hash - - "quickxor" - - QuickXor - - "sha1" - - SHA1 - - "sha256" - - SHA256 - - "crc32" - - CRC32 - - "none" - - None - don't use any hashes + - "auto" + - Rclone chooses the best hash + - "quickxor" + - QuickXor + - "sha1" + - SHA1 + - "sha256" + - SHA256 + - "crc32" + - CRC32 + - "none" + - None - don't use any hashes #### --onedrive-av-override @@ -818,16 +818,16 @@ Properties: - Type: Bits - Default: off - Examples: - - "off" - - Do not read or write the value - - "read" - - Read the value only - - "write" - - Write the value only - - "read,write" - - Read and Write the value. - - "failok" - - If writing fails log errors only, don't fail the transfer + - "off" + - Do not read or write the value + - "read" + - Read the value only + - "write" + - Write the value only + - "read,write" + - Read and Write the value. 
+ - "failok" + - If writing fails log errors only, don't fail the transfer #### --onedrive-encoding diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md index 5ffea7dd6..8c4fc0a09 100644 --- a/docs/content/opendrive.md +++ b/docs/content/opendrive.md @@ -181,12 +181,12 @@ Properties: - Type: string - Default: "private" - Examples: - - "private" - - The file or folder access can be granted in a way that will allow select users to view, read or write what is absolutely essential for them. - - "public" - - The file or folder can be downloaded by anyone from a web browser. The link can be shared in any way, - - "hidden" - - The file or folder can be accessed has the same restrictions as Public if the user knows the URL of the file or folder link in order to access the contents + - "private" + - The file or folder access can be granted in a way that will allow select users to view, read or write what is absolutely essential for them. + - "public" + - The file or folder can be downloaded by anyone from a web browser. The link can be shared in any way, + - "hidden" + - The file or folder can be accessed has the same restrictions as Public if the user knows the URL of the file or folder link in order to access the contents #### --opendrive-description diff --git a/docs/content/oracleobjectstorage/_index.md b/docs/content/oracleobjectstorage/_index.md index 71a7a1879..ca60e7f0d 100644 --- a/docs/content/oracleobjectstorage/_index.md +++ b/docs/content/oracleobjectstorage/_index.md @@ -348,23 +348,23 @@ Properties: - Type: string - Default: "env_auth" - Examples: - - "env_auth" - - automatically pickup the credentials from runtime(env), first one to provide auth wins - - "user_principal_auth" - - use an OCI user and an API key for authentication. - - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. - - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm - - "instance_principal_auth" - - use instance principals to authorize an instance to make API calls. - - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. - - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm - - "workload_identity_auth" - - use workload identity to grant OCI Container Engine for Kubernetes workloads policy-driven access to OCI resources using OCI Identity and Access Management (IAM). - - https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm - - "resource_principal_auth" - - use resource principals to make API calls - - "no_auth" - - no credentials needed, this is typically for reading public buckets + - "env_auth" + - automatically pickup the credentials from runtime(env), first one to provide auth wins + - "user_principal_auth" + - use an OCI user and an API key for authentication. + - you’ll need to put in a config file your tenancy OCID, user OCID, region, the path, fingerprint to an API key. + - https://docs.oracle.com/en-us/iaas/Content/API/Concepts/sdkconfig.htm + - "instance_principal_auth" + - use instance principals to authorize an instance to make API calls. + - each instance has its own identity, and authenticates using the certificates that are read from instance metadata. 
+ - https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/callingservicesfrominstances.htm + - "workload_identity_auth" + - use workload identity to grant OCI Container Engine for Kubernetes workloads policy-driven access to OCI resources using OCI Identity and Access Management (IAM). + - https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm + - "resource_principal_auth" + - use resource principals to make API calls + - "no_auth" + - no credentials needed, this is typically for reading public buckets #### --oos-namespace @@ -427,8 +427,8 @@ Properties: - Type: string - Default: "~/.oci/config" - Examples: - - "~/.oci/config" - - oci configuration file location + - "~/.oci/config" + - oci configuration file location #### --oos-config-profile @@ -442,8 +442,8 @@ Properties: - Type: string - Default: "Default" - Examples: - - "Default" - - Use the default profile + - "Default" + - Use the default profile ### Advanced options @@ -460,12 +460,12 @@ Properties: - Type: string - Default: "Standard" - Examples: - - "Standard" - - Standard storage tier, this is the default tier - - "InfrequentAccess" - - InfrequentAccess storage tier - - "Archive" - - Archive storage tier + - "Standard" + - Standard storage tier, this is the default tier + - "InfrequentAccess" + - InfrequentAccess storage tier + - "Archive" + - Archive storage tier #### --oos-upload-cutoff @@ -677,8 +677,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-customer-key @@ -694,8 +694,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-customer-key-sha256 @@ -710,8 +710,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-kms-key-id @@ -727,8 +727,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --oos-sse-customer-algorithm @@ -743,10 +743,10 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "AES256" - - AES256 + - "" + - None + - "AES256" + - AES256 #### --oos-description @@ -780,9 +780,11 @@ See the [metadata](/docs/#metadata) docs for more info. Here are the commands specific to the oracleobjectstorage backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -794,26 +796,35 @@ These can be run on a running backend using the rc command ### rename -change the name of an object +change the name of an object. - rclone backend rename remote: [options] [+] +```console +rclone backend rename remote: [options] [+] +``` This command can be used to rename a object. -Usage Examples: - - rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name +Usage example: +```console +rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name +``` ### list-multipart-uploads -List the unfinished multipart uploads +List the unfinished multipart uploads. - rclone backend list-multipart-uploads remote: [options] [+] +```console +rclone backend list-multipart-uploads remote: [options] [+] +``` This command lists the unfinished multipart uploads in JSON format. 
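+This can also be run against a running backend over the remote control,
+reusing the backend/command pattern shown for other backends in these
+docs (a sketch):
+
+```console
+rclone rc backend/command command=list-multipart-uploads fs=oos:bucket/path/to/object
+```
+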
-    rclone backend list-multipart-uploads oos:bucket/path/to/object
+Usage example:
+
+```console
+rclone backend list-multipart-uploads oos:bucket/path/to/object
+```

It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.

@@ -821,81 +832,97 @@ multipart uploads.
You can call it with no bucket in which case it lists all buckets, with
a bucket or with a bucket and path.

-    {
-      "test-bucket": [
-        {
-          "namespace": "test-namespace",
-          "bucket": "test-bucket",
-          "object": "600m.bin",
-          "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
-          "timeCreated": "2022-07-29T06:21:16.595Z",
-          "storageTier": "Standard"
-        }
-      ]
-
+```json
+{
+  "test-bucket": [
+    {
+      "namespace": "test-namespace",
+      "bucket": "test-bucket",
+      "object": "600m.bin",
+      "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
+      "timeCreated": "2022-07-29T06:21:16.595Z",
+      "storageTier": "Standard"
+    }
+  ]
+}
+```

### cleanup

Remove unfinished multipart uploads.

-    rclone backend cleanup remote: [options] [<arguments>+]
+```console
+rclone backend cleanup remote: [options] [<arguments>+]
+```

This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.

-Note that you can use --interactive/-i or --dry-run with this command to see what
-it would do.
+Note that you can use --interactive/-i or --dry-run with this command to see
+what it would do.

-    rclone backend cleanup oos:bucket/path/to/object
-    rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+Usage examples:
+
+```console
+rclone backend cleanup oos:bucket/path/to/object
+rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
+```

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

-
Options:

-- "max-age": Max age of upload to delete
+- "max-age": Max age of upload to delete.

### restore

-Restore objects from Archive to Standard storage
+Restore objects from Archive to Standard storage.

-    rclone backend restore remote: [options] [<arguments>+]
+```console
+rclone backend restore remote: [options] [<arguments>+]
+```

-This command can be used to restore one or more objects from Archive to Standard storage.
+This command can be used to restore one or more objects from Archive to
+Standard storage.

-    Usage Examples:
+Usage examples:

-    rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
-    rclone backend restore oos:bucket -o hours=HOURS
+```console
+rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
+rclone backend restore oos:bucket -o hours=HOURS
+```

This flag also obeys the filters. Test first with --interactive/-i or
--dry-run flags.

-    rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
+```console
+rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
+```

-All the objects shown will be marked for restore, then
+All the objects shown will be marked for restore, then:

-    rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
+```console
+rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
+```

-    It returns a list of status dictionaries with Object Name and Status
-    keys. The Status will be "RESTORED"" if it was successful or an error message
-    if not.
-
-    [
-    {
-        "Object": "test.txt"
-        "Status": "RESTORED",
-    },
-    {
-        "Object": "test/file4.txt"
-        "Status": "RESTORED",
-    }
-    ]
+It returns a list of status dictionaries with Object Name and Status keys.
+The Status will be "RESTORED" if it was successful or an error message if not.
+```json
+[
+  {
+    "Object": "test.txt",
+    "Status": "RESTORED"
+  },
+  {
+    "Object": "test/file4.txt",
+    "Status": "RESTORED"
+  }
+]
+```

Options:

-- "hours": The number of hours for which this object will be restored. Default is 24 hrs.
+- "hours": The number of hours for which this object will be restored.
+  Default is 24 hrs.

diff --git a/docs/content/pcloud.md b/docs/content/pcloud.md
index 7902fda8a..8f8167e34 100644
--- a/docs/content/pcloud.md
+++ b/docs/content/pcloud.md
@@ -300,10 +300,10 @@ Properties:

- Type: string
- Default: "api.pcloud.com"
- Examples:
-    - "api.pcloud.com"
-        - Original/US region
-    - "eapi.pcloud.com"
-        - EU region
+  - "api.pcloud.com"
+    - Original/US region
+  - "eapi.pcloud.com"
+    - EU region

#### --pcloud-username

diff --git a/docs/content/pikpak.md b/docs/content/pikpak.md
index b21b0dc02..65a6337f3 100644
--- a/docs/content/pikpak.md
+++ b/docs/content/pikpak.md
@@ -293,9 +293,11 @@ Properties:

Here are the commands specific to the pikpak backend.

-Run them with
+Run them with:

-    rclone backend COMMAND remote:
+```console
+rclone backend COMMAND remote:
+```

The help below will explain what arguments each command takes.

@@ -307,46 +309,54 @@ These can be run on a running backend using the rc command

### addurl

-Add offline download task for url
+Add an offline download task for a url.

-    rclone backend addurl remote: [options] [<arguments>+]
+```console
+rclone backend addurl remote: [options] [<arguments>+]
+```

This command adds an offline download task for a url.

-Usage:
+Usage example:

-    rclone backend addurl pikpak:dirpath url
+```console
+rclone backend addurl pikpak:dirpath url
+```

-Downloads will be stored in 'dirpath'. If 'dirpath' is invalid,
+Downloads will be stored in 'dirpath'. If 'dirpath' is invalid, the
download will fall back to the default 'My Pack' folder.

-

### decompress

-Request decompress of a file/files in a folder
+Request decompression of a file/files in a folder.

-    rclone backend decompress remote: [options] [<arguments>+]
+```console
+rclone backend decompress remote: [options] [<arguments>+]
+```

This command requests decompression of a file/files in a folder.

-Usage:
+Usage examples:

-    rclone backend decompress pikpak:dirpath {filename} -o password=password
-    rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
+```console
+rclone backend decompress pikpak:dirpath {filename} -o password=password
+rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
+```

-An optional argument 'filename' can be specified for a file located in
-'pikpak:dirpath'. You may want to pass '-o password=password' for a
-password-protected files. Also, pass '-o delete-src-file' to delete
+An optional argument 'filename' can be specified for a file located in
+'pikpak:dirpath'. You may want to pass '-o password=password' for
+password-protected files. Also, pass '-o delete-src-file' to delete
source files after decompression has finished.
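+The two options can be combined in a single invocation (a sketch; the
+archive name and password are hypothetical):
+
+```console
+rclone backend decompress pikpak:dirpath archive.zip -o password=secret -o delete-src-file
+```
+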
Result: - { - "Decompressed": 17, - "SourceDeleted": 0, - "Errors": 0 - } - +```json +{ + "Decompressed": 17, + "SourceDeleted": 0, + "Errors": 0 +} +``` diff --git a/docs/content/protondrive.md b/docs/content/protondrive.md index de66b9c03..107f960fd 100644 --- a/docs/content/protondrive.md +++ b/docs/content/protondrive.md @@ -179,6 +179,24 @@ Properties: - Type: string - Required: false +#### --protondrive-otp-secret-key + +The OTP secret key + +The value can also be provided with --protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 + +The OTP secret key of your proton drive account if the account is set up with +two-factor authentication + +**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/). + +Properties: + +- Config: otp_secret_key +- Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY +- Type: string +- Required: false + ### Advanced options Here are the Advanced options specific to protondrive (Proton Drive). diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md index ad798f83e..074ccc05f 100644 --- a/docs/content/qingstor.md +++ b/docs/content/qingstor.md @@ -171,10 +171,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter QingStor credentials in the next step. - - "true" - - Get QingStor credentials from the environment (env vars or IAM). + - "false" + - Enter QingStor credentials in the next step. + - "true" + - Get QingStor credentials from the environment (env vars or IAM). #### --qingstor-access-key-id @@ -228,15 +228,15 @@ Properties: - Type: string - Required: false - Examples: - - "pek3a" - - The Beijing (China) Three Zone. - - Needs location constraint pek3a. - - "sh1a" - - The Shanghai (China) First Zone. - - Needs location constraint sh1a. - - "gd2a" - - The Guangdong (China) Second Zone. - - Needs location constraint gd2a. + - "pek3a" + - The Beijing (China) Three Zone. + - Needs location constraint pek3a. + - "sh1a" + - The Shanghai (China) First Zone. + - Needs location constraint sh1a. + - "gd2a" + - The Guangdong (China) Second Zone. + - Needs location constraint gd2a. ### Advanced options diff --git a/docs/content/rc.md b/docs/content/rc.md index 8a4953ebf..acf4fbb77 100644 --- a/docs/content/rc.md +++ b/docs/content/rc.md @@ -810,7 +810,7 @@ Unlocks the config file if it is locked. Parameters: -- 'config_password' - password to unlock the config file +- 'configPassword' - password to unlock the config file A good idea is to disable AskPassword before making this call @@ -1108,17 +1108,20 @@ Returns the following values: } ``` -### core/version: Shows the current version of rclone and the go runtime. {#core-version} +### core/version: Shows the current version of rclone, Go and the OS. {#core-version} -This shows the current version of go and the go runtime: +This shows the current versions of rclone, Go and the OS: -- version - rclone version, e.g. "v1.53.0" +- version - rclone version, e.g. "v1.71.2" - decomposed - version number as [major, minor, patch] - isGit - boolean - true if this was compiled from the git version - isBeta - boolean - true if this is a beta version -- os - OS in use as according to Go -- arch - cpu architecture in use according to Go -- goVersion - version of Go runtime in use +- os - OS in use as according to Go GOOS (e.g. "linux") +- osKernel - OS Kernel version (e.g. "6.8.0-86-generic (x86_64)") +- osVersion - OS Version (e.g. "ubuntu 24.04 (64 bit)") +- osArch - cpu architecture in use (e.g. 
"arm64 (ARMv8 compatible)") +- arch - cpu architecture in use according to Go GOARCH (e.g. "arm64") +- goVersion - version of Go runtime in use (e.g. "go1.25.0") - linking - type of rclone executable (static or dynamic) - goTags - space separated build tags or "none" @@ -1228,6 +1231,67 @@ Returns **Authentication is required for this call.** +### job/batch: Run a batch of rclone rc commands concurrently. {#job-batch} + +This takes the following parameters: + +- concurrency - int - do this many commands concurrently. Defaults to `--transfers` if not set. +- inputs - an list of inputs to the commands with an extra `_path` parameter + +```json +{ + "_path": "rc/path", + "param1": "parameter for the path as documented", + "param2": "parameter for the path as documented, etc", +} +``` + +The inputs may use `_async`, `_group`, `_config` and `_filter` as normal when using the rc. + +Returns: + +- results - a list of results from the commands with one entry for each in inputs. + +For example: + +```sh +rclone rc job/batch --json '{ + "inputs": [ + { + "_path": "rc/noop", + "parameter": "OK" + }, + { + "_path": "rc/error", + "parameter": "BAD" + } + ] +} +' +``` + +Gives the result: + +```json +{ + "results": [ + { + "parameter": "OK" + }, + { + "error": "arbitrary error on input map[parameter:BAD]", + "input": { + "parameter": "BAD" + }, + "path": "rc/error", + "status": 500 + } + ] +} +``` + +**Authentication is required for this call.** + ### job/list: Lists the IDs of the running jobs {#job-list} Parameters: None. @@ -1236,6 +1300,8 @@ Results: - executeId - string id of rclone executing (change after restart) - jobids - array of integer job ids (starting at 1 on each restart) +- runningIds - array of integer job ids that are running +- finishedIds - array of integer job ids that are finished ### job/status: Reads the status of the job ID {#job-status} @@ -1251,6 +1317,7 @@ Results: - error - error from the job or empty string for no error - finished - boolean whether the job has finished or not - id - as passed in above +- executeId - rclone instance ID (changes after restart); combined with id uniquely identifies a job - startTime - time the job started (e.g. "2018-10-26T18:50:20.528336039+01:00") - success - boolean - true for success false otherwise - output - output of the job as would have been returned if called synchronously @@ -1299,14 +1366,18 @@ This takes the following parameters: Example: - rclone rc mount/mount fs=mydrive: mountPoint=/home//mountPoint - rclone rc mount/mount fs=mydrive: mountPoint=/home//mountPoint mountType=mount - rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}' +```console +rclone rc mount/mount fs=mydrive: mountPoint=/home//mountPoint +rclone rc mount/mount fs=mydrive: mountPoint=/home//mountPoint mountType=mount +rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}' +``` The vfsOpt are as described in options/get and can be seen in the the "vfs" section when running and the mountOpt can be seen in the "mount" section: - rclone rc options/get +```console +rclone rc options/get +``` **Authentication is required for this call.** @@ -1749,8 +1820,6 @@ This takes the following parameters: - fs - a remote name string e.g. "drive:" - remote - a path within that remote e.g. "dir" -See the [settierfile](/commands/rclone_settierfile/) command for more information on the above. 
- **Authentication is required for this call.** ### operations/size: Count the number of bytes and files in remote {#operations-size} @@ -1796,8 +1865,6 @@ This takes the following parameters: - remote - a path within that remote e.g. "dir" - each part in body represents a file to be uploaded -See the [uploadfile](/commands/rclone_uploadfile/) command for more information on the above. - **Authentication is required for this call.** ### options/blocks: List all the option blocks {#options-blocks} @@ -1975,6 +2042,11 @@ Example: This returns an error with the input as part of its error string. Useful for testing error handling. +### rc/fatal: This returns an fatal error {#rc-fatal} + +This returns an error with the input as part of its error string. +Useful for testing error handling. + ### rc/list: List all the registered remote control commands {#rc-list} This lists all the registered remote control commands as a JSON map in @@ -1994,6 +2066,11 @@ check that parameter passing is working properly. **Authentication is required for this call.** +### rc/panic: This returns an error by panicking {#rc-panic} + +This returns an error with the input as part of its error string. +Useful for testing error handling. + ### serve/list: Show running servers {#serve-list} Show running servers with IDs. @@ -2265,7 +2342,7 @@ This is only useful if `--vfs-cache-mode` > off. If you call it when the `--vfs-cache-mode` is off, it will return an empty result. { - "queued": // an array of files queued for upload + "queue": // an array of files queued for upload [ { "name": "file", // string: name (full path) of the file, @@ -2319,6 +2396,7 @@ This takes the following parameters This returns an empty result on success, or an error. + This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter diff --git a/docs/content/s3.md b/docs/content/s3.md index 75b38d34e..f16779bcc 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -843,7 +843,7 @@ all the files to be uploaded as multipart. ### Standard options -Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu, Zata and others). +Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other). 
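+Any of the options below can also be supplied on the fly in a
+connection string instead of a saved remote (a sketch; the provider,
+endpoint and keys are hypothetical):
+
+```console
+rclone lsd ':s3,provider=Minio,endpoint=minio.example.net,access_key_id=AK,secret_access_key=SK:bucket'
+```
+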
#### --s3-provider @@ -856,84 +856,98 @@ Properties: - Type: string - Required: false - Examples: - - "AWS" - - Amazon Web Services (AWS) S3 - - "Alibaba" - - Alibaba Cloud Object Storage System (OSS) formerly Aliyun - - "ArvanCloud" - - Arvan Cloud Object Storage (AOS) - - "Ceph" - - Ceph Object Storage - - "ChinaMobile" - - China Mobile Ecloud Elastic Object Storage (EOS) - - "Cloudflare" - - Cloudflare R2 Storage - - "DigitalOcean" - - DigitalOcean Spaces - - "Dreamhost" - - Dreamhost DreamObjects - - "Exaba" - - Exaba Object Storage - - "FlashBlade" - - Pure Storage FlashBlade Object Storage - - "GCS" - - Google Cloud Storage - - "HuaweiOBS" - - Huawei Object Storage Service - - "IBMCOS" - - IBM COS S3 - - "IDrive" - - IDrive e2 - - "IONOS" - - IONOS Cloud - - "LyveCloud" - - Seagate Lyve Cloud - - "Leviia" - - Leviia Object Storage - - "Liara" - - Liara Object Storage - - "Linode" - - Linode Object Storage - - "Magalu" - - Magalu Object Storage - - "Mega" - - MEGA S4 Object Storage - - "Minio" - - Minio Object Storage - - "Netease" - - Netease Object Storage (NOS) - - "Outscale" - - OUTSCALE Object Storage (OOS) - - "OVHcloud" - - OVHcloud Object Storage - - "Petabox" - - Petabox Object Storage - - "RackCorp" - - RackCorp Object Storage - - "Rclone" - - Rclone S3 Server - - "Scaleway" - - Scaleway Object Storage - - "SeaweedFS" - - SeaweedFS S3 - - "Selectel" - - Selectel Object Storage - - "StackPath" - - StackPath Object Storage - - "Storj" - - Storj (S3 Compatible Gateway) - - "Synology" - - Synology C2 Object Storage - - "TencentCOS" - - Tencent Cloud Object Storage (COS) - - "Wasabi" - - Wasabi Object Storage - - "Qiniu" - - Qiniu Object Storage (Kodo) - - "Zata" - - Zata (S3 compatible Gateway) - - "Other" - - Any other S3 compatible provider + - "AWS" + - Amazon Web Services (AWS) S3 + - "Alibaba" + - Alibaba Cloud Object Storage System (OSS) formerly Aliyun + - "ArvanCloud" + - Arvan Cloud Object Storage (AOS) + - "Ceph" + - Ceph Object Storage + - "ChinaMobile" + - China Mobile Ecloud Elastic Object Storage (EOS) + - "Cloudflare" + - Cloudflare R2 Storage + - "Cubbit" + - Cubbit DS3 Object Storage + - "DigitalOcean" + - DigitalOcean Spaces + - "Dreamhost" + - Dreamhost DreamObjects + - "Exaba" + - Exaba Object Storage + - "FileLu" + - FileLu S5 (S3-Compatible Object Storage) + - "FlashBlade" + - Pure Storage FlashBlade Object Storage + - "GCS" + - Google Cloud Storage + - "Hetzner" + - Hetzner Object Storage + - "HuaweiOBS" + - Huawei Object Storage Service + - "IBMCOS" + - IBM COS S3 + - "IDrive" + - IDrive e2 + - "Intercolo" + - Intercolo Object Storage + - "IONOS" + - IONOS Cloud + - "Leviia" + - Leviia Object Storage + - "Liara" + - Liara Object Storage + - "Linode" + - Linode Object Storage + - "LyveCloud" + - Seagate Lyve Cloud + - "Magalu" + - Magalu Object Storage + - "Mega" + - MEGA S4 Object Storage + - "Minio" + - Minio Object Storage + - "Netease" + - Netease Object Storage (NOS) + - "Outscale" + - OUTSCALE Object Storage (OOS) + - "OVHcloud" + - OVHcloud Object Storage + - "Petabox" + - Petabox Object Storage + - "Qiniu" + - Qiniu Object Storage (Kodo) + - "Rabata" + - Rabata Cloud Storage + - "RackCorp" + - RackCorp Object Storage + - "Rclone" + - Rclone S3 Server + - "Scaleway" + - Scaleway Object Storage + - "SeaweedFS" + - SeaweedFS S3 + - "Selectel" + - Selectel Object Storage + - "Servercore" + - Servercore Object Storage + - "SpectraLogic" + - Spectra Logic Black Pearl + - "StackPath" + - StackPath Object Storage + - "Storj" + - Storj (S3 Compatible 
Gateway) + - "Synology" + - Synology C2 Object Storage + - "TencentCOS" + - Tencent Cloud Object Storage (COS) + - "Wasabi" + - Wasabi Object Storage + - "Zata" + - Zata (S3 compatible Gateway) + - "Other" + - Any other S3 compatible provider #### --s3-env-auth @@ -948,10 +962,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter AWS credentials in the next step. - - "true" - - Get AWS credentials from the environment (env vars or IAM). + - "false" + - Enter AWS credentials in the next step. + - "true" + - Get AWS credentials from the environment (env vars or IAM). #### --s3-access-key-id @@ -983,174 +997,1701 @@ Properties: Region to connect to. +Leave blank if you are using an S3 clone and you don't have a region. + Properties: - Config: region - Env Var: RCLONE_S3_REGION -- Provider: AWS +- Provider: AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "us-east-1" - - The default endpoint - a good choice if you are unsure. - - US Region, Northern Virginia, or Pacific Northwest. - - Leave location constraint empty. - - "us-east-2" - - US East (Ohio) Region. - - Needs location constraint us-east-2. - - "us-west-1" - - US West (Northern California) Region. - - Needs location constraint us-west-1. - - "us-west-2" - - US West (Oregon) Region. - - Needs location constraint us-west-2. - - "ca-central-1" - - Canada (Central) Region. - - Needs location constraint ca-central-1. - - "eu-west-1" - - EU (Ireland) Region. - - Needs location constraint EU or eu-west-1. - - "eu-west-2" - - EU (London) Region. - - Needs location constraint eu-west-2. - - "eu-west-3" - - EU (Paris) Region. - - Needs location constraint eu-west-3. - - "eu-north-1" - - EU (Stockholm) Region. - - Needs location constraint eu-north-1. - - "eu-south-1" - - EU (Milan) Region. - - Needs location constraint eu-south-1. - - "eu-central-1" - - EU (Frankfurt) Region. - - Needs location constraint eu-central-1. - - "ap-southeast-1" - - Asia Pacific (Singapore) Region. - - Needs location constraint ap-southeast-1. - - "ap-southeast-2" - - Asia Pacific (Sydney) Region. - - Needs location constraint ap-southeast-2. - - "ap-northeast-1" - - Asia Pacific (Tokyo) Region. - - Needs location constraint ap-northeast-1. - - "ap-northeast-2" - - Asia Pacific (Seoul). - - Needs location constraint ap-northeast-2. - - "ap-northeast-3" - - Asia Pacific (Osaka-Local). - - Needs location constraint ap-northeast-3. - - "ap-south-1" - - Asia Pacific (Mumbai). - - Needs location constraint ap-south-1. - - "ap-east-1" - - Asia Pacific (Hong Kong) Region. - - Needs location constraint ap-east-1. - - "sa-east-1" - - South America (Sao Paulo) Region. - - Needs location constraint sa-east-1. - - "il-central-1" - - Israel (Tel Aviv) Region. - - Needs location constraint il-central-1. - - "me-south-1" - - Middle East (Bahrain) Region. - - Needs location constraint me-south-1. - - "af-south-1" - - Africa (Cape Town) Region. - - Needs location constraint af-south-1. - - "cn-north-1" - - China (Beijing) Region. - - Needs location constraint cn-north-1. - - "cn-northwest-1" - - China (Ningxia) Region. - - Needs location constraint cn-northwest-1. - - "us-gov-east-1" - - AWS GovCloud (US-East) Region. - - Needs location constraint us-gov-east-1. 
- - "us-gov-west-1" - - AWS GovCloud (US) Region. - - Needs location constraint us-gov-west-1. + - "us-east-1" + - The default endpoint - a good choice if you are unsure. + - US Region, Northern Virginia, or Pacific Northwest. + - Leave location constraint empty. + - Provider: AWS + - "us-east-2" + - US East (Ohio) Region. + - Needs location constraint us-east-2. + - Provider: AWS + - "us-west-1" + - US West (Northern California) Region. + - Needs location constraint us-west-1. + - Provider: AWS + - "us-west-2" + - US West (Oregon) Region. + - Needs location constraint us-west-2. + - Provider: AWS + - "ca-central-1" + - Canada (Central) Region. + - Needs location constraint ca-central-1. + - Provider: AWS + - "eu-west-1" + - EU (Ireland) Region. + - Needs location constraint EU or eu-west-1. + - Provider: AWS + - "eu-west-2" + - EU (London) Region. + - Needs location constraint eu-west-2. + - Provider: AWS + - "eu-west-3" + - EU (Paris) Region. + - Needs location constraint eu-west-3. + - Provider: AWS + - "eu-north-1" + - EU (Stockholm) Region. + - Needs location constraint eu-north-1. + - Provider: AWS + - "eu-south-1" + - EU (Milan) Region. + - Needs location constraint eu-south-1. + - Provider: AWS + - "eu-central-1" + - EU (Frankfurt) Region. + - Needs location constraint eu-central-1. + - Provider: AWS + - "ap-southeast-1" + - Asia Pacific (Singapore) Region. + - Needs location constraint ap-southeast-1. + - Provider: AWS + - "ap-southeast-2" + - Asia Pacific (Sydney) Region. + - Needs location constraint ap-southeast-2. + - Provider: AWS + - "ap-northeast-1" + - Asia Pacific (Tokyo) Region. + - Needs location constraint ap-northeast-1. + - Provider: AWS + - "ap-northeast-2" + - Asia Pacific (Seoul). + - Needs location constraint ap-northeast-2. + - Provider: AWS + - "ap-northeast-3" + - Asia Pacific (Osaka-Local). + - Needs location constraint ap-northeast-3. + - Provider: AWS + - "ap-south-1" + - Asia Pacific (Mumbai). + - Needs location constraint ap-south-1. + - Provider: AWS + - "ap-east-1" + - Asia Pacific (Hong Kong) Region. + - Needs location constraint ap-east-1. + - Provider: AWS + - "sa-east-1" + - South America (Sao Paulo) Region. + - Needs location constraint sa-east-1. + - Provider: AWS + - "il-central-1" + - Israel (Tel Aviv) Region. + - Needs location constraint il-central-1. + - Provider: AWS + - "me-south-1" + - Middle East (Bahrain) Region. + - Needs location constraint me-south-1. + - Provider: AWS + - "af-south-1" + - Africa (Cape Town) Region. + - Needs location constraint af-south-1. + - Provider: AWS + - "cn-north-1" + - China (Beijing) Region. + - Needs location constraint cn-north-1. + - Provider: AWS + - "cn-northwest-1" + - China (Ningxia) Region. + - Needs location constraint cn-northwest-1. + - Provider: AWS + - "us-gov-east-1" + - AWS GovCloud (US-East) Region. + - Needs location constraint us-gov-east-1. + - Provider: AWS + - "us-gov-west-1" + - AWS GovCloud (US) Region. + - Needs location constraint us-gov-west-1. + - Provider: AWS + - "" + - Use this if unsure. + - Will use v4 signatures and an empty region. + - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other + - "other-v2-signature" + - Use this only if v4 signatures don't work. + - E.g. pre Jewel/v10 CEPH. 
+ - Provider: Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other + - "auto" + - R2 buckets are automatically distributed across Cloudflare's data centers for low latency. + - Provider: Cloudflare + - "eu-west-1" + - Europe West + - Provider: Cubbit + - "global" + - Global + - Provider: FileLu + - "us-east" + - North America (US-East) + - Provider: FileLu + - "eu-central" + - Europe (EU-Central) + - Provider: FileLu + - "ap-southeast" + - Asia Pacific (AP-Southeast) + - Provider: FileLu + - "me-central" + - Middle East (ME-Central) + - Provider: FileLu + - "hel1" + - Helsinki + - Provider: Hetzner + - "fsn1" + - Falkenstein + - Provider: Hetzner + - "nbg1" + - Nuremberg + - Provider: Hetzner + - "af-south-1" + - AF-Johannesburg + - Provider: HuaweiOBS + - "ap-southeast-2" + - AP-Bangkok + - Provider: HuaweiOBS + - "ap-southeast-3" + - AP-Singapore + - Provider: HuaweiOBS + - "cn-east-3" + - CN East-Shanghai1 + - Provider: HuaweiOBS + - "cn-east-2" + - CN East-Shanghai2 + - Provider: HuaweiOBS + - "cn-north-1" + - CN North-Beijing1 + - Provider: HuaweiOBS + - "cn-north-4" + - CN North-Beijing4 + - Provider: HuaweiOBS + - "cn-south-1" + - CN South-Guangzhou + - Provider: HuaweiOBS + - "ap-southeast-1" + - CN-Hong Kong + - Provider: HuaweiOBS + - "sa-argentina-1" + - LA-Buenos Aires1 + - Provider: HuaweiOBS + - "sa-peru-1" + - LA-Lima1 + - Provider: HuaweiOBS + - "na-mexico-1" + - LA-Mexico City1 + - Provider: HuaweiOBS + - "sa-chile-1" + - LA-Santiago2 + - Provider: HuaweiOBS + - "sa-brazil-1" + - LA-Sao Paulo1 + - Provider: HuaweiOBS + - "ru-northwest-2" + - RU-Moscow2 + - Provider: HuaweiOBS + - "de-fra" + - Frankfurt, Germany + - Provider: Intercolo + - "de" + - Frankfurt, Germany + - Provider: IONOS,OVHcloud + - "eu-central-2" + - Berlin, Germany + - Provider: IONOS + - "eu-south-2" + - Logrono, Spain + - Provider: IONOS + - "eu-west-2" + - Paris, France + - Provider: Outscale + - "us-east-2" + - New Jersey, USA + - Provider: Outscale + - "us-west-1" + - California, USA + - Provider: Outscale + - "cloudgouv-eu-west-1" + - SecNumCloud, Paris, France + - Provider: Outscale + - "ap-northeast-1" + - Tokyo, Japan + - Provider: Outscale + - "gra" + - Gravelines, France + - Provider: OVHcloud + - "rbx" + - Roubaix, France + - Provider: OVHcloud + - "sbg" + - Strasbourg, France + - Provider: OVHcloud + - "eu-west-par" + - Paris, France (3AZ) + - Provider: OVHcloud + - "uk" + - London, United Kingdom + - Provider: OVHcloud + - "waw" + - Warsaw, Poland + - Provider: OVHcloud + - "bhs" + - Beauharnois, Canada + - Provider: OVHcloud + - "ca-east-tor" + - Toronto, Canada + - Provider: OVHcloud + - "sgp" + - Singapore + - Provider: OVHcloud + - "ap-southeast-syd" + - Sydney, Australia + - Provider: OVHcloud + - "ap-south-mum" + - Mumbai, India + - Provider: OVHcloud + - "us-east-va" + - Vint Hill, Virginia, USA + - Provider: OVHcloud + - "us-west-or" + - Hillsboro, Oregon, USA + - Provider: OVHcloud + - "rbx-archive" + - Roubaix, France (Cold Archive) + - Provider: OVHcloud + - "us-east-1" + - US East (N. Virginia) + - Provider: Petabox,Rabata + - "eu-central-1" + - Europe (Frankfurt) + - Provider: Petabox + - "ap-southeast-1" + - Asia Pacific (Singapore) + - Provider: Petabox + - "me-south-1" + - Middle East (Bahrain) + - Provider: Petabox + - "sa-east-1" + - South America (São Paulo) + - Provider: Petabox + - "cn-east-1" + - The default endpoint - a good choice if you are unsure. + - East China Region 1. 
+ - Needs location constraint cn-east-1. + - Provider: Qiniu + - "cn-east-2" + - East China Region 2. + - Needs location constraint cn-east-2. + - Provider: Qiniu + - "cn-north-1" + - North China Region 1. + - Needs location constraint cn-north-1. + - Provider: Qiniu + - "cn-south-1" + - South China Region 1. + - Needs location constraint cn-south-1. + - Provider: Qiniu + - "us-north-1" + - North America Region. + - Needs location constraint us-north-1. + - Provider: Qiniu + - "ap-southeast-1" + - Southeast Asia Region 1. + - Needs location constraint ap-southeast-1. + - Provider: Qiniu + - "ap-northeast-1" + - Northeast Asia Region 1. + - Needs location constraint ap-northeast-1. + - Provider: Qiniu + - "eu-west-1" + - EU (Ireland) + - Provider: Rabata + - "eu-west-2" + - EU (London) + - Provider: Rabata + - "global" + - Global CDN (All locations) Region + - Provider: RackCorp + - "au" + - Australia (All states) + - Provider: RackCorp + - "au-nsw" + - NSW (Australia) Region + - Provider: RackCorp + - "au-qld" + - QLD (Australia) Region + - Provider: RackCorp + - "au-vic" + - VIC (Australia) Region + - Provider: RackCorp + - "au-wa" + - Perth (Australia) Region + - Provider: RackCorp + - "ph" + - Manila (Philippines) Region + - Provider: RackCorp + - "th" + - Bangkok (Thailand) Region + - Provider: RackCorp + - "hk" + - HK (Hong Kong) Region + - Provider: RackCorp + - "mn" + - Ulaanbaatar (Mongolia) Region + - Provider: RackCorp + - "kg" + - Bishkek (Kyrgyzstan) Region + - Provider: RackCorp + - "id" + - Jakarta (Indonesia) Region + - Provider: RackCorp + - "jp" + - Tokyo (Japan) Region + - Provider: RackCorp + - "sg" + - SG (Singapore) Region + - Provider: RackCorp + - "de" + - Frankfurt (Germany) Region + - Provider: RackCorp + - "us" + - USA (AnyCast) Region + - Provider: RackCorp + - "us-east-1" + - New York (USA) Region + - Provider: RackCorp + - "us-west-1" + - Freemont (USA) Region + - Provider: RackCorp + - "nz" + - Auckland (New Zealand) Region + - Provider: RackCorp + - "nl-ams" + - Amsterdam, The Netherlands + - Provider: Scaleway + - "fr-par" + - Paris, France + - Provider: Scaleway + - "pl-waw" + - Warsaw, Poland + - Provider: Scaleway + - "ru-1" + - St. Petersburg + - Provider: Selectel,Servercore + - "gis-1" + - Moscow + - Provider: Servercore + - "ru-7" + - Moscow + - Provider: Servercore + - "uz-2" + - Tashkent, Uzbekistan + - Provider: Servercore + - "kz-1" + - Almaty, Kazakhstan + - Provider: Servercore + - "eu-001" + - Europe Region 1 + - Provider: Synology + - "eu-002" + - Europe Region 2 + - Provider: Synology + - "us-001" + - US Region 1 + - Provider: Synology + - "us-002" + - US Region 2 + - Provider: Synology + - "tw-001" + - Asia (Taiwan) + - Provider: Synology + - "us-east-1" + - Indore, Madhya Pradesh, India + - Provider: Zata #### --s3-endpoint Endpoint for S3 API. -Leave blank if using AWS to use the default endpoint for the region. +Required when using an S3 clone. 
Properties: - Config: endpoint - Env Var: RCLONE_S3_ENDPOINT -- Provider: AWS +- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false +- Examples: + - "oss-accelerate.aliyuncs.com" + - Global Accelerate + - Provider: Alibaba + - "oss-accelerate-overseas.aliyuncs.com" + - Global Accelerate (outside mainland China) + - Provider: Alibaba + - "oss-cn-hangzhou.aliyuncs.com" + - East China 1 (Hangzhou) + - Provider: Alibaba + - "oss-cn-shanghai.aliyuncs.com" + - East China 2 (Shanghai) + - Provider: Alibaba + - "oss-cn-qingdao.aliyuncs.com" + - North China 1 (Qingdao) + - Provider: Alibaba + - "oss-cn-beijing.aliyuncs.com" + - North China 2 (Beijing) + - Provider: Alibaba + - "oss-cn-zhangjiakou.aliyuncs.com" + - North China 3 (Zhangjiakou) + - Provider: Alibaba + - "oss-cn-huhehaote.aliyuncs.com" + - North China 5 (Hohhot) + - Provider: Alibaba + - "oss-cn-wulanchabu.aliyuncs.com" + - North China 6 (Ulanqab) + - Provider: Alibaba + - "oss-cn-shenzhen.aliyuncs.com" + - South China 1 (Shenzhen) + - Provider: Alibaba + - "oss-cn-heyuan.aliyuncs.com" + - South China 2 (Heyuan) + - Provider: Alibaba + - "oss-cn-guangzhou.aliyuncs.com" + - South China 3 (Guangzhou) + - Provider: Alibaba + - "oss-cn-chengdu.aliyuncs.com" + - West China 1 (Chengdu) + - Provider: Alibaba + - "oss-cn-hongkong.aliyuncs.com" + - Hong Kong (Hong Kong) + - Provider: Alibaba + - "oss-us-west-1.aliyuncs.com" + - US West 1 (Silicon Valley) + - Provider: Alibaba + - "oss-us-east-1.aliyuncs.com" + - US East 1 (Virginia) + - Provider: Alibaba + - "oss-ap-southeast-1.aliyuncs.com" + - Southeast Asia Southeast 1 (Singapore) + - Provider: Alibaba + - "oss-ap-southeast-2.aliyuncs.com" + - Asia Pacific Southeast 2 (Sydney) + - Provider: Alibaba + - "oss-ap-southeast-3.aliyuncs.com" + - Southeast Asia Southeast 3 (Kuala Lumpur) + - Provider: Alibaba + - "oss-ap-southeast-5.aliyuncs.com" + - Asia Pacific Southeast 5 (Jakarta) + - Provider: Alibaba + - "oss-ap-northeast-1.aliyuncs.com" + - Asia Pacific Northeast 1 (Japan) + - Provider: Alibaba + - "oss-ap-south-1.aliyuncs.com" + - Asia Pacific South 1 (Mumbai) + - Provider: Alibaba + - "oss-eu-central-1.aliyuncs.com" + - Central Europe 1 (Frankfurt) + - Provider: Alibaba + - "oss-eu-west-1.aliyuncs.com" + - West Europe (London) + - Provider: Alibaba + - "oss-me-east-1.aliyuncs.com" + - Middle East 1 (Dubai) + - Provider: Alibaba + - "s3.ir-thr-at1.arvanstorage.ir" + - The default endpoint - a good choice if you are unsure. + - Tehran Iran (Simin) + - Provider: ArvanCloud + - "s3.ir-tbz-sh1.arvanstorage.ir" + - Tabriz Iran (Shahriar) + - Provider: ArvanCloud + - "eos-wuxi-1.cmecloud.cn" + - The default endpoint - a good choice if you are unsure. 
+ - East China (Suzhou) + - Provider: ChinaMobile + - "eos-jinan-1.cmecloud.cn" + - East China (Jinan) + - Provider: ChinaMobile + - "eos-ningbo-1.cmecloud.cn" + - East China (Hangzhou) + - Provider: ChinaMobile + - "eos-shanghai-1.cmecloud.cn" + - East China (Shanghai-1) + - Provider: ChinaMobile + - "eos-zhengzhou-1.cmecloud.cn" + - Central China (Zhengzhou) + - Provider: ChinaMobile + - "eos-hunan-1.cmecloud.cn" + - Central China (Changsha-1) + - Provider: ChinaMobile + - "eos-zhuzhou-1.cmecloud.cn" + - Central China (Changsha-2) + - Provider: ChinaMobile + - "eos-guangzhou-1.cmecloud.cn" + - South China (Guangzhou-2) + - Provider: ChinaMobile + - "eos-dongguan-1.cmecloud.cn" + - South China (Guangzhou-3) + - Provider: ChinaMobile + - "eos-beijing-1.cmecloud.cn" + - North China (Beijing-1) + - Provider: ChinaMobile + - "eos-beijing-2.cmecloud.cn" + - North China (Beijing-2) + - Provider: ChinaMobile + - "eos-beijing-4.cmecloud.cn" + - North China (Beijing-3) + - Provider: ChinaMobile + - "eos-huhehaote-1.cmecloud.cn" + - North China (Huhehaote) + - Provider: ChinaMobile + - "eos-chengdu-1.cmecloud.cn" + - Southwest China (Chengdu) + - Provider: ChinaMobile + - "eos-chongqing-1.cmecloud.cn" + - Southwest China (Chongqing) + - Provider: ChinaMobile + - "eos-guiyang-1.cmecloud.cn" + - Southwest China (Guiyang) + - Provider: ChinaMobile + - "eos-xian-1.cmecloud.cn" + - Nouthwest China (Xian) + - Provider: ChinaMobile + - "eos-yunnan.cmecloud.cn" + - Yunnan China (Kunming) + - Provider: ChinaMobile + - "eos-yunnan-2.cmecloud.cn" + - Yunnan China (Kunming-2) + - Provider: ChinaMobile + - "eos-tianjin-1.cmecloud.cn" + - Tianjin China (Tianjin) + - Provider: ChinaMobile + - "eos-jilin-1.cmecloud.cn" + - Jilin China (Changchun) + - Provider: ChinaMobile + - "eos-hubei-1.cmecloud.cn" + - Hubei China (Xiangyan) + - Provider: ChinaMobile + - "eos-jiangxi-1.cmecloud.cn" + - Jiangxi China (Nanchang) + - Provider: ChinaMobile + - "eos-gansu-1.cmecloud.cn" + - Gansu China (Lanzhou) + - Provider: ChinaMobile + - "eos-shanxi-1.cmecloud.cn" + - Shanxi China (Taiyuan) + - Provider: ChinaMobile + - "eos-liaoning-1.cmecloud.cn" + - Liaoning China (Shenyang) + - Provider: ChinaMobile + - "eos-hebei-1.cmecloud.cn" + - Hebei China (Shijiazhuang) + - Provider: ChinaMobile + - "eos-fujian-1.cmecloud.cn" + - Fujian China (Xiamen) + - Provider: ChinaMobile + - "eos-guangxi-1.cmecloud.cn" + - Guangxi China (Nanning) + - Provider: ChinaMobile + - "eos-anhui-1.cmecloud.cn" + - Anhui China (Huainan) + - Provider: ChinaMobile + - "s3.cubbit.eu" + - Cubbit DS3 Object Storage endpoint + - Provider: Cubbit + - "syd1.digitaloceanspaces.com" + - DigitalOcean Spaces Sydney 1 + - Provider: DigitalOcean + - "sfo3.digitaloceanspaces.com" + - DigitalOcean Spaces San Francisco 3 + - Provider: DigitalOcean + - "sfo2.digitaloceanspaces.com" + - DigitalOcean Spaces San Francisco 2 + - Provider: DigitalOcean + - "fra1.digitaloceanspaces.com" + - DigitalOcean Spaces Frankfurt 1 + - Provider: DigitalOcean + - "nyc3.digitaloceanspaces.com" + - DigitalOcean Spaces New York 3 + - Provider: DigitalOcean + - "ams3.digitaloceanspaces.com" + - DigitalOcean Spaces Amsterdam 3 + - Provider: DigitalOcean + - "sgp1.digitaloceanspaces.com" + - DigitalOcean Spaces Singapore 1 + - Provider: DigitalOcean + - "lon1.digitaloceanspaces.com" + - DigitalOcean Spaces London 1 + - Provider: DigitalOcean + - "tor1.digitaloceanspaces.com" + - DigitalOcean Spaces Toronto 1 + - Provider: DigitalOcean + - "blr1.digitaloceanspaces.com" + - DigitalOcean Spaces 
Bangalore 1 + - Provider: DigitalOcean + - "objects-us-east-1.dream.io" + - Dream Objects endpoint + - Provider: Dreamhost + - "s5lu.com" + - Global FileLu S5 endpoint + - Provider: FileLu + - "us.s5lu.com" + - North America (US-East) region endpoint + - Provider: FileLu + - "eu.s5lu.com" + - Europe (EU-Central) region endpoint + - Provider: FileLu + - "ap.s5lu.com" + - Asia Pacific (AP-Southeast) region endpoint + - Provider: FileLu + - "me.s5lu.com" + - Middle East (ME-Central) region endpoint + - Provider: FileLu + - "https://storage.googleapis.com" + - Google Cloud Storage endpoint + - Provider: GCS + - "hel1.your-objectstorage.com" + - Helsinki + - Provider: Hetzner + - "fsn1.your-objectstorage.com" + - Falkenstein + - Provider: Hetzner + - "nbg1.your-objectstorage.com" + - Nuremberg + - Provider: Hetzner + - "obs.af-south-1.myhuaweicloud.com" + - AF-Johannesburg + - Provider: HuaweiOBS + - "obs.ap-southeast-2.myhuaweicloud.com" + - AP-Bangkok + - Provider: HuaweiOBS + - "obs.ap-southeast-3.myhuaweicloud.com" + - AP-Singapore + - Provider: HuaweiOBS + - "obs.cn-east-3.myhuaweicloud.com" + - CN East-Shanghai1 + - Provider: HuaweiOBS + - "obs.cn-east-2.myhuaweicloud.com" + - CN East-Shanghai2 + - Provider: HuaweiOBS + - "obs.cn-north-1.myhuaweicloud.com" + - CN North-Beijing1 + - Provider: HuaweiOBS + - "obs.cn-north-4.myhuaweicloud.com" + - CN North-Beijing4 + - Provider: HuaweiOBS + - "obs.cn-south-1.myhuaweicloud.com" + - CN South-Guangzhou + - Provider: HuaweiOBS + - "obs.ap-southeast-1.myhuaweicloud.com" + - CN-Hong Kong + - Provider: HuaweiOBS + - "obs.sa-argentina-1.myhuaweicloud.com" + - LA-Buenos Aires1 + - Provider: HuaweiOBS + - "obs.sa-peru-1.myhuaweicloud.com" + - LA-Lima1 + - Provider: HuaweiOBS + - "obs.na-mexico-1.myhuaweicloud.com" + - LA-Mexico City1 + - Provider: HuaweiOBS + - "obs.sa-chile-1.myhuaweicloud.com" + - LA-Santiago2 + - Provider: HuaweiOBS + - "obs.sa-brazil-1.myhuaweicloud.com" + - LA-Sao Paulo1 + - Provider: HuaweiOBS + - "obs.ru-northwest-2.myhuaweicloud.com" + - RU-Moscow2 + - Provider: HuaweiOBS + - "s3.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Endpoint + - Provider: IBMCOS + - "s3.dal.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Dallas Endpoint + - Provider: IBMCOS + - "s3.wdc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Washington DC Endpoint + - Provider: IBMCOS + - "s3.sjc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region San Jose Endpoint + - Provider: IBMCOS + - "s3.private.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Private Endpoint + - Provider: IBMCOS + - "s3.private.dal.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Dallas Private Endpoint + - Provider: IBMCOS + - "s3.private.wdc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region Washington DC Private Endpoint + - Provider: IBMCOS + - "s3.private.sjc.us.cloud-object-storage.appdomain.cloud" + - US Cross Region San Jose Private Endpoint + - Provider: IBMCOS + - "s3.us-east.cloud-object-storage.appdomain.cloud" + - US Region East Endpoint + - Provider: IBMCOS + - "s3.private.us-east.cloud-object-storage.appdomain.cloud" + - US Region East Private Endpoint + - Provider: IBMCOS + - "s3.us-south.cloud-object-storage.appdomain.cloud" + - US Region South Endpoint + - Provider: IBMCOS + - "s3.private.us-south.cloud-object-storage.appdomain.cloud" + - US Region South Private Endpoint + - Provider: IBMCOS + - "s3.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Endpoint + - Provider: 
IBMCOS + - "s3.fra.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Frankfurt Endpoint + - Provider: IBMCOS + - "s3.mil.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Milan Endpoint + - Provider: IBMCOS + - "s3.ams.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Amsterdam Endpoint + - Provider: IBMCOS + - "s3.private.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Private Endpoint + - Provider: IBMCOS + - "s3.private.fra.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Frankfurt Private Endpoint + - Provider: IBMCOS + - "s3.private.mil.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Milan Private Endpoint + - Provider: IBMCOS + - "s3.private.ams.eu.cloud-object-storage.appdomain.cloud" + - EU Cross Region Amsterdam Private Endpoint + - Provider: IBMCOS + - "s3.eu-gb.cloud-object-storage.appdomain.cloud" + - Great Britain Endpoint + - Provider: IBMCOS + - "s3.private.eu-gb.cloud-object-storage.appdomain.cloud" + - Great Britain Private Endpoint + - Provider: IBMCOS + - "s3.eu-de.cloud-object-storage.appdomain.cloud" + - EU Region DE Endpoint + - Provider: IBMCOS + - "s3.private.eu-de.cloud-object-storage.appdomain.cloud" + - EU Region DE Private Endpoint + - Provider: IBMCOS + - "s3.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Endpoint + - Provider: IBMCOS + - "s3.tok.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Tokyo Endpoint + - Provider: IBMCOS + - "s3.hkg.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Hong Kong Endpoint + - Provider: IBMCOS + - "s3.seo.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Seoul Endpoint + - Provider: IBMCOS + - "s3.private.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Private Endpoint + - Provider: IBMCOS + - "s3.private.tok.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Tokyo Private Endpoint + - Provider: IBMCOS + - "s3.private.hkg.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Hong Kong Private Endpoint + - Provider: IBMCOS + - "s3.private.seo.ap.cloud-object-storage.appdomain.cloud" + - APAC Cross Regional Seoul Private Endpoint + - Provider: IBMCOS + - "s3.jp-tok.cloud-object-storage.appdomain.cloud" + - APAC Region Japan Endpoint + - Provider: IBMCOS + - "s3.private.jp-tok.cloud-object-storage.appdomain.cloud" + - APAC Region Japan Private Endpoint + - Provider: IBMCOS + - "s3.au-syd.cloud-object-storage.appdomain.cloud" + - APAC Region Australia Endpoint + - Provider: IBMCOS + - "s3.private.au-syd.cloud-object-storage.appdomain.cloud" + - APAC Region Australia Private Endpoint + - Provider: IBMCOS + - "s3.ams03.cloud-object-storage.appdomain.cloud" + - Amsterdam Single Site Endpoint + - Provider: IBMCOS + - "s3.private.ams03.cloud-object-storage.appdomain.cloud" + - Amsterdam Single Site Private Endpoint + - Provider: IBMCOS + - "s3.che01.cloud-object-storage.appdomain.cloud" + - Chennai Single Site Endpoint + - Provider: IBMCOS + - "s3.private.che01.cloud-object-storage.appdomain.cloud" + - Chennai Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mel01.cloud-object-storage.appdomain.cloud" + - Melbourne Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mel01.cloud-object-storage.appdomain.cloud" + - Melbourne Single Site Private Endpoint + - Provider: IBMCOS + - "s3.osl01.cloud-object-storage.appdomain.cloud" + - Oslo Single Site Endpoint + - Provider: IBMCOS + - "s3.private.osl01.cloud-object-storage.appdomain.cloud" + - Oslo 
Single Site Private Endpoint + - Provider: IBMCOS + - "s3.tor01.cloud-object-storage.appdomain.cloud" + - Toronto Single Site Endpoint + - Provider: IBMCOS + - "s3.private.tor01.cloud-object-storage.appdomain.cloud" + - Toronto Single Site Private Endpoint + - Provider: IBMCOS + - "s3.seo01.cloud-object-storage.appdomain.cloud" + - Seoul Single Site Endpoint + - Provider: IBMCOS + - "s3.private.seo01.cloud-object-storage.appdomain.cloud" + - Seoul Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mon01.cloud-object-storage.appdomain.cloud" + - Montreal Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mon01.cloud-object-storage.appdomain.cloud" + - Montreal Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mex01.cloud-object-storage.appdomain.cloud" + - Mexico Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mex01.cloud-object-storage.appdomain.cloud" + - Mexico Single Site Private Endpoint + - Provider: IBMCOS + - "s3.sjc04.cloud-object-storage.appdomain.cloud" + - San Jose Single Site Endpoint + - Provider: IBMCOS + - "s3.private.sjc04.cloud-object-storage.appdomain.cloud" + - San Jose Single Site Private Endpoint + - Provider: IBMCOS + - "s3.mil01.cloud-object-storage.appdomain.cloud" + - Milan Single Site Endpoint + - Provider: IBMCOS + - "s3.private.mil01.cloud-object-storage.appdomain.cloud" + - Milan Single Site Private Endpoint + - Provider: IBMCOS + - "s3.hkg02.cloud-object-storage.appdomain.cloud" + - Hong Kong Single Site Endpoint + - Provider: IBMCOS + - "s3.private.hkg02.cloud-object-storage.appdomain.cloud" + - Hong Kong Single Site Private Endpoint + - Provider: IBMCOS + - "s3.par01.cloud-object-storage.appdomain.cloud" + - Paris Single Site Endpoint + - Provider: IBMCOS + - "s3.private.par01.cloud-object-storage.appdomain.cloud" + - Paris Single Site Private Endpoint + - Provider: IBMCOS + - "s3.sng01.cloud-object-storage.appdomain.cloud" + - Singapore Single Site Endpoint + - Provider: IBMCOS + - "s3.private.sng01.cloud-object-storage.appdomain.cloud" + - Singapore Single Site Private Endpoint + - Provider: IBMCOS + - "de-fra.i3storage.com" + - Frankfurt, Germany + - Provider: Intercolo + - "s3-eu-central-1.ionoscloud.com" + - Frankfurt, Germany + - Provider: IONOS + - "s3-eu-central-2.ionoscloud.com" + - Berlin, Germany + - Provider: IONOS + - "s3-eu-south-2.ionoscloud.com" + - Logrono, Spain + - Provider: IONOS + - "s3.leviia.com" + - The default endpoint + - Leviia + - Provider: Leviia + - "storage.iran.liara.space" + - The default endpoint + - Iran + - Provider: Liara + - "nl-ams-1.linodeobjects.com" + - Amsterdam (Netherlands), nl-ams-1 + - Provider: Linode + - "us-southeast-1.linodeobjects.com" + - Atlanta, GA (USA), us-southeast-1 + - Provider: Linode + - "in-maa-1.linodeobjects.com" + - Chennai (India), in-maa-1 + - Provider: Linode + - "us-ord-1.linodeobjects.com" + - Chicago, IL (USA), us-ord-1 + - Provider: Linode + - "eu-central-1.linodeobjects.com" + - Frankfurt (Germany), eu-central-1 + - Provider: Linode + - "id-cgk-1.linodeobjects.com" + - Jakarta (Indonesia), id-cgk-1 + - Provider: Linode + - "gb-lon-1.linodeobjects.com" + - London 2 (Great Britain), gb-lon-1 + - Provider: Linode + - "us-lax-1.linodeobjects.com" + - Los Angeles, CA (USA), us-lax-1 + - Provider: Linode + - "es-mad-1.linodeobjects.com" + - Madrid (Spain), es-mad-1 + - Provider: Linode + - "au-mel-1.linodeobjects.com" + - Melbourne (Australia), au-mel-1 + - Provider: Linode + - "us-mia-1.linodeobjects.com" + - Miami, FL (USA), us-mia-1 + - Provider: Linode + - 
"it-mil-1.linodeobjects.com" + - Milan (Italy), it-mil-1 + - Provider: Linode + - "us-east-1.linodeobjects.com" + - Newark, NJ (USA), us-east-1 + - Provider: Linode + - "jp-osa-1.linodeobjects.com" + - Osaka (Japan), jp-osa-1 + - Provider: Linode + - "fr-par-1.linodeobjects.com" + - Paris (France), fr-par-1 + - Provider: Linode + - "br-gru-1.linodeobjects.com" + - São Paulo (Brazil), br-gru-1 + - Provider: Linode + - "us-sea-1.linodeobjects.com" + - Seattle, WA (USA), us-sea-1 + - Provider: Linode + - "ap-south-1.linodeobjects.com" + - Singapore, ap-south-1 + - Provider: Linode + - "sg-sin-1.linodeobjects.com" + - Singapore 2, sg-sin-1 + - Provider: Linode + - "se-sto-1.linodeobjects.com" + - Stockholm (Sweden), se-sto-1 + - Provider: Linode + - "us-iad-1.linodeobjects.com" + - Washington, DC, (USA), us-iad-1 + - Provider: Linode + - "s3.us-west-1.{account_name}.lyve.seagate.com" + - US West 1 - California + - Provider: LyveCloud + - "s3.eu-west-1.{account_name}.lyve.seagate.com" + - EU West 1 - Ireland + - Provider: LyveCloud + - "br-se1.magaluobjects.com" + - São Paulo, SP (BR), br-se1 + - Provider: Magalu + - "br-ne1.magaluobjects.com" + - Fortaleza, CE (BR), br-ne1 + - Provider: Magalu + - "s3.eu-central-1.s4.mega.io" + - Mega S4 eu-central-1 (Amsterdam) + - Provider: Mega + - "s3.eu-central-2.s4.mega.io" + - Mega S4 eu-central-2 (Bettembourg) + - Provider: Mega + - "s3.ca-central-1.s4.mega.io" + - Mega S4 ca-central-1 (Montreal) + - Provider: Mega + - "s3.ca-west-1.s4.mega.io" + - Mega S4 ca-west-1 (Vancouver) + - Provider: Mega + - "oos.eu-west-2.outscale.com" + - Outscale EU West 2 (Paris) + - Provider: Outscale + - "oos.us-east-2.outscale.com" + - Outscale US east 2 (New Jersey) + - Provider: Outscale + - "oos.us-west-1.outscale.com" + - Outscale EU West 1 (California) + - Provider: Outscale + - "oos.cloudgouv-eu-west-1.outscale.com" + - Outscale SecNumCloud (Paris) + - Provider: Outscale + - "oos.ap-northeast-1.outscale.com" + - Outscale AP Northeast 1 (Japan) + - Provider: Outscale + - "s3.gra.io.cloud.ovh.net" + - OVHcloud Gravelines, France + - Provider: OVHcloud + - "s3.rbx.io.cloud.ovh.net" + - OVHcloud Roubaix, France + - Provider: OVHcloud + - "s3.sbg.io.cloud.ovh.net" + - OVHcloud Strasbourg, France + - Provider: OVHcloud + - "s3.eu-west-par.io.cloud.ovh.net" + - OVHcloud Paris, France (3AZ) + - Provider: OVHcloud + - "s3.de.io.cloud.ovh.net" + - OVHcloud Frankfurt, Germany + - Provider: OVHcloud + - "s3.uk.io.cloud.ovh.net" + - OVHcloud London, United Kingdom + - Provider: OVHcloud + - "s3.waw.io.cloud.ovh.net" + - OVHcloud Warsaw, Poland + - Provider: OVHcloud + - "s3.bhs.io.cloud.ovh.net" + - OVHcloud Beauharnois, Canada + - Provider: OVHcloud + - "s3.ca-east-tor.io.cloud.ovh.net" + - OVHcloud Toronto, Canada + - Provider: OVHcloud + - "s3.sgp.io.cloud.ovh.net" + - OVHcloud Singapore + - Provider: OVHcloud + - "s3.ap-southeast-syd.io.cloud.ovh.net" + - OVHcloud Sydney, Australia + - Provider: OVHcloud + - "s3.ap-south-mum.io.cloud.ovh.net" + - OVHcloud Mumbai, India + - Provider: OVHcloud + - "s3.us-east-va.io.cloud.ovh.us" + - OVHcloud Vint Hill, Virginia, USA + - Provider: OVHcloud + - "s3.us-west-or.io.cloud.ovh.us" + - OVHcloud Hillsboro, Oregon, USA + - Provider: OVHcloud + - "s3.rbx-archive.io.cloud.ovh.net" + - OVHcloud Roubaix, France (Cold Archive) + - Provider: OVHcloud + - "s3.petabox.io" + - US East (N. Virginia) + - Provider: Petabox + - "s3.us-east-1.petabox.io" + - US East (N. 
Virginia) + - Provider: Petabox + - "s3.eu-central-1.petabox.io" + - Europe (Frankfurt) + - Provider: Petabox + - "s3.ap-southeast-1.petabox.io" + - Asia Pacific (Singapore) + - Provider: Petabox + - "s3.me-south-1.petabox.io" + - Middle East (Bahrain) + - Provider: Petabox + - "s3.sa-east-1.petabox.io" + - South America (São Paulo) + - Provider: Petabox + - "s3-cn-east-1.qiniucs.com" + - East China Endpoint 1 + - Provider: Qiniu + - "s3-cn-east-2.qiniucs.com" + - East China Endpoint 2 + - Provider: Qiniu + - "s3-cn-north-1.qiniucs.com" + - North China Endpoint 1 + - Provider: Qiniu + - "s3-cn-south-1.qiniucs.com" + - South China Endpoint 1 + - Provider: Qiniu + - "s3-us-north-1.qiniucs.com" + - North America Endpoint 1 + - Provider: Qiniu + - "s3-ap-southeast-1.qiniucs.com" + - Southeast Asia Endpoint 1 + - Provider: Qiniu + - "s3-ap-northeast-1.qiniucs.com" + - Northeast Asia Endpoint 1 + - Provider: Qiniu + - "s3.us-east-1.rabata.io" + - US East (N. Virginia) + - Provider: Rabata + - "s3.eu-west-1.rabata.io" + - EU West (Ireland) + - Provider: Rabata + - "s3.eu-west-2.rabata.io" + - EU West (London) + - Provider: Rabata + - "s3.rackcorp.com" + - Global (AnyCast) Endpoint + - Provider: RackCorp + - "au.s3.rackcorp.com" + - Australia (Anycast) Endpoint + - Provider: RackCorp + - "au-nsw.s3.rackcorp.com" + - Sydney (Australia) Endpoint + - Provider: RackCorp + - "au-qld.s3.rackcorp.com" + - Brisbane (Australia) Endpoint + - Provider: RackCorp + - "au-vic.s3.rackcorp.com" + - Melbourne (Australia) Endpoint + - Provider: RackCorp + - "au-wa.s3.rackcorp.com" + - Perth (Australia) Endpoint + - Provider: RackCorp + - "ph.s3.rackcorp.com" + - Manila (Philippines) Endpoint + - Provider: RackCorp + - "th.s3.rackcorp.com" + - Bangkok (Thailand) Endpoint + - Provider: RackCorp + - "hk.s3.rackcorp.com" + - HK (Hong Kong) Endpoint + - Provider: RackCorp + - "mn.s3.rackcorp.com" + - Ulaanbaatar (Mongolia) Endpoint + - Provider: RackCorp + - "kg.s3.rackcorp.com" + - Bishkek (Kyrgyzstan) Endpoint + - Provider: RackCorp + - "id.s3.rackcorp.com" + - Jakarta (Indonesia) Endpoint + - Provider: RackCorp + - "jp.s3.rackcorp.com" + - Tokyo (Japan) Endpoint + - Provider: RackCorp + - "sg.s3.rackcorp.com" + - SG (Singapore) Endpoint + - Provider: RackCorp + - "de.s3.rackcorp.com" + - Frankfurt (Germany) Endpoint + - Provider: RackCorp + - "us.s3.rackcorp.com" + - USA (AnyCast) Endpoint + - Provider: RackCorp + - "us-east-1.s3.rackcorp.com" + - New York (USA) Endpoint + - Provider: RackCorp + - "us-west-1.s3.rackcorp.com" + - Freemont (USA) Endpoint + - Provider: RackCorp + - "nz.s3.rackcorp.com" + - Auckland (New Zealand) Endpoint + - Provider: RackCorp + - "s3.nl-ams.scw.cloud" + - Amsterdam Endpoint + - Provider: Scaleway + - "s3.fr-par.scw.cloud" + - Paris Endpoint + - Provider: Scaleway + - "s3.pl-waw.scw.cloud" + - Warsaw Endpoint + - Provider: Scaleway + - "localhost:8333" + - SeaweedFS S3 localhost + - Provider: SeaweedFS + - "s3.ru-1.storage.selcloud.ru" + - Saint Petersburg + - Provider: Selectel,Servercore + - "s3.gis-1.storage.selcloud.ru" + - Moscow + - Provider: Servercore + - "s3.ru-7.storage.selcloud.ru" + - Moscow + - Provider: Servercore + - "s3.uz-2.srvstorage.uz" + - Tashkent, Uzbekistan + - Provider: Servercore + - "s3.kz-1.srvstorage.kz" + - Almaty, Kazakhstan + - Provider: Servercore + - "s3.us-east-2.stackpathstorage.com" + - US East Endpoint + - Provider: StackPath + - "s3.us-west-1.stackpathstorage.com" + - US West Endpoint + - Provider: StackPath + - 
"s3.eu-central-1.stackpathstorage.com" + - EU Endpoint + - Provider: StackPath + - "gateway.storjshare.io" + - Global Hosted Gateway + - Provider: Storj + - "eu-001.s3.synologyc2.net" + - EU Endpoint 1 + - Provider: Synology + - "eu-002.s3.synologyc2.net" + - EU Endpoint 2 + - Provider: Synology + - "us-001.s3.synologyc2.net" + - US Endpoint 1 + - Provider: Synology + - "us-002.s3.synologyc2.net" + - US Endpoint 2 + - Provider: Synology + - "tw-001.s3.synologyc2.net" + - TW Endpoint 1 + - Provider: Synology + - "cos.ap-beijing.myqcloud.com" + - Beijing Region + - Provider: TencentCOS + - "cos.ap-nanjing.myqcloud.com" + - Nanjing Region + - Provider: TencentCOS + - "cos.ap-shanghai.myqcloud.com" + - Shanghai Region + - Provider: TencentCOS + - "cos.ap-guangzhou.myqcloud.com" + - Guangzhou Region + - Provider: TencentCOS + - "cos.ap-chengdu.myqcloud.com" + - Chengdu Region + - Provider: TencentCOS + - "cos.ap-chongqing.myqcloud.com" + - Chongqing Region + - Provider: TencentCOS + - "cos.ap-hongkong.myqcloud.com" + - Hong Kong (China) Region + - Provider: TencentCOS + - "cos.ap-singapore.myqcloud.com" + - Singapore Region + - Provider: TencentCOS + - "cos.ap-mumbai.myqcloud.com" + - Mumbai Region + - Provider: TencentCOS + - "cos.ap-seoul.myqcloud.com" + - Seoul Region + - Provider: TencentCOS + - "cos.ap-bangkok.myqcloud.com" + - Bangkok Region + - Provider: TencentCOS + - "cos.ap-tokyo.myqcloud.com" + - Tokyo Region + - Provider: TencentCOS + - "cos.na-siliconvalley.myqcloud.com" + - Silicon Valley Region + - Provider: TencentCOS + - "cos.na-ashburn.myqcloud.com" + - Virginia Region + - Provider: TencentCOS + - "cos.na-toronto.myqcloud.com" + - Toronto Region + - Provider: TencentCOS + - "cos.eu-frankfurt.myqcloud.com" + - Frankfurt Region + - Provider: TencentCOS + - "cos.eu-moscow.myqcloud.com" + - Moscow Region + - Provider: TencentCOS + - "cos.accelerate.myqcloud.com" + - Use Tencent COS Accelerate Endpoint + - Provider: TencentCOS + - "s3.wasabisys.com" + - Wasabi US East 1 (N. Virginia) + - Provider: Wasabi + - "s3.us-east-2.wasabisys.com" + - Wasabi US East 2 (N. Virginia) + - Provider: Wasabi + - "s3.us-central-1.wasabisys.com" + - Wasabi US Central 1 (Texas) + - Provider: Wasabi + - "s3.us-west-1.wasabisys.com" + - Wasabi US West 1 (Oregon) + - Provider: Wasabi + - "s3.ca-central-1.wasabisys.com" + - Wasabi CA Central 1 (Toronto) + - Provider: Wasabi + - "s3.eu-central-1.wasabisys.com" + - Wasabi EU Central 1 (Amsterdam) + - Provider: Wasabi + - "s3.eu-central-2.wasabisys.com" + - Wasabi EU Central 2 (Frankfurt) + - Provider: Wasabi + - "s3.eu-west-1.wasabisys.com" + - Wasabi EU West 1 (London) + - Provider: Wasabi + - "s3.eu-west-2.wasabisys.com" + - Wasabi EU West 2 (Paris) + - Provider: Wasabi + - "s3.eu-south-1.wasabisys.com" + - Wasabi EU South 1 (Milan) + - Provider: Wasabi + - "s3.ap-northeast-1.wasabisys.com" + - Wasabi AP Northeast 1 (Tokyo) endpoint + - Provider: Wasabi + - "s3.ap-northeast-2.wasabisys.com" + - Wasabi AP Northeast 2 (Osaka) endpoint + - Provider: Wasabi + - "s3.ap-southeast-1.wasabisys.com" + - Wasabi AP Southeast 1 (Singapore) + - Provider: Wasabi + - "s3.ap-southeast-2.wasabisys.com" + - Wasabi AP Southeast 2 (Sydney) + - Provider: Wasabi + - "idr01.zata.ai" + - South Asia Endpoint + - Provider: Zata #### --s3-location-constraint Location constraint - must be set to match the Region. -Used when creating buckets only. +Leave blank if not sure. Used when creating buckets only. 
Properties: - Config: location_constraint - Env Var: RCLONE_S3_LOCATION_CONSTRAINT -- Provider: AWS +- Provider: AWS,ArvanCloud,Ceph,ChinaMobile,DigitalOcean,Dreamhost,Exaba,GCS,Hetzner,IBMCOS,LyveCloud,Minio,Netease,Qiniu,Rabata,RackCorp,SeaweedFS,Synology,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "" - - Empty for US Region, Northern Virginia, or Pacific Northwest - - "us-east-2" - - US East (Ohio) Region - - "us-west-1" - - US West (Northern California) Region - - "us-west-2" - - US West (Oregon) Region - - "ca-central-1" - - Canada (Central) Region - - "eu-west-1" - - EU (Ireland) Region - - "eu-west-2" - - EU (London) Region - - "eu-west-3" - - EU (Paris) Region - - "eu-north-1" - - EU (Stockholm) Region - - "eu-south-1" - - EU (Milan) Region - - "EU" - - EU Region - - "ap-southeast-1" - - Asia Pacific (Singapore) Region - - "ap-southeast-2" - - Asia Pacific (Sydney) Region - - "ap-northeast-1" - - Asia Pacific (Tokyo) Region - - "ap-northeast-2" - - Asia Pacific (Seoul) Region - - "ap-northeast-3" - - Asia Pacific (Osaka-Local) Region - - "ap-south-1" - - Asia Pacific (Mumbai) Region - - "ap-east-1" - - Asia Pacific (Hong Kong) Region - - "sa-east-1" - - South America (Sao Paulo) Region - - "il-central-1" - - Israel (Tel Aviv) Region - - "me-south-1" - - Middle East (Bahrain) Region - - "af-south-1" - - Africa (Cape Town) Region - - "cn-north-1" - - China (Beijing) Region - - "cn-northwest-1" - - China (Ningxia) Region - - "us-gov-east-1" - - AWS GovCloud (US-East) Region - - "us-gov-west-1" - - AWS GovCloud (US) Region + - "" + - Empty for US Region, Northern Virginia, or Pacific Northwest + - Provider: AWS + - "us-east-2" + - US East (Ohio) Region + - Provider: AWS + - "us-west-1" + - US West (Northern California) Region + - Provider: AWS + - "us-west-2" + - US West (Oregon) Region + - Provider: AWS + - "ca-central-1" + - Canada (Central) Region + - Provider: AWS + - "eu-west-1" + - EU (Ireland) Region + - Provider: AWS + - "eu-west-2" + - EU (London) Region + - Provider: AWS + - "eu-west-3" + - EU (Paris) Region + - Provider: AWS + - "eu-north-1" + - EU (Stockholm) Region + - Provider: AWS + - "eu-south-1" + - EU (Milan) Region + - Provider: AWS + - "EU" + - EU Region + - Provider: AWS + - "ap-southeast-1" + - Asia Pacific (Singapore) Region + - Provider: AWS + - "ap-southeast-2" + - Asia Pacific (Sydney) Region + - Provider: AWS + - "ap-northeast-1" + - Asia Pacific (Tokyo) Region + - Provider: AWS + - "ap-northeast-2" + - Asia Pacific (Seoul) Region + - Provider: AWS + - "ap-northeast-3" + - Asia Pacific (Osaka-Local) Region + - Provider: AWS + - "ap-south-1" + - Asia Pacific (Mumbai) Region + - Provider: AWS + - "ap-east-1" + - Asia Pacific (Hong Kong) Region + - Provider: AWS + - "sa-east-1" + - South America (Sao Paulo) Region + - Provider: AWS + - "il-central-1" + - Israel (Tel Aviv) Region + - Provider: AWS + - "me-south-1" + - Middle East (Bahrain) Region + - Provider: AWS + - "af-south-1" + - Africa (Cape Town) Region + - Provider: AWS + - "cn-north-1" + - China (Beijing) Region + - Provider: AWS + - "cn-northwest-1" + - China (Ningxia) Region + - Provider: AWS + - "us-gov-east-1" + - AWS GovCloud (US-East) Region + - Provider: AWS + - "us-gov-west-1" + - AWS GovCloud (US) Region + - Provider: AWS + - "ir-thr-at1" + - Tehran Iran (Simin) + - Provider: ArvanCloud + - "ir-tbz-sh1" + - Tabriz Iran (Shahriar) + - Provider: ArvanCloud + - "wuxi1" + - East China (Suzhou) + - Provider: ChinaMobile + - "jinan1" + - East China (Jinan) + - Provider: 
ChinaMobile + - "ningbo1" + - East China (Hangzhou) + - Provider: ChinaMobile + - "shanghai1" + - East China (Shanghai-1) + - Provider: ChinaMobile + - "zhengzhou1" + - Central China (Zhengzhou) + - Provider: ChinaMobile + - "hunan1" + - Central China (Changsha-1) + - Provider: ChinaMobile + - "zhuzhou1" + - Central China (Changsha-2) + - Provider: ChinaMobile + - "guangzhou1" + - South China (Guangzhou-2) + - Provider: ChinaMobile + - "dongguan1" + - South China (Guangzhou-3) + - Provider: ChinaMobile + - "beijing1" + - North China (Beijing-1) + - Provider: ChinaMobile + - "beijing2" + - North China (Beijing-2) + - Provider: ChinaMobile + - "beijing4" + - North China (Beijing-3) + - Provider: ChinaMobile + - "huhehaote1" + - North China (Huhehaote) + - Provider: ChinaMobile + - "chengdu1" + - Southwest China (Chengdu) + - Provider: ChinaMobile + - "chongqing1" + - Southwest China (Chongqing) + - Provider: ChinaMobile + - "guiyang1" + - Southwest China (Guiyang) + - Provider: ChinaMobile + - "xian1" + - Northwest China (Xian) + - Provider: ChinaMobile + - "yunnan" + - Yunnan China (Kunming) + - Provider: ChinaMobile + - "yunnan2" + - Yunnan China (Kunming-2) + - Provider: ChinaMobile + - "tianjin1" + - Tianjin China (Tianjin) + - Provider: ChinaMobile + - "jilin1" + - Jilin China (Changchun) + - Provider: ChinaMobile + - "hubei1" + - Hubei China (Xiangyan) + - Provider: ChinaMobile + - "jiangxi1" + - Jiangxi China (Nanchang) + - Provider: ChinaMobile + - "gansu1" + - Gansu China (Lanzhou) + - Provider: ChinaMobile + - "shanxi1" + - Shanxi China (Taiyuan) + - Provider: ChinaMobile + - "liaoning1" + - Liaoning China (Shenyang) + - Provider: ChinaMobile + - "hebei1" + - Hebei China (Shijiazhuang) + - Provider: ChinaMobile + - "fujian1" + - Fujian China (Xiamen) + - Provider: ChinaMobile + - "guangxi1" + - Guangxi China (Nanning) + - Provider: ChinaMobile + - "anhui1" + - Anhui China (Huainan) + - Provider: ChinaMobile + - "us-standard" + - US Cross Region Standard + - Provider: IBMCOS + - "us-vault" + - US Cross Region Vault + - Provider: IBMCOS + - "us-cold" + - US Cross Region Cold + - Provider: IBMCOS + - "us-flex" + - US Cross Region Flex + - Provider: IBMCOS + - "us-east-standard" + - US East Region Standard + - Provider: IBMCOS + - "us-east-vault" + - US East Region Vault + - Provider: IBMCOS + - "us-east-cold" + - US East Region Cold + - Provider: IBMCOS + - "us-east-flex" + - US East Region Flex + - Provider: IBMCOS + - "us-south-standard" + - US South Region Standard + - Provider: IBMCOS + - "us-south-vault" + - US South Region Vault + - Provider: IBMCOS + - "us-south-cold" + - US South Region Cold + - Provider: IBMCOS + - "us-south-flex" + - US South Region Flex + - Provider: IBMCOS + - "eu-standard" + - EU Cross Region Standard + - Provider: IBMCOS + - "eu-vault" + - EU Cross Region Vault + - Provider: IBMCOS + - "eu-cold" + - EU Cross Region Cold + - Provider: IBMCOS + - "eu-flex" + - EU Cross Region Flex + - Provider: IBMCOS + - "eu-gb-standard" + - Great Britain Standard + - Provider: IBMCOS + - "eu-gb-vault" + - Great Britain Vault + - Provider: IBMCOS + - "eu-gb-cold" + - Great Britain Cold + - Provider: IBMCOS + - "eu-gb-flex" + - Great Britain Flex + - Provider: IBMCOS + - "ap-standard" + - APAC Standard + - Provider: IBMCOS + - "ap-vault" + - APAC Vault + - Provider: IBMCOS + - "ap-cold" + - APAC Cold + - Provider: IBMCOS + - "ap-flex" + - APAC Flex + - Provider: IBMCOS + - "mel01-standard" + - Melbourne Standard + - Provider: IBMCOS + - "mel01-vault" + - Melbourne Vault + 
- Provider: IBMCOS + - "mel01-cold" + - Melbourne Cold + - Provider: IBMCOS + - "mel01-flex" + - Melbourne Flex + - Provider: IBMCOS + - "tor01-standard" + - Toronto Standard + - Provider: IBMCOS + - "tor01-vault" + - Toronto Vault + - Provider: IBMCOS + - "tor01-cold" + - Toronto Cold + - Provider: IBMCOS + - "tor01-flex" + - Toronto Flex + - Provider: IBMCOS + - "cn-east-1" + - East China Region 1 + - Provider: Qiniu + - "cn-east-2" + - East China Region 2 + - Provider: Qiniu + - "cn-north-1" + - North China Region 1 + - Provider: Qiniu + - "cn-south-1" + - South China Region 1 + - Provider: Qiniu + - "us-north-1" + - North America Region 1 + - Provider: Qiniu + - "ap-southeast-1" + - Southeast Asia Region 1 + - Provider: Qiniu + - "ap-northeast-1" + - Northeast Asia Region 1 + - Provider: Qiniu + - "us-east-1" + - US East (N. Virginia) + - Provider: Rabata + - "eu-west-1" + - EU (Ireland) + - Provider: Rabata + - "eu-west-2" + - EU (London) + - Provider: Rabata + - "global" + - Global CDN Region + - Provider: RackCorp + - "au" + - Australia (All locations) + - Provider: RackCorp + - "au-nsw" + - NSW (Australia) Region + - Provider: RackCorp + - "au-qld" + - QLD (Australia) Region + - Provider: RackCorp + - "au-vic" + - VIC (Australia) Region + - Provider: RackCorp + - "au-wa" + - Perth (Australia) Region + - Provider: RackCorp + - "ph" + - Manila (Philippines) Region + - Provider: RackCorp + - "th" + - Bangkok (Thailand) Region + - Provider: RackCorp + - "hk" + - HK (Hong Kong) Region + - Provider: RackCorp + - "mn" + - Ulaanbaatar (Mongolia) Region + - Provider: RackCorp + - "kg" + - Bishkek (Kyrgyzstan) Region + - Provider: RackCorp + - "id" + - Jakarta (Indonesia) Region + - Provider: RackCorp + - "jp" + - Tokyo (Japan) Region + - Provider: RackCorp + - "sg" + - SG (Singapore) Region + - Provider: RackCorp + - "de" + - Frankfurt (Germany) Region + - Provider: RackCorp + - "us" + - USA (AnyCast) Region + - Provider: RackCorp + - "us-east-1" + - New York (USA) Region + - Provider: RackCorp + - "us-west-1" + - Fremont (USA) Region + - Provider: RackCorp + - "nz" + - Auckland (New Zealand) Region + - Provider: RackCorp #### --s3-acl @@ -1171,50 +2712,61 @@ Properties: - Config: acl - Env Var: RCLONE_S3_ACL -- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega +- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "default" - - Owner gets Full_CONTROL. - - No one else has access rights (default). - - "private" - - Owner gets FULL_CONTROL. - - No one else has access rights (default). - - "public-read" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ access. - - "public-read-write" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ and WRITE access. - - Granting this on a bucket is generally not recommended. - - "authenticated-read" - - Owner gets FULL_CONTROL. - - The AuthenticatedUsers group gets READ access. - - "bucket-owner-read" - - Object owner gets FULL_CONTROL. - - Bucket owner gets READ access. - - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - - "bucket-owner-full-control" - - Both the object owner and the bucket owner get FULL_CONTROL over the object. 
- - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. - - "private" - - Owner gets FULL_CONTROL. - - No one else has access rights (default). - - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS. - - "public-read" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ access. - - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS. - - "public-read-write" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ and WRITE access. - - This acl is available on IBM Cloud (Infra), On-Premise IBM COS. - - "authenticated-read" - - Owner gets FULL_CONTROL. - - The AuthenticatedUsers group gets READ access. - - Not supported on Buckets. - - This acl is available on IBM Cloud (Infra) and On-Premise IBM COS. + - "private" + - Owner gets FULL_CONTROL. + - No one else has access rights (default). + - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other + - "public-read" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ access. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "public-read-write" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ and WRITE access. + - Granting this on a bucket is generally not recommended. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "authenticated-read" + - Owner gets FULL_CONTROL. + - The AuthenticatedUsers group gets READ access. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "bucket-owner-read" + - Object owner gets FULL_CONTROL. + - Bucket owner gets READ access. + - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "bucket-owner-full-control" + - Both the object owner and the bucket owner get FULL_CONTROL over the object. + - If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. + - Provider: AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other + - "private" + - Owner gets FULL_CONTROL. + - No one else has access rights (default). + - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS. 
+ - Provider: IBMCOS + - "public-read" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ access. + - This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS. + - Provider: IBMCOS + - "public-read-write" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ and WRITE access. + - This acl is available on IBM Cloud (Infra), On-Premise IBM COS. + - Provider: IBMCOS + - "authenticated-read" + - Owner gets FULL_CONTROL. + - The AuthenticatedUsers group gets READ access. + - Not supported on Buckets. + - This acl is available on IBM Cloud (Infra) and On-Premise IBM COS. + - Provider: IBMCOS + - "default" + - Owner gets Full_CONTROL. + - No one else has access rights (default). + - Provider: TencentCOS #### --s3-server-side-encryption @@ -1228,12 +2780,15 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "AES256" - - AES256 - - "aws:kms" - - aws:kms + - "" + - None + - Provider: AWS,Ceph,ChinaMobile,Minio + - "AES256" + - AES256 + - Provider: AWS,Ceph,ChinaMobile,Minio + - "aws:kms" + - aws:kms + - Provider: AWS,Ceph,Minio #### --s3-sse-kms-key-id @@ -1247,10 +2802,10 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "arn:aws:kms:us-east-1:*" - - arn:aws:kms:* + - "" + - None + - "arn:aws:kms:us-east-1:*" + - arn:aws:kms:* #### --s3-storage-class @@ -1260,28 +2815,70 @@ Properties: - Config: storage_class - Env Var: RCLONE_S3_STORAGE_CLASS -- Provider: AWS +- Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,Scaleway,TencentCOS - Type: string - Required: false - Examples: - - "" - - Default - - "STANDARD" - - Standard storage class - - "REDUCED_REDUNDANCY" - - Reduced redundancy storage class - - "STANDARD_IA" - - Standard Infrequent Access storage class - - "ONEZONE_IA" - - One Zone Infrequent Access storage class - - "GLACIER" - - Glacier Flexible Retrieval storage class - - "DEEP_ARCHIVE" - - Glacier Deep Archive storage class - - "INTELLIGENT_TIERING" - - Intelligent-Tiering storage class - - "GLACIER_IR" - - Glacier Instant Retrieval storage class + - "" + - Default + - Provider: AWS,Alibaba,ChinaMobile,TencentCOS + - "STANDARD" + - Standard storage class + - Provider: AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,TencentCOS + - "REDUCED_REDUNDANCY" + - Reduced redundancy storage class + - Provider: AWS + - "STANDARD_IA" + - Standard Infrequent Access storage class + - Provider: AWS + - "ONEZONE_IA" + - One Zone Infrequent Access storage class + - Provider: AWS + - "GLACIER" + - Glacier Flexible Retrieval storage class + - Provider: AWS + - "DEEP_ARCHIVE" + - Glacier Deep Archive storage class + - Provider: AWS + - "INTELLIGENT_TIERING" + - Intelligent-Tiering storage class + - Provider: AWS + - "GLACIER_IR" + - Glacier Instant Retrieval storage class + - Provider: AWS,Magalu + - "GLACIER" + - Archive storage mode + - Provider: Alibaba,ChinaMobile,Qiniu + - "STANDARD_IA" + - Infrequent access storage mode + - Provider: Alibaba,ChinaMobile,TencentCOS + - "LINE" + - Infrequent access storage mode + - Provider: Qiniu + - "DEEP_ARCHIVE" + - Deep archive storage mode + - Provider: Qiniu + - "" + - Default. + - Provider: Scaleway + - "STANDARD" + - The Standard class for any upload. + - Suitable for on-demand content like streaming or CDN. + - Available in all regions. + - Provider: Scaleway + - "GLACIER" + - Archived storage. + - Prices are lower, but it needs to be restored first to be accessed. + - Available in FR-PAR and NL-AMS regions. 
+ - Provider: Scaleway + - "ONEZONE_IA" + - One Zone - Infrequent Access. + - A good choice for storing secondary backup copies or easily re-creatable data. + - Available in the FR-PAR region only. + - Provider: Scaleway + - "ARCHIVE" + - Archive storage mode + - Provider: TencentCOS #### --s3-ibm-api-key @@ -1309,7 +2906,7 @@ Properties: ### Advanced options -Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu, Zata and others). +Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, TencentCOS, Wasabi, Zata, Other). #### --s3-bucket-acl @@ -1328,23 +2925,23 @@ Properties: - Config: bucket_acl - Env Var: RCLONE_S3_BUCKET_ACL -- Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade +- Provider: AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other - Type: string - Required: false - Examples: - - "private" - - Owner gets FULL_CONTROL. - - No one else has access rights (default). - - "public-read" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ access. - - "public-read-write" - - Owner gets FULL_CONTROL. - - The AllUsers group gets READ and WRITE access. - - Granting this on a bucket is generally not recommended. - - "authenticated-read" - - Owner gets FULL_CONTROL. - - The AuthenticatedUsers group gets READ access. + - "private" + - Owner gets FULL_CONTROL. + - No one else has access rights (default). + - "public-read" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ access. + - "public-read-write" + - Owner gets FULL_CONTROL. + - The AllUsers group gets READ and WRITE access. + - Granting this on a bucket is generally not recommended. + - "authenticated-read" + - Owner gets FULL_CONTROL. + - The AuthenticatedUsers group gets READ access. #### --s3-requester-pays @@ -1370,10 +2967,10 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None - - "AES256" - - AES256 + - "" + - None + - "AES256" + - AES256 #### --s3-sse-customer-key @@ -1389,8 +2986,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --s3-sse-customer-key-base64 @@ -1406,8 +3003,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --s3-sse-customer-key-md5 @@ -1424,8 +3021,8 @@ Properties: - Type: string - Required: false - Examples: - - "" - - None + - "" + - None #### --s3-upload-cutoff @@ -1955,6 +3552,19 @@ Properties: - Type: bool - Default: false +#### --s3-use-data-integrity-protections + +If true use AWS S3 data integrity protections. 
+ +See [AWS Docs on Data Integrity Protections](https://docs.aws.amazon.com/sdkref/latest/guide/feature-dataintegrity.html) + +Properties: + +- Config: use_data_integrity_protections +- Env Var: RCLONE_S3_USE_DATA_INTEGRITY_PROTECTIONS +- Type: Tristate +- Default: unset + #### --s3-versions Include old versions in directory listings. @@ -2291,9 +3901,11 @@ See the [metadata](/docs/#metadata) docs for more info. Here are the commands specific to the s3 backend. -Run them with +Run them with: - rclone backend COMMAND remote: +```console +rclone backend COMMAND remote: +``` The help below will explain what arguments each command takes. @@ -2305,114 +3917,137 @@ These can be run on a running backend using the rc command ### restore -Restore objects from GLACIER or INTELLIGENT-TIERING archive tier +Restore objects from GLACIER or INTELLIGENT-TIERING archive tier. - rclone backend restore remote: [options] [+] +```console +rclone backend restore remote: [options] [+] +``` -This command can be used to restore one or more objects from GLACIER to normal storage -or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier. +This command can be used to restore one or more objects from GLACIER to normal +storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier +to the Frequent Access tier. -Usage Examples: +Usage examples: - rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS - rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS - rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS - rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY +```console +rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS +rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS +rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS +rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY +``` -This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags +This flag also obeys the filters. Test first with --interactive/-i or --dry-run +flags. - rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 +```console +rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 +``` -All the objects shown will be marked for restore, then +All the objects shown will be marked for restore, then: - rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 +```console +rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1 +``` It returns a list of status dictionaries with Remote and Status keys. The Status will be OK if it was successful or an error message if not. - [ - { - "Status": "OK", - "Remote": "test.txt" - }, - { - "Status": "OK", - "Remote": "test/file4.txt" - } - ] - - +```json +[ + { + "Status": "OK", + "Remote": "test.txt" + }, + { + "Status": "OK", + "Remote": "test/file4.txt" + } +] +``` Options: - "description": The optional description for the job. -- "lifetime": Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING storage +- "lifetime": Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING +storage. 
- "priority": Priority of restore: Standard|Expedited|Bulk ### restore-status -Show the restore status for objects being restored from GLACIER or INTELLIGENT-TIERING storage +Show the status for objects being restored from GLACIER or INTELLIGENT-TIERING. - rclone backend restore-status remote: [options] [+] +```console +rclone backend restore-status remote: [options] [+] +``` -This command can be used to show the status for objects being restored from GLACIER to normal storage -or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier. +This command can be used to show the status for objects being restored from +GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep +Archive Access tier to the Frequent Access tier. -Usage Examples: +Usage examples: - rclone backend restore-status s3:bucket/path/to/object - rclone backend restore-status s3:bucket/path/to/directory - rclone backend restore-status -o all s3:bucket/path/to/directory +```console +rclone backend restore-status s3:bucket/path/to/object +rclone backend restore-status s3:bucket/path/to/directory +rclone backend restore-status -o all s3:bucket/path/to/directory +``` This command does not obey the filters. -It returns a list of status dictionaries. +It returns a list of status dictionaries: - [ - { - "Remote": "file.txt", - "VersionID": null, - "RestoreStatus": { - "IsRestoreInProgress": true, - "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" - }, - "StorageClass": "GLACIER" +```json +[ + { + "Remote": "file.txt", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": true, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" }, - { - "Remote": "test.pdf", - "VersionID": null, - "RestoreStatus": { - "IsRestoreInProgress": false, - "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" - }, - "StorageClass": "DEEP_ARCHIVE" + "StorageClass": "GLACIER" + }, + { + "Remote": "test.pdf", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": false, + "RestoreExpiryDate": "2023-09-06T12:29:19+01:00" }, - { - "Remote": "test.gz", - "VersionID": null, - "RestoreStatus": { - "IsRestoreInProgress": true, - "RestoreExpiryDate": "null" - }, - "StorageClass": "INTELLIGENT_TIERING" - } - ] - + "StorageClass": "DEEP_ARCHIVE" + }, + { + "Remote": "test.gz", + "VersionID": null, + "RestoreStatus": { + "IsRestoreInProgress": true, + "RestoreExpiryDate": "null" + }, + "StorageClass": "INTELLIGENT_TIERING" + } +] +``` Options: -- "all": if set then show all objects, not just ones with restore status +- "all": If set then show all objects, not just ones with restore status. ### list-multipart-uploads -List the unfinished multipart uploads +List the unfinished multipart uploads. - rclone backend list-multipart-uploads remote: [options] [+] +```console +rclone backend list-multipart-uploads remote: [options] [+] +``` This command lists the unfinished multipart uploads in JSON format. - rclone backend list-multipart s3:bucket/path/to/object +Usage examples: + +```console +rclone backend list-multipart s3:bucket/path/to/object +``` It returns a dictionary of buckets with values as lists of unfinished multipart uploads. @@ -2420,98 +4055,117 @@ multipart uploads. You can call it with no bucket in which case it lists all bucket, with a bucket or with a bucket and path. 
- { - "rclone": [ +```json +{ + "rclone": [ { - "Initiated": "2020-06-26T14:20:36Z", - "Initiator": { - "DisplayName": "XXX", - "ID": "arn:aws:iam::XXX:user/XXX" - }, - "Key": "KEY", - "Owner": { - "DisplayName": null, - "ID": "XXX" - }, - "StorageClass": "STANDARD", - "UploadId": "XXX" + "Initiated": "2020-06-26T14:20:36Z", + "Initiator": { + "DisplayName": "XXX", + "ID": "arn:aws:iam::XXX:user/XXX" + }, + "Key": "KEY", + "Owner": { + "DisplayName": null, + "ID": "XXX" + }, + "StorageClass": "STANDARD", + "UploadId": "XXX" } - ], - "rclone-1000files": [], - "rclone-dst": [] - } - - + ], + "rclone-1000files": [], + "rclone-dst": [] +} +``` ### cleanup Remove unfinished multipart uploads. - rclone backend cleanup remote: [options] [+] +```console +rclone backend cleanup remote: [options] [+] +``` This command removes unfinished multipart uploads of age greater than max-age which defaults to 24 hours. -Note that you can use --interactive/-i or --dry-run with this command to see what -it would do. +Note that you can use --interactive/-i or --dry-run with this command to see +what it would do. - rclone backend cleanup s3:bucket/path/to/object - rclone backend cleanup -o max-age=7w s3:bucket/path/to/object +Usage examples: + +```console +rclone backend cleanup s3:bucket/path/to/object +rclone backend cleanup -o max-age=7w s3:bucket/path/to/object +``` Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. - Options: -- "max-age": Max age of upload to delete +- "max-age": Max age of upload to delete. ### cleanup-hidden Remove old versions of files. - rclone backend cleanup-hidden remote: [options] [+] +```console +rclone backend cleanup-hidden remote: [options] [+] +``` This command removes any old hidden versions of files on a versions enabled bucket. -Note that you can use --interactive/-i or --dry-run with this command to see what -it would do. +Note that you can use --interactive/-i or --dry-run with this command to see +what it would do. - rclone backend cleanup-hidden s3:bucket/path/to/dir +Usage example: +```console +rclone backend cleanup-hidden s3:bucket/path/to/dir +``` ### versioning Set/get versioning support for a bucket. - rclone backend versioning remote: [options] [+] +```console +rclone backend versioning remote: [options] [+] +``` This command sets versioning support if a parameter is passed and then returns the current versioning status for the bucket supplied. - rclone backend versioning s3:bucket # read status only - rclone backend versioning s3:bucket Enabled - rclone backend versioning s3:bucket Suspended +Usage examples: -It may return "Enabled", "Suspended" or "Unversioned". Note that once versioning -has been enabled the status can't be set back to "Unversioned". +```console +rclone backend versioning s3:bucket # read status only +rclone backend versioning s3:bucket Enabled +rclone backend versioning s3:bucket Suspended +``` +It may return "Enabled", "Suspended" or "Unversioned". Note that once +versioning has been enabled the status can't be set back to "Unversioned". ### set Set command for updating the config parameters. - rclone backend set remote: [options] [+] +```console +rclone backend set remote: [options] [+] +``` This set command can be used to update the config parameters for a running s3 backend. 
-Usage Examples: +Usage examples: - rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] - rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X +```console +rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2] +rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X +``` The option keys are named as they are in the config file. @@ -2521,7 +4175,6 @@ will default to those currently in use. It doesn't return anything. - ### Anonymous access to public buckets {#anonymous-access} @@ -6575,7 +8228,7 @@ endpoint = s3.ru-1.storage.selcloud.ru ### Servercore {#servercore} -[Servercore Object Storage](https://servercore.io/object-storage/) is an S3 +[Servercore Object Storage](https://servercore.com/services/object-storage/) is an S3 compatible object storage system that provides scalable and secure storage solutions for businesses of all sizes. diff --git a/docs/content/seafile.md b/docs/content/seafile.md index 2c5777adf..5d43534aa 100644 --- a/docs/content/seafile.md +++ b/docs/content/seafile.md @@ -312,8 +312,8 @@ Properties: - Type: string - Required: true - Examples: - - "https://cloud.seafile.com/" - - Connect to cloud.seafile.com. + - "https://cloud.seafile.com/" + - Connect to cloud.seafile.com. #### --seafile-user diff --git a/docs/content/sftp.md b/docs/content/sftp.md index 58e155296..be7aa96ae 100644 --- a/docs/content/sftp.md +++ b/docs/content/sftp.md @@ -601,10 +601,10 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Use default Cipher list. - - "true" - - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. + - "false" + - Use default Cipher list. + - "true" + - Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange. #### --sftp-disable-hashcheck @@ -674,8 +674,8 @@ Properties: - Type: string - Required: false - Examples: - - "~/.ssh/known_hosts" - - Use OpenSSH's known_hosts file. + - "~/.ssh/known_hosts" + - Use OpenSSH's known_hosts file. #### --sftp-ask-password @@ -751,14 +751,14 @@ Properties: - Type: string - Required: false - Examples: - - "none" - - No shell access - - "unix" - - Unix shell - - "powershell" - - PowerShell - - "cmd" - - Windows Command Prompt + - "none" + - No shell access + - "unix" + - Unix shell + - "powershell" + - PowerShell + - "cmd" + - Windows Command Prompt #### --sftp-hashes diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md index 60658c5d6..9446f02ee 100644 --- a/docs/content/sharefile.md +++ b/docs/content/sharefile.md @@ -204,16 +204,16 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Access the Personal Folders (default). - - "favorites" - - Access the Favorites folder. - - "allshared" - - Access all the shared folders. - - "connectors" - - Access all the individual connectors. - - "top" - - Access the home, favorites, and shared folders as well as the connectors. + - "" + - Access the Personal Folders (default). + - "favorites" + - Access the Favorites folder. + - "allshared" + - Access all the shared folders. + - "connectors" + - Access all the individual connectors. 
+ - "top" + - Access the home, favorites, and shared folders as well as the connectors. ### Advanced options diff --git a/docs/content/storj.md b/docs/content/storj.md index b8c590034..66e1f92d3 100644 --- a/docs/content/storj.md +++ b/docs/content/storj.md @@ -241,10 +241,10 @@ Properties: - Type: string - Default: "existing" - Examples: - - "existing" - - Use an existing access grant. - - "new" - - Create a new access grant from satellite address, API key, and passphrase. + - "existing" + - Use an existing access grant. + - "new" + - Create a new access grant from satellite address, API key, and passphrase. #### --storj-access-grant @@ -272,12 +272,12 @@ Properties: - Type: string - Default: "us1.storj.io" - Examples: - - "us1.storj.io" - - US1 - - "eu1.storj.io" - - EU1 - - "ap1.storj.io" - - AP1 + - "us1.storj.io" + - US1 + - "eu1.storj.io" + - EU1 + - "ap1.storj.io" + - AP1 #### --storj-api-key diff --git a/docs/content/swift.md b/docs/content/swift.md index a5e5b3103..6c41809bc 100644 --- a/docs/content/swift.md +++ b/docs/content/swift.md @@ -274,11 +274,11 @@ Properties: - Type: bool - Default: false - Examples: - - "false" - - Enter swift credentials in the next step. - - "true" - - Get swift credentials from environment vars. - - Leave other fields blank if using this. + - "false" + - Enter swift credentials in the next step. + - "true" + - Get swift credentials from environment vars. + - Leave other fields blank if using this. #### --swift-user @@ -313,20 +313,20 @@ Properties: - Type: string - Required: false - Examples: - - "https://auth.api.rackspacecloud.com/v1.0" - - Rackspace US - - "https://lon.auth.api.rackspacecloud.com/v1.0" - - Rackspace UK - - "https://identity.api.rackspacecloud.com/v2.0" - - Rackspace v2 - - "https://auth.storage.memset.com/v1.0" - - Memset Memstore UK - - "https://auth.storage.memset.com/v2.0" - - Memset Memstore UK v2 - - "https://auth.cloud.ovh.net/v3" - - OVH - - "https://authenticate.ain.net" - - Blomp Cloud Storage + - "https://auth.api.rackspacecloud.com/v1.0" + - Rackspace US + - "https://lon.auth.api.rackspacecloud.com/v1.0" + - Rackspace UK + - "https://identity.api.rackspacecloud.com/v2.0" + - Rackspace v2 + - "https://auth.storage.memset.com/v1.0" + - Memset Memstore UK + - "https://auth.storage.memset.com/v2.0" + - Memset Memstore UK v2 + - "https://auth.cloud.ovh.net/v3" + - OVH + - "https://authenticate.ain.net" + - Blomp Cloud Storage #### --swift-user-id @@ -471,12 +471,12 @@ Properties: - Type: string - Default: "public" - Examples: - - "public" - - Public (default, choose this if not sure) - - "internal" - - Internal (use internal service net) - - "admin" - - Admin + - "public" + - Public (default, choose this if not sure) + - "internal" + - Internal (use internal service net) + - "admin" + - Admin #### --swift-storage-policy @@ -494,12 +494,12 @@ Properties: - Type: string - Required: false - Examples: - - "" - - Default - - "pcs" - - OVH Public Cloud Storage - - "pca" - - OVH Public Cloud Archive + - "" + - Default + - "pcs" + - OVH Public Cloud Storage + - "pca" + - OVH Public Cloud Archive ### Advanced options diff --git a/docs/content/webdav.md b/docs/content/webdav.md index e30ef76cb..f7b60265d 100644 --- a/docs/content/webdav.md +++ b/docs/content/webdav.md @@ -150,22 +150,22 @@ Properties: - Type: string - Required: false - Examples: - - "fastmail" - - Fastmail Files - - "nextcloud" - - Nextcloud - - "owncloud" - - Owncloud 10 PHP based WebDAV server - - "infinitescale" - - ownCloud Infinite Scale - - "sharepoint" - - 
Sharepoint Online, authenticated by Microsoft account - - "sharepoint-ntlm" - - Sharepoint with NTLM authentication, usually self-hosted or on-premises - - "rclone" - - rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol - - "other" - - Other site/service or software + - "fastmail" + - Fastmail Files + - "nextcloud" + - Nextcloud + - "owncloud" + - Owncloud 10 PHP based WebDAV server + - "infinitescale" + - ownCloud Infinite Scale + - "sharepoint" + - Sharepoint Online, authenticated by Microsoft account + - "sharepoint-ntlm" + - Sharepoint with NTLM authentication, usually self-hosted or on-premises + - "rclone" + - rclone WebDAV server to serve a remote over HTTP via the WebDAV protocol + - "other" + - Other site/service or software #### --webdav-user diff --git a/docs/content/zoho.md b/docs/content/zoho.md index 575ac3bda..6c596b953 100644 --- a/docs/content/zoho.md +++ b/docs/content/zoho.md @@ -182,18 +182,18 @@ Properties: - Type: string - Required: false - Examples: - - "com" - - United states / Global - - "eu" - - Europe - - "in" - - India - - "jp" - - Japan - - "com.cn" - - China - - "com.au" - - Australia + - "com" + - United states / Global + - "eu" + - Europe + - "in" + - India + - "jp" + - Japan + - "com.cn" + - China + - "com.au" + - Australia ### Advanced options diff --git a/lib/transform/transform.md b/lib/transform/transform.md index 42a31ce09..bb8d08e0e 100644 --- a/lib/transform/transform.md +++ b/lib/transform/transform.md @@ -7,7 +7,7 @@ | `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. | | `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. | | `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. | -| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. | +| `--name-transform regex=pattern/replacement` | Applies a regex-based transformation. | | `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. | | `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. | | `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. 
| @@ -152,81 +152,81 @@ SquareBracket Examples: ```console -$ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase" -STORIES/THE QUICK BROWN FOX!.TXT +rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase" +// Output: STORIES/THE QUICK BROWN FOX!.TXT ``` ```console -$ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow" -stories/The Slow Brown Turtle!.txt +rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow" +// Output: stories/The Slow Brown Turtle!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode" -c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0 +rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode" +// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0 ``` ```console -$ rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode" -stories/The Quick Brown Fox!.txt +rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode" +// Output: stories/The Quick Brown Fox!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc" -stories/The Quick Brown 🦊 Fox Went to the Café!.txt +rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc" +// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd" -stories/The Quick Brown 🦊 Fox Went to the Café!.txt +rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd" +// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii" -stories/The Quick Brown Fox!.txt +rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii" +// Output: stories/The Quick Brown Fox!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt" -stories/The Quick Brown Fox! +rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt" +// Output: stories/The Quick Brown Fox! 
``` ```console -$ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_" -OLD_stories/OLD_The Quick Brown Fox!.txt +rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_" +// Output: OLD_stories/OLD_The Quick Brown Fox!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7" -stories/The Quick Brown _ Fox Went to the Caf_!.txt +rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7" +// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket" -stories/The Quick Brown Fox: A Memoir [draft].txt +rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket" +// Output: stories/The Quick Brown Fox: A Memoir [draft].txt ``` ```console -$ rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21" -stories/The Quick Brown 🦊 Fox +rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21" +// Output: stories/The Quick Brown 🦊 Fox ``` ```console -$ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo" -stories/The Quick Brown Fox!.txt +rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo" +// Output: stories/The Quick Brown Fox!.txt ``` ```console -$ rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" -stories/The Quick Brown Fox!-20250830 +rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" +// Output: stories/The Quick Brown Fox!-20251121 ``` ```console -$ rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" -stories/The Quick Brown Fox!-2025-08-30 1234AM +rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" +// Output: stories/The Quick Brown Fox!-2025-11-21 0508PM ``` ```console -$ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab" -ababababababab/ababab ababababab ababababab ababab!abababab +rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab" +// Output: ababababababab/ababab ababababab ababababab ababab!abababab ``` diff --git a/rclone.1 b/rclone.1 index 9008bce9c..2c3a0298b 100644 --- a/rclone.1 +++ b/rclone.1 @@ -15,7 +15,7 @@ . ftr VB CB . ftr VBI CBI .\} -.TH "rclone" "1" "Aug 22, 2025" "User Manual" "" +.TH "rclone" "1" "Nov 21, 2025" "User Manual" "" .hy .SH NAME .PP @@ -30,6 +30,7 @@ Usage: Available commands: about Get quota information from the remote. + archive Perform an action on an archive. authorize Remote authorization. backend Run a backend-specific command. bisync Perform bidirectional synchronization between two paths. 
@@ -254,6 +255,8 @@ Cloudflare R2 .IP \[bu] 2 Cloudinary .IP \[bu] 2 +Cubbit DS3 +.IP \[bu] 2 DigitalOcean Spaces .IP \[bu] 2 Digi Storage @@ -270,6 +273,8 @@ Fastmail Files .IP \[bu] 2 FileLu Cloud Storage .IP \[bu] 2 +FileLu S5 (S3-Compatible Object Storage) +.IP \[bu] 2 Files.com .IP \[bu] 2 FlashBlade @@ -286,12 +291,16 @@ Google Photos .IP \[bu] 2 HDFS .IP \[bu] 2 +Hetzner Object Storage +.IP \[bu] 2 Hetzner Storage Box .IP \[bu] 2 HiDrive .IP \[bu] 2 HTTP .IP \[bu] 2 +Huawei OBS +.IP \[bu] 2 iCloud Drive .IP \[bu] 2 ImageKit @@ -304,6 +313,8 @@ IBM COS S3 .IP \[bu] 2 IDrive e2 .IP \[bu] 2 +Intercolo Object Storage +.IP \[bu] 2 IONOS Cloud .IP \[bu] 2 Koofr @@ -376,8 +387,14 @@ Qiniu Cloud Object Storage (Kodo) .IP \[bu] 2 Quatrix by Maytech .IP \[bu] 2 +Rabata Cloud Storage +.IP \[bu] 2 +RackCorp Object Storage +.IP \[bu] 2 Rackspace Cloud Files .IP \[bu] 2 +Rclone Serve S3 +.IP \[bu] 2 rsync.net .IP \[bu] 2 Scaleway @@ -390,12 +407,16 @@ SeaweedFS .IP \[bu] 2 Selectel .IP \[bu] 2 +Servercore Object Storage +.IP \[bu] 2 SFTP .IP \[bu] 2 Sia .IP \[bu] 2 SMB / CIFS .IP \[bu] 2 +Spectra Logic +.IP \[bu] 2 StackPath .IP \[bu] 2 Storj @@ -427,6 +448,8 @@ These backends adapt or modify other storage providers: .IP \[bu] 2 Alias: Rename existing remotes .IP \[bu] 2 +Archive: Read archive files +.IP \[bu] 2 Cache: Cache remotes (DEPRECATED) .IP \[bu] 2 Chunker: Split large files @@ -1328,6 +1351,8 @@ Akamai Netstorage (https://rclone.org/netstorage/) .IP \[bu] 2 Alias (https://rclone.org/alias/) .IP \[bu] 2 +Archive (https://rclone.org/archive/) +.IP \[bu] 2 Amazon S3 (https://rclone.org/s3/) .IP \[bu] 2 Backblaze B2 (https://rclone.org/b2/) @@ -1464,7 +1489,7 @@ rclone subcommand [options] \f[R] .fi .PP -A \f[V]subcommand\f[R] is a the rclone operation required, (e.g. +A \f[V]subcommand\f[R] is an rclone operation required (e.g. \f[V]sync\f[R], \f[V]copy\f[R], \f[V]ls\f[R]). .PP An \f[V]option\f[R] is a single letter flag (e.g. @@ -1584,6 +1609,9 @@ remote. rclone config show (https://rclone.org/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote. .IP \[bu] 2 +rclone config string (https://rclone.org/commands/rclone_config_string/) +- Print connection string for a single remote. +.IP \[bu] 2 rclone config touch (https://rclone.org/commands/rclone_config_touch/) - Ensure configuration file exists. .IP \[bu] 2 @@ -1679,7 +1707,8 @@ If metadata syncing is required then use the \f[V]--metadata\f[R] flag. .PP Note that the modification time and metadata for the root directory will \f[B]not\f[R] be synced. -See https://github.com/rclone/rclone/issues/7652 for more info. +See issue #7652 (https://github.com/rclone/rclone/issues/7652) for more +info. .PP \f[B]Note\f[R]: Use the \f[V]-P\f[R]/\f[V]--progress\f[R] flag to view real-time transfer statistics. @@ -1687,7 +1716,7 @@ real-time transfer statistics. \f[B]Note\f[R]: Use the \f[V]--dry-run\f[R] or the \f[V]--interactive\f[R]/\f[V]-i\f[R] flag to test without copying anything. -.SH Logger Flags +.SS Logger Flags .PP The \f[V]--differ\f[R], \f[V]--missing-on-dst\f[R], \f[V]--missing-on-src\f[R], \f[V]--match\f[R] and \f[V]--error\f[R] @@ -1746,7 +1775,7 @@ Possibly some unusual error scenarios .PP Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). 
.IP .nf \f[C] @@ -1774,7 +1803,7 @@ rclone copy source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default \[dq];\[dq]) - -t, --timeformat string Specify a custom time format, or \[aq]max\[aq] for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) \f[R] .fi .PP @@ -1931,7 +1960,7 @@ If metadata syncing is required then use the \f[V]--metadata\f[R] flag. .PP Note that the modification time and metadata for the root directory will \f[B]not\f[R] be synced. -See https://github.com/rclone/rclone/issues/7652 for more info. +See for more info. .PP \f[B]Note\f[R]: Use the \f[V]-P\f[R]/\f[V]--progress\f[R] flag to view real-time transfer statistics @@ -1942,7 +1971,7 @@ ignoring\[dq] errors. See this forum post (https://forum.rclone.org/t/sync-not-clearing-duplicates/14372) for more info. -.SH Logger Flags +.SS Logger Flags .PP The \f[V]--differ\f[R], \f[V]--missing-on-dst\f[R], \f[V]--missing-on-src\f[R], \f[V]--match\f[R] and \f[V]--error\f[R] @@ -2001,7 +2030,7 @@ Possibly some unusual error scenarios .PP Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). .IP .nf \f[C] @@ -2029,7 +2058,7 @@ rclone sync source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default \[dq];\[dq]) - -t, --timeformat string Specify a custom time format, or \[aq]max\[aq] for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) \f[R] .fi .PP @@ -2191,14 +2220,14 @@ If metadata syncing is required then use the \f[V]--metadata\f[R] flag. .PP Note that the modification time and metadata for the root directory will \f[B]not\f[R] be synced. -See https://github.com/rclone/rclone/issues/7652 for more info. +See for more info. .PP \f[B]Important\f[R]: Since this can cause data loss, test first with the \f[V]--dry-run\f[R] or the \f[V]--interactive\f[R]/\f[V]-i\f[R] flag. .PP \f[B]Note\f[R]: Use the \f[V]-P\f[R]/\f[V]--progress\f[R] flag to view real-time transfer statistics. -.SH Logger Flags +.SS Logger Flags .PP The \f[V]--differ\f[R], \f[V]--missing-on-dst\f[R], \f[V]--missing-on-src\f[R], \f[V]--match\f[R] and \f[V]--error\f[R] @@ -2257,7 +2286,7 @@ Possibly some unusual error scenarios .PP Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). 
.IP .nf \f[C] @@ -2286,7 +2315,7 @@ rclone move source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default \[dq];\[dq]) - -t, --timeformat string Specify a custom time format, or \[aq]max\[aq] for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) \f[R] .fi .PP @@ -2796,7 +2825,7 @@ Lists the objects in the source path to standard output in a human readable format with size and path. Recurses by default. .PP -Eg +E.g. .IP .nf \f[C] @@ -2910,7 +2939,7 @@ Use the \f[V]-R\f[R] flag to recurse. This command lists the total size of the directory (if known, -1 if not), the modification time (if known, the current time if not), the number of objects in the directory (if known, -1 if not) and the name of -the directory, Eg +the directory, E.g. .IP .nf \f[C] @@ -3034,7 +3063,7 @@ Lists the objects in the source path to standard output in a human readable format with modification time, size and path. Recurses by default. .PP -Eg +E.g. .IP .nf \f[C] @@ -3762,11 +3791,11 @@ e.g. .nf \f[C] { - \[dq]total\[dq]: 18253611008, - \[dq]used\[dq]: 7993453766, - \[dq]trashed\[dq]: 104857602, - \[dq]other\[dq]: 8849156022, - \[dq]free\[dq]: 1411001220 + \[dq]total\[dq]: 18253611008, + \[dq]used\[dq]: 7993453766, + \[dq]trashed\[dq]: 104857602, + \[dq]other\[dq]: 8849156022, + \[dq]free\[dq]: 1411001220 } \f[R] .fi @@ -3800,6 +3829,394 @@ not listed here. .IP \[bu] 2 rclone (https://rclone.org/commands/rclone/) - Show help for rclone commands, flags and backends. +.SH rclone archive +.PP +Perform an action on an archive. +.SS Synopsis +.PP +Perform an action on an archive. +Requires the use of a subcommand to specify the protocol, e.g. +.IP +.nf +\f[C] +rclone archive list remote:file.zip +\f[R] +.fi +.PP +Each subcommand has its own options which you can see in their help. +.PP +See rclone archive +create (https://rclone.org/commands/rclone_archive_create/) for the +archive formats supported. +.IP +.nf +\f[C] +rclone archive [opts] [] [flags] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + -h, --help help for archive +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS See Also +.IP \[bu] 2 +rclone (https://rclone.org/commands/rclone/) - Show help for rclone +commands, flags and backends. +.IP \[bu] 2 +rclone archive +create (https://rclone.org/commands/rclone_archive_create/) - Archive +source file(s) to destination. +.IP \[bu] 2 +rclone archive +extract (https://rclone.org/commands/rclone_archive_extract/) - Extract +archives from source to destination. +.IP \[bu] 2 +rclone archive list (https://rclone.org/commands/rclone_archive_list/) - +List archive contents from source. +.SH rclone archive create +.PP +Archive source file(s) to destination. +.SS Synopsis +.PP +Creates an archive from the files in source:path and saves the archive +to dest:path. +If dest:path is missing, it will write to the console. +.PP +The valid formats for the \f[V]--format\f[R] flag are listed below. +If \f[V]--format\f[R] is not set rclone will guess it from the extension +of dest:path. +.PP +.TS +tab(@); +l l. 
+T{ +Format +T}@T{ +Extensions +T} +_ +T{ +zip +T}@T{ +\&.zip +T} +T{ +tar +T}@T{ +\&.tar +T} +T{ +tar.gz +T}@T{ +\&.tar.gz, .tgz, .taz +T} +T{ +tar.bz2 +T}@T{ +\&.tar.bz2, .tb2, .tbz, .tbz2, .tz2 +T} +T{ +tar.lz +T}@T{ +\&.tar.lz +T} +T{ +tar.lz4 +T}@T{ +\&.tar.lz4 +T} +T{ +tar.xz +T}@T{ +\&.tar.xz, .txz +T} +T{ +tar.zst +T}@T{ +\&.tar.zst, .tzst +T} +T{ +tar.br +T}@T{ +\&.tar.br +T} +T{ +tar.sz +T}@T{ +\&.tar.sz +T} +T{ +tar.mz +T}@T{ +\&.tar.mz +T} +.TE +.PP +The \f[V]--prefix\f[R] and \f[V]--full-path\f[R] flags control the +prefix for the files in the archive. +.PP +If the flag \f[V]--full-path\f[R] is set then the files will have the +full source path as the prefix. +.PP +If the flag \f[V]--prefix=\f[R] is set then the files will have +\f[V]\f[R] as prefix. +It\[aq]s possible to create invalid file names with +\f[V]--prefix=\f[R] so use with caution. +Flag \f[V]--prefix\f[R] has priority over \f[V]--full-path\f[R]. +.PP +Given a directory \f[V]/sourcedir\f[R] with the following: +.IP +.nf +\f[C] +file1.txt +dir1/file2.txt +\f[R] +.fi +.PP +Running the command +\f[V]rclone archive create /sourcedir /dest.tar.gz\f[R] will make an +archive with the contents: +.IP +.nf +\f[C] +file1.txt +dir1/ +dir1/file2.txt +\f[R] +.fi +.PP +Running the command +\f[V]rclone archive create --full-path /sourcedir /dest.tar.gz\f[R] will +make an archive with the contents: +.IP +.nf +\f[C] +sourcedir/file1.txt +sourcedir/dir1/ +sourcedir/dir1/file2.txt +\f[R] +.fi +.PP +Running the command +\f[V]rclone archive create --prefix=my_new_path /sourcedir /dest.tar.gz\f[R] +will make an archive with the contents: +.IP +.nf +\f[C] +my_new_path/file1.txt +my_new_path/dir1/ +my_new_path/dir1/file2.txt +\f[R] +.fi +.IP +.nf +\f[C] +rclone archive create [flags] [] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + --format string Create the archive with format or guess from extension. + --full-path Set prefix for files in archive to source path + -h, --help help for create + --prefix string Set prefix for files in archive to entered value or source path +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS See Also +.IP \[bu] 2 +rclone archive (https://rclone.org/commands/rclone_archive/) - Perform +an action on an archive. +.SH rclone archive extract +.PP +Extract archives from source to destination. +.SS Synopsis +.PP +Extract the archive contents to a destination directory auto detecting +the format. +See rclone archive +create (https://rclone.org/commands/rclone_archive_create/) for the +archive formats supported. +.PP +For example on this archive: +.IP +.nf +\f[C] +$ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +\f[R] +.fi +.PP +You can run extract like this +.IP +.nf +\f[C] +$ rclone archive extract remote:archive.zip remote:extracted +\f[R] +.fi +.PP +Which gives this result +.IP +.nf +\f[C] +$ rclone tree remote:extracted +/ +├── dir +│ └── bye.txt +└── file.txt +\f[R] +.fi +.PP +The source or destination or both can be local or remote. +.PP +Filters can be used to only extract certain files: +.IP +.nf +\f[C] +$ rclone archive extract archive.zip partial --include \[dq]bye.*\[dq] +$ rclone tree partial +/ +└── dir + └── bye.txt +\f[R] +.fi +.PP +The archive backend (https://rclone.org/archive/) can also be used to +extract files. 
+It can be used to read only mount archives also but it supports a +different set of archive formats to the archive commands. +.IP +.nf +\f[C] +rclone archive extract [flags] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + -h, --help help for extract +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS See Also +.IP \[bu] 2 +rclone archive (https://rclone.org/commands/rclone_archive/) - Perform +an action on an archive. +.SH rclone archive list +.PP +List archive contents from source. +.SS Synopsis +.PP +List the contents of an archive to the console, auto detecting the +format. +See rclone archive +create (https://rclone.org/commands/rclone_archive_create/) for the +archive formats supported. +.PP +For example: +.IP +.nf +\f[C] +$ rclone archive list remote:archive.zip + 6 file.txt + 0 dir/ + 4 dir/bye.txt +\f[R] +.fi +.PP +Or with \f[V]--long\f[R] flag for more info: +.IP +.nf +\f[C] +$ rclone archive list --long remote:archive.zip + 6 2025-10-30 09:46:23.000000000 file.txt + 0 2025-10-30 09:46:57.000000000 dir/ + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +\f[R] +.fi +.PP +Or with \f[V]--plain\f[R] flag which is useful for scripting: +.IP +.nf +\f[C] +$ rclone archive list --plain /path/to/archive.zip +file.txt +dir/ +dir/bye.txt +\f[R] +.fi +.PP +Or with \f[V]--dirs-only\f[R]: +.IP +.nf +\f[C] +$ rclone archive list --plain --dirs-only /path/to/archive.zip +dir/ +\f[R] +.fi +.PP +Or with \f[V]--files-only\f[R]: +.IP +.nf +\f[C] +$ rclone archive list --plain --files-only /path/to/archive.zip +file.txt +dir/bye.txt +\f[R] +.fi +.PP +Filters may also be used: +.IP +.nf +\f[C] +$ rclone archive list --long archive.zip --include \[dq]bye.*\[dq] + 4 2025-10-30 09:46:57.000000000 dir/bye.txt +\f[R] +.fi +.PP +The archive backend (https://rclone.org/archive/) can also be used to +list files. +It can be used to read only mount archives also but it supports a +different set of archive formats to the archive commands. +.IP +.nf +\f[C] +rclone archive list [flags] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + --dirs-only Only list directories + --files-only Only list files + -h, --help help for list + --long List extra attributtes + --plain Only list file names +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS See Also +.IP \[bu] 2 +rclone archive (https://rclone.org/commands/rclone_archive/) - Perform +an action on an archive. .SH rclone authorize .PP Remote authorization. @@ -3807,13 +4224,19 @@ Remote authorization. .PP Remote authorization. Used to authorize a remote or headless rclone from a machine with a -browser - use as instructed by rclone config. +browser. +Use as instructed by rclone config. +See also the remote setup documentation. .PP -The command requires 1-3 arguments: - fs name (e.g., \[dq]drive\[dq], -\[dq]s3\[dq], etc.) -- Either a base64 encoded JSON blob obtained from a previous rclone -config session - Or a client_id and client_secret pair obtained from the -remote service +The command requires 1-3 arguments: +.IP \[bu] 2 +Name of a backend (e.g. +\[dq]drive\[dq], \[dq]s3\[dq]) +.IP \[bu] 2 +Either a base64 encoded JSON blob obtained from a previous rclone config +session +.IP \[bu] 2 +Or a client_id and client_secret pair obtained from the remote service .PP Use --auth-no-open-browser to prevent rclone to open auth link in default browser automatically. @@ -3824,7 +4247,7 @@ template is used. 
.IP .nf \f[C] -rclone authorize [base64_json_blob | client_id client_secret] [flags] +rclone authorize [base64_json_blob | client_id client_secret] [flags] \f[R] .fi .SS Options @@ -3935,11 +4358,13 @@ Perform bidirectional synchronization between two paths. Bisync (https://rclone.org/bisync/) provides a bidirectional cloud sync solution in rclone. It retains the Path1 and Path2 filesystem listings from the prior run. -On each successive run it will: - list files on Path1 and Path2, and -check for changes on each side. +On each successive run it will: +.IP \[bu] 2 +list files on Path1 and Path2, and check for changes on each side. Changes include \f[V]New\f[R], \f[V]Newer\f[R], \f[V]Older\f[R], and \f[V]Deleted\f[R] files. -- Propagate changes on Path1 to Path2, and vice-versa. +.IP \[bu] 2 +Propagate changes on Path1 to Path2, and vice-versa. .PP Bisync is considered an \f[B]advanced command\f[R], so use with care. Make sure you have read and understood the entire @@ -4632,27 +5057,27 @@ This will look something like (some irrelevant detail removed): .nf \f[C] { - \[dq]State\[dq]: \[dq]*oauth-islocal,teamdrive,,\[dq], - \[dq]Option\[dq]: { - \[dq]Name\[dq]: \[dq]config_is_local\[dq], - \[dq]Help\[dq]: \[dq]Use web browser to automatically authenticate rclone with remote?\[rs]n * Say Y if the machine running rclone has a web browser you can use\[rs]n * Say N if running rclone on a (remote) machine without web browser access\[rs]nIf not sure try Y. If Y failed, try N.\[rs]n\[dq], - \[dq]Default\[dq]: true, - \[dq]Examples\[dq]: [ - { - \[dq]Value\[dq]: \[dq]true\[dq], - \[dq]Help\[dq]: \[dq]Yes\[dq] - }, - { - \[dq]Value\[dq]: \[dq]false\[dq], - \[dq]Help\[dq]: \[dq]No\[dq] - } - ], - \[dq]Required\[dq]: false, - \[dq]IsPassword\[dq]: false, - \[dq]Type\[dq]: \[dq]bool\[dq], - \[dq]Exclusive\[dq]: true, - }, - \[dq]Error\[dq]: \[dq]\[dq], + \[dq]State\[dq]: \[dq]*oauth-islocal,teamdrive,,\[dq], + \[dq]Option\[dq]: { + \[dq]Name\[dq]: \[dq]config_is_local\[dq], + \[dq]Help\[dq]: \[dq]Use web browser to automatically authenticate rclone with remote?\[rs]n * Say Y if the machine running rclone has a web browser you can use\[rs]n * Say N if running rclone on a (remote) machine without web browser access\[rs]nIf not sure try Y. If Y failed, try N.\[rs]n\[dq], + \[dq]Default\[dq]: true, + \[dq]Examples\[dq]: [ + { + \[dq]Value\[dq]: \[dq]true\[dq], + \[dq]Help\[dq]: \[dq]Yes\[dq] + }, + { + \[dq]Value\[dq]: \[dq]false\[dq], + \[dq]Help\[dq]: \[dq]No\[dq] + } + ], + \[dq]Required\[dq]: false, + \[dq]IsPassword\[dq]: false, + \[dq]Type\[dq]: \[dq]bool\[dq], + \[dq]Exclusive\[dq]: true, + }, + \[dq]Error\[dq]: \[dq]\[dq], } \f[R] .fi @@ -5191,6 +5616,53 @@ not listed here. .IP \[bu] 2 rclone config (https://rclone.org/commands/rclone_config/) - Enter an interactive configuration session. +.SH rclone config string +.PP +Print connection string for a single remote. +.SS Synopsis +.PP +Print a connection string for a single remote. +.PP +The connection strings (https://rclone.org/docs/#connection-strings) can +be used wherever a remote is needed and can be more convenient than +using the config file, especially if using the RC API. +.PP +Backend parameters may be provided to the command also. +.PP +Example: +.IP +.nf +\f[C] +$ rclone config string s3:rclone --s3-no-check-bucket +:s3,access_key_id=XXX,no_check_bucket,provider=AWS,region=eu-west-2,secret_access_key=YYY:rclone +\f[R] +.fi +.PP +\f[B]NB\f[R] the strings are not quoted for use in shells (eg bash, +powershell, windows cmd). 
+Most will work if enclosed in \[dq]double quotes\[dq], however +connection strings that contain double quotes will require further +quoting which is very shell dependent. +.IP +.nf +\f[C] +rclone config string [flags] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + -h, --help help for string +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS See Also +.IP \[bu] 2 +rclone config (https://rclone.org/commands/rclone_config/) - Enter an +interactive configuration session. .SH rclone config touch .PP Ensure configuration file exists. @@ -5272,27 +5744,27 @@ This will look something like (some irrelevant detail removed): .nf \f[C] { - \[dq]State\[dq]: \[dq]*oauth-islocal,teamdrive,,\[dq], - \[dq]Option\[dq]: { - \[dq]Name\[dq]: \[dq]config_is_local\[dq], - \[dq]Help\[dq]: \[dq]Use web browser to automatically authenticate rclone with remote?\[rs]n * Say Y if the machine running rclone has a web browser you can use\[rs]n * Say N if running rclone on a (remote) machine without web browser access\[rs]nIf not sure try Y. If Y failed, try N.\[rs]n\[dq], - \[dq]Default\[dq]: true, - \[dq]Examples\[dq]: [ - { - \[dq]Value\[dq]: \[dq]true\[dq], - \[dq]Help\[dq]: \[dq]Yes\[dq] - }, - { - \[dq]Value\[dq]: \[dq]false\[dq], - \[dq]Help\[dq]: \[dq]No\[dq] - } - ], - \[dq]Required\[dq]: false, - \[dq]IsPassword\[dq]: false, - \[dq]Type\[dq]: \[dq]bool\[dq], - \[dq]Exclusive\[dq]: true, - }, - \[dq]Error\[dq]: \[dq]\[dq], + \[dq]State\[dq]: \[dq]*oauth-islocal,teamdrive,,\[dq], + \[dq]Option\[dq]: { + \[dq]Name\[dq]: \[dq]config_is_local\[dq], + \[dq]Help\[dq]: \[dq]Use web browser to automatically authenticate rclone with remote?\[rs]n * Say Y if the machine running rclone has a web browser you can use\[rs]n * Say N if running rclone on a (remote) machine without web browser access\[rs]nIf not sure try Y. If Y failed, try N.\[rs]n\[dq], + \[dq]Default\[dq]: true, + \[dq]Examples\[dq]: [ + { + \[dq]Value\[dq]: \[dq]true\[dq], + \[dq]Help\[dq]: \[dq]Yes\[dq] + }, + { + \[dq]Value\[dq]: \[dq]false\[dq], + \[dq]Help\[dq]: \[dq]No\[dq] + } + ], + \[dq]Required\[dq]: false, + \[dq]IsPassword\[dq]: false, + \[dq]Type\[dq]: \[dq]bool\[dq], + \[dq]Exclusive\[dq]: true, + }, + \[dq]Error\[dq]: \[dq]\[dq], } \f[R] .fi @@ -5453,7 +5925,7 @@ T}@T{ Removes XXXX if it appears at the end of the file name. T} T{ -\f[V]--name-transform regex=/pattern/replacement/\f[R] +\f[V]--name-transform regex=pattern/replacement\f[R] T}@T{ Applies a regex-based transformation. T} @@ -5473,6 +5945,23 @@ T}@T{ Truncates the file name to a maximum of N characters. T} T{ +\f[V]--name-transform truncate_keep_extension=N\f[R] +T}@T{ +Truncates the file name to a maximum of N characters while preserving +the original file extension. +T} +T{ +\f[V]--name-transform truncate_bytes=N\f[R] +T}@T{ +Truncates the file name to a maximum of N bytes (not characters). +T} +T{ +\f[V]--name-transform truncate_bytes_keep_extension=N\f[R] +T}@T{ +Truncates the file name to a maximum of N bytes (not characters) while +preserving the original file extension. +T} +T{ \f[V]--name-transform base64encode\f[R] T}@T{ Encodes the file name in Base64. @@ -5546,7 +6035,7 @@ T} T{ \f[V]--name-transform command=/path/to/my/programfile names.\f[R] T}@T{ -Executes an external program to transform +Executes an external program to transform. 
T} .TE .PP @@ -5554,35 +6043,38 @@ Conversion modes: .IP .nf \f[C] -none -nfc -nfd -nfkc -nfkd -replace -prefix -suffix -suffix_keep_extension -trimprefix -trimsuffix -index -date -truncate -base64encode -base64decode -encoder -decoder -ISO-8859-1 -Windows-1252 -Macintosh -charmap -lowercase -uppercase -titlecase -ascii -url -regex -command +none +nfc +nfd +nfkc +nfkd +replace +prefix +suffix +suffix_keep_extension +trimprefix +trimsuffix +index +date +truncate +truncate_keep_extension +truncate_bytes +truncate_bytes_keep_extension +base64encode +base64decode +encoder +decoder +ISO-8859-1 +Windows-1252 +Macintosh +charmap +lowercase +uppercase +titlecase +ascii +url +regex +command \f[R] .fi .PP @@ -5590,49 +6082,48 @@ Char maps: .IP .nf \f[C] - -IBM-Code-Page-037 -IBM-Code-Page-437 -IBM-Code-Page-850 -IBM-Code-Page-852 -IBM-Code-Page-855 -Windows-Code-Page-858 -IBM-Code-Page-860 -IBM-Code-Page-862 -IBM-Code-Page-863 -IBM-Code-Page-865 -IBM-Code-Page-866 -IBM-Code-Page-1047 -IBM-Code-Page-1140 -ISO-8859-1 -ISO-8859-2 -ISO-8859-3 -ISO-8859-4 -ISO-8859-5 -ISO-8859-6 -ISO-8859-7 -ISO-8859-8 -ISO-8859-9 -ISO-8859-10 -ISO-8859-13 -ISO-8859-14 -ISO-8859-15 -ISO-8859-16 -KOI8-R -KOI8-U -Macintosh -Macintosh-Cyrillic -Windows-874 -Windows-1250 -Windows-1251 -Windows-1252 -Windows-1253 -Windows-1254 -Windows-1255 -Windows-1256 -Windows-1257 -Windows-1258 -X-User-Defined +IBM-Code-Page-037 +IBM-Code-Page-437 +IBM-Code-Page-850 +IBM-Code-Page-852 +IBM-Code-Page-855 +Windows-Code-Page-858 +IBM-Code-Page-860 +IBM-Code-Page-862 +IBM-Code-Page-863 +IBM-Code-Page-865 +IBM-Code-Page-866 +IBM-Code-Page-1047 +IBM-Code-Page-1140 +ISO-8859-1 +ISO-8859-2 +ISO-8859-3 +ISO-8859-4 +ISO-8859-5 +ISO-8859-6 +ISO-8859-7 +ISO-8859-8 +ISO-8859-9 +ISO-8859-10 +ISO-8859-13 +ISO-8859-14 +ISO-8859-15 +ISO-8859-16 +KOI8-R +KOI8-U +Macintosh +Macintosh-Cyrillic +Windows-874 +Windows-1250 +Windows-1251 +Windows-1252 +Windows-1253 +Windows-1254 +Windows-1255 +Windows-1256 +Windows-1257 +Windows-1258 +X-User-Defined \f[R] .fi .PP @@ -5640,36 +6131,36 @@ Encoding masks: .IP .nf \f[C] -Asterisk - BackQuote - BackSlash - Colon - CrLf - Ctl - Del - Dollar - Dot - DoubleQuote - Exclamation - Hash - InvalidUtf8 - LeftCrLfHtVt - LeftPeriod - LeftSpace - LeftTilde - LtGt - None - Percent - Pipe - Question - Raw - RightCrLfHtVt - RightPeriod - RightSpace - Semicolon - SingleQuote - Slash - SquareBracket +Asterisk +BackQuote +BackSlash +Colon +CrLf +Ctl +Del +Dollar +Dot +DoubleQuote +Exclamation +Hash +InvalidUtf8 +LeftCrLfHtVt +LeftPeriod +LeftSpace +LeftTilde +LtGt +None +Percent +Pipe +Question +Raw +RightCrLfHtVt +RightPeriod +RightSpace +Semicolon +SingleQuote +Slash +SquareBracket \f[R] .fi .PP @@ -5769,14 +6260,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a .nf \f[C] rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq] -// Output: stories/The Quick Brown Fox!-20250618 +// Output: stories/The Quick Brown Fox!-20251121 \f[R] .fi .IP .nf \f[C] rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq] -// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM +// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM \f[R] .fi .IP @@ -5787,12 +6278,21 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a \f[R] .fi .PP +The regex command generally accepts Perl-style regular expressions, the +exact syntax is defined in the Go regular expression +reference 
(https://golang.org/pkg/regexp/syntax/). +The replacement string may contain capturing group variables, +referencing capturing groups using the syntax \f[V]$name\f[R] or +\f[V]${name}\f[R], where the name can refer to a named capturing group +or it can simply be the index as a number. +To insert a literal $, use $$. +.PP Multiple transformations can be used in sequence, applied in the order they are specified on the command line. .PP The \f[V]--name-transform\f[R] flag is also available in \f[V]sync\f[R], \f[V]copy\f[R], and \f[V]move\f[R]. -.SH Files vs Directories +.SS Files vs Directories .PP By default \f[V]--name-transform\f[R] will only apply to file names. The means only the leaf file name will be transformed. @@ -5838,7 +6338,7 @@ For some conversions using all is more likely to be useful, for example Note that \f[V]--name-transform\f[R] may not add path separators \f[V]/\f[R] to the name. This will cause an error. -.SH Ordering and Conflicts +.SS Ordering and Conflicts .IP \[bu] 2 Transformations will be applied in the order specified by the user. .RS 2 @@ -5873,28 +6373,35 @@ Users should be aware that certain combinations may lead to unexpected results and should verify transformations using \f[V]--dry-run\f[R] before execution. .RE -.SH Race Conditions and Non-Deterministic Behavior +.SS Race Conditions and Non-Deterministic Behavior .PP Some transformations, such as \f[V]replace=old:new\f[R], may introduce conflicts where multiple source files map to the same destination name. This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these. -* If two files from the source are transformed into the same name at the +.IP \[bu] 2 +If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic. -* Running rclone check after a sync using such transformations may +.IP \[bu] 2 +Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results. .PP -To minimize risks, users should: * Carefully review transformations that -may introduce conflicts. -* Use \f[V]--dry-run\f[R] to inspect changes before executing a sync -(but keep in mind that it won\[aq]t show the effect of non-deterministic +To minimize risks, users should: +.IP \[bu] 2 +Carefully review transformations that may introduce conflicts. +.IP \[bu] 2 +Use \f[V]--dry-run\f[R] to inspect changes before executing a sync (but +keep in mind that it won\[aq]t show the effect of non-deterministic transformations). -* Avoid transformations that cause multiple distinct source files to map +.IP \[bu] 2 +Avoid transformations that cause multiple distinct source files to map to the same destination name. -* Consider disabling concurrency with \f[V]--transfers=1\f[R] if +.IP \[bu] 2 +Consider disabling concurrency with \f[V]--transfers=1\f[R] if necessary. -* Certain transformations (e.g. +.IP \[bu] 2 +Certain transformations (e.g. \f[V]prefix\f[R]) will have a multiplying effect every time they are used. Avoid these when using \f[V]bisync\f[R]. @@ -6036,8 +6543,9 @@ rclone copyto src dst \f[R] .fi .PP -where src and dst are rclone paths, either remote:path or /path/to/local -or C:. +where src and dst are rclone paths, either \f[V]remote:path\f[R] or +\f[V]/path/to/local\f[R] or +\f[V]C:\[rs]windows\[rs]path\[rs]if\[rs]on\[rs]windows\f[R]. .PP This will: .IP @@ -6056,11 +6564,11 @@ testing by size and modification time or MD5SUM. 
It doesn\[aq]t delete files from the destination. .PP \f[I]If you are looking to copy just a byte range of a file, please see -\[aq]rclone cat --offset X --count Y\[aq]\f[R] +\f[VI]rclone cat --offset X --count Y\f[I].\f[R] .PP \f[B]Note\f[R]: Use the \f[V]-P\f[R]/\f[V]--progress\f[R] flag to view -real-time transfer statistics -.SH Logger Flags +real-time transfer statistics. +.SS Logger Flags .PP The \f[V]--differ\f[R], \f[V]--missing-on-dst\f[R], \f[V]--missing-on-src\f[R], \f[V]--match\f[R] and \f[V]--error\f[R] @@ -6119,7 +6627,7 @@ Possibly some unusual error scenarios .PP Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). .IP .nf \f[C] @@ -6146,7 +6654,7 @@ rclone copyto source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default \[dq];\[dq]) - -t, --timeformat string Specify a custom time format, or \[aq]max\[aq] for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) \f[R] .fi .PP @@ -6273,6 +6781,25 @@ destination if there is one with the same name. .PP Setting \f[V]--stdout\f[R] or making the output file name \f[V]-\f[R] will cause the output to be written to standard output. +.PP +Setting \f[V]--urls\f[R] allows you to input a CSV file of URLs in +format: URL, FILENAME. +If \f[V]--urls\f[R] is in use then replace the URL in the arguments with +the file containing the URLs, e.g.: +.IP +.nf +\f[C] +rclone copyurl --urls myurls.csv remote:dir +\f[R] +.fi +.PP +Missing filenames will be autogenerated equivalent to using +\f[V]--auto-filename\f[R]. +Note that \f[V]--stdout\f[R] and \f[V]--print-filename\f[R] are +incompatible with \f[V]--urls\f[R]. +This will do \f[V]--transfers\f[R] copies in parallel. +Note that if \f[V]--auto-filename\f[R] is desired for all URLs then a +file with only URLs and no filename can be used. .SS Troubleshooting .PP If you can\[aq]t get \f[V]rclone copyurl\f[R] to work then here are some @@ -6306,6 +6833,7 @@ rclone copyurl https://example.com dest:path [flags] --no-clobber Prevent overwriting file with same name -p, --print-filename Print the resulting name from --auto-filename --stdout Write the output to stdout rather than a file + --urls Use a CSV file of links to process multiple URLs \f[R] .fi .PP @@ -6332,7 +6860,7 @@ commands, flags and backends. Cryptcheck checks the integrity of an encrypted remote. .SS Synopsis .PP -Checks a remote against a crypted (https://rclone.org/crypt/) remote. +Checks a remote against an encrypted (https://rclone.org/crypt/) remote. This is the equivalent of running rclone check (https://rclone.org/commands/rclone_check/), but able to check the checksums of the encrypted remote. @@ -6354,7 +6882,7 @@ rclone cryptcheck /path/to/files encryptedremote:path .fi .PP You can use it like this also, but that will involve downloading all the -files in remote:path. +files in \f[V]remote:path\f[R]. .IP .nf \f[C] @@ -6362,7 +6890,8 @@ rclone cryptcheck remote:path encryptedremote:path \f[R] .fi .PP -After it has run it will log the status of the encryptedremote:. 
+After it has run it will log the status of the +\f[V]encryptedremote:\f[R]. .PP If you supply the \f[V]--one-way\f[R] flag, it will only check that files in the source match the files in the destination, not the other @@ -6496,7 +7025,6 @@ use it like this .nf \f[C] rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2 - rclone cryptdecode --reverse encryptedremote: filename1 filename2 \f[R] .fi @@ -6615,13 +7143,16 @@ particular name. This symlink helps git-annex tell rclone it wants to run the \[dq]gitannex\[dq] subcommand. .RS 4 +.PP +Create the helper symlink in \[dq]$HOME/bin\[dq]: .IP .nf \f[C] -# Create the helper symlink in \[dq]$HOME/bin\[dq]. ln -s \[dq]$(realpath rclone)\[dq] \[dq]$HOME/bin/git-annex-remote-rclone-builtin\[dq] -# Verify the new symlink is on your PATH. +Verify the new symlink is on your PATH: + +\[ga]\[ga]\[ga]console which git-annex-remote-rclone-builtin \f[R] .fi @@ -6634,13 +7165,19 @@ This new remote will connect git-annex with the .PP Start by asking git-annex to describe the remote\[aq]s available configuration parameters. +.PP +If you skipped step 1: .IP .nf \f[C] -# If you skipped step 1: git annex initremote MyRemote type=rclone --whatelse - -# If you created a symlink in step 1: +\f[R] +.fi +.PP +If you created a symlink in step 1: +.IP +.nf +\f[C] git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse \f[R] .fi @@ -6736,15 +7273,15 @@ Run without a hash to see the list of all supported hashes, e.g. \f[C] $ rclone hashsum Supported hashes are: - * md5 - * sha1 - * whirlpool - * crc32 - * sha256 - * sha512 - * blake3 - * xxh3 - * xxh128 +- md5 +- sha1 +- whirlpool +- crc32 +- sha256 +- sha512 +- blake3 +- xxh3 +- xxh128 \f[R] .fi .PP @@ -6752,7 +7289,7 @@ Then .IP .nf \f[C] -$ rclone hashsum MD5 remote:path +rclone hashsum MD5 remote:path \f[R] .fi .PP @@ -6852,7 +7389,7 @@ that don\[aq]t will just ignore it. .PP If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always by -default be created with the least constraints \[en] e.g. +default be created with the least constraints - e.g. no expiry, no password protection, accessible without account. .IP .nf @@ -6934,7 +7471,7 @@ By default this will just be the names of the objects and directories, one per line. The directories will have a / suffix. .PP -Eg +E.g. .IP .nf \f[C] @@ -6970,7 +7507,7 @@ So if you wanted the path, size and modification time, you would use \f[V]--format \[dq]pst\[dq]\f[R], or maybe \f[V]--format \[dq]tsp\[dq]\f[R] to put the path last. .PP -Eg +E.g. .IP .nf \f[C] @@ -6998,7 +7535,7 @@ rclone lsf -R --hash MD5 --format hp --separator \[dq] \[dq] --files-only . \f[R] .fi .PP -Eg +E.g. .IP .nf \f[C] @@ -7018,7 +7555,7 @@ By default the separator is \[dq];\[dq] this can be changed with the Note that separators aren\[aq]t escaped in the path so putting it last is a good strategy. .PP -Eg +E.g. .IP .nf \f[C] @@ -7032,9 +7569,9 @@ $ rclone lsf --separator \[dq],\[dq] --format \[dq]tshp\[dq] swift:bucket .fi .PP You can output in CSV standard format. -This will escape things in \[dq] if they contain , +This will escape things in \[dq] if they contain, .PP -Eg +E.g. 
.IP .nf \f[C] @@ -7072,12 +7609,14 @@ rclone lsf remote:path --format pt --time-format \[aq]2006-01-02T15:04:05.999999 rclone lsf remote:path --format pt --time-format RFC3339 rclone lsf remote:path --format pt --time-format DateOnly rclone lsf remote:path --format pt --time-format max +rclone lsf remote:path --format pt --time-format unix +rclone lsf remote:path --format pt --time-format unixnano \f[R] .fi .PP \f[V]--time-format max\f[R] will automatically truncate -\[aq]\f[V]2006-01-02 15:04:05.000000000\f[R]\[aq] to the maximum -precision supported by the remote. +\f[V]2006-01-02 15:04:05.000000000\f[R] to the maximum precision +supported by the remote. .PP Any of the filtering options can be applied to this command. .PP @@ -7127,7 +7666,7 @@ rclone lsf remote:path [flags] -h, --help help for lsf -R, --recursive Recurse into the listing -s, --separator string Separator for the items in the format (default \[dq];\[dq]) - -t, --time-format string Specify a custom time format, or \[aq]max\[aq] for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --time-format string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) \f[R] .fi .PP @@ -7192,9 +7731,9 @@ The output is an array of Items, where each Item looks like this: \f[C] { \[dq]Hashes\[dq] : { - \[dq]SHA-1\[dq] : \[dq]f572d396fae9206628714fb2ce00f72e94f2258f\[dq], - \[dq]MD5\[dq] : \[dq]b1946ac92492d2347c6235b4d2611184\[dq], - \[dq]DropboxHash\[dq] : \[dq]ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc\[dq] + \[dq]SHA-1\[dq] : \[dq]f572d396fae9206628714fb2ce00f72e94f2258f\[dq], + \[dq]MD5\[dq] : \[dq]b1946ac92492d2347c6235b4d2611184\[dq], + \[dq]DropboxHash\[dq] : \[dq]ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc\[dq] }, \[dq]ID\[dq]: \[dq]y2djkhiujf83u33\[dq], \[dq]OrigID\[dq]: \[dq]UYOJVTUW00Q1RzTDA\[dq], @@ -7475,8 +8014,8 @@ support (https://rclone.org/overview/#optional-features) the about feature at all, then 1 PiB is set as both the total and the free size. .SS Installing on Windows .PP -To run rclone mount on Windows, you will need to download and install -WinFsp (http://www.secfs.net/winfsp/). +To run \f[V]rclone mount on Windows\f[R], you will need to download and +install WinFsp (http://www.secfs.net/winfsp/). .PP WinFsp (https://github.com/winfsp/winfsp) is an open-source Windows File System Proxy which makes it easy to write user space file systems for @@ -7727,9 +8266,8 @@ not suffer from the same limitations. Mounting on macOS can be done either via built-in NFS server (https://rclone.org/commands/rclone_serve_nfs/), macFUSE (https://osxfuse.github.io/) (also known as osxfuse) or -FUSE-T (https://www.fuse-t.org/). -macFUSE is a traditional FUSE driver utilizing a macOS kernel extension -(kext). +FUSE-T (https://www.fuse-t.org/).macFUSE is a traditional FUSE driver +utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which \[dq]mounts\[dq] via an NFSv4 local server. .SS Unicode Normalization @@ -7795,6 +8333,19 @@ This may make rclone upload a full new copy of the file. When mounting with \f[V]--read-only\f[R], attempts to write to files will fail \f[I]silently\f[R] as opposed to with a clear warning as in macFUSE. 
+.SH Mounting on Linux +.PP +On newer versions of Ubuntu, you may encounter the following error when +running \f[V]rclone mount\f[R]: +.RS +.PP +NOTICE: mount helper error: fusermount3: mount failed: Permission denied +CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status +1 This may be due to newer Apparmor (https://wiki.ubuntu.com/AppArmor) +restrictions, which can be disabled with +\f[V]sudo aa-disable /usr/bin/fusermount3\f[R] (you may need to +\f[V]sudo apt install apparmor-utils\f[R] beforehand). +.RE .SS Limitations .PP Without the use of \f[V]--vfs-cache-mode\f[R] this can only write files @@ -8013,8 +8564,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -8084,13 +8635,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -8263,9 +8814,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -8328,10 +8879,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. 
+ --read-only Only allow read-only access. \f[R] .fi .PP @@ -8342,8 +8893,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -8354,7 +8905,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -8364,8 +8915,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -8491,7 +9042,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -8504,7 +9055,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. .PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -8700,7 +9251,7 @@ src will be deleted on successful transfer. .PP \f[B]Note\f[R]: Use the \f[V]-P\f[R]/\f[V]--progress\f[R] flag to view real-time transfer statistics. -.SH Logger Flags +.SS Logger Flags .PP The \f[V]--differ\f[R], \f[V]--missing-on-dst\f[R], \f[V]--missing-on-src\f[R], \f[V]--match\f[R] and \f[V]--error\f[R] @@ -8759,7 +9310,7 @@ Possibly some unusual error scenarios .PP Note also that each file is logged during execution, as opposed to after, so it is most useful as a predictor of what SHOULD happen to each -file (which may or may not match what actually DID.) +file (which may or may not match what actually DID). .IP .nf \f[C] @@ -8786,7 +9337,7 @@ rclone moveto source:path dest:path [flags] --missing-on-dst string Report all files missing from the destination to this file --missing-on-src string Report all files missing from the source to this file -s, --separator string Separator for the items in the format (default \[dq];\[dq]) - -t, --timeformat string Specify a custom time format, or \[aq]max\[aq] for max precision supported by remote (default: 2006-01-02 15:04:05) + -t, --timeformat string Specify a custom time format - see docs for details (default: 2006-01-02 15:04:05) \f[R] .fi .PP @@ -9117,8 +9668,8 @@ support (https://rclone.org/overview/#optional-features) the about feature at all, then 1 PiB is set as both the total and the free size. 
.SS Installing on Windows .PP -To run rclone nfsmount on Windows, you will need to download and install -WinFsp (http://www.secfs.net/winfsp/). +To run \f[V]rclone nfsmount on Windows\f[R], you will need to download +and install WinFsp (http://www.secfs.net/winfsp/). .PP WinFsp (https://github.com/winfsp/winfsp) is an open-source Windows File System Proxy which makes it easy to write user space file systems for @@ -9369,9 +9920,8 @@ not suffer from the same limitations. Mounting on macOS can be done either via built-in NFS server (https://rclone.org/commands/rclone_serve_nfs/), macFUSE (https://osxfuse.github.io/) (also known as osxfuse) or -FUSE-T (https://www.fuse-t.org/). -macFUSE is a traditional FUSE driver utilizing a macOS kernel extension -(kext). +FUSE-T (https://www.fuse-t.org/).macFUSE is a traditional FUSE driver +utilizing a macOS kernel extension (kext). FUSE-T is an alternative FUSE system which \[dq]mounts\[dq] via an NFSv4 local server. .SS Unicode Normalization @@ -9437,6 +9987,19 @@ This may make rclone upload a full new copy of the file. When mounting with \f[V]--read-only\f[R], attempts to write to files will fail \f[I]silently\f[R] as opposed to with a clear warning as in macFUSE. +.SH Mounting on Linux +.PP +On newer versions of Ubuntu, you may encounter the following error when +running \f[V]rclone mount\f[R]: +.RS +.PP +NOTICE: mount helper error: fusermount3: mount failed: Permission denied +CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status +1 This may be due to newer Apparmor (https://wiki.ubuntu.com/AppArmor) +restrictions, which can be disabled with +\f[V]sudo aa-disable /usr/bin/fusermount3\f[R] (you may need to +\f[V]sudo apt install apparmor-utils\f[R] beforehand). +.RE .SS Limitations .PP Without the use of \f[V]--vfs-cache-mode\f[R] this can only write files @@ -9656,8 +10219,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -9727,13 +10290,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -9906,9 +10469,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -9971,10 +10534,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -9985,8 +10548,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -9997,7 +10560,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -10007,8 +10570,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -10134,7 +10697,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -10147,7 +10710,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. 
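+.PP
+To gauge what such a scan costs on your remote before enabling the
+flag, you can run the equivalent one-off scan by hand (the remote name
+is illustrative):
+.IP
+.nf
+\f[C]
+# Roughly the work the flag repeats whenever the usage figures are refreshed
+time rclone size remote:
+\f[R]
+.fi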
.PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -10365,8 +10928,8 @@ This runs a command against a running rclone. Use the \f[V]--url\f[R] flag to specify an non default URL to connect on. This can be either a \[dq]:port\[dq] which is taken to mean -\[dq]http://localhost:port\[dq] or a \[dq]host:port\[dq] which is taken -to mean \[dq]http://host:port\[dq] +http://localhost:port or a \[dq]host:port\[dq] which is taken to mean +http://host:port. .PP A username and password can be passed in with \f[V]--user\f[R] and \f[V]--pass\f[R]. @@ -10606,6 +11169,9 @@ Rclone automatically inserts leading and trailing \[dq]/\[dq] on \f[V]--rc-baseurl\f[R], so \f[V]--rc-baseurl \[dq]rclone\[dq]\f[R], \f[V]--rc-baseurl \[dq]/rclone\[dq]\f[R] and \f[V]--rc-baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically. +.PP +\f[V]--rc-disable-zip\f[R] may be set to disable the zipping download +option. .SS TLS (SSL) .PP By default this will serve over http. @@ -10636,20 +11202,20 @@ arguments passed by \f[V]--rc-addr\f[R]). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. .PP Socket activation can be tested ad-hoc with the \f[V]systemd-socket-activate\f[R]command .IP .nf \f[C] - systemd-socket-activate -l 8000 -- rclone serve +systemd-socket-activate -l 8000 -- rclone serve \f[R] .fi .PP This will socket-activate rclone on the first connection to port 8000 over TCP. -### Template +.SS Template .PP \f[V]--rc-template\f[R] allows a user to specify a custom markup template for HTTP and WebDAV serve functions. @@ -10658,91 +11224,100 @@ to server pages: .PP .TS tab(@); -lw(35.0n) lw(35.0n). +lw(22.6n) lw(24.7n) lw(22.6n). T{ Parameter T}@T{ +Subparameter +T}@T{ Description T} _ T{ \&.Name T}@T{ +T}@T{ The full path of a file/directory. T} T{ \&.Title T}@T{ -Directory listing of .Name +T}@T{ +Directory listing of \[aq].Name\[aq]. T} T{ \&.Sort T}@T{ -The current sort used. -This is changeable via ?sort= parameter -T} -T{ T}@T{ -Sort Options: namedirfirst,name,size,time (default namedirfirst) +The current sort used. +This is changeable via \[aq]?sort=\[aq] parameter. +Possible values: namedirfirst, name, size, time (default namedirfirst). T} T{ \&.Order T}@T{ -The current ordering used. -This is changeable via ?order= parameter -T} -T{ T}@T{ -Order Options: asc,desc (default asc) +The current ordering used. +This is changeable via \[aq]?order=\[aq] parameter. +Possible values: asc, desc (default asc). T} T{ \&.Query T}@T{ +T}@T{ Currently unused. T} T{ \&.Breadcrumb T}@T{ -Allows for creating a relative navigation -T} -T{ --- .Link T}@T{ -The relative to the root link of the Text. +Allows for creating a relative navigation. T} T{ --- .Text +T}@T{ +\&.Link +T}@T{ +The link of the Text relative to the root. +T} +T{ +T}@T{ +\&.Text T}@T{ The Name of the directory. T} T{ \&.Entries T}@T{ +T}@T{ Information about a specific file/directory. T} T{ --- .URL T}@T{ -The \[aq]url\[aq] of an entry. +\&.URL +T}@T{ +The url of an entry. T} T{ --- .Leaf T}@T{ -Currently same as \[aq]URL\[aq] but intended to be \[aq]just\[aq] the -name. +\&.Leaf +T}@T{ +Currently same as \[aq].URL\[aq] but intended to be just the name. 
T}
T{
--- .IsDir
+T}@T{
+\&.IsDir
T}@T{
Boolean for if an entry is a directory or not.
T}
T{
--- .Size
T}@T{
-Size in Bytes of the entry.
+\&.Size
+T}@T{
+Size in bytes of the entry.
T}
T{
--- .ModTime
+T}@T{
+\&.ModTime
T}@T{
The UTC timestamp of an entry.
T}
@@ -10794,7 +11369,7 @@ a single username and password with the \f[V]--rc-user\f[R] and
Alternatively, you can have the reverse proxy manage authentication and
use the username provided in the configured header with
\f[V]--user-from-header\f[R] (e.g.,
-\f[V]--rc---user-from-header=x-remote-user\f[R]).
+\f[V]--rc-user-from-header=x-remote-user\f[R]).
Ensure the proxy is trusted and headers cannot be spoofed, as
misconfiguration may lead to unauthorized access.
.PP
@@ -11008,8 +11583,8 @@ Please note that this command was not available before rclone version
1.55.
If it fails for you with the message
\f[V]unknown command \[dq]selfupdate\[dq]\f[R] then you will need to
-update manually following the install instructions located at
-https://rclone.org/install/
+update manually following the install
+documentation (https://rclone.org/install/).
.IP
.nf
\f[C]
@@ -11050,6 +11625,20 @@ rclone serve http remote:
\f[R]
.fi
.PP
+When the \f[V]--metadata\f[R] flag is enabled, the following metadata
+fields will be provided as headers:
+.IP \[bu] 2
+\f[V]content-disposition\f[R]
+.IP \[bu] 2
+\f[V]cache-control\f[R]
+.IP \[bu] 2
+\f[V]content-language\f[R]
+.IP \[bu] 2
+\f[V]content-encoding\f[R]
+.PP
+Note that the availability of these fields depends on whether the
+remote supports metadata.
+.PP
Each subcommand has its own options which you can see in their help.
.IP
.nf
@@ -11156,8 +11737,8 @@ cache.
.IP
.nf
\f[C]
---dir-cache-time duration Time to cache directory entries for (default 5m0s)
---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
+ --dir-cache-time duration Time to cache directory entries for (default 5m0s)
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s)
\f[R]
.fi
.PP
@@ -11227,13 +11808,13 @@ find that you need one or the other or both.
.IP
.nf
\f[C]
---cache-dir string Directory rclone will use for caching.
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off)
---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s)
---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off)
---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off)
---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s)
---vfs-write-back duration Time to writeback files after last use when using cache (default 5s)
+ --cache-dir string Directory rclone will use for caching.
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -11406,9 +11987,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -11471,10 +12052,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -11485,8 +12066,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -11497,7 +12078,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -11507,8 +12088,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -11634,7 +12215,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -11647,7 +12228,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. 
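+.PP
+Bear in mind that a scan run by hand and the scan this flag performs
+can disagree when filters are in use, as the warning below explains
+(the filter here is illustrative):
+.IP
+.nf
+\f[C]
+# rclone size respects filters, so this counts only the files the filter admits
+rclone size remote: --exclude \[dq]*.bak\[dq]
+\f[R]
+.fi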
.PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -11860,8 +12441,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -11931,13 +12512,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -12110,9 +12691,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -12175,10 +12756,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -12189,8 +12770,8 @@ These flags only come into effect when not using an on disk cache file. 
.IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -12201,7 +12782,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -12211,8 +12792,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -12338,7 +12919,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -12351,7 +12932,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. .PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -12555,8 +13136,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -12626,13 +13207,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -12805,9 +13386,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -12870,10 +13451,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -12884,8 +13465,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -12896,7 +13477,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -12906,8 +13487,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -13033,7 +13614,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -13046,7 +13627,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. 
.PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -13117,11 +13698,13 @@ it won\[aq]t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. .PP -This config generated must have this extra parameter - \f[V]_root\f[R] - -root to use for the backend +This config generated must have this extra parameter +.IP \[bu] 2 +\f[V]_root\f[R] - root to use for the backend .PP -And it may have this parameter - \f[V]_obscure\f[R] - comma separated -strings for parameters to obscure +And it may have this parameter +.IP \[bu] 2 +\f[V]_obscure\f[R] - comma separated strings for parameters to obscure .PP If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: @@ -13129,8 +13712,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq] } \f[R] .fi @@ -13141,8 +13724,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] } \f[R] .fi @@ -13152,12 +13735,12 @@ And as an example return this on STDOUT .nf \f[C] { - \[dq]type\[dq]: \[dq]sftp\[dq], - \[dq]_root\[dq]: \[dq]\[dq], - \[dq]_obscure\[dq]: \[dq]pass\[dq], - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq], - \[dq]host\[dq]: \[dq]sftp.example.com\[dq] + \[dq]type\[dq]: \[dq]sftp\[dq], + \[dq]_root\[dq]: \[dq]\[dq], + \[dq]_obscure\[dq]: \[dq]pass\[dq], + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq], + \[dq]host\[dq]: \[dq]sftp.example.com\[dq] } \f[R] .fi @@ -13330,6 +13913,9 @@ Rclone automatically inserts leading and trailing \[dq]/\[dq] on \f[V]--baseurl\f[R], so \f[V]--baseurl \[dq]rclone\[dq]\f[R], \f[V]--baseurl \[dq]/rclone\[dq]\f[R] and \f[V]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically. +.PP +\f[V]--disable-zip\f[R] may be set to disable the zipping download +option. .SS TLS (SSL) .PP By default this will serve over http. @@ -13358,20 +13944,20 @@ arguments passed by \f[V]--addr\f[R]). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. .PP Socket activation can be tested ad-hoc with the \f[V]systemd-socket-activate\f[R]command .IP .nf \f[C] - systemd-socket-activate -l 8000 -- rclone serve +systemd-socket-activate -l 8000 -- rclone serve \f[R] .fi .PP This will socket-activate rclone on the first connection to port 8000 over TCP. -### Template +.SS Template .PP \f[V]--template\f[R] allows a user to specify a custom markup template for HTTP and WebDAV serve functions. @@ -13380,91 +13966,100 @@ to server pages: .PP .TS tab(@); -lw(35.0n) lw(35.0n). +lw(22.6n) lw(24.7n) lw(22.6n). T{ Parameter T}@T{ +Subparameter +T}@T{ Description T} _ T{ \&.Name T}@T{ +T}@T{ The full path of a file/directory. 
T} T{ \&.Title T}@T{ -Directory listing of .Name +T}@T{ +Directory listing of \[aq].Name\[aq]. T} T{ \&.Sort T}@T{ -The current sort used. -This is changeable via ?sort= parameter -T} -T{ T}@T{ -Sort Options: namedirfirst,name,size,time (default namedirfirst) +The current sort used. +This is changeable via \[aq]?sort=\[aq] parameter. +Possible values: namedirfirst, name, size, time (default namedirfirst). T} T{ \&.Order T}@T{ -The current ordering used. -This is changeable via ?order= parameter -T} -T{ T}@T{ -Order Options: asc,desc (default asc) +The current ordering used. +This is changeable via \[aq]?order=\[aq] parameter. +Possible values: asc, desc (default asc). T} T{ \&.Query T}@T{ +T}@T{ Currently unused. T} T{ \&.Breadcrumb T}@T{ -Allows for creating a relative navigation -T} -T{ --- .Link T}@T{ -The relative to the root link of the Text. +Allows for creating a relative navigation. T} T{ --- .Text +T}@T{ +\&.Link +T}@T{ +The link of the Text relative to the root. +T} +T{ +T}@T{ +\&.Text T}@T{ The Name of the directory. T} T{ \&.Entries T}@T{ +T}@T{ Information about a specific file/directory. T} T{ --- .URL T}@T{ -The \[aq]url\[aq] of an entry. +\&.URL +T}@T{ +The url of an entry. T} T{ --- .Leaf T}@T{ -Currently same as \[aq]URL\[aq] but intended to be \[aq]just\[aq] the -name. +\&.Leaf +T}@T{ +Currently same as \[aq].URL\[aq] but intended to be just the name. T} T{ --- .IsDir +T}@T{ +\&.IsDir T}@T{ Boolean for if an entry is a directory or not. T} T{ --- .Size T}@T{ -Size in Bytes of the entry. +\&.Size +T}@T{ +Size in bytes of the entry. T} T{ --- .ModTime +T}@T{ +\&.ModTime T}@T{ The UTC timestamp of an entry. T} @@ -13516,7 +14111,7 @@ a single username and password with the \f[V]--user\f[R] and Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with \f[V]--user-from-header\f[R] (e.g., -\f[V]----user-from-header=x-remote-user\f[R]). +\f[V]--user-from-header=x-remote-user\f[R]). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. .PP @@ -13570,8 +14165,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -13641,13 +14236,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -13820,9 +14415,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -13885,10 +14480,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -13899,8 +14494,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -13911,7 +14506,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -13921,8 +14516,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -14048,7 +14643,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -14061,7 +14656,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. 
.PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -14132,11 +14727,13 @@ it won\[aq]t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. .PP -This config generated must have this extra parameter - \f[V]_root\f[R] - -root to use for the backend +This config generated must have this extra parameter +.IP \[bu] 2 +\f[V]_root\f[R] - root to use for the backend .PP -And it may have this parameter - \f[V]_obscure\f[R] - comma separated -strings for parameters to obscure +And it may have this parameter +.IP \[bu] 2 +\f[V]_obscure\f[R] - comma separated strings for parameters to obscure .PP If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: @@ -14144,8 +14741,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq] } \f[R] .fi @@ -14156,8 +14753,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] } \f[R] .fi @@ -14167,12 +14764,12 @@ And as an example return this on STDOUT .nf \f[C] { - \[dq]type\[dq]: \[dq]sftp\[dq], - \[dq]_root\[dq]: \[dq]\[dq], - \[dq]_obscure\[dq]: \[dq]pass\[dq], - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq], - \[dq]host\[dq]: \[dq]sftp.example.com\[dq] + \[dq]type\[dq]: \[dq]sftp\[dq], + \[dq]_root\[dq]: \[dq]\[dq], + \[dq]_obscure\[dq]: \[dq]pass\[dq], + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq], + \[dq]host\[dq]: \[dq]sftp.example.com\[dq] } \f[R] .fi @@ -14218,6 +14815,7 @@ rclone serve http remote:path [flags] --client-ca string Client certificate authority to verify clients with --dir-cache-time Duration Time to cache directory entries for (default 5m0s) --dir-perms FileMode Directory permissions (default 777) + --disable-zip Disable zip download of directories --file-perms FileMode File permissions (default 666) --gid uint32 Override the gid field set by the filesystem (not supported on Windows) (default 1000) -h, --help help for http @@ -14423,8 +15021,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -14494,13 +15092,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. 
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -14673,9 +15271,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -14738,10 +15336,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -14752,8 +15350,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -14764,7 +15362,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -14774,8 +15372,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. 
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -14901,7 +15499,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -14914,7 +15512,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. .PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -15183,6 +15781,9 @@ Rclone automatically inserts leading and trailing \[dq]/\[dq] on \f[V]--baseurl\f[R], so \f[V]--baseurl \[dq]rclone\[dq]\f[R], \f[V]--baseurl \[dq]/rclone\[dq]\f[R] and \f[V]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically. +.PP +\f[V]--disable-zip\f[R] may be set to disable the zipping download +option. .SS TLS (SSL) .PP By default this will serve over http. @@ -15211,20 +15812,20 @@ arguments passed by \f[V]--addr\f[R]). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. .PP Socket activation can be tested ad-hoc with the \f[V]systemd-socket-activate\f[R]command .IP .nf \f[C] - systemd-socket-activate -l 8000 -- rclone serve +systemd-socket-activate -l 8000 -- rclone serve \f[R] .fi .PP This will socket-activate rclone on the first connection to port 8000 over TCP. -### Authentication +.SS Authentication .PP By default this will serve files without needing a login. .PP @@ -15235,7 +15836,7 @@ a single username and password with the \f[V]--user\f[R] and Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with \f[V]--user-from-header\f[R] (e.g., -\f[V]----user-from-header=x-remote-user\f[R]). +\f[V]--user-from-header=x-remote-user\f[R]). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. .PP @@ -15489,7 +16090,7 @@ a single username and password with the \f[V]--user\f[R] and Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with \f[V]--user-from-header\f[R] (e.g., -\f[V]----user-from-header=x-remote-user\f[R]). +\f[V]--user-from-header=x-remote-user\f[R]). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. .PP @@ -15555,6 +16156,9 @@ Rclone automatically inserts leading and trailing \[dq]/\[dq] on \f[V]--baseurl\f[R], so \f[V]--baseurl \[dq]rclone\[dq]\f[R], \f[V]--baseurl \[dq]/rclone\[dq]\f[R] and \f[V]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically. +.PP +\f[V]--disable-zip\f[R] may be set to disable the zipping download +option. .SS TLS (SSL) .PP By default this will serve over http. @@ -15583,20 +16187,20 @@ arguments passed by \f[V]--addr\f[R]). This allows rclone to be a socket-activated service. 
It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. .PP Socket activation can be tested ad-hoc with the \f[V]systemd-socket-activate\f[R]command .IP .nf \f[C] - systemd-socket-activate -l 8000 -- rclone serve +systemd-socket-activate -l 8000 -- rclone serve \f[R] .fi .PP This will socket-activate rclone on the first connection to port 8000 over TCP. -## VFS - Virtual File System +.SS VFS - Virtual File System .PP This command uses the VFS layer. This adapts the cloud storage objects that rclone uses into something @@ -15620,8 +16224,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -15691,13 +16295,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -15870,9 +16474,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -15935,10 +16539,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). 
+ --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -15949,8 +16553,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -15961,7 +16565,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -15971,8 +16575,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -16098,7 +16702,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -16111,7 +16715,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. .PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -16307,7 +16911,7 @@ This also supports being run with socket activation, in which case it will listen on the first passed FD. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. .PP Socket activation can be tested ad-hoc with the \f[V]systemd-socket-activate\f[R]command: @@ -16370,8 +16974,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -16441,13 +17045,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. 
---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. + --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -16620,9 +17224,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -16685,10 +17289,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -16699,8 +17303,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -16711,7 +17315,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -16721,8 +17325,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. 
+ --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -16848,7 +17452,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -16861,7 +17465,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. .PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -16932,11 +17536,13 @@ it won\[aq]t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. .PP -This config generated must have this extra parameter - \f[V]_root\f[R] - -root to use for the backend +This config generated must have this extra parameter +.IP \[bu] 2 +\f[V]_root\f[R] - root to use for the backend .PP -And it may have this parameter - \f[V]_obscure\f[R] - comma separated -strings for parameters to obscure +And it may have this parameter +.IP \[bu] 2 +\f[V]_obscure\f[R] - comma separated strings for parameters to obscure .PP If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: @@ -16944,8 +17550,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq] } \f[R] .fi @@ -16956,8 +17562,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] } \f[R] .fi @@ -16967,12 +17573,12 @@ And as an example return this on STDOUT .nf \f[C] { - \[dq]type\[dq]: \[dq]sftp\[dq], - \[dq]_root\[dq]: \[dq]\[dq], - \[dq]_obscure\[dq]: \[dq]pass\[dq], - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq], - \[dq]host\[dq]: \[dq]sftp.example.com\[dq] + \[dq]type\[dq]: \[dq]sftp\[dq], + \[dq]_root\[dq]: \[dq]\[dq], + \[dq]_obscure\[dq]: \[dq]pass\[dq], + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq], + \[dq]host\[dq]: \[dq]sftp.example.com\[dq] } \f[R] .fi @@ -17126,22 +17732,36 @@ following error: \[dq]The folder you entered does not appear to be valid. Please choose another\[dq]. However, you still can connect if you set the following registry key on -a client machine: HKEY_LOCAL_MACHINEto 2. -The BasicAuthLevel can be set to the following values: 0 - Basic -authentication disabled 1 - Basic authentication enabled for SSL -connections only 2 - Basic authentication enabled for SSL connections -and for non-SSL connections If required, increase the -FileSizeLimitInBytes to a higher value. +a client machine: +\f[V]HKEY_LOCAL_MACHINE\[rs]SYSTEM\[rs]CurrentControlSet\[rs]Services\[rs]WebClient\[rs]Parameters\[rs]BasicAuthLevel\f[R] +to 2. 
+The BasicAuthLevel can be set to the following values: +.IP +.nf +\f[C] +0 - Basic authentication disabled +1 - Basic authentication enabled for SSL connections only +2 - Basic authentication enabled for SSL connections and for non-SSL connections +\f[R] +.fi +.PP +If required, increase the FileSizeLimitInBytes to a higher value. Navigate to the Services interface, then restart the WebClient service. .SS Access Office applications on WebDAV .PP -Navigate to following registry HKEY_CURRENT_USER[14.0/15.0/16.0] Create -a new DWORD BasicAuthLevel with value 2. -0 - Basic authentication disabled 1 - Basic authentication enabled for -SSL connections only 2 - Basic authentication enabled for SSL and for -non-SSL connections +Navigate to following registry +\f[V]HKEY_CURRENT_USER\[rs]Software\[rs]Microsoft\[rs]Office\[rs][14.0/15.0/16.0]\[rs]Common\[rs]Internet\f[R] +Create a new DWORD BasicAuthLevel with value 2. +.IP +.nf +\f[C] +0 - Basic authentication disabled +1 - Basic authentication enabled for SSL connections only +2 - Basic authentication enabled for SSL and for non-SSL connections +\f[R] +.fi .PP -https://learn.microsoft.com/en-us/office/troubleshoot/powerpoint/office-opens-blank-from-sharepoint + .SS Serving over a unix socket .PP You can serve the webdav on a unix socket like this: @@ -17198,6 +17818,9 @@ Rclone automatically inserts leading and trailing \[dq]/\[dq] on \f[V]--baseurl\f[R], so \f[V]--baseurl \[dq]rclone\[dq]\f[R], \f[V]--baseurl \[dq]/rclone\[dq]\f[R] and \f[V]--baseurl \[dq]/rclone/\[dq]\f[R] are all treated identically. +.PP +\f[V]--disable-zip\f[R] may be set to disable the zipping download +option. .SS TLS (SSL) .PP By default this will serve over http. @@ -17226,20 +17849,20 @@ arguments passed by \f[V]--addr\f[R]). This allows rclone to be a socket-activated service. It can be configured with .socket and .service unit files as described in -https://www.freedesktop.org/software/systemd/man/latest/systemd.socket.html +. .PP Socket activation can be tested ad-hoc with the \f[V]systemd-socket-activate\f[R]command .IP .nf \f[C] - systemd-socket-activate -l 8000 -- rclone serve +systemd-socket-activate -l 8000 -- rclone serve \f[R] .fi .PP This will socket-activate rclone on the first connection to port 8000 over TCP. -### Template +.SS Template .PP \f[V]--template\f[R] allows a user to specify a custom markup template for HTTP and WebDAV serve functions. @@ -17248,91 +17871,100 @@ to server pages: .PP .TS tab(@); -lw(35.0n) lw(35.0n). +lw(22.6n) lw(24.7n) lw(22.6n). T{ Parameter T}@T{ +Subparameter +T}@T{ Description T} _ T{ \&.Name T}@T{ +T}@T{ The full path of a file/directory. T} T{ \&.Title T}@T{ -Directory listing of .Name +T}@T{ +Directory listing of \[aq].Name\[aq]. T} T{ \&.Sort T}@T{ -The current sort used. -This is changeable via ?sort= parameter -T} -T{ T}@T{ -Sort Options: namedirfirst,name,size,time (default namedirfirst) +The current sort used. +This is changeable via \[aq]?sort=\[aq] parameter. +Possible values: namedirfirst, name, size, time (default namedirfirst). T} T{ \&.Order T}@T{ -The current ordering used. -This is changeable via ?order= parameter -T} -T{ T}@T{ -Order Options: asc,desc (default asc) +The current ordering used. +This is changeable via \[aq]?order=\[aq] parameter. +Possible values: asc, desc (default asc). T} T{ \&.Query T}@T{ +T}@T{ Currently unused. T} T{ \&.Breadcrumb T}@T{ -Allows for creating a relative navigation -T} -T{ --- .Link T}@T{ -The relative to the root link of the Text. +Allows for creating a relative navigation. 
T} T{ --- .Text +T}@T{ +\&.Link +T}@T{ +The link of the Text relative to the root. +T} +T{ +T}@T{ +\&.Text T}@T{ The Name of the directory. T} T{ \&.Entries T}@T{ +T}@T{ Information about a specific file/directory. T} T{ --- .URL T}@T{ -The \[aq]url\[aq] of an entry. +\&.URL +T}@T{ +The url of an entry. T} T{ --- .Leaf T}@T{ -Currently same as \[aq]URL\[aq] but intended to be \[aq]just\[aq] the -name. +\&.Leaf +T}@T{ +Currently same as \[aq].URL\[aq] but intended to be just the name. T} T{ --- .IsDir +T}@T{ +\&.IsDir T}@T{ Boolean for if an entry is a directory or not. T} T{ --- .Size T}@T{ -Size in Bytes of the entry. +\&.Size +T}@T{ +Size in bytes of the entry. T} T{ --- .ModTime +T}@T{ +\&.ModTime T}@T{ The UTC timestamp of an entry. T} @@ -17384,7 +18016,7 @@ a single username and password with the \f[V]--user\f[R] and Alternatively, you can have the reverse proxy manage authentication and use the username provided in the configured header with \f[V]--user-from-header\f[R] (e.g., -\f[V]----user-from-header=x-remote-user\f[R]). +\f[V]--user-from-header=x-remote-user\f[R]). Ensure the proxy is trusted and headers cannot be spoofed, as misconfiguration may lead to unauthorized access. .PP @@ -17438,8 +18070,8 @@ cache. .IP .nf \f[C] ---dir-cache-time duration Time to cache directory entries for (default 5m0s) ---poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) + --dir-cache-time duration Time to cache directory entries for (default 5m0s) + --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable (default 1m0s) \f[R] .fi .PP @@ -17509,13 +18141,13 @@ find that you need one or the other or both. .IP .nf \f[C] ---cache-dir string Directory rclone will use for caching. ---vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) ---vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) ---vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) ---vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) ---vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) ---vfs-write-back duration Time to writeback files after last use when using cache (default 5s) + --cache-dir string Directory rclone will use for caching. 
+ --vfs-cache-mode CacheMode Cache mode off|minimal|writes|full (default off) + --vfs-cache-max-age duration Max time since last access of objects in the cache (default 1h0m0s) + --vfs-cache-max-size SizeSuffix Max total size of objects in the cache (default off) + --vfs-cache-min-free-space SizeSuffix Target minimum free space on the disk containing the cache (default off) + --vfs-cache-poll-interval duration Interval to poll the cache for stale objects (default 1m0s) + --vfs-write-back duration Time to writeback files after last use when using cache (default 5s) \f[R] .fi .PP @@ -17688,9 +18320,9 @@ These flags control the chunking: .IP .nf \f[C] ---vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) ---vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) ---vfs-read-chunk-streams int The number of parallel streams to read at once + --vfs-read-chunk-size SizeSuffix Read the source objects in chunks (default 128M) + --vfs-read-chunk-size-limit SizeSuffix Max chunk doubling size (default off) + --vfs-read-chunk-streams int The number of parallel streams to read at once \f[R] .fi .PP @@ -17753,10 +18385,10 @@ transaction. .IP .nf \f[C] ---no-checksum Don\[aq]t compare checksums on up/download. ---no-modtime Don\[aq]t read/write the modification time (can speed things up). ---no-seek Don\[aq]t allow seeking in files. ---read-only Only allow read-only access. + --no-checksum Don\[aq]t compare checksums on up/download. + --no-modtime Don\[aq]t read/write the modification time (can speed things up). + --no-seek Don\[aq]t allow seeking in files. + --read-only Only allow read-only access. \f[R] .fi .PP @@ -17767,8 +18399,8 @@ These flags only come into effect when not using an on disk cache file. .IP .nf \f[C] ---vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) ---vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) + --vfs-read-wait duration Time to wait for in-sequence read before seeking (default 20ms) + --vfs-write-wait duration Time to wait for in-sequence write before giving error (default 1s) \f[R] .fi .PP @@ -17779,7 +18411,7 @@ adjust the number of parallel uploads of modified files from the cache .IP .nf \f[C] ---transfers int Number of file transfers to run in parallel (default 4) + --transfers int Number of file transfers to run in parallel (default 4) \f[R] .fi .SS Symlinks @@ -17789,8 +18421,8 @@ However this may be enabled with either of the following flags: .IP .nf \f[C] ---links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. ---vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS + --links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension. + --vfs-links Translate symlinks to/from regular files with a \[aq].rclonelink\[aq] extension for the VFS \f[R] .fi .PP @@ -17916,7 +18548,7 @@ automatically. .IP .nf \f[C] ---vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) + --vfs-disk-space-total-size Manually set the total disk space size (example: 256G, default: -1) \f[R] .fi .SS Alternate report of used bytes @@ -17929,7 +18561,7 @@ With this flag set, instead of relying on the backend to report this information, rclone will scan the whole remote similar to \f[V]rclone size\f[R] and compute the total used space itself. 
.PP -\f[I]WARNING.\f[R] Contrary to \f[V]rclone size\f[R], this flag ignores +\f[B]WARNING\f[R]: Contrary to \f[V]rclone size\f[R], this flag ignores filters so that the result is accurate. However, this is very inefficient and may cost lots of API calls resulting in extra charges. @@ -18000,11 +18632,13 @@ it won\[aq]t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config. .PP -This config generated must have this extra parameter - \f[V]_root\f[R] - -root to use for the backend +This config generated must have this extra parameter +.IP \[bu] 2 +\f[V]_root\f[R] - root to use for the backend .PP -And it may have this parameter - \f[V]_obscure\f[R] - comma separated -strings for parameters to obscure +And it may have this parameter +.IP \[bu] 2 +\f[V]_obscure\f[R] - comma separated strings for parameters to obscure .PP If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this: @@ -18012,8 +18646,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq] } \f[R] .fi @@ -18024,8 +18658,8 @@ process (on STDIN) would look similar to this: .nf \f[C] { - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]public_key\[dq]: \[dq]AAAAB3NzaC1yc2EAAAADAQABAAABAQDuwESFdAe14hVS6omeyX7edc...JQdf\[dq] } \f[R] .fi @@ -18035,12 +18669,12 @@ And as an example return this on STDOUT .nf \f[C] { - \[dq]type\[dq]: \[dq]sftp\[dq], - \[dq]_root\[dq]: \[dq]\[dq], - \[dq]_obscure\[dq]: \[dq]pass\[dq], - \[dq]user\[dq]: \[dq]me\[dq], - \[dq]pass\[dq]: \[dq]mypassword\[dq], - \[dq]host\[dq]: \[dq]sftp.example.com\[dq] + \[dq]type\[dq]: \[dq]sftp\[dq], + \[dq]_root\[dq]: \[dq]\[dq], + \[dq]_obscure\[dq]: \[dq]pass\[dq], + \[dq]user\[dq]: \[dq]me\[dq], + \[dq]pass\[dq]: \[dq]mypassword\[dq], + \[dq]host\[dq]: \[dq]sftp.example.com\[dq] } \f[R] .fi @@ -18287,6 +18921,9 @@ random file hierarchy in a directory .IP \[bu] 2 rclone test memory (https://rclone.org/commands/rclone_test_memory/) - Load all the objects at remote:path into memory and report memory stats. +.IP \[bu] 2 +rclone test speed (https://rclone.org/commands/rclone_test_speed/) - Run +a speed test to the remote .SH rclone test changenotify .PP Log any change notify requests for the remote passed in. @@ -18353,7 +18990,7 @@ It will write test files into the remote:path passed in. It outputs a bit of go code for each one. .PP \f[B]NB\f[R] this can create undeletable files and other hazards - use -with care +with care! .IP .nf \f[C] @@ -18472,6 +19109,67 @@ not listed here. .IP \[bu] 2 rclone test (https://rclone.org/commands/rclone_test/) - Run a test command +.SH rclone test speed +.PP +Run a speed test to the remote +.SS Synopsis +.PP +Run a speed test to the remote. +.PP +This command runs a series of uploads and downloads to the remote, +measuring and printing the speed of each test using varying file sizes +and numbers of files. +.PP +Test time can be innaccurate with small file caps and large files. +As it uses the results of an initial test to determine how many files to +use in each subsequent test. +.PP +It is recommended to use -q flag for a simpler output. 
+e.g.: +.IP +.nf +\f[C] +rclone test speed remote: -q +\f[R] +.fi +.PP +\f[B]NB\f[R] This command will create and delete files on the remote in +a randomly named directory which will be automatically removed on a +clean exit. +.PP +You can use the --json flag to only print the results in JSON format. +.IP +.nf +\f[C] +rclone test speed [flags] +\f[R] +.fi +.SS Options +.IP +.nf +\f[C] + --ascii Fill files with random ASCII printable bytes only + --chargen Fill files with a ASCII chargen pattern + --file-cap int Maximum number of files to use in each test (default 100) + -h, --help help for speed + --json Output only results in JSON format + --large SizeSuffix Size of large files (default 1Gi) + --medium SizeSuffix Size of medium files (default 10Mi) + --pattern Fill files with a periodic pattern + --seed int Seed for the random number generator (0 for random) (default 1) + --small SizeSuffix Size of small files (default 1Ki) + --sparse Make the files sparse (appear to be filled with ASCII 0x00) + --test-time Duration Length for each test to run (default 15s) + --zero Fill files with ASCII 0x00 +\f[R] +.fi +.PP +See the global flags page (https://rclone.org/flags/) for global options +not listed here. +.SS See Also +.IP \[bu] 2 +rclone test (https://rclone.org/commands/rclone_test/) - Run a test +command .SH rclone touch .PP Create new file or change file modification time. @@ -18949,6 +19647,10 @@ can swap the roles of \f[V]\[dq]\f[R] and \f[V]\[aq]\f[R] thus. rclone copy \[aq]:http,url=\[dq]https://example.com\[dq]:path/to/dir\[aq] /tmp/dir \f[R] .fi +.PP +You can use rclone config +string (https://rclone.org/commands/rclone_config_string/) to convert a +remote into a connection string. .SS Connection strings, config and logging .PP If you supply extra configuration to a backend by command line flag, @@ -19632,6 +20334,9 @@ If running rclone from a script you might want to use today\[aq]s date as the directory name passed to \f[V]--backup-dir\f[R] to store the old files, or you might want to pass \f[V]--suffix\f[R] with today\[aq]s date. +This can be done with \f[V]--suffix $(date +%F)\f[R] in bash, and +\f[V]--suffix $(Get-Date -Format \[aq]yyyy-MM-dd\[aq])\f[R] in +PowerShell. .PP See \f[V]--compare-dest\f[R] and \f[V]--copy-dest\f[R]. .SS --bind string @@ -21050,25 +21755,25 @@ backend docs. 
.nf \f[C] { - \[dq]SrcFs\[dq]: \[dq]gdrive:\[dq], - \[dq]SrcFsType\[dq]: \[dq]drive\[dq], - \[dq]DstFs\[dq]: \[dq]newdrive:user\[dq], - \[dq]DstFsType\[dq]: \[dq]onedrive\[dq], - \[dq]Remote\[dq]: \[dq]test.txt\[dq], - \[dq]Size\[dq]: 6, - \[dq]MimeType\[dq]: \[dq]text/plain; charset=utf-8\[dq], - \[dq]ModTime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq], - \[dq]IsDir\[dq]: false, - \[dq]ID\[dq]: \[dq]xyz\[dq], - \[dq]Metadata\[dq]: { - \[dq]btime\[dq]: \[dq]2022-10-11T16:53:11Z\[dq], - \[dq]content-type\[dq]: \[dq]text/plain; charset=utf-8\[dq], - \[dq]mtime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq], - \[dq]owner\[dq]: \[dq]user1\[at]domain1.com\[dq], - \[dq]permissions\[dq]: \[dq]...\[dq], - \[dq]description\[dq]: \[dq]my nice file\[dq], - \[dq]starred\[dq]: \[dq]false\[dq] - } + \[dq]SrcFs\[dq]: \[dq]gdrive:\[dq], + \[dq]SrcFsType\[dq]: \[dq]drive\[dq], + \[dq]DstFs\[dq]: \[dq]newdrive:user\[dq], + \[dq]DstFsType\[dq]: \[dq]onedrive\[dq], + \[dq]Remote\[dq]: \[dq]test.txt\[dq], + \[dq]Size\[dq]: 6, + \[dq]MimeType\[dq]: \[dq]text/plain; charset=utf-8\[dq], + \[dq]ModTime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq], + \[dq]IsDir\[dq]: false, + \[dq]ID\[dq]: \[dq]xyz\[dq], + \[dq]Metadata\[dq]: { + \[dq]btime\[dq]: \[dq]2022-10-11T16:53:11Z\[dq], + \[dq]content-type\[dq]: \[dq]text/plain; charset=utf-8\[dq], + \[dq]mtime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq], + \[dq]owner\[dq]: \[dq]user1\[at]domain1.com\[dq], + \[dq]permissions\[dq]: \[dq]...\[dq], + \[dq]description\[dq]: \[dq]my nice file\[dq], + \[dq]starred\[dq]: \[dq]false\[dq] + } } \f[R] .fi @@ -21084,15 +21789,15 @@ something to the description: .nf \f[C] { - \[dq]Metadata\[dq]: { - \[dq]btime\[dq]: \[dq]2022-10-11T16:53:11Z\[dq], - \[dq]content-type\[dq]: \[dq]text/plain; charset=utf-8\[dq], - \[dq]mtime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq], - \[dq]owner\[dq]: \[dq]user1\[at]domain2.com\[dq], - \[dq]permissions\[dq]: \[dq]...\[dq], - \[dq]description\[dq]: \[dq]my nice file [migrated from domain1]\[dq], - \[dq]starred\[dq]: \[dq]false\[dq] - } + \[dq]Metadata\[dq]: { + \[dq]btime\[dq]: \[dq]2022-10-11T16:53:11Z\[dq], + \[dq]content-type\[dq]: \[dq]text/plain; charset=utf-8\[dq], + \[dq]mtime\[dq]: \[dq]2022-10-11T17:53:10.286745272+01:00\[dq], + \[dq]owner\[dq]: \[dq]user1\[at]domain2.com\[dq], + \[dq]permissions\[dq]: \[dq]...\[dq], + \[dq]description\[dq]: \[dq]my nice file [migrated from domain1]\[dq], + \[dq]starred\[dq]: \[dq]false\[dq] + } } \f[R] .fi @@ -22748,24 +23453,26 @@ The options set by environment variables can be seen with the \f[V]rclone version -vv\f[R]. .SH Configuring rclone on a remote / headless machine .PP -Some of the configurations (those involving oauth2) require an Internet -connected web browser. +Some of the configurations (those involving oauth2) require an +internet-connected web browser. .PP -If you are trying to set rclone up on a remote or headless box with no -browser available on it (e.g. -a NAS or a server in a datacenter) then you will need to use an +If you are trying to set rclone up on a remote or headless machine with +no browser available on it (e.g. +a NAS or a server in a datacenter), then you will need to use an alternative means of configuration. -There are two ways of doing it, described below. +There are three ways of doing it, described below. .SS Configuring using rclone authorize .PP -On the headless box run \f[V]rclone\f[R] config but answer \f[V]N\f[R] -to the \f[V]Use auto config?\f[R] question. 
+On the headless machine run rclone config, but answer \f[V]N\f[R] to the +question +\f[V]Use web browser to automatically authenticate rclone with remote?\f[R]. .IP .nf \f[C] -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. y) Yes (default) n) No @@ -22777,21 +23484,24 @@ a web browser available. For more help and alternate methods see: https://rclone.org/remote_setup/ Execute the following on the machine with the web browser (same rclone version recommended): - rclone authorize \[dq]onedrive\[dq] + rclone authorize \[dq]onedrive\[dq] Then paste the result. Enter a value. config_token> \f[R] .fi .PP -Then on your main desktop machine +Then on your main desktop machine, run rclone +authorize (https://rclone.org/commands/rclone_authorize/). .IP .nf \f[C] rclone authorize \[dq]onedrive\[dq] -If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth -Log in and authorize rclone for access -Waiting for code... +NOTICE: Make sure your Redirect URL is set to \[dq]http://localhost:53682/\[dq] in your custom config. +NOTICE: If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +NOTICE: Log in and authorize rclone for access +NOTICE: Waiting for code... + Got code Paste the following into your remote machine ---> SECRET_TOKEN @@ -22799,15 +23509,15 @@ SECRET_TOKEN \f[R] .fi .PP -Then back to the headless box, paste in the code +Then back to the headless machine, paste in the code. .IP .nf \f[C] config_token> SECRET_TOKEN -------------------- [acd12] -client_id = -client_secret = +client_id = +client_secret = token = SECRET_TOKEN -------------------- y) Yes this is OK @@ -22818,10 +23528,13 @@ y/e/d> .fi .SS Configuring by copying the config file .PP -Rclone stores all of its config in a single configuration file. -This can easily be copied to configure a remote rclone. +Rclone stores all of its configuration in a single file. +This can easily be copied to configure a remote rclone (although some +backends does not support reusing the same configuration, consult your +backend documentation to be sure). .PP -So first configure rclone on your desktop machine with +Start by running rclone config to create the configuration file on your +desktop machine. .IP .nf \f[C] @@ -22829,10 +23542,7 @@ rclone config \f[R] .fi .PP -to set up the config file. -.PP -Find the config file by running \f[V]rclone config file\f[R], for -example +Then locate the file by running rclone config file. .IP .nf \f[C] @@ -22842,13 +23552,15 @@ Configuration file is stored at: \f[R] .fi .PP -Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) -and place it in the correct place (use \f[V]rclone config file\f[R] on -the remote box to find out where). +Finally, transfer the file to the remote machine (scp, cut paste, ftp, +sftp, etc.) +and place it in the correct location (use rclone config file on the +remote machine to find out where). 
.SS Configuring using SSH Tunnel .PP -Linux and MacOS users can utilize SSH Tunnel to redirect the headless -box port 53682 to local machine by using the following command: +If you have an SSH client installed on your local machine, you can set +up an SSH tunnel to redirect the port 53682 into the headless machine by +using the following command: .IP .nf \f[C] @@ -22856,24 +23568,30 @@ ssh -L localhost:53682:localhost:53682 username\[at]remote_server \f[R] .fi .PP -Then on the headless box run \f[V]rclone config\f[R] and answer -\f[V]Y\f[R] to the \f[V]Use auto config?\f[R] question. +Then on the headless machine run rclone config and answer \f[V]Y\f[R] to +the question +\f[V]Use web browser to automatically authenticate rclone with remote?\f[R]. .IP .nf \f[C] -Use auto config? - * Say Y if not sure - * Say N if you are working on a remote or headless machine +Use web browser to automatically authenticate rclone with remote? + * Say Y if the machine running rclone has a web browser you can use + * Say N if running rclone on a (remote) machine without web browser access +If not sure try Y. If Y failed, try N. y) Yes (default) n) No y/n> y +NOTICE: Make sure your Redirect URL is set to \[dq]http://localhost:53682/\[dq] in your custom config. +NOTICE: If your browser doesn\[aq]t open automatically go to the following link: http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx +NOTICE: Log in and authorize rclone for access +NOTICE: Waiting for code... \f[R] .fi .PP -Then copy and paste the auth url -\f[V]http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx\f[R] to the browser -on your local machine, complete the auth and it is done. +Finally, copy and paste the presented URL +\f[V]http://127.0.0.1:53682/auth?state=xxxxxxxxxxxxxxxxxxxxxx\f[R] to +the browser on your local machine, complete the auth and you are done. .SH Filtering, includes and excludes .PP Filter flags determine which files rclone \f[V]sync\f[R], @@ -23043,7 +23761,8 @@ The syntax of filter patterns is glob style matching (like However this does not provide absolute control over the matching, so for advanced users rclone also provides a regular expression syntax. .PP -The regular expressions used are as defined in the Go regular expression +Rclone generally accepts Perl-style regular expressions, the exact +syntax is defined in the Go regular expression reference (https://golang.org/pkg/regexp/syntax/). Regular expressions should be enclosed in \f[V]{{\f[R] \f[V]}}\f[R]. They will match only the last path segment if the glob doesn\[aq]t start @@ -24579,8 +25298,8 @@ By default jobs are executed immediately as they are created or synchronously. .PP If \f[V]_async\f[R] has a true value when supplied to an rc call then it -will return immediately with a job id and the task will be run in the -background. +will return immediately with a job id and execute id, and the task will +be run in the background. The \f[V]job/status\f[R] call can be used to get information of the background job. The job can be queried for up to 1 minute after it has finished. @@ -24597,11 +25316,19 @@ Starting a job with the \f[V]_async\f[R] flag: \f[C] $ rclone rc --json \[aq]{ \[dq]p1\[dq]: [1,\[dq]2\[dq],null,4], \[dq]p2\[dq]: { \[dq]a\[dq]:1, \[dq]b\[dq]:2 }, \[dq]_async\[dq]: true }\[aq] rc/noop { - \[dq]jobid\[dq]: 2 + \[dq]jobid\[dq]: 2, + \[dq]executeId\[dq]: \[dq]d794c33c-463e-4acf-b911-f4b23e4f40b7\[dq] } \f[R] .fi .PP +The \f[V]jobid\f[R] is a unique identifier for the job within this +rclone instance. 
+The \f[V]executeId\f[R] identifies the rclone process instance and +changes after rclone restart. +Together, the pair (\f[V]executeId\f[R], \f[V]jobid\f[R]) uniquely +identifies a job across rclone restarts. +.PP Query the status to see if the job has finished. For more information on the meaning of these return parameters see the \f[V]job/status\f[R] call. @@ -24613,6 +25340,7 @@ $ rclone rc --json \[aq]{ \[dq]jobid\[dq]:2 }\[aq] job/status \[dq]duration\[dq]: 0.000124163, \[dq]endTime\[dq]: \[dq]2018-10-27T11:38:07.911245881+01:00\[dq], \[dq]error\[dq]: \[dq]\[dq], + \[dq]executeId\[dq]: \[dq]d794c33c-463e-4acf-b911-f4b23e4f40b7\[dq], \[dq]finished\[dq]: true, \[dq]id\[dq]: 2, \[dq]output\[dq]: { @@ -24634,19 +25362,33 @@ $ rclone rc --json \[aq]{ \[dq]jobid\[dq]:2 }\[aq] job/status \f[R] .fi .PP -\f[V]job/list\f[R] can be used to show the running or recently completed -jobs +\f[V]job/list\f[R] can be used to show running or recently completed +jobs along with their status .IP .nf \f[C] $ rclone rc job/list { + \[dq]executeId\[dq]: \[dq]d794c33c-463e-4acf-b911-f4b23e4f40b7\[dq], + \[dq]finished_ids\[dq]: [ + 1 + ], \[dq]jobids\[dq]: [ + 1, + 2 + ], + \[dq]running_ids\[dq]: [ 2 ] } \f[R] .fi +.PP +This shows: - \f[V]executeId\f[R] - the current rclone instance ID (same +for all jobs, changes after restart) - \f[V]jobids\f[R] - array of all +job IDs (both running and finished) - \f[V]running_ids\f[R] - array of +currently running job IDs - \f[V]finished_ids\f[R] - array of finished +job IDs .SS Setting config flags with _config .PP If you wish to set config (the equivalent of the global flags) for the @@ -25356,7 +26098,7 @@ Unlocks the config file if it is locked. .PP Parameters: .IP \[bu] 2 -\[aq]config_password\[aq] - password to unlock the config file +\[aq]configPassword\[aq] - password to unlock the config file .PP A good idea is to disable AskPassword before making this call .PP @@ -25710,12 +26452,12 @@ Returns the following values: } \f[R] .fi -.SS core/version: Shows the current version of rclone and the go runtime. +.SS core/version: Shows the current version of rclone, Go and the OS. .PP -This shows the current version of go and the go runtime: +This shows the current versions of rclone, Go and the OS: .IP \[bu] 2 version - rclone version, e.g. -\[dq]v1.53.0\[dq] +\[dq]v1.71.2\[dq] .IP \[bu] 2 decomposed - version number as [major, minor, patch] .IP \[bu] 2 @@ -25723,11 +26465,23 @@ isGit - boolean - true if this was compiled from the git version .IP \[bu] 2 isBeta - boolean - true if this is a beta version .IP \[bu] 2 -os - OS in use as according to Go +os - OS in use as according to Go GOOS (e.g. +\[dq]linux\[dq]) .IP \[bu] 2 -arch - cpu architecture in use according to Go +osKernel - OS Kernel version (e.g. +\[dq]6.8.0-86-generic (x86_64)\[dq]) .IP \[bu] 2 -goVersion - version of Go runtime in use +osVersion - OS Version (e.g. +\[dq]ubuntu 24.04 (64 bit)\[dq]) +.IP \[bu] 2 +osArch - cpu architecture in use (e.g. +\[dq]arm64 (ARMv8 compatible)\[dq]) +.IP \[bu] 2 +arch - cpu architecture in use according to Go GOARCH (e.g. +\[dq]arm64\[dq]) +.IP \[bu] 2 +goVersion - version of Go runtime in use (e.g. +\[dq]go1.25.0\[dq]) .IP \[bu] 2 linking - type of rclone executable (static or dynamic) .IP \[bu] 2 @@ -25857,6 +26611,77 @@ This returns the number of entries in the fs cache. Returns - entries - number of items in the cache .PP \f[B]Authentication is required for this call.\f[R] +.SS job/batch: Run a batch of rclone rc commands concurrently. 
+.PP +This takes the following parameters: +.IP \[bu] 2 +concurrency - int - do this many commands concurrently. +Defaults to \f[V]--transfers\f[R] if not set. +.IP \[bu] 2 +inputs - an list of inputs to the commands with an extra \f[V]_path\f[R] +parameter +.IP +.nf +\f[C] +{ + \[dq]_path\[dq]: \[dq]rc/path\[dq], + \[dq]param1\[dq]: \[dq]parameter for the path as documented\[dq], + \[dq]param2\[dq]: \[dq]parameter for the path as documented, etc\[dq], +} +\f[R] +.fi +.PP +The inputs may use \f[V]_async\f[R], \f[V]_group\f[R], \f[V]_config\f[R] +and \f[V]_filter\f[R] as normal when using the rc. +.PP +Returns: +.IP \[bu] 2 +results - a list of results from the commands with one entry for each in +inputs. +.PP +For example: +.IP +.nf +\f[C] +rclone rc job/batch --json \[aq]{ + \[dq]inputs\[dq]: [ + { + \[dq]_path\[dq]: \[dq]rc/noop\[dq], + \[dq]parameter\[dq]: \[dq]OK\[dq] + }, + { + \[dq]_path\[dq]: \[dq]rc/error\[dq], + \[dq]parameter\[dq]: \[dq]BAD\[dq] + } + ] +} +\[aq] +\f[R] +.fi +.PP +Gives the result: +.IP +.nf +\f[C] +{ + \[dq]results\[dq]: [ + { + \[dq]parameter\[dq]: \[dq]OK\[dq] + }, + { + \[dq]error\[dq]: \[dq]arbitrary error on input map[parameter:BAD]\[dq], + \[dq]input\[dq]: { + \[dq]parameter\[dq]: \[dq]BAD\[dq] + }, + \[dq]path\[dq]: \[dq]rc/error\[dq], + \[dq]status\[dq]: 500 + } + ] +} +\f[R] +.fi +.PP +\f[B]Authentication is required for this call.\f[R] .SS job/list: Lists the IDs of the running jobs .PP Parameters: None. @@ -25866,6 +26691,10 @@ Results: executeId - string id of rclone executing (change after restart) .IP \[bu] 2 jobids - array of integer job ids (starting at 1 on each restart) +.IP \[bu] 2 +runningIds - array of integer job ids that are running +.IP \[bu] 2 +finishedIds - array of integer job ids that are finished .SS job/status: Reads the status of the job ID .PP Parameters: @@ -25887,6 +26716,9 @@ finished - boolean whether the job has finished or not .IP \[bu] 2 id - as passed in above .IP \[bu] 2 +executeId - rclone instance ID (changes after restart); combined with id +uniquely identifies a job +.IP \[bu] 2 startTime - time the job started (e.g. \[dq]2018-10-26T18:50:20.528336039+01:00\[dq]) .IP \[bu] 2 @@ -26529,9 +27361,6 @@ fs - a remote name string e.g. remote - a path within that remote e.g. \[dq]dir\[dq] .PP -See the settierfile (https://rclone.org/commands/rclone_settierfile/) -command for more information on the above. -.PP \f[B]Authentication is required for this call.\f[R] .SS operations/size: Count the number of bytes and files in remote .PP @@ -26588,9 +27417,6 @@ remote - a path within that remote e.g. .IP \[bu] 2 each part in body represents a file to be uploaded .PP -See the uploadfile (https://rclone.org/commands/rclone_uploadfile/) -command for more information on the above. -.PP \f[B]Authentication is required for this call.\f[R] .SS options/blocks: List all the option blocks .PP @@ -26796,6 +27622,10 @@ rclone rc pluginsctl/removeTestPlugin name=rclone/rclone-webui-react .PP This returns an error with the input as part of its error string. Useful for testing error handling. +.SS rc/fatal: This returns an fatal error +.PP +This returns an error with the input as part of its error string. +Useful for testing error handling. .SS rc/list: List all the registered remote control commands .PP This lists all the registered remote control commands as a JSON map in @@ -26814,6 +27644,10 @@ It can be used to check that rclone is still alive and to check that parameter passing is working properly. 
.PP \f[B]Authentication is required for this call.\f[R] +.SS rc/panic: This returns an error by panicking +.PP +This returns an error with the input as part of its error string. +Useful for testing error handling. .SS serve/list: Show running servers .PP Show running servers with IDs. @@ -27163,7 +27997,7 @@ return an empty result. .nf \f[C] { - \[dq]queued\[dq]: // an array of files queued for upload + \[dq]queue\[dq]: // an array of files queued for upload [ { \[dq]name\[dq]: \[dq]file\[dq], // string: name (full path) of the file, @@ -27889,7 +28723,7 @@ No T}@T{ R T}@T{ -- +R T} T{ iCloud Drive @@ -30907,7 +31741,7 @@ Flags for general networking and HTTP stuff. --tpslimit float Limit HTTP transactions per second to this --tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --use-cookies Enable session cookiejar - --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.71.0\[dq]) + --user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.72.0\[dq]) \f[R] .fi .SS Performance @@ -31127,6 +31961,8 @@ Backend-only flags (these can be set in the config file also). \f[C] --alias-description string Description of the remote --alias-remote string Remote or path to alias + --archive-description string Description of the remote + --archive-remote string Remote to wrap to read archives from --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-account string Azure Storage Account Name --azureblob-archive-tier-delete Delete archive tier blobs before overwriting @@ -31204,6 +32040,10 @@ Backend-only flags (these can be set in the config file also). --b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-key string Application Key --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket + --b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2 + --b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data + --b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data + --b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional) --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) @@ -31265,7 +32105,7 @@ Backend-only flags (these can be set in the config file also). --combine-description string Description of the remote --combine-upstreams SpaceSepList Upstreams for combining --compress-description string Description of the remote - --compress-level int GZIP compression level (-2 to 9) (default -1) + --compress-level string GZIP (levels -2 to 9): --compress-mode string Compression mode (default \[dq]gzip\[dq]) --compress-ram-cache-limit SizeSuffix Some remotes don\[aq]t allow the upload of files with unknown size (default 20Mi) --compress-remote string Remote to compress @@ -31560,6 +32400,7 @@ Backend-only flags (these can be set in the config file also). 
--mailru-token string OAuth Access Token as a JSON blob --mailru-token-url string Token server url --mailru-user string User name (usually email) + --mega-2fa string The 2FA code of your MEGA account if the account is set up with one --mega-debug Output more debug from Mega --mega-description string Description of the remote --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) @@ -31678,6 +32519,7 @@ Backend-only flags (these can be set in the config file also). --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot) --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured) --protondrive-original-file-size Return the file size before encryption (default true) + --protondrive-otp-secret-key string The OTP secret key (obscured) --protondrive-password string The password of your proton account (obscured) --protondrive-replace-existing-draft Create a new revision when filename conflict is detected --protondrive-username string The username of your proton account @@ -31760,6 +32602,7 @@ Backend-only flags (these can be set in the config file also). --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset) --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset) --s3-use-arn-region If true, enables arn region support for the service + --s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset) --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support) --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset) --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset) @@ -31842,6 +32685,7 @@ Backend-only flags (these can be set in the config file also). --sia-encoding Encoding The encoding for the backend (default Slash,Question,Hash,Percent,Del,Ctl,InvalidUtf8,Dot) --sia-user-agent string Siad User Agent (default \[dq]Sia-Agent\[dq]) --skip-links Don\[aq]t warn about skipped symlinks + --skip-specials Don\[aq]t warn about skipped pipes, sockets and device objects --smb-case-insensitive Whether the server is configured to be case-insensitive (default true) --smb-description string Description of the remote --smb-domain string Domain name for NTLM authentication (default \[dq]WORKGROUP\[dq]) @@ -32321,18 +33165,23 @@ volumes: \f[R] .fi .PP -Notice a few important details: - YAML prefers \f[V]_\f[R] in option -names instead of \f[V]-\f[R]. -- YAML treats single and double quotes interchangeably. +Notice a few important details: +.IP \[bu] 2 +YAML prefers \f[V]_\f[R] in option names instead of \f[V]-\f[R]. +.IP \[bu] 2 +YAML treats single and double quotes interchangeably. Simple strings and integers can be left unquoted. -- Boolean values must be quoted like \f[V]\[aq]true\[aq]\f[R] or +.IP \[bu] 2 +Boolean values must be quoted like \f[V]\[aq]true\[aq]\f[R] or \f[V]\[dq]false\[dq]\f[R] because these two words are reserved by YAML. -- The filesystem string is keyed with \f[V]remote\f[R] (or with +.IP \[bu] 2 +The filesystem string is keyed with \f[V]remote\f[R] (or with \f[V]fs\f[R]). Normally you can omit quotes here, but if the string ends with colon, you \f[B]must\f[R] quote it like \f[V]remote: \[dq]storage_box:\[dq]\f[R]. 
-- YAML is picky about surrounding braces in values as this is in fact +.IP \[bu] 2 +YAML is picky about surrounding braces in values as this is in fact another syntax for key/value mappings (http://yaml.org/spec/1.2/spec.html#id2790832). For example, JSON access tokens usually contain double quotes and @@ -32351,11 +33200,13 @@ The plugin requires presence of two directories on the host before it can be installed. Note that plugin will \f[B]not\f[R] create them automatically. By default they must exist on host at the following locations (though -you can tweak the paths): - +you can tweak the paths): +.IP \[bu] 2 \f[V]/var/lib/docker-plugins/rclone/config\f[R] is reserved for the \f[V]rclone.conf\f[R] config file and \f[B]must\f[R] exist even if it\[aq]s empty and the config file is not present. -- \f[V]/var/lib/docker-plugins/rclone/cache\f[R] holds the plugin state +.IP \[bu] 2 +\f[V]/var/lib/docker-plugins/rclone/cache\f[R] holds the plugin state file as well as optional VFS caches. .PP You can install managed @@ -32373,8 +33224,13 @@ called a \f[I]tag\f[R]. Usually you will want to install the latest plugin for your architecture. In this case the tag will just name it, like \f[V]amd64\f[R] above. -The following plugin architectures are currently available: - -\f[V]amd64\f[R] - \f[V]arm64\f[R] - \f[V]arm-v7\f[R] +The following plugin architectures are currently available: +.IP \[bu] 2 +\f[V]amd64\f[R] +.IP \[bu] 2 +\f[V]arm64\f[R] +.IP \[bu] 2 +\f[V]arm-v7\f[R] .PP Sometimes you might want a concrete plugin version, not the latest one. Then you should use image tag in the form @@ -32583,14 +33439,18 @@ systemctl restart docker \f[R] .fi .PP -Or run the service directly: - run \f[V]systemctl daemon-reload\f[R] to -let systemd pick up new config - run -\f[V]systemctl enable docker-volume-rclone.service\f[R] to make the new -service start automatically when you power on your machine. -- run \f[V]systemctl start docker-volume-rclone.service\f[R] to start -the service now. -- run \f[V]systemctl restart docker\f[R] to restart docker daemon and -let it detect the new plugin socket. +Or run the service directly: +.IP \[bu] 2 +run \f[V]systemctl daemon-reload\f[R] to let systemd pick up new config +.IP \[bu] 2 +run \f[V]systemctl enable docker-volume-rclone.service\f[R] to make the +new service start automatically when you power on your machine. +.IP \[bu] 2 +run \f[V]systemctl start docker-volume-rclone.service\f[R] to start the +service now. +.IP \[bu] 2 +run \f[V]systemctl restart docker\f[R] to restart docker daemon and let +it detect the new plugin socket. Note that this step is not needed in managed mode where docker knows about plugin state changes. .PP @@ -34102,27 +34962,19 @@ filename encodings.) 
.PP The following backends have known issues that need more investigation: .IP \[bu] 2 -\f[V]TestGoFile\f[R] (\f[V]gofile\f[R]) +\f[V]TestDropbox\f[R] (\f[V]dropbox\f[R]) .RS 2 .IP \[bu] 2 -\f[V]TestBisyncRemoteLocal/all_changed\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) -.IP \[bu] 2 -\f[V]TestBisyncRemoteLocal/backupdir\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) -.IP \[bu] 2 -\f[V]TestBisyncRemoteLocal/basic\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) -.IP \[bu] 2 -\f[V]TestBisyncRemoteLocal/changes\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) -.IP \[bu] 2 -\f[V]TestBisyncRemoteLocal/check_access\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) -.IP \[bu] 2 -78 more (https://pub.rclone.org/integration-tests/current/) +\f[V]TestBisyncRemoteRemote/normalization\f[R] (https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt) .RE .IP \[bu] 2 -Updated: 2025-08-21-010015 +Updated: 2025-11-21-010037 .PP The following backends either have not been tested recently or have known issues that are deemed unfixable for the time being: .IP \[bu] 2 +\f[V]TestArchive\f[R] (\f[V]archive\f[R]) +.IP \[bu] 2 \f[V]TestCache\f[R] (\f[V]cache\f[R]) .IP \[bu] 2 \f[V]TestFileLu\f[R] (\f[V]filelu\f[R]) @@ -35386,16 +36238,16 @@ From KEYS on this website - this file contains all past signing keys also. .IP \[bu] 2 The git repository hosted on GitHub - -https://github.com/rclone/rclone/blob/master/docs/content/KEYS + .IP \[bu] 2 \f[V]gpg --keyserver hkps://keys.openpgp.org --search nick\[at]craig-wood.com\f[R] .IP \[bu] 2 \f[V]gpg --keyserver hkps://keyserver.ubuntu.com --search nick\[at]craig-wood.com\f[R] .IP \[bu] 2 -https://www.craig-wood.com/nick/pub/pgp-key.txt + .PP After importing the key, verify that the fingerprint of one of the keys -matches: \f[V]FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA\f[R] as this key +matches: \f[V]FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA\f[R] ads this key is used for signing. .PP We recommend that you cross-check the fingerprint shown above through @@ -35468,10 +36320,10 @@ You could verify the other types of hash also for extra security. .IP .nf \f[C] -$ mkdir /tmp/check -$ cd /tmp/check -$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS . -$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip . +mkdir /tmp/check +cd /tmp/check +rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS . +rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip . \f[R] .fi .SS Verify the signatures @@ -35564,7 +36416,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -35609,7 +36461,8 @@ y/e/d> y \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your 1Fichier account .IP @@ -35858,7 +36711,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). 
.SH Alias .PP The \f[V]alias\f[R] remote provides a new name for another remote. @@ -35899,7 +36752,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -35950,7 +36803,8 @@ e/n/d/r/c/s/q> q \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level in \f[V]/mnt/storage/backup\f[R] .IP @@ -36028,20 +36882,28 @@ Cloudflare R2 .IP \[bu] 2 Arvan Cloud Object Storage (AOS) .IP \[bu] 2 +Cubbit DS3 +.IP \[bu] 2 DigitalOcean Spaces .IP \[bu] 2 Dreamhost .IP \[bu] 2 Exaba .IP \[bu] 2 +FileLu S5 (S3-Compatible Object Storage) +.IP \[bu] 2 GCS .IP \[bu] 2 +Hetzner +.IP \[bu] 2 Huawei OBS .IP \[bu] 2 IBM COS S3 .IP \[bu] 2 IDrive e2 .IP \[bu] 2 +Intercolo Object Storage +.IP \[bu] 2 IONOS Cloud .IP \[bu] 2 Leviia Object Storage @@ -36066,6 +36928,8 @@ Pure Storage FlashBlade .IP \[bu] 2 Qiniu Cloud Object Storage (Kodo) .IP \[bu] 2 +Rabata Cloud Storage +.IP \[bu] 2 RackCorp Object Storage .IP \[bu] 2 Rclone Serve S3 @@ -36078,6 +36942,10 @@ SeaweedFS .IP \[bu] 2 Selectel .IP \[bu] 2 +Servercore Object Storage +.IP \[bu] 2 +Spectra Logic +.IP \[bu] 2 StackPath .IP \[bu] 2 Storj @@ -36149,7 +37017,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -36816,10 +37684,9 @@ The chunk sizes used in the multipart upload are specified by \f[V]--s3-chunk-size\f[R] and the number of chunks uploaded concurrently is specified by \f[V]--s3-upload-concurrency\f[R]. .PP -Multipart uploads will use \f[V]--transfers\f[R] * -\f[V]--s3-upload-concurrency\f[R] * \f[V]--s3-chunk-size\f[R] extra -memory. -Single part uploads to not use extra memory. +Multipart uploads will use extra memory equal to: \f[V]--transfers\f[R] +× \f[V]--s3-upload-concurrency\f[R] × \f[V]--s3-chunk-size\f[R]. +Single part uploads do not use extra memory. 
.PP Single part transfers can be faster than multipart transfers or slower depending on your latency from S3 - the more latency, the more likely @@ -36936,31 +37803,31 @@ Example policy: .nf \f[C] { - \[dq]Version\[dq]: \[dq]2012-10-17\[dq], - \[dq]Statement\[dq]: [ - { - \[dq]Effect\[dq]: \[dq]Allow\[dq], - \[dq]Principal\[dq]: { - \[dq]AWS\[dq]: \[dq]arn:aws:iam::USER_SID:user/USER_NAME\[dq] - }, - \[dq]Action\[dq]: [ - \[dq]s3:ListBucket\[dq], - \[dq]s3:DeleteObject\[dq], - \[dq]s3:GetObject\[dq], - \[dq]s3:PutObject\[dq], - \[dq]s3:PutObjectAcl\[dq] - ], - \[dq]Resource\[dq]: [ - \[dq]arn:aws:s3:::BUCKET_NAME/*\[dq], - \[dq]arn:aws:s3:::BUCKET_NAME\[dq] - ] - }, - { - \[dq]Effect\[dq]: \[dq]Allow\[dq], - \[dq]Action\[dq]: \[dq]s3:ListAllMyBuckets\[dq], - \[dq]Resource\[dq]: \[dq]arn:aws:s3:::*\[dq] - } - ] + \[dq]Version\[dq]: \[dq]2012-10-17\[dq], + \[dq]Statement\[dq]: [ + { + \[dq]Effect\[dq]: \[dq]Allow\[dq], + \[dq]Principal\[dq]: { + \[dq]AWS\[dq]: \[dq]arn:aws:iam::USER_SID:user/USER_NAME\[dq] + }, + \[dq]Action\[dq]: [ + \[dq]s3:ListBucket\[dq], + \[dq]s3:DeleteObject\[dq], + \[dq]s3:GetObject\[dq], + \[dq]s3:PutObject\[dq], + \[dq]s3:PutObjectAcl\[dq] + ], + \[dq]Resource\[dq]: [ + \[dq]arn:aws:s3:::BUCKET_NAME/*\[dq], + \[dq]arn:aws:s3:::BUCKET_NAME\[dq] + ] + }, + { + \[dq]Effect\[dq]: \[dq]Allow\[dq], + \[dq]Action\[dq]: \[dq]s3:ListAllMyBuckets\[dq], + \[dq]Resource\[dq]: \[dq]arn:aws:s3:::*\[dq] + } + ] } \f[R] .fi @@ -37028,11 +37895,12 @@ all the files to be uploaded as multipart. .PP Here are the Standard options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, -Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, -IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, -Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, -SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, -Qiniu, Zata and others). +Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, +GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, +Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, +OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, +Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, +TencentCOS, Wasabi, Zata, Other). .SS --s3-provider .PP Choose your S3 provider. 
@@ -37086,6 +37954,12 @@ China Mobile Ecloud Elastic Object Storage (EOS) Cloudflare R2 Storage .RE .IP \[bu] 2 +\[dq]Cubbit\[dq] +.RS 2 +.IP \[bu] 2 +Cubbit DS3 Object Storage +.RE +.IP \[bu] 2 \[dq]DigitalOcean\[dq] .RS 2 .IP \[bu] 2 @@ -37104,6 +37978,12 @@ Dreamhost DreamObjects Exaba Object Storage .RE .IP \[bu] 2 +\[dq]FileLu\[dq] +.RS 2 +.IP \[bu] 2 +FileLu S5 (S3-Compatible Object Storage) +.RE +.IP \[bu] 2 \[dq]FlashBlade\[dq] .RS 2 .IP \[bu] 2 @@ -37116,6 +37996,12 @@ Pure Storage FlashBlade Object Storage Google Cloud Storage .RE .IP \[bu] 2 +\[dq]Hetzner\[dq] +.RS 2 +.IP \[bu] 2 +Hetzner Object Storage +.RE +.IP \[bu] 2 \[dq]HuaweiOBS\[dq] .RS 2 .IP \[bu] 2 @@ -37134,18 +38020,18 @@ IBM COS S3 IDrive e2 .RE .IP \[bu] 2 +\[dq]Intercolo\[dq] +.RS 2 +.IP \[bu] 2 +Intercolo Object Storage +.RE +.IP \[bu] 2 \[dq]IONOS\[dq] .RS 2 .IP \[bu] 2 IONOS Cloud .RE .IP \[bu] 2 -\[dq]LyveCloud\[dq] -.RS 2 -.IP \[bu] 2 -Seagate Lyve Cloud -.RE -.IP \[bu] 2 \[dq]Leviia\[dq] .RS 2 .IP \[bu] 2 @@ -37164,6 +38050,12 @@ Liara Object Storage Linode Object Storage .RE .IP \[bu] 2 +\[dq]LyveCloud\[dq] +.RS 2 +.IP \[bu] 2 +Seagate Lyve Cloud +.RE +.IP \[bu] 2 \[dq]Magalu\[dq] .RS 2 .IP \[bu] 2 @@ -37206,6 +38098,18 @@ OVHcloud Object Storage Petabox Object Storage .RE .IP \[bu] 2 +\[dq]Qiniu\[dq] +.RS 2 +.IP \[bu] 2 +Qiniu Object Storage (Kodo) +.RE +.IP \[bu] 2 +\[dq]Rabata\[dq] +.RS 2 +.IP \[bu] 2 +Rabata Cloud Storage +.RE +.IP \[bu] 2 \[dq]RackCorp\[dq] .RS 2 .IP \[bu] 2 @@ -37236,6 +38140,18 @@ SeaweedFS S3 Selectel Object Storage .RE .IP \[bu] 2 +\[dq]Servercore\[dq] +.RS 2 +.IP \[bu] 2 +Servercore Object Storage +.RE +.IP \[bu] 2 +\[dq]SpectraLogic\[dq] +.RS 2 +.IP \[bu] 2 +Spectra Logic Black Pearl +.RE +.IP \[bu] 2 \[dq]StackPath\[dq] .RS 2 .IP \[bu] 2 @@ -37266,12 +38182,6 @@ Tencent Cloud Object Storage (COS) Wasabi Object Storage .RE .IP \[bu] 2 -\[dq]Qiniu\[dq] -.RS 2 -.IP \[bu] 2 -Qiniu Object Storage (Kodo) -.RE -.IP \[bu] 2 \[dq]Zata\[dq] .RS 2 .IP \[bu] 2 @@ -37350,13 +38260,17 @@ Required: false .PP Region to connect to. .PP +Leave blank if you are using an S3 clone and you don\[aq]t have a +region. +.PP Properties: .IP \[bu] 2 Config: region .IP \[bu] 2 Env Var: RCLONE_S3_REGION .IP \[bu] 2 -Provider: AWS +Provider: +AWS,Ceph,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,LyveCloud,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Scaleway,SeaweedFS,Selectel,Servercore,StackPath,Synology,Wasabi,Zata,Other .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -37373,6 +38287,8 @@ The default endpoint - a good choice if you are unsure. US Region, Northern Virginia, or Pacific Northwest. .IP \[bu] 2 Leave location constraint empty. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-east-2\[dq] @@ -37381,6 +38297,8 @@ Leave location constraint empty. US East (Ohio) Region. .IP \[bu] 2 Needs location constraint us-east-2. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-west-1\[dq] @@ -37389,6 +38307,8 @@ Needs location constraint us-east-2. US West (Northern California) Region. .IP \[bu] 2 Needs location constraint us-west-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-west-2\[dq] @@ -37397,6 +38317,8 @@ Needs location constraint us-west-1. US West (Oregon) Region. .IP \[bu] 2 Needs location constraint us-west-2. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ca-central-1\[dq] @@ -37405,6 +38327,8 @@ Needs location constraint us-west-2. Canada (Central) Region. .IP \[bu] 2 Needs location constraint ca-central-1. 
+.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-west-1\[dq] @@ -37413,6 +38337,8 @@ Needs location constraint ca-central-1. EU (Ireland) Region. .IP \[bu] 2 Needs location constraint EU or eu-west-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-west-2\[dq] @@ -37421,6 +38347,8 @@ Needs location constraint EU or eu-west-1. EU (London) Region. .IP \[bu] 2 Needs location constraint eu-west-2. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-west-3\[dq] @@ -37429,6 +38357,8 @@ Needs location constraint eu-west-2. EU (Paris) Region. .IP \[bu] 2 Needs location constraint eu-west-3. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-north-1\[dq] @@ -37437,6 +38367,8 @@ Needs location constraint eu-west-3. EU (Stockholm) Region. .IP \[bu] 2 Needs location constraint eu-north-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-south-1\[dq] @@ -37445,6 +38377,8 @@ Needs location constraint eu-north-1. EU (Milan) Region. .IP \[bu] 2 Needs location constraint eu-south-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-central-1\[dq] @@ -37453,6 +38387,8 @@ Needs location constraint eu-south-1. EU (Frankfurt) Region. .IP \[bu] 2 Needs location constraint eu-central-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-southeast-1\[dq] @@ -37461,6 +38397,8 @@ Needs location constraint eu-central-1. Asia Pacific (Singapore) Region. .IP \[bu] 2 Needs location constraint ap-southeast-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-southeast-2\[dq] @@ -37469,6 +38407,8 @@ Needs location constraint ap-southeast-1. Asia Pacific (Sydney) Region. .IP \[bu] 2 Needs location constraint ap-southeast-2. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-northeast-1\[dq] @@ -37477,6 +38417,8 @@ Needs location constraint ap-southeast-2. Asia Pacific (Tokyo) Region. .IP \[bu] 2 Needs location constraint ap-northeast-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-northeast-2\[dq] @@ -37485,6 +38427,8 @@ Needs location constraint ap-northeast-1. Asia Pacific (Seoul). .IP \[bu] 2 Needs location constraint ap-northeast-2. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-northeast-3\[dq] @@ -37493,6 +38437,8 @@ Needs location constraint ap-northeast-2. Asia Pacific (Osaka-Local). .IP \[bu] 2 Needs location constraint ap-northeast-3. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-south-1\[dq] @@ -37501,6 +38447,8 @@ Needs location constraint ap-northeast-3. Asia Pacific (Mumbai). .IP \[bu] 2 Needs location constraint ap-south-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-east-1\[dq] @@ -37509,6 +38457,8 @@ Needs location constraint ap-south-1. Asia Pacific (Hong Kong) Region. .IP \[bu] 2 Needs location constraint ap-east-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]sa-east-1\[dq] @@ -37517,6 +38467,8 @@ Needs location constraint ap-east-1. South America (Sao Paulo) Region. .IP \[bu] 2 Needs location constraint sa-east-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]il-central-1\[dq] @@ -37525,6 +38477,8 @@ Needs location constraint sa-east-1. Israel (Tel Aviv) Region. .IP \[bu] 2 Needs location constraint il-central-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]me-south-1\[dq] @@ -37533,6 +38487,8 @@ Needs location constraint il-central-1. Middle East (Bahrain) Region. .IP \[bu] 2 Needs location constraint me-south-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]af-south-1\[dq] @@ -37541,6 +38497,8 @@ Needs location constraint me-south-1. Africa (Cape Town) Region. .IP \[bu] 2 Needs location constraint af-south-1. 
+.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]cn-north-1\[dq] @@ -37549,6 +38507,8 @@ Needs location constraint af-south-1. China (Beijing) Region. .IP \[bu] 2 Needs location constraint cn-north-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]cn-northwest-1\[dq] @@ -37557,6 +38517,8 @@ Needs location constraint cn-north-1. China (Ningxia) Region. .IP \[bu] 2 Needs location constraint cn-northwest-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-gov-east-1\[dq] @@ -37565,6 +38527,8 @@ Needs location constraint cn-northwest-1. AWS GovCloud (US-East) Region. .IP \[bu] 2 Needs location constraint us-gov-east-1. +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-gov-west-1\[dq] @@ -37573,13 +38537,817 @@ Needs location constraint us-gov-east-1. AWS GovCloud (US) Region. .IP \[bu] 2 Needs location constraint us-gov-west-1. +.IP \[bu] 2 +Provider: AWS +.RE +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +Use this if unsure. +.IP \[bu] 2 +Will use v4 signatures and an empty region. +.IP \[bu] 2 +Provider: +Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other +.RE +.IP \[bu] 2 +\[dq]other-v2-signature\[dq] +.RS 2 +.IP \[bu] 2 +Use this only if v4 signatures don\[aq]t work. +.IP \[bu] 2 +E.g. +pre Jewel/v10 CEPH. +.IP \[bu] 2 +Provider: +Ceph,DigitalOcean,Dreamhost,Exaba,GCS,IBMCOS,Leviia,LyveCloud,Minio,Netease,SeaweedFS,StackPath,Wasabi,Other +.RE +.IP \[bu] 2 +\[dq]auto\[dq] +.RS 2 +.IP \[bu] 2 +R2 buckets are automatically distributed across Cloudflare\[aq]s data +centers for low latency. +.IP \[bu] 2 +Provider: Cloudflare +.RE +.IP \[bu] 2 +\[dq]eu-west-1\[dq] +.RS 2 +.IP \[bu] 2 +Europe West +.IP \[bu] 2 +Provider: Cubbit +.RE +.IP \[bu] 2 +\[dq]global\[dq] +.RS 2 +.IP \[bu] 2 +Global +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]us-east\[dq] +.RS 2 +.IP \[bu] 2 +North America (US-East) +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]eu-central\[dq] +.RS 2 +.IP \[bu] 2 +Europe (EU-Central) +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]ap-southeast\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific (AP-Southeast) +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]me-central\[dq] +.RS 2 +.IP \[bu] 2 +Middle East (ME-Central) +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]hel1\[dq] +.RS 2 +.IP \[bu] 2 +Helsinki +.IP \[bu] 2 +Provider: Hetzner +.RE +.IP \[bu] 2 +\[dq]fsn1\[dq] +.RS 2 +.IP \[bu] 2 +Falkenstein +.IP \[bu] 2 +Provider: Hetzner +.RE +.IP \[bu] 2 +\[dq]nbg1\[dq] +.RS 2 +.IP \[bu] 2 +Nuremberg +.IP \[bu] 2 +Provider: Hetzner +.RE +.IP \[bu] 2 +\[dq]af-south-1\[dq] +.RS 2 +.IP \[bu] 2 +AF-Johannesburg +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]ap-southeast-2\[dq] +.RS 2 +.IP \[bu] 2 +AP-Bangkok +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]ap-southeast-3\[dq] +.RS 2 +.IP \[bu] 2 +AP-Singapore +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]cn-east-3\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]cn-east-2\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai2 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]cn-north-1\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]cn-north-4\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing4 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]cn-south-1\[dq] +.RS 2 +.IP \[bu] 2 +CN South-Guangzhou +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]ap-southeast-1\[dq] +.RS 2 +.IP \[bu] 2 +CN-Hong Kong +.IP \[bu] 2 
+Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]sa-argentina-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Buenos Aires1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]sa-peru-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Lima1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]na-mexico-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Mexico City1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]sa-chile-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Santiago2 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]sa-brazil-1\[dq] +.RS 2 +.IP \[bu] 2 +LA-Sao Paulo1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]ru-northwest-2\[dq] +.RS 2 +.IP \[bu] 2 +RU-Moscow2 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]de-fra\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt, Germany +.IP \[bu] 2 +Provider: Intercolo +.RE +.IP \[bu] 2 +\[dq]de\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt, Germany +.IP \[bu] 2 +Provider: IONOS,OVHcloud +.RE +.IP \[bu] 2 +\[dq]eu-central-2\[dq] +.RS 2 +.IP \[bu] 2 +Berlin, Germany +.IP \[bu] 2 +Provider: IONOS +.RE +.IP \[bu] 2 +\[dq]eu-south-2\[dq] +.RS 2 +.IP \[bu] 2 +Logrono, Spain +.IP \[bu] 2 +Provider: IONOS +.RE +.IP \[bu] 2 +\[dq]eu-west-2\[dq] +.RS 2 +.IP \[bu] 2 +Paris, France +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]us-east-2\[dq] +.RS 2 +.IP \[bu] 2 +New Jersey, USA +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]us-west-1\[dq] +.RS 2 +.IP \[bu] 2 +California, USA +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]cloudgouv-eu-west-1\[dq] +.RS 2 +.IP \[bu] 2 +SecNumCloud, Paris, France +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]ap-northeast-1\[dq] +.RS 2 +.IP \[bu] 2 +Tokyo, Japan +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]gra\[dq] +.RS 2 +.IP \[bu] 2 +Gravelines, France +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]rbx\[dq] +.RS 2 +.IP \[bu] 2 +Roubaix, France +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]sbg\[dq] +.RS 2 +.IP \[bu] 2 +Strasbourg, France +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]eu-west-par\[dq] +.RS 2 +.IP \[bu] 2 +Paris, France (3AZ) +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]uk\[dq] +.RS 2 +.IP \[bu] 2 +London, United Kingdom +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]waw\[dq] +.RS 2 +.IP \[bu] 2 +Warsaw, Poland +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]bhs\[dq] +.RS 2 +.IP \[bu] 2 +Beauharnois, Canada +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]ca-east-tor\[dq] +.RS 2 +.IP \[bu] 2 +Toronto, Canada +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]sgp\[dq] +.RS 2 +.IP \[bu] 2 +Singapore +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]ap-southeast-syd\[dq] +.RS 2 +.IP \[bu] 2 +Sydney, Australia +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]ap-south-mum\[dq] +.RS 2 +.IP \[bu] 2 +Mumbai, India +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]us-east-va\[dq] +.RS 2 +.IP \[bu] 2 +Vint Hill, Virginia, USA +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]us-west-or\[dq] +.RS 2 +.IP \[bu] 2 +Hillsboro, Oregon, USA +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]rbx-archive\[dq] +.RS 2 +.IP \[bu] 2 +Roubaix, France (Cold Archive) +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]us-east-1\[dq] +.RS 2 +.IP \[bu] 2 +US East (N. 
+Virginia) +.IP \[bu] 2 +Provider: Petabox,Rabata +.RE +.IP \[bu] 2 +\[dq]eu-central-1\[dq] +.RS 2 +.IP \[bu] 2 +Europe (Frankfurt) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]ap-southeast-1\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific (Singapore) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]me-south-1\[dq] +.RS 2 +.IP \[bu] 2 +Middle East (Bahrain) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]sa-east-1\[dq] +.RS 2 +.IP \[bu] 2 +South America (São Paulo) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]cn-east-1\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint - a good choice if you are unsure. +.IP \[bu] 2 +East China Region 1. +.IP \[bu] 2 +Needs location constraint cn-east-1. +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]cn-east-2\[dq] +.RS 2 +.IP \[bu] 2 +East China Region 2. +.IP \[bu] 2 +Needs location constraint cn-east-2. +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]cn-north-1\[dq] +.RS 2 +.IP \[bu] 2 +North China Region 1. +.IP \[bu] 2 +Needs location constraint cn-north-1. +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]cn-south-1\[dq] +.RS 2 +.IP \[bu] 2 +South China Region 1. +.IP \[bu] 2 +Needs location constraint cn-south-1. +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]us-north-1\[dq] +.RS 2 +.IP \[bu] 2 +North America Region. +.IP \[bu] 2 +Needs location constraint us-north-1. +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]ap-southeast-1\[dq] +.RS 2 +.IP \[bu] 2 +Southeast Asia Region 1. +.IP \[bu] 2 +Needs location constraint ap-southeast-1. +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]ap-northeast-1\[dq] +.RS 2 +.IP \[bu] 2 +Northeast Asia Region 1. +.IP \[bu] 2 +Needs location constraint ap-northeast-1. +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]eu-west-1\[dq] +.RS 2 +.IP \[bu] 2 +EU (Ireland) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]eu-west-2\[dq] +.RS 2 +.IP \[bu] 2 +EU (London) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]global\[dq] +.RS 2 +.IP \[bu] 2 +Global CDN (All locations) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au\[dq] +.RS 2 +.IP \[bu] 2 +Australia (All states) +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-nsw\[dq] +.RS 2 +.IP \[bu] 2 +NSW (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-qld\[dq] +.RS 2 +.IP \[bu] 2 +QLD (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-vic\[dq] +.RS 2 +.IP \[bu] 2 +VIC (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-wa\[dq] +.RS 2 +.IP \[bu] 2 +Perth (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]ph\[dq] +.RS 2 +.IP \[bu] 2 +Manila (Philippines) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]th\[dq] +.RS 2 +.IP \[bu] 2 +Bangkok (Thailand) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]hk\[dq] +.RS 2 +.IP \[bu] 2 +HK (Hong Kong) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]mn\[dq] +.RS 2 +.IP \[bu] 2 +Ulaanbaatar (Mongolia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]kg\[dq] +.RS 2 +.IP \[bu] 2 +Bishkek (Kyrgyzstan) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]id\[dq] +.RS 2 +.IP \[bu] 2 +Jakarta (Indonesia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]jp\[dq] +.RS 2 +.IP \[bu] 2 +Tokyo (Japan) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]sg\[dq] +.RS 2 +.IP \[bu] 2 +SG (Singapore) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP 
\[bu] 2 +\[dq]de\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt (Germany) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us\[dq] +.RS 2 +.IP \[bu] 2 +USA (AnyCast) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us-east-1\[dq] +.RS 2 +.IP \[bu] 2 +New York (USA) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us-west-1\[dq] +.RS 2 +.IP \[bu] 2 +Fremont (USA) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]nz\[dq] +.RS 2 +.IP \[bu] 2 +Auckland (New Zealand) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]nl-ams\[dq] +.RS 2 +.IP \[bu] 2 +Amsterdam, The Netherlands +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]fr-par\[dq] +.RS 2 +.IP \[bu] 2 +Paris, France +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]pl-waw\[dq] +.RS 2 +.IP \[bu] 2 +Warsaw, Poland +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]ru-1\[dq] +.RS 2 +.IP \[bu] 2 +St. +Petersburg +.IP \[bu] 2 +Provider: Selectel,Servercore +.RE +.IP \[bu] 2 +\[dq]gis-1\[dq] +.RS 2 +.IP \[bu] 2 +Moscow +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]ru-7\[dq] +.RS 2 +.IP \[bu] 2 +Moscow +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]uz-2\[dq] +.RS 2 +.IP \[bu] 2 +Tashkent, Uzbekistan +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]kz-1\[dq] +.RS 2 +.IP \[bu] 2 +Almaty, Kazakhstan +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]eu-001\[dq] +.RS 2 +.IP \[bu] 2 +Europe Region 1 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]eu-002\[dq] +.RS 2 +.IP \[bu] 2 +Europe Region 2 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]us-001\[dq] +.RS 2 +.IP \[bu] 2 +US Region 1 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]us-002\[dq] +.RS 2 +.IP \[bu] 2 +US Region 2 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]tw-001\[dq] +.RS 2 +.IP \[bu] 2 +Asia (Taiwan) +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]us-east-1\[dq] +.RS 2 +.IP \[bu] 2 +Indore, Madhya Pradesh, India +.IP \[bu] 2 +Provider: Zata .RE .RE .SS --s3-endpoint .PP Endpoint for S3 API. .PP -Leave blank if using AWS to use the default endpoint for the region. +Required when using an S3 clone.
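+.PP
+For example, a minimal sketch of a config entry for a generic
+S3-compatible service (the remote name \[dq]myclone\[dq], the
+endpoint URL and the XXX/YYY credentials are illustrative
+placeholders):
+.IP
+.nf
+\f[C]
+[myclone]
+type = s3
+provider = Other
+endpoint = https://s3.example.com
+access_key_id = XXX
+secret_access_key = YYY
+\f[R]
+.fi
+.PP
+With such an entry, \f[C]rclone lsd myclone:\f[R] should list the
+buckets on the service.
+The region may be left blank here, as noted under --s3-region above.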
.PP Properties: .IP \[bu] 2 @@ -37587,15 +39355,2402 @@ Config: endpoint .IP \[bu] 2 Env Var: RCLONE_S3_ENDPOINT .IP \[bu] 2 -Provider: AWS +Provider: +AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,FlashBlade,GCS,Hetzner,HuaweiOBS,IBMCOS,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,Rabata,RackCorp,Rclone,Scaleway,SeaweedFS,Selectel,Servercore,SpectraLogic,StackPath,Storj,Synology,TencentCOS,Wasabi,Zata,Other .IP \[bu] 2 Type: string .IP \[bu] 2 Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]oss-accelerate.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Global Accelerate +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-accelerate-overseas.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Global Accelerate (outside mainland China) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-hangzhou.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +East China 1 (Hangzhou) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-shanghai.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +East China 2 (Shanghai) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-qingdao.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +North China 1 (Qingdao) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-beijing.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +North China 2 (Beijing) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-zhangjiakou.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +North China 3 (Zhangjiakou) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-huhehaote.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +North China 5 (Hohhot) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-wulanchabu.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +North China 6 (Ulanqab) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-shenzhen.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +South China 1 (Shenzhen) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-heyuan.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +South China 2 (Heyuan) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-guangzhou.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +South China 3 (Guangzhou) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-chengdu.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +West China 1 (Chengdu) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-cn-hongkong.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Hong Kong (Hong Kong) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-us-west-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +US West 1 (Silicon Valley) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-us-east-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +US East 1 (Virginia) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-ap-southeast-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Southeast Asia Southeast 1 (Singapore) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-ap-southeast-2.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific Southeast 2 (Sydney) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-ap-southeast-3.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Southeast Asia Southeast 3 (Kuala Lumpur) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-ap-southeast-5.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific Southeast 5 (Jakarta) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-ap-northeast-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific Northeast 1 (Japan) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 
+\[dq]oss-ap-south-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific South 1 (Mumbai) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-eu-central-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Central Europe 1 (Frankfurt) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-eu-west-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +West Europe (London) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]oss-me-east-1.aliyuncs.com\[dq] +.RS 2 +.IP \[bu] 2 +Middle East 1 (Dubai) +.IP \[bu] 2 +Provider: Alibaba +.RE +.IP \[bu] 2 +\[dq]s3.ir-thr-at1.arvanstorage.ir\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint - a good choice if you are unsure. +.IP \[bu] 2 +Tehran Iran (Simin) +.IP \[bu] 2 +Provider: ArvanCloud +.RE +.IP \[bu] 2 +\[dq]s3.ir-tbz-sh1.arvanstorage.ir\[dq] +.RS 2 +.IP \[bu] 2 +Tabriz Iran (Shahriar) +.IP \[bu] 2 +Provider: ArvanCloud +.RE +.IP \[bu] 2 +\[dq]eos-wuxi-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint - a good choice if you are unsure. +.IP \[bu] 2 +East China (Suzhou) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-jinan-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +East China (Jinan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-ningbo-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +East China (Hangzhou) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-shanghai-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +East China (Shanghai-1) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-zhengzhou-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Zhengzhou) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-hunan-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Changsha-1) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-zhuzhou-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Changsha-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-guangzhou-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +South China (Guangzhou-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-dongguan-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +South China (Guangzhou-3) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-beijing-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-1) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-beijing-2.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-beijing-4.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-3) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-huhehaote-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +North China (Huhehaote) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-chengdu-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Southwest China (Chengdu) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-chongqing-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Southwest China (Chongqing) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-guiyang-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Southwest China (Guiyang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-xian-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Northwest China (Xian) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-yunnan.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Yunnan China (Kunming) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-yunnan-2.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Yunnan China (Kunming-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2
+\[dq]eos-tianjin-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Tianjin China (Tianjin) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-jilin-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Jilin China (Changchun) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-hubei-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Hubei China (Xiangyan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-jiangxi-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Jiangxi China (Nanchang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-gansu-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Gansu China (Lanzhou) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-shanxi-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Shanxi China (Taiyuan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-liaoning-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Liaoning China (Shenyang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-hebei-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Hebei China (Shijiazhuang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-fujian-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Fujian China (Xiamen) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-guangxi-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Guangxi China (Nanning) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]eos-anhui-1.cmecloud.cn\[dq] +.RS 2 +.IP \[bu] 2 +Anhui China (Huainan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]s3.cubbit.eu\[dq] +.RS 2 +.IP \[bu] 2 +Cubbit DS3 Object Storage endpoint +.IP \[bu] 2 +Provider: Cubbit +.RE +.IP \[bu] 2 +\[dq]syd1.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces Sydney 1 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]sfo3.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces San Francisco 3 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]sfo2.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces San Francisco 2 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]fra1.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces Frankfurt 1 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]nyc3.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces New York 3 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]ams3.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces Amsterdam 3 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]sgp1.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces Singapore 1 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]lon1.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces London 1 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]tor1.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces Toronto 1 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]blr1.digitaloceanspaces.com\[dq] +.RS 2 +.IP \[bu] 2 +DigitalOcean Spaces Bangalore 1 +.IP \[bu] 2 +Provider: DigitalOcean +.RE +.IP \[bu] 2 +\[dq]objects-us-east-1.dream.io\[dq] +.RS 2 +.IP \[bu] 2 +Dream Objects endpoint +.IP \[bu] 2 +Provider: Dreamhost +.RE +.IP \[bu] 2 +\[dq]s5lu.com\[dq] +.RS 2 +.IP \[bu] 2 +Global FileLu S5 endpoint +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]us.s5lu.com\[dq] +.RS 2 +.IP \[bu] 2 +North America (US-East) region endpoint +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]eu.s5lu.com\[dq] +.RS 2 +.IP \[bu] 2 +Europe (EU-Central) region endpoint +.IP \[bu] 
2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]ap.s5lu.com\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific (AP-Southeast) region endpoint +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]me.s5lu.com\[dq] +.RS 2 +.IP \[bu] 2 +Middle East (ME-Central) region endpoint +.IP \[bu] 2 +Provider: FileLu +.RE +.IP \[bu] 2 +\[dq]https://storage.googleapis.com\[dq] +.RS 2 +.IP \[bu] 2 +Google Cloud Storage endpoint +.IP \[bu] 2 +Provider: GCS +.RE +.IP \[bu] 2 +\[dq]hel1.your-objectstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +Helsinki +.IP \[bu] 2 +Provider: Hetzner +.RE +.IP \[bu] 2 +\[dq]fsn1.your-objectstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +Falkenstein +.IP \[bu] 2 +Provider: Hetzner +.RE +.IP \[bu] 2 +\[dq]nbg1.your-objectstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +Nuremberg +.IP \[bu] 2 +Provider: Hetzner +.RE +.IP \[bu] 2 +\[dq]obs.af-south-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +AF-Johannesburg +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.ap-southeast-2.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +AP-Bangkok +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.ap-southeast-3.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +AP-Singapore +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.cn-east-3.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.cn-east-2.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN East-Shanghai2 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.cn-north-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.cn-north-4.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN North-Beijing4 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.cn-south-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN South-Guangzhou +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.ap-southeast-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +CN-Hong Kong +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.sa-argentina-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Buenos Aires1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.sa-peru-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Lima1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.na-mexico-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Mexico City1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.sa-chile-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Santiago2 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.sa-brazil-1.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +LA-Sao Paulo1 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]obs.ru-northwest-2.myhuaweicloud.com\[dq] +.RS 2 +.IP \[bu] 2 +RU-Moscow2 +.IP \[bu] 2 +Provider: HuaweiOBS +.RE +.IP \[bu] 2 +\[dq]s3.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.dal.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Dallas Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.wdc.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Washington DC Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.sjc.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region San Jose Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Private 
Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.dal.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Dallas Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.wdc.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Washington DC Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.sjc.us.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region San Jose Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.us-east.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Region East Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.us-east.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Region East Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.us-south.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Region South Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.us-south.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +US Region South Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.fra.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Frankfurt Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.mil.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Milan Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.ams.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Amsterdam Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.fra.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Frankfurt Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.mil.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Milan Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.ams.eu.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Amsterdam Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.eu-gb.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Great Britain Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.eu-gb.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Great Britain Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.eu-de.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Region DE Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.eu-de.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +EU Region DE Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.tok.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Tokyo Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 
+\[dq]s3.hkg.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Hong Kong Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.seo.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Seoul Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.tok.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Tokyo Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.hkg.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Hong Kong Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.seo.ap.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cross Regional Seoul Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.jp-tok.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Region Japan Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.jp-tok.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Region Japan Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.au-syd.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Region Australia Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.au-syd.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +APAC Region Australia Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.ams03.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Amsterdam Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.ams03.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Amsterdam Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.che01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Chennai Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.che01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Chennai Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.mel01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.mel01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.osl01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Oslo Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.osl01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Oslo Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.tor01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Toronto Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.tor01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Toronto Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.seo01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Seoul Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 
+\[dq]s3.private.seo01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Seoul Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.mon01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Montreal Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.mon01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Montreal Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.mex01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Mexico Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.mex01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Mexico Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.sjc04.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +San Jose Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.sjc04.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +San Jose Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.mil01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Milan Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.mil01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Milan Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.hkg02.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Hong Kong Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.hkg02.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Hong Kong Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.par01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Paris Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.par01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Paris Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.sng01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Singapore Single Site Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]s3.private.sng01.cloud-object-storage.appdomain.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Singapore Single Site Private Endpoint +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]de-fra.i3storage.com\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt, Germany +.IP \[bu] 2 +Provider: Intercolo +.RE +.IP \[bu] 2 +\[dq]s3-eu-central-1.ionoscloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt, Germany +.IP \[bu] 2 +Provider: IONOS +.RE +.IP \[bu] 2 +\[dq]s3-eu-central-2.ionoscloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Berlin, Germany +.IP \[bu] 2 +Provider: IONOS +.RE +.IP \[bu] 2 +\[dq]s3-eu-south-2.ionoscloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Logrono, Spain +.IP \[bu] 2 +Provider: IONOS +.RE +.IP \[bu] 2 +\[dq]s3.leviia.com\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint +.IP \[bu] 2 +Leviia +.IP \[bu] 2 +Provider: Leviia +.RE +.IP \[bu] 2 +\[dq]storage.iran.liara.space\[dq] +.RS 2 +.IP \[bu] 2 +The default endpoint +.IP \[bu] 2 +Iran +.IP \[bu] 2 +Provider: Liara +.RE +.IP \[bu] 2 +\[dq]nl-ams-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Amsterdam (Netherlands), nl-ams-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]us-southeast-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Atlanta, GA (USA), us-southeast-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 
+\[dq]in-maa-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Chennai (India), in-maa-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]us-ord-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Chicago, IL (USA), us-ord-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]eu-central-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt (Germany), eu-central-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]id-cgk-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Jakarta (Indonesia), id-cgk-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]gb-lon-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +London 2 (Great Britain), gb-lon-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]us-lax-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Los Angeles, CA (USA), us-lax-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]es-mad-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Madrid (Spain), es-mad-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]au-mel-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne (Australia), au-mel-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]us-mia-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Miami, FL (USA), us-mia-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]it-mil-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Milan (Italy), it-mil-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]us-east-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Newark, NJ (USA), us-east-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]jp-osa-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Osaka (Japan), jp-osa-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]fr-par-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Paris (France), fr-par-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]br-gru-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +São Paulo (Brazil), br-gru-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]us-sea-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Seattle, WA (USA), us-sea-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]ap-south-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Singapore, ap-south-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]sg-sin-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Singapore 2, sg-sin-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]se-sto-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Stockholm (Sweden), se-sto-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]us-iad-1.linodeobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Washington, DC, (USA), us-iad-1 +.IP \[bu] 2 +Provider: Linode +.RE +.IP \[bu] 2 +\[dq]s3.us-west-1.{account_name}.lyve.seagate.com\[dq] +.RS 2 +.IP \[bu] 2 +US West 1 - California +.IP \[bu] 2 +Provider: LyveCloud +.RE +.IP \[bu] 2 +\[dq]s3.eu-west-1.{account_name}.lyve.seagate.com\[dq] +.RS 2 +.IP \[bu] 2 +EU West 1 - Ireland +.IP \[bu] 2 +Provider: LyveCloud +.RE +.IP \[bu] 2 +\[dq]br-se1.magaluobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +São Paulo, SP (BR), br-se1 +.IP \[bu] 2 +Provider: Magalu +.RE +.IP \[bu] 2 +\[dq]br-ne1.magaluobjects.com\[dq] +.RS 2 +.IP \[bu] 2 +Fortaleza, CE (BR), br-ne1 +.IP \[bu] 2 +Provider: Magalu +.RE +.IP \[bu] 2 +\[dq]s3.eu-central-1.s4.mega.io\[dq] +.RS 2 +.IP \[bu] 2 +Mega S4 eu-central-1 (Amsterdam) +.IP \[bu] 2 +Provider: Mega +.RE +.IP \[bu] 2 +\[dq]s3.eu-central-2.s4.mega.io\[dq] +.RS 2 +.IP \[bu] 2 +Mega S4 eu-central-2 (Bettembourg) +.IP \[bu] 2 +Provider: Mega +.RE +.IP \[bu] 2 +\[dq]s3.ca-central-1.s4.mega.io\[dq] +.RS 2 +.IP \[bu] 2 +Mega S4 ca-central-1 (Montreal) +.IP \[bu] 2 +Provider: 
Mega +.RE +.IP \[bu] 2 +\[dq]s3.ca-west-1.s4.mega.io\[dq] +.RS 2 +.IP \[bu] 2 +Mega S4 ca-west-1 (Vancouver) +.IP \[bu] 2 +Provider: Mega +.RE +.IP \[bu] 2 +\[dq]oos.eu-west-2.outscale.com\[dq] +.RS 2 +.IP \[bu] 2 +Outscale EU West 2 (Paris) +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]oos.us-east-2.outscale.com\[dq] +.RS 2 +.IP \[bu] 2 +Outscale US East 2 (New Jersey) +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]oos.us-west-1.outscale.com\[dq] +.RS 2 +.IP \[bu] 2 +Outscale US West 1 (California) +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]oos.cloudgouv-eu-west-1.outscale.com\[dq] +.RS 2 +.IP \[bu] 2 +Outscale SecNumCloud (Paris) +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]oos.ap-northeast-1.outscale.com\[dq] +.RS 2 +.IP \[bu] 2 +Outscale AP Northeast 1 (Japan) +.IP \[bu] 2 +Provider: Outscale +.RE +.IP \[bu] 2 +\[dq]s3.gra.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Gravelines, France +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.rbx.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Roubaix, France +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.sbg.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Strasbourg, France +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.eu-west-par.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Paris, France (3AZ) +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.de.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Frankfurt, Germany +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.uk.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud London, United Kingdom +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.waw.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Warsaw, Poland +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.bhs.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Beauharnois, Canada +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.ca-east-tor.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Toronto, Canada +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.sgp.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Singapore +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.ap-southeast-syd.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Sydney, Australia +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.ap-south-mum.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Mumbai, India +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.us-east-va.io.cloud.ovh.us\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Vint Hill, Virginia, USA +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.us-west-or.io.cloud.ovh.us\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Hillsboro, Oregon, USA +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.rbx-archive.io.cloud.ovh.net\[dq] +.RS 2 +.IP \[bu] 2 +OVHcloud Roubaix, France (Cold Archive) +.IP \[bu] 2 +Provider: OVHcloud +.RE +.IP \[bu] 2 +\[dq]s3.petabox.io\[dq] +.RS 2 +.IP \[bu] 2 +US East (N. +Virginia) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]s3.us-east-1.petabox.io\[dq] +.RS 2 +.IP \[bu] 2 +US East (N.
+Virginia) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]s3.eu-central-1.petabox.io\[dq] +.RS 2 +.IP \[bu] 2 +Europe (Frankfurt) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]s3.ap-southeast-1.petabox.io\[dq] +.RS 2 +.IP \[bu] 2 +Asia Pacific (Singapore) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]s3.me-south-1.petabox.io\[dq] +.RS 2 +.IP \[bu] 2 +Middle East (Bahrain) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]s3.sa-east-1.petabox.io\[dq] +.RS 2 +.IP \[bu] 2 +South America (São Paulo) +.IP \[bu] 2 +Provider: Petabox +.RE +.IP \[bu] 2 +\[dq]s3-cn-east-1.qiniucs.com\[dq] +.RS 2 +.IP \[bu] 2 +East China Endpoint 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]s3-cn-east-2.qiniucs.com\[dq] +.RS 2 +.IP \[bu] 2 +East China Endpoint 2 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]s3-cn-north-1.qiniucs.com\[dq] +.RS 2 +.IP \[bu] 2 +North China Endpoint 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]s3-cn-south-1.qiniucs.com\[dq] +.RS 2 +.IP \[bu] 2 +South China Endpoint 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]s3-us-north-1.qiniucs.com\[dq] +.RS 2 +.IP \[bu] 2 +North America Endpoint 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]s3-ap-southeast-1.qiniucs.com\[dq] +.RS 2 +.IP \[bu] 2 +Southeast Asia Endpoint 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]s3-ap-northeast-1.qiniucs.com\[dq] +.RS 2 +.IP \[bu] 2 +Northeast Asia Endpoint 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]s3.us-east-1.rabata.io\[dq] +.RS 2 +.IP \[bu] 2 +US East (N. +Virginia) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]s3.eu-west-1.rabata.io\[dq] +.RS 2 +.IP \[bu] 2 +EU West (Ireland) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]s3.eu-west-2.rabata.io\[dq] +.RS 2 +.IP \[bu] 2 +EU West (London) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Global (AnyCast) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Australia (Anycast) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-nsw.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Sydney (Australia) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-qld.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Brisbane (Australia) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-vic.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne (Australia) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-wa.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Perth (Australia) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]ph.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Manila (Philippines) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]th.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Bangkok (Thailand) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]hk.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +HK (Hong Kong) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]mn.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Ulaanbaatar (Mongolia) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]kg.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Bishkek (Kyrgyzstan) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]id.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Jakarta (Indonesia) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]jp.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Tokyo (Japan) Endpoint +.IP \[bu] 2 +Provider: 
RackCorp +.RE +.IP \[bu] 2 +\[dq]sg.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +SG (Singapore) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]de.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt (Germany) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +USA (AnyCast) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us-east-1.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +New York (USA) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us-west-1.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Fremont (USA) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]nz.s3.rackcorp.com\[dq] +.RS 2 +.IP \[bu] 2 +Auckland (New Zealand) Endpoint +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]s3.nl-ams.scw.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Amsterdam Endpoint +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]s3.fr-par.scw.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Paris Endpoint +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]s3.pl-waw.scw.cloud\[dq] +.RS 2 +.IP \[bu] 2 +Warsaw Endpoint +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]localhost:8333\[dq] +.RS 2 +.IP \[bu] 2 +SeaweedFS S3 localhost +.IP \[bu] 2 +Provider: SeaweedFS +.RE +.IP \[bu] 2 +\[dq]s3.ru-1.storage.selcloud.ru\[dq] +.RS 2 +.IP \[bu] 2 +Saint Petersburg +.IP \[bu] 2 +Provider: Selectel,Servercore +.RE +.IP \[bu] 2 +\[dq]s3.gis-1.storage.selcloud.ru\[dq] +.RS 2 +.IP \[bu] 2 +Moscow +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]s3.ru-7.storage.selcloud.ru\[dq] +.RS 2 +.IP \[bu] 2 +Moscow +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]s3.uz-2.srvstorage.uz\[dq] +.RS 2 +.IP \[bu] 2 +Tashkent, Uzbekistan +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]s3.kz-1.srvstorage.kz\[dq] +.RS 2 +.IP \[bu] 2 +Almaty, Kazakhstan +.IP \[bu] 2 +Provider: Servercore +.RE +.IP \[bu] 2 +\[dq]s3.us-east-2.stackpathstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +US East Endpoint +.IP \[bu] 2 +Provider: StackPath +.RE +.IP \[bu] 2 +\[dq]s3.us-west-1.stackpathstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +US West Endpoint +.IP \[bu] 2 +Provider: StackPath +.RE +.IP \[bu] 2 +\[dq]s3.eu-central-1.stackpathstorage.com\[dq] +.RS 2 +.IP \[bu] 2 +EU Endpoint +.IP \[bu] 2 +Provider: StackPath +.RE +.IP \[bu] 2 +\[dq]gateway.storjshare.io\[dq] +.RS 2 +.IP \[bu] 2 +Global Hosted Gateway +.IP \[bu] 2 +Provider: Storj +.RE +.IP \[bu] 2 +\[dq]eu-001.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +EU Endpoint 1 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]eu-002.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +EU Endpoint 2 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]us-001.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +US Endpoint 1 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]us-002.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +US Endpoint 2 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]tw-001.s3.synologyc2.net\[dq] +.RS 2 +.IP \[bu] 2 +TW Endpoint 1 +.IP \[bu] 2 +Provider: Synology +.RE +.IP \[bu] 2 +\[dq]cos.ap-beijing.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Beijing Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-nanjing.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Nanjing Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-shanghai.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Shanghai Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-guangzhou.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Guangzhou Region +.IP \[bu] 2 +Provider:
TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-chengdu.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Chengdu Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-chongqing.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Chongqing Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-hongkong.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Hong Kong (China) Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-singapore.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Singapore Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-mumbai.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Mumbai Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-seoul.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Seoul Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-bangkok.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Bangkok Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.ap-tokyo.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Tokyo Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.na-siliconvalley.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Silicon Valley Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.na-ashburn.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Virginia Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.na-toronto.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Toronto Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.eu-frankfurt.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.eu-moscow.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Moscow Region +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]cos.accelerate.myqcloud.com\[dq] +.RS 2 +.IP \[bu] 2 +Use Tencent COS Accelerate Endpoint +.IP \[bu] 2 +Provider: TencentCOS +.RE +.IP \[bu] 2 +\[dq]s3.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi US East 1 (N. +Virginia) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.us-east-2.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi US East 2 (N. 
+Virginia) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.us-central-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi US Central 1 (Texas) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.us-west-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi US West 1 (Oregon) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.ca-central-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi CA Central 1 (Toronto) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.eu-central-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi EU Central 1 (Amsterdam) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.eu-central-2.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi EU Central 2 (Frankfurt) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.eu-west-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi EU West 1 (London) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.eu-west-2.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi EU West 2 (Paris) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.eu-south-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi EU South 1 (Milan) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.ap-northeast-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi AP Northeast 1 (Tokyo) endpoint +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.ap-northeast-2.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi AP Northeast 2 (Osaka) endpoint +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.ap-southeast-1.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi AP Southeast 1 (Singapore) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]s3.ap-southeast-2.wasabisys.com\[dq] +.RS 2 +.IP \[bu] 2 +Wasabi AP Southeast 2 (Sydney) +.IP \[bu] 2 +Provider: Wasabi +.RE +.IP \[bu] 2 +\[dq]idr01.zata.ai\[dq] +.RS 2 +.IP \[bu] 2 +South Asia Endpoint +.IP \[bu] 2 +Provider: Zata +.RE +.RE .SS --s3-location-constraint .PP Location constraint - must be set to match the Region. .PP +Leave blank if not sure. Used when creating buckets only. 
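+.PP
+As a sketch, a remote used to create buckets in AWS eu-west-2 needs
+the two values to agree (the remote name \[dq]aws\[dq] is
+illustrative and credentials are omitted):
+.IP
+.nf
+\f[C]
+[aws]
+type = s3
+provider = AWS
+region = eu-west-2
+location_constraint = eu-west-2
+\f[R]
+.fi
+.PP
+If the two values disagree, bucket creation will typically fail with
+an IllegalLocationConstraintException error from AWS.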
.PP Properties: @@ -37604,7 +41759,8 @@ Config: location_constraint .IP \[bu] 2 Env Var: RCLONE_S3_LOCATION_CONSTRAINT .IP \[bu] 2 -Provider: AWS +Provider: +AWS,ArvanCloud,Ceph,ChinaMobile,DigitalOcean,Dreamhost,Exaba,GCS,Hetzner,IBMCOS,LyveCloud,Minio,Netease,Qiniu,Rabata,RackCorp,SeaweedFS,Synology,Wasabi,Zata,Other .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -37617,156 +41773,953 @@ Examples: .RS 2 .IP \[bu] 2 Empty for US Region, Northern Virginia, or Pacific Northwest +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-east-2\[dq] .RS 2 .IP \[bu] 2 US East (Ohio) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-west-1\[dq] .RS 2 .IP \[bu] 2 US West (Northern California) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-west-2\[dq] .RS 2 .IP \[bu] 2 US West (Oregon) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ca-central-1\[dq] .RS 2 .IP \[bu] 2 Canada (Central) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-west-1\[dq] .RS 2 .IP \[bu] 2 EU (Ireland) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-west-2\[dq] .RS 2 .IP \[bu] 2 EU (London) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-west-3\[dq] .RS 2 .IP \[bu] 2 EU (Paris) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-north-1\[dq] .RS 2 .IP \[bu] 2 EU (Stockholm) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]eu-south-1\[dq] .RS 2 .IP \[bu] 2 EU (Milan) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]EU\[dq] .RS 2 .IP \[bu] 2 EU Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-southeast-1\[dq] .RS 2 .IP \[bu] 2 Asia Pacific (Singapore) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-southeast-2\[dq] .RS 2 .IP \[bu] 2 Asia Pacific (Sydney) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-northeast-1\[dq] .RS 2 .IP \[bu] 2 Asia Pacific (Tokyo) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-northeast-2\[dq] .RS 2 .IP \[bu] 2 Asia Pacific (Seoul) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-northeast-3\[dq] .RS 2 .IP \[bu] 2 Asia Pacific (Osaka-Local) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-south-1\[dq] .RS 2 .IP \[bu] 2 Asia Pacific (Mumbai) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ap-east-1\[dq] .RS 2 .IP \[bu] 2 Asia Pacific (Hong Kong) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]sa-east-1\[dq] .RS 2 .IP \[bu] 2 South America (Sao Paulo) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]il-central-1\[dq] .RS 2 .IP \[bu] 2 Israel (Tel Aviv) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]me-south-1\[dq] .RS 2 .IP \[bu] 2 Middle East (Bahrain) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]af-south-1\[dq] .RS 2 .IP \[bu] 2 Africa (Cape Town) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]cn-north-1\[dq] .RS 2 .IP \[bu] 2 China (Beijing) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]cn-northwest-1\[dq] .RS 2 .IP \[bu] 2 China (Ningxia) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-gov-east-1\[dq] .RS 2 .IP \[bu] 2 AWS GovCloud (US-East) Region +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]us-gov-west-1\[dq] .RS 2 .IP \[bu] 2 AWS GovCloud (US) Region +.IP \[bu] 2 +Provider: AWS +.RE +.IP \[bu] 2 +\[dq]ir-thr-at1\[dq] +.RS 2 +.IP \[bu] 2 +Tehran Iran (Simin) +.IP \[bu] 2 +Provider: ArvanCloud +.RE +.IP \[bu] 2 +\[dq]ir-tbz-sh1\[dq] +.RS 2 +.IP \[bu] 2 +Tabriz Iran (Shahriar) +.IP \[bu] 2 +Provider: ArvanCloud +.RE +.IP \[bu] 2 +\[dq]wuxi1\[dq] +.RS 2 +.IP \[bu] 2 +East China (Suzhou) +.IP 
\[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]jinan1\[dq] +.RS 2 +.IP \[bu] 2 +East China (Jinan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]ningbo1\[dq] +.RS 2 +.IP \[bu] 2 +East China (Hangzhou) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]shanghai1\[dq] +.RS 2 +.IP \[bu] 2 +East China (Shanghai-1) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]zhengzhou1\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Zhengzhou) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]hunan1\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Changsha-1) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]zhuzhou1\[dq] +.RS 2 +.IP \[bu] 2 +Central China (Changsha-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]guangzhou1\[dq] +.RS 2 +.IP \[bu] 2 +South China (Guangzhou-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]dongguan1\[dq] +.RS 2 +.IP \[bu] 2 +South China (Guangzhou-3) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]beijing1\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-1) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]beijing2\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]beijing4\[dq] +.RS 2 +.IP \[bu] 2 +North China (Beijing-3) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]huhehaote1\[dq] +.RS 2 +.IP \[bu] 2 +North China (Huhehaote) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]chengdu1\[dq] +.RS 2 +.IP \[bu] 2 +Southwest China (Chengdu) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]chongqing1\[dq] +.RS 2 +.IP \[bu] 2 +Southwest China (Chongqing) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]guiyang1\[dq] +.RS 2 +.IP \[bu] 2 +Southwest China (Guiyang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]xian1\[dq] +.RS 2 +.IP \[bu] 2 +Northwest China (Xian) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]yunnan\[dq] +.RS 2 +.IP \[bu] 2 +Yunnan China (Kunming) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]yunnan2\[dq] +.RS 2 +.IP \[bu] 2 +Yunnan China (Kunming-2) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]tianjin1\[dq] +.RS 2 +.IP \[bu] 2 +Tianjin China (Tianjin) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]jilin1\[dq] +.RS 2 +.IP \[bu] 2 +Jilin China (Changchun) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]hubei1\[dq] +.RS 2 +.IP \[bu] 2 +Hubei China (Xiangyan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]jiangxi1\[dq] +.RS 2 +.IP \[bu] 2 +Jiangxi China (Nanchang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]gansu1\[dq] +.RS 2 +.IP \[bu] 2 +Gansu China (Lanzhou) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]shanxi1\[dq] +.RS 2 +.IP \[bu] 2 +Shanxi China (Taiyuan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]liaoning1\[dq] +.RS 2 +.IP \[bu] 2 +Liaoning China (Shenyang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]hebei1\[dq] +.RS 2 +.IP \[bu] 2 +Hebei China (Shijiazhuang) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]fujian1\[dq] +.RS 2 +.IP \[bu] 2 +Fujian China (Xiamen) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]guangxi1\[dq] +.RS 2 +.IP \[bu] 2 +Guangxi China (Nanning) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]anhui1\[dq] +.RS 2 +.IP \[bu] 2 +Anhui China (Huainan) +.IP \[bu] 2 +Provider: ChinaMobile +.RE +.IP \[bu] 2 +\[dq]us-standard\[dq] +.RS 2 +.IP \[bu] 2 +US Cross 
Region Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-vault\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-cold\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-flex\[dq] +.RS 2 +.IP \[bu] 2 +US Cross Region Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-east-standard\[dq] +.RS 2 +.IP \[bu] 2 +US East Region Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-east-vault\[dq] +.RS 2 +.IP \[bu] 2 +US East Region Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-east-cold\[dq] +.RS 2 +.IP \[bu] 2 +US East Region Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-east-flex\[dq] +.RS 2 +.IP \[bu] 2 +US East Region Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-south-standard\[dq] +.RS 2 +.IP \[bu] 2 +US South Region Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-south-vault\[dq] +.RS 2 +.IP \[bu] 2 +US South Region Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-south-cold\[dq] +.RS 2 +.IP \[bu] 2 +US South Region Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]us-south-flex\[dq] +.RS 2 +.IP \[bu] 2 +US South Region Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-standard\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-vault\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-cold\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-flex\[dq] +.RS 2 +.IP \[bu] 2 +EU Cross Region Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-gb-standard\[dq] +.RS 2 +.IP \[bu] 2 +Great Britain Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-gb-vault\[dq] +.RS 2 +.IP \[bu] 2 +Great Britain Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-gb-cold\[dq] +.RS 2 +.IP \[bu] 2 +Great Britain Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]eu-gb-flex\[dq] +.RS 2 +.IP \[bu] 2 +Great Britain Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]ap-standard\[dq] +.RS 2 +.IP \[bu] 2 +APAC Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]ap-vault\[dq] +.RS 2 +.IP \[bu] 2 +APAC Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]ap-cold\[dq] +.RS 2 +.IP \[bu] 2 +APAC Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]ap-flex\[dq] +.RS 2 +.IP \[bu] 2 +APAC Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]mel01-standard\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]mel01-vault\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]mel01-cold\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]mel01-flex\[dq] +.RS 2 +.IP \[bu] 2 +Melbourne Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]tor01-standard\[dq] +.RS 2 +.IP \[bu] 2 +Toronto Standard +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]tor01-vault\[dq] +.RS 2 +.IP \[bu] 2 +Toronto Vault +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]tor01-cold\[dq] +.RS 2 +.IP \[bu] 2 +Toronto Cold +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]tor01-flex\[dq] +.RS 2 +.IP \[bu] 2 +Toronto Flex +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]cn-east-1\[dq] +.RS 2 +.IP \[bu] 
2 +East China Region 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]cn-east-2\[dq] +.RS 2 +.IP \[bu] 2 +East China Region 2 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]cn-north-1\[dq] +.RS 2 +.IP \[bu] 2 +North China Region 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]cn-south-1\[dq] +.RS 2 +.IP \[bu] 2 +South China Region 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]us-north-1\[dq] +.RS 2 +.IP \[bu] 2 +North America Region 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]ap-southeast-1\[dq] +.RS 2 +.IP \[bu] 2 +Southeast Asia Region 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]ap-northeast-1\[dq] +.RS 2 +.IP \[bu] 2 +Northeast Asia Region 1 +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]us-east-1\[dq] +.RS 2 +.IP \[bu] 2 +US East (N. +Virginia) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]eu-west-1\[dq] +.RS 2 +.IP \[bu] 2 +EU (Ireland) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]eu-west-2\[dq] +.RS 2 +.IP \[bu] 2 +EU (London) +.IP \[bu] 2 +Provider: Rabata +.RE +.IP \[bu] 2 +\[dq]global\[dq] +.RS 2 +.IP \[bu] 2 +Global CDN Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au\[dq] +.RS 2 +.IP \[bu] 2 +Australia (All locations) +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-nsw\[dq] +.RS 2 +.IP \[bu] 2 +NSW (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-qld\[dq] +.RS 2 +.IP \[bu] 2 +QLD (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-vic\[dq] +.RS 2 +.IP \[bu] 2 +VIC (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]au-wa\[dq] +.RS 2 +.IP \[bu] 2 +Perth (Australia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]ph\[dq] +.RS 2 +.IP \[bu] 2 +Manila (Philippines) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]th\[dq] +.RS 2 +.IP \[bu] 2 +Bangkok (Thailand) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]hk\[dq] +.RS 2 +.IP \[bu] 2 +HK (Hong Kong) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]mn\[dq] +.RS 2 +.IP \[bu] 2 +Ulaanbaatar (Mongolia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]kg\[dq] +.RS 2 +.IP \[bu] 2 +Bishkek (Kyrgyzstan) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]id\[dq] +.RS 2 +.IP \[bu] 2 +Jakarta (Indonesia) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]jp\[dq] +.RS 2 +.IP \[bu] 2 +Tokyo (Japan) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]sg\[dq] +.RS 2 +.IP \[bu] 2 +SG (Singapore) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]de\[dq] +.RS 2 +.IP \[bu] 2 +Frankfurt (Germany) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us\[dq] +.RS 2 +.IP \[bu] 2 +USA (AnyCast) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us-east-1\[dq] +.RS 2 +.IP \[bu] 2 +New York (USA) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]us-west-1\[dq] +.RS 2 +.IP \[bu] 2 +Fremont (USA) Region +.IP \[bu] 2 +Provider: RackCorp +.RE +.IP \[bu] 2 +\[dq]nz\[dq] +.RS 2 +.IP \[bu] 2 +Auckland (New Zealand) Region +.IP \[bu] 2 +Provider: RackCorp .RE .RE .SS --s3-acl @@ -37791,7 +42744,8 @@ Config: acl .IP \[bu] 2 Env Var: RCLONE_S3_ACL .IP \[bu] 2 -Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade,Mega +Provider: 
+AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -37800,20 +42754,15 @@ Required: false Examples: .RS 2 .IP \[bu] 2 -\[dq]default\[dq] -.RS 2 -.IP \[bu] 2 -Owner gets Full_CONTROL. -.IP \[bu] 2 -No one else has access rights (default). -.RE -.IP \[bu] 2 \[dq]private\[dq] .RS 2 .IP \[bu] 2 Owner gets FULL_CONTROL. .IP \[bu] 2 No one else has access rights (default). +.IP \[bu] 2 +Provider: +AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,Wasabi,Zata,Other .RE .IP \[bu] 2 \[dq]public-read\[dq] @@ -37822,6 +42771,9 @@ No one else has access rights (default). Owner gets FULL_CONTROL. .IP \[bu] 2 The AllUsers group gets READ access. +.IP \[bu] 2 +Provider: +AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other .RE .IP \[bu] 2 \[dq]public-read-write\[dq] @@ -37832,6 +42784,9 @@ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. .IP \[bu] 2 Granting this on a bucket is generally not recommended. +.IP \[bu] 2 +Provider: +AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other .RE .IP \[bu] 2 \[dq]authenticated-read\[dq] @@ -37840,6 +42795,9 @@ Granting this on a bucket is generally not recommended. Owner gets FULL_CONTROL. .IP \[bu] 2 The AuthenticatedUsers group gets READ access. +.IP \[bu] 2 +Provider: +AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other .RE .IP \[bu] 2 \[dq]bucket-owner-read\[dq] @@ -37851,6 +42809,9 @@ Bucket owner gets READ access. .IP \[bu] 2 If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. +.IP \[bu] 2 +Provider: +AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other .RE .IP \[bu] 2 \[dq]bucket-owner-full-control\[dq] @@ -37861,6 +42822,9 @@ object. .IP \[bu] 2 If you specify this canned ACL when creating a bucket, Amazon S3 ignores it. +.IP \[bu] 2 +Provider: +AWS,Alibaba,ArvanCloud,Ceph,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,StackPath,TencentCOS,Wasabi,Zata,Other .RE .IP \[bu] 2 \[dq]private\[dq] @@ -37872,6 +42836,8 @@ No one else has access rights (default). 
.IP \[bu] 2 This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS. +.IP \[bu] 2 +Provider: IBMCOS .RE .IP \[bu] 2 \[dq]public-read\[dq] @@ -37883,6 +42849,8 @@ The AllUsers group gets READ access. .IP \[bu] 2 This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS. +.IP \[bu] 2 +Provider: IBMCOS .RE .IP \[bu] 2 \[dq]public-read-write\[dq] @@ -37893,6 +42861,8 @@ Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. .IP \[bu] 2 This acl is available on IBM Cloud (Infra), On-Premise IBM COS. +.IP \[bu] 2 +Provider: IBMCOS .RE .IP \[bu] 2 \[dq]authenticated-read\[dq] @@ -37905,6 +42875,18 @@ The AuthenticatedUsers group gets READ access. Not supported on Buckets. .IP \[bu] 2 This acl is available on IBM Cloud (Infra) and On-Premise IBM COS. +.IP \[bu] 2 +Provider: IBMCOS +.RE +.IP \[bu] 2 +\[dq]default\[dq] +.RS 2 +.IP \[bu] 2 +Owner gets Full_CONTROL. +.IP \[bu] 2 +No one else has access rights (default). +.IP \[bu] 2 +Provider: TencentCOS .RE .RE .SS --s3-server-side-encryption @@ -37931,18 +42913,24 @@ Examples: .RS 2 .IP \[bu] 2 None +.IP \[bu] 2 +Provider: AWS,Ceph,ChinaMobile,Minio .RE .IP \[bu] 2 \[dq]AES256\[dq] .RS 2 .IP \[bu] 2 AES256 +.IP \[bu] 2 +Provider: AWS,Ceph,ChinaMobile,Minio .RE .IP \[bu] 2 \[dq]aws:kms\[dq] .RS 2 .IP \[bu] 2 aws:kms +.IP \[bu] 2 +Provider: AWS,Ceph,Minio .RE .RE .SS --s3-sse-kms-key-id @@ -37986,7 +42974,8 @@ Config: storage_class .IP \[bu] 2 Env Var: RCLONE_S3_STORAGE_CLASS .IP \[bu] 2 -Provider: AWS +Provider: +AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,Scaleway,TencentCOS .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -37999,54 +42988,158 @@ Examples: .RS 2 .IP \[bu] 2 Default +.IP \[bu] 2 +Provider: AWS,Alibaba,ChinaMobile,TencentCOS .RE .IP \[bu] 2 \[dq]STANDARD\[dq] .RS 2 .IP \[bu] 2 Standard storage class +.IP \[bu] 2 +Provider: +AWS,Alibaba,ArvanCloud,ChinaMobile,Liara,Magalu,Qiniu,TencentCOS .RE .IP \[bu] 2 \[dq]REDUCED_REDUNDANCY\[dq] .RS 2 .IP \[bu] 2 Reduced redundancy storage class +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]STANDARD_IA\[dq] .RS 2 .IP \[bu] 2 Standard Infrequent Access storage class +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]ONEZONE_IA\[dq] .RS 2 .IP \[bu] 2 One Zone Infrequent Access storage class +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]GLACIER\[dq] .RS 2 .IP \[bu] 2 Glacier Flexible Retrieval storage class +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]DEEP_ARCHIVE\[dq] .RS 2 .IP \[bu] 2 Glacier Deep Archive storage class +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]INTELLIGENT_TIERING\[dq] .RS 2 .IP \[bu] 2 Intelligent-Tiering storage class +.IP \[bu] 2 +Provider: AWS .RE .IP \[bu] 2 \[dq]GLACIER_IR\[dq] .RS 2 .IP \[bu] 2 Glacier Instant Retrieval storage class +.IP \[bu] 2 +Provider: AWS,Magalu +.RE +.IP \[bu] 2 +\[dq]GLACIER\[dq] +.RS 2 +.IP \[bu] 2 +Archive storage mode +.IP \[bu] 2 +Provider: Alibaba,ChinaMobile,Qiniu +.RE +.IP \[bu] 2 +\[dq]STANDARD_IA\[dq] +.RS 2 +.IP \[bu] 2 +Infrequent access storage mode +.IP \[bu] 2 +Provider: Alibaba,ChinaMobile,TencentCOS +.RE +.IP \[bu] 2 +\[dq]LINE\[dq] +.RS 2 +.IP \[bu] 2 +Infrequent access storage mode +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]DEEP_ARCHIVE\[dq] +.RS 2 +.IP \[bu] 2 +Deep archive storage mode +.IP \[bu] 2 +Provider: Qiniu +.RE +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +Default. +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]STANDARD\[dq] +.RS 2 +.IP \[bu] 2 +The Standard class for any upload. 
+.IP \[bu] 2 +Suitable for on-demand content like streaming or CDN. +.IP \[bu] 2 +Available in all regions. +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]GLACIER\[dq] +.RS 2 +.IP \[bu] 2 +Archived storage. +.IP \[bu] 2 +Prices are lower, but it needs to be restored first to be accessed. +.IP \[bu] 2 +Available in FR-PAR and NL-AMS regions. +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]ONEZONE_IA\[dq] +.RS 2 +.IP \[bu] 2 +One Zone - Infrequent Access. +.IP \[bu] 2 +A good choice for storing secondary backup copies or easily re-creatable +data. +.IP \[bu] 2 +Available in the FR-PAR region only. +.IP \[bu] 2 +Provider: Scaleway +.RE +.IP \[bu] 2 +\[dq]ARCHIVE\[dq] +.RS 2 +.IP \[bu] 2 +Archive storage mode +.IP \[bu] 2 +Provider: TencentCOS .RE .RE .SS --s3-ibm-api-key @@ -38083,11 +43176,12 @@ Required: false .PP Here are the Advanced options specific to s3 (Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, -Cloudflare, DigitalOcean, Dreamhost, Exaba, FlashBlade, GCS, HuaweiOBS, -IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Mega, -Minio, Netease, Outscale, OVHcloud, Petabox, RackCorp, Rclone, Scaleway, -SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, -Qiniu, Zata and others). +Cloudflare, Cubbit, DigitalOcean, Dreamhost, Exaba, FileLu, FlashBlade, +GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, Intercolo, IONOS, Leviia, +Liara, Linode, LyveCloud, Magalu, Mega, Minio, Netease, Outscale, +OVHcloud, Petabox, Qiniu, Rabata, RackCorp, Rclone, Scaleway, SeaweedFS, +Selectel, Servercore, SpectraLogic, StackPath, Storj, Synology, +TencentCOS, Wasabi, Zata, Other). .SS --s3-bucket-acl .PP Canned ACL used when creating buckets. @@ -38107,7 +43201,8 @@ Config: bucket_acl .IP \[bu] 2 Env Var: RCLONE_S3_BUCKET_ACL .IP \[bu] 2 -Provider: !Storj,Selectel,Synology,Cloudflare,FlashBlade +Provider: +AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cubbit,DigitalOcean,Dreamhost,Exaba,FileLu,GCS,Hetzner,HuaweiOBS,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,LyveCloud,Magalu,Mega,Minio,Netease,Outscale,OVHcloud,Petabox,Qiniu,RackCorp,Scaleway,SeaweedFS,Servercore,StackPath,TencentCOS,Wasabi,Zata,Other .IP \[bu] 2 Type: string .IP \[bu] 2 @@ -38887,6 +43982,22 @@ Env Var: RCLONE_S3_USE_PRESIGNED_REQUEST Type: bool .IP \[bu] 2 Default: false +.SS --s3-use-data-integrity-protections +.PP +If true use AWS S3 data integrity protections. +.PP +See AWS Docs on Data Integrity +Protections (https://docs.aws.amazon.com/sdkref/latest/guide/feature-dataintegrity.html) +.PP +Properties: +.IP \[bu] 2 +Config: use_data_integrity_protections +.IP \[bu] 2 +Env Var: RCLONE_S3_USE_DATA_INTEGRITY_PROTECTIONS +.IP \[bu] 2 +Type: Tristate +.IP \[bu] 2 +Default: unset .SS --s3-versions .PP Include old versions in directory listings. @@ -39372,7 +44483,7 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. .PP Here are the commands specific to the s3 backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -39389,7 +44500,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS restore .PP -Restore objects from GLACIER or INTELLIGENT-TIERING archive tier +Restore objects from GLACIER or INTELLIGENT-TIERING archive tier. .IP .nf \f[C] @@ -39401,7 +44512,7 @@ This command can be used to restore one or more objects from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier. 
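+.PP
+The same restore can also be triggered on a running rclone via the rc
+interface mentioned above; a sketch (the remote path and option values
+are illustrative):
+.IP
+.nf
+\f[C]
+rclone rc backend/command command=restore fs=s3:bucket/path -o priority=Standard -o lifetime=1
+\f[R]
+.fi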
.PP -Usage Examples: +Usage examples: .IP .nf \f[C] @@ -39413,7 +44524,7 @@ rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY .fi .PP This flag also obeys the filters. -Test first with --interactive/-i or --dry-run flags +Test first with --interactive/-i or --dry-run flags. .IP .nf \f[C] @@ -39421,7 +44532,7 @@ rclone --interactive backend restore --include \[dq]*.txt\[dq] s3:bucket/path -o \f[R] .fi .PP -All the objects shown will be marked for restore, then +All the objects shown will be marked for restore, then: .IP .nf \f[C] @@ -39452,13 +44563,13 @@ Options: \[dq]description\[dq]: The optional description for the job. .IP \[bu] 2 \[dq]lifetime\[dq]: Lifetime of the active copy in days, ignored for -INTELLIGENT-TIERING storage +INTELLIGENT-TIERING storage. .IP \[bu] 2 \[dq]priority\[dq]: Priority of restore: Standard|Expedited|Bulk .SS restore-status .PP -Show the restore status for objects being restored from GLACIER or -INTELLIGENT-TIERING storage +Show the status for objects being restored from GLACIER or +INTELLIGENT-TIERING. .IP .nf \f[C] @@ -39470,7 +44581,7 @@ This command can be used to show the status for objects being restored from GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier to the Frequent Access tier. .PP -Usage Examples: +Usage examples: .IP .nf \f[C] @@ -39482,7 +44593,7 @@ rclone backend restore-status -o all s3:bucket/path/to/directory .PP This command does not obey the filters. .PP -It returns a list of status dictionaries. +It returns a list of status dictionaries: .IP .nf \f[C] @@ -39520,11 +44631,11 @@ It returns a list of status dictionaries. .PP Options: .IP \[bu] 2 -\[dq]all\[dq]: if set then show all objects, not just ones with restore -status +\[dq]all\[dq]: If set then show all objects, not just ones with restore +status. .SS list-multipart-uploads .PP -List the unfinished multipart uploads +List the unfinished multipart uploads. .IP .nf \f[C] @@ -39533,6 +44644,8 @@ rclone backend list-multipart-uploads remote: [options] [+] .fi .PP This command lists the unfinished multipart uploads in JSON format. +.PP +Usage examples: .IP .nf \f[C] @@ -39549,24 +44662,24 @@ bucket or with a bucket and path. .nf \f[C] { - \[dq]rclone\[dq]: [ - { - \[dq]Initiated\[dq]: \[dq]2020-06-26T14:20:36Z\[dq], - \[dq]Initiator\[dq]: { - \[dq]DisplayName\[dq]: \[dq]XXX\[dq], - \[dq]ID\[dq]: \[dq]arn:aws:iam::XXX:user/XXX\[dq] - }, - \[dq]Key\[dq]: \[dq]KEY\[dq], - \[dq]Owner\[dq]: { - \[dq]DisplayName\[dq]: null, - \[dq]ID\[dq]: \[dq]XXX\[dq] - }, - \[dq]StorageClass\[dq]: \[dq]STANDARD\[dq], - \[dq]UploadId\[dq]: \[dq]XXX\[dq] - } - ], - \[dq]rclone-1000files\[dq]: [], - \[dq]rclone-dst\[dq]: [] + \[dq]rclone\[dq]: [ + { + \[dq]Initiated\[dq]: \[dq]2020-06-26T14:20:36Z\[dq], + \[dq]Initiator\[dq]: { + \[dq]DisplayName\[dq]: \[dq]XXX\[dq], + \[dq]ID\[dq]: \[dq]arn:aws:iam::XXX:user/XXX\[dq] + }, + \[dq]Key\[dq]: \[dq]KEY\[dq], + \[dq]Owner\[dq]: { + \[dq]DisplayName\[dq]: null, + \[dq]ID\[dq]: \[dq]XXX\[dq] + }, + \[dq]StorageClass\[dq]: \[dq]STANDARD\[dq], + \[dq]UploadId\[dq]: \[dq]XXX\[dq] + } + ], + \[dq]rclone-1000files\[dq]: [], + \[dq]rclone-dst\[dq]: [] } \f[R] .fi @@ -39585,6 +44698,8 @@ max-age which defaults to 24 hours. .PP Note that you can use --interactive/-i or --dry-run with this command to see what it would do. +.PP +Usage examples: .IP .nf \f[C] @@ -39597,7 +44712,7 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. 
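+.PP
+For example, a dry run that only reports which uploads older than 30
+days would be removed (the bucket name is illustrative):
+.IP
+.nf
+\f[C]
+rclone --dry-run backend cleanup s3:bucket -o max-age=30d
+\f[R]
+.fi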
.PP Options: .IP \[bu] 2 -\[dq]max-age\[dq]: Max age of upload to delete +\[dq]max-age\[dq]: Max age of upload to delete. .SS cleanup-hidden .PP Remove old versions of files. @@ -39613,6 +44728,8 @@ enabled bucket. .PP Note that you can use --interactive/-i or --dry-run with this command to see what it would do. +.PP +Usage example: .IP .nf \f[C] @@ -39631,6 +44748,8 @@ rclone backend versioning remote: [options] [+] .PP This command sets versioning support if a parameter is passed and then returns the current versioning status for the bucket supplied. +.PP +Usage examples: .IP .nf \f[C] @@ -39657,7 +44776,7 @@ rclone backend set remote: [options] [+] This set command can be used to update the config parameters for a running s3 backend. .PP -Usage Examples: +Usage examples: .IP .nf \f[C] @@ -39772,7 +44891,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -39887,7 +45006,7 @@ rclone like this. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password n/s> n @@ -40065,7 +45184,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -40330,7 +45449,7 @@ bucket publicly. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -40427,6 +45546,61 @@ If this is causing a problem then upload the files with .PP A consequence of this is that \f[V]Content-Encoding: gzip\f[R] will never appear in the metadata on Cloudflare. +.SS Cubbit DS3 +.PP +Cubbit Object Storage (https://www.cubbit.io/ds3-cloud) is a +geo-distributed cloud object storage platform. +.PP +To connect to Cubbit DS3 you will need an access key and secret key +pair. +You can follow this +guide (https://docs.cubbit.io/getting-started/quickstart#api-keys) to +retrieve these keys. +They will be needed when prompted by \f[V]rclone config\f[R]. +.PP +Default region will correspond to \f[V]eu-west-1\f[R] and the endpoint +has to be specified as \f[V]s3.cubbit.eu\f[R]. +.PP +Going through the whole process of creating a new remote by running +\f[V]rclone config\f[R], each prompt should be answered as shown below: +.IP +.nf +\f[C] +name> cubbit-ds3 (or any name you like) +Storage> s3 +provider> Cubbit +env_auth> false +access_key_id> YOUR_ACCESS_KEY +secret_access_key> YOUR_SECRET_KEY +region> eu-west-1 (or leave empty) +endpoint> s3.cubbit.eu +acl> +\f[R] +.fi +.PP +The resulting configuration file should look like: +.IP +.nf +\f[C] +[cubbit-ds3] +type = s3 +provider = Cubbit +access_key_id = ACCESS_KEY +secret_access_key = SECRET_KEY +region = eu-west-1 +endpoint = s3.cubbit.eu +\f[R] +.fi +.PP +You can then start using Cubbit DS3 with rclone. +For example, to create a new bucket and copy files into it, you can run: +.IP +.nf +\f[C] +rclone mkdir cubbit-ds3:my-bucket +rclone copy /path/to/files cubbit-ds3:my-bucket +\f[R] +.fi .SS DigitalOcean Spaces .PP Spaces (https://www.digitalocean.com/products/object-storage/) is an @@ -40435,9 +45609,9 @@ object storage service from cloud provider DigitalOcean. .PP To connect to DigitalOcean Spaces you will need an access key and secret key. 
-These can be retrieved on the \[dq]Applications & -API (https://cloud.digitalocean.com/settings/api/tokens)\[dq] page of -the DigitalOcean control panel. +These can be retrieved on the Applications & +API (https://cloud.digitalocean.com/settings/api/tokens) page of the +DigitalOcean control panel. They will be needed when prompted by \f[V]rclone config\f[R] for your \f[V]access_key_id\f[R] and \f[V]secret_access_key\f[R]. .PP @@ -40541,7 +45715,7 @@ may vary depending exactly on how you have set up the container. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -40632,6 +45806,157 @@ s3 protocol error: received versions listing with IsTruncated set with no NextKe .PP This is Google bug #312292516 (https://issuetracker.google.com/u/0/issues/312292516). +.SS Hetzner Object Storage +.PP +Here is an example of making a Hetzner Object +Storage (https://www.hetzner.com/storage/object-storage/) configuration. +First run: +.IP +.nf +\f[C] +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> my-hetzner +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] + XX / Amazon S3 Compliant Storage Providers including AWS, Alibaba, ArvanCloud, Ceph, ChinaMobile, Cloudflare, DigitalOcean, Dreamhost, GCS, Hetzner, HuaweiOBS, IBMCOS, IDrive, IONOS, LyveCloud, Leviia, Liara, Linode, Magalu, Minio, Netease, Outscale, Petabox, RackCorp, Rclone, Scaleway, SeaweedFS, Selectel, StackPath, Storj, Synology, TencentCOS, Wasabi, Qiniu and others + \[rs] (s3) +[snip] +Storage> s3 +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / Hetzner Object Storage + \[rs] (Hetzner) +[snip] +provider> Hetzner +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_KEY +Option region. +Region to connect to. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Helsinki + \[rs] (hel1) + 2 / Falkenstein + \[rs] (fsn1) + 3 / Nuremberg + \[rs] (nbg1) +region> +Option endpoint. +Endpoint for Hetzner Object Storage +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Helsinki + \[rs] (hel1.your-objectstorage.com) + 2 / Falkenstein + \[rs] (fsn1.your-objectstorage.com) + 3 / Nuremberg + \[rs] (nbg1.your-objectstorage.com) +endpoint> +Option location_constraint. +Location constraint - must be set to match the Region. +Leave blank if not sure. Used when creating buckets only. 
+Enter a value. Press Enter to leave empty. +location_constraint> +Option acl. +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn\[aq]t copy the ACL from the source but rather writes a fresh one. +If the acl is an empty string then no X-Amz-Acl: header is added and +the default (private) will be used. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \[rs] (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \[rs] (public-read) +acl> +Edit advanced config? +y) Yes +n) No (default) +y/n> +Configuration complete. +Options: +- type: s3 +- provider: Hetzner +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_KEY +Keep this \[dq]my-hetzner\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> +Current remotes: + +Name Type +==== ==== +my-hetzner s3 + +e) Edit existing remote +n) New remote +d) Delete remote +r) Rename remote +c) Copy remote +s) Set configuration password +q) Quit config +e/n/d/r/c/s/q> +\f[R] +.fi +.PP +This will leave the config file looking like this. +.IP +.nf +\f[C] +[my-hetzner] +type = s3 +provider = Hetzner +access_key_id = ACCESS_KEY +secret_access_key = SECRET_KEY +region = hel1 +endpoint = hel1.your-objectstorage.com +acl = private +\f[R] +.fi .SS Huawei OBS .PP Object Storage Service (OBS) provides stable, secure, efficient, and @@ -40658,7 +45983,7 @@ Or you can also configure via the interactive command line: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -40775,32 +46100,37 @@ dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM\[cq]s Cloud Object Storage System (formerly Cleversafe). -For more information visit: (http://www.ibm.com/cloud/object-storage) +For more information visit: .PP To configure access to IBM COS S3, follow the steps below: -.IP "1." 3 +.IP " 1." 4 Run rclone config and select n for a new remote. +.RS 4 .IP .nf \f[C] - 2018/02/14 14:13:11 NOTICE: Config file \[dq]C:\[rs]\[rs]Users\[rs]\[rs]a\[rs]\[rs].config\[rs]\[rs]rclone\[rs]\[rs]rclone.conf\[dq] not found - using defaults - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n +2018/02/14 14:13:11 NOTICE: Config file \[dq]C:\[rs]\[rs]Users\[rs]\[rs]a\[rs]\[rs].config\[rs]\[rs]rclone\[rs]\[rs]rclone.conf\[dq] not found - using defaults +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n \f[R] .fi -.IP "2." 3 +.RE +.IP " 2." 4 Enter the name for the configuration +.RS 4 .IP .nf \f[C] - name> +name> \f[R] .fi -.IP "3." 3 +.RE +.IP " 3." 4 Select \[dq]s3\[dq] storage. +.RS 4 .IP .nf \f[C] @@ -40812,113 +46142,123 @@ XX / Amazon S3 Compliant Storage Providers including AWS, ... Storage> s3 \f[R] .fi -.IP "4." 3 +.RE +.IP " 4." 4 Select IBM COS as the S3 Storage Provider. +.RS 4 .IP .nf \f[C] Choose the S3 provider. 
Choose a number from below, or type in your own value - 1 / Choose this option to configure Storage to AWS S3 - \[rs] \[dq]AWS\[dq] - 2 / Choose this option to configure Storage to Ceph Systems + 1 / Choose this option to configure Storage to AWS S3 + \[rs] \[dq]AWS\[dq] + 2 / Choose this option to configure Storage to Ceph Systems \[rs] \[dq]Ceph\[dq] - 3 / Choose this option to configure Storage to Dreamhost + 3 / Choose this option to configure Storage to Dreamhost \[rs] \[dq]Dreamhost\[dq] 4 / Choose this option to the configure Storage to IBM COS S3 \[rs] \[dq]IBMCOS\[dq] - 5 / Choose this option to the configure Storage to Minio + 5 / Choose this option to the configure Storage to Minio \[rs] \[dq]Minio\[dq] - Provider>4 + Provider>4 \f[R] .fi -.IP "5." 3 +.RE +.IP " 5." 4 Enter the Access Key and Secret. +.RS 4 .IP .nf \f[C] - AWS Access Key ID - leave blank for anonymous access or runtime credentials. - access_key_id> <> - AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. - secret_access_key> <> +AWS Access Key ID - leave blank for anonymous access or runtime credentials. +access_key_id> <> +AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. +secret_access_key> <> \f[R] .fi -.IP "6." 3 +.RE +.IP " 6." 4 Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an endpoint address. +.RS 4 .IP .nf \f[C] - Endpoint for IBM COS S3 API. - Specify if using an IBM COS On Premise. - Choose a number from below, or type in your own value - 1 / US Cross Region Endpoint - \[rs] \[dq]s3-api.us-geo.objectstorage.softlayer.net\[dq] - 2 / US Cross Region Dallas Endpoint - \[rs] \[dq]s3-api.dal.us-geo.objectstorage.softlayer.net\[dq] - 3 / US Cross Region Washington DC Endpoint - \[rs] \[dq]s3-api.wdc-us-geo.objectstorage.softlayer.net\[dq] - 4 / US Cross Region San Jose Endpoint - \[rs] \[dq]s3-api.sjc-us-geo.objectstorage.softlayer.net\[dq] - 5 / US Cross Region Private Endpoint - \[rs] \[dq]s3-api.us-geo.objectstorage.service.networklayer.com\[dq] - 6 / US Cross Region Dallas Private Endpoint - \[rs] \[dq]s3-api.dal-us-geo.objectstorage.service.networklayer.com\[dq] - 7 / US Cross Region Washington DC Private Endpoint - \[rs] \[dq]s3-api.wdc-us-geo.objectstorage.service.networklayer.com\[dq] - 8 / US Cross Region San Jose Private Endpoint - \[rs] \[dq]s3-api.sjc-us-geo.objectstorage.service.networklayer.com\[dq] - 9 / US Region East Endpoint - \[rs] \[dq]s3.us-east.objectstorage.softlayer.net\[dq] - 10 / US Region East Private Endpoint - \[rs] \[dq]s3.us-east.objectstorage.service.networklayer.com\[dq] - 11 / US Region South Endpoint +Endpoint for IBM COS S3 API. +Specify if using an IBM COS On Premise. 
+Choose a number from below, or type in your own value + 1 / US Cross Region Endpoint + \[rs] \[dq]s3-api.us-geo.objectstorage.softlayer.net\[dq] + 2 / US Cross Region Dallas Endpoint + \[rs] \[dq]s3-api.dal.us-geo.objectstorage.softlayer.net\[dq] + 3 / US Cross Region Washington DC Endpoint + \[rs] \[dq]s3-api.wdc-us-geo.objectstorage.softlayer.net\[dq] + 4 / US Cross Region San Jose Endpoint + \[rs] \[dq]s3-api.sjc-us-geo.objectstorage.softlayer.net\[dq] + 5 / US Cross Region Private Endpoint + \[rs] \[dq]s3-api.us-geo.objectstorage.service.networklayer.com\[dq] + 6 / US Cross Region Dallas Private Endpoint + \[rs] \[dq]s3-api.dal-us-geo.objectstorage.service.networklayer.com\[dq] + 7 / US Cross Region Washington DC Private Endpoint + \[rs] \[dq]s3-api.wdc-us-geo.objectstorage.service.networklayer.com\[dq] + 8 / US Cross Region San Jose Private Endpoint + \[rs] \[dq]s3-api.sjc-us-geo.objectstorage.service.networklayer.com\[dq] + 9 / US Region East Endpoint + \[rs] \[dq]s3.us-east.objectstorage.softlayer.net\[dq] +10 / US Region East Private Endpoint + \[rs] \[dq]s3.us-east.objectstorage.service.networklayer.com\[dq] +11 / US Region South Endpoint [snip] - 34 / Toronto Single Site Private Endpoint - \[rs] \[dq]s3.tor01.objectstorage.service.networklayer.com\[dq] - endpoint>1 +34 / Toronto Single Site Private Endpoint + \[rs] \[dq]s3.tor01.objectstorage.service.networklayer.com\[dq] +endpoint>1 \f[R] .fi -.IP "7." 3 +.RE +.IP " 7." 4 Specify a IBM COS Location Constraint. The location constraint must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter +.RS 4 .IP .nf \f[C] - 1 / US Cross Region Standard - \[rs] \[dq]us-standard\[dq] - 2 / US Cross Region Vault - \[rs] \[dq]us-vault\[dq] - 3 / US Cross Region Cold - \[rs] \[dq]us-cold\[dq] - 4 / US Cross Region Flex - \[rs] \[dq]us-flex\[dq] - 5 / US East Region Standard - \[rs] \[dq]us-east-standard\[dq] - 6 / US East Region Vault - \[rs] \[dq]us-east-vault\[dq] - 7 / US East Region Cold - \[rs] \[dq]us-east-cold\[dq] - 8 / US East Region Flex - \[rs] \[dq]us-east-flex\[dq] - 9 / US South Region Standard - \[rs] \[dq]us-south-standard\[dq] - 10 / US South Region Vault - \[rs] \[dq]us-south-vault\[dq] + 1 / US Cross Region Standard + \[rs] \[dq]us-standard\[dq] + 2 / US Cross Region Vault + \[rs] \[dq]us-vault\[dq] + 3 / US Cross Region Cold + \[rs] \[dq]us-cold\[dq] + 4 / US Cross Region Flex + \[rs] \[dq]us-flex\[dq] + 5 / US East Region Standard + \[rs] \[dq]us-east-standard\[dq] + 6 / US East Region Vault + \[rs] \[dq]us-east-vault\[dq] + 7 / US East Region Cold + \[rs] \[dq]us-east-cold\[dq] + 8 / US East Region Flex + \[rs] \[dq]us-east-flex\[dq] + 9 / US South Region Standard + \[rs] \[dq]us-south-standard\[dq] +10 / US South Region Vault + \[rs] \[dq]us-south-vault\[dq] [snip] - 32 / Toronto Flex - \[rs] \[dq]tor01-flex\[dq] +32 / Toronto Flex + \[rs] \[dq]tor01-flex\[dq] location_constraint>1 \f[R] .fi -.IP "8." 3 +.RE +.IP " 8." 4 Specify a canned ACL. IBM Cloud (Storage) supports \[dq]public-read\[dq] and \[dq]private\[dq]. IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs. +.RS 4 .IP .nf \f[C] @@ -40936,52 +46276,59 @@ Choose a number from below, or type in your own value acl> 1 \f[R] .fi -.IP "9." 3 +.RE +.IP " 9." 4 Review the displayed configuration and accept to save the \[dq]remote\[dq] then quit. 
The config file should look like this +.RS 4 .IP .nf \f[C] - [xxx] - type = s3 - Provider = IBMCOS - access_key_id = xxx - secret_access_key = yyy - endpoint = s3-api.us-geo.objectstorage.softlayer.net - location_constraint = us-standard - acl = private +[xxx] +type = s3 +Provider = IBMCOS +access_key_id = xxx +secret_access_key = yyy +endpoint = s3-api.us-geo.objectstorage.softlayer.net +location_constraint = us-standard +acl = private \f[R] .fi +.RE .IP "10." 4 Execute rclone commands .IP .nf \f[C] - 1) Create a bucket. - rclone mkdir IBM-COS-XREGION:newbucket - 2) List available buckets. - rclone lsd IBM-COS-XREGION: - -1 2017-11-08 21:16:22 -1 test - -1 2018-02-14 20:16:39 -1 newbucket - 3) List contents of a bucket. - rclone ls IBM-COS-XREGION:newbucket - 18685952 test.exe - 4) Copy a file from local to remote. - rclone copy /Users/file.txt IBM-COS-XREGION:newbucket - 5) Copy a file from remote to local. - rclone copy IBM-COS-XREGION:newbucket/file.txt . - 6) Delete a file on remote. - rclone delete IBM-COS-XREGION:newbucket/file.txt +1) Create a bucket. + rclone mkdir IBM-COS-XREGION:newbucket +2) List available buckets. + rclone lsd IBM-COS-XREGION: + -1 2017-11-08 21:16:22 -1 test + -1 2018-02-14 20:16:39 -1 newbucket +3) List contents of a bucket. + rclone ls IBM-COS-XREGION:newbucket + 18685952 test.exe +4) Copy a file from local to remote. + rclone copy /Users/file.txt IBM-COS-XREGION:newbucket +5) Copy a file from remote to local. + rclone copy IBM-COS-XREGION:newbucket/file.txt . +6) Delete a file on remote. + rclone delete IBM-COS-XREGION:newbucket/file.txt \f[R] .fi .SS IBM IAM authentication .PP If using IBM IAM authentication with IBM API KEY you need to fill in -these additional parameters 1. -Select false for env_auth 2. -Leave \f[V]access_key_id\f[R] and \f[V]secret_access_key\f[R] blank 3. +these additional parameters +.IP "1." 3 +Select false for env_auth +.IP "2." 3 +Leave \f[V]access_key_id\f[R] and \f[V]secret_access_key\f[R] blank +.IP "3." 3 Paste your \f[V]ibm_api_key\f[R] +.RS 4 .IP .nf \f[C] @@ -40991,8 +46338,10 @@ Enter a value of type string. Press Enter for the default (1). ibm_api_key> \f[R] .fi +.RE .IP "4." 3 Paste your \f[V]ibm_resource_instance_id\f[R] +.RS 4 .IP .nf \f[C] @@ -41002,8 +46351,10 @@ Enter a value of type string. Press Enter for the default (2). ibm_resource_instance_id> \f[R] .fi +.RE .IP "5." 3 In advanced settings type true for \f[V]v2_auth\f[R] +.RS 4 .IP .nf \f[C] @@ -41016,6 +46367,7 @@ Enter a boolean value (true or false). Press Enter for the default (true). v2_auth> \f[R] .fi +.RE .SS IDrive e2 .PP Here is an example of making an IDrive e2 (https://www.idrive.com/e2/) @@ -41032,7 +46384,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -41132,6 +46484,143 @@ d) Delete this remote y/e/d> y \f[R] .fi +.SS Intercolo Object Storage +.PP +Intercolo Object Storage (https://intercolo.de/object-storage) offers +GDPR-compliant, transparently priced, S3-compatible cloud storage hosted +in Frankfurt, Germany. +.PP +Here\[aq]s an example of making a configuration for Intercolo. +.PP +First run: +.IP +.nf +\f[C] +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. 
+name> intercolo + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] + xx / Amazon S3 Compliant Storage Providers including AWS, ... + \[rs] (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +xx / Intercolo Object Storage + \[rs] (Intercolo) +[snip] +provider> Intercolo + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> false + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_KEY + +Option region. +Region where your bucket will be created and your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Frankfurt, Germany + \[rs] (de-fra) +region> 1 + +Option endpoint. +Endpoint for Intercolo Object Storage. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Frankfurt, Germany + \[rs] (de-fra.i3storage.com) +endpoint> 1 + +Option acl. +Canned ACL used when creating buckets and storing or copying objects. +This ACL is used for creating objects and if bucket_acl isn\[aq]t set, for creating buckets too. +For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl +Note that this ACL is applied when server-side copying objects as S3 +doesn\[aq]t copy the ACL from the source but rather writes a fresh one. +If the acl is an empty string then no X-Amz-Acl: header is added and +the default (private) will be used. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \[rs] (private) + [snip] +acl> + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: Intercolo +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_KEY +- region: de-fra +- endpoint: de-fra.i3storage.com +Keep this \[dq]intercolo\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +This will leave the config file looking like this. +.IP +.nf +\f[C] +[intercolo] +type = s3 +provider = Intercolo +access_key_id = ACCESS_KEY +secret_access_key = SECRET_KEY +region = de-fra +endpoint = de-fra.i3storage.com +\f[R] +.fi .SS IONOS Cloud .PP IONOS S3 Object Storage (https://cloud.ionos.com/storage/object-storage) @@ -41205,8 +46694,8 @@ env_auth> .PP Enter your Access Key and Secret key. These can be retrieved in the Data Center -Designer (https://dcd.ionos.com/), click on the menu \[lq]Manager -resources\[rq] / \[dq]Object Storage Key Manager\[dq]. +Designer (https://dcd.ionos.com/), click on the menu \[dq]Manager +resources\[dq] / \[dq]Object Storage Key Manager\[dq]. 
.IP .nf \f[C] @@ -41319,44 +46808,54 @@ Now you can try some commands (for macOS, use \f[V]./rclone\f[R] instead of \f[V]rclone\f[R]). .IP "1)" 3 Create a bucket (the name must be unique within the whole IONOS S3) +.RS 4 .IP .nf \f[C] rclone mkdir ionos-fra:my-bucket \f[R] .fi +.RE .IP "2)" 3 List available buckets +.RS 4 .IP .nf \f[C] rclone lsd ionos-fra: \f[R] .fi -.IP "4)" 3 +.RE +.IP "3)" 3 Copy a file from local to remote +.RS 4 .IP .nf \f[C] rclone copy /Users/file.txt ionos-fra:my-bucket \f[R] .fi -.IP "3)" 3 +.RE +.IP "4)" 3 List contents of a bucket +.RS 4 .IP .nf \f[C] rclone ls ionos-fra:my-bucket \f[R] .fi +.RE .IP "5)" 3 Copy a file from remote to local +.RS 4 .IP .nf \f[C] rclone copy ionos-fra:my-bucket/file.txt \f[R] .fi +.RE .SS Leviia Cloud Object Storage .PP Leviia Object Storage (https://www.leviia.com/object-storage/), backup @@ -41365,6 +46864,7 @@ and secure your data in a 100% French cloud, independent of GAFAM.. To configure access to Leviia, follow the steps below: .IP "1." 3 Run \f[V]rclone config\f[R] and select \f[V]n\f[R] for a new remote. +.RS 4 .IP .nf \f[C] @@ -41376,17 +46876,21 @@ q) Quit config n/s/q> n \f[R] .fi +.RE .IP "2." 3 Give the name of the configuration. For example, name it \[aq]leviia\[aq]. +.RS 4 .IP .nf \f[C] name> leviia \f[R] .fi +.RE .IP "3." 3 Select \f[V]s3\f[R] storage. +.RS 4 .IP .nf \f[C] @@ -41398,8 +46902,10 @@ XX / Amazon S3 Compliant Storage Providers including AWS, ... Storage> s3 \f[R] .fi +.RE .IP "4." 3 Select \f[V]Leviia\f[R] provider. +.RS 4 .IP .nf \f[C] @@ -41413,8 +46919,10 @@ Choose a number from below, or type in your own value provider> Leviia \f[R] .fi +.RE .IP "5." 3 Enter your SecretId and SecretKey of Leviia. +.RS 4 .IP .nf \f[C] @@ -41437,8 +46945,10 @@ Enter a string value. Press Enter for the default (\[dq]\[dq]). secret_access_key> xxxxxxxxxxx \f[R] .fi +.RE .IP "6." 3 Select endpoint for Leviia. +.RS 4 .IP .nf \f[C] @@ -41449,8 +46959,10 @@ Select endpoint for Leviia. endpoint> 1 \f[R] .fi +.RE .IP "7." 3 Choose acl. +.RS 4 .IP .nf \f[C] @@ -41491,6 +47003,7 @@ Name Type leviia s3 \f[R] .fi +.RE .SS Liara .PP Here is an example of making a Liara Object @@ -41507,7 +47020,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password n/s> n @@ -41616,7 +47129,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -41779,7 +47292,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -41908,7 +47421,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -42120,7 +47633,7 @@ setup process: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -42298,7 +47811,7 @@ with \f[V]rclone config\f[R]: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? 
n) New remote s) Set configuration password q) Quit config @@ -42505,7 +48018,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password n/s> n @@ -42799,6 +48312,7 @@ Kodo can be widely applied to mass data management. To configure access to Qiniu Kodo, follow the steps below: .IP "1." 3 Run \f[V]rclone config\f[R] and select \f[V]n\f[R] for a new remote. +.RS 4 .IP .nf \f[C] @@ -42810,17 +48324,21 @@ q) Quit config n/s/q> n \f[R] .fi +.RE .IP "2." 3 Give the name of the configuration. For example, name it \[aq]qiniu\[aq]. +.RS 4 .IP .nf \f[C] name> qiniu \f[R] .fi +.RE .IP "3." 3 Select \f[V]s3\f[R] storage. +.RS 4 .IP .nf \f[C] @@ -42832,8 +48350,10 @@ XX / Amazon S3 Compliant Storage Providers including AWS, ... Storage> s3 \f[R] .fi +.RE .IP "4." 3 Select \f[V]Qiniu\f[R] provider. +.RS 4 .IP .nf \f[C] @@ -42847,8 +48367,10 @@ Choose a number from below, or type in your own value provider> Qiniu \f[R] .fi +.RE .IP "5." 3 Enter your SecretId and SecretKey of Qiniu Kodo. +.RS 4 .IP .nf \f[C] @@ -42871,9 +48393,11 @@ Enter a string value. Press Enter for the default (\[dq]\[dq]). secret_access_key> xxxxxxxxxxx \f[R] .fi +.RE .IP "6." 3 Select endpoint for Qiniu Kodo. This is the standard endpoint for different region. +.RS 4 .IP .nf \f[C] @@ -42944,8 +48468,10 @@ Press Enter to leave empty. location_constraint> 1 \f[R] .fi +.RE .IP "7." 3 Choose acl and storage class. +.RS 4 .IP .nf \f[C] @@ -43002,6 +48528,261 @@ Name Type qiniu s3 \f[R] .fi +.RE +.SS FileLu S5 +.PP +FileLu S5 Object Storage (https://s5lu.com) is an S3-compatible object +storage system. +It provides multiple region options (Global, US-East, EU-Central, +AP-Southeast, and ME-Central) while using a single endpoint +(\f[V]s5lu.com\f[R]). +FileLu S5 is designed for scalability, security, and simplicity, with +predictable pricing and no hidden charges for data transfers or API +requests. +.PP +Here is an example of making a configuration. +First run: +.IP +.nf +\f[C] +rclone config +\f[R] +.fi +.PP +This will guide you through an interactive setup process. +.IP +.nf +\f[C] +No remotes found, make a new one\[rs]? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> s5lu + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS,... FileLu, ... + \[rs] (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / FileLu S5 Object Storage + \[rs] (FileLu) +[snip] +provider> FileLu + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> XXX + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. 
Press Enter to leave empty. +secret_access_key> XXX + +Option endpoint. +Endpoint for S3 API. +Required when using an S3 clone. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Global + \[rs] (global) + 2 / North America (US-East) + \[rs] (us-east) + 3 / Europe (EU-Central) + \[rs] (eu-central) + 4 / Asia Pacific (AP-Southeast) + \[rs] (ap-southeast) + 5 / Middle East (ME-Central) + \[rs] (me-central) +region> 1 + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: FileLu +- access_key_id: XXX +- secret_access_key: XXX +- endpoint: s5lu.com +Keep this \[dq]s5lu\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +This will leave the config file looking like this. +.IP +.nf +\f[C] +[s5lu] +type = s3 +provider = FileLu +access_key_id = XXX +secret_access_key = XXX +endpoint = s5lu.com +\f[R] +.fi +.SS Rabata +.PP +Rabata (https://rabata.io) is an S3-compatible secure cloud storage +service that offers flat, transparent pricing (no API request fees) +while supporting standard S3 APIs. +It is suitable for backup, application storage,media workflows, and +archive use cases. +.PP +Server side copy is not implemented with Rabata, also meaning +modification time of objects cannot be updated. +.PP +Rclone config: +.IP +.nf +\f[C] +rclone config +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> Rabata + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including AWS, ... + \[rs] (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / Rabata Cloud Storage + \[rs] (Rabata) +[snip] +provider> Rabata + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY_ID + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_ACCESS_KEY + +Option region. +Region where your bucket will be created and your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / US East (N. Virginia) + \[rs] (us-east-1) + 2 / EU (Ireland) + \[rs] (eu-west-1) + 3 / EU (London) + \[rs] (eu-west-2) +region> 3 + +Option endpoint. +Endpoint for Rabata Object Storage. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / US East (N. Virginia) + \[rs] (s3.us-east-1.rabata.io) + 2 / EU West (Ireland) + \[rs] (s3.eu-west-1.rabata.io) + 3 / EU West (London) + \[rs] (s3.eu-west-2.rabata.io) +endpoint> 3 + +Option location_constraint. 
+location where your bucket will be created and your data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / US East (N. Virginia) + \[rs] (us-east-1) + 2 / EU (Ireland) + \[rs] (eu-west-1) + 3 / EU (London) + \[rs] (eu-west-2) +location_constraint> 3 + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: Rabata +- access_key_id: ACCESS_KEY_ID +- secret_access_key: SECRET_ACCESS_KEY +- region: eu-west-2 +- endpoint: s3.eu-west-2.rabata.io +- location_constraint: eu-west-2 +Keep this \[dq]rabata\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y + +Current remotes: + +Name Type +==== ==== +rabata s3 +\f[R] +.fi .SS RackCorp .PP RackCorp Object Storage (https://www.rackcorp.com/storage/s3storage) is @@ -43011,9 +48792,9 @@ The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty. .PP -Before you can use RackCorp Object Storage, you\[aq]ll need to \[dq]sign -up (https://www.rackcorp.com/signup)\[dq] for an account on our -\[dq]portal (https://portal.rackcorp.com)\[dq]. +Before you can use RackCorp Object Storage, you\[aq]ll need to sign +up (https://www.rackcorp.com/signup) for an account on our +portal (https://portal.rackcorp.com). Next you can create an \f[V]access key\f[R], a \f[V]secret key\f[R] and \f[V]buckets\f[R], in your location of choice with ease. These details are required for the next steps of configuration, when @@ -43352,7 +49133,7 @@ You can use \f[V]rclone config\f[R] to make a new provider like this .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -43453,6 +49234,224 @@ region = ru-1 endpoint = s3.ru-1.storage.selcloud.ru \f[R] .fi +.SS Servercore +.PP +Servercore Object +Storage (https://servercore.com/services/object-storage/) is an S3 +compatible object storage system that provides scalable and secure +storage solutions for businesses of all sizes. +.PP +rclone config example: +.IP +.nf +\f[C] +No remotes found, make a new one\[rs]? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> servercore + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including ..., Servercore, ... + \[rs] (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / Servercore Object Storage + \[rs] (Servercore) +[snip] +provider> Servercore + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> 1 + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. 
+Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_ACCESS_KEY + +Option region. +Region where your is data stored. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / St. Petersburg + \[rs] (ru-1) + 2 / Moscow + \[rs] (gis-1) + 3 / Moscow + \[rs] (ru-7) + 4 / Tashkent, Uzbekistan + \[rs] (uz-2) + 5 / Almaty, Kazakhstan + \[rs] (kz-1) +region> 1 + +Option endpoint. +Endpoint for Servercore Object Storage. +Choose a number from below, or type in your own value. +Press Enter to leave empty. + 1 / Saint Petersburg + \[rs] (s3.ru-1.storage.selcloud.ru) + 2 / Moscow + \[rs] (s3.gis-1.storage.selcloud.ru) + 3 / Moscow + \[rs] (s3.ru-7.storage.selcloud.ru) + 4 / Tashkent, Uzbekistan + \[rs] (s3.uz-2.srvstorage.uz) + 5 / Almaty, Kazakhstan + \[rs] (s3.kz-1.srvstorage.kz) +endpoint> 1 + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. +Options: +- type: s3 +- provider: Servercore +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_ACCESS_KEY +- region: ru-1 +- endpoint: s3.ru-1.storage.selcloud.ru +Keep this \[dq]servercore\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.SS Spectra Logic +.PP +Spectra +Logic (https://www.spectralogic.com/blackpearl-nearline-object-gateway) +is an on-prem S3-compatible object storage gateway that exposes local +object storage and policy-tiers data to Spectra tape and public clouds +under a single namespace for backup and archiving. +.PP +The S3 compatible gateway is configured using \f[V]rclone config\f[R] +with a type of \f[V]s3\f[R] and with a provider name of +\f[V]SpectraLogic\f[R]. +Here is an example run of the configurator. +.IP +.nf +\f[C] +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n + +Enter name for new remote. +name> spectralogic + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. +[snip] +XX / Amazon S3 Compliant Storage Providers including ..., SpectraLogic, ... + \[rs] (s3) +[snip] +Storage> s3 + +Option provider. +Choose your S3 provider. +Choose a number from below, or type in your own value. +Press Enter to leave empty. +[snip] +XX / SpectraLogic BlackPearl + \[rs] (SpectraLogic) +[snip] +provider> SpectraLogic + +Option env_auth. +Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). +Only applies if access_key_id and secret_access_key is blank. +Choose a number from below, or type in your own boolean value (true or false). +Press Enter for the default (false). + 1 / Enter AWS credentials in the next step. + \[rs] (false) + 2 / Get AWS credentials from the environment (env vars or IAM). + \[rs] (true) +env_auth> 1 + +Option access_key_id. +AWS Access Key ID. +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +access_key_id> ACCESS_KEY + +Option secret_access_key. +AWS Secret Access Key (password). +Leave blank for anonymous access or runtime credentials. +Enter a value. Press Enter to leave empty. +secret_access_key> SECRET_ACCESS_KEY + +Option endpoint. +Endpoint for S3 API. +Required when using an S3 clone. +Enter a value. Press Enter to leave empty. +endpoint> https://bp.example.com + +Edit advanced config? +y) Yes +n) No (default) +y/n> n + +Configuration complete. 
+Options: +- type: s3 +- provider: SpectraLogic +- access_key_id: ACCESS_KEY +- secret_access_key: SECRET_ACCESS_KEY +- endpoint: https://bp.example.com +Keep this \[dq]spectratest\[dq] remote? +y) Yes this is OK (default) +e) Edit this remote +d) Delete this remote +y/e/d> y +\f[R] +.fi +.PP +And your config should end up looking like this: +.IP +.nf +\f[C] +[spectratest] +type = s3 +provider = SpectraLogic +access_key_id = ACCESS_KEY +secret_access_key = SECRET_ACCESS_KEY +endpoint = https://bp.example.com +\f[R] +.fi .SS Storj .PP Storj is a decentralized cloud storage which can be used through its @@ -43596,7 +49595,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -43722,6 +49721,7 @@ It is secure, stable, massive, convenient, low-delay and low-cost. To configure access to Tencent COS, follow the steps below: .IP "1." 3 Run \f[V]rclone config\f[R] and select \f[V]n\f[R] for a new remote. +.RS 4 .IP .nf \f[C] @@ -43733,17 +49733,21 @@ q) Quit config n/s/q> n \f[R] .fi +.RE .IP "2." 3 Give the name of the configuration. For example, name it \[aq]cos\[aq]. +.RS 4 .IP .nf \f[C] name> cos \f[R] .fi +.RE .IP "3." 3 Select \f[V]s3\f[R] storage. +.RS 4 .IP .nf \f[C] @@ -43755,8 +49759,10 @@ XX / Amazon S3 Compliant Storage Providers including AWS, ... Storage> s3 \f[R] .fi +.RE .IP "4." 3 Select \f[V]TencentCOS\f[R] provider. +.RS 4 .IP .nf \f[C] @@ -43770,8 +49776,10 @@ Choose a number from below, or type in your own value provider> TencentCOS \f[R] .fi +.RE .IP "5." 3 Enter your SecretId and SecretKey of Tencent Cloud. +.RS 4 .IP .nf \f[C] @@ -43794,9 +49802,11 @@ Enter a string value. Press Enter for the default (\[dq]\[dq]). secret_access_key> xxxxxxxxxxx \f[R] .fi +.RE .IP "6." 3 Select endpoint for Tencent COS. This is the standard endpoint for different region. +.RS 4 .IP .nf \f[C] @@ -43812,8 +49822,10 @@ This is the standard endpoint for different region. endpoint> 4 \f[R] .fi +.RE .IP "7." 3 Choose acl and storage class. +.RS 4 .IP .nf \f[C] @@ -43858,6 +49870,7 @@ Name Type cos s3 \f[R] .fi +.RE .SS Wasabi .PP Wasabi (https://wasabi.com) is a cloud-based object storage service for @@ -43871,7 +49884,7 @@ rclone like this. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password n/s> n @@ -44167,7 +50180,444 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). +.SH Archive +.PP +The Archive backend allows read only access to the content of archive +files on cloud storage without downloading the complete archive. +This means you could mount a large archive file and use only the parts +of it your application requires, rather than having to extract it. +.PP +The archive files are recognised by their extension. +.PP +.TS +tab(@); +l l. +T{ +Archive +T}@T{ +Extension +T} +_ +T{ +Zip +T}@T{ +\f[V].zip\f[R] +T} +T{ +Squashfs +T}@T{ +\f[V].sqfs\f[R] +T} +.TE +.PP +The supported archive file types are cloud friendly - a single file can +be found and downloaded without downloading the whole archive. +.PP +If you just want to create, list or extract archives and don\[aq]t want +to mount them then you may find the \f[V]rclone archive\f[R] commands +more convenient. 
+.IP \[bu] 2 +rclone archive +create (https://rclone.org/commands/rclone_archive_create/) +.IP \[bu] 2 +rclone archive list (https://rclone.org/commands/rclone_archive_list/) +.IP \[bu] 2 +rclone archive +extract (https://rclone.org/commands/rclone_archive_extract/) +.PP +These commands supports a wider range of non cloud friendly archives +(but not squashfs) but can\[aq]t be used for \f[V]rclone mount\f[R] or +any other rclone commands (eg \f[V]rclone check\f[R]). +.SS Configuration +.PP +This backend is best used without configuration. +.PP +Use it by putting the string \f[V]:archive:\f[R] in front of another +remote, say \f[V]remote:dir\f[R] to make \f[V]:archive:remote:dir\f[R]. +.PP +Any archives in \f[V]remote:dir\f[R] will become directories and any +files may be read out of them individually. +.PP +For example +.IP +.nf +\f[C] +$ rclone lsf s3:rclone/dir +100files.sqfs +100files.zip +\f[R] +.fi +.PP +Note that \f[V]100files.zip\f[R] and \f[V]100files.sqfs\f[R] are now +directories: +.IP +.nf +\f[C] +$ rclone lsf :archive:s3:rclone/dir +100files.sqfs/ +100files.zip/ +\f[R] +.fi +.PP +Which we can look inside: +.IP +.nf +\f[C] +$ rclone lsf :archive:s3:rclone/dir/100files.zip/ +cofofiy5jun +gigi +hevupaz5z +kacak/ +kozemof/ +lamapaq4 +qejahen +quhenen2rey +soboves8 +vibat/ +wose +xade +zilupot +\f[R] +.fi +.PP +Files not in an archive can be read and written as normal. +Files in an archive can only be read. +.PP +The archive backend can also be used in a configuration file. +Use the \f[V]remote\f[R] variable to point to the destination of the +archive. +.IP +.nf +\f[C] +[remote] +type = archive +remote = s3:rclone/dir/100files.zip +\f[R] +.fi +.PP +Gives +.IP +.nf +\f[C] +$ rclone lsf remote: +cofofiy5jun +gigi +hevupaz5z +kacak/ +\&... +\f[R] +.fi +.SS Modification times +.PP +Modification times are preserved with an accuracy depending on the +archive type. +.IP +.nf +\f[C] +$ rclone lsl --max-depth 1 :archive:s3:rclone/dir/100files.zip + 12 2025-10-27 14:39:20.000000000 cofofiy5jun + 81 2025-10-27 14:39:20.000000000 gigi + 58 2025-10-27 14:39:20.000000000 hevupaz5z + 6 2025-10-27 14:39:20.000000000 lamapaq4 + 43 2025-10-27 14:39:20.000000000 qejahen + 66 2025-10-27 14:39:20.000000000 quhenen2rey + 95 2025-10-27 14:39:20.000000000 soboves8 + 71 2025-10-27 14:39:20.000000000 wose + 76 2025-10-27 14:39:20.000000000 xade + 15 2025-10-27 14:39:20.000000000 zilupot +\f[R] +.fi +.PP +For \f[V]zip\f[R] and \f[V]squashfs\f[R] files this is 1s. +.SS Hashes +.PP +Which hash is supported depends on the archive type. +Zip files use CRC32, Squashfs don\[aq]t support any hashes. +For example: +.IP +.nf +\f[C] +$ rclone hashsum crc32 :archive:s3:rclone/dir/100files.zip/ +b2288554 cofofiy5jun +a87e62b6 wose +f90f630b xade +c7d0ef29 gigi +f1c64740 soboves8 +cb7b4a5d quhenen2rey +5115242b kozemof/fonaxo +afeabd9a qejahen +71202402 kozemof/fijubey5di +bd99e512 kozemof/napux +\&... +\f[R] +.fi +.PP +Hashes will be checked when the file is read from the archive and used +as part of syncing if possible. +.IP +.nf +\f[C] +$ rclone copy -vv :archive:s3:rclone/dir/100files.zip /tmp/100files +\&... +2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk: crc32 = abd05cc8 OK +2025/10/27 14:56:44 DEBUG : kacak/turovat5c/yuyuquk.aeb661dc.partial: renamed to: kacak/turovat5c/yuyuquk +2025/10/27 14:56:44 INFO : kacak/turovat5c/yuyuquk: Copied (new) +\&... 
+\f[R] +.fi +.SS Zip +.PP +The Zip file format (https://en.wikipedia.org/wiki/ZIP_(file_format)) is +a widely used archive format that bundles one or more files and folders +into a single file, primarily for easier storage or transmission. +It typically uses compression (most commonly the DEFLATE algorithm) to +reduce the overall size of the archived content. +Zip files are supported natively by most modern operating systems. +.PP +Rclone does not support the following advanced features of Zip files: +.IP \[bu] 2 +Splitting large archives into smaller parts +.IP \[bu] 2 +Password protection +.IP \[bu] 2 +Zstd compression +.SS Squashfs +.PP +Squashfs is a compressed, read-only file system format primarily used in +Linux-based systems. +It\[aq]s designed to compress entire file systems (including files, +directories, and metadata) into a single archive file, which can then be +mounted and read directly, appearing as a normal directory structure. +Because it\[aq]s read-only and highly compressed, Squashfs is ideal for +live CDs/USBs, embedded devices with limited storage, and software +package distribution, as it saves space and ensures the integrity of the +original files. +.PP +Rclone supports the following squashfs compression formats: +.IP \[bu] 2 +\f[V]Gzip\f[R] +.IP \[bu] 2 +\f[V]Lzma\f[R] +.IP \[bu] 2 +\f[V]Xz\f[R] +.IP \[bu] 2 +\f[V]Zstd\f[R] +.PP +These are not yet working: +.IP \[bu] 2 +\f[V]Lzo\f[R] - Not yet supported +.IP \[bu] 2 +\f[V]Lz4\f[R] - Broken with \[dq]error decompressing: lz4: bad magic +number\[dq] +.PP +Rclone works fastest with large squashfs block sizes. +For example: +.IP +.nf +\f[C] +mksquashfs 100files 100files.sqfs -comp zstd -b 1M +\f[R] +.fi +.SS Limitations +.PP +Files in the archive backend are read only. +It isn\[aq]t possible to create archives with the archive backend yet. +However you \f[B]can\f[R] create archives with rclone archive +create (https://rclone.org/commands/rclone_archive_create/). +.PP +Only \f[V].zip\f[R] and \f[V].sqfs\f[R] archives are supported as these +are the only common archiving formats which make it easy to read +directory listings from the archive without downloading the whole +archive. +.PP +Internally the archive backend uses the VFS to access files. +It isn\[aq]t possible to configure the internal VFS yet which might be +useful. +.SS Archive Formats +.PP +Here\[aq]s a table rating common archive formats on their Cloud +Optimization which is based on their ability to access a single file +without reading the entire archive. +.PP +This capability depends on whether the format has a central +\f[B]index\f[R] (or \[dq]table of contents\[dq]) that a program can read +first to find the exact location of a specific file. +.PP +.TS +tab(@); +lw(17.5n) lw(17.5n) lw(17.5n) lw(17.5n). +T{ +Format +T}@T{ +Extensions +T}@T{ +Cloud Optimized +T}@T{ +Explanation +T} +_ +T{ +\f[B]ZIP\f[R] +T}@T{ +\f[V].zip\f[R] +T}@T{ +\f[B]Excellent\f[R] +T}@T{ +\f[B]Zip files have an index\f[R] (the \[dq]central directory\[dq]) +stored at the \f[I]end\f[R] of the file. +A program can seek to the end, read the index to find a file\[aq]s +location and size, and then seek directly to that file\[aq]s data to +extract it. +T} +T{ +\f[B]SquashFS\f[R] +T}@T{ +\f[V].squashfs\f[R], \f[V].sqfs\f[R], \f[V].sfs\f[R] +T}@T{ +\f[B]Excellent\f[R] +T}@T{ +This is a compressed read-only \f[I]filesystem image\f[R], not just an +archive. +It is \f[B]specifically designed for random access\f[R]. 
+It uses metadata and index tables to allow the system to find and +decompress individual files or data blocks on demand. +T} +T{ +\f[B]ISO Image\f[R] +T}@T{ +\f[V].iso\f[R] +T}@T{ +\f[B]Excellent\f[R] +T}@T{ +Like SquashFS, this is a \f[I]filesystem image\f[R] (for optical media). +It contains a filesystem (like ISO 9660 or UDF) with a \f[B]table of +contents at a known location\f[R], allowing for direct access to any +file without reading the whole disk. +T} +T{ +\f[B]RAR\f[R] +T}@T{ +\f[V].rar\f[R] +T}@T{ +\f[B]Good\f[R] +T}@T{ +RAR supports \[dq]non-solid\[dq] and \[dq]solid\[dq] modes. +In the common \f[B]non-solid\f[R] mode, files are compressed separately, +and an index allows for easy single-file extraction (like ZIP). +In \[dq]solid\[dq] mode, this rating would be \[dq]Very Poor.\[dq] +T} +T{ +\f[B]7z\f[R] +T}@T{ +\f[V].7z\f[R] +T}@T{ +\f[B]Poor\f[R] +T}@T{ +By default, 7z uses \[dq]solid\[dq] archives to maximize compression. +This compresses files as one continuous stream. +To extract a file from the middle, all preceding files must be +decompressed first. +(If explicitly created as \[dq]non-solid,\[dq] its rating would be +\[dq]Excellent\[dq]). +T} +T{ +\f[B]tar\f[R] +T}@T{ +\f[V].tar\f[R] +T}@T{ +\f[B]Poor\f[R] +T}@T{ +\[dq]Tape Archive\[dq] is a \f[I]streaming\f[R] format with \f[B]no +central index\f[R]. +To find a file, you must read the archive from the beginning, checking +each file header one by one until you find the one you want. +This is slow but doesn\[aq]t require decompressing data. +T} +T{ +\f[B]Gzipped Tar\f[R] +T}@T{ +\f[V].tar.gz\f[R], \f[V].tgz\f[R] +T}@T{ +\f[B]Very Poor\f[R] +T}@T{ +This is a \f[V]tar\f[R] file (already \[dq]Poor\[dq]) compressed with +\f[V]gzip\f[R] as a \f[B]single, non-seekable stream\f[R]. +You cannot seek. +To get \f[I]any\f[R] file, you must decompress the \f[I]entire\f[R] +archive from the beginning up to that file. +T} +T{ +\f[B]Bzipped/XZ Tar\f[R] +T}@T{ +\f[V].tar.bz2\f[R], \f[V].tar.xz\f[R] +T}@T{ +\f[B]Very Poor\f[R] +T}@T{ +This is the same principle as \f[V]tar.gz\f[R]. +The entire archive is one large compressed block, making random access +impossible. +T} +.TE +.SS Ideas for improvements +.PP +It would be possible to add ISO support fairly easily as the library we +use (go-diskfs (https://github.com/diskfs/go-diskfs/)) supports it. +We could also add \f[V]ext4\f[R] and \f[V]fat32\f[R] the same way, +however in my experience these are not very common as files so probably +not worth it. +Go-diskfs can also read partitions which we could potentially take +advantage of. +.PP +It would be possible to add write support, but this would only be for +creating new archives, not for updating existing archives. +.SS Standard options +.PP +Here are the Standard options specific to archive (Read archives). +.SS --archive-remote +.PP +Remote to wrap to read archives from. +.PP +Normally should contain a \[aq]:\[aq] and a path, e.g. +\[dq]myremote:path/to/dir\[dq], \[dq]myremote:bucket\[dq] or +\[dq]myremote:\[dq]. +.PP +If this is left empty, then the archive backend will use the root as the +remote. +.PP +This means that you can use :archive:remote:path and it will be +equivalent to setting remote=\[dq]remote:path\[dq]. +.PP +Properties: +.IP \[bu] 2 +Config: remote +.IP \[bu] 2 +Env Var: RCLONE_ARCHIVE_REMOTE +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Advanced options +.PP +Here are the Advanced options specific to archive (Read archives). +.SS --archive-description +.PP +Description of the remote. 
+.PP +Properties: +.IP \[bu] 2 +Config: description +.IP \[bu] 2 +Env Var: RCLONE_ARCHIVE_DESCRIPTION +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS Metadata +.PP +Any metadata supported by the underlying remote is read and written. +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SH Backblaze B2 .PP B2 is Backblaze\[aq]s cloud storage @@ -44197,7 +50647,7 @@ Key. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote q) Quit config n/q> n @@ -44393,6 +50843,7 @@ deletion instead of hiding them. .PP Old versions of files, where available, are visible using the \f[V]--b2-versions\f[R] flag. +These can be deleted as required with \f[V]delete\f[R]. .PP It is also possible to view a bucket as it was at a certain point in time, using the \f[V]--b2-version-at\f[R] flag. @@ -44943,6 +51394,115 @@ Env Var: RCLONE_B2_ENCODING Type: Encoding .IP \[bu] 2 Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot +.SS --b2-sse-customer-algorithm +.PP +If using SSE-C, the server-side encryption algorithm used when storing +this object in B2. +.PP +Properties: +.IP \[bu] 2 +Config: sse_customer_algorithm +.IP \[bu] 2 +Env Var: RCLONE_B2_SSE_CUSTOMER_ALGORITHM +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +None +.RE +.IP \[bu] 2 +\[dq]AES256\[dq] +.RS 2 +.IP \[bu] 2 +Advanced Encryption Standard (256 bits key length) +.RE +.RE +.SS --b2-sse-customer-key +.PP +To use SSE-C, you may provide the secret encryption key encoded in a +UTF-8 compatible string to encrypt/decrypt your data +.PP +Alternatively you can provide --sse-customer-key-base64. +.PP +Properties: +.IP \[bu] 2 +Config: sse_customer_key +.IP \[bu] 2 +Env Var: RCLONE_B2_SSE_CUSTOMER_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +None +.RE +.RE +.SS --b2-sse-customer-key-base64 +.PP +To use SSE-C, you may provide the secret encryption key encoded in +Base64 format to encrypt/decrypt your data +.PP +Alternatively you can provide --sse-customer-key. +.PP +Properties: +.IP \[bu] 2 +Config: sse_customer_key_base64 +.IP \[bu] 2 +Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_BASE64 +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +None +.RE +.RE +.SS --b2-sse-customer-key-md5 +.PP +If using SSE-C you may provide the secret encryption key MD5 checksum +(optional). +.PP +If you leave it blank, this is calculated automatically from the +sse_customer_key provided. +.PP +Properties: +.IP \[bu] 2 +Config: sse_customer_key_md5 +.IP \[bu] 2 +Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_MD5 +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.IP \[bu] 2 +Examples: +.RS 2 +.IP \[bu] 2 +\[dq]\[dq] +.RS 2 +.IP \[bu] 2 +None +.RE +.RE .SS --b2-description .PP Description of the remote. @@ -44960,7 +51520,7 @@ Required: false .PP Here are the commands specific to the b2 backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -44977,7 +51537,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS lifecycle .PP -Read or set the lifecycle for a bucket +Read or set the lifecycle for a bucket. .IP .nf \f[C] @@ -44987,8 +51547,6 @@ rclone backend lifecycle remote: [options] [+] .PP This command can be used to read or set the lifecycle for a bucket. 
.PP -Usage Examples: -.PP To show the current lifecycle rules: .IP .nf @@ -45013,7 +51571,7 @@ This will dump something like this showing the lifecycle rules. .fi .PP If there are no lifecycle rules (the default) then it will just return -[]. +\f[V][]\f[R]. .PP To reset the current lifecycle rules: .IP @@ -45041,7 +51599,7 @@ rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1 \f[R] .fi .PP -See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules +See: .PP Options: .IP \[bu] 2 @@ -45050,10 +51608,10 @@ this many days it is deleted. 0 is off. .IP \[bu] 2 \[dq]daysFromStartingToCancelingUnfinishedLargeFiles\[dq]: Cancels any -unfinished large file versions after this many days +unfinished large file versions after this many days. .IP \[bu] 2 \[dq]daysFromUploadingToHiding\[dq]: This many days after uploading a -file is hidden +file is hidden. .SS cleanup .PP Remove unfinished large file uploads. @@ -45081,7 +51639,7 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. .PP Options: .IP \[bu] 2 -\[dq]max-age\[dq]: Max age of upload to delete +\[dq]max-age\[dq]: Max age of upload to delete. .SS cleanup-hidden .PP Remove old versions of files. @@ -45111,7 +51669,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SH Box .PP Paths are specified as \f[V]remote:path\f[R] @@ -45130,7 +51688,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -45198,7 +51756,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from Box. @@ -45207,7 +51766,8 @@ get back the verification code. This is on \f[V]http://127.0.0.1:53682/\f[R] and this may require you to unblock it temporarily if you are running a host firewall. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Box .IP @@ -45484,6 +52044,21 @@ Env Var: RCLONE_BOX_BOX_CONFIG_FILE Type: string .IP \[bu] 2 Required: false +.SS --box-config-credentials +.PP +Box App config.json contents. +.PP +Leave blank normally. +.PP +Properties: +.IP \[bu] 2 +Config: config_credentials +.IP \[bu] 2 +Env Var: RCLONE_BOX_CONFIG_CREDENTIALS +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS --box-access-token .PP Box App Primary Access Token @@ -45729,7 +52304,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SS Get your own Box App ID .PP Here is how to create your own Box App ID for rclone: @@ -45793,7 +52368,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -45982,8 +52557,10 @@ Run \f[V]rclone config\f[R] and add all the Plex options (endpoint, username and password) in your remote and it will be automatically enabled. 
.PP -Affected settings: - \f[V]cache-workers\f[R]: \f[I]Configured value\f[R] -during confirmed playback or \f[I]1\f[R] all the other times +Affected settings: +.IP \[bu] 2 +\f[V]cache-workers\f[R]: \f[I]Configured value\f[R] during confirmed +playback or \f[I]1\f[R] all the other times .SS Certificate Validation .PP When the Plex server is configured to only accept secure connections, it @@ -46002,7 +52579,7 @@ the dots have been replaced with dashes, e.g. .PP To get the \f[V]server-hash\f[R] part, the easiest way is to visit .PP -https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token + .PP This page will list all the available Plex servers for your account with at least one \f[V].plex.direct\f[R] link for each. @@ -46034,11 +52611,11 @@ on them. Any reports or feedback on how cache behaves on this OS is greatly appreciated. .IP \[bu] 2 -https://github.com/rclone/rclone/issues/1935 +Issue #1935 (https://github.com/rclone/rclone/issues/1935) .IP \[bu] 2 -https://github.com/rclone/rclone/issues/1907 +Issue #1907 (https://github.com/rclone/rclone/issues/1907) .IP \[bu] 2 -https://github.com/rclone/rclone/issues/1834 +Issue #1834 (https://github.com/rclone/rclone/issues/1834) .SS Risk of throttling .PP Future iterations of the cache backend will make use of the pooling @@ -46050,17 +52627,20 @@ meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts. .PP -Some recommendations: - don\[aq]t use a very small interval for entry -information (\f[V]--cache-info-age\f[R]) - while writes aren\[aq]t yet -optimised, you can still write through \f[V]cache\f[R] which gives you -the advantage of adding the file in the cache at the same time if -configured to do so. +Some recommendations: +.IP \[bu] 2 +don\[aq]t use a very small interval for entry information +(\f[V]--cache-info-age\f[R]) +.IP \[bu] 2 +while writes aren\[aq]t yet optimised, you can still write through +\f[V]cache\f[R] which gives you the advantage of adding the file in the +cache at the same time if configured to do so. .PP Future enhancements: .IP \[bu] 2 -https://github.com/rclone/rclone/issues/1937 +Issue #1937 (https://github.com/rclone/rclone/issues/1937) .IP \[bu] 2 -https://github.com/rclone/rclone/issues/1936 +Issue #1936 (https://github.com/rclone/rclone/issues/1936) .SS cache and crypt .PP One common scenario is to keep your data encrypted in the cloud provider @@ -46108,7 +52688,10 @@ Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt. .PP -Params: - \f[B]remote\f[R] = path to remote \f[B](required)\f[R] - +Params: +.IP \[bu] 2 +\f[B]remote\f[R] = path to remote \f[B](required)\f[R] +.IP \[bu] 2 \f[B]withData\f[R] = true/false to delete cached data (chunks) as well \f[I](optional, false by default)\f[R] .SS Standard options @@ -46581,7 +53164,7 @@ Required: false .PP Here are the commands specific to the cache backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -46632,7 +53215,7 @@ We will call this one \f[V]overlay\f[R] to separate it from the .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -47266,8 +53849,12 @@ You will need to log in and get the \f[V]API Key\f[R] and \f[V]API Secret\f[R] for your account from the developer section. 
.PP Now run -.PP -\f[V]rclone config\f[R] +.IP +.nf +\f[C] +rclone config +\f[R] +.fi .PP Follow the interactive setup process: .IP @@ -47341,16 +53928,28 @@ y/e/d> y .fi .PP List directories in the top level of your Media Library -.PP -\f[V]rclone lsd cloudinary-media-library:\f[R] +.IP +.nf +\f[C] +rclone lsd cloudinary-media-library: +\f[R] +.fi .PP Make a new directory. -.PP -\f[V]rclone mkdir cloudinary-media-library:directory\f[R] +.IP +.nf +\f[C] +rclone mkdir cloudinary-media-library:directory +\f[R] +.fi .PP List the contents of a directory. -.PP -\f[V]rclone ls cloudinary-media-library:directory\f[R] +.IP +.nf +\f[C] +rclone ls cloudinary-media-library:directory +\f[R] +.fi .SS Modified time and hashes .PP Cloudinary stores md5 and timestamps for any successful Put @@ -47515,7 +54114,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -47584,7 +54183,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from Citrix ShareFile. @@ -47593,7 +54193,8 @@ get back the verification code. This is on \f[V]http://127.0.0.1:53682/\f[R] and this it may require you to unblock it temporarily if you are running a host firewall. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your ShareFile .IP @@ -47987,7 +54588,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SH Crypt .PP Rclone \f[V]crypt\f[R] remotes encrypt and decrypt other remotes. @@ -48088,7 +54689,7 @@ content, and access it exclusively through a crypt remote. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -48258,13 +54859,15 @@ The only possibility is to re-upload everything via a crypt remote configured with your new password. .PP Depending on the size of your data, your bandwidth, storage quota etc, -there are different approaches you can take: - If you have everything in -a different location, for example on your local system, you could remove -all of the prior encrypted files, change the password for your -configured crypt remote (or delete and re-create the crypt -configuration), and then re-upload everything from the alternative -location. -- If you have enough space on the storage system you can create a new +there are different approaches you can take: +.IP \[bu] 2 +If you have everything in a different location, for example on your +local system, you could remove all of the prior encrypted files, change +the password for your configured crypt remote (or delete and re-create +the crypt configuration), and then re-upload everything from the +alternative location. 
+.IP \[bu] 2 +If you have enough space on the storage system you can create a new crypt remote pointing to a separate directory on the same backend, and then use rclone to copy everything from the original crypt remote to the new, effectively decrypting everything on the fly using the old password @@ -48778,7 +55381,7 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. .PP Here are the commands specific to the crypt backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -48795,7 +55398,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS encode .PP -Encode the given filename(s) +Encode the given filename(s). .IP .nf \f[C] @@ -48806,7 +55409,7 @@ rclone backend encode remote: [options] [+] This encodes the filenames given as arguments returning a list of strings of the encoded results. .PP -Usage Example: +Usage examples: .IP .nf \f[C] @@ -48816,7 +55419,7 @@ rclone rc backend/command command=encode fs=crypt: file1 [file2...] .fi .SS decode .PP -Decode the given filename(s) +Decode the given filename(s). .IP .nf \f[C] @@ -48828,7 +55431,7 @@ This decodes the filenames given as arguments returning a list of strings of the decoded results. It will return an error if any of the inputs are invalid. .PP -Usage Example: +Usage examples: .IP .nf \f[C] @@ -48975,10 +55578,10 @@ If the user doesn\[aq]t supply a salt then rclone uses an internal one. \f[V]scrypt\f[R] makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt. -.SS SEE ALSO +.SS See Also .IP \[bu] 2 rclone cryptdecode (https://rclone.org/commands/rclone_cryptdecode/) - -Show forward/reverse mapping of encrypted filenames +Show forward/reverse mapping of encrypted filenames. .SH Compress .SS Warning .PP @@ -48997,6 +55600,7 @@ compression mode to use: .IP .nf \f[C] +$ rclone config Current remotes: Name Type @@ -49004,7 +55608,6 @@ Name Type remote_to_press sometype e) Edit existing remote -$ rclone config n) New remote d) Delete remote r) Rename remote @@ -49013,46 +55616,82 @@ s) Set configuration password q) Quit config e/n/d/r/c/s/q> n name> compress + +Option Storage. +Type of storage to configure. +Choose a number from below, or type in your own value. \&... - 8 / Compress a remote - \[rs] \[dq]compress\[dq] +12 / Compress a remote + \[rs] (compress) \&... Storage> compress -** See help for compress backend at: https://rclone.org/compress/ ** +Option remote. Remote to compress. -Enter a string value. Press Enter for the default (\[dq]\[dq]). +Enter a value. remote> remote_to_press:subdir + +Option mode. Compression mode. -Enter a string value. Press Enter for the default (\[dq]gzip\[dq]). -Choose a number from below, or type in your own value - 1 / Gzip compression balanced for speed and compression strength. - \[rs] \[dq]gzip\[dq] -compression_mode> gzip -Edit advanced config? (y/n) +Choose a number from below, or type in your own value of type string. +Press Enter for the default (gzip). + 1 / Standard gzip compression with fastest parameters. + \[rs] (gzip) + 2 / Zstandard compression \[em] fast modern algorithm offering adjustable speed-to-compression tradeoffs. + \[rs] (zstd) +mode> gzip + +Option level. +GZIP (levels -2 to 9): +- -2 \[em] Huffman encoding only. Only use if you know what you\[aq]re doing. +- -1 (default) \[em] recommended; equivalent to level 5. +- 0 \[em] turns off compression. 
+- 1\[en]9 \[em] increase compression at the cost of speed. Going past 6 generally offers very little return. + +ZSTD (levels 0 to 4): +- 0 \[em] turns off compression entirely. +- 1 \[em] fastest compression with the lowest ratio. +- 2 (default) \[em] good balance of speed and compression. +- 3 \[em] better compression, but uses about 2\[en]3x more CPU than the default. +- 4 \[em] best possible compression ratio (highest CPU cost). + +Notes: +- Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs. +- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5). +Enter a value. +level> -1 + +Edit advanced config? y) Yes n) No (default) y/n> n -Remote config --------------------- -[compress] -type = compress -remote = remote_to_press:subdir -compression_mode = gzip --------------------- + +Configuration complete. +Options: +- type: compress +- remote: remote_to_press:subdir +- mode: gzip +- level: -1 +Keep this \[dq]compress\[dq] remote? y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> y \f[R] .fi -.SS Compression Modes -.PP -Currently only gzip compression is supported. -It provides a decent balance between speed and size and is well -supported by other applications. -Compression strength can further be configured via an advanced setting -where 0 is no compression and 9 is strongest compression. +.SS Compression Algorithms +.IP \[bu] 2 +\f[B]GZIP\f[R] \[en] a well-established and widely adopted algorithm +that strikes a solid balance between compression speed and ratio. +It supports compression levels from -2 to 9, with the default -1 +(roughly equivalent to level 5) offering an effective middle ground for +most scenarios. +.IP \[bu] 2 +\f[B]Zstandard (zstd)\f[R] \[en] a modern, high-performance algorithm +that offers precise control over the trade-off between speed and +compression efficiency. +Compression levels range from 0 (no compression) to 4 (maximum +compression). .SS File types .PP If you open a remote wrapped by compress, you will see that there are @@ -49109,21 +55748,33 @@ Examples: .IP \[bu] 2 Standard gzip compression with fastest parameters. .RE +.IP \[bu] 2 +\[dq]zstd\[dq] +.RS 2 +.IP \[bu] 2 +Zstandard compression \[em] fast modern algorithm offering adjustable +speed-to-compression tradeoffs. +.RE .RE -.SS Advanced options -.PP -Here are the Advanced options specific to compress (Compress a remote). .SS --compress-level .PP -GZIP compression level (-2 to 9). -.PP -Generally -1 (default, equivalent to 5) is recommended. -Levels 1 to 9 increase compression at the cost of speed. +GZIP (levels -2 to 9): - -2 \[em] Huffman encoding only. +Only use if you know what you\[aq]re doing. +- -1 (default) \[em] recommended; equivalent to level 5. +- 0 \[em] turns off compression. +- 1\[en]9 \[em] increase compression at the cost of speed. Going past 6 generally offers very little return. .PP -Level -2 uses Huffman encoding only. -Only use if you know what you are doing. -Level 0 turns off compression. +ZSTD (levels 0 to 4): - 0 \[em] turns off compression entirely. +- 1 \[em] fastest compression with the lowest ratio. +- 2 (default) \[em] good balance of speed and compression. +- 3 \[em] better compression, but uses about 2\[en]3x more CPU than the +default. +- 4 \[em] best possible compression ratio (highest CPU cost). +.PP +Notes: - Choose GZIP for wide compatibility; ZSTD for better speed/ratio +tradeoffs. +- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5). 
.PP Properties: .IP \[bu] 2 @@ -49131,9 +55782,12 @@ Config: level .IP \[bu] 2 Env Var: RCLONE_COMPRESS_LEVEL .IP \[bu] 2 -Type: int +Type: string .IP \[bu] 2 -Default: -1 +Required: true +.SS Advanced options +.PP +Here are the Advanced options specific to compress (Compress a remote). .SS --compress-ram-cache-limit .PP Some remotes don\[aq]t allow the upload of files with unknown size. @@ -49233,7 +55887,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -49377,13 +56031,25 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. The DOI remote is a read only remote for reading files from digital object identifiers (DOI). .PP -Currently, the DOI backend supports DOIs hosted with: - -InvenioRDM (https://inveniosoftware.org/products/rdm/) - -Zenodo (https://zenodo.org) - CaltechDATA (https://data.caltech.edu) - -Other InvenioRDM repositories (https://inveniosoftware.org/showcase/) - -Dataverse (https://dataverse.org) - Harvard -Dataverse (https://dataverse.harvard.edu) - Other Dataverse -repositories (https://dataverse.org/installations) +Currently, the DOI backend supports DOIs hosted with: +.IP \[bu] 2 +InvenioRDM (https://inveniosoftware.org/products/rdm/) +.RS 2 +.IP \[bu] 2 +Zenodo (https://zenodo.org) +.IP \[bu] 2 +CaltechDATA (https://data.caltech.edu) +.IP \[bu] 2 +Other InvenioRDM repositories (https://inveniosoftware.org/showcase/) +.RE +.IP \[bu] 2 +Dataverse (https://dataverse.org) +.RS 2 +.IP \[bu] 2 +Harvard Dataverse (https://dataverse.harvard.edu) +.IP \[bu] 2 +Other Dataverse repositories (https://dataverse.org/installations) +.RE .PP Paths are specified as \f[V]remote:path\f[R] .PP @@ -49396,7 +56062,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -49535,7 +56201,7 @@ Required: false .PP Here are the commands specific to the doi backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -49561,10 +56227,12 @@ rclone backend metadata remote: [options] [+] .fi .PP This command returns a JSON object with some information about the DOI. +.PP +Usage example: .IP .nf \f[C] -rclone backend medatadata doi: +rclone backend metadata doi: \f[R] .fi .PP @@ -49582,7 +56250,7 @@ rclone backend set remote: [options] [+] This set command can be used to update the config parameters for a running doi backend. .PP -Usage Examples: +Usage examples: .IP .nf \f[C] @@ -49617,7 +56285,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -49660,7 +56328,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from Dropbox. 
@@ -50455,7 +57124,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -50527,7 +57196,8 @@ y/e/d> y \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Enterprise File Fabric .IP @@ -50782,7 +57452,7 @@ First, run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -50882,7 +57552,7 @@ Copy a specific file to the FileLu root: .IP .nf \f[C] -rclone copy D:\[rs]\[rs]hello.txt filelu: +rclone copy D:\[rs]hello.txt filelu: \f[R] .fi .PP @@ -50906,7 +57576,7 @@ Move files from a local directory to a FileLu directory: .IP .nf \f[C] -rclone move D:\[rs]\[rs]local-folder filelu:/remote-path/ +rclone move D:\[rs]local-folder filelu:/remote-path/ \f[R] .fi .PP @@ -51306,7 +57976,7 @@ For an anonymous FTP server, see below. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote r) Rename remote c) Copy remote @@ -51981,7 +58651,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .PP The implementation of : \f[V]--dump headers\f[R], \f[V]--dump bodies\f[R], \f[V]--dump auth\f[R] for debugging isn\[aq]t @@ -52042,7 +58712,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -52089,7 +58759,8 @@ y/e/d> y \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories and files in the top level of your Gofile .IP @@ -52406,7 +59077,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -52490,7 +59161,9 @@ Choose a number from below, or type in your own value \[rs] \[dq]us-east1\[dq] 13 / Northern Virginia. \[rs] \[dq]us-east4\[dq] -14 / Oregon. +14 / Ohio. + \[rs] \[dq]us-east5\[dq] +15 / Oregon. \[rs] \[dq]us-west1\[dq] location> 12 The storage class to use when storing objects in Google Cloud Storage. @@ -52538,7 +59211,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically @@ -52626,7 +59300,7 @@ If you already have a working service account, skip to step 3. 
.IP .nf \f[C] -gcloud iam service-accounts create gcs-read-only +gcloud iam service-accounts create gcs-read-only \f[R] .fi .PP @@ -52636,11 +59310,11 @@ above) .IP .nf \f[C] - $ PROJECT_ID=my-project - $ gcloud --verbose iam service-accounts add-iam-policy-binding \[rs] - gcs-read-only\[at]${PROJECT_ID}.iam.gserviceaccount.com \[rs] - --member=serviceAccount:gcs-read-only\[at]${PROJECT_ID}.iam.gserviceaccount.com \[rs] - --role=roles/storage.objectViewer +$ PROJECT_ID=my-project +$ gcloud --verbose iam service-accounts add-iam-policy-binding \[rs] + gcs-read-only\[at]${PROJECT_ID}.iam.gserviceaccount.com \[rs] + --member=serviceAccount:gcs-read-only\[at]${PROJECT_ID}.iam.gserviceaccount.com \[rs] + --role=roles/storage.objectViewer \f[R] .fi .PP @@ -53239,6 +59913,12 @@ South Carolina Northern Virginia .RE .IP \[bu] 2 +\[dq]us-east5\[dq] +.RS 2 +.IP \[bu] 2 +Ohio +.RE +.IP \[bu] 2 \[dq]us-west1\[dq] .RS 2 .IP \[bu] 2 @@ -53588,7 +60268,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SH Google Drive .PP Paths are specified as \f[V]drive:path\f[R] @@ -53606,7 +60286,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -53686,7 +60366,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically @@ -53820,8 +60501,7 @@ environment variable. Let\[aq]s say that you are the administrator of a Google Workspace. The goal is to read or write data on an individual\[aq]s Drive account, who IS a member of the domain. -We\[aq]ll call the domain \f[B]example.com\f[R], and the user -\f[B]foo\[at]example.com\f[R]. +We\[aq]ll call the domain , and the user . .PP There\[aq]s a few steps we need to go through to accomplish this: .SS 1. Create a service account for example.com @@ -53917,11 +60597,13 @@ folder named backup. .PP Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using -\f[V]--drive-impersonate\f[R], do this instead: - in the gdrive web -interface, share your root folder with the user/email of the new Service -Account you created/selected at step 1 - use rclone without specifying -the \f[V]--drive-impersonate\f[R] option, like this: -\f[V]rclone -v lsf gdrive:backup\f[R] +\f[V]--drive-impersonate\f[R], do this instead: +.IP \[bu] 2 +in the gdrive web interface, share your root folder with the user/email +of the new Service Account you created/selected at step 1 +.IP \[bu] 2 +use rclone without specifying the \f[V]--drive-impersonate\f[R] option, +like this: \f[V]rclone -v lsf gdrive:backup\f[R] .SS Shared drives (team drives) .PP If you want to configure the remote to point to a Google Shared Drive @@ -55786,7 +62468,7 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. .PP Here are the commands specific to the drive backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -55803,7 +62485,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). 
.SS get .PP -Get command for fetching the drive config parameters +Get command for fetching the drive config parameters. .IP .nf \f[C] @@ -55812,9 +62494,9 @@ rclone backend get remote: [options] [+] .fi .PP This is a get command which will be used to fetch the various drive -config parameters +config parameters. .PP -Usage Examples: +Usage examples: .IP .nf \f[C] @@ -55825,12 +62507,12 @@ rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o ch .PP Options: .IP \[bu] 2 -\[dq]chunk_size\[dq]: show the current upload chunk size +\[dq]chunk_size\[dq]: Show the current upload chunk size. .IP \[bu] 2 -\[dq]service_account_file\[dq]: show the current service account file +\[dq]service_account_file\[dq]: Show the current service account file. .SS set .PP -Set command for updating the drive config parameters +Set command for updating the drive config parameters. .IP .nf \f[C] @@ -55839,9 +62521,9 @@ rclone backend set remote: [options] [+] .fi .PP This is a set command which will be used to update the various drive -config parameters +config parameters. .PP -Usage Examples: +Usage examples: .IP .nf \f[C] @@ -55852,12 +62534,12 @@ rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json .PP Options: .IP \[bu] 2 -\[dq]chunk_size\[dq]: update the current upload chunk size +\[dq]chunk_size\[dq]: Update the current upload chunk size. .IP \[bu] 2 -\[dq]service_account_file\[dq]: update the current service account file +\[dq]service_account_file\[dq]: Update the current service account file. .SS shortcut .PP -Create shortcuts from files or directories +Create shortcuts from files or directories. .IP .nf \f[C] @@ -55867,7 +62549,7 @@ rclone backend shortcut remote: [options] [+] .PP This command creates shortcuts from files or directories. .PP -Usage: +Usage examples: .IP .nf \f[C] @@ -55890,10 +62572,10 @@ This may fail with a permission error if the user authenticated with .PP Options: .IP \[bu] 2 -\[dq]target\[dq]: optional target remote for the shortcut destination +\[dq]target\[dq]: Optional target remote for the shortcut destination. .SS drives .PP -List the Shared Drives available to this account +List the Shared Drives available to this account. .IP .nf \f[C] @@ -55904,7 +62586,7 @@ rclone backend drives remote: [options] [+] This command lists the Shared Drives (Team Drives) available to this account. .PP -Usage: +Usage example: .IP .nf \f[C] @@ -55912,7 +62594,7 @@ rclone backend [-o config] drives drive: \f[R] .fi .PP -This will return a JSON list of objects like this +This will return a JSON list of objects like this: .IP .nf \f[C] @@ -55959,7 +62641,7 @@ It will also add a remote called AllDrives which shows all the shared drives combined into one directory tree. .SS untrash .PP -Untrash files and directories +Untrash files and directories. .IP .nf \f[C] @@ -55970,10 +62652,7 @@ rclone backend untrash remote: [options] [+] This command untrashes all the files and directories in the directory passed in recursively. .PP -Usage: -.PP -This takes an optional directory to trash which make this easier to use -via the API. +Usage example: .IP .nf \f[C] @@ -55982,6 +62661,9 @@ rclone backend --interactive untrash drive:directory subdir \f[R] .fi .PP +This takes an optional directory to trash which make this easier to use +via the API. +.PP Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it. .PP @@ -55997,7 +62679,7 @@ Result: .fi .SS copyid .PP -Copy files by ID +Copy files by ID. 
.IP .nf \f[C] @@ -56005,9 +62687,9 @@ rclone backend copyid remote: [options] [+] \f[R] .fi .PP -This command copies files by ID +This command copies files by ID. .PP -Usage: +Usage examples: .IP .nf \f[C] @@ -56032,7 +62714,7 @@ Use the --interactive/-i or --dry-run flag to see what would be copied before copying. .SS moveid .PP -Move files by ID +Move files by ID. .IP .nf \f[C] @@ -56040,9 +62722,9 @@ rclone backend moveid remote: [options] [+] \f[R] .fi .PP -This command moves files by ID +This command moves files by ID. .PP -Usage: +Usage examples: .IP .nf \f[C] @@ -56066,7 +62748,7 @@ Use the --interactive/-i or --dry-run flag to see what would be moved beforehand. .SS exportformats .PP -Dump the export formats for debug purposes +Dump the export formats for debug purposes. .IP .nf \f[C] @@ -56075,7 +62757,7 @@ rclone backend exportformats remote: [options] [+] .fi .SS importformats .PP -Dump the import formats for debug purposes +Dump the import formats for debug purposes. .IP .nf \f[C] @@ -56084,7 +62766,7 @@ rclone backend importformats remote: [options] [+] .fi .SS query .PP -List files using Google Drive query language +List files using Google Drive query language. .IP .nf \f[C] @@ -56092,9 +62774,9 @@ rclone backend query remote: [options] [+] \f[R] .fi .PP -This command lists files based on a query +This command lists files based on a query. .PP -Usage: +Usage example: .IP .nf \f[C] @@ -56130,30 +62812,29 @@ The result is a JSON array of matches, for example: .nf \f[C] [ -{ - \[dq]createdTime\[dq]: \[dq]2017-06-29T19:58:28.537Z\[dq], - \[dq]id\[dq]: \[dq]0AxBe_CDEF4zkGHI4d0FjYko2QkD\[dq], - \[dq]md5Checksum\[dq]: \[dq]68518d16be0c6fbfab918be61d658032\[dq], - \[dq]mimeType\[dq]: \[dq]text/plain\[dq], - \[dq]modifiedTime\[dq]: \[dq]2024-02-02T10:40:02.874Z\[dq], - \[dq]name\[dq]: \[dq]foo \[aq] \[rs]\[rs].txt\[dq], - \[dq]parents\[dq]: [ - \[dq]0BxAe_BCDE4zkFGZpcWJGek0xbzC\[dq] - ], - \[dq]resourceKey\[dq]: \[dq]0-ABCDEFGHIXJQpIGqBJq3MC\[dq], - \[dq]sha1Checksum\[dq]: \[dq]8f284fa768bfb4e45d076a579ab3905ab6bfa893\[dq], - \[dq]size\[dq]: \[dq]311\[dq], - \[dq]webViewLink\[dq]: \[dq]https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\[rs]u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC\[dq] -} + { + \[dq]createdTime\[dq]: \[dq]2017-06-29T19:58:28.537Z\[dq], + \[dq]id\[dq]: \[dq]0AxBe_CDEF4zkGHI4d0FjYko2QkD\[dq], + \[dq]md5Checksum\[dq]: \[dq]68518d16be0c6fbfab918be61d658032\[dq], + \[dq]mimeType\[dq]: \[dq]text/plain\[dq], + \[dq]modifiedTime\[dq]: \[dq]2024-02-02T10:40:02.874Z\[dq], + \[dq]name\[dq]: \[dq]foo \[aq] \[rs]\[rs].txt\[dq], + \[dq]parents\[dq]: [ + \[dq]0BxAe_BCDE4zkFGZpcWJGek0xbzC\[dq] + ], + \[dq]resourceKey\[dq]: \[dq]0-ABCDEFGHIXJQpIGqBJq3MC\[dq], + \[dq]sha1Checksum\[dq]: \[dq]8f284fa768bfb4e45d076a579ab3905ab6bfa893\[dq], + \[dq]size\[dq]: \[dq]311\[dq], + \[dq]webViewLink\[dq]: \[dq]https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\[rs]u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC\[dq] + } ] -\f[R] -.fi -.SS rescue -.PP -Rescue or delete any orphaned files -.IP -.nf -\f[C] +\[ga]\[ga]\[ga]console + +### rescue + +Rescue or delete any orphaned files. + +\[ga]\[ga]\[ga]console rclone backend rescue remote: [options] [+] \f[R] .fi @@ -56166,11 +62847,9 @@ This means that they are no longer in any folder in Google Drive. This command finds those files and either rescues them to a directory you specify or deletes them. .PP -Usage: -.PP This can be used in 3 ways. 
.PP -First, list all orphaned files +First, list all orphaned files: .IP .nf \f[C] @@ -56178,7 +62857,7 @@ rclone backend rescue drive: \f[R] .fi .PP -Second rescue all orphaned files to the directory indicated +Second rescue all orphaned files to the directory indicated: .IP .nf \f[C] @@ -56186,9 +62865,9 @@ rclone backend rescue drive: \[dq]relative/path/to/rescue/directory\[dq] \f[R] .fi .PP -e.g. -To rescue all orphans to a directory called \[dq]Orphans\[dq] in the top -level +E.g. +to rescue all orphans to a directory called \[dq]Orphans\[dq] in the top +level: .IP .nf \f[C] @@ -56196,7 +62875,7 @@ rclone backend rescue drive: Orphans \f[R] .fi .PP -Third delete all orphaned files to the trash +Third delete all orphaned files to the trash: .IP .nf \f[C] @@ -56292,28 +62971,37 @@ recommended to stay under that number as if you use more than that, it will cause rclone to rate limit and make things slower. .PP Here is how to create your own Google Drive client ID for rclone: -.IP "1." 3 +.IP " 1." 4 Log into the Google API Console (https://console.developers.google.com/) with your Google account. It doesn\[aq]t matter what Google account you use. (It need not be the same account as the Google Drive you want to access) -.IP "2." 3 +.IP " 2." 4 Select a project or create a new project. -.IP "3." 3 +.IP " 3." 4 Under \[dq]ENABLE APIS AND SERVICES\[dq] search for \[dq]Drive\[dq], and enable the \[dq]Google Drive API\[dq]. -.IP "4." 3 +.IP " 4." 4 Click \[dq]Credentials\[dq] in the left-side panel (not \[dq]Create credentials\[dq], which opens the wizard). -.IP "5." 3 +.IP " 5." 4 If you already configured an \[dq]Oauth Consent Screen\[dq], then skip to the next step; if not, click on \[dq]CONFIGURE CONSENT SCREEN\[dq] -button (near the top right corner of the right panel), then select -\[dq]External\[dq] and click on \[dq]CREATE\[dq]; on the next screen, -enter an \[dq]Application name\[dq] (\[dq]rclone\[dq] is OK); enter -\[dq]User Support Email\[dq] (your own email is OK); enter -\[dq]Developer Contact Email\[dq] (your own email is OK); then click on -\[dq]Save\[dq] (all other data is optional). +button (near the top right corner of the right panel), then click +\[dq]Get started\[dq]. +On the next screen, enter an \[dq]Application name\[dq] +(\[dq]rclone\[dq] is OK); enter \[dq]User Support Email\[dq] (your own +email is OK); Next, under Audience select \[dq]External\[dq]. +Next enter your own contact information, agree to terms and click +\[dq]Create\[dq]. +You should now see rclone (or your project name) in a box in the top +left of the screen. +.RS 4 +.PP +(PS: if you are a GSuite user, you could also select \[dq]Internal\[dq] +instead of \[dq]External\[dq] above, but this will restrict API use to +Google Workspace users in your organisation). +.PP You will also have to add some scopes (https://developers.google.com/drive/api/guides/api-specific-auth), including @@ -56325,44 +63013,37 @@ edit, create and delete files with RClone. .IP \[bu] 2 \f[V]https://www.googleapis.com/auth/drive.metadata.readonly\f[R] which you may also want to add. -.IP \[bu] 2 -If you want to add all at once, comma separated it would be -\f[V]https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly\f[R]. -.IP " 6." 4 -After adding scopes, click \[dq]Save and continue\[dq] to add test -users. -Be sure to add your own account to the test users. 
-Once you\[aq]ve added yourself as a test user and saved the changes, -click again on \[dq]Credentials\[dq] on the left panel to go back to the -\[dq]Credentials\[dq] screen. -.RS 4 .PP -(PS: if you are a GSuite user, you could also select \[dq]Internal\[dq] -instead of \[dq]External\[dq] above, but this will restrict API use to -Google Workspace users in your organisation). +To do this, click Data Access on the left side panel, click \[dq]add or +remove scopes\[dq] and select the three above and press update or go to +the \[dq]Manually add scopes\[dq] text box (scroll down) and enter +\[dq]https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly\[dq], +press add to table then update. +.PP +You should now see the three scopes on your Data access page. +Now press save at the bottom! .RE +.IP " 6." 4 +After adding scopes, click Audience Scroll down and click \[dq]+ Add +users\[dq]. +Add yourself as a test user and press save. .IP " 7." 4 -Click on the \[dq]+ CREATE CREDENTIALS\[dq] button at the top of the -screen, then select \[dq]OAuth client ID\[dq]. -.IP " 8." 4 +Go to Overview on the left panel, click \[dq]Create OAuth client\[dq]. Choose an application type of \[dq]Desktop app\[dq] and click \[dq]Create\[dq]. (the default name is fine) -.IP " 9." 4 +.IP " 8." 4 It will show you a client ID and client secret. Make a note of these. -.RS 4 -.PP -(If you selected \[dq]External\[dq] at Step 5 continue to Step 10. +(If you selected \[dq]External\[dq] at Step 5 continue to Step 9. If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can -skip straight to Step 11 but your destination drive must be part of the +skip straight to Step 10 but your destination drive must be part of the same Google Workspace.) -.RE +.IP " 9." 4 +Go to \[dq]Audience\[dq] and then click \[dq]PUBLISH APP\[dq] button and +confirm. +Add yourself as a test user if you haven\[aq]t already. .IP "10." 4 -Go to \[dq]Oauth consent screen\[dq] and then click \[dq]PUBLISH -APP\[dq] button and confirm. -You will also want to add yourself as a test user. -.IP "11." 4 Provide the noted client ID and client secret to rclone. .PP Be aware that, due to the \[dq]enhanced security\[dq] recently @@ -56382,8 +63063,8 @@ keeping the application in testing mode would also be sufficient. (Thanks to \[at]balazer on github for these instructions.) .PP Sometimes, creation of an OAuth consent in Google API Console fails due -to an error message \[lq]The request failed because changes to one of -the field of the resource is not supported\[rq]. +to an error message \[dq]The request failed because changes to one of +the field of the resource is not supported\[dq]. As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart (https://developers.google.com/drive/api/v3/quickstart/python) @@ -56417,7 +63098,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -56489,7 +63170,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. 
.PP Note that rclone runs a webserver on your local machine to collect the token as returned from Google if using web browser to automatically @@ -57153,9 +63835,13 @@ https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata .PP Hasher is a special overlay backend to create remotes which handle checksums for other remotes. -It\[aq]s main functions include: - Emulate hash types unimplemented by -backends - Cache checksums to help with slow hashing of large local or -(S)FTP files - Warm up checksum cache from external SUM files +It\[aq]s main functions include: +.IP \[bu] 2 +Emulate hash types unimplemented by backends +.IP \[bu] 2 +Cache checksums to help with slow hashing of large local or (S)FTP files +.IP \[bu] 2 +Warm up checksum cache from external SUM files .SS Getting started .PP To use Hasher, first set up the underlying remote following the @@ -57176,7 +63862,7 @@ Run \f[V]rclone config\f[R]: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -57238,12 +63924,16 @@ max_age = 24h \f[R] .fi .PP -Hasher takes basically the following parameters: - \f[V]remote\f[R] is -required, - \f[V]hashes\f[R] is a comma separated list of supported -checksums (by default \f[V]md5,sha1\f[R]), - \f[V]max_age\f[R] - maximum -time to keep a checksum value in the cache, \f[V]0\f[R] will disable -caching completely, \f[V]off\f[R] will cache \[dq]forever\[dq] (that is -until the files get changed). +Hasher takes basically the following parameters: +.IP \[bu] 2 +\f[V]remote\f[R] is required +.IP \[bu] 2 +\f[V]hashes\f[R] is a comma separated list of supported checksums (by +default \f[V]md5,sha1\f[R]) +.IP \[bu] 2 +\f[V]max_age\f[R] - maximum time to keep a checksum value in the cache +\f[V]0\f[R] will disable caching completely \f[V]off\f[R] will cache +\[dq]forever\[dq] (that is until the files get changed) .PP Make sure the \f[V]remote\f[R] has \f[V]:\f[R] (colon) in. If you specify the remote without a colon then rclone will use a local @@ -57264,7 +63954,6 @@ fully read or overwritten, like: .nf \f[C] rclone copy External:path/file Hasher:dest/path - rclone cat Hasher:path/to/file > /dev/null \f[R] .fi @@ -57278,7 +63967,6 @@ supported hashsum on the command line (we just care to re-read): .nf \f[C] rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null - rclone backend dump Hasher:path/to/subtree \f[R] .fi @@ -57288,7 +63976,6 @@ You can print or drop hashsum cache using custom backend commands: .nf \f[C] rclone backend dump Hasher:dir/subdir - rclone backend drop Hasher: \f[R] .fi @@ -57309,15 +63996,19 @@ The last argument can point to either a local or an The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill in the cache entries correspondingly. -- Paths in the SUM file are treated as relative to +.IP \[bu] 2 +Paths in the SUM file are treated as relative to \f[V]hasher:dir/subdir\f[R]. -- The command will \f[B]not\f[R] check that supplied values are correct. +.IP \[bu] 2 +The command will \f[B]not\f[R] check that supplied values are correct. You \f[B]must know\f[R] what you are doing. -- This is a one-time action. +.IP \[bu] 2 +This is a one-time action. The SUM file will not get \[dq]attached\[dq] to the remote. Cache entries can still be overwritten later, should the object\[aq]s fingerprint change. -- The tree walk can take long depending on the tree size. 
+.IP \[bu] 2 +The tree walk can take long depending on the tree size. You can increase \f[V]--checkers\f[R] to make it faster. Or use \f[V]stickyimport\f[R] if you don\[aq]t care about fingerprints and consistency. @@ -57423,7 +64114,7 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. .PP Here are the commands specific to the hasher backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -57440,7 +64131,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS drop .PP -Drop cache +Drop cache. .IP .nf \f[C] @@ -57449,10 +64140,17 @@ rclone backend drop remote: [options] [+] .fi .PP Completely drop checksum cache. -Usage Example: rclone backend drop hasher: +.PP +Usage example: +.IP +.nf +\f[C] +rclone backend drop hasher: +\f[R] +.fi .SS dump .PP -Dump the database +Dump the database. .IP .nf \f[C] @@ -57460,10 +64158,10 @@ rclone backend dump remote: [options] [+] \f[R] .fi .PP -Dump cache records covered by the current remote +Dump cache records covered by the current remote. .SS fulldump .PP -Full dump of the database +Full dump of the database. .IP .nf \f[C] @@ -57471,10 +64169,10 @@ rclone backend fulldump remote: [options] [+] \f[R] .fi .PP -Dump all cache records in the database +Dump all cache records in the database. .SS import .PP -Import a SUM file +Import a SUM file. .IP .nf \f[C] @@ -57484,10 +64182,17 @@ rclone backend import remote: [options] [+] .PP Amend hash cache from a SUM file and bind checksums to files by size/time. -Usage Example: rclone backend import hasher:subdir md5 /path/to/sum.md5 +.PP +Usage example: +.IP +.nf +\f[C] +rclone backend import hasher:subdir md5 /path/to/sum.md5 +\f[R] +.fi .SS stickyimport .PP -Perform fast import of a SUM file +Perform fast import of a SUM file. .IP .nf \f[C] @@ -57496,8 +64201,14 @@ rclone backend stickyimport remote: [options] [+] .fi .PP Fill hash cache from a SUM file without verifying file fingerprints. -Usage Example: rclone backend stickyimport hasher:subdir md5 -remote:path/to/sum.md5 +.PP +Usage example: +.IP +.nf +\f[C] +rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5 +\f[R] +.fi .SS Implementation details (advanced) .PP This section explains how various rclone operations work on a hasher @@ -57575,7 +64286,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -57875,6 +64586,9 @@ Type: string Required: false .SS Limitations .IP \[bu] 2 +Erasure coding not supported, see issue +#8808 (https://github.com/rclone/rclone/issues/8808) +.IP \[bu] 2 No server-side \f[V]Move\f[R] or \f[V]DirMove\f[R]. .IP \[bu] 2 Checksums not implemented. @@ -57895,7 +64609,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -57951,7 +64665,8 @@ account and hence should not be shared with other persons.\f[R] See the below section for more information. .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from HiDrive. @@ -57961,7 +64676,8 @@ The webserver runs on \f[V]http://127.0.0.1:53682/\f[R]. If local port \f[V]53682\f[R] is protected by a firewall you may need to temporarily unblock the firewall to complete authorization. 
.PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your HiDrive root folder .IP @@ -58001,9 +64717,9 @@ the configuration encryption docs (https://rclone.org/docs/#configuration-encryption). .SS Invalid refresh token .PP -As can be verified here (https://developer.hidrive.com/basics-flows/), -each \f[V]refresh_token\f[R] (for Native Applications) is valid for 60 -days. +As can be verified on HiDrive\[aq]s OAuth +guide (https://developer.hidrive.com/basics-flows/), each +\f[V]refresh_token\f[R] (for Native Applications) is valid for 60 days. If used to access HiDrivei, its validity will be automatically extended. .PP This means that if you @@ -58045,7 +64761,8 @@ Additionally, files or folders cannot be named either of the following: Therefore rclone will automatically replace these characters, if files or folders are stored or accessed with such names. .PP -You can read about how this filename encoding works in general here. +You can read about how this filename encoding works in general in the +main docs (https://rclone.org/overview/#restricted-filenames). .PP Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less. @@ -58090,7 +64807,6 @@ equivalent: .nf \f[C] rclone lsd --hidrive-root-prefix=\[dq]/users/test/\[dq] remote:path - rclone lsd remote:/users/test/path \f[R] .fi @@ -58517,7 +65233,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -58751,11 +65467,102 @@ Env Var: RCLONE_HTTP_DESCRIPTION Type: string .IP \[bu] 2 Required: false +.SS Metadata +.PP +HTTP metadata keys are case insensitive and are always returned in lower +case. +.PP +Here are the possible system metadata items for the http backend. +.PP +.TS +tab(@); +lw(11.1n) lw(11.1n) lw(11.1n) lw(16.6n) lw(20.3n). +T{ +Name +T}@T{ +Help +T}@T{ +Type +T}@T{ +Example +T}@T{ +Read Only +T} +_ +T{ +cache-control +T}@T{ +Cache-Control header +T}@T{ +string +T}@T{ +no-cache +T}@T{ +N +T} +T{ +content-disposition +T}@T{ +Content-Disposition header +T}@T{ +string +T}@T{ +inline +T}@T{ +N +T} +T{ +content-disposition-filename +T}@T{ +Filename retrieved from Content-Disposition header +T}@T{ +string +T}@T{ +file.txt +T}@T{ +N +T} +T{ +content-encoding +T}@T{ +Content-Encoding header +T}@T{ +string +T}@T{ +gzip +T}@T{ +N +T} +T{ +content-language +T}@T{ +Content-Language header +T}@T{ +string +T}@T{ +en-US +T}@T{ +N +T} +T{ +content-type +T}@T{ +Content-Type header +T}@T{ +string +T}@T{ +text/plain +T}@T{ +N +T} +.TE +.PP +See the metadata (https://rclone.org/docs/#metadata) docs for more info. .SS Backend commands .PP Here are the commands specific to the http backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -58783,7 +65590,7 @@ rclone backend set remote: [options] [+] This set command can be used to update the config parameters for a running http backend. .PP -Usage Examples: +Usage examples: .IP .nf \f[C] @@ -58810,18 +65617,16 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SH ImageKit .PP This is a backend for the ImageKit.io (https://imagekit.io/) storage service. 
-.SS About ImageKit .PP ImageKit.io (https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web. -.SS Accounts & Pricing .PP To use this backend, you need to create an account (https://imagekit.io/registration/) on ImageKit. @@ -59214,7 +66019,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -59449,7 +66254,7 @@ rclone sync --interactive /home/local/directory remote:item Because of Internet Archive\[aq]s architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check item\[aq]s queue at -https://catalogd.archive.org/history/item-name-here . +. Because of that, all uploads/deletes will not show up immediately and takes some time to be available. The per-item queue is enqueued to an another queue, Item Deriver Queue. @@ -59471,10 +66276,27 @@ file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone. .PP -The following are reserved by Internet Archive: - \f[V]name\f[R] - -\f[V]source\f[R] - \f[V]size\f[R] - \f[V]md5\f[R] - \f[V]crc32\f[R] - -\f[V]sha1\f[R] - \f[V]format\f[R] - \f[V]old_version\f[R] - -\f[V]viruscheck\f[R] - \f[V]summation\f[R] +The following are reserved by Internet Archive: +.IP \[bu] 2 +\f[V]name\f[R] +.IP \[bu] 2 +\f[V]source\f[R] +.IP \[bu] 2 +\f[V]size\f[R] +.IP \[bu] 2 +\f[V]md5\f[R] +.IP \[bu] 2 +\f[V]crc32\f[R] +.IP \[bu] 2 +\f[V]sha1\f[R] +.IP \[bu] 2 +\f[V]format\f[R] +.IP \[bu] 2 +\f[V]old_version\f[R] +.IP \[bu] 2 +\f[V]viruscheck\f[R] +.IP \[bu] 2 +\f[V]summation\f[R] .PP Trying to set values to these keys is ignored with a warning. Only setting \f[V]mtime\f[R] is an exception. @@ -59528,7 +66350,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -59952,110 +66774,232 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. .PP Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway. +.PP In addition to the official service at jottacloud.com (https://www.jottacloud.com/), it also provides -white-label solutions to different companies, such as: * Telia * Telia -Cloud (cloud.telia.se) * Telia Sky (sky.telia.no) * Tele2 * Tele2 Cloud -(mittcloud.tele2.se) * Onlime * Onlime Cloud Storage (onlime.dk) * -Elkjøp (with subsidiaries): * Elkjøp Cloud (cloud.elkjop.no) * -Elgiganten Sweden (cloud.elgiganten.se) * Elgiganten Denmark -(cloud.elgiganten.dk) * Giganti Cloud (cloud.gigantti.fi) * ELKO Cloud -(cloud.elko.is) -.PP -Most of the white-label versions are supported by this backend, although -may require different authentication setup - described below. +white-label solutions to different companies. 
+The following are currently supported by this backend, using a different +authentication setup as described below: +.IP \[bu] 2 +Elkjøp (with subsidiaries): +.RS 2 +.IP \[bu] 2 +Elkjøp Cloud (cloud.elkjop.no) +.IP \[bu] 2 +Elgiganten Cloud (cloud.elgiganten.dk) +.IP \[bu] 2 +Elgiganten Cloud (cloud.elgiganten.se) +.IP \[bu] 2 +ELKO Cloud (cloud.elko.is) +.IP \[bu] 2 +Gigantti Cloud (cloud.gigantti.fi) +.RE +.IP \[bu] 2 +Telia +.RS 2 +.IP \[bu] 2 +Telia Cloud (cloud.telia.se) +.IP \[bu] 2 +Telia Sky (sky.telia.no) +.RE +.IP \[bu] 2 +Tele2 +.RS 2 +.IP \[bu] 2 +Tele2 Cloud (mittcloud.tele2.se) +.RE +.IP \[bu] 2 +Onlime +.RS 2 +.IP \[bu] 2 +Onlime (onlime.dk) +.RE +.IP \[bu] 2 +MediaMarkt +.RS 2 +.IP \[bu] 2 +MediaMarkt Cloud (mediamarkt.jottacloud.com) +.IP \[bu] 2 +Let\[aq]s Go Cloud (letsgo.jotta.cloud) +.RE .PP Paths are specified as \f[V]remote:path\f[R] .PP Paths may be as deep as required, e.g. \f[V]remote:directory/subdirectory\f[R]. -.SS Authentication types +.SS Authentication .PP -Some of the whitelabel versions uses a different authentication method -than the official service, and you have to choose the correct one when -setting up the remote. -.SS Standard authentication +Authentication in Jottacloud is in general based on OAuth and OpenID +Connect (OIDC). +There are different variants to choose from, depending on which service +you are using, e.g. +a white-label service may only support one of them. +Note that there is no documentation to rely on, so the descriptions +provided here are based on observations and may not be accurate. .PP -The standard authentication method used by the official service -(jottacloud.com), as well as some of the whitelabel services, requires -you to generate a single-use personal login token from the account -security settings in the service\[aq]s web interface. -Log in to your account, go to \[dq]Settings\[dq] and then -\[dq]Security\[dq], or use the direct link presented to you by rclone -when configuring the remote: . -Scroll down to the section \[dq]Personal login token\[dq], and click the -\[dq]Generate\[dq] button. -Note that if you are using a whitelabel service you probably can\[aq]t -use the direct link, you need to find the same page in their dedicated -web interface, and also it may be in a different location than described -above. +Jottacloud uses two optional OAuth security mechanisms, referred to as +\[dq]Refresh Token Rotation\[dq] and \[dq]Automatic Reuse +Detection\[dq], which has some implications. +Access tokens normally have one hour expiry, after which they need to be +refreshed (rotated), an operation that requires the refresh token to be +supplied. +Rclone does this automatically. +This is standard OAuth. +But in Jottacloud, such a refresh operation not only creates a new +access token, but also refresh token, and invalidates the existing +refresh token, the one that was supplied. +It keeps track of the history of refresh tokens, sometimes referred to +as a token family, descending from the original refresh token that was +issued after the initial authentication. +This is used to detect any attempts at reusing old refresh tokens, and +trigger an immedate invalidation of the current refresh token, and +effectively the entire refresh token family. .PP -To access your account from multiple instances of rclone, you need to -configure each of them with a separate personal login token. -E.g. 
-you create a Jottacloud remote with rclone in one location, and copy the -configuration file to a second location where you also want to run -rclone and access the same remote. -Then you need to replace the token for one of them, using the config -reconnect (https://rclone.org/commands/rclone_config_reconnect/) -command, which requires you to generate a new personal login token and -supply as input. -If you do not do this, the token may easily end up being invalidated, -resulting in both instances failing with an error message something -along the lines of: +When the current refresh token has been invalidated, next time rclone +tries to perform a token refresh, it will fail with an error message +something along the lines of: .IP .nf \f[C] -oauth2: cannot fetch token: 400 Bad Request -Response: {\[dq]error\[dq]:\[dq]invalid_grant\[dq],\[dq]error_description\[dq]:\[dq]Stale token\[dq]} +CRITICAL: Failed to create file system for \[dq]remote:\[dq]: (...): couldn\[aq]t fetch token: invalid_grant: maybe token expired? - try refreshing with \[dq]rclone config reconnect remote:\[dq] \f[R] .fi .PP -When this happens, you need to replace the token as described above to -be able to use your remote again. +If you run rclone with verbosity level 2 (\f[V]-vv\f[R]), you will see a +debug message with an additional error description from the OAuth +response: +.IP +.nf +\f[C] +DEBUG : remote: got fatal oauth error: oauth2: \[dq]invalid_grant\[dq] \[dq]Session doesn\[aq]t have required client\[dq] +\f[R] +.fi .PP -All personal login tokens you have taken into use will be listed in the -web interface under \[dq]My logged in devices\[dq], and from the right -side of that list you can click the \[dq]X\[dq] button to revoke -individual tokens. -.SS Legacy authentication +(The error description used to be \[dq]Stale token\[dq] instead of +\[dq]Session doesn\[aq]t have required client\[dq], so you may see +references to that in older descriptions of this situation.) .PP -If you are using one of the whitelabel versions (e.g. -from Elkjøp) you may not have the option to generate a CLI token. -In this case you\[aq]ll have to use the legacy authentication. -To do this select yes when the setup asks for legacy authentication and -enter your username and password. -The rest of the setup is identical to the default setup. -.SS Telia Cloud authentication +When this happens, you need to re-authenticate to be able to use your +remote again, e.g. +using the config +reconnect (https://rclone.org/commands/rclone_config_reconnect/) command +as suggested in the error message. +This will create an entirely new refresh token (family). .PP -Similar to other whitelabel versions Telia Cloud doesn\[aq]t offer the -option of creating a CLI token, and additionally uses a separate -authentication flow where the username is generated internally. -To setup rclone to use Telia Cloud, choose Telia Cloud authentication in -the setup. -The rest of the setup is identical to the default setup. -.SS Tele2 Cloud authentication +A typical example of how you may end up in this situation, is if you +create a Jottacloud remote with rclone in one location, and then copy +the configuration file to a second location where you start using rclone +to access the same remote. +Eventually there will now be a token refresh attempt with an invalidated +token, i.e. +refresh token reuse, resulting in both instances starting to fail with +the \[dq]invalid_grant\[dq] error. 
+It is possible to copy remote configurations, but you must then replace +the token for one of them using the config +reconnect (https://rclone.org/commands/rclone_config_reconnect/) +command. .PP -As Tele2-Com Hem merger was completed this authentication can be used -for former Com Hem Cloud and Tele2 Cloud customers as no support for -creating a CLI token exists, and additionally uses a separate -authentication flow where the username is generated internally. -To setup rclone to use Tele2 Cloud, choose Tele2 Cloud authentication in -the setup. -The rest of the setup is identical to the default setup. -.SS Onlime Cloud Storage authentication +You can get some overview of your active tokens in your service\[aq]s +web user interface, if you navigate to \[dq]Settings\[dq] and then +\[dq]Security\[dq] (in which case you end up at + or similar). +Down on that page you have a section \[dq]My logged in devices\[dq]. +This contains a list of entries which seemingly represents currently +valid refresh tokens, or refresh token families. +From the right side of that list you can click a button (\[dq]X\[dq]) to +revoke (invalidate) it, which means you will still have access using an +existing access token until that expires, but you will not be able to +perform a token refresh. +Note that this entire \[dq]My logged in devices\[dq] feature seem to +behave a bit differently with different authentication variants and with +use of the different (white-label) services. +.SS Standard .PP -Onlime has sold access to Jottacloud proper, while providing localized -support to Danish Customers, but have recently set up their own hosting, -transferring their customers from Jottacloud servers to their own ones. +This is an OAuth variant designed for command-line applications. +It is primarily supported by the official service (jottacloud.com), but +may also be supported by some of the white-label services. +The information necessary to be able to perform authentication, like +domain name and endpoint to connect to, are found automatically (it is +encoded into the supplied login token, described next), so you do not +need to specify which service to configure. .PP -This, of course, necessitates using their servers for authentication, -but otherwise functionality and architecture seems equivalent to -Jottacloud. +When configuring a remote, you are asked to enter a single-use personal +login token, which you must manually generate from the account security +settings in the service\[aq]s web interface. +You do not need a web browser on the same machine like with traditional +OAuth, but need to use a web browser somewhere, and be able to be copy +the generated string into your rclone configuration session. +Log in to your service\[aq]s web user interface, navigate to +\[dq]Settings\[dq] and then \[dq]Security\[dq], or, for the official +service, use the direct link presented to you by rclone when configuring +the remote: . +Scroll down to the section \[dq]Personal login token\[dq], and click the +\[dq]Generate\[dq] button. +Copy the presented string and paste it where rclone asks for it. +Rclone will then use this to perform an initial token request, and +receive a regular OAuth token which it stores in your remote +configuration. +There will then also be a new entry in the \[dq]My logged in +devices\[dq] list in the web interface, with device name and application +name \[dq]Jottacloud CLI\[dq]. .PP -To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud -authentication in the setup. 
-The rest of the setup is identical to the default setup. +Each time a new token is created this way, i.e. +a new personal login token is generated and traded in for an OAuth +token, you get an entirely new refresh token family, with a new entry in +the \[dq]My logged in devices\[dq]. +You can create as many remotes as you want, and use multiple instances +of rclone on same or different machine, as long as you configure them +separately like this, and not get your self into the refresh token reuse +issue described above. +.SS Traditional +.PP +Jottacloud also supports a more traditional OAuth variant. +Most of the white-label services support this, and for many of them this +is the only alternative because they do not support personal login +tokens. +This method relies on pre-defined service-specific domain names and +endpoints, and rclone need you to specify which service to configure. +This also means that any changes to existing or additions of new +white-label services needs an update in the rclone backend +implementation. +.PP +When configuring a remote, you must interactively login to an OAuth +authorization web site, and a one-time authorization code is sent back +to rclone behind the scene, which it uses to request an OAuth token. +This means that you need to be on a machine with an internet-connected +web browser. +If you need it on a machine where this is not the case, then you will +have to create the configuration on a different machine and copy it from +there. +The Jottacloud backend does not support the \f[V]rclone authorize\f[R] +command. +See the remote setup docs for details. +.PP +Jottacloud exerts some form of strict session management when +authenticating using this method. +This leads to some unexpected cases of the \[dq]invalid_grant\[dq] error +described above, and effectively limits you to only use of a single +active authentication on the same machine. +I.e. +you can only create a single rclone remote, and you can\[aq]t even log +in with the service\[aq]s official desktop client while having a rclone +remote configured, or else you will eventually get all sessions +invalidated and are forced to re-authenticate. +.PP +When you have successfully authenticated, there will be an entry in the +\[dq]My logged in devices\[dq] list in the web interface representing +your session. +It will typically be listed with application name \[dq]Jottacloud for +Desktop\[dq] or similar (it depends on the white-label service +configuration). +.SS Legacy +.PP +Originally Jottacloud used an OAuth variant which required your +account\[aq]s username and password to be specified. +When Jottacloud migrated to the newer methods, some white-label versions +(those from Elkjøp) still used this legacy method for a long time. +Currently there are no known uses of this, it is still supported by +rclone, but the support will be removed in a future version. .SS Configuration .PP Here is an example of how to make a remote called \f[V]remote\f[R] with @@ -60077,7 +67021,10 @@ n) New remote s) Set configuration password q) Quit config n/s/q> n + +Enter name for new remote. name> remote + Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. @@ -60086,60 +67033,63 @@ XX / Jottacloud \[rs] (jottacloud) [snip] Storage> jottacloud + +Option client_id. +OAuth Client Id. +Leave blank normally. +Enter a value. Press Enter to leave empty. +client_id> + +Option client_secret. +OAuth Client Secret. +Leave blank normally. +Enter a value. Press Enter to leave empty. 
+client_secret> + Edit advanced config? y) Yes n) No (default) y/n> n + Option config_type. -Select authentication type. -Choose a number from below, or type in an existing string value. +Type of authentication. +Choose a number from below, or type in an existing value of type string. Press Enter for the default (standard). / Standard authentication. - 1 | Use this if you\[aq]re a normal Jottacloud user. + | This is primarily supported by the official service, but may also be + | supported by some white-label services. It is designed for command-line + 1 | applications, and you will be asked to enter a single-use personal login + | token which you must manually generate from the account security settings + | in the web interface of your service. \[rs] (standard) + / Traditional authentication. + | This is supported by the official service and all white-label services + | that rclone knows about. You will be asked which service to connect to. + 2 | It has a limitation of only a single active authentication at a time. You + | need to be on, or have access to, a machine with an internet-connected + | web browser. + \[rs] (traditional) / Legacy authentication. - 2 | This is only required for certain whitelabel versions of Jottacloud and not recommended for normal users. + 3 | This is no longer supported by any known services and not recommended + | used. You will be asked for your account\[aq]s username and password. \[rs] (legacy) - / Telia Cloud authentication. - 3 | Use this if you are using Telia Cloud. - \[rs] (telia) - / Tele2 Cloud authentication. - 4 | Use this if you are using Tele2 Cloud. - \[rs] (tele2) - / Onlime Cloud authentication. - 5 | Use this if you are using Onlime Cloud. - \[rs] (onlime) config_type> 1 + +Option config_login_token. Personal login token. -Generate here: https://www.jottacloud.com/web/secure -Login Token> +Generate it from the account security settings in the web interface of your +service, for the official service on https://www.jottacloud.com/web/secure. +Enter a value. +config_login_token> + Use a non-standard device/mountpoint? Choosing no, the default, will let you access the storage used for the archive section of the official Jottacloud client. If you instead want to access the sync or the backup section, for example, you must choose yes. y) Yes n) No (default) -y/n> y -Option config_device. -The device to use. In standard setup the built-in Jotta device is used, -which contains predefined mountpoints for archive, sync etc. All other devices -are treated as backup devices by the official Jottacloud client. You may create -a new by entering a unique name. -Choose a number from below, or type in your own string value. -Press Enter for the default (DESKTOP-3H31129). - 1 > DESKTOP-3H31129 - 2 > Jotta -config_device> 2 -Option config_mountpoint. -The mountpoint to use for the built-in device Jotta. -The standard setup is to use the Archive mountpoint. Most other mountpoints -have very limited support in rclone and should generally be avoided. -Choose a number from below, or type in an existing string value. -Press Enter for the default (Archive). - 1 > Archive - 2 > Shared - 3 > Sync -config_mountpoint> 1 +y/n> n + Configuration complete. 
Options: - type: jottacloud @@ -60159,7 +67109,8 @@ y/e/d> y \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Jottacloud .IP @@ -60654,7 +67605,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -60724,7 +67675,7 @@ You can choose to edit advanced config in order to enter your own service URL if you use an on-premise or white label Koofr instance, or choose an alternative mount instead of your primary storage. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this: .PP List directories in top level of your Koofr .IP @@ -60948,7 +67899,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -61023,7 +67974,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -61103,7 +68054,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -61858,6 +68809,9 @@ files without knowledge of the key used for encryption. This is an rclone backend for Mega which supports the file transfer features of Mega using the same client side encryption. .PP +\f[B]Note\f[R] MEGA S4 Object Storage, an S3 compatible object store, +also works with rclone and this is recommended for new projects. +.PP Paths are specified as \f[V]remote:path\f[R] .PP Paths may be as deep as required, e.g. @@ -61869,7 +68823,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -61919,7 +68873,8 @@ y/e/d> y after a regular login via the browser, otherwise attempting to use the credentials in \f[V]rclone\f[R] will fail. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Mega .IP @@ -61996,7 +68951,7 @@ access and synchronization, you may receive an error such as .IP .nf \f[C] -Failed to create file system for \[dq]my-mega-remote:\[dq]: +Failed to create file system for \[dq]my-mega-remote:\[dq]: couldn\[aq]t login: Object (typically, node or user) not found \f[R] .fi @@ -62005,8 +68960,8 @@ The diagnostic steps often recommended in the rclone forum (https://forum.rclone.org/search?q=mega) start with the \f[B]MEGAcmd\f[R] utility. Note that this refers to the official C++ command from -https://github.com/meganz/MEGAcmd and not the go language built command -from t3rm1n4l/megacmd that is no longer maintained. + and not the go language built +command from t3rm1n4l/megacmd that is no longer maintained. .PP Follow the instructions for installing MEGAcmd and try accessing your remote as they recommend. @@ -62112,9 +69067,48 @@ Env Var: RCLONE_MEGA_PASS Type: string .IP \[bu] 2 Required: true +.SS --mega-2fa +.PP +The 2FA code of your MEGA account if the account is set up with one +.PP +Properties: +.IP \[bu] 2 +Config: 2fa +.IP \[bu] 2 +Env Var: RCLONE_MEGA_2FA +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Advanced options .PP Here are the Advanced options specific to mega (Mega). 
+.SS --mega-session-id +.PP +Session (internal use only) +.PP +Properties: +.IP \[bu] 2 +Config: session_id +.IP \[bu] 2 +Env Var: RCLONE_MEGA_SESSION_ID +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false +.SS --mega-master-key +.PP +Master key (internal use only) +.PP +Properties: +.IP \[bu] 2 +Config: master_key +.IP \[bu] 2 +Env Var: RCLONE_MEGA_MASTER_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS --mega-debug .PP Output more debug from Mega. @@ -62233,7 +69227,7 @@ too if you want to: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -62305,28 +69299,40 @@ too, e.g. If you have a CP code you can use that as the folder after the domain such as //. .PP -For example, this is commonly configured with or without a CP code: * +For example, this is commonly configured with or without a CP code: +.IP \[bu] 2 \f[B]With a CP code\f[R]. -\f[V][your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/\f[R] * +\f[V][your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/\f[R] +.IP \[bu] 2 \f[B]Without a CP code\f[R]. \f[V][your-domain-prefix]-nsu.akamaihd.net\f[R] .PP -See all buckets rclone lsd remote: The initial setup for Netstorage -involves getting an account and secret. +See all buckets +.IP +.nf +\f[C] +rclone lsd remote: +\f[R] +.fi +.PP +The initial setup for Netstorage involves getting an account and secret. Use \f[V]rclone config\f[R] to walk you through the setup process. .SS Configuration .PP Here\[aq]s an example of how to make a remote called \f[V]ns1\f[R]. .IP "1." 3 To begin the interactive configuration process, enter this command: +.RS 4 .IP .nf \f[C] rclone config \f[R] .fi +.RE .IP "2." 3 Type \f[V]n\f[R] to create a new remote. +.RS 4 .IP .nf \f[C] @@ -62336,16 +69342,20 @@ q) Quit config e/n/d/q> n \f[R] .fi +.RE .IP "3." 3 For this example, enter \f[V]ns1\f[R] when you reach the name> prompt. +.RS 4 .IP .nf \f[C] name> ns1 \f[R] .fi +.RE .IP "4." 3 Enter \f[V]netstorage\f[R] as the type of storage to configure. +.RS 4 .IP .nf \f[C] @@ -62357,10 +69367,12 @@ XX / NetStorage Storage> netstorage \f[R] .fi +.RE .IP "5." 3 Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes. +.RS 4 .IP .nf \f[C] @@ -62373,9 +69385,11 @@ Choose a number from below, or type in your own value protocol> 1 \f[R] .fi +.RE .IP "6." 3 Specify your NetStorage host, CP code, and any necessary content paths using this format: \f[V]///\f[R] +.RS 4 .IP .nf \f[C] @@ -62383,8 +69397,10 @@ Enter a string value. Press Enter for the default (\[dq]\[dq]). host> baseball-nsu.akamaihd.net/123456/content/ \f[R] .fi +.RE .IP "7." 3 Set the netstorage account name +.RS 4 .IP .nf \f[C] @@ -62392,6 +69408,7 @@ Enter a string value. Press Enter for the default (\[dq]\[dq]). account> username \f[R] .fi +.RE .IP "8." 3 Set the Netstorage account secret/G2O key which will be used for authentication purposes. @@ -62399,6 +69416,7 @@ Select the \f[V]y\f[R] option to set your own password then enter your secret. Note: The secret is stored in the \f[V]rclone.conf\f[R] file with hex-encoded encryption. +.RS 4 .IP .nf \f[C] @@ -62411,8 +69429,10 @@ Confirm the password: password: \f[R] .fi +.RE .IP "9." 3 View the summary and confirm your remote configuration. +.RS 4 .IP .nf \f[C] @@ -62429,12 +69449,13 @@ d) Delete this remote y/e/d> y \f[R] .fi +.RE .PP This remote is called \f[V]ns1\f[R] and can now be used. 
.SS Example operations .PP Get started with rclone and NetStorage with these examples. -For additional rclone commands, visit https://rclone.org/commands/. +For additional rclone commands, visit . .SS See contents of a directory in your project .IP .nf @@ -62463,7 +69484,7 @@ rclone copy notes.txt ns1:/974012/testing/ rclone delete ns1:/974012/testing/notes.txt \f[R] .fi -.SS Move or copy content between CP codes. +.SS Move or copy content between CP codes .PP Your credentials must have access to two CP codes on the same remote. You can\[aq]t perform operations between different remotes. @@ -62680,7 +69701,7 @@ Required: false .PP Here are the commands specific to the netstorage backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -62697,7 +69718,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS du .PP -Return disk usage information for a specified directory +Return disk usage information for a specified directory. .IP .nf \f[C] @@ -62721,7 +69742,14 @@ The desired path location (including applicable sub-directories) ending in the object that will be the target of the symlink (for example, /links/mylink). Include the file extension for the object, if applicable. -\f[V]rclone backend symlink \f[R] +.PP +Usage example: +.IP +.nf +\f[C] +rclone backend symlink +\f[R] +.fi .SH Microsoft Azure Blob Storage .PP Paths are specified as \f[V]remote:container\f[R] (or \f[V]remote:\f[R] @@ -62737,7 +69765,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -62980,13 +70008,13 @@ authenticate to Workload Identity .RS 4 .IP \[bu] 2 -\f[V]AZURE_TENANT_ID\f[R]: Tenant to authenticate in. +\f[V]AZURE_TENANT_ID\f[R]: Tenant to authenticate in .IP \[bu] 2 \f[V]AZURE_CLIENT_ID\f[R]: Client ID of the application the user will -authenticate to. +authenticate to .IP \[bu] 2 \f[V]AZURE_FEDERATED_TOKEN_FILE\f[R]: Path to projected service account -token file. +token file .IP \[bu] 2 \f[V]AZURE_AUTHORITY_HOST\f[R]: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). @@ -64022,7 +71050,7 @@ Content-Type X-MS-Tags .PP Eg \f[V]--header-upload \[dq]Content-Type: text/potato\[dq]\f[R] or -\f[V]--header-upload \[dq]X-MS-Tags: foo=bar\[dq]\f[R] +\f[V]--header-upload \[dq]X-MS-Tags: foo=bar\[dq]\f[R]. .SS Limitations .PP MD5 sums are only uploaded with chunked files if the source has an MD5 @@ -64037,7 +71065,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SS Azure Storage Emulator Support .PP You can run rclone with the storage emulator (usually @@ -64071,7 +71099,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -64374,13 +71402,13 @@ authenticate to Workload Identity .RS 4 .IP \[bu] 2 -\f[V]AZURE_TENANT_ID\f[R]: Tenant to authenticate in. +\f[V]AZURE_TENANT_ID\f[R]: Tenant to authenticate in .IP \[bu] 2 \f[V]AZURE_CLIENT_ID\f[R]: Client ID of the application the user will -authenticate to. +authenticate to .IP \[bu] 2 \f[V]AZURE_FEDERATED_TOKEN_FILE\f[R]: Path to projected service account -token file. +token file .IP \[bu] 2 \f[V]AZURE_AUTHORITY_HOST\f[R]: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). 
@@ -65087,7 +72115,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -65171,7 +72199,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from Microsoft. @@ -65180,7 +72209,8 @@ get back the verification code. This is on \f[V]http://127.0.0.1:53682/\f[R] and this it may require you to unblock it temporarily if you are running a host firewall. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your OneDrive .IP @@ -65220,7 +72250,7 @@ For example, you might see throttling. To create your own Client ID, please follow these steps: .IP "1." 3 Open -https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/\[ti]/Overview + and then under the \f[V]Add\f[R] menu click \f[V]App registration\f[R]. .RS 4 .IP \[bu] 2 @@ -65230,7 +72260,7 @@ credit card for identity verification. .RE .IP "2." 3 Enter a name for your app, choose account type -\f[V]Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)\f[R], +\f[V]Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)\f[R], select \f[V]Web\f[R] in \f[V]Redirect URI\f[R], then type (do not copy and paste) \f[V]http://localhost:53682/\f[R] and click Register. Copy and keep the \f[V]Application (client) ID\f[R] under the app name @@ -65330,6 +72360,20 @@ In particular the \[dq]onedrive\[dq] option does not work. You can use the \[dq]sharepoint\[dq] option or if that does not find the correct drive ID type it in manually with the \[dq]driveid\[dq] option. .PP +To back up any user\[aq]s data using this flow, grant your Azure AD +application the necessary Microsoft Graph \f[I]Application +permissions\f[R] (such as \f[V]Files.Read.All\f[R], +\f[V]Sites.Read.All\f[R] and/or \f[V]Sites.Selected\f[R]). +With these permissions, rclone can access drives across the tenant, but +it needs to know \f[I]which user or drive\f[R] you want. +Supply a specific \f[V]drive_id\f[R] corresponding to that user\[aq]s +OneDrive, or a SharePoint site ID for SharePoint libraries. +You can obtain a user\[aq]s drive ID using Microsoft Graph (e.g. +\f[V]/users/{userUPN}/drive\f[R]) and then configure it in rclone. +Once the correct drive ID is provided, rclone will back up that +user\[aq]s data using the app-only token without requiring their +credentials. +.PP \f[B]NOTE\f[R] Assigning permissions directly to the application means that anyone with the \f[I]Client ID\f[R] and \f[I]Client Secret\f[R] can access your OneDrive files. @@ -66638,6 +73682,7 @@ click to create the link, this creates the link of the format but also changes the permissions so you your admin user has access. .IP "2." 
3 Then in powershell run the following commands: +.RS 4 .IP .nf \f[C] @@ -66650,9 +73695,10 @@ Get-MgUserDefaultDrive -UserId \[aq]{emailaddress}\[aq] # This will give you output of the format: # Name Id DriveType CreatedDateTime # ---- -- --------- --------------- -# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm +# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm \f[R] .fi +.RE .IP "3." 3 Then in rclone add a onedrive remote type, and use the \f[V]Type in driveID\f[R] with the DriveID you got in the previous step. @@ -66878,7 +73924,8 @@ your account. You can\[aq]t do much about it, maybe write an email to your admins. .PP However, there are other ways to interact with your OneDrive account. -Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint +Have a look at the WebDAV backend: + .SS invalid_grant (AADSTS50076) .IP .nf @@ -66998,7 +74045,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -67353,27 +74400,31 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SH Oracle Object Storage +.PP +Object Storage provided by the Oracle Cloud Infrastructure (OCI). +Read more at : .IP \[bu] 2 Oracle Object Storage -Overview (https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) +Overview (https://docs.oracle.com/iaas/Content/Object/Concepts/objectstorageoverview.htm) .IP \[bu] 2 Oracle Object Storage FAQ (https://www.oracle.com/cloud/storage/object-storage/faq/) -.IP \[bu] 2 -Oracle Object Storage -Limits (https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) .PP Paths are specified as \f[V]remote:bucket\f[R] (or \f[V]remote:\f[R] for -the \f[V]lsd\f[R] command.) +the \f[V]lsd\f[R] command). You may put subdirectories in too, e.g. \f[V]remote:bucket/path/to/dir\f[R]. .PP Sample command to transfer local artifacts to remote:bucket in oracle object storage: -.PP -\f[V]rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv\f[R] +.IP +.nf +\f[C] +rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv +\f[R] +.fi .SS Configuration .PP Here is an example of making an oracle object storage configuration. @@ -67384,7 +74435,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -67554,12 +74605,19 @@ config_profile = Default \f[R] .fi .PP -Advantages: - One can use this method from any server within OCI or -on-premises or from other cloud provider. +Advantages: +.IP \[bu] 2 +One can use this method from any server within OCI or on-premises or +from other cloud provider. 
.PP -Considerations: - you need to configure user\[cq]s privileges / policy -to allow access to object storage - Overhead of managing users and keys. -- If the user is deleted, the config file will no longer work and may +Considerations: +.IP \[bu] 2 +you need to configure user\[cq]s privileges / policy to allow access to +object storage +.IP \[bu] 2 +Overhead of managing users and keys. +.IP \[bu] 2 +If the user is deleted, the config file will no longer work and may cause automation regressions that use the user\[aq]s credentials. .SS Instance Principal .PP @@ -68408,7 +75466,7 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. .PP Here are the commands specific to the oracleobjectstorage backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -68425,7 +75483,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS rename .PP -change the name of an object +change the name of an object. .IP .nf \f[C] @@ -68435,7 +75493,7 @@ rclone backend rename remote: [options] [+] .PP This command can be used to rename a object. .PP -Usage Examples: +Usage example: .IP .nf \f[C] @@ -68444,7 +75502,7 @@ rclone backend rename oos:bucket relative-object-path-under-bucket object-new-na .fi .SS list-multipart-uploads .PP -List the unfinished multipart uploads +List the unfinished multipart uploads. .IP .nf \f[C] @@ -68453,6 +75511,8 @@ rclone backend list-multipart-uploads remote: [options] [+] .fi .PP This command lists the unfinished multipart uploads in JSON format. +.PP +Usage example: .IP .nf \f[C] @@ -68469,24 +75529,23 @@ bucket or with a bucket and path. .nf \f[C] { - \[dq]test-bucket\[dq]: [ - { - \[dq]namespace\[dq]: \[dq]test-namespace\[dq], - \[dq]bucket\[dq]: \[dq]test-bucket\[dq], - \[dq]object\[dq]: \[dq]600m.bin\[dq], - \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq], - \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq], - \[dq]storageTier\[dq]: \[dq]Standard\[dq] - } + \[dq]test-bucket\[dq]: [ + { + \[dq]namespace\[dq]: \[dq]test-namespace\[dq], + \[dq]bucket\[dq]: \[dq]test-bucket\[dq], + \[dq]object\[dq]: \[dq]600m.bin\[dq], + \[dq]uploadId\[dq]: \[dq]51dd8114-52a4-b2f2-c42f-5291f05eb3c8\[dq], + \[dq]timeCreated\[dq]: \[dq]2022-07-29T06:21:16.595Z\[dq], + \[dq]storageTier\[dq]: \[dq]Standard\[dq] + } ] -\f[R] -.fi -.SS cleanup -.PP +} + +### cleanup + Remove unfinished multipart uploads. -.IP -.nf -\f[C] + +\[ga]\[ga]\[ga]console rclone backend cleanup remote: [options] [+] \f[R] .fi @@ -68496,6 +75555,8 @@ max-age which defaults to 24 hours. .PP Note that you can use --interactive/-i or --dry-run with this command to see what it would do. +.PP +Usage examples: .IP .nf \f[C] @@ -68508,10 +75569,10 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. .PP Options: .IP \[bu] 2 -\[dq]max-age\[dq]: Max age of upload to delete +\[dq]max-age\[dq]: Max age of upload to delete. .SS restore .PP -Restore objects from Archive to Standard storage +Restore objects from Archive to Standard storage. .IP .nf \f[C] @@ -68521,11 +75582,11 @@ rclone backend restore remote: [options] [+] .PP This command can be used to restore one or more objects from Archive to Standard storage. 
+.PP +Usage examples: .IP .nf \f[C] -Usage Examples: - rclone backend restore oos:bucket/path/to/directory -o hours=HOURS rclone backend restore oos:bucket -o hours=HOURS \f[R] @@ -68540,16 +75601,21 @@ rclone --interactive backend restore --include \[dq]*.txt\[dq] oos:bucket/path - \f[R] .fi .PP -All the objects shown will be marked for restore, then +All the objects shown will be marked for restore, then: .IP .nf \f[C] rclone backend restore --include \[dq]*.txt\[dq] oos:bucket/path -o hours=72 - +\f[R] +.fi +.PP It returns a list of status dictionaries with Object Name and Status -keys. The Status will be \[dq]RESTORED\[dq]\[dq] if it was successful or an error message -if not. - +keys. +The Status will be \[dq]RESTORED\[dq]\[dq] if it was successful or an +error message if not. +.IP +.nf +\f[C] [ { \[dq]Object\[dq]: \[dq]test.txt\[dq] @@ -68591,7 +75657,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote r) Rename remote c) Copy remote @@ -68987,7 +76053,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SH Quatrix .PP Quatrix by Maytech is Quatrix Secure Compliant File Sharing | @@ -69002,10 +76068,10 @@ The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user\[aq]s profile at \f[V]https:///profile/api-keys\f[R] or with the help of the API - -https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create. +. .PP -See complete Swagger documentation for Quatrix - -https://docs.maytech.net/quatrix/quatrix-api/api-explorer +See complete Swagger documentation for +Quatrix (https://docs.maytech.net/quatrix/quatrix-api/api-explorer). .SS Configuration .PP Here is an example of how to make a remote called \f[V]remote\f[R]. @@ -69013,7 +76079,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -69052,7 +76118,8 @@ y/e/d> y \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Quatrix .IP @@ -69346,27 +76413,32 @@ it\[aq]s safe to leave the API password blank (the API URL will be However, if you want to access Sia daemon running on another node, for example due to memory constraints or because you want to share single daemon between several rclone and Sia-UI instances, you\[aq]ll need to -make a few more provisions: - Ensure you have \f[I]Sia daemon\f[R] -installed directly or in a docker +make a few more provisions: +.IP \[bu] 2 +Ensure you have \f[I]Sia daemon\f[R] installed directly or in a docker container (https://github.com/SiaFoundation/siad/pkgs/container/siad) because Sia-UI does not support this mode natively. -- Run it on externally accessible port, for example provide +.IP \[bu] 2 +Run it on externally accessible port, for example provide \f[V]--api-addr :9980\f[R] and \f[V]--disable-api-security\f[R] arguments on the daemon command line. -- Enforce API password for the \f[V]siad\f[R] daemon via environment +.IP \[bu] 2 +Enforce API password for the \f[V]siad\f[R] daemon via environment variable \f[V]SIA_API_PASSWORD\f[R] or text file named \f[V]apipassword\f[R] in the daemon directory. 
-- Set rclone backend option \f[V]api_password\f[R] taking it from above +.IP \[bu] 2 +Set rclone backend option \f[V]api_password\f[R] taking it from above locations. .PP -Notes: 1. +Notes: +.IP "1." 3 If your wallet is locked, rclone cannot unlock it automatically. You should either unlock it in advance by using Sia-UI or via command line \f[V]siac wallet unlock\f[R]. Alternatively you can make \f[V]siad\f[R] unlock your wallet automatically upon startup by running it with environment variable \f[V]SIA_WALLET_PASSWORD\f[R]. -2. +.IP "2." 3 If \f[V]siad\f[R] cannot find the \f[V]SIA_API_PASSWORD\f[R] variable or the \f[V]apipassword\f[R] file in the \f[V]SIA_DIR\f[R] directory, it will generate a random password and store in the text file named @@ -69375,7 +76447,7 @@ or \f[V]C:\[rs]Users\[rs]YOUR_HOME\[rs]AppData\[rs]Local\[rs]Sia\[rs]apipassword\f[R] on Windows. Remember this when you configure password in rclone. -3. +.IP "3." 3 The only way to use \f[V]siad\f[R] without API password is to run it \f[B]on localhost\f[R] with command line argument \f[V]--authorize-api=false\f[R], but this is insecure and \f[B]strongly @@ -69388,7 +76460,7 @@ First, run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -69445,28 +76517,34 @@ y/e/d> y Once configured, you can then use \f[V]rclone\f[R] like this: .IP \[bu] 2 List directories in top level of your Sia storage +.RS 2 .IP .nf \f[C] rclone lsd mySia: \f[R] .fi +.RE .IP \[bu] 2 List all the files in your Sia storage +.RS 2 .IP .nf \f[C] rclone ls mySia: \f[R] .fi +.RE .IP \[bu] 2 Upload a local directory to the Sia directory called \f[I]backup\f[R] +.RS 2 .IP .nf \f[C] rclone copy /home/source mySia:backup \f[R] .fi +.RE .SS Standard options .PP Here are the Standard options specific to sia (Sia Decentralized Cloud). @@ -69609,7 +76687,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -70482,8 +77560,12 @@ To retrieve objects use \f[V]rclone copy\f[R] as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following: -.PP -\f[V]2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)\f[R] +.IP +.nf +\f[C] +2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s) +\f[R] +.fi .PP Rclone will wait for the time specified then retry the copy. .SH pCloud @@ -70503,7 +77585,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -70559,7 +77641,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note if you are using remote config with rclone authorize while your pcloud server is the EU region, you will need to set the hostname in @@ -70572,7 +77655,8 @@ get back the verification code. This is on \f[V]http://127.0.0.1:53682/\f[R] and this it may require you to unblock it temporarily if you are running a host firewall. 
.PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your pCloud .IP @@ -70664,13 +77748,28 @@ hierarchy. .PP In order to do this you will have to find the \f[V]Folder ID\f[R] of the directory you wish rclone to display. -This will be the \f[V]folder\f[R] field of the URL when you open the -relevant folder in the pCloud web interface. +This can be accomplished by executing the \f[V]rclone lsf\f[R] command +using a basic configuration setup that does not include the +\f[V]root_folder_id\f[R] parameter. .PP -So if the folder you want rclone to use has a URL which looks like -\f[V]https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid\f[R] -in the browser, then you use \f[V]5xxxxxxxx8\f[R] as the -\f[V]root_folder_id\f[R] in the config. +The command will enumerate available directories, allowing you to locate +the appropriate Folder ID for subsequent use. +.PP +Example: +.IP +.nf +\f[C] +$ rclone lsf --dirs-only -Fip --csv TestPcloud: +dxxxxxxxx2,My Music/ +dxxxxxxxx3,My Pictures/ +dxxxxxxxx4,My Videos/ +\f[R] +.fi +.PP +So if the folder you want rclone to use your is \[dq]My Music/\[dq], +then use the returned id from \f[V]rclone lsf\f[R] command (ex. +\f[V]dxxxxxxxx2\f[R]) as the \f[V]root_folder_id\f[R] variable value in +the config file. .SS Standard options .PP Here are the Standard options specific to pcloud (Pcloud). @@ -70892,7 +77991,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -71203,7 +78302,7 @@ Required: false .PP Here are the commands specific to the pikpak backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -71220,7 +78319,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS addurl .PP -Add offline download task for url +Add offline download task for url. .IP .nf \f[C] @@ -71230,7 +78329,7 @@ rclone backend addurl remote: [options] [+] .PP This command adds offline download task for url. .PP -Usage: +Usage example: .IP .nf \f[C] @@ -71243,7 +78342,7 @@ If \[aq]dirpath\[aq] is invalid, download will fallback to default \[aq]My Pack\[aq] folder. .SS decompress .PP -Request decompress of a file/files in a folder +Request decompress of a file/files in a folder. .IP .nf \f[C] @@ -71253,7 +78352,7 @@ rclone backend decompress remote: [options] [+] .PP This command requests decompress of file/files in a folder. .PP -Usage: +Usage examples: .IP .nf \f[C] @@ -71306,7 +78405,7 @@ To use the personal filesystem you will need a pixeldrain account (https://pixeldrain.com/register) and either the Prepaid plan or one of the Patreon-based subscriptions. After registering and subscribing, your personal filesystem will be -available at this link: https://pixeldrain.com/d/me. +available at this link: . .PP Go to the API keys page (https://pixeldrain.com/user/api_keys) on your account and generate a new API key for rclone. @@ -71317,7 +78416,7 @@ Example: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote d) Delete remote c) Copy remote @@ -71538,7 +78637,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -71587,7 +78686,8 @@ y/e/d> .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. 
+set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from premiumize.me. @@ -71596,7 +78696,8 @@ get back the verification code. This is on \f[V]http://127.0.0.1:53682/\f[R] and this it may require you to unblock it temporarily if you are running a host firewall. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your premiumize.me .IP @@ -71845,7 +78946,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -71899,7 +79000,8 @@ y/e/d> y already generated after a regular login via the browser, otherwise attempting to use the credentials in \f[V]rclone\f[R] will fail. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Proton Drive .IP @@ -72002,6 +79104,28 @@ Env Var: RCLONE_PROTONDRIVE_2FA Type: string .IP \[bu] 2 Required: false +.SS --protondrive-otp-secret-key +.PP +The OTP secret key +.PP +The value can also be provided with +--protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 +.PP +The OTP secret key of your proton drive account if the account is set up +with two-factor authentication +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). +.PP +Properties: +.IP \[bu] 2 +Config: otp_secret_key +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Advanced options .PP Here are the Advanced options specific to protondrive (Proton Drive). @@ -72246,7 +79370,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -72309,7 +79433,8 @@ e/n/d/r/c/s/q> q .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from put.io if using web browser to automatically @@ -72541,7 +79666,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -72595,7 +79720,8 @@ y/e/d> y already generated after a regular login via the browser, otherwise attempting to use the credentials in \f[V]rclone\f[R] will fail. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Proton Drive .IP @@ -72698,6 +79824,28 @@ Env Var: RCLONE_PROTONDRIVE_2FA Type: string .IP \[bu] 2 Required: false +.SS --protondrive-otp-secret-key +.PP +The OTP secret key +.PP +The value can also be provided with +--protondrive-otp-secret-key=ABCDEFGHIJKLMNOPQRSTUVWXYZ234567 +.PP +The OTP secret key of your proton drive account if the account is set up +with two-factor authentication +.PP +\f[B]NB\f[R] Input to this must be obscured - see rclone +obscure (https://rclone.org/commands/rclone_obscure/). 
+.PP +Properties: +.IP \[bu] 2 +Config: otp_secret_key +.IP \[bu] 2 +Env Var: RCLONE_PROTONDRIVE_OTP_SECRET_KEY +.IP \[bu] 2 +Type: string +.IP \[bu] 2 +Required: false .SS Advanced options .PP Here are the Advanced options specific to protondrive (Proton Drive). @@ -72928,21 +80076,29 @@ official documentation available. .SH Seafile .PP This is a backend for the Seafile (https://www.seafile.com/) storage -service: - It works with both the free community edition or the -professional edition. -- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported. -- Encrypted libraries are also supported. -- It supports 2FA enabled users - Using a Library API Token is -\f[B]not\f[R] supported +service: +.IP \[bu] 2 +It works with both the free community edition or the professional +edition. +.IP \[bu] 2 +Seafile versions 6.x, 7.x, 8.x and 9.x are all supported. +.IP \[bu] 2 +Encrypted libraries are also supported. +.IP \[bu] 2 +It supports 2FA enabled users +.IP \[bu] 2 +Using a Library API Token is \f[B]not\f[R] supported .SS Configuration .PP -There are two distinct modes you can setup your remote: - you point your -remote to the \f[B]root of the server\f[R], meaning you don\[aq]t -specify a library during the configuration: Paths are specified as -\f[V]remote:library\f[R]. +There are two distinct modes you can setup your remote: +.IP \[bu] 2 +you point your remote to the \f[B]root of the server\f[R], meaning you +don\[aq]t specify a library during the configuration: Paths are +specified as \f[V]remote:library\f[R]. You may put subdirectories in too, e.g. \f[V]remote:library/path/to/dir\f[R]. -- you point your remote to a specific library during the configuration: +.IP \[bu] 2 +you point your remote to a specific library during the configuration: Paths are specified as \f[V]remote:path/to/dir\f[R]. \f[B]This is the recommended mode when using encrypted libraries\f[R]. (\f[I]This mode is possibly slightly faster than the root mode\f[R]) @@ -72964,7 +80120,7 @@ username) and your password. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -73075,7 +80231,7 @@ attempt to authenticate you: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -73241,7 +80397,7 @@ They can either be for a file or a directory: .IP .nf \f[C] -rclone link seafile:seafile-tutorial.doc +$ rclone link seafile:seafile-tutorial.doc http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/ \f[R] .fi @@ -73250,7 +80406,7 @@ or if run on a directory you will get: .IP .nf \f[C] -rclone link seafile:dir +$ rclone link seafile:dir http://my.seafile.server/d/9ea2455f6f55478bbb0d/ \f[R] .fi @@ -73261,9 +80417,15 @@ you will get the exact same link. .SS Compatibility .PP It has been actively developed using the seafile docker -image (https://github.com/haiwen/seafile-docker) of these versions: - -6.3.4 community edition - 7.0.5 community edition - 7.1.3 community -edition - 9.0.10 community edition +image (https://github.com/haiwen/seafile-docker) of these versions: +.IP \[bu] 2 +6.3.4 community edition +.IP \[bu] 2 +7.0.5 community edition +.IP \[bu] 2 +7.1.3 community edition +.IP \[bu] 2 +9.0.10 community edition .PP Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven\[aq]t been tested and might not work @@ -73479,7 +80641,7 @@ This will guide you through an interactive setup process. 
.IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -73596,7 +80758,7 @@ The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line (\[aq]\[aq] or \[aq]\[aq]) separating lines. -i.e. +I.e. .IP .nf \f[C] @@ -75016,7 +82178,7 @@ On smbd, it\[aq]s the section title in \f[V]smb.conf\f[R] (usually in You can find shares by querying the root if you\[aq]re unsure (e.g. \f[V]rclone lsd remote:\f[R]). .PP -You can\[aq]t access to the shared printers from rclone, obviously. +You can\[aq]t access the shared printers from rclone, obviously. .PP You can\[aq]t use Anonymous access for logging in. You have to use the \f[V]guest\f[R] user with an empty password instead. @@ -75043,7 +82205,7 @@ This will guide you through an interactive setup process. .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -75497,9 +82659,11 @@ without download, as checksum metadata is not calculated during upload .RE .SS Configuration .PP -To make a new Storj configuration you need one of the following: * +To make a new Storj configuration you need one of the following: +.IP \[bu] 2 Access Grant that someone else shared with you. -* API +.IP \[bu] 2 +API Key (https://documentation.storj.io/getting-started/uploading-your-first-object/create-an-api-key) of a Storj project you are a member of. .PP @@ -75508,7 +82672,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -75517,7 +82681,7 @@ This will guide you through an interactive setup process: .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -75560,7 +82724,7 @@ y/e/d> y .IP .nf \f[C] -No remotes found, make a new one? +No remotes found, make a new one\[rs]? n) New remote s) Set configuration password q) Quit config @@ -75838,7 +83002,8 @@ this folder. .IP .nf \f[C] -rclone ls remote:bucket/path/to/dir/ +$ rclone ls remote:bucket +/path/to/dir/ \f[R] .fi .PP @@ -75946,7 +83111,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SS Known issues .PP If you get errors like \f[V]too many open files\f[R] this usually @@ -75987,7 +83152,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -76050,7 +83215,8 @@ y/e/d> y Note that the config asks for your email and password but doesn\[aq]t store them, it only uses them to get the initial token. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories (sync folders) in top level of your SugarSync .IP @@ -76299,7 +83465,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). 
.SH Uloz.to .PP Paths are specified as \f[V]remote:path\f[R] @@ -76316,7 +83482,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -76374,7 +83540,8 @@ y/e/d> y \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List folders in root level folder: .IP @@ -76619,7 +83786,7 @@ of an rclone union remote. .PP See List of backends that do not support rclone about (https://rclone.org/overview/#optional-features) and rclone -about (https://rclone.org/commands/rclone_about/) +about (https://rclone.org/commands/rclone_about/). .SH Uptobox .PP This is a Backend for Uptobox file storage service. @@ -76634,7 +83801,7 @@ Paths may be as deep as required, e.g. .PP To configure an Uptobox backend you\[aq]ll need your personal api token. You\[aq]ll find it in your account -settings (https://uptobox.com/my_account) +settings (https://uptobox.com/my_account). .PP Here is an example of how to make a remote called \f[V]remote\f[R] with the default setup. @@ -76691,11 +83858,12 @@ api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx y) Yes this is OK (default) e) Edit this remote d) Delete this remote -y/e/d> +y/e/d> \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your Uptobox .IP @@ -76871,7 +84039,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -76935,7 +84103,7 @@ e/n/d/r/c/s/q> q \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this: .PP List directories in top level in \f[V]remote1:dir1\f[R], \f[V]remote2:dir2\f[R] and \f[V]remote3:dir3\f[R] @@ -77369,7 +84537,7 @@ First run: .IP .nf \f[C] - rclone config +rclone config \f[R] .fi .PP @@ -77442,7 +84610,8 @@ y/e/d> y \f[R] .fi .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP List directories in top level of your WebDAV .IP @@ -77874,10 +85043,10 @@ navigate to the desired directory in your browser to get the URL, then strip everything after the name of the opened directory. .PP Example: If the URL is: -https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx + .PP The configuration to use would be: -https://example.sharepoint.com/sites/12345/Documents + .PP Set the \f[V]vendor\f[R] to \f[V]sharepoint-ntlm\f[R]. .PP @@ -78072,7 +85241,8 @@ y/e/d> y .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Note that rclone runs a webserver on your local machine to collect the token as returned from Yandex Disk. @@ -78081,7 +85251,8 @@ get back the verification code. This is on \f[V]http://127.0.0.1:53682/\f[R] and this it may require you to unblock it temporarily if you are running a host firewall. 
.PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP See top level directories .IP @@ -78403,7 +85574,8 @@ y/e/d> .fi .PP See the remote setup docs (https://rclone.org/remote_setup/) for how to -set it up on a machine with no Internet browser available. +set it up on a machine without an internet-connected web browser +available. .PP Rclone runs a webserver on your local computer to collect the authorization token from Zoho Workdrive. @@ -78413,7 +85585,8 @@ The webserver runs on \f[V]http://127.0.0.1:53682/\f[R]. If local port \f[V]53682\f[R] is protected by a firewall you may need to temporarily unblock the firewall to complete authorization. .PP -Once configured you can then use \f[V]rclone\f[R] like this, +Once configured you can then use \f[V]rclone\f[R] like this (replace +\f[V]remote\f[R] with the name you gave your remote): .PP See top level directories .IP @@ -79243,7 +86416,7 @@ Copying the entire directory with \[aq]-l\[aq] .IP .nf \f[C] -$ rclone copy -l /tmp/a/ remote:/tmp/a/ +rclone copy -l /tmp/a/ remote:/tmp/a/ \f[R] .fi .PP @@ -79339,7 +86512,7 @@ root .PP Using \f[V]rclone --one-file-system copy root remote:\f[R] will only copy \f[V]file1\f[R] and \f[V]file2\f[R]. -Eg +E.g. .IP .nf \f[C] @@ -79435,6 +86608,22 @@ Env Var: RCLONE_LOCAL_SKIP_LINKS Type: bool .IP \[bu] 2 Default: false +.SS --skip-specials +.PP +Don\[aq]t warn about skipped pipes, sockets and device objects. +.PP +This flag disables warning messages on skipped pipes, sockets and device +objects, as you explicitly acknowledge that they should be skipped. +.PP +Properties: +.IP \[bu] 2 +Config: skip_specials +.IP \[bu] 2 +Env Var: RCLONE_LOCAL_SKIP_SPECIALS +.IP \[bu] 2 +Type: bool +.IP \[bu] 2 +Default: false .SS --local-zero-size-links .PP Assume the Stat size of links is zero (and read them instead) @@ -79892,7 +87081,7 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info. .PP Here are the commands specific to the local backend. .PP -Run them with +Run them with: .IP .nf \f[C] @@ -79909,7 +87098,7 @@ These can be run on a running backend using the rc command backend/command (https://rclone.org/rc/#backend-command). .SS noop .PP -A null operation for testing backend commands +A null operation for testing backend commands. .IP .nf \f[C] @@ -79922,10 +87111,474 @@ output. .PP Options: .IP \[bu] 2 -\[dq]echo\[dq]: echo the input arguments +\[dq]echo\[dq]: Echo the input arguments. .IP \[bu] 2 -\[dq]error\[dq]: return an error based on option value +\[dq]error\[dq]: Return an error based on option value. .SH Changelog +.SS v1.72.0 - 2025-11-21 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0) +.IP \[bu] 2 +New backends +.RS 2 +.IP \[bu] 2 +Archive backend to read archives on cloud storage. 
+(Nick Craig-Wood) +.RE +.IP \[bu] 2 +New S3 providers +.RS 2 +.IP \[bu] 2 +Cubbit Object Storage (https://rclone.org/s3/#Cubbit) (Marco Ferretti) +.IP \[bu] 2 +FileLu S5 Object Storage (https://rclone.org/s3/#filelu-s5) +(kingston125) +.IP \[bu] 2 +Hetzner Object Storage (https://rclone.org/s3/#hetzner) (spiffytech) +.IP \[bu] 2 +Intercolo Object Storage (https://rclone.org/s3/#intercolo) (Robin Rolf) +.IP \[bu] 2 +Rabata S3-compatible secure cloud +storage (https://rclone.org/s3/#Rabata) (dougal) +.IP \[bu] 2 +Servercore Object Storage (https://rclone.org/s3/#servercore) (dougal) +.IP \[bu] 2 +SpectraLogic (https://rclone.org/s3/#spectralogic) (dougal) +.RE +.IP \[bu] 2 +New commands +.RS 2 +.IP \[bu] 2 +rclone archive (https://rclone.org/commands/rclone_archive/): command to +create and read archive files (Fawzib Rojas) +.IP \[bu] 2 +rclone config +string (https://rclone.org/commands/rclone_config_string/): for making +connection strings (Nick Craig-Wood) +.IP \[bu] 2 +rclone test speed (https://rclone.org/commands/rclone_test_speed/): Add +command to test a specified remotes speed (dougal) +.RE +.IP \[bu] 2 +New Features +.RS 2 +.IP \[bu] 2 +backends: many backends have has a paged listing (\f[V]ListP\f[R]) +interface added +.RS 2 +.IP \[bu] 2 +this enables progress when listing large directories and reduced memory +usage +.RE +.IP \[bu] 2 +build +.RS 2 +.IP \[bu] 2 +Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 +(dependabot[bot]) +.IP \[bu] 2 +Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, +reddaisyy, dulanting, Oleksandr Redko) +.IP \[bu] 2 +Update all dependencies (Nick Craig-Wood) +.IP \[bu] 2 +Enable support for \f[V]aix/ppc64\f[R] (Lakshmi-Surekha) +.RE +.IP \[bu] 2 +check: Improved reporting of differences in sizes and contents +(albertony) +.IP \[bu] 2 +copyurl: Added \f[V]--url\f[R] to read URLs from CSV file (S-Pegg1, +dougal) +.IP \[bu] 2 +docs: +.RS 2 +.IP \[bu] 2 +markdown linting (albertony) +.IP \[bu] 2 +fixes (albertony, Andrew Gunnerson, anon-pradip, Claudius Ellsel, +dougal, iTrooz, Jean-Christophe Cura, Joseph Brownlee, kapitainsky, Matt +LaPaglia, n4n5, Nick Craig-Wood, nielash, SublimePeace, Ted Robertson, +vastonus) +.RE +.IP \[bu] 2 +fs: remove unnecessary Seek call on log file (Aneesh Agrawal) +.IP \[bu] 2 +hashsum: Improved output format when listing algorithms (albertony) +.IP \[bu] 2 +lib/http: Cleanup indentation and other whitespace in http serve +template (albertony) +.IP \[bu] 2 +lsf: Add support for \f[V]unix\f[R] and \f[V]unixnano\f[R] time formats +(Motte) +.IP \[bu] 2 +oauthutil: Improved debug logs from token refresh (albertony) +.IP \[bu] 2 +rc +.RS 2 +.IP \[bu] 2 +Add job/batch (https://rclone.org/rc/#job-batch) for sending batches of +rc commands to run concurrently (Nick Craig-Wood) +.IP \[bu] 2 +Add \f[V]runningIds\f[R] and \f[V]finishedIds\f[R] to +job/list (https://rclone.org/rc/#job-list) (n4n5) +.IP \[bu] 2 +Add \f[V]osVersion\f[R], \f[V]osKernel\f[R] and \f[V]osArch\f[R] to +core/version (https://rclone.org/rc/#core-version) (Nick Craig-Wood) +.IP \[bu] 2 +Make sure fatal errors run via the rc don\[aq]t crash rclone (Nick +Craig-Wood) +.IP \[bu] 2 +Add \f[V]executeId\f[R] to job statuses in +job/list (https://rclone.org/rc/#job-list) (Nikolay Kiryanov) +.IP \[bu] 2 +\f[V]config/unlock\f[R]: rename parameter to \f[V]configPassword\f[R] +accept old as well (Nick Craig-Wood) +.RE +.IP \[bu] 2 +serve http: Download folders as zip (dougal) +.RE +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +build +.RS 2 
+.IP \[bu] 2 +Fix tls: failed to verify certificate: x509: negative serial number +(Nick Craig-Wood) +.RE +.IP \[bu] 2 +march +.RS 2 +.IP \[bu] 2 +Fix \f[V]--no-traverse\f[R] being very slow (Nick Craig-Wood) +.RE +.IP \[bu] 2 +serve s3: Fix log output to remove the EXTRA messages (iTrooz) +.RE +.IP \[bu] 2 +Mount +.RS 2 +.IP \[bu] 2 +Windows: improve error message on missing WinFSP (divinity76) +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Add \f[V]--skip-specials\f[R] to ignore special files (Adam Dinwoodie) +.RE +.IP \[bu] 2 +Azure Blob +.RS 2 +.IP \[bu] 2 +Add ListP interface (dougal) +.RE +.IP \[bu] 2 +Azurefiles +.RS 2 +.IP \[bu] 2 +Add ListP interface (Nick Craig-Wood) +.RE +.IP \[bu] 2 +B2 +.RS 2 +.IP \[bu] 2 +Add ListP interface (dougal) +.IP \[bu] 2 +Add Server-Side encryption support (fries1234) +.IP \[bu] 2 +Fix \[dq]expected a FileSseMode but found: \[aq]\[aq]\[dq] (dougal) +.IP \[bu] 2 +Allow individual old versions to be deleted with \f[V]--b2-versions\f[R] +(dougal) +.RE +.IP \[bu] 2 +Box +.RS 2 +.IP \[bu] 2 +Add ListP interface (Nick Craig-Wood) +.IP \[bu] 2 +Allow configuration with config file contents (Dominik Sander) +.RE +.IP \[bu] 2 +Compress +.RS 2 +.IP \[bu] 2 +Add zstd compression (Alex) +.RE +.IP \[bu] 2 +Drive +.RS 2 +.IP \[bu] 2 +Add ListP interface (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Dropbox +.RS 2 +.IP \[bu] 2 +Add ListP interface (Nick Craig-Wood) +.IP \[bu] 2 +Fix error moving just created objects (Nick Craig-Wood) +.RE +.IP \[bu] 2 +FTP +.RS 2 +.IP \[bu] 2 +Fix SOCKS proxy support (dougal) +.IP \[bu] 2 +Fix transfers from servers that return 250 ok messages (jijamik) +.RE +.IP \[bu] 2 +Google Cloud Storage +.RS 2 +.IP \[bu] 2 +Add ListP interface (dougal) +.IP \[bu] 2 +Fix \f[V]--gcs-storage-class\f[R] to work with server side copy for +objects (Riaz Arbi) +.RE +.IP \[bu] 2 +HTTP +.RS 2 +.IP \[bu] 2 +Add basic metadata and provide it via serve (Oleg Kunitsyn) +.RE +.IP \[bu] 2 +Jottacloud +.RS 2 +.IP \[bu] 2 +Add support for Let\[aq]s Go Cloud (from MediaMarkt) as a whitelabel +service (albertony) +.IP \[bu] 2 +Add support for MediaMarkt Cloud as a whitelabel service (albertony) +.IP \[bu] 2 +Added support for traditional oauth authentication also for the main +service (albertony) +.IP \[bu] 2 +Abort attempts to run unsupported rclone authorize command (albertony) +.IP \[bu] 2 +Improved token refresh handling (albertony) +.IP \[bu] 2 +Fix legacy authentication (albertony) +.IP \[bu] 2 +Fix authentication for whitelabel services from Elkjøp subsidiaries +(albertony) +.RE +.IP \[bu] 2 +Mega +.RS 2 +.IP \[bu] 2 +Implement 2FA login (iTrooz) +.RE +.IP \[bu] 2 +Memory +.RS 2 +.IP \[bu] 2 +Add ListP interface (dougal) +.RE +.IP \[bu] 2 +Onedrive +.RS 2 +.IP \[bu] 2 +Add ListP interface (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Oracle Object Storage +.RS 2 +.IP \[bu] 2 +Add ListP interface (dougal) +.RE +.IP \[bu] 2 +Pcloud +.RS 2 +.IP \[bu] 2 +Add ListP interface (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Proton Drive +.RS 2 +.IP \[bu] 2 +Automated 2FA login with OTP secret key (Microscotch) +.RE +.IP \[bu] 2 +S3 +.RS 2 +.IP \[bu] 2 +Make it easier to add new S3 providers (dougal) +.IP \[bu] 2 +Add \f[V]--s3-use-data-integrity-protections\f[R] quirk to fix BadDigest +error in Alibaba, Tencent (hunshcn) +.IP \[bu] 2 +Add support for \f[V]--upload-header\f[R], \f[V]If-Match\f[R] and +\f[V]If-None-Match\f[R] (Sean Turner) +.IP \[bu] 2 +Fix single file copying behavior with low permission (hunshcn) +.RE +.IP \[bu] 2 +SFTP +.RS 2 +.IP \[bu] 2 +Fix zombie SSH processes with 
\f[V]--sftp-ssh\f[R] (Copilot) +.RE +.IP \[bu] 2 +Smb +.RS 2 +.IP \[bu] 2 +Optimize smb mount performance by avoiding stat checks during +initialization (Sudipto Baral) +.RE +.IP \[bu] 2 +Swift +.RS 2 +.IP \[bu] 2 +Add ListP interface (dougal) +.IP \[bu] 2 +If storage_policy isn\[aq]t set, use the root containers policy (Andrew +Ruthven) +.IP \[bu] 2 +Report disk usage in segment containers (Andrew Ruthven) +.RE +.IP \[bu] 2 +Ulozto +.RS 2 +.IP \[bu] 2 +Implement the About functionality (Lukas Krejci) +.IP \[bu] 2 +Fix downloads returning HTML error page (aliaj1) +.RE +.IP \[bu] 2 +WebDAV +.RS 2 +.IP \[bu] 2 +Optimize bearer token fetching with singleflight (hunshcn) +.IP \[bu] 2 +Add ListP interface (Nick Craig-Wood) +.IP \[bu] 2 +Use SpaceSepList to parse bearer token command (hunshcn) +.IP \[bu] 2 +Add \f[V]Access-Control-Max-Age\f[R] header for CORS preflight caching +(viocha) +.IP \[bu] 2 +Fix out of memory with sharepoint-ntlm when uploading large file (Nick +Craig-Wood) +.RE +.SS v1.71.2 - 2025-10-20 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.71.1...v1.71.2) +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +build +.RS 2 +.IP \[bu] 2 +update Go to 1.25.3 +.IP \[bu] 2 +Update Docker image Alpine version to fix CVE-2025-9230 +.RE +.IP \[bu] 2 +bisync: Fix race when CaptureOutput is used concurrently (Nick +Craig-Wood) +.IP \[bu] 2 +doc fixes (albertony, dougal, iTrooz, Matt LaPaglia, Nick Craig-Wood) +.IP \[bu] 2 +index: Add missing providers (dougal) +.IP \[bu] 2 +serve http: Fix: logging URL on start (dougal) +.RE +.IP \[bu] 2 +Azurefiles +.RS 2 +.IP \[bu] 2 +Fix server side copy not waiting for completion (Vikas Bhansali) +.RE +.IP \[bu] 2 +B2 +.RS 2 +.IP \[bu] 2 +Fix 1TB+ uploads (dougal) +.RE +.IP \[bu] 2 +Google Cloud Storage +.RS 2 +.IP \[bu] 2 +Add region us-east5 (Dulani Woods) +.RE +.IP \[bu] 2 +Mega +.RS 2 +.IP \[bu] 2 +Fix 402 payment required errors (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Pikpak +.RS 2 +.IP \[bu] 2 +Fix unnecessary retries by using URL expire parameter (Youfu Zhang) +.RE +.SS v1.71.1 - 2025-09-24 +.PP +See commits (https://github.com/rclone/rclone/compare/v1.71.0...v1.71.1) +.IP \[bu] 2 +Bug Fixes +.RS 2 +.IP \[bu] 2 +bisync: Fix error handling for renamed conflicts (nielash) +.IP \[bu] 2 +march: Fix deadlock when using --fast-list on syncs (Nick Craig-Wood) +.IP \[bu] 2 +operations: Fix partial name collisions for non --inplace copies (Nick +Craig-Wood) +.IP \[bu] 2 +pacer: Fix deadlock with --max-connections (Nick Craig-Wood) +.IP \[bu] 2 +doc fixes (albertony, anon-pradip, Claudius Ellsel, dougal, +Jean-Christophe Cura, Nick Craig-Wood, nielash) +.RE +.IP \[bu] 2 +Mount +.RS 2 +.IP \[bu] 2 +Do not log successful unmount as an error (Tilman Vogel) +.RE +.IP \[bu] 2 +VFS +.RS 2 +.IP \[bu] 2 +Fix SIGHUP killing serve instead of flushing directory caches (dougal) +.RE +.IP \[bu] 2 +Local +.RS 2 +.IP \[bu] 2 +Fix rmdir \[dq]Access is denied\[dq] on windows (nielash) +.RE +.IP \[bu] 2 +Box +.RS 2 +.IP \[bu] 2 +Fix about after change in API return (Nick Craig-Wood) +.RE +.IP \[bu] 2 +Combine +.RS 2 +.IP \[bu] 2 +Propagate SlowHash feature (skbeh) +.RE +.IP \[bu] 2 +Drive +.RS 2 +.IP \[bu] 2 +Update making your own client ID instructions (Ed Craig-Wood) +.RE +.IP \[bu] 2 +Internet Archive +.RS 2 +.IP \[bu] 2 +Fix server side copy files with spaces (Nick Craig-Wood) +.RE .SS v1.71.0 - 2025-08-22 .PP See commits (https://github.com/rclone/rclone/compare/v1.70.0...v1.71.0) @@ -96160,10 +103813,6 @@ THE SOFTWARE. 
.IP \[bu] 2 Nick Craig-Wood .SS Contributors -.PP -{{< rem -\f[V]email addresses removed from here need to be added to bin/.ignore-emails to make sure update-authors.py doesn\[aq]t immediately put them back in again.\f[R] ->}} .IP \[bu] 2 Alex Couper .IP \[bu] 2 @@ -98138,6 +105787,7 @@ Vikas Bhansali <64532198+vibhansa-msft@users.noreply.github.com> Sudipto Baral .IP \[bu] 2 Sam Pegg +<70067376+S-Pegg1@users.noreply.github.com> .IP \[bu] 2 liubingrun .IP \[bu] 2 @@ -98164,24 +105814,113 @@ Lucas Bremgartner Binbin Qian .IP \[bu] 2 cui <523516579@qq.com> +.IP \[bu] 2 +Tilman Vogel +.IP \[bu] 2 +skbeh <60107333+skbeh@users.noreply.github.com> +.IP \[bu] 2 +Claudius Ellsel +.IP \[bu] 2 +Motte <37443982+dmotte@users.noreply.github.com> +.IP \[bu] 2 +dougal +<147946567+roucc@users.noreply.github.com> +.IP \[bu] 2 +anon-pradip +.IP \[bu] 2 +Robin Rolf +.IP \[bu] 2 +Jean-Christophe Cura +.IP \[bu] 2 +russcoss +.IP \[bu] 2 +Matt LaPaglia +.IP \[bu] 2 +Youfu Zhang <1315097+zhangyoufu@users.noreply.github.com> +.IP \[bu] 2 +juejinyuxitu +.IP \[bu] 2 +iTrooz +.IP \[bu] 2 +Microscotch +.IP \[bu] 2 +Andrew Ruthven +.IP \[bu] 2 +spiffytech +.IP \[bu] 2 +Dulani Woods +.IP \[bu] 2 +Marco Ferretti +.IP \[bu] 2 +hunshcn +.IP \[bu] 2 +vastonus +.IP \[bu] 2 +Oleksandr Redko +.IP \[bu] 2 +reddaisyy +.IP \[bu] 2 +viocha +.IP \[bu] 2 +Aneesh Agrawal +.IP \[bu] 2 +divinity76 +.IP \[bu] 2 +Andrew Gunnerson +.IP \[bu] 2 +Lakshmi-Surekha +.IP \[bu] 2 +dulanting +.IP \[bu] 2 +Adam Dinwoodie +.IP \[bu] 2 +Lukas Krejci +.IP \[bu] 2 +Riaz Arbi +.IP \[bu] 2 +Fawzib Rojas +.IP \[bu] 2 +fries1234 +.IP \[bu] 2 +Joseph Brownlee <39440458+JellyJoe198@users.noreply.github.com> +.IP \[bu] 2 +Ted Robertson <10043369+tredondo@users.noreply.github.com> +.IP \[bu] 2 +SublimePeace <184005903+SublimePeace@users.noreply.github.com> +.IP \[bu] 2 +Copilot <198982749+Copilot@users.noreply.github.com> +.IP \[bu] 2 +Alex <64072843+A1ex3@users.noreply.github.com> +.IP \[bu] 2 +n4n5 +.IP \[bu] 2 +aliaj1 +.IP \[bu] 2 +Sean Turner <30396892+seanturner026@users.noreply.github.com> +.IP \[bu] 2 +jijamik <30904953+jijamik@users.noreply.github.com> +.IP \[bu] 2 +Dominik Sander +.IP \[bu] 2 +Nikolay Kiryanov .SH Contact the rclone project .SS Forum .PP Forum for questions and general discussion: .IP \[bu] 2 -https://forum.rclone.org + .SS Business support .PP For business support or sponsorship enquiries please see: .IP \[bu] 2 -https://rclone.com/ + .IP \[bu] 2 -sponsorship\[at]rclone.com + .SS GitHub repository .PP The project\[aq]s repository is located at: .IP \[bu] 2 -https://github.com/rclone/rclone + .PP There you can file bug reports or contribute with pull requests. .SS Twitter @@ -98194,7 +105933,7 @@ You can also follow Nick on twitter for rclone announcements: Or if all else fails or you want to ask something private or confidential .IP \[bu] 2 -info\[at]rclone.com + .PP Please don\[aq]t email requests for help to this address - those are better directed to the forum unless you\[aq]d like to sign up for