
docs: fix markdown lint issues in backend docs

Author: albertony (2025-08-25 00:00:48 +02:00)
Committed by: Nick Craig-Wood
Parent: fc6bd9ff79
Commit: 41eef6608b
71 changed files with 2663 additions and 1646 deletions


@@ -85,11 +85,11 @@ Rclone helps you:
## Features {#features}
- Transfers
  - MD5, SHA1 hashes are checked at all times for file integrity
  - Timestamps are preserved on files
  - Operations can be restarted at any time
  - Can be to and from network, e.g. two different cloud providers
  - Can use multi-threaded downloads to local disk
- [Copy](/commands/rclone_copy/) new or changed files to cloud storage
- [Sync](/commands/rclone_sync/) (one way) to make a directory identical
- [Bisync](/bisync/) (two way) to keep two directories in sync bidirectionally
@@ -216,10 +216,9 @@ These backends adapt or modify other storage providers:
{{< provider name="Hasher: Hash files" home="/hasher/" config="/hasher/" >}}
{{< provider name="Union: Join multiple remotes to work together" home="/union/" config="/union/" >}}
## Links
* {{< icon "fa fa-home" >}} [Home page](https://rclone.org/)
* {{< icon "fab fa-github" >}} [GitHub project page for source and bug tracker](https://github.com/rclone/rclone)
* {{< icon "fa fa-comments" >}} [Rclone Forum](https://forum.rclone.org)
* {{< icon "fas fa-cloud-download-alt" >}}[Downloads](/downloads/)
- {{< icon "fa fa-home" >}} [Home page](https://rclone.org/)
- {{< icon "fab fa-github" >}} [GitHub project page for source and bug tracker](https://github.com/rclone/rclone)
- {{< icon "fa fa-comments" >}} [Rclone Forum](https://forum.rclone.org)
- {{< icon "fas fa-cloud-download-alt" >}}[Downloads](/downloads/)


@@ -8,7 +8,7 @@ versionIntroduced: "v1.40"
The `alias` remote provides a new name for another remote.
Paths may be as deep as required or a local path,
e.g. `remote:directory/subdirectory` or `/directory/subdirectory`.
During the initial setup with `rclone config` you will specify the target
@@ -24,9 +24,9 @@ Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking
The empty path is not allowed as a remote. To alias the current directory
use `.` instead.
The target remote can also be a [connection string](/docs/#connection-strings).
This can be used to modify the config of a remote for different uses, e.g.
the alias `myDriveTrash` with the target remote `myDrive,trashed_only:`
can be used to only show the trashed files in `myDrive`.
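In config-file form, such an alias could look like this minimal sketch (the
remote names are taken from the example above):
```ini
[myDriveTrash]
type = alias
remote = myDrive,trashed_only:
```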
## Configuration
@@ -34,11 +34,13 @@ can be used to only show the trashed files in `myDrive`.
Here is an example of how to make an alias called `remote` for local folder.
First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -85,15 +87,21 @@ Once configured you can then use `rclone` like this,
List directories in top level in `/mnt/storage/backup`
```sh
rclone lsd remote:
```
List all the files in `/mnt/storage/backup`
```sh
rclone ls remote:
```
Copy another local directory to the alias directory called source
```sh
rclone copy /home/source remote:source
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs" >}}
### Standard options


@@ -15,11 +15,13 @@ command.) You may put subdirectories in too, e.g.
Here is an example of making a Microsoft Azure Blob Storage
configuration. For a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -55,20 +57,28 @@ y/e/d> y
See all containers
```sh
rclone lsd remote:
```
Make a new container
```sh
rclone mkdir remote:container
```
List the contents of a container
```sh
rclone ls remote:container
```
Sync `/home/local/directory` to the remote container, deleting any excess
files in the container.
```sh
rclone sync --interactive /home/local/directory remote:container
```
### --fast-list
@@ -147,26 +157,35 @@ user with a password, depending on which environment variable are set.
It reads configuration from these variables, in the following order:
1. Service principal with client secret
- `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its
"directory" ID.
- `AZURE_CLIENT_ID`: the service principal's client ID
- `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
2. Service principal with certificate
- `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its
"directory" ID.
- `AZURE_CLIENT_ID`: the service principal's client ID
- `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file
including the private key.
- `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the
certificate file.
- `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an
authentication request will include an x5c header to support subject
name / issuer based authentication. When set to "true" or "1",
authentication requests include the x5c header.
3. User with username and password
- `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations".
- `AZURE_CLIENT_ID`: client ID of the application the user will authenticate
to
- `AZURE_USERNAME`: a username (usually an email address)
- `AZURE_PASSWORD`: the user's password
4. Workload Identity
- `AZURE_TENANT_ID`: Tenant to authenticate in
- `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate
to
- `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file
- `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint
(default: login.microsoftonline.com).
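For example, to use the first of these (service principal with client secret)
you might export the variables and enable `env_auth` like this. All values
shown are placeholders:
```sh
# placeholder values - substitute your own tenant, client and secret
export AZURE_TENANT_ID=TENANT_ID
export AZURE_CLIENT_ID=CLIENT_ID
export AZURE_CLIENT_SECRET=CLIENT_SECRET
rclone lsd :azureblob,env_auth,account=ACCOUNT:
```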
##### Env Auth: 2. Managed Service Identity Credentials
@@ -193,19 +212,27 @@ Credentials created with the `az` tool can be picked up using `env_auth`.
For example if you were to login with a service principal like this:
```sh
az login --service-principal -u XXX -p XXX --tenant XXX
```
Then you could access rclone resources like this:
```sh
rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER
```
Or
```sh
rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
```
Which is analogous to using the `az` tool:
```sh
az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login
```
#### Account and Shared Key
@@ -226,18 +253,24 @@ explorer in the Azure portal.
If you use a container level SAS URL, rclone operations are permitted
only on a particular container, e.g.
```sh
rclone ls azureblob:container
```
You can also list the single container from the root. This will only
show the container specified by the SAS URL.
```sh
$ rclone lsd azureblob:
container/
```
Note that you can't see or access any other containers - this will
fail
```sh
rclone ls azureblob:othercontainer
```
Container level SAS URLs are useful for temporarily allowing third
parties access to a single container or putting credentials into an
@@ -245,7 +278,8 @@ untrusted environment such as a CI build server.
#### Service principal with client secret
If these variables are set, rclone will authenticate with a service principal
with a client secret.
- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
- `client_id`: the service principal's client ID
@@ -256,13 +290,18 @@ The credentials can also be placed in a file using the
#### Service principal with certificate
If these variables are set, rclone will authenticate with a service principal
with certificate.
- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
- `client_id`: the service principal's client ID
- `client_certificate_path`: path to a PEM or PKCS12 certificate file including
the private key.
- `client_certificate_password`: (optional) password for the certificate file.
- `client_send_certificate_chain`: (optional) Specifies whether an
authentication request will include an x5c header to support subject name /
issuer based authentication. When set to "true" or "1", authentication
requests include the x5c header.
**NB** `client_certificate_password` must be obscured - see [rclone obscure](/commands/rclone_obscure/).
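The obscured form can be generated with the `rclone obscure` command, e.g.:
```sh
rclone obscure 'CERTIFICATE_PASSWORD'
```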
@@ -297,15 +336,18 @@ be explicitly specified using exactly one of the `msi_object_id`,
If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
set, this is equivalent to using `env_auth`.
#### Federated Identity Credentials
If these variables are set, rclone will authenticate with federated identity.
- `tenant_id`: tenant_id to authenticate in storage
- `client_id`: client ID of the application the user will authenticate to storage
- `msi_client_id`: managed identity client ID of the application the user will
authenticate to
By default "api://AzureADTokenExchange" is used as scope for token retrieval over MSI. This token is then exchanged for actual storage token using 'tenant_id' and 'client_id'.
By default "api://AzureADTokenExchange" is used as scope for token retrieval
over MSI. This token is then exchanged for actual storage token using
'tenant_id' and 'client_id'.
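A minimal config-file sketch for this mode, with an illustrative remote name
and placeholder values, might look like:
```ini
[azfed]
type = azureblob
account = ACCOUNT
tenant_id = TENANT_ID
client_id = CLIENT_ID
msi_client_id = MSI_CLIENT_ID
```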
#### Azure CLI tool `az` {#use_az}
@@ -322,7 +364,9 @@ Don't set `env_auth` at the same time.
If you want to access resources with public anonymous access then set
`account` only. You can do this without making an rclone config:
```sh
rclone lsf :azureblob,account=ACCOUNT:CONTAINER
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}}
### Standard options


@@ -14,11 +14,13 @@ e.g. `remote:path/to/dir`.
Here is an example of making a Microsoft Azure Files Storage
configuration. For a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -88,20 +90,28 @@ Once configured you can use rclone.
See all files in the top level:
```sh
rclone lsf remote:
```
Make a new directory in the root:
```sh
rclone mkdir remote:dir
```
Recursively list the contents:
```sh
rclone ls remote:
```
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
```sh
rclone sync --interactive /home/local/directory remote:dir
```
### Modified time
@@ -173,26 +183,35 @@ user with a password, depending on which environment variable are set.
It reads configuration from these variables, in the following order:
1. Service principal with client secret
- `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its
"directory" ID.
- `AZURE_CLIENT_ID`: the service principal's client ID
- `AZURE_CLIENT_SECRET`: one of the service principal's client secrets
2. Service principal with certificate
- `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its
"directory" ID.
- `AZURE_CLIENT_ID`: the service principal's client ID
- `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file
including the private key.
- `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the
certificate file.
- `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an
authentication request will include an x5c header to support subject
name / issuer based authentication. When set to "true" or "1",
authentication requests include the x5c header.
3. User with username and password
- `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations".
- `AZURE_CLIENT_ID`: client ID of the application the user will authenticate
to
- `AZURE_USERNAME`: a username (usually an email address)
- `AZURE_PASSWORD`: the user's password
4. Workload Identity
- `AZURE_TENANT_ID`: Tenant to authenticate in
- `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate
to
- `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file
- `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint
(default: login.microsoftonline.com).
##### Env Auth: 2. Managed Service Identity Credentials
@@ -219,15 +238,21 @@ Credentials created with the `az` tool can be picked up using `env_auth`.
For example if you were to login with a service principal like this:
```sh
az login --service-principal -u XXX -p XXX --tenant XXX
```
Then you could access rclone resources like this:
```sh
rclone lsf :azurefiles,env_auth,account=ACCOUNT:
```
Or
```sh
rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:
```
#### Account and Shared Key
@@ -244,7 +269,8 @@ To use it leave `account`, `key` and "sas_url" blank and fill in `connection_str
#### Service principal with client secret
If these variables are set, rclone will authenticate with a service principal
with a client secret.
- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
- `client_id`: the service principal's client ID
@@ -255,13 +281,18 @@ The credentials can also be placed in a file using the
#### Service principal with certificate
If these variables are set, rclone will authenticate with a service principal
with certificate.
- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
- `client_id`: the service principal's client ID
- `client_certificate_path`: path to a PEM or PKCS12 certificate file including
the private key.
- `client_certificate_password`: (optional) password for the certificate file.
- `client_send_certificate_chain`: (optional) Specifies whether an authentication
request will include an x5c header to support subject name / issuer based
authentication. When set to "true" or "1", authentication requests include
the x5c header.
**NB** `client_certificate_password` must be obscured - see [rclone obscure](/commands/rclone_obscure/).
@@ -296,17 +327,21 @@ be explicitly specified using exactly one of the `msi_object_id`,
If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is
set, this is equivalent to using `env_auth`.
#### Federated Identity Credentials
If these variables are set, rclone will authenticate with federated identity.
- `tenant_id`: tenant_id to authenticate in storage
- `client_id`: client ID of the application the user will authenticate to storage
- `msi_client_id`: managed identity client ID of the application the user will
authenticate to
By default "api://AzureADTokenExchange" is used as scope for token retrieval
over MSI. This token is then exchanged for actual storage token using 'tenant_id'
and 'client_id'.
By default "api://AzureADTokenExchange" is used as scope for token retrieval over MSI. This token is then exchanged for actual storage token using 'tenant_id' and 'client_id'.
#### Azure CLI tool `az` {#use_az}
Set to use the [Azure CLI tool `az`](https://learn.microsoft.com/en-us/cli/azure/)
as the sole means of authentication.
Setting this can be useful if you wish to use the `az` CLI on a host with


@@ -15,7 +15,9 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Here is an example of making a b2 configuration. First run
```sh
rclone config
```
This will guide you through an interactive setup process. To authenticate
you will either need your Account ID (a short hex number) and Master
@@ -23,8 +25,8 @@ Application Key (a long hex number) OR an Application Key, which is the
recommended method. See below for further details on generating and using
an Application Key.
```text
No remotes found, make a new one?
n) New remote
q) Quit config
n/q> n
@@ -60,20 +62,29 @@ This remote is called `remote` and can now be used like this
See all buckets
```sh
rclone lsd remote:
```
Create a new bucket
```sh
rclone mkdir remote:bucket
```
List the contents of a bucket
```sh
rclone ls remote:bucket
```
Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket.
```sh
rclone sync --interactive /home/local/directory remote:bucket
```
### Application Keys
@@ -219,7 +230,7 @@ version followed by a `cleanup` of the old versions.
Show current version and all the versions with `--b2-versions` flag.
```sh
$ rclone -q ls b2:cleanup-test
9 one.txt
@@ -232,7 +243,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
Retrieve an old version
```sh
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
@@ -241,7 +252,7 @@ $ ls -l /tmp/one-v2016-07-04-141003-000.txt
Clean up all the old versions and show that they've gone.
```sh
$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
@@ -256,11 +267,13 @@ $ rclone -q --b2-versions ls b2:cleanup-test
When using `--b2-versions` flag rclone is relying on the file name
to work out whether the objects are versions or not. Versions' names
are created by inserting timestamp between file name and its extension.
```sh
9 file.txt
8 file-v2023-07-17-161032-000.txt
16 file-v2023-06-15-141003-000.txt
```
If there are real files present with the same names as versions, then
behaviour of `--b2-versions` can be unpredictable.
@@ -270,7 +283,7 @@ It is useful to know how many requests are sent to the server in different scena
All copy commands send the following 4 requests:
```text
/b2api/v1/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
@@ -287,7 +300,7 @@ require any files to be uploaded, no more requests will be sent.
Uploading files that do not require chunking, will send 2 requests per
file upload:
```text
/b2api/v1/b2_get_upload_url
/b2api/v1/b2_upload_file/
```
@@ -295,7 +308,7 @@ file upload:
Uploading files requiring chunking, will send 2 requests (one each to
start and finish the upload) and another 2 requests for each chunk:
```text
/b2api/v1/b2_start_large_file
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
@@ -309,14 +322,14 @@ rclone will show and act on older versions of files. For example
Listing without `--b2-versions`
```sh
$ rclone -q ls b2:cleanup-test
9 one.txt
```
And with
```sh
$ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt
8 one-v2016-07-04-141032-000.txt
@@ -336,7 +349,7 @@ permitted, so you can't upload files or delete them.
Rclone supports generating file share links for private B2 buckets.
They can either be for a file for example:
```sh
./rclone link B2:bucket/path/to/file.txt
https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
@@ -344,7 +357,7 @@ https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
or if run on a directory you will get:
```sh
./rclone link B2:bucket/path
https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
```
@@ -352,7 +365,7 @@ https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
you can then use the authorization token (the part of the url from the
`?Authorization=` on) on any file path under that directory. For example:
```text
https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx
https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx


@@ -31,7 +31,7 @@ section) before using, or data loss can result. Questions can be asked in the
For example, your first command might look like this:
```sh
rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run
```
@@ -40,7 +40,7 @@ After that, remove `--resync` as well.
Here is a typical run log (with timestamps removed for clarity):
```sh
rclone bisync /testdir/path1/ /testdir/path2/ --verbose
INFO : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
INFO : Path1 checking for diffs
@@ -86,7 +86,7 @@ INFO : Bisync successful
## Command line syntax
```sh
$ rclone bisync --help
Usage:
rclone bisync remote1:path1 remote2:path2 [flags]
@@ -169,7 +169,7 @@ be copied to Path1, and the process will then copy the Path1 tree to Path2.
The `--resync` sequence is roughly equivalent to the following
(but see [`--resync-mode`](#resync-mode) for other options):
```sh
rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
rclone copy Path1 Path2 [--create-empty-src-dirs]
```
@@ -225,7 +225,7 @@ Shutdown](#graceful-shutdown) mode, when needed) for a very robust
almost any interruption it might encounter. Consider adding something like the
following:
```sh
--resilient --recover --max-lock 2m --conflict-resolve newer
```
@@ -353,13 +353,13 @@ simultaneously (or just `modtime` AND `checksum`).
being `size`, `modtime`, and `checksum`. For example, if you want to compare
size and checksum, but not modtime, you would do:
```sh
--compare size,checksum
```
Or if you want to compare all three:
```sh
--compare size,modtime,checksum
```
@@ -627,7 +627,7 @@ specified (or when two identical suffixes are specified.) i.e. with
`--conflict-loser pathname`, all of the following would produce exactly the
same result:
```sh
--conflict-suffix path
--conflict-suffix path,path
--conflict-suffix path1,path2
@@ -642,7 +642,7 @@ changed with the [`--suffix-keep-extension`](/docs/#suffix-keep-extension) flag
curly braces as globs. This can be helpful to track the date and/or time that
each conflict was handled by bisync. For example:
```sh
--conflict-suffix {DateOnly}-conflict
// result: myfile.txt.2006-01-02-conflict1
```
@@ -667,7 +667,7 @@ conflicts with `..path1` and `..path2` (with two periods, and `path` instead of
additional dots can be added by including them in the specified suffix string.
For example, for behavior equivalent to the previous default, use:
```sh
[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
```
@@ -707,13 +707,13 @@ For example, a possible sequence could look like this:
1. Normally scheduled bisync run:
```sh
rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
```
2. Periodic independent integrity check (perhaps scheduled nightly or weekly):
```sh
rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
```
@@ -721,7 +721,7 @@ For example, a possible sequence could look like this:
If one side is more up-to-date and you want to make the other side match it,
you could run:
```sh
rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
```
@@ -851,7 +851,7 @@ override `--backup-dir`.
Example:
```sh
rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
```
@@ -1383,7 +1383,7 @@ listings and thus not checked during the check access phase.
Here are two normal runs. The first one has a newer file on the remote.
The second has no deltas between local and remote.
```sh
2021/05/16 00:24:38 INFO : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
2021/05/16 00:24:38 INFO : Path1 checking for diffs
2021/05/16 00:24:38 INFO : - Path1 File is new - file.txt
@@ -1433,7 +1433,7 @@ numerous such messages in the log.
Since there are no final error/warning messages on line *7*, rclone has
recovered from failure after a retry, and the overall sync was successful.
```sh
1: 2021/05/14 00:44:12 INFO : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:"
2: 2021/05/14 00:44:12 INFO : Path1 checking for diffs
3: 2021/05/14 00:44:12 INFO : Path2 checking for diffs
@@ -1446,7 +1446,7 @@ recovered from failure after a retry, and the overall sync was successful.
This log shows a *Critical failure* which requires a `--resync` to recover from.
See the [Runtime Error Handling](#error-handling) section.
```sh
2021/05/12 00:49:40 INFO : Google drive root '': Waiting for checks to finish
2021/05/12 00:49:40 INFO : Google drive root '': Waiting for transfers to finish
2021/05/12 00:49:40 INFO : Google drive root '': not deleting files as there were IO errors
@@ -1531,7 +1531,7 @@ on Linux you can use *Cron* which is described below.
The 1st example runs a sync every 5 minutes between a local directory
and an OwnCloud server, with output logged to a runlog file:
```sh
# Minute (0-59)
# Hour (0-23)
# Day of Month (1-31)
@@ -1548,7 +1548,7 @@ If you run `rclone bisync` as a cron job, redirect stdout/stderr to a file.
The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the `>>`)
and stderr (via `2>&1`) to a log file.
```sh
0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1
```
@@ -1630,7 +1630,7 @@ Rerunning the test will let it pass. Consider such failures as noise.
### Test command syntax
```sh
usage: go test ./cmd/bisync [options...]
Options:


@@ -18,11 +18,13 @@ to use JWT authentication. `rclone config` walks you through it.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -94,11 +96,15 @@ Once configured you can then use `rclone` like this,
List directories in top level of your Box
```sh
rclone lsd remote:
```
List all the files in your Box
```sh
rclone ls remote:
```
To copy a local directory to a Box directory called backup
@@ -123,9 +129,9 @@ According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section
This means that if you
- Don't use the box remote for 60 days
- Copy the config file with a box refresh token in and use it in two places
- Get an error on a token refresh
then rclone will return an error which includes the text `Invalid
refresh token`.
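If this happens, one way to obtain a fresh token (assuming the remote is
called `remote`) is to re-run the authorization:
```sh
rclone config reconnect remote:
```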
@@ -138,7 +144,7 @@ did the authentication on.
Here is how to do it.
```sh
$ rclone config
Current remotes:


@@ -31,11 +31,13 @@ with `cache`.
Here is an example of how to make a remote called `test-cache`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
r) Rename remote
@@ -115,19 +117,25 @@ You can then use it like this,
List directories in top level of your drive
```sh
rclone lsd test-cache:
```
List all the files in your drive
```sh
rclone ls test-cache:
```
To start a cached mount
```sh
rclone mount --allow-other test-cache: /var/tmp/test-cache
```
### Write Features
### Offline uploading
In an effort to make writing through cache more reliable, the backend
now supports this feature which can be activated by specifying a
@@ -152,7 +160,7 @@ Uploads will be stored in a queue and be processed based on the order they were
The queue and the temporary storage is persistent across restarts but
can be cleared on startup with the `--cache-db-purge` flag.
### Write Support
Writes are supported through `cache`.
One caveat is that a mounted cache remote does not add any retry or fallback
@@ -163,9 +171,9 @@ One special case is covered with `cache-writes` which will cache the file
data at the same time as the upload when it is enabled making it available
from the cache store immediately once the upload is finished.
### Read Features
#### Multiple connections
To counter the high latency between a local PC where rclone is running
and cloud providers, the cache remote can split multiple requests to the
@@ -177,7 +185,7 @@ This is similar to buffering when media files are played online. Rclone
will stay around the current marker but always try its best to stay ahead
and prepare the data before.
#### Plex Integration
There is a direct integration with Plex which allows cache to detect during reading
if the file is in playback or not. This helps cache to adapt how it queries
@@ -196,9 +204,11 @@ How to enable? Run `rclone config` and add all the Plex options (endpoint, usern
and password) in your remote and it will be automatically enabled.
Affected settings:
- `cache-workers`: *Configured value* during confirmed playback or *1* all the
other times
##### Certificate Validation
When the Plex server is configured to only accept secure connections, it is
possible to use `.plex.direct` URLs to ensure certificate validation succeeds.
@@ -213,60 +223,63 @@ have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.
To get the `server-hash` part, the easiest way is to visit
<https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token>
This page will list all the available Plex servers for your account
with at least one `.plex.direct` link for each. Copy one URL and replace
the IP address with the desired address. This can be used as the
`plex_url` value.
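For example, the resulting `plex_url` value might look roughly like this (the
address and server hash below are illustrative):
```text
plex_url = https://127-0-0-1.0123456789abcdef0123456789abcdef.plex.direct:32400
```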
### Known issues
#### Mount and --dir-cache-time
--dir-cache-time controls the first layer of directory caching which works at
the mount layer. Being an independent caching mechanism from the `cache` backend,
it will manage its own entries based on the configured time.
To avoid getting in a scenario where dir cache has obsolete data and cache would
have the correct one, try to set `--dir-cache-time` to a lower time than
`--cache-info-age`. Default values are already configured in this way.
#### Windows support - Experimental
There are a couple of issues with Windows `mount` functionality that still
require some investigations. It should be considered as experimental thus far
as fixes come in for this OS.
Most of the issues seem to be related to the difference between filesystems
on Linux flavors and Windows as cache is heavily dependent on them.
Any reports or feedback on how cache behaves on this OS is greatly appreciated.
- [Issue #1935](https://github.com/rclone/rclone/issues/1935)
- [Issue #1907](https://github.com/rclone/rclone/issues/1907)
- [Issue #1834](https://github.com/rclone/rclone/issues/1834)
#### Risk of throttling
Future iterations of the cache backend will make use of the pooling functionality
of the cloud provider to synchronize and at the same time make writing through it
more tolerant to failures.
There are a couple of enhancements in progress to add these, but in the meantime
there is a valid concern that the expiring cache listings can lead to cloud provider
throttles or bans due to repeated queries on it for very large mounts.
Some recommendations:
- don't use a very small interval for entry information (`--cache-info-age`)
- while writes aren't yet optimised, you can still write through `cache` which
gives you the advantage of adding the file in the cache at the same time if
configured to do so.
Future enhancements:
- [Issue #1937](https://github.com/rclone/rclone/issues/1937)
- [Issue #1936](https://github.com/rclone/rclone/issues/1936)
#### cache and crypt
One common scenario is to keep your data encrypted in the cloud provider
using the `crypt` remote. `crypt` uses a similar technique to wrap around
@@ -281,30 +294,36 @@ which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yields better results:
{{<color green>}}**cloud remote** -> **cache** -> **crypt**{{</color>}}
#### absolute remote paths
`cache` can not differentiate between relative and absolute paths for the wrapped
remote. Any path given in the `remote` config setting and on the command line will
be passed to the wrapped remote as is, but for storing the chunks on disk the path
will be made relative by removing any leading `/` character.
This behavior is irrelevant for most backend types, but there are backends where
a leading `/` changes the effective directory, e.g. in the `sftp` backend paths
starting with a `/` are relative to the root of the SSH server and paths without
are relative to the user home directory. As a result `sftp:bin` and `sftp:/bin`
will share the same cache folder, even if they represent a different directory
on the SSH server.
### Cache and Remote Control (--rc)
Cache supports the new `--rc` mode in rclone and can be remote controlled
through the following end points. By default, the listener is disabled if
you do not add the flag.
### rc cache/expire
Purge a remote from the cache backend. Supports either a directory or a file.
It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
Params:
- **remote** = path to remote **(required)**
- **withData** = true/false to delete cached data (chunks) as
well *(optional, false by default)*
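For example, to expire a directory together with its cached chunks:
```sh
rclone rc cache/expire remote=path/to/sub/folder/ withData=true
```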
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs" >}}
### Standard options


@@ -26,8 +26,8 @@ then you should probably put the bucket in the remote `s3:bucket`.
Now configure `chunker` using `rclone config`. We will call this one `overlay`
to separate it from the `remote` itself.
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -92,16 +92,15 @@ So if you use a remote of `/path/to/secret/files` then rclone will
chunk stuff in that directory. If you use a remote of `name` then rclone
will put files in a directory called `name` in the current directory.
### Chunking
When rclone starts a file upload, chunker checks the file size. If it
doesn't exceed the configured chunk size, chunker will just pass the file
to the wrapped remote (however, see caveat below). If a file is large, chunker
will transparently cut data in pieces with temporary names and stream them one
by one, on the fly. Each data chunk will contain the specified number of bytes,
except for the last one which may have less data. If file size is unknown in
advance (this is called a streaming upload), chunker will internally create
a temporary copy, record its size and repeat the above process.
When upload completes, temporary chunk files are finally renamed.
@@ -129,14 +128,13 @@ proceed with current command.
You can set the `--chunker-fail-hard` flag to have commands abort with
an error message in such cases.
**Caveat**: As it is now, chunker will always create a temporary file in the
backend and then rename it, even if the file is below the chunk threshold.
This will result in unnecessary API calls and can severely restrict throughput
when handling transfers primarily composed of small files on some backends
(e.g. Box). A workaround to this issue is to use chunker only for files above
the chunk threshold via `--min-size` and then perform a separate call without
chunker on the remaining files.
#### Chunk names
@@ -165,7 +163,6 @@ non-chunked files.
When using `norename` transactions, chunk names will additionally have a unique
file version suffix. For example, `BIG_FILE_NAME.rclone_chunk.001_bp562k`.
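With default settings, a large file is therefore stored as a series of chunks
named along these lines (file name illustrative):
```text
BIG_FILE_NAME.rclone_chunk.001
BIG_FILE_NAME.rclone_chunk.002
BIG_FILE_NAME.rclone_chunk.003
```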
### Metadata
Besides data chunks chunker will by default create metadata object for
@@ -199,7 +196,6 @@ base name and show group names as virtual composite files.
This method is more prone to missing chunk errors (especially missing
last chunk) than format with metadata enabled.
### Hashsums
Chunker supports hashsums only when a compatible metadata is present.
@@ -243,7 +239,6 @@ hashsums at destination. Beware of consequences: the `sync` command will
revert (sometimes silently) to time/size comparison if compatible hashsums
between source and target are not found.
### Modification times
Chunker stores modification times using the wrapped remote so support
@@ -254,7 +249,6 @@ modification time of the metadata object on the wrapped remote.
If file is chunked but metadata format is `none` then chunker will
use modification time of the first data chunk.
### Migrations
The idiomatic way to migrate to a different chunk size, hash type, transaction
@@ -283,7 +277,6 @@ somewhere using the chunker remote and purge the original directory.
The `copy` command will copy only active chunks while the `purge` will
remove everything including garbage.
### Caveats and Limitations
Chunker requires wrapped remote to support server-side `move` (or `copy` +


@@ -11,11 +11,16 @@ This is a backend for the [Cloudinary](https://cloudinary.com/) platform
## About Cloudinary
[Cloudinary](https://cloudinary.com/) is an image and video API platform.
Trusted by 1.5 million developers and 10,000 enterprise and hyper-growth
companies as a critical part of their tech stack to deliver visually engaging
experiences.
## Accounts & Pricing
To use this backend, you need to [create a free account](https://cloudinary.com/users/register_free)
on Cloudinary. Start with a free plan with generous usage limits. Then, as your
requirements grow, upgrade to a plan that best fits your needs.
See [the pricing details](https://cloudinary.com/pricing).
## Securing Your Credentials
@@ -25,13 +30,17 @@ Please refer to the [docs](/docs/#configuration-encryption-cheatsheet)
Here is an example of making a Cloudinary configuration.
First, create a [cloudinary.com](https://cloudinary.com/users/register_free)
account and choose a plan.
You will need to log in and get the `API Key` and `API Secret` for your account
from the developer section.
Now run
```sh
rclone config
```
Follow the interactive setup process:
@@ -104,15 +113,21 @@ y/e/d> y
List directories in the top level of your Media Library
```sh
rclone lsd cloudinary-media-library:
```
Make a new directory.
```sh
rclone mkdir cloudinary-media-library:directory
```
List the contents of a directory.
```sh
rclone ls cloudinary-media-library:directory
```
### Modified time and hashes


@@ -11,7 +11,7 @@ tree.
For example you might have a remote for images on one provider:
```sh
$ rclone tree s3:imagesbucket
/
├── image1.jpg
@@ -20,7 +20,7 @@ $ rclone tree s3:imagesbucket
And a remote for files on another:
```sh
$ rclone tree drive:important/files
/
├── file1.txt
@@ -30,7 +30,7 @@ $ rclone tree drive:important/files
The `combine` backend can join these together into a synthetic
directory structure like this:
```sh
$ rclone tree combined:
/
├── files
@@ -44,7 +44,9 @@ $ rclone tree combined:
You'd do this by specifying an `upstreams` parameter in the config
like this
```text
upstreams = images=s3:imagesbucket files=drive:important/files
```
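Expressed as a complete entry in the config file, this could look something
like the following sketch (remote name illustrative):
```ini
[combined]
type = combine
upstreams = images=s3:imagesbucket files=drive:important/files
```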
During the initial setup with `rclone config` you will specify the
upstreams remotes as a space separated list. The upstream remotes can
@@ -55,11 +57,13 @@ either be a local paths or other remotes.
Here is an example of how to make a combine called `remote` for the
example above. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -103,21 +107,25 @@ the shared drives you have access to.
Assuming your main (non shared drive) Google drive remote is called
`drive:` you would run
```sh
rclone backend -o config drives drive:
```
This would produce something like this:
```ini
[My Drive]
type = alias
remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
[Test Drive]
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
```
If you then add that config to your config file (find it with `rclone
config file`) then you can access all the shared drives in one place


@@ -9,18 +9,20 @@ status: Experimental
## Warning
This remote is currently **experimental**. Things may break and data may be lost.
Anything you do with this remote is at your own risk. Please understand the risks
associated with using experimental code and don't use this remote in critical
applications.
The `Compress` remote adds compression to another remote. It is best used with
remotes containing many large compressible files.
## Configuration
To use this remote, all you need to do is specify another remote and a
compression mode to use:
```text
Current remotes:
Name Type
@@ -72,22 +74,26 @@ y/e/d> y
### Compression Modes
Currently only gzip compression is supported. It provides a decent balance
between speed and size and is well supported by other applications. Compression
strength can further be configured via an advanced setting where 0 is no
compression and 9 is strongest compression.
### File types
If you open a remote wrapped by compress, you will see that there are many
files with an extension corresponding to the compression algorithm you chose.
These files are standard files that can be opened by various archive programs,
but they have some hidden metadata that allows them to be used by rclone.
While you may download and decompress these files at will, do **not** manually
delete or rename files. Files without correct metadata files will not be
recognized by rclone.
### File names
The compressed files will be named `*.###########.gz` where `*` is the base
file and the `#` part is base64 encoded size of the uncompressed file. The file
names should not be changed by anything other than the rclone compression backend.
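For instance, a compressed `file.txt` might be stored under a name of roughly
this shape (the size suffix below is made up for illustration):
```text
file.txt.AAAAAAAAAAE.gz
```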
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/compress/compress.go then run make backenddocs" >}}
### Standard options


@@ -9,20 +9,20 @@ description: "Contact the rclone project"
Forum for questions and general discussion:
- <https://forum.rclone.org>
## Business support
For business support or sponsorship enquiries please see:
- <https://rclone.com/>
- <sponsorship@rclone.com>
## GitHub repository
The project's repository is located at:
- <https://github.com/rclone/rclone>
There you can file bug reports or contribute with pull requests.
@@ -37,7 +37,7 @@ You can also follow Nick on twitter for rclone announcements:
Or if all else fails or you want to ask something private or
confidential
- <info@rclone.com>
Please don't email requests for help to this address - those are
better directed to the forum unless you'd like to sign up for business


@@ -31,11 +31,11 @@ will just give you the encrypted (scrambled) format, and anything you
upload will *not* become encrypted.
The encryption is a secret-key encryption (also called symmetric key encryption)
algorithm, where a password (or pass phrase) is used to generate the real
encryption key. The password can be supplied by the user, or you may choose to
let rclone generate one. It will be stored in the configuration file, in a
lightly obscured form. If you are in an environment where you are not able to
keep your configuration secured, you should add
[configuration encryption](https://rclone.org/docs/#configuration-encryption)
as protection. As long as you have this configuration file, you will be able to
decrypt your data. Without the configuration file, as long as you remember
@@ -47,9 +47,9 @@ See below for guidance to [changing password](#changing-password).
Encryption uses [cryptographic salt](https://en.wikipedia.org/wiki/Salt_(cryptography)),
to permute the encryption key so that the same string may be encrypted in
different ways. When configuring the crypt remote it is optional to enter a salt,
or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique
string. Normally in cryptography, the salt is stored together with the encrypted
content, and does not have to be memorized by the user. This is not the case in rclone,
because rclone does not store any additional information on the remotes. Use of
custom salt is effectively a second password that must be memorized.
@@ -86,8 +86,8 @@ anything you write will be unencrypted. To avoid issues it is best to
configure a dedicated path for encrypted content, and access it
exclusively through a crypt remote.
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -176,7 +176,8 @@ y/e/d>
**Important** The crypt password stored in `rclone.conf` is lightly
obscured. That only protects it from cursory inspection. It is not
secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption)
of `rclone.conf` is specified.
A long passphrase is recommended, or `rclone config` can generate a
random one.
@@ -191,8 +192,8 @@ due to the different salt.
Rclone does not encrypt
* file length - this can be calculated within 16 bytes
* modification time - used for syncing
- file length - this can be calculated within 16 bytes
- modification time - used for syncing
### Specifying the remote
@@ -244,6 +245,7 @@ is to re-upload everything via a crypt remote configured with your new password.
Depending on the size of your data, your bandwidth, storage quota etc, there are
different approaches you can take:
- If you have everything in a different location, for example on your local system,
you could remove all of the prior encrypted files, change the password for your
configured crypt remote (or delete and re-create the crypt configuration),
@@ -272,7 +274,7 @@ details, and a tool you can use to check if you are affected.
Create the following file structure using "standard" file name
encryption.
```
```text
plaintext/
├── file0.txt
├── file1.txt
@@ -285,7 +287,7 @@ plaintext/
Copy these to the remote, and list them
```
```sh
$ rclone -q copy plaintext secret:
$ rclone -q ls secret:
7 file1.txt
@@ -297,7 +299,7 @@ $ rclone -q ls secret:
The crypt remote looks like
```
```sh
$ rclone -q ls remote:path
55 hagjclgavj2mbiqm6u6cnjjqcg
54 v05749mltvv1tf4onltun46gls
@@ -308,7 +310,7 @@ $ rclone -q ls remote:path
The directory structure is preserved
```
```sh
$ rclone -q ls secret:subdir
8 file2.txt
9 file3.txt
@@ -319,7 +321,7 @@ Without file name encryption `.bin` extensions are added to underlying
names. This prevents the cloud provider attempting to interpret file
content.
```
```sh
$ rclone -q ls remote:path
54 file0.txt.bin
57 subdir/file3.txt.bin
@@ -332,18 +334,18 @@ $ rclone -q ls remote:path
Off
* doesn't hide file names or directory structure
* allows for longer file names (~246 characters)
* can use sub paths and copy single files
- doesn't hide file names or directory structure
- allows for longer file names (~246 characters)
- can use sub paths and copy single files
Standard
* file names encrypted
* file names can't be as long (~143 characters)
* can use sub paths and copy single files
* directory structure visible
* identical files names will have identical uploaded names
* can use shortcuts to shorten the directory recursion
- file names encrypted
- file names can't be as long (~143 characters)
- can use sub paths and copy single files
- directory structure visible
- identical file names will have identical uploaded names
- can use shortcuts to shorten the directory recursion
Obfuscation
@@ -362,11 +364,11 @@ equivalents.
Obfuscation cannot be relied upon for strong protection.
* file names very lightly obfuscated
* file names can be longer than standard encryption
* can use sub paths and copy single files
* directory structure visible
* identical files names will have identical uploaded names
- file names very lightly obfuscated
- file names can be longer than standard encryption
- can use sub paths and copy single files
- directory structure visible
- identical file names will have identical uploaded names
Cloud storage systems have limits on file name length and
total path length which rclone is more likely to breach using
@@ -380,7 +382,7 @@ For cloud storage systems with case sensitive file names (e.g. Google Drive),
`base64` can be used to reduce file name length.
For cloud storage systems using UTF-16 to store file names internally
(e.g. OneDrive, Dropbox, Box), `base32768` can be used to drastically reduce
file name length.
file name length.
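
As a sketch, assuming an existing crypt remote named `secret`, the encoding
could be switched non-interactively, which is best done before any data is
uploaded:

```sh
# "secret" is a placeholder name for an already configured crypt remote.
rclone config update secret filename_encoding base32768
```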
An alternative, future rclone file name encryption mode may tolerate
backend provider path length limits.
@@ -404,7 +406,6 @@ Example:
`1/12/123.txt` is encrypted to
`1/12/qgm4avr35m5loi1th53ato71v0`
### Modification times and hashes
Crypt stores modification times using the underlying remote so support


@@ -20,14 +20,14 @@ As of Docker 1.12 volumes are supported by
[Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts/)
included with Docker Engine and created from descriptions in
[swarm compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
files for use with _swarm stacks_ across multiple cluster nodes.
files for use with *swarm stacks* across multiple cluster nodes.
[Docker Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/)
augment the default `local` volume driver included in Docker with stateful
volumes shared across containers and hosts. Unlike local volumes, your
data will _not_ be deleted when such volume is removed. Plugins can run
data will *not* be deleted when such volume is removed. Plugins can run
managed by the docker daemon, as a native system service
(under systemd, _sysv_ or _upstart_) or as a standalone executable.
(under systemd, *sysv* or *upstart*) or as a standalone executable.
Rclone can run as a docker volume plugin in all these modes.
It interacts with the local docker daemon
via [plugin API](https://docs.docker.com/engine/extend/plugin_api/) and
@@ -42,39 +42,43 @@ rclone volume with Docker engine on a standalone Ubuntu machine.
Start from [installing Docker](https://docs.docker.com/engine/install/)
on the host.
The _FUSE_ driver is a prerequisite for rclone mounting and should be
The *FUSE* driver is a prerequisite for rclone mounting and should be
installed on host:
```
```sh
sudo apt-get -y install fuse3
```
Create two directories required by rclone docker plugin:
```
```sh
sudo mkdir -p /var/lib/docker-plugins/rclone/config
sudo mkdir -p /var/lib/docker-plugins/rclone/cache
```
Install the managed rclone docker plugin for your architecture (here `amd64`):
```
```sh
docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions
docker plugin list
```
Create your [SFTP volume](/sftp/#standard-options):
```
```sh
docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true
```
Note that since all options are static, you don't even have to run
`rclone config` or create the `rclone.conf` file (but the `config` directory
should still be present). In the simplest case you can use `localhost`
as _hostname_ and your SSH credentials as _username_ and _password_.
as *hostname* and your SSH credentials as *username* and *password*.
You can also change the remote path to your home directory on the host,
for example `-o path=/home/username`.
Time to create a test container and mount the volume into it:
```
```sh
docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash
```
@@ -83,7 +87,8 @@ the mounted SFTP remote. You can type `ls` to list the mounted directory
or otherwise play with it. Type `exit` when you are done.
The container will stop but the volume will stay, ready to be reused.
When it's not needed anymore, remove it:
```
```sh
docker volume list
docker volume remove firstvolume
```
@@ -92,7 +97,7 @@ Now let us try **something more elaborate**:
[Google Drive](/drive/) volume on multi-node Docker Swarm.
You should start from installing Docker and FUSE, creating plugin
directories and installing rclone plugin on _every_ swarm node.
directories and installing rclone plugin on *every* swarm node.
Then [setup the Swarm](https://docs.docker.com/engine/swarm/swarm-mode/).
Google Drive volumes need an access token which can be set up via web
@@ -101,14 +106,15 @@ plugin cannot run a browser so we will use a technique similar to the
[rclone setup on a headless box](/remote_setup/).
Run [rclone config](/commands/rclone_config_create/)
on _another_ machine equipped with _web browser_ and graphical user interface.
on *another* machine equipped with *web browser* and graphical user interface.
Create the [Google Drive remote](/drive/#standard-options).
When done, transfer the resulting `rclone.conf` to the Swarm cluster
and save as `/var/lib/docker-plugins/rclone/config/rclone.conf`
on _every_ node. By default this location is accessible only to the
on *every* node. By default this location is accessible only to the
root user so you will need appropriate privileges. The resulting config
will look like this:
```
```ini
[gdrive]
type = drive
scope = drive
@@ -119,7 +125,8 @@ token = {"access_token":...}
Now create the file named `example.yml` with a swarm stack description
like this:
```
```yml
version: '3'
services:
heimdall:
@@ -137,16 +144,18 @@ volumes:
```
and run the stack:
```
```sh
docker stack deploy example -c ./example.yml
```
After a few seconds docker will spread the parsed stack description
over cluster, create the `example_heimdall` service on port _8080_,
over the cluster, create the `example_heimdall` service on port *8080*,
run service containers on one or more cluster nodes and request
the `example_configdata` volume from rclone plugins on the node hosts.
You can use the following commands to confirm results:
```
```sh
docker service ls
docker service ps example_heimdall
docker volume ls
@@ -163,7 +172,8 @@ the `docker volume remove example_configdata` command on every node.
Volumes can be created with [docker volume create](https://docs.docker.com/engine/reference/commandline/volume_create/).
Here are a few examples:
```
```sh
docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full
docker volume create vol2 -d rclone -o remote=:storj,access_grant=xxx:heimdall
docker volume create vol3 -d rclone -o type=storj -o path=heimdall -o storj-access-grant=xxx -o poll-interval=0
@@ -175,7 +185,8 @@ name `rclone/docker-volume-rclone` because you provided the `--alias rclone`
option.
Volumes can be inspected as follows:
```
```sh
docker volume list
docker volume inspect vol1
```
@@ -184,7 +195,7 @@ docker volume inspect vol1
Rclone flags and volume options are set via the `-o` flag to the
`docker volume create` command. They include backend-specific parameters
as well as mount and _VFS_ options. Also there are a few
as well as mount and *VFS* options. Also there are a few
special `-o` options:
`remote`, `fs`, `type`, `path`, `mount-type` and `persist`.
@@ -192,19 +203,23 @@ special `-o` options:
trailing colon and optionally with a remote path. See the full syntax in
the [rclone documentation](/docs/#syntax-of-remote-paths).
This option can be aliased as `fs` to prevent confusion with the
_remote_ parameter of such backends as _crypt_ or _alias_.
*remote* parameter of such backends as *crypt* or *alias*.
The `remote=:backend:dir/subdir` syntax can be used to create
[on-the-fly (config-less) remotes](/docs/#backend-path-to-dir),
while the `type` and `path` options provide a simpler alternative for this.
Using two split options
```
```sh
-o type=backend -o path=dir/subdir
```
is equivalent to the combined syntax
```
```sh
-o remote=:backend:dir/subdir
```
but is arguably easier to parameterize in scripts.
The `path` part is optional.
@@ -219,7 +234,7 @@ Boolean CLI flags without value will gain the `true` value, e.g.
Please note that you can provide parameters only for the backend immediately
referenced by the backend type of mounted `remote`.
If this is a wrapping backend like _alias, chunker or crypt_, you cannot
If this is a wrapping backend like *alias, chunker or crypt*, you cannot
provide options for the referred to remote or backend. This limitation is
imposed by the rclone connection string parser. The only workaround is to
feed plugin with `rclone.conf` or configure plugin arguments (see below).
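
As an illustration of the first workaround, a sketch: define the wrapped remote
(here a hypothetical crypt remote named `secret`) in the plugin's `rclone.conf`,
then reference it by name when creating the volume:

```sh
# "secret" must already be defined in
# /var/lib/docker-plugins/rclone/config/rclone.conf on the host.
docker volume create encvol -d rclone -o remote=secret:backup -o allow-other=true
```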
@@ -242,17 +257,21 @@ In future it will allow to persist on-the-fly remotes in the plugin
The `remote` value can be extended
with [connection strings](/docs/#connection-strings)
as an alternative way to supply backend parameters. This is equivalent
to the `-o` backend options with one _syntactic difference_.
to the `-o` backend options with one *syntactic difference*.
Inside connection string the backend prefix must be dropped from parameter
names but in the `-o param=value` array it must be present.
For instance, compare the following option array
```
```sh
-o remote=:sftp:/home -o sftp-host=localhost
```
with equivalent connection string:
```
```sh
-o remote=:sftp,host=localhost:/home
```
This difference exists because flag options `-o key=val` include not only
backend parameters but also mount/VFS flags and possibly other settings.
It also allows discriminating the `remote` option from the `crypt-remote`
@@ -261,11 +280,13 @@ due to clearer value substitution.
## Using with Swarm or Compose
Both _Docker Swarm_ and _Docker Compose_ use
Both *Docker Swarm* and *Docker Compose* use
[YAML](http://yaml.org/spec/1.2/spec.html)-formatted text files to describe
groups (stacks) of containers, their properties, networks and volumes.
_Compose_ uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) format,
_Swarm_ uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) format.
*Compose* uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference)
format,
*Swarm* uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
format.
They are mostly similar, differences are explained in the
[docker documentation](https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading).
@@ -274,7 +295,7 @@ Each of them should be named after its volume and have at least two
elements, the self-explanatory `driver: rclone` value and the
`driver_opts:` structure playing the same role as `-o key=val` CLI flags:
```
```yml
volumes:
volume_name_1:
driver: rclone
@@ -287,6 +308,7 @@ volumes:
```
Notice a few important details:
- YAML prefers `_` in option names instead of `-`.
- YAML treats single and double quotes interchangeably.
Simple strings and integers can be left unquoted.
@@ -313,6 +335,7 @@ The plugin requires presence of two directories on the host before it can
be installed. Note that plugin will **not** create them automatically.
By default they must exist on host at the following locations
(though you can tweak the paths):
- `/var/lib/docker-plugins/rclone/config`
is reserved for the `rclone.conf` config file and **must** exist
even if it's empty and the config file is not present.
@@ -321,14 +344,16 @@ By default they must exist on host at the following locations
You can [install managed plugin](https://docs.docker.com/engine/reference/commandline/plugin_install/)
with default settings as follows:
```
```sh
docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone
```
The `:amd64` part of the image specification after colon is called a _tag_.
The `:amd64` part of the image specification after colon is called a *tag*.
Usually you will want to install the latest plugin for your architecture. In
this case the tag will just name it, like `amd64` above. The following plugin
architectures are currently available:
- `amd64`
- `arm64`
- `arm-v7`
@@ -362,7 +387,8 @@ mount namespaces and bind-mounts into requesting user containers.
You can tweak a few plugin settings after installation when it's disabled
(not in use), for instance:
```
```sh
docker plugin disable rclone
docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
docker plugin enable rclone
@@ -377,10 +403,10 @@ plan in advance.
You can tweak the following settings:
`args`, `config`, `cache`, `HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY`
and `RCLONE_VERBOSE`.
It's _your_ task to keep plugin settings in sync across swarm cluster nodes.
It's *your* task to keep plugin settings in sync across swarm cluster nodes.
`args` sets command-line arguments for the `rclone serve docker` command
(_none_ by default). Arguments should be separated by space so you will
(*none* by default). Arguments should be separated by space so you will
normally want to put them in quotes on the
[docker plugin set](https://docs.docker.com/engine/reference/commandline/plugin_set/)
command line. Both [serve docker flags](/commands/rclone_serve_docker/#options)
@@ -402,7 +428,7 @@ at the predefined path `/data/config`. For example, if your key file is
named `sftp-box1.key` on the host, the corresponding volume config option
should read `-o sftp-key-file=/data/config/sftp-box1.key`.
`cache=/host/dir` sets alternative host location for the _cache_ directory.
`cache=/host/dir` sets alternative host location for the *cache* directory.
The plugin will keep VFS caches here. Also it will create and maintain
the `docker-plugin.state` file in this directory. When the plugin is
restarted or reinstalled, it will look in this file to recreate any volumes
@@ -415,13 +441,14 @@ failures, daemon restarts or host reboots.
to `2` (debugging). Verbosity can also be tweaked via `args="-v [-v] ..."`.
Since arguments are more generic, you will rarely need this setting.
The plugin output by default feeds the docker daemon log on local host.
Log entries are reflected as _errors_ in the docker log but retain their
Log entries are reflected as *errors* in the docker log but retain their
actual level assigned by rclone in the encapsulated message string.
`HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY` customize the plugin proxy settings.
You can set custom plugin options right when you install it, _in one go_:
```
You can set custom plugin options right when you install it, *in one go*:
```sh
docker plugin remove rclone
docker plugin install rclone/docker-volume-rclone:amd64 \
--alias rclone --grant-all-permissions \
@@ -435,7 +462,8 @@ The docker plugin volume protocol doesn't provide a way for plugins
to inform the docker daemon that a volume is (un-)available.
As a workaround you can set up a healthcheck to verify that the mount
is responding, for example:
```
```yml
services:
my_service:
image: my_image
@@ -456,8 +484,9 @@ systems. Proceed further only if you are on Linux.
First, [install rclone](/install/).
You can just run it (type `rclone serve docker` and hit enter) for the test.
Install _FUSE_:
```
Install *FUSE*:
```sh
sudo apt-get -y install fuse
```
@@ -466,22 +495,25 @@ Download two systemd configuration files:
and [docker-volume-rclone.socket](https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.socket).
Put them to the `/etc/systemd/system/` directory:
```
```sh
cp docker-volume-rclone.service /etc/systemd/system/
cp docker-volume-rclone.socket /etc/systemd/system/
```
Please note that all commands in this section must be run as _root_ but
Please note that all commands in this section must be run as *root* but
we omit the `sudo` prefix for brevity.
Now create directories required by the service:
```
```sh
mkdir -p /var/lib/docker-volumes/rclone
mkdir -p /var/lib/docker-plugins/rclone/config
mkdir -p /var/lib/docker-plugins/rclone/cache
```
Run the docker plugin service in the socket activated mode:
```
```sh
systemctl daemon-reload
systemctl start docker-volume-rclone.service
systemctl enable docker-volume-rclone.socket
@@ -490,6 +522,7 @@ systemctl restart docker
```
Or run the service directly:
- run `systemctl daemon-reload` to let systemd pick up new config
- run `systemctl enable docker-volume-rclone.service` to make the new
service start automatically when you power on your machine.
@@ -506,39 +539,50 @@ prefer socket activation.
You can [see managed plugin settings](https://docs.docker.com/engine/extend/#debugging-plugins)
with
```
```sh
docker plugin list
docker plugin inspect rclone
```
Note that docker (including latest 20.10.7) will not show actual values
of `args`, just the defaults.
Use `journalctl --unit docker` to see managed plugin output as part of
the docker daemon log. Note that docker reflects plugin lines as _errors_
the docker daemon log. Note that docker reflects plugin lines as *errors*
but their actual level can be seen from encapsulated message string.
You will usually install the latest version of managed plugin for your platform.
Use the following commands to print the actual installed version:
```
```sh
PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
```
You can even use `runc` to run shell inside the plugin container:
```
```sh
sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
```
Also you can use curl to check the plugin socket connectivity:
```
```sh
docker plugin list --no-trunc
PLUGID=123abc...
sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
```
though this is rarely needed.
If the plugin fails to work properly, and only as a last resort after you tried diagnosing with the above methods, you can try clearing the state of the plugin. **Note that all existing rclone docker volumes will probably have to be recreated.** This might be needed because a reinstall don't cleanup existing state files to allow for easy restoration, as stated above.
```
If the plugin fails to work properly, and only as a last resort after you tried
diagnosing with the above methods, you can try clearing the state of the plugin.
**Note that all existing rclone docker volumes will probably have to be recreated.**
This might be needed because a reinstall doesn't clean up existing state files,
to allow for easy restoration, as stated above.
```sh
docker plugin disable rclone # disable the plugin to ensure no interference
sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state
docker plugin enable rclone # re-enable the plugin afterward
@@ -546,20 +590,22 @@ docker plugin enable rclone # re-enable the plugin afterward
## Caveats
Finally I'd like to mention a _caveat with updating volume settings_.
Finally I'd like to mention a *caveat with updating volume settings*.
Docker CLI does not have a dedicated command like `docker volume update`.
It may be tempting to invoke `docker volume create` with updated options
on existing volume, but there is a gotcha. The command will do nothing,
it won't even return an error. I hope that docker maintainers will fix
this some day. In the meantime be aware that you must remove your volume
before recreating it with new settings:
```
```sh
docker volume remove my_vol
docker volume create my_vol -d rclone -o opt1=new_val1 ...
```
and verify that settings did update:
```
```sh
docker volume list
docker volume inspect my_vol
```


@@ -6,9 +6,11 @@ versionIntroduced: "?"
# {{< icon "fa fa-building-columns" >}} DOI
The DOI remote is a read only remote for reading files from digital object identifiers (DOI).
The DOI remote is a read only remote for reading files from digital object
identifiers (DOI).
Currently, the DOI backend supports DOIs hosted with:
- [InvenioRDM](https://inveniosoftware.org/products/rdm/)
- [Zenodo](https://zenodo.org)
- [CaltechDATA](https://data.caltech.edu)
@@ -25,11 +27,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password


@@ -18,11 +18,13 @@ through it.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
r) Rename remote
@@ -97,7 +99,7 @@ See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if using web browser to automatically
token as returned from Google if using web browser to automatically
authenticate. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and it
@@ -108,15 +110,21 @@ You can then use it like this,
List directories in top level of your drive
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your drive
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Scopes
@@ -168,9 +176,9 @@ directories.
### Root folder ID
This option has been moved to the advanced section. You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
of your drive.
This option has been moved to the advanced section. You can set the
`root_folder_id` for rclone. This is the directory (identified by its
`Folder ID`) that rclone considers to be the root of your drive.
Normally you will leave this blank and rclone will determine the
correct root to use itself.
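
If you do need it, a minimal sketch of setting it non-interactively (both the
remote name `gdrive` and the folder ID here are hypothetical):

```sh
# Replace the ID with the Folder ID of the directory you want as root.
rclone config update gdrive root_folder_id 0AxxxxxxxxxxxxUk9PVA
```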
@@ -218,49 +226,51 @@ instead, or set the equivalent environment variable.
Let's say that you are the administrator of a Google Workspace. The
goal is to read or write data on an individual's Drive account, who IS
a member of the domain. We'll call the domain **example.com**, and the
user **foo@example.com**.
a member of the domain. We'll call the domain example.com, and the
user <foo@example.com>.
There's a few steps we need to go through to accomplish this:
##### 1. Create a service account for example.com
- To create a service account and obtain its credentials, go to the
[Google Developer Console](https://console.developers.google.com).
- You must have a project - create one if you don't and make sure you are on the selected project.
- Then go to "IAM & admin" -> "Service Accounts".
- Use the "Create Service Account" button. Fill in "Service account name"
and "Service account ID" with something that identifies your client.
- Select "Create And Continue". Step 2 and 3 are optional.
- Click on the newly created service account
- Click "Keys" and then "Add Key" and then "Create new key"
- Choose type "JSON" and click create
- This will download a small JSON file that rclone will use for authentication.
- To create a service account and obtain its credentials, go to the
[Google Developer Console](https://console.developers.google.com).
- You must have a project - create one if you don't and make sure you are
on the selected project.
- Then go to "IAM & admin" -> "Service Accounts".
- Use the "Create Service Account" button. Fill in "Service account name"
and "Service account ID" with something that identifies your client.
- Select "Create And Continue". Step 2 and 3 are optional.
- Click on the newly created service account
- Click "Keys" and then "Add Key" and then "Create new key"
- Choose type "JSON" and click create
- This will download a small JSON file that rclone will use for authentication.
If you ever need to remove access, press the "Delete service
account key" button.
##### 2. Allowing API access to example.com Google Drive
- Go to example.com's [Workspace Admin Console](https://admin.google.com)
- Go into "Security" (or use the search bar)
- Select "Access and data control" and then "API controls"
- Click "Manage domain-wide delegation"
- Click "Add new"
- In the "Client ID" field enter the service account's
"Client ID" - this can be found in the Developer Console under
"IAM & Admin" -> "Service Accounts", then "View Client ID" for
the newly created service account.
It is a ~21 character numerical string.
- In the next field, "OAuth Scopes", enter
`https://www.googleapis.com/auth/drive`
to grant read/write access to Google Drive specifically.
You can also use `https://www.googleapis.com/auth/drive.readonly` for read only access.
- Click "Authorise"
- Go to example.com's [Workspace Admin Console](https://admin.google.com)
- Go into "Security" (or use the search bar)
- Select "Access and data control" and then "API controls"
- Click "Manage domain-wide delegation"
- Click "Add new"
- In the "Client ID" field enter the service account's
"Client ID" - this can be found in the Developer Console under
"IAM & Admin" -> "Service Accounts", then "View Client ID" for
the newly created service account.
It is a ~21 character numerical string.
- In the next field, "OAuth Scopes", enter
`https://www.googleapis.com/auth/drive`
to grant read/write access to Google Drive specifically.
You can also use `https://www.googleapis.com/auth/drive.readonly` for read
only access.
- Click "Authorise"
##### 3. Configure rclone, assuming a new install
```
```sh
rclone config
n/s/q> n # New
@@ -277,20 +287,23 @@ y/n> # Auto config, n
##### 4. Verify that it's working
- `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
- The arguments do:
- `-v` - verbose logging
- `--drive-impersonate foo@example.com` - this is what does
- `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup`
- The arguments do:
- `-v` - verbose logging
- `--drive-impersonate foo@example.com` - this is what does
the magic, pretending to be user foo.
- `lsf` - list files in a parsing friendly way
- `gdrive:backup` - use the remote called gdrive, work in
- `lsf` - list files in a parsing friendly way
- `gdrive:backup` - use the remote called gdrive, work in
the folder named backup.
Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead:
- in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step 1
- use rclone without specifying the `--drive-impersonate` option, like this:
`rclone -v lsf gdrive:backup`
Note: in case you configured a specific root folder on gdrive and rclone is
unable to access the contents of that folder when using `--drive-impersonate`,
do this instead:
- in the gdrive web interface, share your root folder with the user/email of the
new Service Account you created/selected at step 1
- use rclone without specifying the `--drive-impersonate` option, like this:
`rclone -v lsf gdrive:backup`
### Shared drives (team drives)
@@ -304,7 +317,7 @@ Drive ID if you prefer.
For example:
```
```text
Configure this as a Shared Drive (Team Drive)?
y) Yes
n) No
@@ -341,14 +354,18 @@ docs](/docs/#fast-list) for more details.
It does this by combining multiple `list` calls into a single API request.
This works by combining many `'%s' in parents` filters into one expression.
To list the contents of directories a, b and c, the following requests will be send by the regular `List` function:
```
To list the contents of directories a, b and c, the following requests will be
sent by the regular `List` function:
```text
trashed=false and 'a' in parents
trashed=false and 'b' in parents
trashed=false and 'c' in parents
```
These can now be combined into a single request:
```
```text
trashed=false and ('a' in parents or 'b' in parents or 'c' in parents)
```
@@ -357,7 +374,8 @@ It will use the `--checkers` value to specify the number of requests to run in
In tests, these batch requests were up to 20x faster than the regular method.
Running the following command against different sized folders gives:
```
```sh
rclone lsjson -vv -R --checkers=6 gdrive:folder
```
@@ -396,8 +414,8 @@ revision of that file.
Revisions follow the standard Google policy which at the time of writing
was
* They are deleted after 30 days or 100 revisions (whatever comes first).
* They do not count towards a user storage quota.
- They are deleted after 30 days or 100 revisions (whichever comes first).
- They do not count towards a user storage quota.
### Deleting files
@@ -425,28 +443,40 @@ For shortcuts pointing to files:
- When listing a file shortcut appears as the destination file.
- When downloading the contents of the destination file is downloaded.
- When updating shortcut file with a non shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
- When server-side moving (renaming) the shortcut is renamed, not the destination file.
- When server-side copying the shortcut is copied, not the contents of the shortcut. (unless `--drive-copy-shortcut-content` is in use in which case the contents of the shortcut gets copied).
- When updating a shortcut file with a non shortcut file, the shortcut is removed
  then a new file is uploaded in place of the shortcut.
- When server-side moving (renaming) the shortcut is renamed, not the destination
file.
- When server-side copying the shortcut is copied, not the contents of the shortcut.
(unless `--drive-copy-shortcut-content` is in use in which case the contents of
the shortcut gets copied).
- When deleting the shortcut is deleted not the linked file.
- When setting the modification time, the modification time of the linked file will be set.
- When setting the modification time, the modification time of the linked file
will be set.
For shortcuts pointing to folders:
- When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder appear (including any sub folders)
- When listing, the shortcut appears as a folder and that folder will contain
  the contents of the linked folder (including any sub folders)
- When downloading, the contents of the linked folder and its sub contents are downloaded
- When uploading to a shortcut folder the file will be placed in the linked folder
- When server-side moving (renaming) the shortcut is renamed, not the destination folder
- When server-side moving (renaming) the shortcut is renamed, not the destination
folder
- When server-side copying the contents of the linked folder is copied, not the shortcut.
- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not the linked folder.
- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the linked folder will be deleted.
- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not
the linked folder.
- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the
linked folder will be deleted.
The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts.
The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be
used to create shortcuts.
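
For example, a sketch of the backend command (source and destination paths are
placeholders):

```sh
# Create a shortcut at "dir/shortcut.jpg" pointing to the existing "dir/file.jpg".
rclone backend shortcut drive: dir/file.jpg dir/shortcut.jpg
```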
Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag
or the corresponding `skip_shortcuts` configuration setting.
If you have shortcuts that lead to an infinite recursion in your drive (e.g. a shortcut pointing to a parent folder), `skip_shortcuts` might be mandatory to be able to copy the drive.
If you have shortcuts that lead to an infinite recursion in your drive (e.g. a
shortcut pointing to a parent folder), `skip_shortcuts` might be mandatory to
be able to copy the drive.
### Emptying trash
@@ -512,11 +542,12 @@ Here are some examples for allowed and prohibited conversions.
This limitation can be disabled by specifying `--drive-allow-import-name-change`.
When using this flag, rclone can convert multiple file types resulting
in the same document type at once, e.g. with `--drive-import-formats docx,odt,txt`,
all files having these extension would result in a document represented as a docx file.
all files having these extensions would result in a document represented as a
docx file.
This brings the additional risk of overwriting a document, if multiple files
have the same stem. Many rclone operations will not handle this name change
in any way. They assume an equal name when copying files and might copy the
file again or delete them when the name changes.
file again or delete them when the name changes.
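
As an illustration (the paths and remote name are placeholders):

```sh
# Import txt, odt and docx files as Google Docs, accepting the name change.
rclone copy /path/to/docs gdrive:docs \
    --drive-import-formats docx,odt,txt --drive-allow-import-name-change
```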
Here are the possible export extensions with their corresponding mime types.
Most of these can also be used for importing, but there more that are not


@@ -19,11 +19,13 @@ through it.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
n) New remote
d) Delete remote
q) Quit config
@@ -71,15 +73,21 @@ You can then use it like this,
List directories in top level of your dropbox
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your dropbox
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a dropbox directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Dropbox for business
@@ -146,7 +154,9 @@ In this mode rclone will not use upload batching. This was the default
before rclone v1.55. It has the disadvantage that it is very likely to
encounter `too_many_requests` errors like this
NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
```text
NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds.
```
When rclone receives these it has to wait for 15s or sometimes 300s
before continuing which really slows down transfers.
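
A sketch of explicitly selecting a batch mode instead (the paths are
illustrative):

```sh
# "sync" batch mode groups uploads into batches and waits for each to complete.
rclone copy /path/to/files dropbox:backup --dropbox-batch-mode sync
```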
@@ -215,7 +225,7 @@ Here are some examples of how extensions are mapped:
| Paper template | mydoc.papert | mydoc.papert.html |
| other | mydoc | mydoc.html |
_Importing_ exportable files is not yet supported by rclone.
*Importing* exportable files is not yet supported by rclone.
Here are the supported export extensions known by rclone. Note that
rclone does not currently support other formats not on this list,


@@ -16,16 +16,18 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
The initial setup for 1Fichier involves getting the API key from the website which you
need to do in your browser.
The initial setup for 1Fichier involves getting the API key from the website
which you need to do in your browser.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -66,15 +68,21 @@ Once configured you can then use `rclone` like this,
List directories in top level of your 1Fichier account
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your 1Fichier account
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a 1Fichier directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes


@@ -19,11 +19,13 @@ do in your browser. `rclone config` walks you through it.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -91,15 +93,21 @@ Once configured you can then use `rclone` like this,
List directories in top level of your Enterprise File Fabric
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your Enterprise File Fabric
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to an Enterprise File Fabric directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
@@ -124,7 +132,7 @@ upload an empty file as a single space with a mime type of
`application/vnd.rclone.empty.file` and files with that mime type are
treated as empty.
### Root folder ID ###
### Root folder ID
You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
@@ -140,7 +148,7 @@ In order to do this you will have to find the `Folder ID` of the
directory you wish rclone to display. These aren't displayed in the
web interface, but you can use `rclone lsf` to find them, for example
```
```sh
$ rclone lsf --dirs-only -Fip --csv filefabric:
120673758,Burnt PDFs/
120673759,My Quick Uploads/


@@ -18,11 +18,13 @@ device.
Here is an example of how to make a remote called `filelu`. First, run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -54,7 +56,7 @@ A path without an initial `/` will operate in the `Rclone` directory.
A path with an initial `/` will operate at the root where you can see
the `Rclone` directory.
```
```sh
$ rclone lsf TestFileLu:/
CCTV/
Camera/
@@ -70,55 +72,81 @@ Videos/
Create a new folder named `foldername` in the `Rclone` directory:
rclone mkdir filelu:foldername
```sh
rclone mkdir filelu:foldername
```
Delete a folder on FileLu:
rclone rmdir filelu:/folder/path/
```sh
rclone rmdir filelu:/folder/path/
```
Delete a file on FileLu:
rclone delete filelu:/hello.txt
```sh
rclone delete filelu:/hello.txt
```
List files from your FileLu account:
rclone ls filelu:
```sh
rclone ls filelu:
```
List all folders:
rclone lsd filelu:
```sh
rclone lsd filelu:
```
Copy a specific file to the FileLu root:
rclone copy D:\\hello.txt filelu:
```sh
rclone copy D:\\hello.txt filelu:
```
Copy files from a local directory to a FileLu directory:
rclone copy D:/local-folder filelu:/remote-folder/path/
```sh
rclone copy D:/local-folder filelu:/remote-folder/path/
```
Download a file from FileLu into a local directory:
rclone copy filelu:/file-path/hello.txt D:/local-folder
```sh
rclone copy filelu:/file-path/hello.txt D:/local-folder
```
Move files from a local directory to a FileLu directory:
rclone move D:\\local-folder filelu:/remote-path/
```sh
rclone move D:\\local-folder filelu:/remote-path/
```
Sync files from a local directory to a FileLu directory:
rclone sync --interactive D:/local-folder filelu:/remote-path/
```sh
rclone sync --interactive D:/local-folder filelu:/remote-path/
```
Mount remote to local Linux:
rclone mount filelu: /root/mnt --vfs-cache-mode full
```sh
rclone mount filelu: /root/mnt --vfs-cache-mode full
```
Mount remote to local Windows:
rclone mount filelu: D:/local_mnt --vfs-cache-mode full
```sh
rclone mount filelu: D:/local_mnt --vfs-cache-mode full
```
Get storage info about the FileLu account:
rclone about filelu:
```sh
rclone about filelu:
```
All the other rclone commands are supported by this backend.
@@ -135,8 +163,8 @@ millions of files, duplicate folder names or paths are quite common.
FileLu supports both modification times and MD5 hashes.
FileLu only supports filenames and folder names up to 255 characters in length, where a
character is a Unicode character.
FileLu only supports filenames and folder names up to 255 characters in length,
where a character is a Unicode character.
### Duplicated Files
@@ -155,7 +183,7 @@ key.
If you are connecting to your FileLu remote for the first time and
encounter an error such as:
```
```text
Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials
```


@@ -19,85 +19,97 @@ password. Alternatively, you can authenticate using an API Key from
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> remote
Enter name for new remote.
name> remote
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Files.com
\ "filescom"
[snip]
Storage> filescom
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Files.com
\ "filescom"
[snip]
Storage> filescom
Option site.
Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com)
Enter a value. Press Enter to leave empty.
site> mysite
Option site.
Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com)
Enter a value. Press Enter to leave empty.
site> mysite
Option username.
The username used to authenticate with Files.com.
Enter a value. Press Enter to leave empty.
username> user
Option username.
The username used to authenticate with Files.com.
Enter a value. Press Enter to leave empty.
username> user
Option password.
The password used to authenticate with Files.com.
Choose an alternative below. Press Enter for the default (n).
y) Yes, type in my own password
g) Generate random password
n) No, leave this optional password blank (default)
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Option password.
The password used to authenticate with Files.com.
Choose an alternative below. Press Enter for the default (n).
y) Yes, type in my own password
g) Generate random password
n) No, leave this optional password blank (default)
y/g/n> y
Enter the password:
password:
Confirm the password:
password:
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: filescom
- site: mysite
- username: user
- password: *** ENCRYPTED ***
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
Configuration complete.
Options:
- type: filescom
- site: mysite
- username: user
- password: *** ENCRYPTED ***
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Once configured you can use rclone.
See all files in the top level:
rclone lsf remote:
```sh
rclone lsf remote:
```
Make a new directory in the root:
rclone mkdir remote:dir
```sh
rclone mkdir remote:dir
```
Recursively List the contents:
rclone ls remote:
```sh
rclone ls remote:
```
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
rclone sync --interactive /home/local/directory remote:dir
```sh
rclone sync --interactive /home/local/directory remote:dir
```
### Hashes


@@ -20,14 +20,16 @@ a `/` it is relative to the home directory of the user. An empty path
To create an FTP configuration named `remote`, run
rclone config
```sh
rclone config
```
Rclone config guides you through an interactive setup process. A minimal
rclone FTP remote definition only requires host, username and password.
For an anonymous FTP server, see [below](#anonymous-ftp).
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -86,20 +88,28 @@ y/e/d> y
To see all directories in the home directory of `remote`
rclone lsd remote:
```sh
rclone lsd remote:
```
Make a new directory
rclone mkdir remote:path/to/directory
```sh
rclone mkdir remote:path/to/directory
```
List the contents of a directory
rclone ls remote:path/to/directory
```sh
rclone ls remote:path/to/directory
```
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
rclone sync --interactive /home/local/directory remote:directory
```sh
rclone sync --interactive /home/local/directory remote:directory
```
### Anonymous FTP
@@ -114,8 +124,10 @@ Using [on-the-fly](#backend-path-to-dir) or
such servers, without requiring any configuration in advance. The following
are examples of that:
rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
```sh
rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy)
rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy):
```
The above examples work in Linux shells and in PowerShell, but not Windows
Command Prompt. They execute the [rclone obscure](/commands/rclone_obscure/)
@@ -124,8 +136,10 @@ command to create a password string in the format required by the
an already obscured string representation of the same password "dummy", and
therefore works even in Windows Command Prompt:
rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
```sh
rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM
rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM:
```
### Implicit TLS
@@ -139,7 +153,7 @@ can be set with [`--ftp-port`](#ftp-port).
TLS options for Implicit and Explicit TLS can be set using the
following flags which are specific to the FTP backend:
```
```text
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS)
--ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32)
@@ -147,7 +161,7 @@ following flags which are specific to the FTP backend:
However any of the global TLS flags can also be used such as:
```
```text
--ca-cert stringArray CA certificate used to verify servers
--client-cert string Client SSL certificate (PEM) for mutual TLS auth
--client-key string Client SSL private key (PEM) for mutual TLS auth
@@ -157,7 +171,7 @@ However any of the global TLS flags can also be used such as:
If these need to be put in the config file so they apply to just the
FTP backend then use the `override` syntax, eg
```
```text
override.ca_cert = XXX
override.client_cert = XXX
override.client_key = XXX


@@ -21,11 +21,13 @@ premium account.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -68,11 +70,15 @@ Once configured you can then use `rclone` like this,
List directories and files in the top level of your Gofile
rclone lsf remote:
```sh
rclone lsf remote:
```
To copy a local directory to an Gofile directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
@@ -97,7 +103,6 @@ the following characters are also replaced:
| \ | 0x5C | |
| \| | 0x7C | |
File names can also not start or end with the following characters.
These only get replaced if they are the first or last character in the
name:
@@ -134,7 +139,7 @@ directory you wish rclone to display.
You can do this with rclone
```
```sh
$ rclone lsf -Fip --dirs-only remote:
d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/
f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/
@@ -143,7 +148,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
The ID to use is the part before the `;` so you could set
```
```text
root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
```


@@ -11,17 +11,19 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
## Configuration
The initial setup for google cloud storage involves getting a token from Google Cloud Storage
which you need to do in your browser. `rclone config` walks you
The initial setup for Google Cloud Storage involves getting a token from Google
Cloud Storage which you need to do in your browser. `rclone config` walks you
through it.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
n) New remote
d) Delete remote
q) Quit config
@@ -148,7 +150,7 @@ See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if using web browser to automatically
token as returned from Google if using web browser to automatically
authenticate. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and this
@@ -159,20 +161,28 @@ This remote is called `remote` and can now be used like this
See all the buckets in your project
rclone lsd remote:
```sh
rclone lsd remote:
```
Make a new bucket
rclone mkdir remote:bucket
```sh
rclone mkdir remote:bucket
```
List the contents of a bucket
rclone ls remote:bucket
```sh
rclone ls remote:bucket
```
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
rclone sync --interactive /home/local/directory remote:bucket
```sh
rclone sync --interactive /home/local/directory remote:bucket
```
### Service Account support
@@ -203,52 +213,67 @@ environment variable.
### Service Account Authentication with Access Tokens
Another option for service account authentication is to use access tokens via *gcloud impersonate-service-account*. Access tokens protect security by avoiding the use of the JSON
key file, which can be breached. They also bypass oauth login flow, which is simpler
on remote VMs that lack a web browser.
Another option for service account authentication is to use access tokens via
*gcloud impersonate-service-account*. Access tokens improve security by avoiding
the use of the JSON key file, which can be breached. They also bypass the oauth
login flow, which is simpler on remote VMs that lack a web browser.
If you already have a working service account, skip to step 3.
If you already have a working service account, skip to step 3.
#### 1. Create a service account using
#### 1. Create a service account using gcloud
gcloud iam service-accounts create gcs-read-only
```sh
gcloud iam service-accounts create gcs-read-only
```
You can re-use an existing service account as well (like the one created above).
#### 2. Attach a Viewer (read-only) or User (read-write) role to the service account
$ PROJECT_ID=my-project
$ gcloud --verbose iam service-accounts add-iam-policy-binding \
gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
--member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
--role=roles/storage.objectViewer
#### 2. Attach a Viewer (read-only) or User (read-write) role to the service account
Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles:
```sh
$ PROJECT_ID=my-project
$ gcloud --verbose iam service-accounts add-iam-policy-binding \
gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
--member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
--role=roles/storage.objectViewer
```
* *roles/storage.objectUser* -- read-write access but no admin privileges
* *roles/storage.objectViewer* -- read-only access to objects
* *roles/storage.admin* -- create buckets & administrative roles
Use the Google Cloud console to identify a limited role. Some relevant
pre-defined roles:
- *roles/storage.objectUser* -- read-write access but no admin privileges
- *roles/storage.objectViewer* -- read-only access to objects
- *roles/storage.admin* -- create buckets & administrative roles
#### 3. Get a temporary access key for the service account
$ gcloud auth application-default print-access-token \
--impersonate-service-account \
gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com
```sh
$ gcloud auth application-default print-access-token \
--impersonate-service-account \
gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com
ya29.c.c0ASRK0GbAFEewXD [truncated]
ya29.c.c0ASRK0GbAFEewXD [truncated]
```
#### 4. Update `access_token` setting
hit `CTRL-C` when you see *waiting for code*. This will save the config without doing oauth flow
rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
Hit `CTRL-C` when you see *waiting for code*. This will save the config without
doing the oauth flow.
```sh
rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
```
#### 5. Run rclone as usual
rclone ls dev-gcs:${MY_BUCKET}/
```sh
rclone ls dev-gcs:${MY_BUCKET}/
```
### More Info on Service Accounts
* [Official GCS Docs](https://cloud.google.com/compute/docs/access/service-accounts)
* [Guide on Service Accounts using Key Files (less secure, but similar concepts)](https://forum.rclone.org/t/access-using-google-service-account/24822/2)
- [Official GCS Docs](https://cloud.google.com/compute/docs/access/service-accounts)
- [Guide on Service Accounts using Key Files (less secure, but similar concepts)](https://forum.rclone.org/t/access-using-google-service-account/24822/2)
### Anonymous Access
@@ -299,13 +324,16 @@ Note that the last of these is for setting custom metadata in the form
### Modification times
Google Cloud Storage stores md5sum natively.
Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time
with one-second precision as `goog-reserved-file-mtime` in file metadata.
Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores
modification time with one-second precision as `goog-reserved-file-mtime` in
file metadata.
To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries.
`mtime` uses RFC3339 format with one-nanosecond precision.
`goog-reserved-file-mtime` uses Unix timestamp format with one-second precision.
To get modification time from object metadata, rclone reads the metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object updated time.
To ensure compatibility with gsutil, rclone stores modification time in 2
separate metadata entries. `mtime` uses RFC3339 format with one-nanosecond
precision. `goog-reserved-file-mtime` uses Unix timestamp format with one-second
precision. To get modification time from object metadata, rclone reads the
metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object
updated time.
Note that rclone's default modify window is 1ns.
Files uploaded by gsutil only contain timestamps with one-second precision.
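If a sync keeps re-copying unchanged files that were uploaded by gsutil, widening
the modify window so that sub-second differences are ignored is a reasonable
workaround; a hedged example (paths are illustrative):
```sh
# Consider timestamps equal if they differ by less than one second
rclone sync --modify-window 1s /path/to/local remote:bucket/path
```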

View File

@@ -27,11 +27,13 @@ through it.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -98,7 +100,7 @@ See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from Google if using web browser to automatically
authenticate. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and this
@@ -109,20 +111,28 @@ This remote is called `remote` and can now be used like this
See all the albums in your photos
rclone lsd remote:album
```sh
rclone lsd remote:album
```
Make a new album
rclone mkdir remote:album/newAlbum
```sh
rclone mkdir remote:album/newAlbum
```
List the contents of an album
rclone ls remote:album/newAlbum
```sh
rclone ls remote:album/newAlbum
```
Sync `/home/local/images` to the Google Photos, removing any excess
files in the album.
rclone sync --interactive /home/local/image remote:album/newAlbum
```sh
rclone sync --interactive /home/local/images remote:album/newAlbum
```
### Layout
@@ -139,7 +149,7 @@ Note that all your photos and videos will appear somewhere under
`media`, but they may not appear under `album` unless you've put them
into albums.
```
```text
/
- upload
- file1.jpg
@@ -203,11 +213,13 @@ may create new directories (albums) under `album`. If you copy files
with a directory hierarchy in there then rclone will create albums
with the `/` character in them. For example if you do
rclone copy /path/to/images remote:album/images
```sh
rclone copy /path/to/images remote:album/images
```
and the images directory contains
```
```text
images
- file1.jpg
dir
@@ -220,11 +232,11 @@ images
Then rclone will create the following albums with the following files in
- images
- file1.jpg
- file1.jpg
- images/dir
- file2.jpg
- file2.jpg
- images/dir2/dir3
- file3.jpg
- file3.jpg
This means that you can use the `album` path pretty much like a normal
filesystem and it is a good target for repeated syncing.
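For example, a repeated one-way sync into a nested album path might look like
this (the local path and album names are hypothetical):
```sh
# Creates or updates the album "holidays/2025" as needed
rclone sync --interactive /home/local/holidays/2025 remote:album/holidays/2025
```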

View File

@@ -9,6 +9,7 @@ status: Experimental
Hasher is a special overlay backend to create remotes which handle
checksums for other remotes. Its main functions include:
- Emulate hash types unimplemented by backends
- Cache checksums to help with slow hashing of large local or (S)FTP files
- Warm up checksum cache from external SUM files
@@ -29,8 +30,9 @@ Now proceed to interactive or manual configuration.
### Interactive configuration
Run `rclone config`:
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -76,7 +78,7 @@ usually `YOURHOME/.config/rclone/rclone.conf`.
Open it in your favorite text editor, find section for the base remote
and create new section for hasher like in the following examples:
```
```ini
[Hasher1]
type = hasher
remote = myRemote:path
@@ -91,12 +93,13 @@ max_age = 24h
```
Hasher takes basically the following parameters:
- `remote` is required
- `hashes` is a comma separated list of supported checksums
  (by default `md5,sha1`)
- `max_age` - maximum time to keep a checksum value in the cache
  `0` will disable caching completely
  `off` will cache "forever" (that is until the files get changed)
Make sure the `remote` has a `:` (colon) in it. If you specify the remote without
a colon then rclone will use a local directory of that name. So if you use
@@ -111,7 +114,8 @@ If you use `remote = name` literally then rclone will put files
Now you can use it as `Hasher2:subdir/file` instead of base remote.
Hasher will transparently update cache with new checksums when a file
is fully read or overwritten, like:
```
```sh
rclone copy External:path/file Hasher:dest/path
rclone cat Hasher:path/to/file > /dev/null
@@ -121,14 +125,16 @@ The way to refresh **all** cached checksums (even unsupported by the base backen
for a subtree is to **re-download** all files in the subtree. For example,
use `hashsum --download` using **any** supported hashsum on the command line
(we just care to re-read):
```
```sh
rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null
rclone backend dump Hasher:path/to/subtree
```
You can print or drop hashsum cache using custom backend commands:
```
```sh
rclone backend dump Hasher:dir/subdir
rclone backend drop Hasher:
@@ -139,7 +145,7 @@ rclone backend drop Hasher:
Hasher supports two backend commands: generic SUM file `import` and faster
but less consistent `stickyimport`.
```
```sh
rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4]
```
@@ -148,6 +154,7 @@ can point to either a local or an `other-remote:path` text file in SUM format.
The command will parse the SUM file, then walk down the path given by the
first argument, snapshot current fingerprints and fill in the cache entries
correspondingly.
- Paths in the SUM file are treated as relative to `hasher:dir/subdir`.
- The command will **not** check that supplied values are correct.
You **must know** what you are doing.
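For reference, the SUM file uses the ordinary `sha1sum`/`md5sum` output format,
with paths relative to `hasher:dir/subdir`; a hypothetical `SHA1SUM` file might
look like:
```text
3c363836cf4e16666669a25da280a1865c2d2874  file1.bin
58e6b3a414a1e090dfc6029add0f3555ccba127f  sub/file2.txt
```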
@@ -158,7 +165,7 @@ correspondingly.
`--checkers` to make it faster. Or use `stickyimport` if you don't care
about fingerprints and consistency.
```
```sh
rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1
```

View File

@@ -6,8 +6,9 @@ versionIntroduced: "v1.54"
# {{< icon "fa fa-globe" >}} HDFS
[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a
distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework.
[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html)
is a distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/)
framework.
Paths are specified as `remote:` or `remote:path/to/dir`.
@@ -15,11 +16,13 @@ Paths are specified as `remote:` or `remote:path/to/dir`.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -83,15 +86,21 @@ This remote is called `remote` and can now be used like this
See all the top level directories
rclone lsd remote:
```sh
rclone lsd remote:
```
List the contents of a directory
rclone ls remote:directory
```sh
rclone ls remote:directory
```
Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
rclone sync --interactive remote:directory /home/local/directory
```sh
rclone sync --interactive remote:directory /home/local/directory
```
### Setting up your own HDFS instance for testing
@@ -100,7 +109,7 @@ or use the docker image from the tests:
If you want to build the docker image
```
```sh
git clone https://github.com/rclone/rclone.git
cd rclone/fstest/testserver/images/test-hdfs
docker build --rm -t rclone/test-hdfs .
@@ -108,7 +117,7 @@ docker build --rm -t rclone/test-hdfs .
Or you can just use the latest one pushed
```
```sh
docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs
```
@@ -116,15 +125,15 @@ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:80
For this docker image the remote needs to be configured like this:
```
```ini
[remote]
type = hdfs
namenode = 127.0.0.1:8020
username = root
```
You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data
uploaded will be lost.)
You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use
volumes, so all data uploaded will be lost.)
### Modification times
@@ -136,7 +145,8 @@ No checksums are implemented.
### Usage information
You can use the `rclone about remote:` command which will display filesystem size and current usage.
You can use the `rclone about remote:` command which will display filesystem
size and current usage.
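For example (add `--json` if you want machine-readable output):
```sh
rclone about remote:
rclone about remote: --json
```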
### Restricted filename characters

View File

@@ -18,11 +18,13 @@ which you need to do in your browser.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found - make a new one
n) New remote
s) Set configuration password
@@ -83,34 +85,42 @@ Once configured you can then use `rclone` like this,
List directories in top level of your HiDrive root folder
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your HiDrive filesystem
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a HiDrive directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Keeping your tokens safe
Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text.
Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password.
Therefore you should make sure no one else can access your configuration.
Any OAuth-tokens will be stored by rclone in the remote's configuration file as
unencrypted text. Anyone can use a valid refresh-token to access your HiDrive
filesystem without knowing your password. Therefore you should make sure no one
else can access your configuration.
It is possible to encrypt rclone's configuration file.
You can find information on securing your configuration file by viewing the [configuration encryption docs](/docs/#configuration-encryption).
You can find information on securing your configuration file by viewing the
[configuration encryption docs](/docs/#configuration-encryption).
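As a sketch, on recent versions of rclone you can encrypt the configuration file
like this (older versions use the `s) Set configuration password` option inside
`rclone config` instead):
```sh
# Prompts for a password and encrypts rclone.conf with it
rclone config encryption set
```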
### Invalid refresh token
As can be verified [here](https://developer.hidrive.com/basics-flows/),
As can be verified on [HiDrive's OAuth guide](https://developer.hidrive.com/basics-flows/),
each `refresh_token` (for Native Applications) is valid for 60 days.
If used to access HiDrive, its validity will be automatically extended.
This means that if you
* Don't use the HiDrive remote for 60 days
- Don't use the HiDrive remote for 60 days
then rclone will return an error which includes a text
that implies the refresh token is *invalid* or *expired*.
@@ -119,7 +129,9 @@ To fix this you will need to authorize rclone to access your HiDrive account aga
Using
rclone config reconnect remote:
```sh
rclone config reconnect remote:
```
the process is very similar to the process of initial setup exemplified before.
@@ -141,7 +153,7 @@ Therefore rclone will automatically replace these characters,
if files or folders are stored or accessed with such names.
You can read about how this filename encoding works in general
[here](overview/#restricted-filenames).
in the [main docs](/overview/#restricted-filenames).
Keep in mind that HiDrive only supports file or folder names
with a length of 255 characters or less.
@@ -157,9 +169,9 @@ so you may want to restrict this behaviour on systems with limited resources.
You can customize this behaviour using the following options:
* `chunk_size`: size of file parts
* `upload_cutoff`: files larger or equal to this in size will use a chunked transfer
* `upload_concurrency`: number of file-parts to upload at the same time
- `chunk_size`: size of file parts
- `upload_cutoff`: files larger or equal to this in size will use a chunked transfer
- `upload_concurrency`: number of file-parts to upload at the same time
See the below section about configuration options for more details.
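For example, a hypothetical transfer tuning all three options on the command
line (the values are illustrative, not recommendations):
```sh
rclone copy /local/big-files remote:backup \
    --hidrive-upload-cutoff 100M \
    --hidrive-chunk-size 48M \
    --hidrive-upload-concurrency 4
```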
@@ -176,9 +188,10 @@ This works by prepending the contents of the `root_prefix` option
to any paths accessed by rclone.
For example, the following two ways to access the home directory are equivalent:
rclone lsd --hidrive-root-prefix="/users/test/" remote:path
rclone lsd remote:/users/test/path
```sh
rclone lsd --hidrive-root-prefix="/users/test/" remote:path
rclone lsd remote:/users/test/path
```
See the below section about configuration options for more details.
@@ -187,10 +200,10 @@ See the below section about configuration options for more details.
By default, rclone will know the number of directory members contained in a directory.
For example, `rclone lsd` uses this information.
The acquisition of this information will result in additional time costs for HiDrive's API.
When dealing with large directory structures, it may be desirable to circumvent this time cost,
especially when this information is not explicitly needed.
For this, the `disable_fetching_member_count` option can be used.
The acquisition of this information will result in additional time costs for
HiDrive's API. When dealing with large directory structures, it may be
desirable to circumvent this time cost, especially when this information is not
explicitly needed. For this, the `disable_fetching_member_count` option can be used.
See the below section about configuration options for more details.

View File

@@ -39,11 +39,13 @@ To just download a single file it is easier to use
Here is an example of how to make a remote called `remote`. First
run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -92,15 +94,21 @@ This remote is called `remote` and can now be used like this
See all the top level directories
rclone lsd remote:
```sh
rclone lsd remote:
```
List the contents of a directory
rclone ls remote:directory
```sh
rclone ls remote:directory
```
Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
rclone sync --interactive remote:directory /home/local/directory
```sh
rclone sync --interactive remote:directory /home/local/directory
```
### Read only
@@ -119,11 +127,15 @@ No checksums are stored.
Since the http remote only has one config parameter it is easy to use
without a config file:
rclone lsd --http-url https://beta.rclone.org :http:
```sh
rclone lsd --http-url https://beta.rclone.org :http:
```
or:
rclone lsd :http,url='https://beta.rclone.org':
```sh
rclone lsd :http,url='https://beta.rclone.org':
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/http/http.go then run make backenddocs" >}}
### Standard options

View File

@@ -7,22 +7,28 @@ status: Beta
# {{< icon "fa fa-cloud" >}} iCloud Drive
## Configuration
The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device.
The initial setup for an iCloud Drive backend involves getting a trust token/session.
This can be done by simply using the regular iCloud password, and accepting the code
prompt on another iCloud connected device.
**IMPORTANT**: At the moment an app specific password won't be accepted. Only use your regular password and 2FA.
**IMPORTANT**: At the moment an app specific password won't be accepted. Only
use your regular password and 2FA.
`rclone config` walks you through the token creation. The trust token is valid for 30 days. After which you will have to reauthenticate with `rclone reconnect` or `rclone config`.
`rclone config` walks you through the token creation. The trust token is valid
for 30 days, after which you will have to reauthenticate with `rclone reconnect`
or `rclone config`.
Here is an example of how to make a remote called `iclouddrive`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -78,19 +84,26 @@ y/e/d> y
ADP is currently unsupported and needs to be disabled
On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF.
On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web'
must be ON, and 'Advanced Data Protection' OFF.
## Troubleshooting
### Missing PCS cookies from the request
This means you have Advanced Data Protection (ADP) turned on. This is not supported at the moment. If you want to use rclone you will have to turn it off. See above for how to turn it off.
This means you have Advanced Data Protection (ADP) turned on. This is not supported
at the moment. If you want to use rclone you will have to turn it off. See above
for how to turn it off.
You will need to clear the `cookies` and the `trust_token` fields in the config. Or you can delete the remote config and start again.
You will need to clear the `cookies` and the `trust_token` fields in the config.
Or you can delete the remote config and start again.
You should then run `rclone reconnect remote:`.
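A sketch of that recovery sequence, assuming the remote is called `iclouddrive`:
```sh
rclone config file   # shows where rclone.conf lives
# edit rclone.conf and remove the cookies and trust_token lines for [iclouddrive], then:
rclone reconnect iclouddrive:
```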
Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running `rclone reconnect remote:` until rclone functions properly.
Note that changing the ADP setting may not take effect immediately - you may
need to wait a few hours or a day before you can get rclone to work - keep
clearing the config entry and running `rclone reconnect remote:` until rclone
functions properly.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/iclouddrive/iclouddrive.go then run make backenddocs" >}}
### Standard options

View File

@@ -2,18 +2,19 @@
title: "ImageKit"
description: "Rclone docs for ImageKit backend."
versionIntroduced: "v1.63"
---
# {{< icon "fa fa-cloud" >}} ImageKit
This is a backend for the [ImageKit.io](https://imagekit.io/) storage service.
#### About ImageKit
[ImageKit.io](https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web.
[ImageKit.io](https://imagekit.io/) provides real-time image and video
optimizations, transformations, and CDN delivery. Over 1,000 businesses
and 70,000 developers trust ImageKit with their images and videos on the web.
#### Accounts & Pricing
To use this backend, you need to [create an account](https://imagekit.io/registration/) on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans).
To use this backend, you need to [create an account](https://imagekit.io/registration/)
on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements
grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans).
## Configuration
@@ -21,16 +22,18 @@ Here is an example of making an imagekit configuration.
Firstly create a [ImageKit.io](https://imagekit.io/) account and choose a plan.
You will need to log in and get the `publicKey` and `privateKey` for your account from the developer section.
You will need to log in and get the `publicKey` and `privateKey` for your account
from the developer section.
Now run
```
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -82,20 +85,26 @@ e) Edit this remote
d) Delete this remote
y/e/d> y
```
List directories in the top level of your Media Library
```
```sh
rclone lsd imagekit-media-library:
```
Make a new directory.
```
```sh
rclone mkdir imagekit-media-library:directory
```
List the contents of a directory.
```
```sh
rclone ls imagekit-media-library:directory
```
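To upload a local folder into that directory (the local path is hypothetical):
```sh
rclone copy /path/to/images imagekit-media-library:directory
```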
### Modified time and hashes
ImageKit does not support modification times or hashes yet.

View File

@@ -8,7 +8,8 @@ versionIntroduced: "v1.59"
The Internet Archive backend utilizes Items on [archive.org](https://archive.org/)
Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) for the API this backend uses.
Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html)
for the API this backend uses.
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
@@ -19,31 +20,47 @@ Once you have made a remote, you can use it like this:
Make a new item
rclone mkdir remote:item
```sh
rclone mkdir remote:item
```
List the contents of a item
rclone ls remote:item
```sh
rclone ls remote:item
```
Sync `/home/local/directory` to the remote item, deleting any excess
files in the item.
rclone sync --interactive /home/local/directory remote:item
```sh
rclone sync --interactive /home/local/directory remote:item
```
## Notes
Because of Internet Archive's architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, all uploads/deletes will not show up immediately and takes some time to be available.
The per-item queue is enqueued to an another queue, Item Deriver Queue. [You can check the status of Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) This queue has a limit, and it may block you from uploading, or even deleting. You should avoid uploading a lot of small files for better behavior.
You can optionally wait for the server's processing to finish, by setting non-zero value to `wait_archive` key.
By making it wait, rclone can do normal file comparison.
Make sure to set a large enough value (e.g. `30m0s` for smaller files) as it can take a long time depending on server's queue.
Because of Internet Archive's architecture, it enqueues write operations (and
extra post-processing) in a per-item queue. You can check an item's queue at
<https://catalogd.archive.org/history/item-name-here>. Because of that,
uploads/deletes will not show up immediately and take some time to become available.
The per-item queue is enqueued to another queue, the Item Deriver Queue.
[You can check the status of the Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1)
This queue has a limit, and it may block you from uploading, or even deleting.
You should avoid uploading a lot of small files for better behavior.
You can optionally wait for the server's processing to finish, by setting a
non-zero value for the `wait_archive` key. By making it wait, rclone can do normal
file comparison. Make sure to set a large enough value (e.g. `30m0s` for smaller
files) as it can take a long time depending on the server's queue.
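The option can also be given per-invocation via the backend flag; a sketch
(assuming the standard `--internetarchive-` flag prefix):
```sh
# Wait up to 30 minutes for the item's queue to settle after upload
rclone copy --internetarchive-wait-archive 30m /path/to/files remote:item
```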
## About metadata
This backend supports setting, updating and reading metadata of each file.
The metadata will appear as file metadata on Internet Archive.
However, some fields are reserved by both Internet Archive and rclone.
The following are reserved by Internet Archive:
- `name`
- `source`
- `size`
@@ -56,9 +73,11 @@ The following are reserved by Internet Archive:
- `summation`
Trying to set values to these keys is ignored with a warning.
Only setting `mtime` is an exception. Doing so make it the identical behavior as setting ModTime.
Only setting `mtime` is an exception. Doing so makes the behavior
identical to setting ModTime.
rclone reserves all the keys starting with `rclone-`. Setting value for these keys will give you warnings, but values are set according to request.
rclone reserves all the keys starting with `rclone-`. Setting a value for
these keys will give you warnings, but values are set according to the request.
If there are multiple values for a key, only the first one is returned.
This is a limitation of rclone, which supports only one value per key.
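For example, a hypothetical upload setting one custom metadata key (the global
`--metadata`/`--metadata-set` flags are used for this):
```sh
rclone copy --metadata --metadata-set "subject=backups" file.bin remote:item
```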
@@ -76,7 +95,9 @@ changeable, as they are created by the Internet Archive automatically.
These auto-created files can be excluded from the sync using [metadata
filtering](/filtering/#metadata).
rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"
```sh
rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata"
```
Which excludes from the sync any files which have the
`source=metadata` or `format=Metadata` flags which are added to
@@ -89,12 +110,14 @@ Most applies to the other providers as well, any differences are described [belo
First run
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config

View File

@@ -6,25 +6,27 @@ versionIntroduced: "v1.43"
# {{< icon "fa fa-cloud" >}} Jottacloud
Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters
in Norway. In addition to the official service at [jottacloud.com](https://www.jottacloud.com/),
it also provides white-label solutions to different companies, such as:
* Telia
* Telia Cloud (cloud.telia.se)
* Telia Sky (sky.telia.no)
* Tele2
* Tele2 Cloud (mittcloud.tele2.se)
* Onlime
* Onlime Cloud Storage (onlime.dk)
* Elkjøp (with subsidiaries):
* Elkjøp Cloud (cloud.elkjop.no)
* Elgiganten Sweden (cloud.elgiganten.se)
* Elgiganten Denmark (cloud.elgiganten.dk)
* Giganti Cloud (cloud.gigantti.fi)
* ELKO Cloud (cloud.elko.is)
Jottacloud is a cloud storage service provider from a Norwegian company, using
its own datacenters in Norway. In addition to the official service at
[jottacloud.com](https://www.jottacloud.com/), it also provides white-label
solutions to different companies, such as:
Most of the white-label versions are supported by this backend, although may require different
authentication setup - described below.
- Telia
- Telia Cloud (cloud.telia.se)
- Telia Sky (sky.telia.no)
- Tele2
- Tele2 Cloud (mittcloud.tele2.se)
- Onlime
- Onlime Cloud Storage (onlime.dk)
- Elkjøp (with subsidiaries):
- Elkjøp Cloud (cloud.elkjop.no)
- Elgiganten Sweden (cloud.elgiganten.se)
- Elgiganten Denmark (cloud.elgiganten.dk)
- Giganti Cloud (cloud.gigantti.fi)
- ELKO Cloud (cloud.elko.is)
Most of the white-label versions are supported by this backend, although they may
require a different authentication setup - described below.
Paths are specified as `remote:path`
@@ -32,81 +34,92 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Authentication types
Some of the whitelabel versions uses a different authentication method than the official service,
and you have to choose the correct one when setting up the remote.
Some of the whitelabel versions use a different authentication method than the
official service, and you have to choose the correct one when setting up the remote.
### Standard authentication
The standard authentication method used by the official service (jottacloud.com), as well as
some of the whitelabel services, requires you to generate a single-use personal login token
from the account security settings in the service's web interface. Log in to your account,
go to "Settings" and then "Security", or use the direct link presented to you by rclone when
configuring the remote: <https://www.jottacloud.com/web/secure>. Scroll down to the section
"Personal login token", and click the "Generate" button. Note that if you are using a
whitelabel service you probably can't use the direct link, you need to find the same page in
their dedicated web interface, and also it may be in a different location than described above.
The standard authentication method used by the official service (jottacloud.com),
as well as some of the whitelabel services, requires you to generate a single-use
personal login token from the account security settings in the service's web
interface. Log in to your account, go to "Settings" and then "Security", or use
the direct link presented to you by rclone when configuring the remote:
<https://www.jottacloud.com/web/secure>. Scroll down to the section "Personal login
token", and click the "Generate" button. Note that if you are using a whitelabel
service you probably can't use the direct link, you need to find the same page in
their dedicated web interface, and also it may be in a different location than
described above.
To access your account from multiple instances of rclone, you need to configure each of them
with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one
location, and copy the configuration file to a second location where you also want to run
rclone and access the same remote. Then you need to replace the token for one of them, using
the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) command, which
requires you to generate a new personal login token and supply as input. If you do not
do this, the token may easily end up being invalidated, resulting in both instances failing
with an error message something along the lines of:
To access your account from multiple instances of rclone, you need to configure
each of them with a separate personal login token. E.g. you create a Jottacloud
remote with rclone in one location, and copy the configuration file to a second
location where you also want to run rclone and access the same remote. Then you
need to replace the token for one of them, using the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/)
command, which requires you to generate a new personal login token and supply
as input. If you do not do this, the token may easily end up being invalidated,
resulting in both instances failing with an error message something along the
lines of:
oauth2: cannot fetch token: 400 Bad Request
Response: {"error":"invalid_grant","error_description":"Stale token"}
```text
oauth2: cannot fetch token: 400 Bad Request
Response: {"error":"invalid_grant","error_description":"Stale token"}
```
When this happens, you need to replace the token as described above to be able to use your
remote again.
When this happens, you need to replace the token as described above to be able
to use your remote again.
All personal login tokens you have taken into use will be listed in the web interface under
"My logged in devices", and from the right side of that list you can click the "X" button to
revoke individual tokens.
All personal login tokens you have taken into use will be listed in the web
interface under "My logged in devices", and from the right side of that list
you can click the "X" button to revoke individual tokens.
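For example, on the second machine you would run the following and paste in a
freshly generated personal login token when prompted:
```sh
rclone config reconnect remote:
```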
### Legacy authentication
If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not have the option
to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select
yes when the setup asks for legacy authentication and enter your username and password.
The rest of the setup is identical to the default setup.
If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not
have the option to generate a CLI token. In this case you'll have to use the
legacy authentication. To do this select yes when the setup asks for legacy
authentication and enter your username and password. The rest of the setup is
identical to the default setup.
### Telia Cloud authentication
Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and
additionally uses a separate authentication flow where the username is generated internally. To setup
rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is
Similar to other whitelabel versions, Telia Cloud doesn't offer the option of
creating a CLI token, and additionally uses a separate authentication flow
where the username is generated internally. To set up rclone to use Telia Cloud,
choose Telia Cloud authentication in the setup. The rest of the setup is
identical to the default setup.
### Tele2 Cloud authentication
As Tele2-Com Hem merger was completed this authentication can be used for former Com Hem Cloud and
Tele2 Cloud customers as no support for creating a CLI token exists, and additionally uses a separate
authentication flow where the username is generated internally. To setup rclone to use Tele2 Cloud,
choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup.
As the Tele2-Com Hem merger was completed, this authentication can be used by former
Com Hem Cloud and Tele2 Cloud customers, as no support for creating a CLI token
exists. It additionally uses a separate authentication flow where the username
is generated internally. To set up rclone to use Tele2 Cloud, choose Tele2 Cloud
authentication in the setup. The rest of the setup is identical to the default setup.
### Onlime Cloud Storage authentication
Onlime has sold access to Jottacloud proper, while providing localized support to Danish Customers, but
have recently set up their own hosting, transferring their customers from Jottacloud servers to their
own ones.
Onlime has sold access to Jottacloud proper, while providing localized support
to Danish customers, but has recently set up its own hosting, transferring
its customers from Jottacloud servers to its own ones.
This, of course, necessitates using their servers for authentication, but otherwise functionality and
architecture seems equivalent to Jottacloud.
This, of course, necessitates using their servers for authentication, but
otherwise functionality and architecture seem equivalent to Jottacloud.
To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest
of the setup is identical to the default setup.
To set up rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication
in the setup. The rest of the setup is identical to the default setup.
## Configuration
Here is an example of how to make a remote called `remote` with the default setup. First run:
Here is an example of how to make a remote called `remote` with the default setup.
First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -197,15 +210,21 @@ Once configured you can then use `rclone` like this,
List directories in top level of your Jottacloud
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your Jottacloud
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a Jottacloud directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Devices and Mountpoints
@@ -286,18 +305,21 @@ as they can't be used in XML strings.
### Deleting files
By default, rclone will send all files to the trash when deleting files. They will be permanently
deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately
by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable.
Emptying the trash is supported by the [cleanup](/commands/rclone_cleanup/) command.
By default, rclone will send all files to the trash when deleting files. They
will be permanently deleted automatically after 30 days. You may bypass the
trash and permanently delete files immediately by using the [--jottacloud-hard-delete](#jottacloud-hard-delete)
flag, or set the equivalent environment variable. Emptying the trash is
supported by the [cleanup](/commands/rclone_cleanup/) command.
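A short sketch of both behaviours (the path is hypothetical):
```sh
rclone delete remote:path/to/dir --jottacloud-hard-delete   # bypass the trash entirely
rclone cleanup remote:                                      # permanently empty the trash
```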
### Versions
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it.
Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
Jottacloud supports file versioning. When rclone uploads a new version of a
file, a new version is created on the remote. Currently rclone only supports
retrieving the current version, but older versions can be accessed via the
Jottacloud Website.
Versioning can be disabled by `--jottacloud-no-versions` option. This is achieved by deleting the remote file prior to uploading
a new version. If the upload the fails no version of the file will be available in the remote.
Versioning can be disabled by the `--jottacloud-no-versions` option. This is
achieved by deleting the remote file prior to uploading a new version. If the
upload fails, no version of the file will be available in the remote.
### Quota information

View File

@@ -19,11 +19,13 @@ giving the password a nice name like `rclone` and clicking on generate.
Here is an example of how to make a remote called `koofr`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -89,15 +91,21 @@ Once configured you can then use `rclone` like this,
List directories in top level of your Koofr
rclone lsd koofr:
```sh
rclone lsd koofr:
```
List all the files in your Koofr
rclone ls koofr:
```sh
rclone ls koofr:
```
To copy a local directory to a Koofr directory called backup
rclone copy /home/source koofr:backup
```sh
rclone copy /home/source koofr:backup
```
### Restricted filename characters
@@ -245,11 +253,13 @@ provides a Koofr API.
Here is an example of how to make a remote called `ds`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -312,11 +322,13 @@ You may also want to use another, public or private storage provider that runs a
Here is an example of how to make a remote called `other`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password

View File

@@ -14,11 +14,13 @@ Here is an example of making a remote for Linkbox.
First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password

View File

@@ -8,7 +8,9 @@ versionIntroduced: "v0.91"
Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so
rclone sync --interactive /home/source /tmp/destination
```sh
rclone sync --interactive /home/source /tmp/destination
```
Will sync `/home/source` to `/tmp/destination`.
@@ -25,7 +27,7 @@ Rclone reads and writes the modification times using an accuracy determined
by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 second
on OS X.
### Filenames ###
### Filenames
Filenames should be encoded in UTF-8 on disk. This is the normal case
for Windows and OS X.
@@ -41,7 +43,7 @@ be replaced with a quoted representation of the invalid bytes. The name
`gro\xdf` will be transferred as `groDF`. `rclone` will emit a debug
message in this case (use `-v` to see), e.g.
```
```text
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
```
@@ -117,7 +119,7 @@ These only get replaced if they are the last character in the name:
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be converted to UTF-16.
### Paths on Windows ###
### Paths on Windows
On Windows there are many ways of specifying a path to a file system resource.
Local paths can be absolute, like `C:\path\to\wherever`, or relative,
@@ -133,10 +135,11 @@ so in most cases you do not have to worry about this (read more [below](#long-pa
Using the same prefix `\\?\` it is also possible to specify path to volumes
identified by their GUID, e.g. `\\?\Volume{b75e2c83-0000-0000-0000-602f00000000}\some\path`.
#### Long paths ####
#### Long paths
Rclone handles long paths automatically, by converting all paths to
[extended-length path format](https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation), which allows paths up to 32,767 characters.
[extended-length path format](https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation),
which allows paths up to 32,767 characters.
This conversion will ensure paths are absolute and prefix them with
the `\\?\`. This is why you will see that your paths, for instance
@@ -147,18 +150,19 @@ However, in rare cases this may cause problems with buggy file
system drivers like [EncFS](https://github.com/rclone/rclone/issues/261).
To disable UNC conversion globally, add this to your `.rclone.conf` file:
```
```ini
[local]
nounc = true
```
If you want to selectively disable UNC, you can add it to a separate entry like this:
```
```ini
[nounc]
type = local
nounc = true
```
And use rclone like this:
`rclone copy c:\src nounc:z:\dst`
@@ -180,7 +184,7 @@ This flag applies to all commands.
For example, supposing you have a directory structure like this
```
```sh
$ tree /tmp/a
/tmp/a
├── b -> ../b
@@ -192,7 +196,7 @@ $ tree /tmp/a
Then you can see the difference with and without the flag like this
```
```sh
$ rclone ls /tmp/a
6 one
6 two/three
@@ -200,7 +204,7 @@ $ rclone ls /tmp/a
and
```
```sh
$ rclone -L ls /tmp/a
4174 expected
6 one
@@ -209,7 +213,7 @@ $ rclone -L ls /tmp/a
6 b/one
```
#### --local-links, --links, -l
Normally rclone will ignore symlinks or junction points (which behave
like symlinks under Windows).
@@ -223,7 +227,7 @@ This flag applies to all commands.
For example, supposing you have a directory structure like this
```
```sh
$ tree /tmp/a
/tmp/a
├── file1 -> ./file4
@@ -232,13 +236,13 @@ $ tree /tmp/a
Copying the entire directory with '-l'
```
$ rclone copy -l /tmp/a/ remote:/tmp/a/
```sh
rclone copy -l /tmp/a/ remote:/tmp/a/
```
The remote files are created with a `.rclonelink` suffix
```
```sh
$ rclone ls remote:/tmp/a
5 file1.rclonelink
14 file2.rclonelink
@@ -246,7 +250,7 @@ $ rclone ls remote:/tmp/a
The remote files will contain the target of the symbolic links
```
```sh
$ rclone cat remote:/tmp/a/file1.rclonelink
./file4
@@ -256,7 +260,7 @@ $ rclone cat remote:/tmp/a/file2.rclonelink
Copying them back with '-l'
```
```sh
$ rclone copy -l remote:/tmp/a/ /tmp/b/
$ tree /tmp/b
@@ -267,7 +271,7 @@ $ tree /tmp/b
However, if copied back without '-l'
```
```sh
$ rclone copyto remote:/tmp/a/ /tmp/b/
$ tree /tmp/b
@@ -278,7 +282,7 @@ $ tree /tmp/b
If you want to copy a single file with `-l` then you must use the `.rclonelink` suffix.
```
```sh
$ rclone copy -l remote:/tmp/a/file1.rclonelink /tmp/c
$ tree /tmp/c
@@ -302,7 +306,7 @@ different file systems.
For example if you have a directory hierarchy like this
```
```sh
root
├── disk1 - disk1 mounted on the root
│   └── file3 - stored on disk1
@@ -312,15 +316,16 @@ root
└── file2 - stored on the root disk
```
Using `rclone --one-file-system copy root remote:` will only copy `file1` and `file2`. Eg
Using `rclone --one-file-system copy root remote:` will only copy `file1`
and `file2`. E.g.
```
```sh
$ rclone -q --one-file-system ls root
0 file1
0 file2
```
```
```sh
$ rclone -q ls root
0 disk1/file3
0 disk2/file4

View File

@@ -6,7 +6,10 @@ versionIntroduced: "v1.50"
# {{< icon "fas fa-at" >}} Mail.ru Cloud
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS.
[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a
Russian internet company [Mail.Ru Group](https://mail.ru). The official
desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows
and Mac OS.
## Features highlights
@@ -14,12 +17,13 @@ versionIntroduced: "v1.50"
- Files have a `last modified time` property, directories don't
- Deleted files are by default moved to the trash
- Files and directories can be shared via public links
- Partial uploads or streaming are not supported, file size must be known before upload
- Partial uploads or streaming are not supported, file size must be known before
upload
- Maximum file size is limited to 2G for a free account, unlimited for paid accounts
- Storage keeps hash for all files and performs transparent deduplication,
the hash algorithm is a modified SHA1
- If a particular file is already present in storage, one can quickly submit file hash
instead of long file upload (this optimization is supported by rclone)
- If a particular file is already present in storage, one can quickly submit file
hash instead of long file upload (this optimization is supported by rclone)
## Configuration
@@ -35,16 +39,22 @@ give an error like `oauth2: server response missing access_token`.
- Go to Security / "Пароль и безопасность"
- Click password for apps / "Пароли для внешних приложений"
- Add the password - give it a name - eg "rclone"
- Select the permissions level. For some reason just "Full access to Cloud" (WebDav) doesn't work for Rclone currently. You have to select "Full access to Mail, Cloud and Calendar" (all protocols). ([thread on forum.rclone.org](https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298))
- Copy the password and use this password below - your normal login password won't work.
- Select the permissions level. For some reason just "Full access to Cloud"
(WebDav) doesn't work for Rclone currently. You have to select "Full access
to Mail, Cloud and Calendar" (all protocols).
([thread on forum.rclone.org](https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298))
- Copy the password and use this password below - your normal login password
won't work.
Now run
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -109,20 +119,28 @@ You can use the configured backend as shown below:
See top level directories
rclone lsd remote:
```sh
rclone lsd remote:
```
Make a new directory
rclone mkdir remote:directory
```sh
rclone mkdir remote:directory
```
List the contents of a directory
rclone ls remote:directory
```sh
rclone ls remote:directory
```
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
rclone sync --interactive /home/local/directory remote:directory
```sh
rclone sync --interactive /home/local/directory remote:directory
```
### Modification times and hashes

View File

@@ -23,11 +23,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -65,22 +67,29 @@ d) Delete this remote
y/e/d> y
```
**NOTE:** The encryption keys need to have been already generated after a regular login
via the browser, otherwise attempting to use the credentials in `rclone` will fail.
**NOTE:** The encryption keys need to have been already generated after a regular
login via the browser, otherwise attempting to use the credentials in `rclone`
will fail.
Once configured you can then use `rclone` like this,
List directories in top level of your Mega
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your Mega
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to an Mega directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
@@ -110,26 +119,26 @@ Use `rclone dedupe` to fix duplicated files.
#### Object not found
If you are connecting to your Mega remote for the first time,
to test access and synchronization, you may receive an error such as
```
Failed to create file system for "my-mega-remote:":
```text
Failed to create file system for "my-mega-remote:":
couldn't login: Object (typically, node or user) not found
```
The diagnostic steps often recommended in the [rclone forum](https://forum.rclone.org/search?q=mega)
start with the **MEGAcmd** utility. Note that this refers to
the official C++ command from <https://github.com/meganz/MEGAcmd>
and not the go language built command from t3rm1n4l/megacmd
that is no longer maintained.
Follow the instructions for installing MEGAcmd and try accessing
your remote as they recommend. You can establish whether or not
you can log in using MEGAcmd, and obtain diagnostic information
to help you, and search or work with others in the forum.
```
```text
MEGA CMD> login me@example.com
Password:
Fetching nodes ...
@@ -138,12 +147,11 @@ Login complete as me@example.com
me@example.com:/$
```
Note that some have found issues with passwords containing special
characters. If you can not log on with rclone, but MEGAcmd logs on
just fine, then consider changing your password temporarily to
pure alphanumeric characters, in case that helps.
#### Repeated commands block access
Mega remotes seem to get blocked (reject logins) under "heavy use".

View File

@@ -18,8 +18,8 @@ s3). Because it has no parameters you can just use it with the
You can configure it as a remote like this with `rclone config` too if
you want to:
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -50,9 +50,11 @@ y/e/d> y
Because the memory backend isn't persistent it is most useful for
testing or with an rclone server or rclone mount, e.g.
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:
```sh
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:
```
### Modification times and hashes

View File

@@ -8,16 +8,22 @@ versionIntroduced: "v1.58"
Paths are specified as `remote:`
You may put subdirectories in too, e.g. `remote:/path/to/dir`.
If you have a CP code you can use that as the folder after the domain such as \<domain>\/\<cpcode>\/\<internal directories within cpcode>.
If you have a CP code you can use that as the folder after the domain such
as \<domain>\/\<cpcode>\/\<internal directories within cpcode>.
For example, this is commonly configured with or without a CP code:
* **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/`
* **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net`
- **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/`
- **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net`
See all buckets
rclone lsd remote:
The initial setup for Netstorage involves getting an account and secret. Use `rclone config` to walk you through the setup process.
```sh
rclone lsd remote:
```
The initial setup for Netstorage involves getting an account and secret.
Use `rclone config` to walk you through the setup process.
## Configuration
@@ -25,155 +31,216 @@ Here's an example of how to make a remote called `ns1`.
1. To begin the interactive configuration process, enter this command:
```
rclone config
```
```sh
rclone config
```
2. Type `n` to create a new remote.
```
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
```
```text
n) New remote
d) Delete remote
q) Quit config
e/n/d/q> n
```
3. For this example, enter `ns1` when you reach the name> prompt.
```
name> ns1
```
```text
name> ns1
```
4. Enter `netstorage` as the type of storage to configure.
```
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / NetStorage
\ "netstorage"
Storage> netstorage
```
```text
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
XX / NetStorage
\ "netstorage"
Storage> netstorage
```
5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.
5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS,
which is the default. HTTP is provided primarily for debugging purposes.
```text
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / HTTP protocol
\ "http"
2 / HTTPS protocol
\ "https"
protocol> 1
```
```
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / HTTP protocol
\ "http"
2 / HTTPS protocol
\ "https"
protocol> 1
```
6. Specify your NetStorage host, CP code, and any necessary content paths using
this format: `<domain>/<cpcode>/<content>/`
6. Specify your NetStorage host, CP code, and any necessary content paths using this format: `<domain>/<cpcode>/<content>/`
```
Enter a string value. Press Enter for the default ("").
host> baseball-nsu.akamaihd.net/123456/content/
```
```text
Enter a string value. Press Enter for the default ("").
host> baseball-nsu.akamaihd.net/123456/content/
```
7. Set the netstorage account name
```
Enter a string value. Press Enter for the default ("").
account> username
```
8. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the `y` option to set your own password then enter your secret.
```text
Enter a string value. Press Enter for the default ("").
account> username
```
8. Set the Netstorage account secret/G2O key which will be used for authentication
purposes. Select the `y` option to set your own password then enter your secret.
Note: The secret is stored in the `rclone.conf` file with hex-encoded encryption.
```
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
```
```text
y) Yes type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
```
9. View the summary and confirm your remote configuration.
```
[ns1]
type = netstorage
protocol = http
host = baseball-nsu.akamaihd.net/123456/content/
account = username
secret = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
```text
[ns1]
type = netstorage
protocol = http
host = baseball-nsu.akamaihd.net/123456/content/
account = username
secret = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
This remote is called `ns1` and can now be used.
## Example operations
Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/.
Get started with rclone and NetStorage with these examples. For additional rclone
commands, visit <https://rclone.org/commands/>.
### See contents of a directory in your project
```sh
rclone lsd ns1:/974012/testing/
```
### Sync local contents with the remote
```sh
rclone sync . ns1:/974012/testing/
```
### Upload local content to remote
```sh
rclone copy notes.txt ns1:/974012/testing/
```
### Delete content on remote
```sh
rclone delete ns1:/974012/testing/notes.txt
```
### Move or copy content between CP codes
Your credentials must have access to two CP codes on the same remote.
You can't perform operations between different remotes.
```sh
rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
```
## Features
### Symlink Support
The Netstorage backend changes the rclone `--links, -l` behavior. When uploading,
instead of creating the .rclonelink file, rclone uses the "symlink" API to create
the corresponding symlink on the remote. The .rclonelink file will not be created;
the upload will be intercepted and only the symlink file that matches the source
file name with no suffix will be created on the remote.
This will effectively allow commands like copy/copyto, move/moveto and sync to
upload from local to remote and download from remote to local directories with
symlinks. Due to internal rclone limitations, it is not possible to upload an
individual symlink file to any remote backend. You can always use the "backend
symlink" command to create a symlink on the NetStorage server, refer to "symlink"
section below.
Individual symlink files on the remote can be used with commands like "cat"
to print the destination name, "delete" to delete the symlink, or copy/copyto
and move/moveto to download from the remote to local. Note: individual symlink
files on the remote should be specified including the suffix .rclonelink.
**Note**: No file with the suffix .rclonelink should ever exist on the server
since it is not possible to actually upload/create a file with .rclonelink suffix
with rclone, it can only exist if it is manually created through a non-rclone
method on the remote.
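For example, a local tree containing symlinks might be uploaded like this
(the paths here are illustrative):

```sh
# With --links, the upload is intercepted and a real symlink is created
# on NetStorage instead of a .rclonelink file
rclone copy --links /local/tree ns1:/974012/testing/

# Print the destination of an individual symlink on the remote;
# note the .rclonelink suffix when addressing it directly
rclone cat ns1:/974012/testing/link-name.rclonelink
```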
### Implicit vs. Explicit Directories
With NetStorage, directories can exist in one of two forms:
1. **Explicit Directory**. This is an actual, physical directory that you have
created in a storage group.
2. **Implicit Directory**. This refers to a directory within a path that has
not been physically created. For example, during upload of a file, nonexistent
subdirectories can be specified in the target path. NetStorage creates these
as "implicit." While the directories aren't physically created, they exist
implicitly and the noted path is connected with the uploaded file.
Rclone will intercept all file uploads and mkdir commands for the NetStorage
remote and will explicitly issue the mkdir command for each directory in the
uploading path. This will help with the interoperability with the other Akamai
services such as SFTP and the Content Management Shell (CMShell). Rclone will
not guarantee correctness of operations with implicit directories which might
have been created as a result of using an upload API directly.
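For example, copying to a deep path that does not exist yet (a hypothetical
path) will create each intermediate directory explicitly:

```sh
# rclone issues mkdir for every directory in the upload path,
# so all of these become explicit directories
rclone copy notes.txt ns1:/974012/projects/2024/reports/
```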
### `--fast-list` / ListR support
NetStorage remote supports the ListR feature by using the "list" NetStorage API
action to return a lexicographical list of all objects within the specified CP
code, recursing into subdirectories as they're encountered.
- **Rclone will use the ListR method for some commands by default**. Commands
such as `lsf -R` will use ListR by default. To disable this, include the
`--disable listR` option to use the non-recursive method of listing objects.
- **Rclone will not use the ListR method for some commands**. Commands such as
`sync` don't use ListR by default. To force using the ListR method, include the
`--fast-list` option.
There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list).
In general, the sync command over an existing deep tree on the remote will
run faster with the "--fast-list" flag but with extra memory usage as a side effect.
It might also result in higher CPU utilization but the whole task can be completed
faster.
**Note**: There is a known limitation that "lsf -R" will display number of files
in the directory and directory size as -1 when ListR method is used. The workaround
is to pass "--disable listR" flag if these numbers are important in the output.
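As a sketch of the behaviours described above (paths are illustrative):

```sh
# lsf -R uses ListR by default; disable it if accurate per-directory
# file counts and sizes are needed in the output
rclone lsf -R ns1:/974012/testing/
rclone lsf -R --disable listR ns1:/974012/testing/

# sync does not use ListR by default; --fast-list forces it
rclone sync --fast-list /local/tree ns1:/974012/testing/
```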
### Purge
NetStorage remote supports the purge feature by using the "quick-delete"
NetStorage API action. The quick-delete action is disabled by default for security
reasons and can be enabled for the account through the Akamai portal. Rclone
will first try to use quick-delete action for the purge command and if this
functionality is disabled then will fall back to a standard delete method.
**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html)
for considerations when using "quick-delete". In general, using quick-delete
method will not delete the tree immediately and objects targeted for
quick-delete may still be accessible.
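For example (the path is illustrative):

```sh
# Tries the quick-delete API action first, falling back to a
# standard delete if quick-delete is disabled for the account
rclone purge ns1:/974012/testing/
```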
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/netstorage/netstorage.go then run make backenddocs" >}}
### Standard options
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
e) Edit existing remote
n) New remote
d) Delete remote
```

Once configured you can then use `rclone` like this,
List directories in top level of your OneDrive
```sh
rclone lsd remote:
```
List all the files in your OneDrive
```sh
rclone ls remote:
```
To copy a local directory to an OneDrive directory called backup
```sh
rclone copy /home/source remote:backup
```
### Getting your own Client ID and Key
rclone uses a default Client ID when talking to OneDrive, unless a custom
`client_id` is specified in the config. The default Client ID and Key are
shared by all rclone users when performing requests.
You may choose to create and use your own Client ID, in case the default one
does not work well for you. For example, you might see throttling.
#### Creating Client ID for OneDrive Personal
To create your own Client ID, please follow these steps:
1. Open <https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview>
and then under the `Add` menu click `App registration`.
- If you have not created an Azure account, you will be prompted to. This is free,
but you need to provide a phone number, address, and credit card for identity
verification.
2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`,
select `Web` in `Redirect URI`, then type (do not copy and paste)
`http://localhost:53682/` and click Register. Copy and keep the
`Application (client) ID` under the app name for later use.
3. Under `manage` select `Certificates & secrets`, click `New client secret`.
Enter a description (can be anything) and set `Expires` to 24 months.
Copy and keep that secret *Value* for later use (you *won't* be able to see
this value afterwards).
4. Under `manage` select `API permissions`, click `Add a permission` and select
`Microsoft Graph` then select `delegated permissions`.
5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`,
`Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and
`Sites.Read.All` (if custom access scopes are configured, select the
permissions accordingly). Once selected click `Add permissions` at the bottom.
Now the application is complete. Run `rclone config` to create or edit a OneDrive
remote. Supply the app ID and password as Client ID and Secret, respectively.
rclone will walk you through the remaining steps.
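As a non-interactive sketch with a recent rclone (the remote name and both
credential values are placeholders), the same values can be supplied on the
command line:

```sh
rclone config create remote onedrive \
    client_id=YOUR_APPLICATION_CLIENT_ID \
    client_secret=YOUR_CLIENT_SECRET_VALUE
```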
The `access_scopes` option allows you to configure the permissions requested by rclone.
See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions)
for more information about the different scopes.
The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883).
However, if that permission is not assigned, you need to exclude `Sites.Read.All`
from your access scopes or set `disable_site_permission` option to true in the
advanced options.
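For example, to exclude `Sites.Read.All`, the requested scopes might be trimmed
like this in the config file (the remote name is a placeholder):

```ini
[remote]
type = onedrive
access_scopes = Files.Read Files.ReadWrite Files.Read.All Files.ReadWrite.All offline_access User.Read
```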
#### Creating Client ID for OneDrive Business
The steps for OneDrive Personal may or may not work for OneDrive Business,
depending on the security settings of the organization.
A common error is that the publisher of the App is not verified.
You may try to [verify your account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview),
or try to limit the App to your organization only, as shown below.
1. Make sure to create the App with your business account.
2. Follow the steps above to create an App. However, we need a different account
type here: `Accounts in this organizational directory only (*** - Single tenant)`.
Note that you can also change the account type after creating the App.
3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant)
of your organization.
4. In the rclone config, set `auth_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize`.
5. In the rclone config, set `token_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token`.
Note: If you have a special region, you may need a different host in step 4 and 5.
Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86).
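Put together, a single-tenant remote might end up with a config section like
this sketch (all IDs and the secret are placeholders):

```ini
[business]
type = onedrive
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
auth_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize
token_url = https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token
```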
### Using OAuth Client Credential flow
This flow can be enabled by following the steps below:
1. Create the Enterprise App registration in the Azure AD portal and obtain a
Client ID and Client Secret as described above.
2. Ensure that the application has the appropriate permissions and they are
assigned as *Application Permissions*
3. Configure the remote, ensuring that *Client ID* and *Client Secret* are
entered correctly.
4. In the *Advanced Config* section, enter `true` for `client_credentials` and
in the `tenant` section enter the tenant ID.
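A resulting config section might look like this sketch (IDs and secret are
placeholders):

```ini
[onedrive-cc]
type = onedrive
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
client_credentials = true
tenant = YOUR_TENANT_ID
```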
When it comes to choosing the type of the connection, work with the
client credentials flow. In particular the "onedrive" option does not
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
n) New remote
d) Delete remote
q) Quit config
y/e/d> y
```
List directories in top level of your OpenDrive
```sh
rclone lsd remote:
```
List all the files in your OpenDrive
```sh
rclone ls remote:
```
To copy a local directory to an OpenDrive directory called backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
These only get replaced if they are the first or last character in the name:

| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| VT        | 0x0B  | ␋           |
| CR        | 0x0D  | ␍           |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
# {{< icon "fa fa-cloud" >}} Oracle Object Storage
- [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm)
- [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/)
- [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf)
Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command).
You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Sample command to transfer local artifacts to remote:bucket in oracle object storage:
```sh
rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv
```
## Configuration
Here is an example of making an Oracle Object Storage configuration. `rclone config`
walks you through it.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
n) New remote
d) Delete remote
r) Rename remote
y/e/d> y
```
See all buckets
```sh
rclone lsd remote:
```
Create a new bucket
```sh
rclone mkdir remote:bucket
```
List the contents of a bucket
```sh
rclone ls remote:bucket
rclone ls remote:bucket --max-depth 1
```
## Authentication Providers
These choices can be specified in the rclone config file.
Rclone supports the following OCI authentication providers.
```text
User Principal
Instance Principal
Resource Principal
Workload Identity
No authentication
```
### User Principal
Sample rclone config file for Authentication Provider User Principal:
```ini
[oos]
type = oracleobjectstorage
namespace = id<redacted>34
compartment = ocid1.compartment.oc1..aa<redacted>ba
region = us-ashburn-1
provider = user_principal_auth
config_file = /home/opc/.oci/config
config_profile = Default
```
Advantages:
- One can use this method from any server within OCI, on-premises, or from
  another cloud provider.
Considerations:
- you need to configure user privileges / policy to allow access to object
  storage
- Overhead of managing users and keys.
- If the user is deleted, the config file will no longer work and may cause
automation regressions that use the user's credentials.
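The `config_file` referenced above uses the standard OCI SDK/CLI format; a
minimal sketch with placeholder OCIDs, fingerprint and key path:

```ini
[Default]
user=ocid1.user.oc1..aa<redacted>
fingerprint=11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:00
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aa<redacted>
region=us-ashburn-1
```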
### Instance Principal
An OCI compute instance can be authorized to use rclone by using its identity
and certificates as an instance principal. With this approach no credentials
have to be stored and managed.
Sample rclone configuration file for Authentication Provider Instance Principal:
```sh
[opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
[oos]
type = oracleobjectstorage
namespace = id<redacted>fn
compartment = ocid1.compartment.oc1..aa<redacted>k7a
region = us-ashburn-1
provider = instance_principal_auth
```
Advantages:
- With instance principals, you don't need to configure user credentials and
  transfer/save them to disk in your compute instances or rotate the credentials.
- You don't need to deal with users and keys.
- Greatly helps in automation as you don't have to manage access keys, user
private keys, storing them in vault, using kms etc.
Considerations:
- You need to configure a dynamic group having this instance as a member and
  add a policy to read object storage to that dynamic group.
- Everyone who has access to this machine can execute the CLI commands.
- It is applicable for oci compute instances only. It cannot be used on external
  instances or resources.
### Resource Principal
Resource principal auth is very similar to instance principal auth but used for
resources that are not compute instances such as
[serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
To use resource principal ensure Rclone process is started with these environment
variables set in its process.
```sh
export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
```
Sample rclone configuration file for Authentication Provider Resource Principal:
```ini
[oos]
type = oracleobjectstorage
namespace = id<redacted>34
compartment = ocid1.compartment.oc1..aa<redacted>ba
region = us-ashburn-1
provider = resource_principal_auth
```
### Workload Identity
Workload Identity auth may be used when running Rclone from a Kubernetes pod on
a Container Engine for Kubernetes (OKE) cluster. For more details on configuring
Workload Identity, see [Granting Workloads Access to OCI Resources](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm).
To use workload identity, ensure Rclone is started with these environment
variables set in its process.
```sh
export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
```
### No authentication
Public buckets do not require any authentication mechanism to read objects.
Sample rclone configuration file for No authentication:
```ini
[oos]
type = oracleobjectstorage
namespace = id<redacted>34
compartment = ocid1.compartment.oc1..aa<redacted>ba
region = us-ashburn-1
provider = no_auth
```
### Modification times and hashes
The modification time is stored as metadata on the object.
If the modification time needs to be updated rclone will attempt to perform a server
side copy to update the modification time if the object can be copied in a single part.
In the case the object is larger than 5 GiB, the object will be uploaded rather than
copied.
Note that reading this from the object takes an additional `HEAD` request as the
metadata isn't returned in object listings.
The MD5 hash algorithm is supported.
# {{< icon "fa fa-cloud" >}} Mount Buckets and Expose via NFS Tutorial
This runbook shows how to [mount](/commands/rclone_mount/) *Oracle Object Storage*
buckets as a local file system in an OCI compute instance using the rclone tool.
You will also learn how to export the rclone mounts as an NFS mount, so that other
NFS clients can access them.
Usage Pattern:
NFS Client --> NFS Server --> RClone Mount --> OCI Object Storage
## Step 1: Install Rclone
In Oracle Linux 8, Rclone can be installed from the
[OL8_Developer](https://yum.oracle.com/repo/OracleLinux/OL8/developer/x86_64/index.html)
Yum repo. Please enable the repo if not enabled already.
```sh
[opc@base-inst-boot ~]$ sudo yum-config-manager --enable ol8_developer
[opc@base-inst-boot ~]$ sudo yum install -y rclone
[opc@base-inst-boot ~]$ sudo yum install -y fuse
License : MIT
Description : Rclone is a command line program to sync files and directories to and from various cloud services.
```
To run it as a mount helper you should symlink rclone binary to /sbin/mount.rclone
and optionally /usr/bin/rclonefs, e.g. `ln -s /usr/bin/rclone /sbin/mount.rclone`.
rclone will detect it and translate command-line arguments appropriately.
```sh
ln -s /usr/bin/rclone /sbin/mount.rclone
```
## Step 2: Setup Rclone Configuration file
Let's assume you want to access 3 buckets from the oci compute instance using
instance principal provider as means of authenticating with object storage service.
- namespace-a, bucket-a,
- namespace-b, bucket-b,
- namespace-c, bucket-c
Rclone configuration file needs to have 3 remote sections, one section for each
of the above 3 buckets. Create a configuration file in an accessible location that
the rclone program can read.
```sh
[opc@base-inst-boot ~]$ mkdir -p /etc/rclone
[opc@base-inst-boot ~]$ sudo touch /etc/rclone/rclone.conf
# add below contents to /etc/rclone/rclone.conf
[opc@base-inst-boot ~]$ cat /etc/rclone/rclone.conf
[ossa]
type = oracleobjectstorage
provider = instance_principal_auth
namespace = namespace-a
compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-a
region = us-ashburn-1
[ossb]
type = oracleobjectstorage
provider = instance_principal_auth
namespace = namespace-b
compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-b
region = us-ashburn-1
[ossc]
type = oracleobjectstorage
provider = instance_principal_auth
namespace = namespace-c
compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-c
region = us-ashburn-1
# List remotes
[opc@base-inst-boot ~]$ rclone --config /etc/rclone/rclone.conf listremotes
ossa:
ossb:
ossc:
# Now please ensure you do not see below errors while listing the bucket,
# i.e you should fix the settings to see if namespace, compartment, bucket name are all correct.
# and you must have a dynamic group policy to allow the instance to use object-family in compartment.
[opc@base-inst-boot ~]$ rclone --config /etc/rclone/rclone.conf ls ossa:
2023/04/07 19:09:21 Failed to ls: Error returned by ObjectStorage Service. Http Status Code: 404. Error Code: NamespaceNotFound. Opc request id: iad-1:kVVAb0knsVXDvu9aHUGHRs3gSNBOFO2_334B6co82LrPMWo2lM5PuBKNxJOTmZsS. Message: You do not have authorization to perform this request, or the requested resource could not be found.
Operation Name: ListBuckets
If you are unable to resolve this ObjectStorage issue, please contact Oracle support.
```
## Step 3: Setup Dynamic Group and Add IAM Policy
Just like a human user has an identity identified by its USER-PRINCIPAL, every
OCI compute instance is also a robotic user identified by its INSTANCE-PRINCIPAL.
The instance principal key is automatically fetched by rclone/with-oci-sdk
from instance-metadata to make calls to object storage.
Similar to [user-group](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managinggroups.htm),
[instance groups](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingdynamicgroups.htm)
are known as dynamic-groups in IAM.
Create a dynamic group, say rclone-dynamic-group, that the OCI compute instance
becomes a member of. The below group definition says all instances belonging to
compartments a...c are members of this dynamic-group.
```sh
any {instance.compartment.id = '<compartment_ocid_a>',
instance.compartment.id = '<compartment_ocid_b>',
instance.compartment.id = '<compartment_ocid_c>'
}
```
Now that you have a dynamic group, you need to add a policy allowing what
permissions this dynamic-group has. In our case, we want this dynamic-group to
access object-storage. So create a policy now.
```sh
allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-a
allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-b
allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-c
```
After you add the policy, now ensure the rclone can list files in your bucket;
if not, please troubleshoot any mistakes you made so far. Please note, identity
can take up to a minute for the policy to be reflected.
## Step 4: Setup Mount Folders
Let's assume you have to mount 3 buckets, bucket-a, bucket-b, bucket-c at path
/opt/mnt/bucket-a, /opt/mnt/bucket-b, /opt/mnt/bucket-c respectively.
Create the mount folder and set its ownership to the desired user and group.
```sh
[opc@base-inst-boot ~]$ sudo mkdir /opt/mnt
[opc@base-inst-boot ~]$ sudo chown -R opc:adm /opt/mnt
```
Set chmod permissions to user, group, others as desired for each mount path
```sh
[opc@base-inst-boot ~]$ sudo chmod 764 /opt/mnt
[opc@base-inst-boot ~]$ ls -al /opt/mnt/
total 0
drwxrwxr-x. 2 opc opc 6 Apr 7 18:17 bucket-b
drwxrwxr-x. 2 opc opc 6 Apr 7 18:17 bucket-c
```
## Step 5: Identify Rclone mount CLI configuration settings to use
Please read through this [rclone mount](https://rclone.org/commands/rclone_mount/)
page completely to really understand the mount and its flags, what the rclone
[virtual file system](https://rclone.org/commands/rclone_mount/#vfs-virtual-file-system)
mode settings are, and how to effectively use them for the desired Read/Write
consistencies.
Local file systems expect things to be 100% reliable, whereas cloud storage
systems are a long way from 100% reliable. Object storage can throw several
errors like 429, 503, 404 etc. The rclone sync/copy commands cope with this
with lots of retries. However rclone mount can't use retries in the same way
without making local copies of the uploads. Please look at the VFS File Caching
for solutions to make mount more reliable.
First let's understand the rclone mount flags and some global flags for troubleshooting.
```sh
rclone mount \
ossa:bucket-a \ # Remote:bucket-name
/opt/mnt/bucket-a \ # Local mount folder
--vfs-fast-fingerprint # Use fast (less accurate) fingerprints for change detection.
--log-level ERROR \ # log level, can be DEBUG, INFO, ERROR
--log-file /var/log/rclone/oosa-bucket-a.log # rclone application log
```
### --vfs-cache-mode writes
In this mode files opened for read only are still read directly from the
remote, write only and read/write files are buffered to disk first. This mode
should support all normal file system operations. If an upload fails it will be
retried at exponentially increasing intervals up to 1 minute.
VFS cache mode of writes is recommended so that applications can have maximum
compatibility when using remote storage as a local disk. When a write is finished
and the file is closed, it is uploaded to the backend remote after the
vfs-write-back duration has elapsed. If rclone is quit or dies with files that
haven't been uploaded, these will be uploaded next time rclone is run with the
same flags.
### --tpslimit float
Limit transactions per second to this number. Default is 0 which is used to
mean unlimited transactions per second.
A transaction is roughly defined as an API call; its exact meaning will depend
on the backend. For HTTP based backends it is an HTTP PUT/GET/POST/etc and its
response. For FTP/SFTP it is a round trip transaction over TCP.
For example, to limit rclone to 10 transactions per second use --tpslimit 10,
or to 1 transaction every 2 seconds use --tpslimit 0.5.
Use this when the number of transactions per second from rclone is causing a
problem with the cloud storage provider (e.g. getting you banned or rate
limited or throttled).
This can be very useful for rclone mount to control the behaviour of
applications using it. Let's guess and say Object storage allows roughly 100
tps per tenant, so to be on safe side, it will be wise to set this at 50
(tune it to actuals per region).
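For instance, a mount capped at 50 transactions per second might be started
like this (remote and path taken from the earlier examples):

```sh
rclone mount ossa:bucket-a /opt/mnt/bucket-a --tpslimit 50
```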
### --vfs-fast-fingerprint
If you use the --vfs-fast-fingerprint flag then rclone will not include the
slow operations in the fingerprint. This makes the fingerprinting less accurate
but much faster and will improve the opening time of cached files. If you are
running a vfs cache over local, s3, object storage or swift backends then using
this flag is recommended.
Various parts of the VFS use fingerprinting to see if a local file copy has
changed relative to a remote file. Fingerprints are made from:
- size
- modification time
- hash
where available on an object.
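A mount combining the write cache with fast fingerprints might look like this
(same remote and path as above):

```sh
rclone mount ossa:bucket-a /opt/mnt/bucket-a \
    --vfs-cache-mode writes \
    --vfs-fast-fingerprint
```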
## Step 6: Mounting Options, Use Any one option
### Step 6a: Run as a Service Daemon: Configure FSTAB entry for Rclone mount
Add this entry in /etc/fstab:
```sh
ossa:bucket-a /opt/mnt/bucket-a rclone rw,umask=0117,nofail,_netdev,args2env,config=/etc/rclone/rclone.conf,uid=1000,gid=4,
file_perms=0760,dir_perms=0760,allow_other,vfs_cache_mode=writes,cache_dir=/tmp/rclone/cache 0 0
```
IMPORTANT: Please note in fstab entry arguments are specified as underscore instead of dash,
example: vfs_cache_mode=writes instead of vfs-cache-mode=writes
Rclone in the mount helper mode will split -o argument(s) by comma, replace _ by - and prepend -- to
get the command-line flags. Options containing commas or spaces can be wrapped in single or double quotes.
Any inner quotes inside outer quotes of the same type should be doubled.
IMPORTANT: Please note that in the fstab entry arguments are specified with an
underscore instead of a dash, for example: vfs_cache_mode=writes instead of
vfs-cache-mode=writes
Rclone in the mount helper mode will split -o argument(s) by comma, replace `_`
by `-` and prepend `--` to get the command-line flags. Options containing commas
or spaces can be wrapped in single or double quotes. Any inner quotes inside outer
quotes of the same type should be doubled.
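As an illustration of that translation (using options from the entry above):

```sh
# fstab options:             vfs_cache_mode=writes,cache_dir=/tmp/rclone/cache
# become command-line flags: --vfs-cache-mode writes --cache-dir /tmp/rclone/cache
```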
Then run `sudo mount -av`
```sh
[opc@base-inst-boot ~]$ sudo mount -av
/ : ignored
/boot : already mounted
/dev/shm : already mounted
none : ignored
/opt/mnt/bucket-a : already mounted # This is the bucket mounted information, running mount -av again and again is idempotent.
```
### Step 6b: Run as a Service Daemon: Configure systemd entry for Rclone mount
If you are familiar with configuring systemd unit files, you can also configure
each rclone mount into a systemd unit file.
Various examples in git search: <https://github.com/search?l=Shell&q=rclone+unit&type=Code>
```sh
tee "/etc/systemd/system/rclonebucketa.service" > /dev/null <<EOF
[Unit]
Description=RCloneMounting
WantedBy=multi-user.target
EOF
```
## Step 7: Optional: Mount Nanny, for resiliency, recover from process crash
Sometimes, the rclone process crashes and the mount points are left in a dangling
state where it's mounted but the rclone mount process is gone. To clean up the
mount point you can force unmount by running this command.
```sh
sudo fusermount -uz /opt/mnt/bucket-a
```
One can also run a rclone_mount_nanny script, which detects and cleans up mount
errors by unmounting and then auto-mounting.
Content of /etc/rclone/scripts/rclone_nanny_script.sh
```shell
```sh
#!/usr/bin/env bash
erroneous_list=$(df 2>&1 | grep -i 'Transport endpoint is not connected' | awk '{print ""$2"" }' | tr -d \:)
rclone_list=$(findmnt -t fuse.rclone -n 2>&1 | awk '{print ""$1"" }' | tr -d \:)
do
sudo fusermount -uz "$directory"
done
sudo mount -av
```
Script to idempotently add a Cron job to babysit the mount paths every 5 minutes
```sh
echo "Creating rclone nanny cron job."
croncmd="/etc/rclone/scripts/rclone_nanny_script.sh"
cronjob="*/5 * * * * $croncmd"
echo "Finished creating rclone nanny cron job."
```
Ensure the crontab is added, so that the above nanny script runs every 5 minutes.
```sh
[opc@base-inst-boot ~]$ sudo crontab -l
*/5 * * * * /etc/rclone/scripts/rclone_nanny_script.sh
[opc@base-inst-boot ~]$
```
## Step 8: Optional: Setup NFS server to access the mount points of rclone
Let's say you want to make the rclone mount path /opt/mnt/bucket-a available
as an NFS server export so that other clients can access it by using an NFS client.
### Step 8a: Setup NFS server
Install NFS Utils
```sh
sudo yum install -y nfs-utils
```
Export the desired directory via NFS Server in the same machine where rclone
has mounted to, ensure NFS service has desired permissions to read the directory.
If it runs as root, then it will have permissions for sure, but if it runs
as a separate user, then ensure that user has the necessary privileges.
```sh
# this gives opc user and adm (administrators group) ownership to the path, so any user belonging to adm group will be able to access the files.
[opc@tools ~]$ sudo chown -R opc:adm /opt/mnt/bucket-a/
[opc@tools ~]$ sudo chmod 764 /opt/mnt/bucket-a/
# Now export the mount path of rclone for exposing via nfs server
# There are various nfs export options that you should keep per desired usage.
# Syntax is
# <path> <allowed-ipaddr>(<option>)
[opc@tools ~]$ cat /etc/exports
/opt/mnt/bucket-a *(fsid=1,rw)
# Restart NFS server
[opc@tools ~]$ sudo systemctl restart nfs-server
# Show Export paths
[opc@tools ~]$ showmount -e
Export list for tools:
/opt/mnt/bucket-a *
# Know the port NFS server is running as, in this case it's listening on port 2049
[opc@tools ~]$ sudo rpcinfo -p | grep nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
# Allow NFS service via firewall
[opc@tools ~]$ sudo firewall-cmd --add-service=nfs --permanent
Warning: ALREADY_ENABLED: nfs
success
[opc@tools ~]$ sudo firewall-cmd --reload
success
[opc@tools ~]$
# Check status of NFS service
[opc@tools ~]$ sudo systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Tasks: 0 (limit: 48514)
Memory: 0B
CGroup: /system.slice/nfs-server.service
Apr 19 17:59:58 tools systemd[1]: Starting NFS server and services...
Apr 19 17:59:58 tools systemd[1]: Started NFS server and services.
```
### Step 8b: Setup NFS client
Now to connect to the NFS server from a different client machine, ensure the
client machine can reach the NFS server machine over TCP port 2049; ensure your
subnet network ACLs allow from desired source IP ranges to destination:2049 port.
In the client machine, mount the external NFS
```sh
# Install nfs-utils
[opc@base-inst-boot ~]$ sudo yum install -y nfs-utils
# In /etc/fstab, add the below entry
[opc@base-inst-boot ~]$ cat /etc/fstab | grep nfs
<ProvideYourIPAddress>:/opt/mnt/bucket-a /opt/mnt/bucket-a nfs rw 0 0
# remount so that newly added path gets mounted.
[opc@base-inst-boot ~]$ sudo mount -av
/ : ignored
```
### Step 8c: Test Connection
```sh
# List files to test connection
[opc@base-inst-boot ~]$ ls -al /opt/mnt/bucket-a
total 1
@@ -466,5 +499,3 @@ drwxrw-r--. 7 opc adm 85 Apr 18 17:36 ..
drw-rw----. 1 opc adm 0 Apr 18 17:29 FILES
-rw-rw----. 1 opc adm 15 Apr 18 18:13 nfs.txt
```
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
```

Once configured you can then use `rclone` like this,
List directories in top level of your pCloud
```sh
rclone lsd remote:
```
List all the files in your pCloud
```sh
rclone ls remote:
```
To copy a local directory to a pCloud directory called backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
### Emptying the trash
Due to an API limitation, the `rclone cleanup` command will only work if you
set your username and password in the advanced options for this backend.
Since we generally want to avoid storing user passwords in the rclone config
file, we advise you to only set this up if you need the `rclone cleanup` command
to work.
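With those advanced options set, the trash can then be emptied with (a sketch):

```sh
rclone cleanup remote:
```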
### Root folder ID
Here is an example of making a remote for PikPak.
First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
```
An overview of the filesystem's features and limitations is available in the
[filesystem guide](https://pixeldrain.com/filesystem) on pixeldrain.
## Usage with account
To use the personal filesystem you will need a [pixeldrain
account](https://pixeldrain.com/register) and either the Prepaid plan or one of
the Patreon-based subscriptions. After registering and subscribing, your
personal filesystem will be available at this link: <https://pixeldrain.com/d/me>.
Go to the [API keys page](https://pixeldrain.com/user/api_keys) on your account
and generate a new API key for rclone. Then run `rclone config` and use the API
key to create a new backend.
Example:
```text
No remotes found, make a new one?
n) New remote
d) Delete remote
c) Copy remote
q) Quit config
e/n/d/r/c/s/q> q
```
## Usage without account
It is possible to gain read-only access to publicly shared directories through
rclone. For this you only need a directory ID. The directory ID can be found in
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
The initial setup for [premiumize.me](https://premiumize.me/) involves getting a
token from premiumize.me which you need to do in your browser. `rclone config`
walks you through it.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
```

Once configured you can then use `rclone` like this,
List directories in top level of your premiumize.me
```sh
rclone lsd remote:
```
List all the files in your premiumize.me
```sh
rclone ls remote:
```
To copy a local directory to a premiumize.me directory called backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes

View File

@@ -13,8 +13,8 @@ status: Beta
This is an rclone backend for Proton Drive which supports the file transfer
features of Proton Drive using the same client-side encryption.
Due to the fact that Proton Drive doesn't publish its API documentation, this
backend is implemented with best efforts by reading the open-sourced client
Because Proton Drive doesn't publish its API documentation, this
backend is implemented on a best-effort basis by reading the open-sourced client
source code and observing the Proton Drive traffic in the browser.
**NB** This backend is currently in Beta. It is believed to be correct
@@ -31,11 +31,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -77,23 +79,29 @@ d) Delete this remote
y/e/d> y
```
**NOTE:** The Proton Drive encryption keys need to have been already generated
after a regular login via the browser, otherwise attempting to use the
**NOTE:** The Proton Drive encryption keys need to have been already generated
after a regular login via the browser, otherwise attempting to use the
credentials in `rclone` will fail.
Once configured you can then use `rclone` like this,
List directories in top level of your Proton Drive
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your Proton Drive
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a Proton Drive directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
@@ -103,13 +111,13 @@ The SHA1 hash algorithm is supported.
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8), also left and
Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8); leading and
trailing spaces will also be removed ([code reference](https://github.com/ProtonMail/WebClients/blob/b4eba99d241af4fdae06ff7138bd651a40ef5d3c/applications/drive/src/app/store/_links/validation.ts#L51)).
### Duplicated files
Proton Drive can not have two files with exactly the same name and path. If the
conflict occurs, depending on the advanced config, the file might or might not
Proton Drive cannot have two files with exactly the same name and path. If a
conflict occurs, depending on the advanced config, the file might or might not
be overwritten.
### [Mailbox password](https://proton.me/support/the-difference-between-the-mailbox-password-and-login-password)
@@ -118,11 +126,11 @@ Please set your mailbox password in the advanced config section.
### Caching
The cache is currently built for the case when the rclone is the only instance
The cache is currently built for the case when rclone is the only instance
performing operations to the mount point. The event system, which is the Proton
API system that provides visibility of what has changed on the drive, is yet
to be implemented, so updates from other clients wont be reflected in the
cache. Thus, if there are concurrent clients accessing the same mount point,
API system that provides visibility of what has changed on the drive, is yet
to be implemented, so updates from other clients won't be reflected in the
cache. Thus, if there are concurrent clients accessing the same mount point,
then stale data might be cached.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/protondrive/protondrive.go then run make backenddocs" >}}

View File

@@ -19,11 +19,13 @@ through it.
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -81,7 +83,7 @@ See the [remote setup docs](/remote_setup/) for how to set it up on a
machine with no Internet browser available.
Note that rclone runs a webserver on your local machine to collect the
token as returned from put.io if using web browser to automatically
token as returned from put.io if using a web browser to automatically
authenticate. This only
runs from the moment it opens your browser to the moment you get back
the verification code. This is on `http://127.0.0.1:53682/` and this
@@ -92,15 +94,21 @@ You can then use it like this,
List directories in top level of your put.io
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your put.io
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a put.io directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### Restricted filename characters

View File

@@ -13,12 +13,14 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Here is an example of making an QingStor configuration. First run
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
r) Rename remote
c) Copy remote
@@ -80,20 +82,28 @@ This remote is called `remote` and can now be used like this
See all buckets
rclone lsd remote:
```sh
rclone lsd remote:
```
Make a new bucket
rclone mkdir remote:bucket
```sh
rclone mkdir remote:bucket
```
List the contents of a bucket
rclone ls remote:bucket
```sh
rclone ls remote:bucket
```
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
rclone sync --interactive /home/local/directory remote:bucket
```sh
rclone sync --interactive /home/local/directory remote:bucket
```
### --fast-list
@@ -126,13 +136,13 @@ zone`.
There are two ways to supply `rclone` with a set of QingStor
credentials. In order of precedence:
- Directly in the rclone configuration file (as configured by `rclone config`)
- set `access_key_id` and `secret_access_key`
- Runtime configuration:
- set `env_auth` to `true` in the config file
- Exporting the following environment variables before running `rclone`
- Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
- Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`
- Directly in the rclone configuration file (as configured by `rclone config`)
- set `access_key_id` and `secret_access_key`
- Runtime configuration:
- set `env_auth` to `true` in the config file
- Exporting the following environment variables before running `rclone`
- Access Key ID: `QS_ACCESS_KEY_ID` or `QS_ACCESS_KEY`
- Secret Access Key: `QS_SECRET_ACCESS_KEY` or `QS_SECRET_KEY`
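For the runtime-configuration route above, a minimal sketch might look like
this (placeholder credentials, with `env_auth = true` already set in the config):
```sh
export QS_ACCESS_KEY_ID=EXAMPLEKEY
export QS_SECRET_ACCESS_KEY=example-secret
rclone lsd remote:
```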
### Restricted filename characters

View File

@@ -12,20 +12,23 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g., `remote:directory/subdirectory`.
The initial setup for Quatrix involves getting an API Key from Quatrix. You can get the API key in the user's profile at `https://<account>/profile/api-keys`
or with the help of the API - https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create.
The initial setup for Quatrix involves getting an API Key from Quatrix. You can
get the API key in the user's profile at `https://<account>/profile/api-keys`
or with the help of the API - <https://docs.maytech.net/quatrix/quatrix-api/api-explorer#/API-Key/post_api_key_create>.
See complete Swagger documentation for Quatrix - https://docs.maytech.net/quatrix/quatrix-api/api-explorer
See complete [Swagger documentation for Quatrix](https://docs.maytech.net/quatrix/quatrix-api/api-explorer).
## Configuration
Here is an example of how to make a remote called `remote`. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -60,23 +63,30 @@ Once configured you can then use `rclone` like this,
List directories in top level of your Quatrix
rclone lsd remote:
```sh
rclone lsd remote:
```
List all the files in your Quatrix
rclone ls remote:
```sh
rclone ls remote:
```
To copy a local directory to a Quatrix directory called backup
rclone copy /home/source remote:backup
```sh
rclone copy /home/source remote:backup
```
### API key validity
API Key is created with no expiration date. It will be valid until you delete or deactivate it in your account.
After disabling, the API Key can be enabled back. If the API Key was deleted and a new key was created, you can
update it in rclone config. The same happens if the hostname was changed.
The API Key is created with no expiration date. It will be valid until you delete
or deactivate it in your account. After disabling, the API Key can be re-enabled.
If the API Key was deleted and a new key was created, you can update it in the
rclone config. The same applies if the hostname was changed.
```
```sh
$ rclone config
Current remotes:
@@ -131,23 +141,31 @@ Quatrix does not support hashes, so you cannot use the `--checksum` flag.
### Restricted filename characters
File names in Quatrix are case sensitive and have limitations like the maximum length of a filename is 255, and the minimum length is 1. A file name cannot be equal to `.` or `..` nor contain `/` , `\` or non-printable ascii.
File names in Quatrix are case sensitive and subject to limits: the maximum
length of a filename is 255 characters and the minimum length is 1. A file name
cannot be equal to `.` or `..`, nor contain `/`, `\` or non-printable ASCII.
### Transfers
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to `--transfers` chunks at the same time (shared among all multipart uploads).
Chunks are buffered in memory, and the minimal chunk size is 10_000_000 bytes by default, and it can be changed in the advanced configuration, so increasing `--transfers` will increase the memory use.
The chunk size has a maximum size limit, which is set to 100_000_000 bytes by default and can be changed in the advanced configuration.
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload
up to `--transfers` chunks at the same time (shared among all multipart uploads).
Chunks are buffered in memory. The minimal chunk size is 10_000_000 bytes by
default and can be changed in the advanced configuration, so increasing `--transfers`
will increase the memory use. The chunk size also has a maximum limit, which is
set to 100_000_000 bytes by default and can be changed in the advanced configuration.
The size of the uploaded chunk will dynamically change depending on the upload speed.
The total memory use equals the number of transfers multiplied by the minimal chunk size.
In case there's free memory allocated for the upload (which equals the difference of `maximal_summary_chunk_size` and `minimal_chunk_size` * `transfers`),
the chunk size may increase in case of high upload speed. As well as it can decrease in case of upload speed problems.
If no free memory is available, all chunks will equal `minimal_chunk_size`.
The total memory use equals the number of transfers multiplied by the minimal
chunk size. If there is free memory available for the upload (which equals the
difference between `maximal_summary_chunk_size` and `minimal_chunk_size` * `transfers`),
the chunk size may increase when the upload speed is high, and decrease again if
there are upload speed problems. If no free memory is available, all chunks will
equal `minimal_chunk_size`.
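As a rough illustration of the memory math (a sketch; the source path and
remote name are placeholders):
```sh
# With the default minimal chunk size of 10,000,000 bytes, 8 transfers
# give a baseline memory use of roughly 8 x 10 MB = 80 MB.
rclone copy --transfers 8 /path/to/source remote:backup
```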
### Deleting files
Files you delete with rclone will end up in Trash and be stored there for 30 days.
Quatrix also provides an API to permanently delete files and an API to empty the Trash so that you can remove files permanently from your account.
Quatrix also provides an API to permanently delete files and an API to empty the
Trash so that you can remove files permanently from your account.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/quatrix/quatrix.go then run make backenddocs" >}}
### Standard options

View File

@@ -19,13 +19,14 @@ with a public key compiled into the rclone binary.
You may obtain the release signing key from:
- From [KEYS](/KEYS) on this website - this file contains all past signing keys also.
- The git repository hosted on GitHub - https://github.com/rclone/rclone/blob/master/docs/content/KEYS
- The git repository hosted on GitHub - <https://github.com/rclone/rclone/blob/master/docs/content/KEYS>
- `gpg --keyserver hkps://keys.openpgp.org --search nick@craig-wood.com`
- `gpg --keyserver hkps://keyserver.ubuntu.com --search nick@craig-wood.com`
- https://www.craig-wood.com/nick/pub/pgp-key.txt
- <https://www.craig-wood.com/nick/pub/pgp-key.txt>
After importing the key, verify that the fingerprint of one of the
keys matches: `FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA` as this key is used for signing.
keys matches: `FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA` as this key is used
for signing.
We recommend that you cross-check the fingerprint shown above through
the domains listed below. By cross-checking the integrity of the
@@ -40,9 +41,10 @@ developers at once.
## How to verify the release
In the release directory you will see the release files and some files called `MD5SUMS`, `SHA1SUMS` and `SHA256SUMS`.
In the release directory you will see the release files and some files
called `MD5SUMS`, `SHA1SUMS` and `SHA256SUMS`.
```
```sh
$ rclone lsf --http-url https://downloads.rclone.org/v1.63.1 :http:
MD5SUMS
SHA1SUMS
@@ -60,7 +62,7 @@ binary files in the release directory along with a signature.
For example:
```
```sh
$ rclone cat --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
@@ -88,11 +90,11 @@ as these are the most secure. You could verify the other types of hash
also for extra security. `rclone selfupdate` verifies just the
`SHA256SUMS`.
```
$ mkdir /tmp/check
$ cd /tmp/check
$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
$ rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .
```sh
mkdir /tmp/check
cd /tmp/check
rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:SHA256SUMS .
rclone copy --http-url https://downloads.rclone.org/v1.63.1 :http:rclone-v1.63.1-windows-amd64.zip .
```
### Verify the signatures
@@ -101,7 +103,7 @@ First verify the signatures on the SHA256 file.
Import the key. See above for ways to verify this key is correct.
```
```sh
$ gpg --keyserver keyserver.ubuntu.com --receive-keys FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
gpg: key 93935E02FF3B54FA: public key "Nick Craig-Wood <nick@craig-wood.com>" imported
gpg: Total number processed: 1
@@ -110,7 +112,7 @@ gpg: imported: 1
Then check the signature:
```
```sh
$ gpg --verify SHA256SUMS
gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA
@@ -126,14 +128,14 @@ Repeat for `MD5SUMS` and `SHA1SUMS` if desired.
Now that we know the signatures on the hashes are OK we can verify the
binaries match the hashes, completing the verification.
```
```sh
$ sha256sum -c SHA256SUMS 2>&1 | grep OK
rclone-v1.63.1-windows-amd64.zip: OK
```
Or do the check with rclone
```
```sh
$ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip
2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 0
2023/09/11 10:53:58 NOTICE: SHA256SUMS: improperly formatted checksum line 1
@@ -148,7 +150,7 @@ $ rclone hashsum sha256 -C SHA256SUMS rclone-v1.63.1-windows-amd64.zip
You can verify the signatures and hashes in one command line like this:
```
```sh
$ h=$(gpg --decrypt SHA256SUMS) && echo "$h" | sha256sum - -c --ignore-missing
gpg: Signature made Mon 17 Jul 2023 15:03:17 BST
gpg: using DSA key FBF737ECE9F8AB18604BD2AC93935E02FF3B54FA

View File

@@ -3,7 +3,7 @@ title: "Remote Setup"
description: "Configuring rclone up on a remote / headless machine"
---
# Configuring rclone on a remote / headless machine #
# Configuring rclone on a remote / headless machine
Some of the configurations (those involving oauth2) require an
Internet connected web browser.
@@ -13,11 +13,12 @@ browser available on it (e.g. a NAS or a server in a datacenter) then
you will need to use an alternative means of configuration. There are
two ways of doing it, described below.
## Configuring using rclone authorize ##
## Configuring using rclone authorize
On the headless box run `rclone` config but answer `N` to the `Use auto config?` question.
On the headless box run `rclone config` but answer `N` to the `Use auto config?`
question.
```
```text
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
@@ -32,7 +33,7 @@ a web browser available.
For more help and alternate methods see: https://rclone.org/remote_setup/
Execute the following on the machine with the web browser (same rclone
version recommended):
rclone authorize "onedrive"
rclone authorize "onedrive"
Then paste the result.
Enter a value.
config_token>
@@ -40,7 +41,7 @@ config_token>
Then on your main desktop machine
```
```text
rclone authorize "onedrive"
If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
Log in and authorize rclone for access
@@ -53,7 +54,7 @@ SECRET_TOKEN
Then back to the headless box, paste in the code
```
```text
config_token> SECRET_TOKEN
--------------------
[acd12]
@@ -67,20 +68,22 @@ d) Delete this remote
y/e/d>
```
## Configuring by copying the config file ##
## Configuring by copying the config file
Rclone stores all of its config in a single configuration file. This
can easily be copied to configure a remote rclone.
So first configure rclone on your desktop machine with
rclone config
```sh
rclone config
```
to set up the config file.
Find the config file by running `rclone config file`, for example
```
```sh
$ rclone config file
Configuration file is stored at:
/home/user/.rclone.conf
@@ -90,15 +93,19 @@ Now transfer it to the remote box (scp, cut paste, ftp, sftp, etc.) and
place it in the correct place (use `rclone config file` on the remote
box to find out where).
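For example, assuming the config lives at `~/.rclone.conf` on both machines,
as in the example above, `scp` could do the transfer:
```sh
scp ~/.rclone.conf user@remote-box:~/.rclone.conf
```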
## Configuring using SSH Tunnel ##
## Configuring using SSH Tunnel
Linux and MacOS users can utilize SSH Tunnel to redirect the headless box port 53682 to local machine by using the following command:
```
Linux and macOS users can use an SSH tunnel to redirect the headless box's
port 53682 to the local machine with the following command:
```sh
ssh -L localhost:53682:localhost:53682 username@remote_server
```
Then on the headless box run `rclone config` and answer `Y` to the `Use auto config?` question.
```
Then on the headless box run `rclone config` and answer `Y` to the
`Use auto config?` question.
```text
Use auto config?
* Say Y if not sure
* Say N if you are working on a remote or headless machine
@@ -107,4 +114,6 @@ y) Yes (default)
n) No
y/n> y
```
Then copy and paste the auth url `http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx` to the browser on your local machine, complete the auth and it is done.
Then copy and paste the auth URL `http://127.0.0.1:53682/auth?state=xxxxxxxxxxxx`
into the browser on your local machine, complete the auth, and it is done.

View File

@@ -8,6 +8,9 @@ versionIntroduced: "v0.91"
The S3 backend can be used with a number of different providers:
<!-- markdownlint-capture -->
<!-- markdownlint-disable line-length no-bare-urls -->
{{< provider_list >}}
{{< provider name="AWS S3" home="https://aws.amazon.com/s3/" config="/s3/#configuration" start="true" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
@@ -48,6 +51,8 @@ The S3 backend can be used with a number of different providers:
{{< provider name="Zata" home="https://zata.ai/" config="/s3/#Zata" end="true" >}}
{{< /provider_list >}}
<!-- markdownlint-restore -->
Paths are specified as `remote:bucket` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
@@ -56,20 +61,28 @@ you can use it like this:
See all buckets
rclone lsd remote:
```sh
rclone lsd remote:
```
Make a new bucket
rclone mkdir remote:bucket
```sh
rclone mkdir remote:bucket
```
List the contents of a bucket
rclone ls remote:bucket
```sh
rclone ls remote:bucket
```
Sync `/home/local/directory` to the remote bucket, deleting any excess
files in the bucket.
rclone sync --interactive /home/local/directory remote:bucket
```sh
rclone sync --interactive /home/local/directory remote:bucket
```
## Configuration
@@ -78,12 +91,14 @@ Most applies to the other providers as well, any differences are described [belo
First run
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -304,9 +319,12 @@ However for objects which were uploaded as multipart uploads or with
server side encryption (SSE-AWS or SSE-C) the `ETag` header is no
longer the MD5 sum of the data, so rclone adds an additional piece of
metadata `X-Amz-Meta-Md5chksum` which is a base64 encoded MD5 hash (in
the same format as is required for `Content-MD5`). You can use base64 -d and hexdump to check this value manually:
the same format as is required for `Content-MD5`). You can use `base64 -d` and
`hexdump` to check this value manually:
echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
```sh
echo 'VWTGdNx3LyXQDfA0e2Edxw==' | base64 -d | hexdump
```
or you can use `rclone check` to verify the hashes are OK.
@@ -336,30 +354,30 @@ or `rclone copy`) in a few different ways, each with its own
tradeoffs.
- `--size-only`
- Only checks the size of files.
- Uses no extra transactions.
- If the file doesn't change size then rclone won't detect it has
changed.
- `rclone sync --size-only /path/to/source s3:bucket`
- Only checks the size of files.
- Uses no extra transactions.
- If the file doesn't change size then rclone won't detect it has
changed.
- `rclone sync --size-only /path/to/source s3:bucket`
- `--checksum`
- Checks the size and MD5 checksum of files.
- Uses no extra transactions.
- The most accurate detection of changes possible.
- Will cause the source to read an MD5 checksum which, if it is a
local disk, will cause lots of disk activity.
- If the source and destination are both S3 this is the
**recommended** flag to use for maximum efficiency.
- `rclone sync --checksum /path/to/source s3:bucket`
- Checks the size and MD5 checksum of files.
- Uses no extra transactions.
- The most accurate detection of changes possible.
- Will cause the source to read an MD5 checksum which, if it is a
local disk, will cause lots of disk activity.
- If the source and destination are both S3 this is the
**recommended** flag to use for maximum efficiency.
- `rclone sync --checksum /path/to/source s3:bucket`
- `--update --use-server-modtime`
- Uses no extra transactions.
- Modification time becomes the time the object was uploaded.
- For many operations this is sufficient to determine if it needs
uploading.
- Using `--update` along with `--use-server-modtime`, avoids the
extra API call and uploads files whose local modification time
is newer than the time it was last uploaded.
- Files created with timestamps in the past will be missed by the sync.
- `rclone sync --update --use-server-modtime /path/to/source s3:bucket`
- Uses no extra transactions.
- Modification time becomes the time the object was uploaded.
- For many operations this is sufficient to determine if it needs
uploading.
- Using `--update` along with `--use-server-modtime`, avoids the
extra API call and uploads files whose local modification time
is newer than the time it was last uploaded.
- Files created with timestamps in the past will be missed by the sync.
- `rclone sync --update --use-server-modtime /path/to/source s3:bucket`
These flags can and should be used in combination with `--fast-list` -
see below.
@@ -379,7 +397,9 @@ individually. This takes one API call per directory. Using the
memory first using a smaller number of API calls (one per 1000
objects). See the [rclone docs](/docs/#fast-list) for more details.
rclone sync --fast-list --checksum /path/to/source s3:bucket
```sh
rclone sync --fast-list --checksum /path/to/source s3:bucket
```
`--fast-list` trades off API transactions for memory use. As a rough
guide rclone uses 1k of memory per object stored, so using
@@ -392,7 +412,9 @@ instead of through directory listings. You can do a "top-up" sync very
cheaply by using `--max-age` and `--no-traverse` to copy only recent
files, eg
rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
```sh
rclone copy --max-age 24h --no-traverse /path/to/source s3:bucket
```
You'd then do a full `rclone sync` less often.
@@ -413,32 +435,39 @@ Setting this flag increases the chance for undetected upload failures.
#### Using server-side copy
If you are copying objects between S3 buckets in the same region, you should
use server-side copy.
This is much faster than downloading and re-uploading the objects, as no data is transferred.
For rclone to use server-side copy, you must use the same remote for the source and destination.
use server-side copy. This is much faster than downloading and re-uploading
the objects, as no data is transferred.
rclone copy s3:source-bucket s3:destination-bucket
For rclone to use server-side copy, you must use the same remote for the
source and destination.
When using server-side copy, the performance is limited by the rate at which rclone issues
API requests to S3.
See below for how to increase the number of API requests rclone makes.
```sh
rclone copy s3:source-bucket s3:destination-bucket
```
When using server-side copy, the performance is limited by the rate at which
rclone issues API requests to S3. See below for how to increase the number of
API requests rclone makes.
#### Increasing the rate of API requests
You can increase the rate of API requests to S3 by increasing the parallelism using `--transfers` and `--checkers`
options.
You can increase the rate of API requests to S3 by increasing the parallelism
using the `--transfers` and `--checkers` options.
Rclone uses a very conservative defaults for these settings, as not all providers support high rates of requests.
Depending on your provider, you can increase significantly the number of transfers and checkers.
Rclone uses very conservative defaults for these settings, as not all
providers support high rates of requests. Depending on your provider, you can
significantly increase the number of transfers and checkers.
For example, with AWS S3, if you can increase the number of checkers to values like 200.
If you are doing a server-side copy, you can also increase the number of transfers to 200.
For example, with AWS S3 you can increase the number of checkers to values
like 200. If you are doing a server-side copy, you can also increase the number
of transfers to 200.
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
You will need to experiment with these values to find the optimal settings for your setup.
```sh
rclone sync --transfers 200 --checkers 200 --checksum s3:source-bucket s3:destination-bucket
```
You will need to experiment with these values to find the optimal settings for
your setup.
### Data integrity
@@ -553,7 +582,7 @@ version followed by a `cleanup` of the old versions.
Show current version and all the versions with `--s3-versions` flag.
```
```sh
$ rclone -q ls s3:cleanup-test
9 one.txt
@@ -566,7 +595,7 @@ $ rclone -q --s3-versions ls s3:cleanup-test
Retrieve an old version
```
```sh
$ rclone -q --s3-versions copy s3:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
@@ -575,7 +604,7 @@ $ ls -l /tmp/one-v2016-07-04-141003-000.txt
Clean up all the old versions and show that they've gone.
```
```sh
$ rclone -q backend cleanup-hidden s3:cleanup-test
$ rclone -q ls s3:cleanup-test
@@ -590,11 +619,13 @@ $ rclone -q --s3-versions ls s3:cleanup-test
When using `--s3-versions` flag rclone is relying on the file name
to work out whether the objects are versions or not. Versions' names
are created by inserting a timestamp between the file name and its extension.
```
```text
9 file.txt
8 file-v2023-07-17-161032-000.txt
16 file-v2023-06-15-141003-000.txt
```
If there are real files present with the same names as versions, then
behaviour of `--s3-versions` can be unpredictable.
@@ -602,8 +633,8 @@ behaviour of `--s3-versions` can be unpredictable.
If you run `rclone cleanup s3:bucket` then it will remove all pending
multipart uploads older than 24 hours. You can use the `--interactive`/`-i`
or `--dry-run` flag to see exactly what it will do. If you want more control over the
expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h`
or `--dry-run` flag to see exactly what it will do. If you want more control
over the expiry date then run `rclone backend cleanup s3:bucket -o max-age=1h`
to expire all uploads older than one hour. You can use `rclone backend
list-multipart-uploads s3:bucket` to see the pending multipart
uploads.
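For example, to inspect the pending uploads and then expire anything older
than an hour:
```sh
rclone backend list-multipart-uploads s3:bucket
rclone backend cleanup s3:bucket -o max-age=1h
```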
@@ -661,7 +692,6 @@ throughput (16M would be sensible). Increasing either of these will
use more memory. The default values are high enough to gain most of
the possible performance without using too much memory.
### Buckets and Regions
With Amazon S3 you can list buckets (`rclone lsd`) using any region,
@@ -677,23 +707,28 @@ credentials, with and without using the environment.
The different authentication methods are tried in this order:
- Directly in the rclone configuration file (`env_auth = false` in the config file):
- `access_key_id` and `secret_access_key` are required.
- `session_token` can be optionally set when using AWS STS.
- Runtime configuration (`env_auth = true` in the config file):
- Export the following environment variables before running `rclone`:
- Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
- Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
- Session Token: `AWS_SESSION_TOKEN` (optional)
- Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html):
- Profile files are standard files used by AWS CLI tools
- By default it will use the profile in your home directory (e.g. `~/.aws/credentials` on unix based systems) file and the "default" profile, to change set these environment variables or config keys:
- `AWS_SHARED_CREDENTIALS_FILE` to control which file or the `shared_credentials_file` config key.
- `AWS_PROFILE` to control which profile to use or the `profile` config key.
- Or, run `rclone` in an ECS task with an IAM role (AWS only).
- Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
- Or, run `rclone` in an EKS pod with an IAM role that is associated with a service account (AWS only).
- Or, use [process credentials](https://docs.aws.amazon.com/sdkref/latest/guide/feature-process-credentials.html) to read config from an external program.
- Directly in the rclone configuration file (`env_auth = false` in the config file):
- `access_key_id` and `secret_access_key` are required.
- `session_token` can be optionally set when using AWS STS.
- Runtime configuration (`env_auth = true` in the config file):
- Export the following environment variables before running `rclone`:
- Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
- Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
- Session Token: `AWS_SESSION_TOKEN` (optional)
- Or, use a [named profile](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html):
- Profile files are standard files used by AWS CLI tools
  - By default it will use the profile file in your home directory (e.g. `~/.aws/credentials`
    on unix based systems) and the "default" profile; to change this, set these
    environment variables or config keys:
- `AWS_SHARED_CREDENTIALS_FILE` to control which file or the `shared_credentials_file`
config key.
- `AWS_PROFILE` to control which profile to use or the `profile` config key.
- Or, run `rclone` in an ECS task with an IAM role (AWS only).
- Or, run `rclone` on an EC2 instance with an IAM role (AWS only).
- Or, run `rclone` in an EKS pod with an IAM role that is associated with a
service account (AWS only).
- Or, use [process credentials](https://docs.aws.amazon.com/sdkref/latest/guide/feature-process-credentials.html)
to read config from an external program.
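As a minimal sketch of the environment-variable route above (placeholder
credentials; `env_auth = true` must be set in the config, and the remote is
assumed to be called `s3`):
```sh
export AWS_ACCESS_KEY_ID=AKIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=example-secret-key
rclone lsd s3:
```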
With `env_auth = true` rclone (which uses the SDK for Go v2) should support
[all authentication methods](https://docs.aws.amazon.com/sdkref/latest/guide/standardized-credentials.html)
@@ -708,44 +743,44 @@ credentials then S3 interaction will be non-authenticated (see the
When using the `sync` subcommand of `rclone` the following minimum
permissions are required to be available on the bucket being written to:
* `ListBucket`
* `DeleteObject`
* `GetObject`
* `PutObject`
* `PutObjectACL`
* `CreateBucket` (unless using [s3-no-check-bucket](#s3-no-check-bucket))
- `ListBucket`
- `DeleteObject`
- `GetObject`
- `PutObject`
- `PutObjectACL`
- `CreateBucket` (unless using [s3-no-check-bucket](#s3-no-check-bucket))
When using the `lsd` subcommand, the `ListAllMyBuckets` permission is required.
Example policy:
```
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
},
"Action": [
"s3:ListBucket",
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME/*",
"arn:aws:s3:::BUCKET_NAME"
]
},
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
}
]
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::USER_SID:user/USER_NAME"
},
"Action": [
"s3:ListBucket",
"s3:DeleteObject",
"s3:GetObject",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME/*",
"arn:aws:s3:::BUCKET_NAME"
]
},
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
}
]
}
```
@@ -755,7 +790,8 @@ Notes on above:
that `USER_NAME` has been created.
2. The Resource entry must include both resource ARNs, as one implies
the bucket and the other implies the bucket's objects.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
3. When using [s3-no-check-bucket](#s3-no-check-bucket) and the bucket already
exists, the `"arn:aws:s3:::BUCKET_NAME"` doesn't have to be included.
For reference, [here's an Ansible script](https://gist.github.com/ebridges/ebfc9042dd7c756cd101cfa807b7ae2b)
that will generate one or more buckets that will work with `rclone sync`.
@@ -769,11 +805,14 @@ create checksum errors.
### Glacier and Glacier Deep Archive
You can upload objects using the glacier storage class or transition them to glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
You can upload objects using the glacier storage class or transition them to
glacier using a [lifecycle policy](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-lifecycle.html).
The bucket can still be synced or copied into normally, but if rclone
tries to access data from the glacier storage class you will see an error like below.
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
```text
2017/09/11 19:07:43 Failed to sync: failed to open source object: Object in GLACIER, restore first: path/to/file
```
In this case you need to [restore](http://docs.aws.amazon.com/AmazonS3/latest/user-guide/restore-archived-objects.html)
the object(s) in question before accessing object contents.
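One way to do this from rclone itself is the S3 backend `restore` command;
a sketch, with illustrative priority and lifetime values:
```sh
rclone backend restore s3:bucket/path/to/file -o priority=Standard -o lifetime=2
```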
@@ -786,11 +825,13 @@ Vault API, so rclone cannot directly access Glacier Vaults.
According to AWS's [documentation on S3 Object Lock](https://docs.aws.amazon.com/AmazonS3/latest/userguide/object-lock-overview.html#object-lock-permission):
> If you configure a default retention period on a bucket, requests to upload objects in such a bucket must include the Content-MD5 header.
> If you configure a default retention period on a bucket, requests to upload
> objects in such a bucket must include the Content-MD5 header.
As mentioned in the [Modification times and hashes](#modification-times-and-hashes) section,
small files that are not uploaded as multipart, use a different tag, causing the upload to fail.
A simple solution is to set the `--s3-upload-cutoff 0` and force all the files to be uploaded as multipart.
As mentioned in the [Modification times and hashes](#modification-times-and-hashes)
section, small files that are not uploaded as multipart use a different tag, causing
the upload to fail. A simple solution is to set `--s3-upload-cutoff 0` to force
all the files to be uploaded as multipart.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/s3/s3.go then run make backenddocs" >}}
### Standard options
@@ -2550,12 +2591,14 @@ upload_cutoff = 0
Here is an example of making an [Alibaba Cloud (Aliyun) OSS](https://www.alibabacloud.com/product/oss/)
configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -2664,8 +2707,8 @@ Files like profile image in the app, images sent by users or scanned documents c
ArvanCloud provides an S3 interface which can be configured for use with
rclone like this.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
@@ -2825,12 +2868,14 @@ use the secret key as `xxxxxx/xxxx` it will work fine.
Here is an example of making a [China Mobile Ecloud Elastic Object Storage (EOS)](https://ecloud.10086.cn/home/product-introduction/eos/)
configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -3079,7 +3124,9 @@ services.
Here is an example of making a Cloudflare R2 configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
@@ -3087,8 +3134,8 @@ Note that all buckets are private, and all are stored in the same
"auto" region. It is necessary to use Cloudflare workers to share the
content of a bucket publicly.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -3274,8 +3321,8 @@ if you need more help.
An `rclone config` walkthrough might look like this but details may
vary depending exactly on how you have set up the container.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -3368,8 +3415,8 @@ acl = private
```
Or you can also configure via the interactive command line:
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -3677,12 +3724,14 @@ v2_auth>
Here is an example of making an [IDrive e2](https://www.idrive.com/e2/)
configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -4067,12 +4116,14 @@ leviia s3
Here is an example of making a [Liara Object Storage](https://liara.ir/landing/object-storage)
configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
@@ -4168,12 +4219,14 @@ storage_class =
Here is an example of making a [Linode Object Storage](https://www.linode.com/products/object-storage/)
configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -4323,12 +4376,14 @@ endpoint = eu-central-1.linodeobjects.com
Here is an example of making a [Magalu Object Storage](https://magalu.cloud/object-storage/)
configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -4444,12 +4499,14 @@ included in existing Pro plans.
Here is an example of making a configuration. First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -4638,8 +4695,8 @@ acl = private
You can also run `rclone config` to go through the interactive setup process:
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -4789,8 +4846,8 @@ to interact with the platform, take a look at the [documentation](https://ovh.to
Here is an example of making an OVHcloud Object Storage configuration with `rclone config`:
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -4985,14 +5042,14 @@ acl = private
Here is an example of making a [Petabox](https://petabox.io/)
configuration. First run:
```bash
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n
@@ -5156,11 +5213,13 @@ To configure rclone for Pure Storage FlashBlade:
First run:
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process:
```
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -5731,8 +5790,8 @@ the recommended default), not "path style".
You can use `rclone config` to make a new provider like this
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -5948,8 +6007,8 @@ rclone config
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -6202,8 +6261,8 @@ reliable, and secure data storage infrastructure at minimal cost.
Wasabi provides an S3 interface which can be configured for use with
rclone like this.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
n/s> n

View File

@@ -7,6 +7,7 @@ versionIntroduced: "v1.52"
# {{< icon "fa fa-server" >}} Seafile
This is a backend for the [Seafile](https://www.seafile.com/) storage service:
- It works with both the free community edition or the professional edition.
- Seafile versions 6.x, 7.x, 8.x and 9.x are all supported.
- Encrypted libraries are also supported.
@@ -16,22 +17,28 @@ This is a backend for the [Seafile](https://www.seafile.com/) storage service:
## Configuration
There are two distinct modes in which you can set up your remote:
- you point your remote to the **root of the server**, meaning you don't specify a library during the configuration:
Paths are specified as `remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
- you point your remote to the **root of the server**, meaning you don't
specify a library during the configuration: Paths are specified as
`remote:library`. You may put subdirectories in too, e.g. `remote:library/path/to/dir`.
- you point your remote to a specific library during the configuration:
Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**. (_This mode is possibly slightly faster than the root mode_)
Paths are specified as `remote:path/to/dir`. **This is the recommended mode when using encrypted libraries**.
(*This mode is possibly slightly faster than the root mode*)
### Configuration in root mode
Here is an example of making a seafile configuration for a user with **no** two-factor authentication. First run
Here is an example of making a seafile configuration for a user with **no**
two-factor authentication. First run
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process. To authenticate
you will need the URL of your server, your email (or username) and your password.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -96,31 +103,42 @@ d) Delete this remote
y/e/d> y
```
This remote is called `seafile`. It's pointing to the root of your seafile server and can now be used like this:
This remote is called `seafile`. It's pointing to the root of your seafile
server and can now be used like this:
See all libraries
rclone lsd seafile:
```sh
rclone lsd seafile:
```
Create a new library
rclone mkdir seafile:library
```sh
rclone mkdir seafile:library
```
List the contents of a library
rclone ls seafile:library
```sh
rclone ls seafile:library
```
Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.
rclone sync --interactive /home/local/directory seafile:library
```sh
rclone sync --interactive /home/local/directory seafile:library
```
### Configuration in library mode
Here's an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you:
Here's an example of a configuration in library mode with a user that has
two-factor authentication enabled. You will be asked for your 2FA code at the
end of the configuration, and rclone will attempt to authenticate you:
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -189,28 +207,36 @@ d) Delete this remote
y/e/d> y
```
You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.
You'll notice your password is blank in the configuration. This is because we
only need the password to authenticate you once.
You specified `My Library` during the configuration. The root of the remote is pointing at the
root of the library `My Library`:
You specified `My Library` during the configuration. The root of the remote is
pointing at the root of the library `My Library`:
See all files in the library:
rclone lsd seafile:
```sh
rclone lsd seafile:
```
Create a new directory inside the library
rclone mkdir seafile:directory
```sh
rclone mkdir seafile:directory
```
List the contents of a directory
rclone ls seafile:directory
```sh
rclone ls seafile:directory
```
Sync `/home/local/directory` to the remote library, deleting any
excess files in the library.
rclone sync --interactive /home/local/directory seafile:
```sh
rclone sync --interactive /home/local/directory seafile:
```
### --fast-list
@@ -219,7 +245,6 @@ transactions in exchange for more memory. See the [rclone
docs](/docs/#fast-list) for more details.
Please note this is not supported on seafile server version 6.x
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
@@ -239,7 +264,7 @@ as they can't be used in JSON strings.
Rclone supports generating share links for non-encrypted libraries only.
They can either be for a file or a directory:
```
```sh
rclone link seafile:seafile-tutorial.doc
http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
@@ -247,17 +272,19 @@ http://my.seafile.server/f/fdcd8a2f93f84b8b90f4/
or if run on a directory you will get:
```
```sh
rclone link seafile:dir
http://my.seafile.server/d/9ea2455f6f55478bbb0d/
```
Please note a share link is unique for each file or directory. If you run a link command on a file/dir
that has already been shared, you will get the exact same link.
Please note a share link is unique for each file or directory. If you run a link
command on a file/dir that has already been shared, you will get the exact same link.
### Compatibility
It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker) of these versions:
It has been actively developed using the [seafile docker image](https://github.com/haiwen/seafile-docker)
of these versions:
- 6.3.4 community edition
- 7.0.5 community edition
- 7.1.3 community edition
@@ -266,7 +293,8 @@ It has been actively developed using the [seafile docker image](https://github.c
Versions below 6.0 are not supported.
Versions between 6.0 and 6.3 haven't been tested and might not work properly.
Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/) of the seafile community server.
Each new version of `rclone` is automatically tested against the [latest docker image](https://hub.docker.com/r/seafileltd/seafile-mc/)
of the seafile community server.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/seafile/seafile.go then run make backenddocs" >}}
### Standard options

View File

@@ -11,19 +11,24 @@ Protocol](https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol).
The SFTP backend can be used with a number of different providers:
<!-- markdownlint-capture -->
<!-- markdownlint-disable line-length no-bare-urls -->
{{< provider_list >}}
{{< provider name="Hetzner Storage Box" home="https://www.hetzner.com/storage/storage-box" config="/sftp/#hetzner-storage-box">}}
{{< provider name="rsync.net" home="https://rsync.net/products/rclone.html" config="/sftp/#rsync-net">}}
{{< /provider_list >}}
<!-- markdownlint-restore -->
SFTP runs over SSH v2 and is installed as standard with most modern
SSH installations.
Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user. An empty path
`remote:` refers to the user's home directory. For example, `rclone lsd remote:`
would list the home directory of the user configured in the rclone remote config
(`i.e /home/sftpuser`). However, `rclone lsd remote:/` would list the root
`remote:` refers to the user's home directory. For example, `rclone lsd remote:`
would list the home directory of the user configured in the rclone remote config
(i.e. `/home/sftpuser`). However, `rclone lsd remote:/` would list the root
directory of the remote machine (i.e. `/`).
Note that some SFTP servers will need the leading / - Synology is a
@@ -37,12 +42,14 @@ the server, see [shell access considerations](#shell-access-considerations).
Here is an example of making an SFTP configuration. First run
rclone config
```sh
rclone config
```
This will guide you through an interactive setup process.
```
No remotes found, make a new one?
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -93,50 +100,67 @@ This remote is called `remote` and can now be used like this:
See all directories in the home directory
rclone lsd remote:
```sh
rclone lsd remote:
```
See all directories in the root directory
rclone lsd remote:/
```sh
rclone lsd remote:/
```
Make a new directory
rclone mkdir remote:path/to/directory
```sh
rclone mkdir remote:path/to/directory
```
List the contents of a directory
rclone ls remote:path/to/directory
```sh
rclone ls remote:path/to/directory
```
Sync `/home/local/directory` to the remote directory, deleting any
excess files in the directory.
rclone sync --interactive /home/local/directory remote:directory
```sh
rclone sync --interactive /home/local/directory remote:directory
```
Mount the remote path `/srv/www-data/` to the local path
`/mnt/www-data`
rclone mount remote:/srv/www-data/ /mnt/www-data
```sh
rclone mount remote:/srv/www-data/ /mnt/www-data
```
### SSH Authentication
The SFTP remote supports three authentication methods:
* Password
* Key file, including certificate signed keys
* ssh-agent
- Password
- Key file, including certificate signed keys
- ssh-agent
Key files should be PEM-encoded private key files. For instance `/home/$USER/.ssh/id_rsa`.
Only unencrypted OpenSSH or PEM encrypted files are supported.
The key file can be specified in either an external file (key_file) or contained within the
rclone config file (key_pem). If using key_pem in the config file, the entry should be on a
single line with new line ('\n' or '\r\n') separating lines. i.e.
The key file can be specified either in an external file (`key_file`) or contained
within the rclone config file (`key_pem`). If using `key_pem` in the config file,
the entry should be on a single line, with literal newlines ('\n' or '\r\n')
separating the lines, e.g.
key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
```text
key_pem = -----BEGIN RSA PRIVATE KEY-----\nMaMbaIXtE\n0gAMbMbaSsd\nMbaass\n-----END RSA PRIVATE KEY-----
```
This will generate it correctly for `key_pem` for use in the config:
awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
```sh
awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
```
If you don't specify `pass`, `key_file`, or `key_pem` or `ask_password` then
rclone will attempt to contact an ssh-agent. You can also specify `key_use_agent`
@@ -164,7 +188,7 @@ typically saved as `/home/$USER/.ssh/id_rsa.pub`. Setting this path in
Example:
```
```ini
[remote]
type = sftp
host = example.com
@@ -178,7 +202,7 @@ merged file in both places.
Note: the cert must come first in the file. e.g.
```
```sh
cat id_rsa-cert.pub id_rsa > merged_key
```
@@ -194,7 +218,7 @@ by `OpenSSH` or can point to a unique file.
e.g. using the OpenSSH `known_hosts` file:
```
```ini
[remote]
type = sftp
host = example.com
@@ -205,30 +229,36 @@ known_hosts_file = ~/.ssh/known_hosts
Alternatively you can create your own known hosts file like this:
```
```sh
ssh-keyscan -t dsa,rsa,ecdsa,ed25519 example.com >> known_hosts
```
There are some limitations:
* `rclone` will not _manage_ this file for you. If the key is missing or
wrong then the connection will be refused.
* If the server is set up for a certificate host key then the entry in
the `known_hosts` file _must_ be the `@cert-authority` entry for the CA
- `rclone` will not *manage* this file for you. If the key is missing or
wrong then the connection will be refused.
- If the server is set up for a certificate host key then the entry in
the `known_hosts` file *must* be the `@cert-authority` entry for the CA
If the host key provided by the server does not match the one in the
file (or is missing) then the connection will be aborted and an error
returned such as
NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch
```text
NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key mismatch
```
or
NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key is unknown
```text
NewFs: couldn't connect SSH: ssh: handshake failed: knownhosts: key is unknown
```
If you see an error such as
NewFs: couldn't connect SSH: ssh: handshake failed: ssh: no authorities for hostname: example.com:22
```text
NewFs: couldn't connect SSH: ssh: handshake failed: ssh: no authorities for hostname: example.com:22
```
then it is likely the server has presented a CA signed host certificate
and you will need to add the appropriate `@cert-authority` entry.
@@ -242,11 +272,15 @@ Note that there seem to be various problems with using an ssh-agent on
macOS due to recent changes in the OS. The most effective work-around
seems to be to start an ssh-agent in each session, e.g.
eval `ssh-agent -s` && ssh-add -A
```sh
eval `ssh-agent -s` && ssh-add -A
```
And then at the end of the session
eval `ssh-agent -k`
```sh
eval `ssh-agent -k`
```
These commands can be used in scripts of course.
@@ -263,7 +297,8 @@ and if shell access is available at all.
Most servers run on some version of Unix, and then a basic Unix shell can
be assumed, without further distinction. Windows 10, Server 2019, and later
can also run a SSH server, which is a port of OpenSSH (see official
[installation guide](https://docs.microsoft.com/en-us/windows-server/administration/openssh/openssh_install_firstuse)).
On a Windows server the shell handling is different: Although it can also
be set up to use a Unix type shell, e.g. Cygwin bash, the default is to
use Windows Command Prompt (cmd.exe), and PowerShell is a recommended
alternative. All of these behave differently, which rclone must handle.
View File
@@ -6,7 +6,8 @@ versionIntroduced: "v1.50"
# {{< icon "fas fa-share-square" >}} Citrix ShareFile
[Citrix ShareFile](https://sharefile.com) is a secure file sharing and transfer
service aimed at business.
## Configuration
@@ -16,11 +17,13 @@ through it.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -93,15 +96,21 @@ Once configured you can then use `rclone` like this,
List directories in top level of your ShareFile
```sh
rclone lsd remote:
```
List all the files in your ShareFile
```sh
rclone ls remote:
```
To copy a local directory to a ShareFile directory called backup
```sh
rclone copy /home/source remote:backup
```
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
View File
@@ -23,14 +23,15 @@ network (e.g. a NAS). Please follow the [Get started](https://sia.tech/get-start
guide and install one.
rclone interacts with the Sia network by talking to the Sia daemon via [HTTP API](https://sia.tech/docs/)
which is usually available on port *9980*. By default you will run the daemon
locally on the same computer so it's safe to leave the API password blank
(the API URL will be `http://127.0.0.1:9980` making external access impossible).
However, if you want to access a Sia daemon running on another node, for example
due to memory constraints or because you want to share a single daemon between
several rclone and Sia-UI instances, you'll need to make a few more provisions:
- Ensure you have *Sia daemon* installed directly or in
a [docker container](https://github.com/SiaFoundation/siad/pkgs/container/siad)
because Sia-UI does not support this mode natively.
- Run it on externally accessible port, for example provide `--api-addr :9980`
@@ -39,8 +40,8 @@ several rclone and Sia-UI instances, you'll need to make a few more provisions:
`SIA_API_PASSWORD` or text file named `apipassword` in the daemon directory.
- Set the rclone backend option `api_password`, taking it from one of the above
  locations (see the sketch below).
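
A minimal sketch of pointing rclone at such a remote daemon using override
flags (the host name is illustrative):

```sh
# Talk to a Sia daemon on another machine, passing the API password from the environment
rclone lsd mySia: \
  --sia-api-url http://sia-host.example:9980 \
  --sia-api-password "$SIA_API_PASSWORD"
```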
Notes:
1. If your wallet is locked, rclone cannot unlock it automatically.
You should unlock it in advance, either by using Sia-UI or via the
command line: `siac wallet unlock`.
@@ -60,11 +61,13 @@ Notes:
Here is an example of how to make a `sia` remote called `mySia`.
First, run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -114,21 +117,21 @@ Once configured, you can then use `rclone` like this:
- List directories in top level of your Sia storage
```sh
rclone lsd mySia:
```
- List all the files in your Sia storage
```sh
rclone ls mySia:
```
- Upload a local directory to the Sia directory called *backup*
```sh
rclone copy /home/source mySia:backup
```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sia/sia.go then run make backenddocs" >}}
### Standard options
@@ -212,7 +215,7 @@ Properties:
- Modification times not supported
- Checksums not supported
- `rclone about` not supported
- rclone can work only with *Siad* or *Sia-UI* at the moment,
the **SkyNet daemon is not supported yet.**
- Sia does not allow control characters or symbols like question and pound
signs in file names. rclone will transparently [encode](/overview/#encoding)
View File
@@ -8,21 +8,27 @@ versionIntroduced: "v1.60"
SMB is [a communication protocol to share files over network](https://en.wikipedia.org/wiki/Server_Message_Block).
This relies on the [go-smb2 library](https://github.com/CloudSoda/go-smb2/) for
communication over the SMB protocol.
Paths are specified as `remote:sharename` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`.
## Notes
The first path segment must be the name of the share, which you entered when
you started to share on Windows. On smbd, it's the section title in `smb.conf`
(usually in `/etc/samba/`) file.
You can find shares by querying the root if you're unsure (e.g. `rclone lsd remote:`).
You can't access the shared printers from rclone, obviously.
You can't use Anonymous access for logging in. You have to use the `guest` user
with an empty password instead. The rclone client tries to avoid 8.3 names when
uploading files by encoding trailing spaces and periods. Alternatively,
[the local backend](/local/#paths-on-windows) on Windows can access SMB servers
using UNC paths, by `\\server\share`. This doesn't apply to non-Windows OSes,
such as Linux and macOS.
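
For example (a sketch; `server` and `share` stand in for your own host and
share names):

```sh
# List the shares on the SMB remote
rclone lsd remote:
# List files inside a share
rclone ls remote:share/path/to/dir
# Windows only: reach the same share via the local backend with a UNC path
rclone ls \\server\share\path\to\dir
```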
## Configuration
@@ -30,12 +36,14 @@ Here is an example of making a SMB configuration.
First run
```sh
rclone config
```
This will guide you through an interactive setup process.
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
View File
@@ -56,6 +56,9 @@ off donation.
Thank you very much to our sponsors:
<!-- markdownlint-capture -->
<!-- markdownlint-disable line-length no-bare-urls -->
{{< sponsor src="/img/logos/backblaze.svg" width="300" height="200" title="Visit our sponsor Backblaze" link="https://www.backblaze.com/cloud-storage-rclonead?utm_source=rclone&utm_medium=paid&utm_campaign=rclone-website-20250715">}}
{{< sponsor src="/img/logos/idrive_e2.svg" width="300" height="200" title="Visit our sponsor IDrive e2" link="https://www.idrive.com/e2/?refer=rclone">}}
{{< sponsor src="/img/logos/filescom-enterprise-grade-workflows.png" width="300" height="200" title="Start Your Free Trial Today" link="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone">}}
@@ -65,3 +68,5 @@ Thank you very much to our sponsors:
{{< sponsor src="/img/logos/rcloneui.svg" width="300" height="200" title="Visit our sponsor RcloneUI" link="https://github.com/rclone-ui/rclone-ui">}}
{{< sponsor src="/img/logos/filelu-rclone.svg" width="330" height="200" title="Visit our sponsor FileLu" link="https://filelu.com/">}}
{{< sponsor src="/img/logos/torbox.png" width="200" height="200" title="Visit our sponsor TORBOX" link="https://www.torbox.app/">}}
<!-- markdownlint-restore -->
View File
@@ -36,95 +36,99 @@ storage nodes across the network.
Side by side comparison with more details:
- Characteristics:
- *Storj backend*: Uses native RPC protocol, connects directly
to the storage nodes which host the data. Requires more CPU
resource of encoding/decoding and has network amplification
(especially during the upload), uses lots of TCP connections
- *S3 backend*: Uses S3 compatible HTTP Rest API via the shared
gateways. There is no network amplification, but performance
depends on the shared gateways and the secret encryption key is
shared with the gateway.
- Typical usage:
- *Storj backend*: Server environments and desktops with enough
resources, internet speed and connectivity - and applications
where Storj's client-side encryption is required.
- *S3 backend*: Desktops and similar with limited resources,
internet speed or connectivity.
- Security:
- *Storj backend*: **strong**. Private encryption key doesn't
need to leave the local computer.
- *S3 backend*: **weaker**. Private encryption key is [shared
with](https://docs.storj.io/dcs/api-reference/s3-compatible-gateway#security-and-encryption)
the authentication service of the hosted gateway, where it's
stored encrypted. It can be stronger when combining with the
rclone [crypt](/crypt) backend.
- Bandwidth usage (upload):
- *Storj backend*: **higher**. As data is erasure coded on the
client side both the original data and the parities should be
uploaded. About ~2.7 times more data is required to be uploaded.
Client may start to upload with even higher number of nodes (~3.7
times more) and abandon/stop the slow uploads.
- *S3 backend*: **normal**. Only the raw data is uploaded, erasure
coding happens on the gateway.
- Bandwidth usage (download)
- *Storj backend*: **almost normal**. Only the minimal number
of data is required, but to avoid very slow data providers a few
more sources are used and the slowest are ignored (max 1.2x
overhead).
- *S3 backend*: **normal**. Only the raw data is downloaded, erasure
coding happens on the shared gateway.
- CPU usage:
- *Storj backend*: **higher**, but more predictable. Erasure
code and encryption/decryption happens locally which requires
significant CPU usage.
- *S3 backend*: **less**. Erasure code and encryption/decryption
happens on shared s3 gateways (and as is, it depends on the
current load on the gateways)
- TCP connection usage:
- *Storj backend*: **high**. A direct connection is required to
each of the Storj nodes resulting in 110 connections on upload and
35 on download per 64 MB segment. Not all the connections are
actively used (slow ones are pruned), but they are all opened.
[Adjusting the max open file limit](/storj/#known-issues) may
be required.
- *S3 backend*: **normal**. Only one connection per download/upload
thread is required to the shared gateway.
- Overall performance:
- *Storj backend*: with enough resources (CPU and bandwidth)
*storj* backend can provide even 2x better performance. Data
is directly downloaded to / uploaded from to the client instead of
the gateway.
- *S3 backend*: Can be faster on edge devices where CPU and network
bandwidth is limited as the shared S3 compatible gateways take
care about the encrypting/decryption and erasure coding and no
download/upload amplification.
- Decentralization:
- *Storj backend*: **high**. Data is downloaded directly from
the distributed cloud of storage providers.
- *S3 backend*: **low**. Requires a running S3 gateway (either
self-hosted or Storj-hosted).
- Limitations:
- *Storj backend*: `rclone checksum` is not possible without
download, as checksum metadata is not calculated during upload
- *S3 backend*: secret encryption key is shared with the gateway
## Configuration
To make a new Storj configuration you need one of the following:
- Access Grant that someone else shared with you.
- [API Key](https://documentation.storj.io/getting-started/uploading-your-first-object/create-an-api-key)
of a Storj project you are a member of.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
### Setup with access grant
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -165,8 +169,8 @@ y/e/d> y
### Setup with API key and passphrase
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -329,13 +333,17 @@ Once configured you can then use `rclone` like this.
Use the `mkdir` command to create new bucket, e.g. `bucket`.
```sh
rclone mkdir remote:bucket
```
### List all buckets
Use the `lsf` command to list all buckets.
```sh
rclone lsf remote:
```
Note the colon (`:`) character at the end of the command line.
@@ -368,11 +376,17 @@ Only modified files will be copied.
Use the `ls` command to list recursively all objects in a bucket.
```sh
rclone ls remote:bucket
```
Add the folder to the remote path to list recursively all objects in this folder.
```sh
rclone ls remote:bucket/path/to/dir/
```
Use the `lsf` command to list non-recursively all objects in a bucket or a folder.
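
For example (a sketch reusing the bucket and folder from above):

```sh
rclone lsf remote:bucket/path/to/dir/
```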
View File
@@ -17,11 +17,13 @@ can do with rclone. `rclone config` walks you through it.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -80,15 +82,21 @@ Once configured you can then use `rclone` like this,
List directories (sync folders) in top level of your SugarSync
```sh
rclone lsd remote:
```
List all the files in your SugarSync folder "Test"
```sh
rclone ls remote:Test
```
To copy a local directory to a SugarSync folder called backup
```sh
rclone copy /home/source remote:backup
```
Paths are specified as `remote:path`
@@ -120,7 +128,6 @@ However you can supply the flag `--sugarsync-hard-delete` or set the
config parameter `hard_delete = true` if you would like files to be
deleted straight away.
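
For example, a sketch of permanently deleting a single file (the file name is
illustrative):

```sh
rclone delete --sugarsync-hard-delete remote:Test/file.txt
```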
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/sugarsync/sugarsync.go then run make backenddocs" >}}
### Standard options
View File
@@ -9,12 +9,12 @@ versionIntroduced: "v0.91"
Swift refers to [OpenStack Object Storage](https://docs.openstack.org/swift/latest/).
Commercial implementations of that being:
- [Rackspace Cloud Files](https://www.rackspace.com/cloud/files/)
- [Memset Memstore](https://www.memset.com/cloud/storage/)
- [OVH Object Storage](https://www.ovhcloud.com/en/public-cloud/object-storage/)
- [Oracle Cloud Storage](https://docs.oracle.com/en-us/iaas/integration/doc/configure-object-storage.html)
- [Blomp Cloud Storage](https://www.blomp.com/cloud-storage/)
- [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir`.
@@ -23,12 +23,14 @@ command.) You may put subdirectories in too, e.g. `remote:container/path/to/dir
Here is an example of making a swift configuration. First run
```sh
rclone config
```
This will guide you through an interactive setup process.
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
@@ -124,27 +126,35 @@ This remote is called `remote` and can now be used like this
See all containers
```sh
rclone lsd remote:
```
Make a new container
```sh
rclone mkdir remote:container
```
List the contents of a container
```sh
rclone ls remote:container
```
Sync `/home/local/directory` to the remote container, deleting any
excess files in the container.
```sh
rclone sync --interactive /home/local/directory remote:container
```
### Configuration from an OpenStack credentials file
An OpenStack credentials file typically looks something
like this (without the comments)
```sh
export OS_AUTH_URL=https://a.provider.net/v2.0
export OS_TENANT_ID=ffffffffffffffffffffffffffffffff
export OS_TENANT_NAME="1234567890123456"
@@ -160,7 +170,7 @@ The config file needs to look something like this where `$OS_USERNAME`
represents the value of the `OS_USERNAME` variable - `123abc567xy` in
the example above.
```ini
[remote]
type = swift
user = $OS_USERNAME
@@ -188,12 +198,12 @@ in the docs for the swift library.
### Using an alternate authentication method
If your OpenStack installation uses a non-standard authentication method
that might not yet be supported by rclone or the underlying swift library,
you can authenticate externally (e.g. by manually calling the `openstack`
commands to get a token). Then you just need to pass the two
configuration variables `auth_token` and `storage_url`.
If they are both provided, the other variables are ignored. rclone will
not try to authenticate but instead assume it is already authenticated
and use these two variables to access the OpenStack installation.
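
A sketch of supplying the two variables as override flags (the token and URL
values are illustrative):

```sh
rclone lsd remote: \
  --swift-auth-token "gAAAAAB0123456789abcdef" \
  --swift-storage-url "https://storage.example.com/v1/AUTH_tenant"
```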
#### Using rclone without a config file
@@ -201,7 +211,7 @@ and use these two variables to access the OpenStack installation.
You can use rclone with swift without a config file, if desired, like
this:
```sh
source openstack-credentials-file
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
View File
@@ -10,18 +10,20 @@ Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
The initial setup for Uloz.to involves filling in the user credentials.
`rclone config` walks you through it.
## Configuration
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -75,32 +77,38 @@ Once configured you can then use `rclone` like this,
List folders in root level folder:
```sh
rclone lsd remote:
```
List all the files in your root folder:
```sh
rclone ls remote:
```
To copy a local folder to a Uloz.to folder called backup:
```sh
rclone copy /home/source remote:backup
```
### User credentials
The only reliable method is to authenticate the user using
username and password. Uloz.to offers an API key as well, but
it's reserved for the use of Uloz.to's in-house application
and using it in different circumstances is unreliable.
### Modification times and hashes
Uloz.to doesn't allow the user to set a custom modification time,
or retrieve the hashes after upload. As a result, the integration
uses a free form field the API provides to encode client-provided
timestamps and hashes. Timestamps are stored with microsecond
precision.
A server-calculated MD5 hash of the file is verified upon upload.
Afterwards, the backend only serves the client-side calculated
hashes. Hashes can also be retrieved upon creating a file download
link, but it's impractical for `list`-like use cases.
@@ -119,16 +127,16 @@ as they can't be used in JSON strings.
### Transfers
All files are currently uploaded using a single HTTP request, so
for uploading large files a stable connection is necessary. Rclone will
upload up to `--transfers` chunks at the same time (shared among all
uploads).
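
For example, a sketch capping the number of simultaneous uploads with the
global `--transfers` flag:

```sh
rclone copy --transfers 4 /home/source remote:backup
```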
### Deleting files
By default, files are moved to the recycle bin whereas folders
are deleted immediately. Trashed files are permanently deleted after
30 days in the recycle bin.
Emptying the trash is currently not implemented in rclone.
@@ -147,12 +155,12 @@ folder you wish to use as root. This will be the last segment
of the URL when you open the relevant folder in the Uloz.to web
interface.
For example, for exploring a folder with URL
`https://uloz.to/fm/my-files/foobar`, `foobar` should be used as the
root slug.
`root_folder_slug` can be used alongside a specific path in the remote
path. For example, if your remote's `root_folder_slug` corresponds to `/foo/bar`,
`remote:baz/qux` will refer to `ABSOLUTE_ULOZTO_ROOT/foo/bar/baz/qux`.
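
As a sketch, the slug can be set on an existing remote and paths then used as
usual (`foobar` taken from the URL above):

```sh
rclone config update remote root_folder_slug foobar
rclone lsd remote:
```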
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/ulozto/ulozto.go then run make backenddocs" >}}
View File
@@ -6,7 +6,8 @@ versionIntroduced: "v1.44"
# {{< icon "fa fa-link" >}} Union
The `union` backend joins several remotes together to make a single unified view
of them.
During the initial setup with `rclone config` you will specify the upstream
remotes as a space separated list. The upstream remotes can either be a local
@@ -18,7 +19,8 @@ to tag the remote as **read only**, **no create** or **writeback**, e.g.
- `:ro` means files will only be read from here and never written
- `:nc` means new files or directories won't be created here
- `:writeback` means files found in different remotes will be written back here.
See the [writeback section](#writeback) for more info.
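
For example, an upstreams definition combining these tags might look like this
(a sketch; the remote names are illustrative):

```ini
[remote]
type = union
upstreams = remote1:dir1:ro remote2:dir2 remote3:dir3:writeback
```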
Subfolders can be used in upstream remotes. Assume a union remote named `backup`
with the remotes `mydrive:private/backup`. Invoking `rclone mkdir backup:desktop`
@@ -33,11 +35,13 @@ mydrive:private/backup/../desktop`.
Here is an example of how to make a union called `remote` for local folders.
First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -97,19 +101,33 @@ Once configured you can then use `rclone` like this,
List directories in top level in `remote1:dir1`, `remote2:dir2` and `remote3:dir3`
```sh
rclone lsd remote:
```
List all the files in `remote1:dir1`, `remote2:dir2` and `remote3:dir3`
```sh
rclone ls remote:
```
Copy another local directory to the union directory called source, which will be
placed into `remote3:dir3`
```sh
rclone copy C:\source remote:source
```
### Behavior / Policies
The behavior of union backend is inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs).
All functions are grouped into 3 categories: **action**, **create** and **search**.
These functions and categories can be assigned a policy which dictates what file
or directory is chosen when performing that behavior. Any policy can be assigned
to a function or category though some may not be very useful in practice. For
instance: **rand** (random) may be useful for file creation (create) but could
lead to very odd behavior if used for `delete` if there were more than one copy
of the file.
### Function / Category classifications
@@ -122,17 +140,22 @@ The behavior of union backend is inspired by [trapexit/mergerfs](https://github.
### Path Preservation
Policies, as described below, are of two basic types. `path preserving` and
`non-path preserving`.
All policies which start with `ep` (**epff**, **eplfs**, **eplus**, **epmfs**, **eprand**)
are `path preserving`. `ep` stands for `existing path`.
A path preserving policy will only consider upstreams where the relative path
being accessed already exists.
When using non-path preserving policies paths will be created in target upstreams
as necessary.
### Quota Relevant Policies
Some policies rely on quota information. These policies should be used only if
your upstreams support the respective quota fields.
| Policy | Required Field |
|------------|----------------|
@@ -141,21 +164,27 @@ Some policies rely on quota information. These policies should be used only if y
| lus, eplus | Used |
| lno, eplno | Objects |
To check if your upstream supports the field, run `rclone about remote: [flags]`
and see if the required field exists.
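
For example (a sketch; the JSON output makes it easy to see which fields the
upstream reports):

```sh
rclone about remote1: --json
```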
### Filters
Policies basically search upstream remotes and create a list of files / paths for
functions to work on. The policy is responsible for filtering and sorting. The
policy type defines the sorting but filtering is mostly uniform as described below.
- No **search** policies filter.
- All **action** policies will filter out remotes which are tagged as **read-only**.
- All **create** policies will filter out remotes which are tagged **read-only**
or **no-create**.
If all remotes are filtered an error will be returned.
### Policy descriptions
The policies definition are inspired by [trapexit/mergerfs](https://github.com/trapexit/mergerfs)
but not exactly the same. Some policy definition could be different due to the
much larger latency of remote file systems.
| Policy | Description |
|------------------|------------------------------------------------------------|
@@ -175,13 +204,12 @@ The policies definition are inspired by [trapexit/mergerfs](https://github.com/t
| newest | Pick the file / directory with the largest mtime. |
| rand (random) | Calls **all** and then randomizes. Returns only one upstream. |
### Writeback {#writeback}
The tag `:writeback` on an upstream remote can be used to make a simple cache
system like this:
```ini
[union]
type = union
action_policy = all
View File
@@ -6,8 +6,9 @@ versionIntroduced: "v1.56"
# {{< icon "fa fa-archive" >}} Uptobox
This is a backend for the Uptobox file storage service. Uptobox is closer to a
one-click hoster than a traditional cloud storage provider and therefore not
suitable for long term storage.
Paths are specified as `remote:path`
@@ -15,16 +16,19 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
To configure an Uptobox backend you'll need your personal API token. You'll find
it in your [account settings](https://uptobox.com/my_account).
Here is an example of how to make a remote called `remote` with the default setup.
First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
Current remotes:
Name Type
@@ -66,21 +70,28 @@ api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
```
Once configured you can then use `rclone` like this,
List directories in top level of your Uptobox
```sh
rclone lsd remote:
```
List all the files in your Uptobox
```sh
rclone ls remote:
```
To copy a local directory to an Uptobox directory called backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
View File
@@ -18,11 +18,13 @@ connecting to then rclone can enable extra features.
Here is an example of how to make a remote called `remote`. First run:
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -91,15 +93,21 @@ Once configured you can then use `rclone` like this,
List directories in top level of your WebDAV
```sh
rclone lsd remote:
```
List all the files in your WebDAV
```sh
rclone ls remote:
```
To copy a local directory to a WebDAV directory called backup
```sh
rclone copy /home/source remote:backup
```
### Modification times and hashes
View File
@@ -12,11 +12,13 @@ versionIntroduced: "v1.26"
Here is an example of making a yandex configuration. First run
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -71,20 +73,28 @@ Once configured you can then use `rclone` like this,
See top level directories
```sh
rclone lsd remote:
```
Make a new directory
```sh
rclone mkdir remote:directory
```
List the contents of a directory
```sh
rclone ls remote:directory
```
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
```sh
rclone sync --interactive /home/local/directory remote:directory
```
Yandex paths may be as deep as required, e.g. `remote:directory/subdirectory`.
View File
@@ -6,17 +6,20 @@ versionIntroduced: "v1.54"
# {{< icon "fas fa-folder" >}} Zoho Workdrive
[Zoho WorkDrive](https://www.zoho.com/workdrive/) is a cloud storage solution
created by [Zoho](https://zoho.com).
## Configuration
Here is an example of making a zoho configuration. First run
```sh
rclone config
```
This will guide you through an interactive setup process:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
@@ -90,20 +93,28 @@ Once configured you can then use `rclone` like this,
See top level directories
```sh
rclone lsd remote:
```
Make a new directory
```sh
rclone mkdir remote:directory
```
List the contents of a directory
```sh
rclone ls remote:directory
```
Sync `/home/local/directory` to the remote path, deleting any
excess files in the path.
```sh
rclone sync --interactive /home/local/directory remote:directory
```
Zoho paths may be as deep as required, e.g. `remote:directory/subdirectory`.
@@ -121,7 +132,7 @@ command which will display your current usage.
### Restricted filename characters
Only control characters and invalid UTF-8 are replaced. In addition most
Unicode full-width characters are not supported at all and will be removed
from filenames during upload.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/zoho/zoho.go then run make backenddocs" >}}