From 091ccb649cbaf167cf432a28a3c6a2ff9d33a56b Mon Sep 17 00:00:00 2001 From: albertony <12441419+albertony@users.noreply.github.com> Date: Mon, 25 Aug 2025 00:00:48 +0200 Subject: [PATCH] docs: fix markdown lint issues in backend docs --- docs/content/_index.md | 19 +- docs/content/alias.md | 24 +- docs/content/azureblob.md | 110 +++-- docs/content/azurefiles.md | 91 +++-- docs/content/b2.md | 51 ++- docs/content/bisync.md | 40 +- docs/content/box.md | 22 +- docs/content/cache.md | 121 +++--- docs/content/chunker.md | 31 +- docs/content/cloudinary.md | 31 +- docs/content/combine.md | 40 +- docs/content/compress.md | 36 +- docs/content/contact.md | 10 +- docs/content/crypt.md | 69 ++-- docs/content/docker.md | 166 +++++--- docs/content/doi.md | 10 +- docs/content/drive.md | 165 ++++---- docs/content/dropbox.md | 24 +- docs/content/fichier.md | 22 +- docs/content/filefabric.md | 22 +- docs/content/filelu.md | 70 +++- docs/content/filescom.md | 120 +++--- docs/content/ftp.md | 42 +- docs/content/gofile.md | 19 +- docs/content/googlecloudstorage.md | 108 +++-- docs/content/googlephotos.md | 38 +- docs/content/hasher.md | 33 +- docs/content/hdfs.md | 36 +- docs/content/hidrive.md | 59 +-- docs/content/http.md | 26 +- docs/content/iclouddrive.md | 33 +- docs/content/imagekit.md | 37 +- docs/content/internetarchive.md | 53 ++- docs/content/jottacloud.md | 172 ++++---- docs/content/koofr.md | 30 +- docs/content/linkbox.md | 6 +- docs/content/local.md | 53 +-- docs/content/mailru.md | 42 +- docs/content/mega.md | 56 +-- docs/content/memory.md | 12 +- docs/content/netstorage.md | 247 +++++++----- docs/content/onedrive.md | 93 +++-- docs/content/opendrive.md | 19 +- docs/content/oracleobjectstorage/_index.md | 183 +++++---- .../oracleobjectstorage/tutorial_mount.md | 281 +++++++------ docs/content/pcloud.md | 25 +- docs/content/pikpak.md | 6 +- docs/content/pixeldrain.md | 10 +- docs/content/premiumizeme.md | 23 +- docs/content/protondrive.md | 40 +- docs/content/putio.md | 20 +- docs/content/qingstor.md | 38 +- docs/content/quatrix.md | 60 ++- docs/content/release_signing.md | 34 +- docs/content/remote_setup.md | 41 +- docs/content/s3.md | 375 ++++++++++-------- docs/content/seafile.md | 88 ++-- docs/content/sftp.md | 103 +++-- docs/content/sharefile.md | 21 +- docs/content/sia.md | 35 +- docs/content/smb.md | 24 +- docs/content/sponsor.md | 5 + docs/content/storj.md | 98 +++-- docs/content/sugarsync.md | 19 +- docs/content/swift.md | 54 ++- docs/content/ulozto.md | 46 ++- docs/content/union.md | 72 +++- docs/content/uptobox.md | 33 +- docs/content/webdav.md | 18 +- docs/content/yandex.md | 22 +- docs/content/zoho.md | 27 +- 71 files changed, 2663 insertions(+), 1646 deletions(-) diff --git a/docs/content/_index.md b/docs/content/_index.md index 5cb886263..e9ecff530 100644 --- a/docs/content/_index.md +++ b/docs/content/_index.md @@ -85,11 +85,11 @@ Rclone helps you: ## Features {#features} - Transfers - - MD5, SHA1 hashes are checked at all times for file integrity - - Timestamps are preserved on files - - Operations can be restarted at any time - - Can be to and from network, e.g. two different cloud providers - - Can use multi-threaded downloads to local disk + - MD5, SHA1 hashes are checked at all times for file integrity + - Timestamps are preserved on files + - Operations can be restarted at any time + - Can be to and from network, e.g. 
two different cloud providers + - Can use multi-threaded downloads to local disk - [Copy](/commands/rclone_copy/) new or changed files to cloud storage - [Sync](/commands/rclone_sync/) (one way) to make a directory identical - [Bisync](/bisync/) (two way) to keep two directories in sync bidirectionally @@ -216,10 +216,9 @@ These backends adapt or modify other storage providers: {{< provider name="Hasher: Hash files" home="/hasher/" config="/hasher/" >}} {{< provider name="Union: Join multiple remotes to work together" home="/union/" config="/union/" >}} - ## Links - * {{< icon "fa fa-home" >}} [Home page](https://rclone.org/) - * {{< icon "fab fa-github" >}} [GitHub project page for source and bug tracker](https://github.com/rclone/rclone) - * {{< icon "fa fa-comments" >}} [Rclone Forum](https://forum.rclone.org) - * {{< icon "fas fa-cloud-download-alt" >}}[Downloads](/downloads/) +- {{< icon "fa fa-home" >}} [Home page](https://rclone.org/) +- {{< icon "fab fa-github" >}} [GitHub project page for source and bug tracker](https://github.com/rclone/rclone) +- {{< icon "fa fa-comments" >}} [Rclone Forum](https://forum.rclone.org) +- {{< icon "fas fa-cloud-download-alt" >}}[Downloads](/downloads/) diff --git a/docs/content/alias.md b/docs/content/alias.md index eb0f9ec15..b9b4286f0 100644 --- a/docs/content/alias.md +++ b/docs/content/alias.md @@ -8,7 +8,7 @@ versionIntroduced: "v1.40" The `alias` remote provides a new name for another remote. -Paths may be as deep as required or a local path, +Paths may be as deep as required or a local path, e.g. `remote:directory/subdirectory` or `/directory/subdirectory`. During the initial setup with `rclone config` you will specify the target @@ -24,9 +24,9 @@ Invoking `rclone mkdir backup:../desktop` is exactly the same as invoking The empty path is not allowed as a remote. To alias the current directory use `.` instead. -The target remote can also be a [connection string](/docs/#connection-strings). +The target remote can also be a [connection string](/docs/#connection-strings). This can be used to modify the config of a remote for different uses, e.g. -the alias `myDriveTrash` with the target remote `myDrive,trashed_only:` +the alias `myDriveTrash` with the target remote `myDrive,trashed_only:` can be used to only show the trashed files in `myDrive`. ## Configuration @@ -34,11 +34,13 @@ can be used to only show the trashed files in `myDrive`. Here is an example of how to make an alias called `remote` for local folder. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -85,15 +87,21 @@ Once configured you can then use `rclone` like this, List directories in top level in `/mnt/storage/backup` - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in `/mnt/storage/backup` - rclone ls remote: +```sh +rclone ls remote: +``` Copy another local directory to the alias directory called source - rclone copy /home/source remote:source +```sh +rclone copy /home/source remote:source +``` {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs" >}} ### Standard options diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md index a08900c44..0f5773eeb 100644 --- a/docs/content/azureblob.md +++ b/docs/content/azureblob.md @@ -15,11 +15,13 @@ command.) You may put subdirectories in too, e.g. 
Here is an example of making a Microsoft Azure Blob Storage configuration. For a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -55,20 +57,28 @@ y/e/d> y See all containers - rclone lsd remote: +```sh +rclone lsd remote: +``` Make a new container - rclone mkdir remote:container +```sh +rclone mkdir remote:container +``` List the contents of a container - rclone ls remote:container +```sh +rclone ls remote:container +``` Sync `/home/local/directory` to the remote container, deleting any excess files in the container. - rclone sync --interactive /home/local/directory remote:container +```sh +rclone sync --interactive /home/local/directory remote:container +``` ### --fast-list @@ -147,26 +157,35 @@ user with a password, depending on which environment variable are set. It reads configuration from these variables, in the following order: 1. Service principal with client secret - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. - `AZURE_CLIENT_ID`: the service principal's client ID - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets 2. Service principal with certificate - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. - `AZURE_CLIENT_ID`: the service principal's client ID - - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key. - - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file. - - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header. + - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file + including the private key. + - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the + certificate file. + - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an + authentication request will include an x5c header to support subject + name / issuer based authentication. When set to "true" or "1", + authentication requests include the x5c header. 3. User with username and password - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations". - - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to + - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate + to - `AZURE_USERNAME`: a username (usually an email address) - `AZURE_PASSWORD`: the user's password 4. Workload Identity - - `AZURE_TENANT_ID`: Tenant to authenticate in. - - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to. - - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file. - - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). 
- + - `AZURE_TENANT_ID`: Tenant to authenticate in + - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate + to + - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file + - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint + (default: login.microsoftonline.com). ##### Env Auth: 2. Managed Service Identity Credentials @@ -193,19 +212,27 @@ Credentials created with the `az` tool can be picked up using `env_auth`. For example if you were to login with a service principal like this: - az login --service-principal -u XXX -p XXX --tenant XXX +```sh +az login --service-principal -u XXX -p XXX --tenant XXX +``` Then you could access rclone resources like this: - rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER +```sh +rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER +``` Or - rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER +```sh +rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER +``` Which is analogous to using the `az` tool: - az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login +```sh +az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login +``` #### Account and Shared Key @@ -226,18 +253,24 @@ explorer in the Azure portal. If you use a container level SAS URL, rclone operations are permitted only on a particular container, e.g. - rclone ls azureblob:container +```sh +rclone ls azureblob:container +``` You can also list the single container from the root. This will only show the container specified by the SAS URL. - $ rclone lsd azureblob: - container/ +```sh +$ rclone lsd azureblob: +container/ +``` Note that you can't see or access any other containers - this will fail - rclone ls azureblob:othercontainer +```sh +rclone ls azureblob:othercontainer +``` Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an @@ -245,7 +278,8 @@ untrusted environment such as a CI build server. #### Service principal with client secret -If these variables are set, rclone will authenticate with a service principal with a client secret. +If these variables are set, rclone will authenticate with a service principal +with a client secret. - `tenant`: ID of the service principal's tenant. Also called its "directory" ID. - `client_id`: the service principal's client ID @@ -256,13 +290,18 @@ The credentials can also be placed in a file using the #### Service principal with certificate -If these variables are set, rclone will authenticate with a service principal with certificate. +If these variables are set, rclone will authenticate with a service principal +with certificate. - `tenant`: ID of the service principal's tenant. Also called its "directory" ID. - `client_id`: the service principal's client ID -- `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key. +- `client_certificate_path`: path to a PEM or PKCS12 certificate file including + the private key. - `client_certificate_password`: (optional) password for the certificate file. -- `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header. 
+- `client_send_certificate_chain`: (optional) Specifies whether an + authentication request will include an x5c header to support subject name / + issuer based authentication. When set to "true" or "1", authentication + requests include the x5c header. **NB** `client_certificate_password` must be obscured - see [rclone obscure](/commands/rclone_obscure/). @@ -297,15 +336,18 @@ be explicitly specified using exactly one of the `msi_object_id`, If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is set, this is is equivalent to using `env_auth`. -#### Fedrated Identity Credentials +#### Fedrated Identity Credentials If these variables are set, rclone will authenticate with fedrated identity. - `tenant_id`: tenant_id to authenticate in storage - `client_id`: client ID of the application the user will authenticate to storage -- `msi_client_id`: managed identity client ID of the application the user will authenticate to +- `msi_client_id`: managed identity client ID of the application the user will + authenticate to -By default "api://AzureADTokenExchange" is used as scope for token retrieval over MSI. This token is then exchanged for actual storage token using 'tenant_id' and 'client_id'. +By default "api://AzureADTokenExchange" is used as scope for token retrieval +over MSI. This token is then exchanged for actual storage token using +'tenant_id' and 'client_id'. #### Azure CLI tool `az` {#use_az} @@ -322,7 +364,9 @@ Don't set `env_auth` at the same time. If you want to access resources with public anonymous access then set `account` only. You can do this without making an rclone config: - rclone lsf :azureblob,account=ACCOUNT:CONTAINER +```sh +rclone lsf :azureblob,account=ACCOUNT:CONTAINER +``` {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}} ### Standard options diff --git a/docs/content/azurefiles.md b/docs/content/azurefiles.md index fe28662e3..64ea5695d 100644 --- a/docs/content/azurefiles.md +++ b/docs/content/azurefiles.md @@ -14,11 +14,13 @@ e.g. `remote:path/to/dir`. Here is an example of making a Microsoft Azure Files Storage configuration. For a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -88,20 +90,28 @@ Once configured you can use rclone. See all files in the top level: - rclone lsf remote: +```sh +rclone lsf remote: +``` Make a new directory in the root: - rclone mkdir remote:dir +```sh +rclone mkdir remote:dir +``` Recursively List the contents: - rclone ls remote: +```sh +rclone ls remote: +``` Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory. - rclone sync --interactive /home/local/directory remote:dir +```sh +rclone sync --interactive /home/local/directory remote:dir +``` ### Modified time @@ -173,26 +183,35 @@ user with a password, depending on which environment variable are set. It reads configuration from these variables, in the following order: 1. Service principal with client secret - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. - `AZURE_CLIENT_ID`: the service principal's client ID - `AZURE_CLIENT_SECRET`: one of the service principal's client secrets 2. 
Service principal with certificate - - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its "directory" ID. + - `AZURE_TENANT_ID`: ID of the service principal's tenant. Also called its + "directory" ID. - `AZURE_CLIENT_ID`: the service principal's client ID - - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file including the private key. - - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the certificate file. - - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header. + - `AZURE_CLIENT_CERTIFICATE_PATH`: path to a PEM or PKCS12 certificate file + including the private key. + - `AZURE_CLIENT_CERTIFICATE_PASSWORD`: (optional) password for the + certificate file. + - `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN`: (optional) Specifies whether an + authentication request will include an x5c header to support subject + name / issuer based authentication. When set to "true" or "1", + authentication requests include the x5c header. 3. User with username and password - `AZURE_TENANT_ID`: (optional) tenant to authenticate in. Defaults to "organizations". - - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate to + - `AZURE_CLIENT_ID`: client ID of the application the user will authenticate + to - `AZURE_USERNAME`: a username (usually an email address) - `AZURE_PASSWORD`: the user's password 4. Workload Identity - - `AZURE_TENANT_ID`: Tenant to authenticate in. - - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate to. - - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file. - - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint (default: login.microsoftonline.com). - + - `AZURE_TENANT_ID`: Tenant to authenticate in + - `AZURE_CLIENT_ID`: Client ID of the application the user will authenticate + to + - `AZURE_FEDERATED_TOKEN_FILE`: Path to projected service account token file + - `AZURE_AUTHORITY_HOST`: Authority of an Azure Active Directory endpoint + (default: login.microsoftonline.com). ##### Env Auth: 2. Managed Service Identity Credentials @@ -219,15 +238,21 @@ Credentials created with the `az` tool can be picked up using `env_auth`. For example if you were to login with a service principal like this: - az login --service-principal -u XXX -p XXX --tenant XXX +```sh +az login --service-principal -u XXX -p XXX --tenant XXX +``` Then you could access rclone resources like this: - rclone lsf :azurefiles,env_auth,account=ACCOUNT: +```sh +rclone lsf :azurefiles,env_auth,account=ACCOUNT: +``` Or - rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles: +```sh +rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles: +``` #### Account and Shared Key @@ -244,7 +269,8 @@ To use it leave `account`, `key` and "sas_url" blank and fill in `connection_str #### Service principal with client secret -If these variables are set, rclone will authenticate with a service principal with a client secret. +If these variables are set, rclone will authenticate with a service principal +with a client secret. - `tenant`: ID of the service principal's tenant. Also called its "directory" ID. 
- `client_id`: the service principal's client ID
- `client_secret`: one of the service principal's client secrets

The credentials can also be placed in a file using the
`service_principal_file` configuration option.

#### Service principal with certificate

-If these variables are set, rclone will authenticate with a service principal with certificate.
+If these variables are set, rclone will authenticate with a service principal
+with certificate.

- `tenant`: ID of the service principal's tenant. Also called its "directory" ID.
- `client_id`: the service principal's client ID
-- `client_certificate_path`: path to a PEM or PKCS12 certificate file including the private key.
+- `client_certificate_path`: path to a PEM or PKCS12 certificate file including
+  the private key.
- `client_certificate_password`: (optional) password for the certificate file.
-- `client_send_certificate_chain`: (optional) Specifies whether an authentication request will include an x5c header to support subject name / issuer based authentication. When set to "true" or "1", authentication requests include the x5c header.
+- `client_send_certificate_chain`: (optional) Specifies whether an authentication
+  request will include an x5c header to support subject name / issuer based
+  authentication. When set to "true" or "1", authentication requests include
+  the x5c header.

**NB** `client_certificate_password` must be obscured - see [rclone obscure](/commands/rclone_obscure/).

#### Managed Service Identity Credentials {#use_msi}

When using Managed Service Identity if the VM(SS) on which this program
is running has a system-assigned identity, it will be used by default. If
the resource has no system-assigned but exactly one user-assigned identity,
the user-assigned identity will be used by default.

If the resource has multiple user-assigned identities you will need to
unset `env_auth` and set `use_msi` instead. You will also need to
specify the `msi_object_id`, `msi_client_id`, or `msi_mi_res_id`
parameter.

If none of `msi_object_id`, `msi_client_id`, or `msi_mi_res_id` is set, this is
is equivalent to using `env_auth`.
-#### Fedrated Identity Credentials 
+#### Fedrated Identity Credentials

If these variables are set, rclone will authenticate with fedrated identity.

- `tenant_id`: tenant_id to authenticate in storage
- `client_id`: client ID of the application the user will authenticate to storage
-- `msi_client_id`: managed identity client ID of the application the user will authenticate to
+- `msi_client_id`: managed identity client ID of the application the user will
+  authenticate to
+
+By default "api://AzureADTokenExchange" is used as scope for token retrieval
+over MSI. This token is then exchanged for actual storage token using 'tenant_id'
+and 'client_id'.

-By default "api://AzureADTokenExchange" is used as scope for token retrieval over MSI. This token is then exchanged for actual storage token using 'tenant_id' and 'client_id'.
- 
 #### Azure CLI tool `az` {#use_az}
+
 Set to use the [Azure CLI tool `az`](https://learn.microsoft.com/en-us/cli/azure/)
 as the sole means of authentication.
 Setting this can be useful if you wish to use the `az` CLI on a host with
diff --git a/docs/content/b2.md b/docs/content/b2.md
index bb281a2f3..5ebe5dbf6 100644
--- a/docs/content/b2.md
+++ b/docs/content/b2.md
@@ -15,7 +15,9 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.

Here is an example of making a b2 configuration. First run

-    rclone config
+```sh
+rclone config
+```

This will guide you through an interactive setup process. To authenticate
you will either need your Account ID (a short hex number) and Master
Application Key (a long hex number) OR an Application Key, which is the
recommended method. See below for further details on generating and using
an Application Key.

-```
-No remotes found, make a new one?
+```text
+No remotes found, make a new one?
n) New remote q) Quit config n/q> n @@ -60,20 +62,29 @@ This remote is called `remote` and can now be used like this See all buckets - rclone lsd remote: +```sh +rclone lsd remote: +``` Create a new bucket - rclone mkdir remote:bucket +```sh +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket +```sh +rclone ls remote:bucket +``` + Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. - rclone sync --interactive /home/local/directory remote:bucket +```sh +rclone sync --interactive /home/local/directory remote:bucket +``` ### Application Keys @@ -219,7 +230,7 @@ version followed by a `cleanup` of the old versions. Show current version and all the versions with `--b2-versions` flag. -``` +```sh $ rclone -q ls b2:cleanup-test 9 one.txt @@ -232,7 +243,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test Retrieve an old version -``` +```sh $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp $ ls -l /tmp/one-v2016-07-04-141003-000.txt @@ -241,7 +252,7 @@ $ ls -l /tmp/one-v2016-07-04-141003-000.txt Clean up all the old versions and show that they've gone. -``` +```sh $ rclone -q cleanup b2:cleanup-test $ rclone -q ls b2:cleanup-test @@ -256,11 +267,13 @@ $ rclone -q --b2-versions ls b2:cleanup-test When using `--b2-versions` flag rclone is relying on the file name to work out whether the objects are versions or not. Versions' names are created by inserting timestamp between file name and its extension. -``` + +```sh 9 file.txt 8 file-v2023-07-17-161032-000.txt 16 file-v2023-06-15-141003-000.txt ``` + If there are real files present with the same names as versions, then behaviour of `--b2-versions` can be unpredictable. @@ -270,7 +283,7 @@ It is useful to know how many requests are sent to the server in different scena All copy commands send the following 4 requests: -``` +```text /b2api/v1/b2_authorize_account /b2api/v1/b2_create_bucket /b2api/v1/b2_list_buckets @@ -287,7 +300,7 @@ require any files to be uploaded, no more requests will be sent. Uploading files that do not require chunking, will send 2 requests per file upload: -``` +```text /b2api/v1/b2_get_upload_url /b2api/v1/b2_upload_file/ ``` @@ -295,7 +308,7 @@ file upload: Uploading files requiring chunking, will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk: -``` +```text /b2api/v1/b2_start_large_file /b2api/v1/b2_get_upload_part_url /b2api/v1/b2_upload_part/ @@ -309,14 +322,14 @@ rclone will show and act on older versions of files. For example Listing without `--b2-versions` -``` +```sh $ rclone -q ls b2:cleanup-test 9 one.txt ``` And with -``` +```sh $ rclone -q --b2-versions ls b2:cleanup-test 9 one.txt 8 one-v2016-07-04-141032-000.txt @@ -336,7 +349,7 @@ permitted, so you can't upload files or delete them. Rclone supports generating file share links for private B2 buckets. 
They can either be for a file for example: -``` +```sh ./rclone link B2:bucket/path/to/file.txt https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx @@ -344,7 +357,7 @@ https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx or if run on a directory you will get: -``` +```sh ./rclone link B2:bucket/path https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx ``` @@ -352,7 +365,7 @@ https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx you can then use the authorization token (the part of the url from the `?Authorization=` on) on any file path under that directory. For example: -``` +```text https://f002.backblazeb2.com/file/bucket/path/to/file1?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/file2?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxxx diff --git a/docs/content/bisync.md b/docs/content/bisync.md index 7cfe63fce..2473b58ea 100644 --- a/docs/content/bisync.md +++ b/docs/content/bisync.md @@ -31,7 +31,7 @@ section) before using, or data loss can result. Questions can be asked in the For example, your first command might look like this: -```bash +```sh rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run ``` @@ -40,7 +40,7 @@ After that, remove `--resync` as well. Here is a typical run log (with timestamps removed for clarity): -```bash +```sh rclone bisync /testdir/path1/ /testdir/path2/ --verbose INFO : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/" INFO : Path1 checking for diffs @@ -86,7 +86,7 @@ INFO : Bisync successful ## Command line syntax -```bash +```sh $ rclone bisync --help Usage: rclone bisync remote1:path1 remote2:path2 [flags] @@ -169,7 +169,7 @@ be copied to Path1, and the process will then copy the Path1 tree to Path2. The `--resync` sequence is roughly equivalent to the following (but see [`--resync-mode`](#resync-mode) for other options): -```bash +```sh rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs] rclone copy Path1 Path2 [--create-empty-src-dirs] ``` @@ -225,7 +225,7 @@ Shutdown](#graceful-shutdown) mode, when needed) for a very robust almost any interruption it might encounter. Consider adding something like the following: -```bash +```sh --resilient --recover --max-lock 2m --conflict-resolve newer ``` @@ -353,13 +353,13 @@ simultaneously (or just `modtime` AND `checksum`). being `size`, `modtime`, and `checksum`. For example, if you want to compare size and checksum, but not modtime, you would do: -```bash +```sh --compare size,checksum ``` Or if you want to compare all three: -```bash +```sh --compare size,modtime,checksum ``` @@ -627,7 +627,7 @@ specified (or when two identical suffixes are specified.) i.e. with `--conflict-loser pathname`, all of the following would produce exactly the same result: -```bash +```sh --conflict-suffix path --conflict-suffix path,path --conflict-suffix path1,path2 @@ -642,7 +642,7 @@ changed with the [`--suffix-keep-extension`](/docs/#suffix-keep-extension) flag curly braces as globs. This can be helpful to track the date and/or time that each conflict was handled by bisync. 
For example: -```bash +```sh --conflict-suffix {DateOnly}-conflict // result: myfile.txt.2006-01-02-conflict1 ``` @@ -667,7 +667,7 @@ conflicts with `..path1` and `..path2` (with two periods, and `path` instead of additional dots can be added by including them in the specified suffix string. For example, for behavior equivalent to the previous default, use: -```bash +```sh [--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path ``` @@ -707,13 +707,13 @@ For example, a possible sequence could look like this: 1. Normally scheduled bisync run: - ```bash + ```sh rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient ``` 2. Periodic independent integrity check (perhaps scheduled nightly or weekly): - ```bash + ```sh rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt ``` @@ -721,7 +721,7 @@ For example, a possible sequence could look like this: If one side is more up-to-date and you want to make the other side match it, you could run: - ```bash + ```sh rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v ``` @@ -851,7 +851,7 @@ override `--backup-dir`. Example: -```bash +```sh rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case ``` @@ -1383,7 +1383,7 @@ listings and thus not checked during the check access phase. Here are two normal runs. The first one has a newer file on the remote. The second has no deltas between local and remote. -```bash +```sh 2021/05/16 00:24:38 INFO : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/" 2021/05/16 00:24:38 INFO : Path1 checking for diffs 2021/05/16 00:24:38 INFO : - Path1 File is new - file.txt @@ -1433,7 +1433,7 @@ numerous such messages in the log. Since there are no final error/warning messages on line *7*, rclone has recovered from failure after a retry, and the overall sync was successful. -```bash +```sh 1: 2021/05/14 00:44:12 INFO : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:" 2: 2021/05/14 00:44:12 INFO : Path1 checking for diffs 3: 2021/05/14 00:44:12 INFO : Path2 checking for diffs @@ -1446,7 +1446,7 @@ recovered from failure after a retry, and the overall sync was successful. This log shows a *Critical failure* which requires a `--resync` to recover from. See the [Runtime Error Handling](#error-handling) section. -```bash +```sh 2021/05/12 00:49:40 INFO : Google drive root '': Waiting for checks to finish 2021/05/12 00:49:40 INFO : Google drive root '': Waiting for transfers to finish 2021/05/12 00:49:40 INFO : Google drive root '': not deleting files as there were IO errors @@ -1531,7 +1531,7 @@ on Linux you can use *Cron* which is described below. The 1st example runs a sync every 5 minutes between a local directory and an OwnCloud server, with output logged to a runlog file: -```bash +```sh # Minute (0-59) # Hour (0-23) # Day of Month (1-31) @@ -1548,7 +1548,7 @@ If you run `rclone bisync` as a cron job, redirect stdout/stderr to a file. 
The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the `>>`) and stderr (via `2>&1`) to a log file. -```bash +```sh 0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1 ``` @@ -1630,7 +1630,7 @@ Rerunning the test will let it pass. Consider such failures as noise. ### Test command syntax -```bash +```sh usage: go test ./cmd/bisync [options...] Options: diff --git a/docs/content/box.md b/docs/content/box.md index 3398c14cb..85bc27a97 100644 --- a/docs/content/box.md +++ b/docs/content/box.md @@ -18,11 +18,13 @@ to use JWT authentication. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -94,11 +96,15 @@ Once configured you can then use `rclone` like this, List directories in top level of your Box - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your Box - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to an Box directory called backup @@ -123,9 +129,9 @@ According to the [box docs](https://developer.box.com/v2.0/docs/oauth-20#section This means that if you - * Don't use the box remote for 60 days - * Copy the config file with a box refresh token in and use it in two places - * Get an error on a token refresh +- Don't use the box remote for 60 days +- Copy the config file with a box refresh token in and use it in two places +- Get an error on a token refresh then rclone will return an error which includes the text `Invalid refresh token`. @@ -138,7 +144,7 @@ did the authentication on. Here is how to do it. -``` +```sh $ rclone config Current remotes: diff --git a/docs/content/cache.md b/docs/content/cache.md index dd9f1d976..79b6be29f 100644 --- a/docs/content/cache.md +++ b/docs/content/cache.md @@ -31,11 +31,13 @@ with `cache`. Here is an example of how to make a remote called `test-cache`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote r) Rename remote @@ -115,19 +117,25 @@ You can then use it like this, List directories in top level of your drive - rclone lsd test-cache: +```sh +rclone lsd test-cache: +``` List all the files in your drive - rclone ls test-cache: +```sh +rclone ls test-cache: +``` To start a cached mount - rclone mount --allow-other test-cache: /var/tmp/test-cache +```sh +rclone mount --allow-other test-cache: /var/tmp/test-cache +``` -### Write Features ### +### Write Features -### Offline uploading ### +### Offline uploading In an effort to make writing through cache more reliable, the backend now supports this feature which can be activated by specifying a @@ -152,7 +160,7 @@ Uploads will be stored in a queue and be processed based on the order they were The queue and the temporary storage is persistent across restarts but can be cleared on startup with the `--cache-db-purge` flag. -### Write Support ### +### Write Support Writes are supported through `cache`. 
One caveat is that a mounted cache remote does not add any retry or fallback
mechanism to the upload operation. This will depend on the implementation
of the wrapped remote. Consider using `Offline uploading` to avoid
missing uploads during network interruption.

One special case is covered with `cache-writes` which will cache the file
data at the same time as the upload when it is enabled making it available
from the cache store immediately once the upload is finished.

-### Read Features ###
+### Read Features

-#### Multiple connections ####
+#### Multiple connections

To counter the high latency between a local PC where rclone is running and cloud
providers, the cache remote can split multiple requests to the cloud provider for
smaller file chunks and combines them at the end of each of these requests.

This is similar to buffering when media files are played online. Rclone
will stay around the current marker but always try its best to stay ahead
and prepare the data before.

-#### Plex Integration ####
+#### Plex Integration

There is a direct integration with Plex which allows cache to detect during reading
if the file is in playback or not. This helps cache to adapt how it queries
the cloud provider depending on what is needed for.

Scans will have a minimum amount of workers (1) while in a confirmed playback cache
will deploy the configured number of workers.

This integration opens the doorway to additional performance improvements
which could be explored in the future.

**Note:** If Plex options are not configured, `cache` will function with its
configured options without adapting any its behavior.

How to enable? Run `rclone config` and add all the Plex options (endpoint, username
and password) in your remote and it will be automatically enabled.

Affected settings:
-- `cache-workers`: _Configured value_ during confirmed playback or _1_ all the other times

-##### Certificate Validation #####
+
+- `cache-workers`: *Configured value* during confirmed playback or *1* all the
+  other times
+
+##### Certificate Validation

When the Plex server is configured to only accept secure connections, it is
possible to use `.plex.direct` URLs to ensure certificate validation succeeds.
These URLs are used by Plex internally to connect to the Plex server securely.

The format for these URLs is the following:

`https://ip-with-dots-replaced.server-hash.plex.direct:32400/`

The `ip-with-dots-replaced` part can be any IPv4 address, where the dots
have been replaced with dashes, e.g. `127.0.0.1` becomes `127-0-0-1`.

To get the `server-hash` part, the easiest way is to visit

-https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
+<https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token>

This page will list all the available Plex servers for your account
with at least one `.plex.direct` link for each. Copy one URL and replace
the IP address with the desired address. This can be used as the `plex_url` value.

-### Known issues ###
+### Known issues

-#### Mount and --dir-cache-time ####
+#### Mount and --dir-cache-time

---dir-cache-time controls the first layer of directory caching which works at the mount layer.
-Being an independent caching mechanism from the `cache` backend, it will manage its own entries
-based on the configured time.
+--dir-cache-time controls the first layer of directory caching which works at
+the mount layer. Being an independent caching mechanism from the `cache` backend,
+it will manage its own entries based on the configured time.

-To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct
-one, try to set `--dir-cache-time` to a lower time than `--cache-info-age`. Default values are
-already configured in this way.
+To avoid getting in a scenario where dir cache has obsolete data and cache would
+have the correct one, try to set `--dir-cache-time` to a lower time than
+`--cache-info-age`. Default values are already configured in this way.

-#### Windows support - Experimental ####
+#### Windows support - Experimental

-There are a couple of issues with Windows `mount` functionality that still require some investigations.
-It should be considered as experimental thus far as fixes come in for this OS.
+There are a couple of issues with Windows `mount` functionality that still
+require some investigations. It should be considered as experimental thus far
+as fixes come in for this OS.
Most of the issues seem to be related to the difference between
filesystems on Linux flavors and Windows as cache is heavily dependent on them.
Any reports or feedback on how cache behaves on this OS is greatly appreciated.
-
-- https://github.com/rclone/rclone/issues/1935
-- https://github.com/rclone/rclone/issues/1907
-- https://github.com/rclone/rclone/issues/1834

-#### Risk of throttling ####
+- [Issue #1935](https://github.com/rclone/rclone/issues/1935)
+- [Issue #1907](https://github.com/rclone/rclone/issues/1907)
+- [Issue #1834](https://github.com/rclone/rclone/issues/1834)
+
+#### Risk of throttling

Future iterations of the cache backend will make use of the pooling functionality
of the cloud provider to synchronize and at the same time make writing through it
-more tolerant to failures.
+more tolerant to failures.

There are a couple of enhancements in track to add these but in the meantime
there is a valid concern that the expiring cache listings can lead to cloud provider
throttles or bans due to repeated queries on it for very large mounts.

Some recommendations:
+
- don't use a very small interval for entry information (`--cache-info-age`)
-- while writes aren't yet optimised, you can still write through `cache` which gives you the advantage
-of adding the file in the cache at the same time if configured to do so.
+- while writes aren't yet optimised, you can still write through `cache` which
+  gives you the advantage of adding the file in the cache at the same time if
+  configured to do so.

Future enhancements:

-- https://github.com/rclone/rclone/issues/1937
-- https://github.com/rclone/rclone/issues/1936
+- [Issue #1937](https://github.com/rclone/rclone/issues/1937)
+- [Issue #1936](https://github.com/rclone/rclone/issues/1936)

-#### cache and crypt ####
+#### cache and crypt

One common scenario is to keep your data encrypted in the cloud provider
using the `crypt` remote. `crypt` uses a similar technique to wrap around
an existing remote and handles this translation in a seamless way.

There is an issue with wrapping the remotes in this order:
{{}}**cloud remote** -> **crypt** -> **cache**{{}}

During testing, I experienced a lot of bans with the remotes in this order.
I suspect it might be related to how crypt opens files on the cloud provider
which makes it think we're downloading the full file instead of small chunks.
Organizing the remotes in this order yields better results:
{{}}**cloud remote** -> **cache** -> **crypt**{{}}

-#### absolute remote paths ####
+#### absolute remote paths

-`cache` can not differentiate between relative and absolute paths for the wrapped remote.
-Any path given in the `remote` config setting and on the command line will be passed to
-the wrapped remote as is, but for storing the chunks on disk the path will be made
-relative by removing any leading `/` character.
+`cache` can not differentiate between relative and absolute paths for the wrapped
+remote. Any path given in the `remote` config setting and on the command line will
+be passed to the wrapped remote as is, but for storing the chunks on disk the path
+will be made relative by removing any leading `/` character.

-This behavior is irrelevant for most backend types, but there are backends where a leading `/`
-changes the effective directory, e.g. in the `sftp` backend paths starting with a `/` are
-relative to the root of the SSH server and paths without are relative to the user home directory.
-As a result `sftp:bin` and `sftp:/bin` will share the same cache folder, even if they represent
-a different directory on the SSH server.
+This behavior is irrelevant for most backend types, but there are backends where
+a leading `/` changes the effective directory, e.g. in the `sftp` backend paths
+starting with a `/` are relative to the root of the SSH server and paths without
+are relative to the user home directory. As a result `sftp:bin` and `sftp:/bin`
+will share the same cache folder, even if they represent a different directory
+on the SSH server.

-### Cache and Remote Control (--rc) ###
-Cache supports the new `--rc` mode in rclone and can be remote controlled through the following end points:
-By default, the listener is disabled if you do not add the flag.
+
+### Cache and Remote Control (--rc)
+
+Cache supports the new `--rc` mode in rclone and can be remote controlled
+through the following end points: By default, the listener is disabled if
+you do not add the flag.

### rc cache/expire
+
Purge a remote from the cache backend. Supports either a directory or a file.
It supports both encrypted and unencrypted file names if cache is wrapped by crypt.

Params:
-  - **remote** = path to remote **(required)**
-  - **withData** = true/false to delete cached data (chunks) as well _(optional, false by default)_
+
+- **remote** = path to remote **(required)**
+- **withData** = true/false to delete cached data (chunks) as
+  well *(optional, false by default)*

{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs" >}}
### Standard options
diff --git a/docs/content/chunker.md b/docs/content/chunker.md
index 2de029e3c..c36071da3 100644
--- a/docs/content/chunker.md
+++ b/docs/content/chunker.md
@@ -26,8 +26,8 @@ then you should probably put the bucket in the remote `s3:bucket`.

Now configure `chunker` using `rclone config`. We will call this one `overlay`
to separate it from the `remote` itself.

-```
-No remotes found, make a new one?
+```text
+No remotes found, make a new one?
 n) New remote
 s) Set configuration password
 q) Quit config
@@ -92,16 +92,15 @@ So if you use a remote of `/path/to/secret/files` then rclone will
 chunk stuff in that directory. If you use a remote of `name` then rclone
 will put files in a directory called `name` in the current directory.

-
 ### Chunking

 When rclone starts a file upload, chunker checks the file size. If it
 doesn't exceed the configured chunk size, chunker will just pass the file
-to the wrapped remote (however, see caveat below). If a file is large, chunker will transparently cut
-data in pieces with temporary names and stream them one by one, on the fly.
-Each data chunk will contain the specified number of bytes, except for the
-last one which may have less data. If file size is unknown in advance
-(this is called a streaming upload), chunker will internally create
+to the wrapped remote (however, see caveat below). If a file is large, chunker
+will transparently cut data in pieces with temporary names and stream them one
+by one, on the fly. Each data chunk will contain the specified number of bytes,
+except for the last one which may have less data. If file size is unknown in
+advance (this is called a streaming upload), chunker will internally create
 a temporary copy, record its size and repeat the above process.
 When upload completes, temporary chunk files are finally renamed.
@@ -129,14 +128,13 @@ proceed with current command.
 You can set the `--chunker-fail-hard` flag to have commands
 abort with error message in such cases.
-**Caveat**: As it is now, chunker will always create a temporary file in the +**Caveat**: As it is now, chunker will always create a temporary file in the backend and then rename it, even if the file is below the chunk threshold. This will result in unnecessary API calls and can severely restrict throughput -when handling transfers primarily composed of small files on some backends (e.g. Box). -A workaround to this issue is to use chunker only for files above the chunk threshold -via `--min-size` and then perform a separate call without chunker on the remaining -files. - +when handling transfers primarily composed of small files on some backends +(e.g. Box). A workaround to this issue is to use chunker only for files above +the chunk threshold via `--min-size` and then perform a separate call without +chunker on the remaining files. #### Chunk names @@ -165,7 +163,6 @@ non-chunked files. When using `norename` transactions, chunk names will additionally have a unique file version suffix. For example, `BIG_FILE_NAME.rclone_chunk.001_bp562k`. - ### Metadata Besides data chunks chunker will by default create metadata object for @@ -199,7 +196,6 @@ base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially missing last chunk) than format with metadata enabled. - ### Hashsums Chunker supports hashsums only when a compatible metadata is present. @@ -243,7 +239,6 @@ hashsums at destination. Beware of consequences: the `sync` command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found. - ### Modification times Chunker stores modification times using the wrapped remote so support @@ -254,7 +249,6 @@ modification time of the metadata object on the wrapped remote. If file is chunked but metadata format is `none` then chunker will use modification time of the first data chunk. - ### Migrations The idiomatic way to migrate to a different chunk size, hash type, transaction @@ -283,7 +277,6 @@ somewhere using the chunker remote and purge the original directory. The `copy` command will copy only active chunks while the `purge` will remove everything including garbage. - ### Caveats and Limitations Chunker requires wrapped remote to support server-side `move` (or `copy` + diff --git a/docs/content/cloudinary.md b/docs/content/cloudinary.md index c150e7a17..8297d9817 100644 --- a/docs/content/cloudinary.md +++ b/docs/content/cloudinary.md @@ -11,11 +11,16 @@ This is a backend for the [Cloudinary](https://cloudinary.com/) platform ## About Cloudinary [Cloudinary](https://cloudinary.com/) is an image and video API platform. -Trusted by 1.5 million developers and 10,000 enterprise and hyper-growth companies as a critical part of their tech stack to deliver visually engaging experiences. +Trusted by 1.5 million developers and 10,000 enterprise and hyper-growth +companies as a critical part of their tech stack to deliver visually engaging +experiences. ## Accounts & Pricing -To use this backend, you need to [create a free account](https://cloudinary.com/users/register_free) on Cloudinary. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://cloudinary.com/pricing). +To use this backend, you need to [create a free account](https://cloudinary.com/users/register_free) +on Cloudinary. Start with a free plan with generous usage limits. 
Then, as your +requirements grow, upgrade to a plan that best fits your needs. +See [the pricing details](https://cloudinary.com/pricing). ## Securing Your Credentials @@ -25,13 +30,17 @@ Please refer to the [docs](/docs/#configuration-encryption-cheatsheet) Here is an example of making a Cloudinary configuration. -First, create a [cloudinary.com](https://cloudinary.com/users/register_free) account and choose a plan. +First, create a [cloudinary.com](https://cloudinary.com/users/register_free) +account and choose a plan. -You will need to log in and get the `API Key` and `API Secret` for your account from the developer section. +You will need to log in and get the `API Key` and `API Secret` for your account +from the developer section. Now run -`rclone config` +```sh +rclone config +``` Follow the interactive setup process: @@ -104,15 +113,21 @@ y/e/d> y List directories in the top level of your Media Library -`rclone lsd cloudinary-media-library:` +```sh +rclone lsd cloudinary-media-library: +``` Make a new directory. -`rclone mkdir cloudinary-media-library:directory` +```sh +rclone mkdir cloudinary-media-library:directory +``` List the contents of a directory. -`rclone ls cloudinary-media-library:directory` +```sh +rclone ls cloudinary-media-library:directory +``` ### Modified time and hashes diff --git a/docs/content/combine.md b/docs/content/combine.md index d6f05ca79..63a83ffbd 100644 --- a/docs/content/combine.md +++ b/docs/content/combine.md @@ -11,7 +11,7 @@ tree. For example you might have a remote for images on one provider: -``` +```sh $ rclone tree s3:imagesbucket / ├── image1.jpg @@ -20,7 +20,7 @@ $ rclone tree s3:imagesbucket And a remote for files on another: -``` +```sh $ rclone tree drive:important/files / ├── file1.txt @@ -30,7 +30,7 @@ $ rclone tree drive:important/files The `combine` backend can join these together into a synthetic directory structure like this: -``` +```sh $ rclone tree combined: / ├── files @@ -44,7 +44,9 @@ $ rclone tree combined: You'd do this by specifying an `upstreams` parameter in the config like this - upstreams = images=s3:imagesbucket files=drive:important/files +```text +upstreams = images=s3:imagesbucket files=drive:important/files +``` During the initial setup with `rclone config` you will specify the upstreams remotes as a space separated list. The upstream remotes can @@ -55,11 +57,13 @@ either be a local paths or other remotes. Here is an example of how to make a combine called `remote` for the example above. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -103,21 +107,25 @@ the shared drives you have access to. 
Assuming your main (non shared drive) Google drive remote is called `drive:` you would run - rclone backend -o config drives drive: +```sh +rclone backend -o config drives drive: +``` This would produce something like this: - [My Drive] - type = alias - remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: +```ini +[My Drive] +type = alias +remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=: - [Test Drive] - type = alias - remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: +[Test Drive] +type = alias +remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=: - [AllDrives] - type = combine - upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +[AllDrives] +type = combine +upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:" +``` If you then add that config to your config file (find it with `rclone config file`) then you can access all the shared drives in one place diff --git a/docs/content/compress.md b/docs/content/compress.md index 425c3653e..9fe8d518a 100644 --- a/docs/content/compress.md +++ b/docs/content/compress.md @@ -9,18 +9,20 @@ status: Experimental ## Warning -This remote is currently **experimental**. Things may break and data may be lost. Anything you do with this remote is -at your own risk. Please understand the risks associated with using experimental code and don't use this remote in -critical applications. +This remote is currently **experimental**. Things may break and data may be lost. +Anything you do with this remote is at your own risk. Please understand the risks +associated with using experimental code and don't use this remote in critical +applications. -The `Compress` remote adds compression to another remote. It is best used with remotes containing -many large compressible files. +The `Compress` remote adds compression to another remote. It is best used with +remotes containing many large compressible files. ## Configuration -To use this remote, all you need to do is specify another remote and a compression mode to use: +To use this remote, all you need to do is specify another remote and a +compression mode to use: -``` +```text Current remotes: Name Type @@ -72,22 +74,26 @@ y/e/d> y ### Compression Modes -Currently only gzip compression is supported. It provides a decent balance between speed and size and is well -supported by other applications. Compression strength can further be configured via an advanced setting where 0 is no +Currently only gzip compression is supported. It provides a decent balance +between speed and size and is well supported by other applications. Compression +strength can further be configured via an advanced setting where 0 is no compression and 9 is strongest compression. ### File types -If you open a remote wrapped by compress, you will see that there are many files with an extension corresponding to -the compression algorithm you chose. These files are standard files that can be opened by various archive programs, +If you open a remote wrapped by compress, you will see that there are many +files with an extension corresponding to the compression algorithm you chose. +These files are standard files that can be opened by various archive programs, but they have some hidden metadata that allows them to be used by rclone. -While you may download and decompress these files at will, do **not** manually delete or rename files. Files without -correct metadata files will not be recognized by rclone. 
+While you may download and decompress these files at will, do **not** manually
+delete or rename files. Files without correct metadata will not be
+recognized by rclone.
 
 ### File names
 
-The compressed files will be named `*.###########.gz` where `*` is the base file and the `#` part is base64 encoded
-size of the uncompressed file. The file names should not be changed by anything other than the rclone compression backend.
+The compressed files will be named `*.###########.gz` where `*` is the base
+file and the `#` part is base64 encoded size of the uncompressed file. The file
+names should not be changed by anything other than the rclone compression backend.
 
 {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/compress/compress.go then run make backenddocs" >}}
 ### Standard options
diff --git a/docs/content/contact.md b/docs/content/contact.md
index 63e854090..ec244a280 100644
--- a/docs/content/contact.md
+++ b/docs/content/contact.md
@@ -9,20 +9,20 @@ description: "Contact the rclone project"
 
 Forum for questions and general discussion:
 
-- https://forum.rclone.org
+- <https://forum.rclone.org>
 
 ## Business support
 
 For business support or sponsorship enquiries please see:
 
-- https://rclone.com/
-- sponsorship@rclone.com
+- <https://rclone.com/>
+- <sponsorship@rclone.com>
 
 ## GitHub repository
 
 The project's repository is located at:
 
-- https://github.com/rclone/rclone
+- <https://github.com/rclone/rclone>
 
 There you can file bug reports or contribute with pull requests.
 
@@ -37,7 +37,7 @@ You can also follow Nick on twitter for rclone announcements:
 
 Or if all else fails or you want to ask something private or confidential
 
-- info@rclone.com
+- <info@rclone.com>
 
 Please don't email requests for help to this address - those are better
 directed to the forum unless you'd like to sign up for business
diff --git a/docs/content/crypt.md b/docs/content/crypt.md
index db3edcc74..03e2564dc 100644
--- a/docs/content/crypt.md
+++ b/docs/content/crypt.md
@@ -31,11 +31,11 @@ will just give you the encrypted (scrambled) format, and anything you
 upload will *not* become encrypted.
 
 The encryption is a secret-key encryption (also called symmetric key encryption)
-algorithm, where a password (or pass phrase) is used to generate real encryption key.
-The password can be supplied by user, or you may chose to let rclone
-generate one. It will be stored in the configuration file, in a lightly obscured form.
-If you are in an environment where you are not able to keep your configuration
-secured, you should add
+algorithm, where a password (or pass phrase) is used to generate the real
+encryption key. The password can be supplied by the user, or you may choose to
+let rclone generate one. It will be stored in the configuration file, in a
+lightly obscured form. If you are in an environment where you are not able to
+keep your configuration secured, you should add
 [configuration encryption](https://rclone.org/docs/#configuration-encryption)
 as protection. As long as you have this configuration file, you will be able
 to decrypt your data. Without the configuration file, as long as you remember
@@ -47,9 +47,9 @@ See below for guidance to [changing password](#changing-password).
 
 Encryption uses [cryptographic salt](https://en.wikipedia.org/wiki/Salt_(cryptography)),
 to permute the encryption key so that the same string may be
 encrypted in different ways. When configuring the crypt remote it is optional to enter a salt,
-or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique string.
-Normally in cryptography, the salt is stored together with the encrypted content,
-and do not have to be memorized by the user. This is not the case in rclone,
+or to let rclone generate a unique salt. If omitted, rclone uses a built-in unique
+string. Normally in cryptography, the salt is stored together with the encrypted
+content, and does not have to be memorized by the user. This is not the case in rclone,
 because rclone does not store any additional information on the remotes. Use
 of custom salt is effectively a second password that must be memorized.
 
@@ -86,8 +86,8 @@ anything you write will be unencrypted.
 To avoid issues it is best to configure a dedicated path for encrypted content,
 and access it exclusively through a crypt remote.
 
-```
-No remotes found, make a new one?
+```text
+No remotes found, make a new one?
 n) New remote
 s) Set configuration password
 q) Quit config
@@ -176,7 +176,8 @@ y/e/d>
 
 **Important** The crypt password stored in `rclone.conf` is lightly
 obscured. That only protects it from cursory inspection. It is not
-secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption) of `rclone.conf` is specified.
+secure unless [configuration encryption](https://rclone.org/docs/#configuration-encryption)
+of `rclone.conf` is specified.
 
 A long passphrase is recommended, or `rclone config` can generate a
 random one.
@@ -191,8 +192,8 @@ due to the different salt.
 
 Rclone does not encrypt
 
-  * file length - this can be calculated within 16 bytes
-  * modification time - used for syncing
+- file length - this can be calculated within 16 bytes
+- modification time - used for syncing
 
 ### Specifying the remote
 
@@ -244,6 +245,7 @@ is to re-upload everything via a crypt remote
 configured with your new password.
 
 Depending on the size of your data, your bandwidth, storage quota etc,
 there are different approaches you can take:
+
 - If you have everything in a different location, for example on your local system,
   you could remove all of the prior encrypted files, change the password for
   your configured crypt remote (or delete and re-create the crypt configuration),
@@ -272,7 +274,7 @@ details, and a tool you can use to check if you are affected.
 
 Create the following file structure using "standard" file name
 encryption.
 
-```
+```sh
 plaintext/
 ├── file0.txt
 ├── file1.txt
@@ -285,7 +287,7 @@ plaintext/
 
 Copy these to the remote, and list them
 
-```
+```sh
 $ rclone -q copy plaintext secret:
 $ rclone -q ls secret:
         7 file1.txt
@@ -297,7 +299,7 @@ $ rclone -q ls secret:
 
 The crypt remote looks like
 
-```
+```sh
 $ rclone -q ls remote:path
        55 hagjclgavj2mbiqm6u6cnjjqcg
        54 v05749mltvv1tf4onltun46gls
@@ -308,7 +310,7 @@ $ rclone -q ls remote:path
 
 The directory structure is preserved
 
-```
+```sh
 $ rclone -q ls secret:subdir
         8 file2.txt
         9 file3.txt
@@ -319,7 +321,7 @@ Without file name encryption `.bin` extensions
 are added to underlying names. This prevents the cloud provider attempting
 to interpret file content.
-```
+```sh
 $ rclone -q ls remote:path
        54 file0.txt.bin
        57 subdir/file3.txt.bin
@@ -332,18 +334,18 @@ $ rclone -q ls remote:path
 
 Off
 
-  * doesn't hide file names or directory structure
-  * allows for longer file names (~246 characters)
-  * can use sub paths and copy single files
+- doesn't hide file names or directory structure
+- allows for longer file names (~246 characters)
+- can use sub paths and copy single files
 
 Standard
 
-  * file names encrypted
-  * file names can't be as long (~143 characters)
-  * can use sub paths and copy single files
-  * directory structure visible
-  * identical files names will have identical uploaded names
-  * can use shortcuts to shorten the directory recursion
+- file names encrypted
+- file names can't be as long (~143 characters)
+- can use sub paths and copy single files
+- directory structure visible
+- identical file names will have identical uploaded names
+- can use shortcuts to shorten the directory recursion
 
 Obfuscation
 
@@ -362,11 +364,11 @@ equivalents.
 
 Obfuscation cannot be relied upon for strong protection.
 
-  * file names very lightly obfuscated
-  * file names can be longer than standard encryption
-  * can use sub paths and copy single files
-  * directory structure visible
-  * identical files names will have identical uploaded names
+- file names very lightly obfuscated
+- file names can be longer than standard encryption
+- can use sub paths and copy single files
+- directory structure visible
+- identical file names will have identical uploaded names
 
 Cloud storage systems have limits on file name length and
 total path length which rclone is more likely to breach using
@@ -380,7 +382,7 @@ For cloud storage systems with case sensitive file names (e.g. Google Drive),
 `base64` can be used to reduce file name length.
 For cloud storage systems using UTF-16 to store file names internally
 (e.g. OneDrive, Dropbox, Box), `base32768` can be used to drastically reduce
-file name length. 
+file name length.
 
 An alternative, future rclone file name encryption mode may tolerate
 backend provider path length limits.
@@ -404,7 +406,6 @@ Example:
 `1/12/123.txt` is encrypted to
 `1/12/qgm4avr35m5loi1th53ato71v0`
 
-
 ### Modification times and hashes
 
 Crypt stores modification times using the underlying remote so support
diff --git a/docs/content/docker.md b/docs/content/docker.md
index f985874d7..c63d74408 100644
--- a/docs/content/docker.md
+++ b/docs/content/docker.md
@@ -20,14 +20,14 @@ As of Docker 1.12 volumes are supported by
 [Docker Swarm](https://docs.docker.com/engine/swarm/key-concepts/)
 included with Docker Engine and created from descriptions in
 [swarm compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference)
-files for use with _swarm stacks_ across multiple cluster nodes.
+files for use with *swarm stacks* across multiple cluster nodes.
 
 [Docker Volume Plugins](https://docs.docker.com/engine/extend/plugins_volume/)
 augment the default `local` volume driver included in Docker with stateful
 volumes shared across containers and hosts. Unlike local volumes, your
-data will _not_ be deleted when such volume is removed. Plugins can run
+data will *not* be deleted when such a volume is removed. Plugins can run
 managed by the docker daemon, as a native system service
-(under systemd, _sysv_ or _upstart_) or as a standalone executable.
+(under systemd, *sysv* or *upstart*) or as a standalone executable.
 Rclone can run as docker volume plugin in all these modes.
It interacts with the local docker daemon via [plugin API](https://docs.docker.com/engine/extend/plugin_api/) and @@ -42,39 +42,43 @@ rclone volume with Docker engine on a standalone Ubuntu machine. Start from [installing Docker](https://docs.docker.com/engine/install/) on the host. -The _FUSE_ driver is a prerequisite for rclone mounting and should be +The *FUSE* driver is a prerequisite for rclone mounting and should be installed on host: -``` + +```sh sudo apt-get -y install fuse3 ``` Create two directories required by rclone docker plugin: -``` + +```sh sudo mkdir -p /var/lib/docker-plugins/rclone/config sudo mkdir -p /var/lib/docker-plugins/rclone/cache ``` Install the managed rclone docker plugin for your architecture (here `amd64`): -``` + +```sh docker plugin install rclone/docker-volume-rclone:amd64 args="-v" --alias rclone --grant-all-permissions docker plugin list ``` Create your [SFTP volume](/sftp/#standard-options): -``` + +```sh docker volume create firstvolume -d rclone -o type=sftp -o sftp-host=_hostname_ -o sftp-user=_username_ -o sftp-pass=_password_ -o allow-other=true ``` Note that since all options are static, you don't even have to run `rclone config` or create the `rclone.conf` file (but the `config` directory should still be present). In the simplest case you can use `localhost` -as _hostname_ and your SSH credentials as _username_ and _password_. +as *hostname* and your SSH credentials as *username* and *password*. You can also change the remote path to your home directory on the host, for example `-o path=/home/username`. - Time to create a test container and mount the volume into it: -``` + +```sh docker run --rm -it -v firstvolume:/mnt --workdir /mnt ubuntu:latest bash ``` @@ -83,7 +87,8 @@ the mounted SFTP remote. You can type `ls` to list the mounted directory or otherwise play with it. Type `exit` when you are done. The container will stop but the volume will stay, ready to be reused. When it's not needed anymore, remove it: -``` + +```sh docker volume list docker volume remove firstvolume ``` @@ -92,7 +97,7 @@ Now let us try **something more elaborate**: [Google Drive](/drive/) volume on multi-node Docker Swarm. You should start from installing Docker and FUSE, creating plugin -directories and installing rclone plugin on _every_ swarm node. +directories and installing rclone plugin on *every* swarm node. Then [setup the Swarm](https://docs.docker.com/engine/swarm/swarm-mode/). Google Drive volumes need an access token which can be setup via web @@ -101,14 +106,15 @@ plugin cannot run a browser so we will use a technique similar to the [rclone setup on a headless box](/remote_setup/). Run [rclone config](/commands/rclone_config_create/) -on _another_ machine equipped with _web browser_ and graphical user interface. +on *another* machine equipped with *web browser* and graphical user interface. Create the [Google Drive remote](/drive/#standard-options). When done, transfer the resulting `rclone.conf` to the Swarm cluster and save as `/var/lib/docker-plugins/rclone/config/rclone.conf` -on _every_ node. By default this location is accessible only to the +on *every* node. By default this location is accessible only to the root user so you will need appropriate privileges. 
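+
+For example, you could push the file to each node over SSH (a sketch, assuming
+root SSH access and illustrative node names `node1` and `node2`):
+
+```sh
+for node in node1 node2; do
+  scp rclone.conf root@"$node":/var/lib/docker-plugins/rclone/config/rclone.conf
+done
+```
+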
The resulting config will look like this: -``` + +```ini [gdrive] type = drive scope = drive @@ -119,7 +125,8 @@ token = {"access_token":...} Now create the file named `example.yml` with a swarm stack description like this: -``` + +```yml version: '3' services: heimdall: @@ -137,16 +144,18 @@ volumes: ``` and run the stack: -``` + +```sh docker stack deploy example -c ./example.yml ``` After a few seconds docker will spread the parsed stack description -over cluster, create the `example_heimdall` service on port _8080_, +over cluster, create the `example_heimdall` service on port *8080*, run service containers on one or more cluster nodes and request the `example_configdata` volume from rclone plugins on the node hosts. You can use the following commands to confirm results: -``` + +```sh docker service ls docker service ps example_heimdall docker volume ls @@ -163,7 +172,8 @@ the `docker volume remove example_configdata` command on every node. Volumes can be created with [docker volume create](https://docs.docker.com/engine/reference/commandline/volume_create/). Here are a few examples: -``` + +```sh docker volume create vol1 -d rclone -o remote=storj: -o vfs-cache-mode=full docker volume create vol2 -d rclone -o remote=:storj,access_grant=xxx:heimdall docker volume create vol3 -d rclone -o type=storj -o path=heimdall -o storj-access-grant=xxx -o poll-interval=0 @@ -175,7 +185,8 @@ name `rclone/docker-volume-rclone` because you provided the `--alias rclone` option. Volumes can be inspected as follows: -``` + +```sh docker volume list docker volume inspect vol1 ``` @@ -184,7 +195,7 @@ docker volume inspect vol1 Rclone flags and volume options are set via the `-o` flag to the `docker volume create` command. They include backend-specific parameters -as well as mount and _VFS_ options. Also there are a few +as well as mount and *VFS* options. Also there are a few special `-o` options: `remote`, `fs`, `type`, `path`, `mount-type` and `persist`. @@ -192,19 +203,23 @@ special `-o` options: trailing colon and optionally with a remote path. See the full syntax in the [rclone documentation](/docs/#syntax-of-remote-paths). This option can be aliased as `fs` to prevent confusion with the -_remote_ parameter of such backends as _crypt_ or _alias_. +*remote* parameter of such backends as *crypt* or *alias*. The `remote=:backend:dir/subdir` syntax can be used to create [on-the-fly (config-less) remotes](/docs/#backend-path-to-dir), while the `type` and `path` options provide a simpler alternative for this. Using two split options -``` + +```sh -o type=backend -o path=dir/subdir ``` + is equivalent to the combined syntax -``` + +```sh -o remote=:backend:dir/subdir ``` + but is arguably easier to parameterize in scripts. The `path` part is optional. @@ -219,7 +234,7 @@ Boolean CLI flags without value will gain the `true` value, e.g. Please note that you can provide parameters only for the backend immediately referenced by the backend type of mounted `remote`. -If this is a wrapping backend like _alias, chunker or crypt_, you cannot +If this is a wrapping backend like *alias, chunker or crypt*, you cannot provide options for the referred to remote or backend. This limitation is imposed by the rclone connection string parser. The only workaround is to feed plugin with `rclone.conf` or configure plugin arguments (see below). 
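+
+For example, to mount a crypt remote through the plugin you could define both it
+and the backend it wraps in the plugin's `rclone.conf` (a sketch with
+illustrative remote names; the obscured password would come from `rclone obscure`):
+
+```ini
+[base]
+type = sftp
+host = localhost
+
+[secret]
+type = crypt
+remote = base:encrypted
+password = XXX
+```
+
+and then request it with `-o remote=secret:` when creating the volume.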
@@ -242,17 +257,21 @@ In future it will allow to persist on-the-fly remotes in the plugin The `remote` value can be extended with [connection strings](/docs/#connection-strings) as an alternative way to supply backend parameters. This is equivalent -to the `-o` backend options with one _syntactic difference_. +to the `-o` backend options with one *syntactic difference*. Inside connection string the backend prefix must be dropped from parameter names but in the `-o param=value` array it must be present. For instance, compare the following option array -``` + +```sh -o remote=:sftp:/home -o sftp-host=localhost ``` + with equivalent connection string: -``` + +```sh -o remote=:sftp,host=localhost:/home ``` + This difference exists because flag options `-o key=val` include not only backend parameters but also mount/VFS flags and possibly other settings. Also it allows to discriminate the `remote` option from the `crypt-remote` @@ -261,11 +280,13 @@ due to clearer value substitution. ## Using with Swarm or Compose -Both _Docker Swarm_ and _Docker Compose_ use +Both *Docker Swarm* and *Docker Compose* use [YAML](http://yaml.org/spec/1.2/spec.html)-formatted text files to describe groups (stacks) of containers, their properties, networks and volumes. -_Compose_ uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) format, -_Swarm_ uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) format. +*Compose* uses the [compose v2](https://docs.docker.com/compose/compose-file/compose-file-v2/#volume-configuration-reference) +format, +*Swarm* uses the [compose v3](https://docs.docker.com/compose/compose-file/compose-file-v3/#volume-configuration-reference) +format. They are mostly similar, differences are explained in the [docker documentation](https://docs.docker.com/compose/compose-file/compose-versioning/#upgrading). @@ -274,7 +295,7 @@ Each of them should be named after its volume and have at least two elements, the self-explanatory `driver: rclone` value and the `driver_opts:` structure playing the same role as `-o key=val` CLI flags: -``` +```yml volumes: volume_name_1: driver: rclone @@ -287,6 +308,7 @@ volumes: ``` Notice a few important details: + - YAML prefers `_` in option names instead of `-`. - YAML treats single and double quotes interchangeably. Simple strings and integers can be left unquoted. @@ -313,6 +335,7 @@ The plugin requires presence of two directories on the host before it can be installed. Note that plugin will **not** create them automatically. By default they must exist on host at the following locations (though you can tweak the paths): + - `/var/lib/docker-plugins/rclone/config` is reserved for the `rclone.conf` config file and **must** exist even if it's empty and the config file is not present. @@ -321,14 +344,16 @@ By default they must exist on host at the following locations You can [install managed plugin](https://docs.docker.com/engine/reference/commandline/plugin_install/) with default settings as follows: -``` + +```sh docker plugin install rclone/docker-volume-rclone:amd64 --grant-all-permissions --alias rclone ``` -The `:amd64` part of the image specification after colon is called a _tag_. +The `:amd64` part of the image specification after colon is called a *tag*. Usually you will want to install the latest plugin for your architecture. In this case the tag will just name it, like `amd64` above. 
The following plugin architectures are currently available:
+
 - `amd64`
 - `arm64`
 - `arm-v7`
 
@@ -362,7 +387,8 @@ mount namespaces and bind-mounts into requesting user containers.
 
 You can tweak a few plugin settings after installation when it's disabled
 (not in use), for instance:
-```
+
+```sh
 docker plugin disable rclone
 docker plugin set rclone RCLONE_VERBOSE=2 config=/etc/rclone args="--vfs-cache-mode=writes --allow-other"
 docker plugin enable rclone
@@ -377,10 +403,10 @@ plan in advance.
 
 You can tweak the following settings:
 `args`, `config`, `cache`, `HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY`
 and `RCLONE_VERBOSE`.
-It's _your_ task to keep plugin settings in sync across swarm cluster nodes.
+It's *your* task to keep plugin settings in sync across swarm cluster nodes.
 
 `args` sets command-line arguments for the `rclone serve docker` command
-(_none_ by default). Arguments should be separated by space so you will
+(*none* by default). Arguments should be separated by space so you will
 normally want to put them in quotes on the
 [docker plugin set](https://docs.docker.com/engine/reference/commandline/plugin_set/)
 command line. Both [serve docker flags](/commands/rclone_serve_docker/#options)
@@ -402,7 +428,7 @@ at the predefined path `/data/config`. For example, if your key file is
 named `sftp-box1.key` on the host, the corresponding volume config option
 should read `-o sftp-key-file=/data/config/sftp-box1.key`.
 
-`cache=/host/dir` sets alternative host location for the _cache_ directory.
+`cache=/host/dir` sets alternative host location for the *cache* directory.
 The plugin will keep VFS caches here. Also it will create and maintain
 the `docker-plugin.state` file in this directory. When the plugin is
 restarted or reinstalled, it will look in this file to recreate any volumes
@@ -415,13 +441,14 @@ failures, daemon restarts or host reboots.
 to `2` (debugging). Verbosity can be also tweaked via `args="-v [-v] ..."`.
 Since arguments are more generic, you will rarely need this setting.
 The plugin output by default feeds the docker daemon log on local host.
-Log entries are reflected as _errors_ in the docker log but retain their
+Log entries are reflected as *errors* in the docker log but retain their
 actual level assigned by rclone in the encapsulated message string.
 
 `HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY` customize the plugin proxy settings.
 
-You can set custom plugin options right when you install it, _in one go_:
-```
+You can set custom plugin options right when you install it, *in one go*:
+
+```sh
 docker plugin remove rclone
 docker plugin install rclone/docker-volume-rclone:amd64 \
        --alias rclone --grant-all-permissions \
@@ -435,7 +462,8 @@ The docker plugin volume protocol doesn't provide a way for plugins to
 inform the docker daemon that a volume is (un-)available. As a workaround
 you can setup a healthcheck to verify that the mount is responding,
 for example:
-```
+
+```yml
 services:
   my_service:
     image: my_image
@@ -456,8 +484,9 @@ systems. Proceed further only if you are on Linux.
 
 First, [install rclone](/install/).
 You can just run it (type `rclone serve docker` and hit enter) for the test.
 
-Install _FUSE_:
-```
+Install *FUSE*:
+
+```sh
 sudo apt-get -y install fuse
 ```
 
 Download two systemd configuration files:
 [docker-volume-rclone.service](https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.service)
 and [docker-volume-rclone.socket](https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.socket).
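+
+For example (a sketch using curl, though any download tool will do):
+
+```sh
+curl -LO https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.service
+curl -LO https://raw.githubusercontent.com/rclone/rclone/master/contrib/docker-plugin/systemd/docker-volume-rclone.socket
+```
+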
Put them to the `/etc/systemd/system/` directory:
-```
+
+```sh
 cp docker-volume-plugin.service /etc/systemd/system/
 cp docker-volume-plugin.socket /etc/systemd/system/
 ```
 
-Please note that all commands in this section must be run as _root_ but
+Please note that all commands in this section must be run as *root* but
 we omit `sudo` prefix for brevity.
 
 Now create directories required by the service:
-```
+
+```sh
 mkdir -p /var/lib/docker-volumes/rclone
 mkdir -p /var/lib/docker-plugins/rclone/config
 mkdir -p /var/lib/docker-plugins/rclone/cache
 ```
 
 Run the docker plugin service in the socket activated mode:
-```
+
+```sh
 systemctl daemon-reload
 systemctl start docker-volume-rclone.service
 systemctl enable docker-volume-rclone.socket
@@ -490,6 +522,7 @@ systemctl restart docker
 ```
 
 Or run the service directly:
+
 - run `systemctl daemon-reload` to let systemd pick up new config
 - run `systemctl enable docker-volume-rclone.service` to make the new
   service start automatically when you power on your machine.
@@ -506,39 +539,50 @@ prefer socket activation.
 
 You can [see managed plugin settings](https://docs.docker.com/engine/extend/#debugging-plugins)
 with
-```
+
+```sh
 docker plugin list
 docker plugin inspect rclone
 ```
+
 Note that docker (including latest 20.10.7) will not show actual values
 of `args`, just the defaults.
 
 Use `journalctl --unit docker` to see managed plugin output as part of
-the docker daemon log. Note that docker reflects plugin lines as _errors_
+the docker daemon log. Note that docker reflects plugin lines as *errors*
 but their actual level can be seen from encapsulated message string.
 
 You will usually install the latest version of managed plugin for your platform.
 Use the following commands to print the actual installed version:
-```
+
+```sh
 PLUGID=$(docker plugin list --no-trunc | awk '/rclone/{print$1}')
 sudo runc --root /run/docker/runtime-runc/plugins.moby exec $PLUGID rclone version
 ```
 
 You can even use `runc` to run shell inside the plugin container:
-```
+
+```sh
 sudo runc --root /run/docker/runtime-runc/plugins.moby exec --tty $PLUGID bash
 ```
 
 Also you can use curl to check the plugin socket connectivity:
-```
+
+```sh
 docker plugin list --no-trunc
 PLUGID=123abc...
 sudo curl -H Content-Type:application/json -XPOST -d {} --unix-socket /run/docker/plugins/$PLUGID/rclone.sock http://localhost/Plugin.Activate
 ```
+
 though this is rarely needed.
 
-If the plugin fails to work properly, and only as a last resort after you tried diagnosing with the above methods, you can try clearing the state of the plugin. **Note that all existing rclone docker volumes will probably have to be recreated.** This might be needed because a reinstall don't cleanup existing state files to allow for easy restoration, as stated above.
+If the plugin fails to work properly, and only as a last resort after you tried
+diagnosing with the above methods, you can try clearing the state of the plugin.
+**Note that all existing rclone docker volumes will probably have to be recreated.**
+This might be needed because a reinstall doesn't clean up existing state files,
+to allow for easy restoration, as stated above.
+ +```sh docker plugin disable rclone # disable the plugin to ensure no interference sudo rm /var/lib/docker-plugins/rclone/cache/docker-plugin.state # removing the plugin state docker plugin enable rclone # re-enable the plugin afterward @@ -546,20 +590,22 @@ docker plugin enable rclone # re-enable the plugin afterward ## Caveats -Finally I'd like to mention a _caveat with updating volume settings_. +Finally I'd like to mention a *caveat with updating volume settings*. Docker CLI does not have a dedicated command like `docker volume update`. It may be tempting to invoke `docker volume create` with updated options on existing volume, but there is a gotcha. The command will do nothing, it won't even return an error. I hope that docker maintainers will fix this some day. In the meantime be aware that you must remove your volume before recreating it with new settings: -``` + +```sh docker volume remove my_vol docker volume create my_vol -d rclone -o opt1=new_val1 ... ``` and verify that settings did update: -``` + +```sh docker volume list docker volume inspect my_vol ``` diff --git a/docs/content/doi.md b/docs/content/doi.md index edcd6db1b..75fb8fd7b 100644 --- a/docs/content/doi.md +++ b/docs/content/doi.md @@ -6,9 +6,11 @@ versionIntroduced: "?" # {{< icon "fa fa-building-columns" >}} DOI -The DOI remote is a read only remote for reading files from digital object identifiers (DOI). +The DOI remote is a read only remote for reading files from digital object +identifiers (DOI). Currently, the DOI backend supports DOIs hosted with: + - [InvenioRDM](https://inveniosoftware.org/products/rdm/) - [Zenodo](https://zenodo.org) - [CaltechDATA](https://data.caltech.edu) @@ -25,11 +27,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password diff --git a/docs/content/drive.md b/docs/content/drive.md index 6b1bb9994..c0f78ca6c 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -18,11 +18,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote r) Rename remote @@ -97,7 +99,7 @@ See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically +token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and it @@ -108,15 +110,21 @@ You can then use it like this, List directories in top level of your drive - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your drive - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to a drive directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Scopes @@ -168,9 +176,9 @@ directories. ### Root folder ID -This option has been moved to the advanced section. You can set the `root_folder_id` for rclone. 
This is the directory
-(identified by its `Folder ID`) that rclone considers to be the root
-of your drive.
+This option has been moved to the advanced section. You can set the
+`root_folder_id` for rclone. This is the directory (identified by its
+`Folder ID`) that rclone considers to be the root of your drive.
 
 Normally you will leave this blank and rclone will determine the
 correct root to use itself.
@@ -218,49 +226,51 @@ instead, or set the equivalent environment variable.
 
 Let's say that you are the administrator of a Google Workspace. The
 goal is to read or write data on an individual's Drive account, who IS
-a member of the domain. We'll call the domain **example.com**, and the
-user **foo@example.com**.
+a member of the domain. We'll call the domain <example.com>, and the
+user <foo@example.com>.
 
 There's a few steps we need to go through to accomplish this:
 
 ##### 1. Create a service account for example.com
 
-  - To create a service account and obtain its credentials, go to the
-[Google Developer Console](https://console.developers.google.com).
-  - You must have a project - create one if you don't and make sure you are on the selected project.
-  - Then go to "IAM & admin" -> "Service Accounts".
-  - Use the "Create Service Account" button. Fill in "Service account name"
-and "Service account ID" with something that identifies your client.
-  - Select "Create And Continue". Step 2 and 3 are optional.
-  - Click on the newly created service account
-  - Click "Keys" and then "Add Key" and then "Create new key"
-  - Choose type "JSON" and click create
-  - This will download a small JSON file that rclone will use for authentication.
+- To create a service account and obtain its credentials, go to the
+  [Google Developer Console](https://console.developers.google.com).
+- You must have a project - create one if you don't and make sure you are
+  on the selected project.
+- Then go to "IAM & admin" -> "Service Accounts".
+- Use the "Create Service Account" button. Fill in "Service account name"
+  and "Service account ID" with something that identifies your client.
+- Select "Create And Continue". Steps 2 and 3 are optional.
+- Click on the newly created service account
+- Click "Keys" and then "Add Key" and then "Create new key"
+- Choose type "JSON" and click create
+- This will download a small JSON file that rclone will use for authentication.
 
 If you ever need to remove access, press the "Delete service
 account key" button.
 
 ##### 2. Allowing API access to example.com Google Drive
 
-  - Go to example.com's [Workspace Admin Console](https://admin.google.com)
-  - Go into "Security" (or use the search bar)
-  - Select "Access and data control" and then "API controls"
-  - Click "Manage domain-wide delegation"
-  - Click "Add new"
-  - In the "Client ID" field enter the service account's
-"Client ID" - this can be found in the Developer Console under
-"IAM & Admin" -> "Service Accounts", then "View Client ID" for
-the newly created service account.
-It is a ~21 character numerical string.
-  - In the next field, "OAuth Scopes", enter
-`https://www.googleapis.com/auth/drive`
-to grant read/write access to Google Drive specifically.
-You can also use `https://www.googleapis.com/auth/drive.readonly` for read only access.
- - Click "Authorise" +- Go to example.com's [Workspace Admin Console](https://admin.google.com) +- Go into "Security" (or use the search bar) +- Select "Access and data control" and then "API controls" +- Click "Manage domain-wide delegation" +- Click "Add new" +- In the "Client ID" field enter the service account's + "Client ID" - this can be found in the Developer Console under + "IAM & Admin" -> "Service Accounts", then "View Client ID" for + the newly created service account. + It is a ~21 character numerical string. +- In the next field, "OAuth Scopes", enter + `https://www.googleapis.com/auth/drive` + to grant read/write access to Google Drive specifically. + You can also use `https://www.googleapis.com/auth/drive.readonly` for read + only access. +- Click "Authorise" ##### 3. Configure rclone, assuming a new install -``` +```sh rclone config n/s/q> n # New @@ -277,20 +287,23 @@ y/n> # Auto config, n ##### 4. Verify that it's working - - `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup` - - The arguments do: - - `-v` - verbose logging - - `--drive-impersonate foo@example.com` - this is what does +- `rclone -v --drive-impersonate foo@example.com lsf gdrive:backup` +- The arguments do: + - `-v` - verbose logging + - `--drive-impersonate foo@example.com` - this is what does the magic, pretending to be user foo. - - `lsf` - list files in a parsing friendly way - - `gdrive:backup` - use the remote called gdrive, work in + - `lsf` - list files in a parsing friendly way + - `gdrive:backup` - use the remote called gdrive, work in the folder named backup. -Note: in case you configured a specific root folder on gdrive and rclone is unable to access the contents of that folder when using `--drive-impersonate`, do this instead: - - in the gdrive web interface, share your root folder with the user/email of the new Service Account you created/selected at step 1 - - use rclone without specifying the `--drive-impersonate` option, like this: - `rclone -v lsf gdrive:backup` +Note: in case you configured a specific root folder on gdrive and rclone is +unable to access the contents of that folder when using `--drive-impersonate`, +do this instead: +- in the gdrive web interface, share your root folder with the user/email of the + new Service Account you created/selected at step 1 +- use rclone without specifying the `--drive-impersonate` option, like this: + `rclone -v lsf gdrive:backup` ### Shared drives (team drives) @@ -304,7 +317,7 @@ Drive ID if you prefer. For example: -``` +```text Configure this as a Shared Drive (Team Drive)? y) Yes n) No @@ -341,14 +354,18 @@ docs](/docs/#fast-list) for more details. It does this by combining multiple `list` calls into a single API request. This works by combining many `'%s' in parents` filters into one expression. -To list the contents of directories a, b and c, the following requests will be send by the regular `List` function: -``` +To list the contents of directories a, b and c, the following requests will be +send by the regular `List` function: + +```text trashed=false and 'a' in parents trashed=false and 'b' in parents trashed=false and 'c' in parents ``` + These can now be combined into a single request: -``` + +```text trashed=false and ('a' in parents or 'b' in parents or 'c' in parents) ``` @@ -357,7 +374,8 @@ It will use the `--checkers` value to specify the number of requests to run in In tests, these batch requests were up to 20x faster than the regular method. 
Running the following command against different sized folders gives:
 
-```
+
+```sh
 rclone lsjson -vv -R --checkers=6 gdrive:folder
 ```
 
@@ -396,8 +414,8 @@ revision of that file.
 
 Revisions follow the standard google policy which at time of writing
 was
 
-  * They are deleted after 30 days or 100 revisions (whatever comes first).
-  * They do not count towards a user storage quota.
+- They are deleted after 30 days or 100 revisions (whatever comes first).
+- They do not count towards a user storage quota.
 
 ### Deleting files
 
@@ -425,28 +443,40 @@ For shortcuts pointing to files:
 
 - When listing a file shortcut appears as the destination file.
 - When downloading the contents of the destination file is downloaded.
-- When updating shortcut file with a non shortcut file, the shortcut is removed then a new file is uploaded in place of the shortcut.
-- When server-side moving (renaming) the shortcut is renamed, not the destination file.
-- When server-side copying the shortcut is copied, not the contents of the shortcut. (unless `--drive-copy-shortcut-content` is in use in which case the contents of the shortcut gets copied).
+- When updating a shortcut file with a non shortcut file, the shortcut is removed
+  then a new file is uploaded in place of the shortcut.
+- When server-side moving (renaming) the shortcut is renamed, not the destination
+  file.
+- When server-side copying the shortcut is copied, not the contents of the shortcut.
+  (unless `--drive-copy-shortcut-content` is in use in which case the contents of
+  the shortcut get copied).
 - When deleting the shortcut is deleted not the linked file.
-- When setting the modification time, the modification time of the linked file will be set.
+- When setting the modification time, the modification time of the linked file
+  will be set.
 
 For shortcuts pointing to folders:
 
-- When listing the shortcut appears as a folder and that folder will contain the contents of the linked folder appear (including any sub folders)
+- When listing the shortcut appears as a folder and that folder will contain the
+  contents of the linked folder (including any sub folders)
 - When downloading the contents of the linked folder and sub contents are downloaded
 - When uploading to a shortcut folder the file will be placed in the linked folder
-- When server-side moving (renaming) the shortcut is renamed, not the destination folder
+- When server-side moving (renaming) the shortcut is renamed, not the destination
+  folder
 - When server-side copying the contents of the linked folder is copied, not the shortcut.
-- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not the linked folder.
-- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the linked folder will be deleted.
+- When deleting with `rclone rmdir` or `rclone purge` the shortcut is deleted not
+  the linked folder.
+- **NB** When deleting with `rclone remove` or `rclone mount` the contents of the
+  linked folder will be deleted.
 
-The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be used to create shortcuts.
+The [rclone backend](https://rclone.org/commands/rclone_backend/) command can be
+used to create shortcuts.
 
 Shortcuts can be completely ignored with the `--drive-skip-shortcuts` flag
 or the corresponding `skip_shortcuts` configuration setting.
 
-If you have shortcuts that lead to an infinite recursion in your drive (e.g. a shortcut pointing to a parent folder), `skip_shortcuts` might be mandatory to be able to copy the drive.
+If you have shortcuts that lead to an infinite recursion in your drive (e.g. a +shortcut pointing to a parent folder), `skip_shortcuts` might be mandatory to +be able to copy the drive. ### Emptying trash @@ -512,11 +542,12 @@ Here are some examples for allowed and prohibited conversions. This limitation can be disabled by specifying `--drive-allow-import-name-change`. When using this flag, rclone can convert multiple files types resulting in the same document type at once, e.g. with `--drive-import-formats docx,odt,txt`, -all files having these extension would result in a document represented as a docx file. +all files having these extension would result in a document represented as a +docx file. This brings the additional risk of overwriting a document, if multiple files have the same stem. Many rclone operations will not handle this name change in any way. They assume an equal name when copying files and might copy the -file again or delete them when the name changes. +file again or delete them when the name changes. Here are the possible export extensions with their corresponding mime types. Most of these can also be used for importing, but there more that are not diff --git a/docs/content/dropbox.md b/docs/content/dropbox.md index 01a6d04dc..71eccb7b4 100644 --- a/docs/content/dropbox.md +++ b/docs/content/dropbox.md @@ -19,11 +19,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text n) New remote d) Delete remote q) Quit config @@ -71,15 +73,21 @@ You can then use it like this, List directories in top level of your dropbox - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your dropbox - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to a dropbox directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Dropbox for business @@ -146,7 +154,9 @@ In this mode rclone will not use upload batching. This was the default before rclone v1.55. It has the disadvantage that it is very likely to encounter `too_many_requests` errors like this - NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. +```text +NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 15 seconds. +``` When rclone receives these it has to wait for 15s or sometimes 300s before continuing which really slows down transfers. @@ -215,7 +225,7 @@ Here are some examples of how extensions are mapped: | Paper template | mydoc.papert | mydoc.papert.html | | other | mydoc | mydoc.html | -_Importing_ exportable files is not yet supported by rclone. +*Importing* exportable files is not yet supported by rclone. Here are the supported export extensions known by rclone. Note that rclone does not currently support other formats not on this list, diff --git a/docs/content/fichier.md b/docs/content/fichier.md index b8f12e5e0..b65f3d916 100644 --- a/docs/content/fichier.md +++ b/docs/content/fichier.md @@ -16,16 +16,18 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. ## Configuration -The initial setup for 1Fichier involves getting the API key from the website which you -need to do in your browser. +The initial setup for 1Fichier involves getting the API key from the website +which you need to do in your browser. Here is an example of how to make a remote called `remote`. 
First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -66,15 +68,21 @@ Once configured you can then use `rclone` like this, List directories in top level of your 1Fichier account - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your 1Fichier account - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to a 1Fichier directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Modification times and hashes diff --git a/docs/content/filefabric.md b/docs/content/filefabric.md index ce98cd731..9435c6c8f 100644 --- a/docs/content/filefabric.md +++ b/docs/content/filefabric.md @@ -19,11 +19,13 @@ do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -91,15 +93,21 @@ Once configured you can then use `rclone` like this, List directories in top level of your Enterprise File Fabric - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your Enterprise File Fabric - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to an Enterprise File Fabric directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -124,7 +132,7 @@ upload an empty file as a single space with a mime type of `application/vnd.rclone.empty.file` and files with that mime type are treated as empty. -### Root folder ID ### +### Root folder ID You can set the `root_folder_id` for rclone. This is the directory (identified by its `Folder ID`) that rclone considers to be the root @@ -140,7 +148,7 @@ In order to do this you will have to find the `Folder ID` of the directory you wish rclone to display. These aren't displayed in the web interface, but you can use `rclone lsf` to find them, for example -``` +```sh $ rclone lsf --dirs-only -Fip --csv filefabric: 120673758,Burnt PDFs/ 120673759,My Quick Uploads/ diff --git a/docs/content/filelu.md b/docs/content/filelu.md index 766330e8f..f88c3aaa1 100644 --- a/docs/content/filelu.md +++ b/docs/content/filelu.md @@ -18,11 +18,13 @@ device. Here is an example of how to make a remote called `filelu`. First, run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -54,7 +56,7 @@ A path without an initial `/` will operate in the `Rclone` directory. A path with an initial `/` will operate at the root where you can see the `Rclone` directory. 
-``` +```sh $ rclone lsf TestFileLu:/ CCTV/ Camera/ @@ -70,55 +72,81 @@ Videos/ Create a new folder named `foldername` in the `Rclone` directory: - rclone mkdir filelu:foldername +```sh +rclone mkdir filelu:foldername +``` Delete a folder on FileLu: - rclone rmdir filelu:/folder/path/ +```sh +rclone rmdir filelu:/folder/path/ +``` Delete a file on FileLu: - rclone delete filelu:/hello.txt +```sh +rclone delete filelu:/hello.txt +``` List files from your FileLu account: - rclone ls filelu: +```sh +rclone ls filelu: +``` List all folders: - rclone lsd filelu: +```sh +rclone lsd filelu: +``` Copy a specific file to the FileLu root: - rclone copy D:\\hello.txt filelu: +```sh +rclone copy D:\\hello.txt filelu: +``` Copy files from a local directory to a FileLu directory: - rclone copy D:/local-folder filelu:/remote-folder/path/ - +```sh +rclone copy D:/local-folder filelu:/remote-folder/path/ +``` + Download a file from FileLu into a local directory: - rclone copy filelu:/file-path/hello.txt D:/local-folder +```sh +rclone copy filelu:/file-path/hello.txt D:/local-folder +``` Move files from a local directory to a FileLu directory: - rclone move D:\\local-folder filelu:/remote-path/ +```sh +rclone move D:\\local-folder filelu:/remote-path/ +``` Sync files from a local directory to a FileLu directory: - rclone sync --interactive D:/local-folder filelu:/remote-path/ - +```sh +rclone sync --interactive D:/local-folder filelu:/remote-path/ +``` + Mount remote to local Linux: - rclone mount filelu: /root/mnt --vfs-cache-mode full +```sh +rclone mount filelu: /root/mnt --vfs-cache-mode full +``` Mount remote to local Windows: - rclone mount filelu: D:/local_mnt --vfs-cache-mode full +```sh +rclone mount filelu: D:/local_mnt --vfs-cache-mode full +``` Get storage info about the FileLu account: - rclone about filelu: +```sh +rclone about filelu: +``` All the other rclone commands are supported by this backend. @@ -135,8 +163,8 @@ millions of files, duplicate folder names or paths are quite common. FileLu supports both modification times and MD5 hashes. -FileLu only supports filenames and folder names up to 255 characters in length, where a -character is a Unicode character. +FileLu only supports filenames and folder names up to 255 characters in length, +where a character is a Unicode character. ### Duplicated Files @@ -155,7 +183,7 @@ key. If you are connecting to your FileLu remote for the first time and encounter an error such as: -``` +```text Failed to create file system for "my-filelu-remote:": couldn't login: Invalid credentials ``` diff --git a/docs/content/filescom.md b/docs/content/filescom.md index 95acf3339..56d1a2e89 100644 --- a/docs/content/filescom.md +++ b/docs/content/filescom.md @@ -19,85 +19,97 @@ password. Alternatively, you can authenticate using an API Key from Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n +```text +No remotes found, make a new one? +n) New remote +s) Set configuration password +q) Quit config +n/s/q> n - Enter name for new remote. - name> remote +Enter name for new remote. +name> remote - Option Storage. - Type of storage to configure. - Choose a number from below, or type in your own value. - [snip] - XX / Files.com - \ "filescom" - [snip] - Storage> filescom +Option Storage. +Type of storage to configure. 
Choose a number from below, or type in your own value.
 [snip]
 XX / Files.com
    \ "filescom"
 [snip]
 Storage> filescom
 
 Option site.
 Your site subdomain (e.g. mysite) or custom domain (e.g. myfiles.customdomain.com)
 Enter a value. Press Enter to leave empty.
 site> mysite
 
 Option username.
 The username used to authenticate with Files.com.
 Enter a value. Press Enter to leave empty.
 username> user
 
 Option password.
 The password used to authenticate with Files.com.
 Choose an alternative below. Press Enter for the default (n).
 y) Yes, type in my own password
 g) Generate random password
 n) No, leave this optional password blank (default)
 y/g/n> y
 Enter the password:
 password:
 Confirm the password:
 password:
 
 Edit advanced config?
 y) Yes
 n) No (default)
 y/n> n
 
 Configuration complete.
 Options:
 - type: filescom
 - site: mysite
 - username: user
 - password: *** ENCRYPTED ***
 Keep this "remote" remote?
 y) Yes this is OK (default)
 e) Edit this remote
 d) Delete this remote
 y/e/d> y
 ```
 
 Once configured you can use rclone.
 
 See all files in the top level:
 
-    rclone lsf remote:
+```sh
+rclone lsf remote:
+```
 
 Make a new directory in the root:
 
-    rclone mkdir remote:dir
+```sh
+rclone mkdir remote:dir
+```
 
 Recursively List the contents:
 
-    rclone ls remote:
+```sh
+rclone ls remote:
+```
 
 Sync `/home/local/directory` to the remote directory, deleting any
 excess files in the directory.
 
-    rclone sync --interactive /home/local/directory remote:dir
+```sh
+rclone sync --interactive /home/local/directory remote:dir
+```
 
 ### Hashes
diff --git a/docs/content/ftp.md b/docs/content/ftp.md
index 195597940..638ec2f9e 100644
--- a/docs/content/ftp.md
+++ b/docs/content/ftp.md
@@ -20,14 +20,16 @@ a `/` it is relative to the home directory of the user. An empty path
 
 To create an FTP configuration named `remote`, run
 
-    rclone config
+```sh
+rclone config
+```
 
 Rclone config guides you through an interactive setup process. A minimal
 rclone FTP remote definition only requires host, username and password.
 For an anonymous FTP server, see [below](#anonymous-ftp).
 
-```
-No remotes found, make a new one?
+```text
+No remotes found, make a new one?
n) New remote r) Rename remote c) Copy remote @@ -86,20 +88,28 @@ y/e/d> y To see all directories in the home directory of `remote` - rclone lsd remote: +```sh +rclone lsd remote: +``` Make a new directory - rclone mkdir remote:path/to/directory +```sh +rclone mkdir remote:path/to/directory +``` List the contents of a directory - rclone ls remote:path/to/directory +```sh +rclone ls remote:path/to/directory +``` Sync `/home/local/directory` to the remote directory, deleting any excess files in the directory. - rclone sync --interactive /home/local/directory remote:directory +```sh +rclone sync --interactive /home/local/directory remote:directory +``` ### Anonymous FTP @@ -114,8 +124,10 @@ Using [on-the-fly](#backend-path-to-dir) or such servers, without requiring any configuration in advance. The following are examples of that: - rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy) - rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy): +```sh +rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=$(rclone obscure dummy) +rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=$(rclone obscure dummy): +``` The above examples work in Linux shells and in PowerShell, but not Windows Command Prompt. They execute the [rclone obscure](/commands/rclone_obscure/) @@ -124,8 +136,10 @@ command to create a password string in the format required by the an already obscured string representation of the same password "dummy", and therefore works even in Windows Command Prompt: - rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM - rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM: +```sh +rclone lsf :ftp: --ftp-host=speedtest.tele2.net --ftp-user=anonymous --ftp-pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM +rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47Ji1rHTmxM: +``` ### Implicit TLS @@ -139,7 +153,7 @@ can be set with [`--ftp-port`](#ftp-port). TLS options for Implicit and Explicit TLS can be set using the following flags which are specific to the FTP backend: -``` +```text --ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-disable-tls13 Disable TLS 1.3 (workaround for FTP servers with buggy TLS) --ftp-tls-cache-size int Size of TLS session cache for all control and data connections (default 32) @@ -147,7 +161,7 @@ following flags which are specific to the FTP backend: However any of the global TLS flags can also be used such as: -``` +```text --ca-cert stringArray CA certificate used to verify servers --client-cert string Client SSL certificate (PEM) for mutual TLS auth --client-key string Client SSL private key (PEM) for mutual TLS auth @@ -157,7 +171,7 @@ However any of the global TLS flags can also be used such as: If these need to be put in the config file so they apply to just the FTP backend then use the `override` syntax, eg -``` +```text override.ca_cert = XXX override.client_cert = XXX override.client_key = XXX diff --git a/docs/content/gofile.md b/docs/content/gofile.md index 45fe18f7c..0334932f0 100644 --- a/docs/content/gofile.md +++ b/docs/content/gofile.md @@ -21,11 +21,13 @@ premium account. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? 
n) New remote s) Set configuration password @@ -68,11 +70,15 @@ Once configured you can then use `rclone` like this, List directories and files in the top level of your Gofile - rclone lsf remote: +```sh +rclone lsf remote: +``` To copy a local directory to an Gofile directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -97,7 +103,6 @@ the following characters are also replaced: | \ | 0x5C | \ | | \| | 0x7C | | | - File names can also not start or end with the following characters. These only get replaced if they are the first or last character in the name: @@ -134,7 +139,7 @@ directory you wish rclone to display. You can do this with rclone -``` +```sh $ rclone lsf -Fip --dirs-only remote: d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/ f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/ @@ -143,7 +148,7 @@ d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/ The ID to use is the part before the `;` so you could set -``` +```text root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0 ``` diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md index 39c828e0e..c403a1d2e 100644 --- a/docs/content/googlecloudstorage.md +++ b/docs/content/googlecloudstorage.md @@ -11,17 +11,19 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. ## Configuration -The initial setup for google cloud storage involves getting a token from Google Cloud Storage -which you need to do in your browser. `rclone config` walks you +The initial setup for google cloud storage involves getting a token from Google +Cloud Storage which you need to do in your browser. `rclone config` walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text n) New remote d) Delete remote q) Quit config @@ -148,7 +150,7 @@ See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically +token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this @@ -159,20 +161,28 @@ This remote is called `remote` and can now be used like this See all the buckets in your project - rclone lsd remote: +```sh +rclone lsd remote: +``` Make a new bucket - rclone mkdir remote:bucket +```sh +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket +```sh +rclone ls remote:bucket +``` Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket. - rclone sync --interactive /home/local/directory remote:bucket +```sh +rclone sync --interactive /home/local/directory remote:bucket +``` ### Service Account support @@ -203,52 +213,67 @@ environment variable. ### Service Account Authentication with Access Tokens -Another option for service account authentication is to use access tokens via *gcloud impersonate-service-account*. Access tokens protect security by avoiding the use of the JSON -key file, which can be breached. They also bypass oauth login flow, which is simpler -on remote VMs that lack a web browser. 
+Another option for service account authentication is to use access tokens via
+*gcloud impersonate-service-account*. Access tokens improve security by avoiding
+the use of the JSON key file, which can be leaked. They also bypass the OAuth
+login flow, which is simpler on remote VMs that lack a web browser.

-If you already have a working service account, skip to step 3.
+If you already have a working service account, skip to step 3.

-#### 1. Create a service account using
+#### 1. Create a service account

-    gcloud iam service-accounts create gcs-read-only
+```sh
+gcloud iam service-accounts create gcs-read-only
+```

You can re-use an existing service account as well (like the one created above)

-#### 2. Attach a Viewer (read-only) or User (read-write) role to the service account
-    $ PROJECT_ID=my-project
-    $ gcloud --verbose iam service-accounts add-iam-policy-binding \
-      gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
-      --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
-      --role=roles/storage.objectViewer
+#### 2. Attach a Viewer (read-only) or User (read-write) role to the service account

-Use the Google Cloud console to identify a limited role. Some relevant pre-defined roles:
+```sh
+$ PROJECT_ID=my-project
+$ gcloud --verbose iam service-accounts add-iam-policy-binding \
+  gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+  --member=serviceAccount:gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com \
+  --role=roles/storage.objectViewer
+```

-* *roles/storage.objectUser* -- read-write access but no admin privileges
-* *roles/storage.objectViewer* -- read-only access to objects
-* *roles/storage.admin* -- create buckets & administrative roles
+Use the Google Cloud console to identify a limited role. Some relevant
+pre-defined roles:
+
+- *roles/storage.objectUser* -- read-write access but no admin privileges
+- *roles/storage.objectViewer* -- read-only access to objects
+- *roles/storage.admin* -- create buckets & administrative roles

#### 3. Get a temporary access key for the service account

-    $ gcloud auth application-default print-access-token \
-      --impersonate-service-account \
-      gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com
+```sh
+$ gcloud auth application-default print-access-token \
+  --impersonate-service-account \
+  gcs-read-only@${PROJECT_ID}.iam.gserviceaccount.com

-    ya29.c.c0ASRK0GbAFEewXD [truncated]
+ya29.c.c0ASRK0GbAFEewXD [truncated]
+```

#### 4. Update `access_token` setting

-hit `CTRL-C` when you see *waiting for code*. This will save the config without doing oauth flow
-    rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
+Hit `CTRL-C` when you see *waiting for code*. This will save the config without
+doing the OAuth flow.
+
+```sh
+rclone config update ${REMOTE_NAME} access_token ya29.c.c0Axxxx
+```

#### 5. 
Run rclone as usual - rclone ls dev-gcs:${MY_BUCKET}/ +```sh +rclone ls dev-gcs:${MY_BUCKET}/ +``` ### More Info on Service Accounts -* [Official GCS Docs](https://cloud.google.com/compute/docs/access/service-accounts) -* [Guide on Service Accounts using Key Files (less secure, but similar concepts)](https://forum.rclone.org/t/access-using-google-service-account/24822/2) +- [Official GCS Docs](https://cloud.google.com/compute/docs/access/service-accounts) +- [Guide on Service Accounts using Key Files (less secure, but similar concepts)](https://forum.rclone.org/t/access-using-google-service-account/24822/2) ### Anonymous Access @@ -299,13 +324,16 @@ Note that the last of these is for setting custom metadata in the form ### Modification times Google Cloud Storage stores md5sum natively. -Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores modification time -with one-second precision as `goog-reserved-file-mtime` in file metadata. +Google's [gsutil](https://cloud.google.com/storage/docs/gsutil) tool stores +modification time with one-second precision as `goog-reserved-file-mtime` in +file metadata. -To ensure compatibility with gsutil, rclone stores modification time in 2 separate metadata entries. -`mtime` uses RFC3339 format with one-nanosecond precision. -`goog-reserved-file-mtime` uses Unix timestamp format with one-second precision. -To get modification time from object metadata, rclone reads the metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object updated time. +To ensure compatibility with gsutil, rclone stores modification time in 2 +separate metadata entries. `mtime` uses RFC3339 format with one-nanosecond +precision. `goog-reserved-file-mtime` uses Unix timestamp format with one-second +precision. To get modification time from object metadata, rclone reads the +metadata in the following order: `mtime`, `goog-reserved-file-mtime`, object +updated time. Note that rclone's default modify window is 1ns. Files uploaded by gsutil only contain timestamps with one-second precision. diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md index 0555e4e1f..102f2d039 100644 --- a/docs/content/googlephotos.md +++ b/docs/content/googlephotos.md @@ -27,11 +27,13 @@ through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -98,7 +100,7 @@ See the [remote setup docs](/remote_setup/) for how to set it up on a machine with no Internet browser available. Note that rclone runs a webserver on your local machine to collect the -token as returned from Google if using web browser to automatically +token as returned from Google if using web browser to automatically authenticate. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on `http://127.0.0.1:53682/` and this @@ -109,20 +111,28 @@ This remote is called `remote` and can now be used like this See all the albums in your photos - rclone lsd remote:album +```sh +rclone lsd remote:album +``` Make a new album - rclone mkdir remote:album/newAlbum +```sh +rclone mkdir remote:album/newAlbum +``` List the contents of an album - rclone ls remote:album/newAlbum +```sh +rclone ls remote:album/newAlbum +``` Sync `/home/local/images` to the Google Photos, removing any excess files in the album. 
- rclone sync --interactive /home/local/image remote:album/newAlbum +```sh +rclone sync --interactive /home/local/image remote:album/newAlbum +``` ### Layout @@ -139,7 +149,7 @@ Note that all your photos and videos will appear somewhere under `media`, but they may not appear under `album` unless you've put them into albums. -``` +```text / - upload - file1.jpg @@ -203,11 +213,13 @@ may create new directories (albums) under `album`. If you copy files with a directory hierarchy in there then rclone will create albums with the `/` character in them. For example if you do - rclone copy /path/to/images remote:album/images +```sh +rclone copy /path/to/images remote:album/images +``` and the images directory contains -``` +```text images - file1.jpg dir @@ -220,11 +232,11 @@ images Then rclone will create the following albums with the following files in - images - - file1.jpg + - file1.jpg - images/dir - - file2.jpg + - file2.jpg - images/dir2/dir3 - - file3.jpg + - file3.jpg This means that you can use the `album` path pretty much like a normal filesystem and it is a good target for repeated syncing. diff --git a/docs/content/hasher.md b/docs/content/hasher.md index 63b89875b..2d483d67f 100644 --- a/docs/content/hasher.md +++ b/docs/content/hasher.md @@ -9,6 +9,7 @@ status: Experimental Hasher is a special overlay backend to create remotes which handle checksums for other remotes. It's main functions include: + - Emulate hash types unimplemented by backends - Cache checksums to help with slow hashing of large local or (S)FTP files - Warm up checksum cache from external SUM files @@ -29,8 +30,9 @@ Now proceed to interactive or manual configuration. ### Interactive configuration Run `rclone config`: -``` -No remotes found, make a new one? + +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config @@ -76,7 +78,7 @@ usually `YOURHOME/.config/rclone/rclone.conf`. Open it in your favorite text editor, find section for the base remote and create new section for hasher like in the following examples: -``` +```ini [Hasher1] type = hasher remote = myRemote:path @@ -91,12 +93,13 @@ max_age = 24h ``` Hasher takes basically the following parameters: -- `remote` is required, + +- `remote` is required - `hashes` is a comma separated list of supported checksums - (by default `md5,sha1`), -- `max_age` - maximum time to keep a checksum value in the cache, - `0` will disable caching completely, - `off` will cache "forever" (that is until the files get changed). + (by default `md5,sha1`) +- `max_age` - maximum time to keep a checksum value in the cache + `0` will disable caching completely + `off` will cache "forever" (that is until the files get changed) Make sure the `remote` has `:` (colon) in. If you specify the remote without a colon then rclone will use a local directory of that name. So if you use @@ -111,7 +114,8 @@ If you use `remote = name` literally then rclone will put files Now you can use it as `Hasher2:subdir/file` instead of base remote. Hasher will transparently update cache with new checksums when a file is fully read or overwritten, like: -``` + +```sh rclone copy External:path/file Hasher:dest/path rclone cat Hasher:path/to/file > /dev/null @@ -121,14 +125,16 @@ The way to refresh **all** cached checksums (even unsupported by the base backen for a subtree is to **re-download** all files in the subtree. 
For example, use `hashsum --download` using **any** supported hashsum on the command line (we just care to re-read): -``` + +```sh rclone hashsum MD5 --download Hasher:path/to/subtree > /dev/null rclone backend dump Hasher:path/to/subtree ``` You can print or drop hashsum cache using custom backend commands: -``` + +```sh rclone backend dump Hasher:dir/subdir rclone backend drop Hasher: @@ -139,7 +145,7 @@ rclone backend drop Hasher: Hasher supports two backend commands: generic SUM file `import` and faster but less consistent `stickyimport`. -``` +```sh rclone backend import Hasher:dir/subdir SHA1 /path/to/SHA1SUM [--checkers 4] ``` @@ -148,6 +154,7 @@ can point to either a local or an `other-remote:path` text file in SUM format. The command will parse the SUM file, then walk down the path given by the first argument, snapshot current fingerprints and fill in the cache entries correspondingly. + - Paths in the SUM file are treated as relative to `hasher:dir/subdir`. - The command will **not** check that supplied values are correct. You **must know** what you are doing. @@ -158,7 +165,7 @@ correspondingly. `--checkers` to make it faster. Or use `stickyimport` if you don't care about fingerprints and consistency. -``` +```sh rclone backend stickyimport hasher:path/to/data sha1 remote:/path/to/sum.sha1 ``` diff --git a/docs/content/hdfs.md b/docs/content/hdfs.md index b0f5453b4..5b5714a15 100644 --- a/docs/content/hdfs.md +++ b/docs/content/hdfs.md @@ -6,8 +6,9 @@ versionIntroduced: "v1.54" # {{< icon "fa fa-globe" >}} HDFS -[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) is a -distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) framework. +[HDFS](https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) +is a distributed file-system, part of the [Apache Hadoop](https://hadoop.apache.org/) +framework. Paths are specified as `remote:` or `remote:path/to/dir`. @@ -15,11 +16,13 @@ Paths are specified as `remote:` or `remote:path/to/dir`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -83,15 +86,21 @@ This remote is called `remote` and can now be used like this See all the top level directories - rclone lsd remote: +```sh +rclone lsd remote: +``` List the contents of a directory - rclone ls remote:directory +```sh +rclone ls remote:directory +``` Sync the remote `directory` to `/home/local/directory`, deleting any excess files. - rclone sync --interactive remote:directory /home/local/directory +```sh +rclone sync --interactive remote:directory /home/local/directory +``` ### Setting up your own HDFS instance for testing @@ -100,7 +109,7 @@ or use the docker image from the tests: If you want to build the docker image -``` +```sh git clone https://github.com/rclone/rclone.git cd rclone/fstest/testserver/images/test-hdfs docker build --rm -t rclone/test-hdfs . @@ -108,7 +117,7 @@ docker build --rm -t rclone/test-hdfs . 
Or you can just use the latest one pushed -``` +```sh docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:8020 --hostname "rclone-hdfs" rclone/test-hdfs ``` @@ -116,15 +125,15 @@ docker run --rm --name "rclone-hdfs" -p 127.0.0.1:9866:9866 -p 127.0.0.1:8020:80 For this docker image the remote needs to be configured like this: -``` +```ini [remote] type = hdfs namenode = 127.0.0.1:8020 username = root ``` -You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use volumes, so all data -uploaded will be lost.) +You can stop this image with `docker kill rclone-hdfs` (**NB** it does not use +volumes, so all data uploaded will be lost.) ### Modification times @@ -136,7 +145,8 @@ No checksums are implemented. ### Usage information -You can use the `rclone about remote:` command which will display filesystem size and current usage. +You can use the `rclone about remote:` command which will display filesystem +size and current usage. ### Restricted filename characters diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md index af38e71d1..8e990e93e 100644 --- a/docs/content/hidrive.md +++ b/docs/content/hidrive.md @@ -18,11 +18,13 @@ which you need to do in your browser. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found - make a new one n) New remote s) Set configuration password @@ -83,34 +85,42 @@ Once configured you can then use `rclone` like this, List directories in top level of your HiDrive root folder - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your HiDrive filesystem - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to a HiDrive directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Keeping your tokens safe -Any OAuth-tokens will be stored by rclone in the remote's configuration file as unencrypted text. -Anyone can use a valid refresh-token to access your HiDrive filesystem without knowing your password. -Therefore you should make sure no one else can access your configuration. +Any OAuth-tokens will be stored by rclone in the remote's configuration file as +unencrypted text. Anyone can use a valid refresh-token to access your HiDrive +filesystem without knowing your password. Therefore you should make sure no one +else can access your configuration. It is possible to encrypt rclone's configuration file. -You can find information on securing your configuration file by viewing the [configuration encryption docs](/docs/#configuration-encryption). +You can find information on securing your configuration file by viewing the +[configuration encryption docs](/docs/#configuration-encryption). ### Invalid refresh token -As can be verified [here](https://developer.hidrive.com/basics-flows/), +As can be verified on [HiDrive's OAuth guide](https://developer.hidrive.com/basics-flows/), each `refresh_token` (for Native Applications) is valid for 60 days. If used to access HiDrivei, its validity will be automatically extended. This means that if you - * Don't use the HiDrive remote for 60 days +- Don't use the HiDrive remote for 60 days then rclone will return an error which includes a text that implies the refresh token is *invalid* or *expired*. 
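A quick way to check whether the stored token is still valid is any cheap
listing call, run with verbose output so an authorization failure like the one
described above is visible (remote name `remote` assumed, as in the examples
above):

```sh
rclone lsd remote: -vv
```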
@@ -119,7 +129,9 @@ To fix this you will need to authorize rclone to access your HiDrive account aga Using - rclone config reconnect remote: +```sh +rclone config reconnect remote: +``` the process is very similar to the process of initial setup exemplified before. @@ -141,7 +153,7 @@ Therefore rclone will automatically replace these characters, if files or folders are stored or accessed with such names. You can read about how this filename encoding works in general -[here](overview/#restricted-filenames). +in the [main docs](/overview/#restricted-filenames). Keep in mind that HiDrive only supports file or folder names with a length of 255 characters or less. @@ -157,9 +169,9 @@ so you may want to restrict this behaviour on systems with limited resources. You can customize this behaviour using the following options: -* `chunk_size`: size of file parts -* `upload_cutoff`: files larger or equal to this in size will use a chunked transfer -* `upload_concurrency`: number of file-parts to upload at the same time +- `chunk_size`: size of file parts +- `upload_cutoff`: files larger or equal to this in size will use a chunked transfer +- `upload_concurrency`: number of file-parts to upload at the same time See the below section about configuration options for more details. @@ -176,9 +188,10 @@ This works by prepending the contents of the `root_prefix` option to any paths accessed by rclone. For example, the following two ways to access the home directory are equivalent: - rclone lsd --hidrive-root-prefix="/users/test/" remote:path - - rclone lsd remote:/users/test/path +```sh +rclone lsd --hidrive-root-prefix="/users/test/" remote:path +rclone lsd remote:/users/test/path +``` See the below section about configuration options for more details. @@ -187,10 +200,10 @@ See the below section about configuration options for more details. By default, rclone will know the number of directory members contained in a directory. For example, `rclone lsd` uses this information. -The acquisition of this information will result in additional time costs for HiDrive's API. -When dealing with large directory structures, it may be desirable to circumvent this time cost, -especially when this information is not explicitly needed. -For this, the `disable_fetching_member_count` option can be used. +The acquisition of this information will result in additional time costs for +HiDrive's API. When dealing with large directory structures, it may be +desirable to circumvent this time cost, especially when this information is not +explicitly needed. For this, the `disable_fetching_member_count` option can be used. See the below section about configuration options for more details. diff --git a/docs/content/http.md b/docs/content/http.md index 4266ddf7c..2f6aee587 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -39,11 +39,13 @@ To just download a single file it is easier to use Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -92,15 +94,21 @@ This remote is called `remote` and can now be used like this See all the top level directories - rclone lsd remote: +```sh +rclone lsd remote: +``` List the contents of a directory - rclone ls remote:directory +```sh +rclone ls remote:directory +``` Sync the remote `directory` to `/home/local/directory`, deleting any excess files. 
- rclone sync --interactive remote:directory /home/local/directory +```sh +rclone sync --interactive remote:directory /home/local/directory +``` ### Read only @@ -119,11 +127,15 @@ No checksums are stored. Since the http remote only has one config parameter it is easy to use without a config file: - rclone lsd --http-url https://beta.rclone.org :http: +```sh +rclone lsd --http-url https://beta.rclone.org :http: +``` or: - rclone lsd :http,url='https://beta.rclone.org': +```sh +rclone lsd :http,url='https://beta.rclone.org': +``` {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/http/http.go then run make backenddocs" >}} ### Standard options diff --git a/docs/content/iclouddrive.md b/docs/content/iclouddrive.md index 299cb064a..d3e60f336 100644 --- a/docs/content/iclouddrive.md +++ b/docs/content/iclouddrive.md @@ -7,22 +7,28 @@ status: Beta # {{< icon "fa fa-cloud" >}} iCloud Drive - ## Configuration -The initial setup for an iCloud Drive backend involves getting a trust token/session. This can be done by simply using the regular iCloud password, and accepting the code prompt on another iCloud connected device. +The initial setup for an iCloud Drive backend involves getting a trust token/session. +This can be done by simply using the regular iCloud password, and accepting the code +prompt on another iCloud connected device. -**IMPORTANT**: At the moment an app specific password won't be accepted. Only use your regular password and 2FA. +**IMPORTANT**: At the moment an app specific password won't be accepted. Only +use your regular password and 2FA. -`rclone config` walks you through the token creation. The trust token is valid for 30 days. After which you will have to reauthenticate with `rclone reconnect` or `rclone config`. +`rclone config` walks you through the token creation. The trust token is valid +for 30 days. After which you will have to reauthenticate with `rclone reconnect` +or `rclone config`. Here is an example of how to make a remote called `iclouddrive`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -78,19 +84,26 @@ y/e/d> y ADP is currently unsupported and need to be disabled -On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' must be ON, and 'Advanced Data Protection' OFF. +On iPhone, Settings `>` Apple Account `>` iCloud `>` 'Access iCloud Data on the Web' +must be ON, and 'Advanced Data Protection' OFF. ## Troubleshooting ### Missing PCS cookies from the request -This means you have Advanced Data Protection (ADP) turned on. This is not supported at the moment. If you want to use rclone you will have to turn it off. See above for how to turn it off. +This means you have Advanced Data Protection (ADP) turned on. This is not supported +at the moment. If you want to use rclone you will have to turn it off. See above +for how to turn it off. -You will need to clear the `cookies` and the `trust_token` fields in the config. Or you can delete the remote config and start again. +You will need to clear the `cookies` and the `trust_token` fields in the config. +Or you can delete the remote config and start again. You should then run `rclone reconnect remote:`. 
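As a sketch, assuming the remote from the example above is named `iclouddrive`
(`rclone config reconnect` is the long form of the reconnect command):

```sh
# Clear the cached session fields so the next login starts fresh.
rclone config update iclouddrive cookies "" trust_token ""

# Re-authenticate; this triggers a fresh 2FA prompt on your Apple devices.
rclone config reconnect iclouddrive:
```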
-Note that changing the ADP setting may not take effect immediately - you may need to wait a few hours or a day before you can get rclone to work - keep clearing the config entry and running `rclone reconnect remote:` until rclone functions properly. +Note that changing the ADP setting may not take effect immediately - you may +need to wait a few hours or a day before you can get rclone to work - keep +clearing the config entry and running `rclone reconnect remote:` until rclone +functions properly. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/iclouddrive/iclouddrive.go then run make backenddocs" >}} ### Standard options diff --git a/docs/content/imagekit.md b/docs/content/imagekit.md index e4ecb7e34..ffa4f39a9 100644 --- a/docs/content/imagekit.md +++ b/docs/content/imagekit.md @@ -2,18 +2,19 @@ title: "ImageKit" description: "Rclone docs for ImageKit backend." versionIntroduced: "v1.63" - --- + # {{< icon "fa fa-cloud" >}} ImageKit + This is a backend for the [ImageKit.io](https://imagekit.io/) storage service. -#### About ImageKit -[ImageKit.io](https://imagekit.io/) provides real-time image and video optimizations, transformations, and CDN delivery. Over 1,000 businesses and 70,000 developers trust ImageKit with their images and videos on the web. +[ImageKit.io](https://imagekit.io/) provides real-time image and video +optimizations, transformations, and CDN delivery. Over 1,000 businesses +and 70,000 developers trust ImageKit with their images and videos on the web. - -#### Accounts & Pricing - -To use this backend, you need to [create an account](https://imagekit.io/registration/) on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans). +To use this backend, you need to [create an account](https://imagekit.io/registration/) +on ImageKit. Start with a free plan with generous usage limits. Then, as your requirements +grow, upgrade to a plan that best fits your needs. See [the pricing details](https://imagekit.io/plans). ## Configuration @@ -21,16 +22,18 @@ Here is an example of making an imagekit configuration. Firstly create a [ImageKit.io](https://imagekit.io/) account and choose a plan. -You will need to log in and get the `publicKey` and `privateKey` for your account from the developer section. +You will need to log in and get the `publicKey` and `privateKey` for your account +from the developer section. Now run -``` + +```sh rclone config ``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -82,20 +85,26 @@ e) Edit this remote d) Delete this remote y/e/d> y ``` + List directories in the top level of your Media Library -``` + +```sh rclone lsd imagekit-media-library: ``` + Make a new directory. -``` + +```sh rclone mkdir imagekit-media-library:directory ``` + List the contents of a directory. -``` + +```sh rclone ls imagekit-media-library:directory ``` -### Modified time and hashes +### Modified time and hashes ImageKit does not support modification times or hashes yet. 
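In practice this means rclone cannot compare files by modification time or
checksum; a size-only sync is one workaround. A minimal sketch, reusing the
`imagekit-media-library` remote from the example above:

```sh
# Compare by size alone, since neither modtime nor hashes are available.
rclone sync --size-only /path/to/images imagekit-media-library:directory
```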
diff --git a/docs/content/internetarchive.md b/docs/content/internetarchive.md index a165c90a8..cd01aa947 100644 --- a/docs/content/internetarchive.md +++ b/docs/content/internetarchive.md @@ -8,7 +8,8 @@ versionIntroduced: "v1.59" The Internet Archive backend utilizes Items on [archive.org](https://archive.org/) -Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) for the API this backend uses. +Refer to [IAS3 API documentation](https://archive.org/services/docs/api/ias3.html) +for the API this backend uses. Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in too, e.g. `remote:item/path/to/dir`. @@ -19,31 +20,47 @@ Once you have made a remote, you can use it like this: Make a new item - rclone mkdir remote:item +```sh +rclone mkdir remote:item +``` List the contents of a item - rclone ls remote:item +```sh +rclone ls remote:item +``` Sync `/home/local/directory` to the remote item, deleting any excess files in the item. - rclone sync --interactive /home/local/directory remote:item +```sh +rclone sync --interactive /home/local/directory remote:item +``` ## Notes -Because of Internet Archive's architecture, it enqueues write operations (and extra post-processings) in a per-item queue. You can check item's queue at https://catalogd.archive.org/history/item-name-here . Because of that, all uploads/deletes will not show up immediately and takes some time to be available. -The per-item queue is enqueued to an another queue, Item Deriver Queue. [You can check the status of Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) This queue has a limit, and it may block you from uploading, or even deleting. You should avoid uploading a lot of small files for better behavior. -You can optionally wait for the server's processing to finish, by setting non-zero value to `wait_archive` key. -By making it wait, rclone can do normal file comparison. -Make sure to set a large enough value (e.g. `30m0s` for smaller files) as it can take a long time depending on server's queue. +Because of Internet Archive's architecture, it enqueues write operations (and +extra post-processings) in a per-item queue. You can check item's queue at +. Because of that, all +uploads/deletes will not show up immediately and takes some time to be available. +The per-item queue is enqueued to an another queue, Item Deriver Queue. +[You can check the status of Item Deriver Queue here.](https://catalogd.archive.org/catalog.php?whereami=1) +This queue has a limit, and it may block you from uploading, or even deleting. +You should avoid uploading a lot of small files for better behavior. + +You can optionally wait for the server's processing to finish, by setting +non-zero value to `wait_archive` key. By making it wait, rclone can do normal +file comparison. Make sure to set a large enough value (e.g. `30m0s` for smaller +files) as it can take a long time depending on server's queue. ## About metadata + This backend supports setting, updating and reading metadata of each file. The metadata will appear as file metadata on Internet Archive. However, some fields are reserved by both Internet Archive and rclone. The following are reserved by Internet Archive: + - `name` - `source` - `size` @@ -56,9 +73,11 @@ The following are reserved by Internet Archive: - `summation` Trying to set values to these keys is ignored with a warning. -Only setting `mtime` is an exception. Doing so make it the identical behavior as setting ModTime. 
+Only setting `mtime` is an exception. Doing so make it the identical +behavior as setting ModTime. -rclone reserves all the keys starting with `rclone-`. Setting value for these keys will give you warnings, but values are set according to request. +rclone reserves all the keys starting with `rclone-`. Setting value for +these keys will give you warnings, but values are set according to request. If there are multiple values for a key, only the first one is returned. This is a limitation of rclone, that supports one value per one key. @@ -76,7 +95,9 @@ changeable, as they are created by the Internet Archive automatically. These auto-created files can be excluded from the sync using [metadata filtering](/filtering/#metadata). - rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata" +```sh +rclone sync ... --metadata-exclude "source=metadata" --metadata-exclude "format=Metadata" +``` Which excludes from the sync any files which have the `source=metadata` or `format=Metadata` flags which are added to @@ -89,12 +110,14 @@ Most applies to the other providers as well, any differences are described [belo First run - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process. -``` -No remotes found, make a new one? +```text +No remotes found, make a new one\? n) New remote s) Set configuration password q) Quit config diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md index 9cdcc6cfd..e4dab510b 100644 --- a/docs/content/jottacloud.md +++ b/docs/content/jottacloud.md @@ -6,25 +6,27 @@ versionIntroduced: "v1.43" # {{< icon "fa fa-cloud" >}} Jottacloud -Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters -in Norway. In addition to the official service at [jottacloud.com](https://www.jottacloud.com/), -it also provides white-label solutions to different companies, such as: -* Telia - * Telia Cloud (cloud.telia.se) - * Telia Sky (sky.telia.no) -* Tele2 - * Tele2 Cloud (mittcloud.tele2.se) -* Onlime - * Onlime Cloud Storage (onlime.dk) -* Elkjøp (with subsidiaries): - * Elkjøp Cloud (cloud.elkjop.no) - * Elgiganten Sweden (cloud.elgiganten.se) - * Elgiganten Denmark (cloud.elgiganten.dk) - * Giganti Cloud (cloud.gigantti.fi) - * ELKO Cloud (cloud.elko.is) +Jottacloud is a cloud storage service provider from a Norwegian company, using +its own datacenters in Norway. In addition to the official service at +[jottacloud.com](https://www.jottacloud.com/), it also provides white-label +solutions to different companies, such as: -Most of the white-label versions are supported by this backend, although may require different -authentication setup - described below. +- Telia + - Telia Cloud (cloud.telia.se) + - Telia Sky (sky.telia.no) +- Tele2 + - Tele2 Cloud (mittcloud.tele2.se) +- Onlime + - Onlime Cloud Storage (onlime.dk) +- Elkjøp (with subsidiaries): + - Elkjøp Cloud (cloud.elkjop.no) + - Elgiganten Sweden (cloud.elgiganten.se) + - Elgiganten Denmark (cloud.elgiganten.dk) + - Giganti Cloud (cloud.gigantti.fi) + - ELKO Cloud (cloud.elko.is) + +Most of the white-label versions are supported by this backend, although may +require different authentication setup - described below. Paths are specified as `remote:path` @@ -32,81 +34,92 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. 
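For example, listing a nested directory with the conventional remote name
`remote`:

```sh
rclone ls remote:directory/subdirectory
```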
## Authentication types -Some of the whitelabel versions uses a different authentication method than the official service, -and you have to choose the correct one when setting up the remote. +Some of the whitelabel versions uses a different authentication method than the +official service, and you have to choose the correct one when setting up the remote. ### Standard authentication -The standard authentication method used by the official service (jottacloud.com), as well as -some of the whitelabel services, requires you to generate a single-use personal login token -from the account security settings in the service's web interface. Log in to your account, -go to "Settings" and then "Security", or use the direct link presented to you by rclone when -configuring the remote: . Scroll down to the section -"Personal login token", and click the "Generate" button. Note that if you are using a -whitelabel service you probably can't use the direct link, you need to find the same page in -their dedicated web interface, and also it may be in a different location than described above. +The standard authentication method used by the official service (jottacloud.com), +as well as some of the whitelabel services, requires you to generate a single-use +personal login token from the account security settings in the service's web +interface. Log in to your account, go to "Settings" and then "Security", or use +the direct link presented to you by rclone when configuring the remote: +. Scroll down to the section "Personal login +token", and click the "Generate" button. Note that if you are using a whitelabel +service you probably can't use the direct link, you need to find the same page in +their dedicated web interface, and also it may be in a different location than +described above. -To access your account from multiple instances of rclone, you need to configure each of them -with a separate personal login token. E.g. you create a Jottacloud remote with rclone in one -location, and copy the configuration file to a second location where you also want to run -rclone and access the same remote. Then you need to replace the token for one of them, using -the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) command, which -requires you to generate a new personal login token and supply as input. If you do not -do this, the token may easily end up being invalidated, resulting in both instances failing -with an error message something along the lines of: +To access your account from multiple instances of rclone, you need to configure +each of them with a separate personal login token. E.g. you create a Jottacloud +remote with rclone in one location, and copy the configuration file to a second +location where you also want to run rclone and access the same remote. Then you +need to replace the token for one of them, using the [config reconnect](https://rclone.org/commands/rclone_config_reconnect/) +command, which requires you to generate a new personal login token and supply +as input. 
If you do not do this, the token may easily end up being invalidated, +resulting in both instances failing with an error message something along the +lines of: - oauth2: cannot fetch token: 400 Bad Request - Response: {"error":"invalid_grant","error_description":"Stale token"} +```text + oauth2: cannot fetch token: 400 Bad Request + Response: {"error":"invalid_grant","error_description":"Stale token"} +``` -When this happens, you need to replace the token as described above to be able to use your -remote again. +When this happens, you need to replace the token as described above to be able +to use your remote again. -All personal login tokens you have taken into use will be listed in the web interface under -"My logged in devices", and from the right side of that list you can click the "X" button to -revoke individual tokens. +All personal login tokens you have taken into use will be listed in the web +interface under "My logged in devices", and from the right side of that list +you can click the "X" button to revoke individual tokens. ### Legacy authentication -If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not have the option -to generate a CLI token. In this case you'll have to use the legacy authentication. To do this select -yes when the setup asks for legacy authentication and enter your username and password. -The rest of the setup is identical to the default setup. +If you are using one of the whitelabel versions (e.g. from Elkjøp) you may not +have the option to generate a CLI token. In this case you'll have to use the +legacy authentication. To do this select yes when the setup asks for legacy +authentication and enter your username and password. The rest of the setup is +identical to the default setup. ### Telia Cloud authentication -Similar to other whitelabel versions Telia Cloud doesn't offer the option of creating a CLI token, and -additionally uses a separate authentication flow where the username is generated internally. To setup -rclone to use Telia Cloud, choose Telia Cloud authentication in the setup. The rest of the setup is +Similar to other whitelabel versions Telia Cloud doesn't offer the option of +creating a CLI token, and additionally uses a separate authentication flow +where the username is generated internally. To setup rclone to use Telia Cloud, +choose Telia Cloud authentication in the setup. The rest of the setup is identical to the default setup. ### Tele2 Cloud authentication -As Tele2-Com Hem merger was completed this authentication can be used for former Com Hem Cloud and -Tele2 Cloud customers as no support for creating a CLI token exists, and additionally uses a separate -authentication flow where the username is generated internally. To setup rclone to use Tele2 Cloud, -choose Tele2 Cloud authentication in the setup. The rest of the setup is identical to the default setup. +As Tele2-Com Hem merger was completed this authentication can be used for former +Com Hem Cloud and Tele2 Cloud customers as no support for creating a CLI token +exists, and additionally uses a separate authentication flow where the username +is generated internally. To setup rclone to use Tele2 Cloud, choose Tele2 Cloud +authentication in the setup. The rest of the setup is identical to the default setup. 
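As with the variants above, the authentication type is selected interactively
during `rclone config`. For the standard type, a stale token can later be
swapped out with the reconnect command already mentioned (remote name `remote`
assumed):

```sh
rclone config reconnect remote:
```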
### Onlime Cloud Storage authentication -Onlime has sold access to Jottacloud proper, while providing localized support to Danish Customers, but -have recently set up their own hosting, transferring their customers from Jottacloud servers to their -own ones. +Onlime has sold access to Jottacloud proper, while providing localized support +to Danish Customers, but have recently set up their own hosting, transferring +their customers from Jottacloud servers to their own ones. -This, of course, necessitates using their servers for authentication, but otherwise functionality and -architecture seems equivalent to Jottacloud. +This, of course, necessitates using their servers for authentication, but +otherwise functionality and architecture seems equivalent to Jottacloud. -To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication in the setup. The rest -of the setup is identical to the default setup. +To setup rclone to use Onlime Cloud Storage, choose Onlime Cloud authentication +in the setup. The rest of the setup is identical to the default setup. ## Configuration -Here is an example of how to make a remote called `remote` with the default setup. First run: +Here is an example of how to make a remote called `remote` with the default setup. +First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -197,15 +210,21 @@ Once configured you can then use `rclone` like this, List directories in top level of your Jottacloud - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your Jottacloud - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to an Jottacloud directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Devices and Mountpoints @@ -286,18 +305,21 @@ as they can't be used in XML strings. ### Deleting files -By default, rclone will send all files to the trash when deleting files. They will be permanently -deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately -by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) flag, or set the equivalent environment variable. -Emptying the trash is supported by the [cleanup](/commands/rclone_cleanup/) command. +By default, rclone will send all files to the trash when deleting files. They +will be permanently deleted automatically after 30 days. You may bypass the +trash and permanently delete files immediately by using the [--jottacloud-hard-delete](#jottacloud-hard-delete) +flag, or set the equivalent environment variable. Emptying the trash is +supported by the [cleanup](/commands/rclone_cleanup/) command. ### Versions -Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. -Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website. +Jottacloud supports file versioning. When rclone uploads a new version of a +file it creates a new version of it. Currently rclone only supports retrieving +the current version but older versions can be accessed via the Jottacloud Website. -Versioning can be disabled by `--jottacloud-no-versions` option. This is achieved by deleting the remote file prior to uploading -a new version. If the upload the fails no version of the file will be available in the remote. 
+Versioning can be disabled by `--jottacloud-no-versions` option. This is +achieved by deleting the remote file prior to uploading a new version. If the +upload the fails no version of the file will be available in the remote. ### Quota information diff --git a/docs/content/koofr.md b/docs/content/koofr.md index ed7497604..8ef9641d1 100644 --- a/docs/content/koofr.md +++ b/docs/content/koofr.md @@ -19,11 +19,13 @@ giving the password a nice name like `rclone` and clicking on generate. Here is an example of how to make a remote called `koofr`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -89,15 +91,21 @@ Once configured you can then use `rclone` like this, List directories in top level of your Koofr - rclone lsd koofr: +```sh +rclone lsd koofr: +``` List all the files in your Koofr - rclone ls koofr: +```sh +rclone ls koofr: +``` To copy a local directory to an Koofr directory called backup - rclone copy /home/source koofr:backup +```sh +rclone copy /home/source koofr:backup +``` ### Restricted filename characters @@ -245,11 +253,13 @@ provides a Koofr API. Here is an example of how to make a remote called `ds`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -312,11 +322,13 @@ You may also want to use another, public or private storage provider that runs a Here is an example of how to make a remote called `other`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password diff --git a/docs/content/linkbox.md b/docs/content/linkbox.md index 5389468d5..2cb877046 100644 --- a/docs/content/linkbox.md +++ b/docs/content/linkbox.md @@ -14,11 +14,13 @@ Here is an example of making a remote for Linkbox. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password diff --git a/docs/content/local.md b/docs/content/local.md index 296ea8ceb..f44a4b1f3 100644 --- a/docs/content/local.md +++ b/docs/content/local.md @@ -8,7 +8,9 @@ versionIntroduced: "v0.91" Local paths are specified as normal filesystem paths, e.g. `/path/to/wherever`, so - rclone sync --interactive /home/source /tmp/destination +```sh +rclone sync --interactive /home/source /tmp/destination +``` Will sync `/home/source` to `/tmp/destination`. @@ -25,7 +27,7 @@ Rclone reads and writes the modification times using an accuracy determined by the OS. Typically this is 1ns on Linux, 10 ns on Windows and 1 Second on OS X. -### Filenames ### +### Filenames Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X. @@ -41,7 +43,7 @@ be replaced with a quoted representation of the invalid bytes. The name `gro\xdf` will be transferred as `gro‛DF`. `rclone` will emit a debug message in this case (use `-v` to see), e.g. -``` +```text Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf" ``` @@ -117,7 +119,7 @@ These only get replaced if they are the last character in the name: Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be converted to UTF-16. 
-### Paths on Windows ### +### Paths on Windows On Windows there are many ways of specifying a path to a file system resource. Local paths can be absolute, like `C:\path\to\wherever`, or relative, @@ -133,10 +135,11 @@ so in most cases you do not have to worry about this (read more [below](#long-pa Using the same prefix `\\?\` it is also possible to specify path to volumes identified by their GUID, e.g. `\\?\Volume{b75e2c83-0000-0000-0000-602f00000000}\some\path`. -#### Long paths #### +#### Long paths Rclone handles long paths automatically, by converting all paths to -[extended-length path format](https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation), which allows paths up to 32,767 characters. +[extended-length path format](https://docs.microsoft.com/en-us/windows/win32/fileio/maximum-file-path-limitation), +which allows paths up to 32,767 characters. This conversion will ensure paths are absolute and prefix them with the `\\?\`. This is why you will see that your paths, for instance @@ -147,18 +150,19 @@ However, in rare cases this may cause problems with buggy file system drivers like [EncFS](https://github.com/rclone/rclone/issues/261). To disable UNC conversion globally, add this to your `.rclone.conf` file: -``` +```ini [local] nounc = true ``` If you want to selectively disable UNC, you can add it to a separate entry like this: -``` +```ini [nounc] type = local nounc = true ``` + And use rclone like this: `rclone copy c:\src nounc:z:\dst` @@ -180,7 +184,7 @@ This flag applies to all commands. For example, supposing you have a directory structure like this -``` +```sh $ tree /tmp/a /tmp/a ├── b -> ../b @@ -192,7 +196,7 @@ $ tree /tmp/a Then you can see the difference with and without the flag like this -``` +```sh $ rclone ls /tmp/a 6 one 6 two/three @@ -200,7 +204,7 @@ $ rclone ls /tmp/a and -``` +```sh $ rclone -L ls /tmp/a 4174 expected 6 one @@ -209,7 +213,7 @@ $ rclone -L ls /tmp/a 6 b/one ``` -#### --local-links, --links, -l +#### --local-links, --links, -l Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows). @@ -223,7 +227,7 @@ This flag applies to all commands. For example, supposing you have a directory structure like this -``` +```sh $ tree /tmp/a /tmp/a ├── file1 -> ./file4 @@ -232,13 +236,13 @@ $ tree /tmp/a Copying the entire directory with '-l' -``` -$ rclone copy -l /tmp/a/ remote:/tmp/a/ +```sh +rclone copy -l /tmp/a/ remote:/tmp/a/ ``` The remote files are created with a `.rclonelink` suffix -``` +```sh $ rclone ls remote:/tmp/a 5 file1.rclonelink 14 file2.rclonelink @@ -246,7 +250,7 @@ $ rclone ls remote:/tmp/a The remote files will contain the target of the symbolic links -``` +```sh $ rclone cat remote:/tmp/a/file1.rclonelink ./file4 @@ -256,7 +260,7 @@ $ rclone cat remote:/tmp/a/file2.rclonelink Copying them back with '-l' -``` +```sh $ rclone copy -l remote:/tmp/a/ /tmp/b/ $ tree /tmp/b @@ -267,7 +271,7 @@ $ tree /tmp/b However, if copied back without '-l' -``` +```sh $ rclone copyto remote:/tmp/a/ /tmp/b/ $ tree /tmp/b @@ -278,7 +282,7 @@ $ tree /tmp/b If you want to copy a single file with `-l` then you must use the `.rclonelink` suffix. -``` +```sh $ rclone copy -l remote:/tmp/a/file1.rclonelink /tmp/c $ tree /tmp/c @@ -302,7 +306,7 @@ different file systems. 
For example if you have a directory hierarchy like this -``` +```sh root ├── disk1 - disk1 mounted on the root │   └── file3 - stored on disk1 @@ -312,15 +316,16 @@ root └── file2 - stored on the root disk ``` -Using `rclone --one-file-system copy root remote:` will only copy `file1` and `file2`. Eg +Using `rclone --one-file-system copy root remote:` will only copy `file1` +and `file2`. E.g. -``` +```sh $ rclone -q --one-file-system ls root 0 file1 0 file2 ``` -``` +```sh $ rclone -q ls root 0 disk1/file3 0 disk2/file4 diff --git a/docs/content/mailru.md b/docs/content/mailru.md index 51790739f..116ad7b6a 100644 --- a/docs/content/mailru.md +++ b/docs/content/mailru.md @@ -6,7 +6,10 @@ versionIntroduced: "v1.50" # {{< icon "fas fa-at" >}} Mail.ru Cloud -[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a Russian internet company [Mail.Ru Group](https://mail.ru). The official desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows and Mac OS. +[Mail.ru Cloud](https://cloud.mail.ru/) is a cloud storage provided by a +Russian internet company [Mail.Ru Group](https://mail.ru). The official +desktop client is [Disk-O:](https://disk-o.cloud/en), available on Windows +and Mac OS. ## Features highlights @@ -14,12 +17,13 @@ versionIntroduced: "v1.50" - Files have a `last modified time` property, directories don't - Deleted files are by default moved to the trash - Files and directories can be shared via public links -- Partial uploads or streaming are not supported, file size must be known before upload +- Partial uploads or streaming are not supported, file size must be known before + upload - Maximum file size is limited to 2G for a free account, unlimited for paid accounts - Storage keeps hash for all files and performs transparent deduplication, the hash algorithm is a modified SHA1 -- If a particular file is already present in storage, one can quickly submit file hash - instead of long file upload (this optimization is supported by rclone) +- If a particular file is already present in storage, one can quickly submit file + hash instead of long file upload (this optimization is supported by rclone) ## Configuration @@ -35,16 +39,22 @@ give an error like `oauth2: server response missing access_token`. - Go to Security / "Пароль и безопасность" - Click password for apps / "Пароли для внешних приложений" - Add the password - give it a name - eg "rclone" -- Select the permissions level. For some reason just "Full access to Cloud" (WebDav) doesn't work for Rclone currently. You have to select "Full access to Mail, Cloud and Calendar" (all protocols). ([thread on forum.rclone.org](https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298)) -- Copy the password and use this password below - your normal login password won't work. +- Select the permissions level. For some reason just "Full access to Cloud" + (WebDav) doesn't work for Rclone currently. You have to select "Full access + to Mail, Cloud and Calendar" (all protocols). + ([thread on forum.rclone.org](https://forum.rclone.org/t/failed-to-create-file-system-for-mailru-failed-to-authorize-oauth2-invalid-username-or-password-username-or-password-is-incorrect/49298)) +- Copy the password and use this password below - your normal login password + won't work. 
Now run - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -109,20 +119,28 @@ You can use the configured backend as shown below: See top level directories - rclone lsd remote: +```sh +rclone lsd remote: +``` Make a new directory - rclone mkdir remote:directory +```sh +rclone mkdir remote:directory +``` List the contents of a directory - rclone ls remote:directory +```sh +rclone ls remote:directory +``` Sync `/home/local/directory` to the remote path, deleting any excess files in the path. - rclone sync --interactive /home/local/directory remote:directory +```sh +rclone sync --interactive /home/local/directory remote:directory +``` ### Modification times and hashes diff --git a/docs/content/mega.md b/docs/content/mega.md index 2886c05f1..4c7584ce0 100644 --- a/docs/content/mega.md +++ b/docs/content/mega.md @@ -23,11 +23,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -65,22 +67,29 @@ d) Delete this remote y/e/d> y ``` -**NOTE:** The encryption keys need to have been already generated after a regular login -via the browser, otherwise attempting to use the credentials in `rclone` will fail. +**NOTE:** The encryption keys need to have been already generated after a regular +login via the browser, otherwise attempting to use the credentials in `rclone` +will fail. Once configured you can then use `rclone` like this, List directories in top level of your Mega - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your Mega - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to an Mega directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -110,26 +119,26 @@ Use `rclone dedupe` to fix duplicated files. #### Object not found -If you are connecting to your Mega remote for the first time, -to test access and synchronization, you may receive an error such as +If you are connecting to your Mega remote for the first time, +to test access and synchronization, you may receive an error such as -``` -Failed to create file system for "my-mega-remote:": +```text +Failed to create file system for "my-mega-remote:": couldn't login: Object (typically, node or user) not found ``` The diagnostic steps often recommended in the [rclone forum](https://forum.rclone.org/search?q=mega) -start with the **MEGAcmd** utility. Note that this refers to -the official C++ command from https://github.com/meganz/MEGAcmd -and not the go language built command from t3rm1n4l/megacmd -that is no longer maintained. +start with the **MEGAcmd** utility. Note that this refers to +the official C++ command from +and not the go language built command from t3rm1n4l/megacmd +that is no longer maintained. -Follow the instructions for installing MEGAcmd and try accessing -your remote as they recommend. You can establish whether or not -you can log in using MEGAcmd, and obtain diagnostic information -to help you, and search or work with others in the forum. +Follow the instructions for installing MEGAcmd and try accessing +your remote as they recommend. 
You can establish whether or not
+you can log in using MEGAcmd, and obtain diagnostic information
+to help you, and search or work with others in the forum.

-```
+```text
 MEGA CMD> login me@example.com
 Password:
 Fetching nodes ...
@@ -138,12 +147,11 @@ Login complete as me@example.com
 me@example.com:/$
 ```

-Note that some have found issues with passwords containing special
-characters. If you can not log on with rclone, but MEGAcmd logs on
-just fine, then consider changing your password temporarily to
+Note that some have found issues with passwords containing special
+characters. If you cannot log on with rclone, but MEGAcmd logs on
+just fine, then consider changing your password temporarily to
 pure alphanumeric characters, in case that helps.

-
 #### Repeated commands blocks access

 Mega remotes seem to get blocked (reject logins) under "heavy use".
diff --git a/docs/content/memory.md b/docs/content/memory.md
index c6f872d27..269ea3ac9 100644
--- a/docs/content/memory.md
+++ b/docs/content/memory.md
@@ -18,8 +18,8 @@ s3).

 You can configure it as a remote like this with `rclone config` too if
 you want to:

-```
-No remotes found, make a new one?
+```text
+No remotes found, make a new one?
 n) New remote
 s) Set configuration password
 q) Quit config
@@ -50,9 +50,11 @@ y/e/d> y

 Because the memory backend isn't persistent it is most useful for
 testing or with an rclone server or rclone mount, e.g.

-    rclone mount :memory: /mnt/tmp
-    rclone serve webdav :memory:
-    rclone serve sftp :memory:
+```sh
+rclone mount :memory: /mnt/tmp
+rclone serve webdav :memory:
+rclone serve sftp :memory:
+```

 ### Modification times and hashes
diff --git a/docs/content/netstorage.md b/docs/content/netstorage.md
index 517ce286e..65434eb96 100644
--- a/docs/content/netstorage.md
+++ b/docs/content/netstorage.md
@@ -8,16 +8,22 @@ versionIntroduced: "v1.58"

 Paths are specified as `remote:`
 You may put subdirectories in too, e.g. `remote:/path/to/dir`.
-If you have a CP code you can use that as the folder after the domain such as \<domain>\/\<cpcode>\/\<internal directories within cpcode>\.
+If you have a CP code you can use that as the folder after the domain such
+as \<domain>\/\<cpcode>\/\<internal directories within cpcode>\.

 For example, this is commonly configured with or without a CP code:
-* **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/`
-* **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net`
+- **With a CP code**. `[your-domain-prefix]-nsu.akamaihd.net/123456/subdirectory/`
+- **Without a CP code**. `[your-domain-prefix]-nsu.akamaihd.net`

 See all buckets
-   rclone lsd remote:
-The initial setup for Netstorage involves getting an account and secret. Use `rclone config` to walk you through the setup process.
+
+```sh
+rclone lsd remote:
+```
+
+The initial setup for Netstorage involves getting an account and secret.
+Use `rclone config` to walk you through the setup process.

 ## Configuration

 Here's an example of how to make a remote called `ns1`.

 1. To begin the interactive configuration process, enter this command:

-```
-rclone config
-```
+   ```sh
+   rclone config
+   ```

 2. Type `n` to create a new remote.

-```
-n) New remote
-d) Delete remote
-q) Quit config
-e/n/d/q> n
-```
+   ```text
+   n) New remote
+   d) Delete remote
+   q) Quit config
+   e/n/d/q> n
+   ```

 3. For this example, enter `ns1` when you reach the name> prompt.

-```
-name> ns1
-```
+   ```text
+   name> ns1
+   ```

 4. Enter `netstorage` as the type of storage to configure.

-```
-Type of storage to configure.
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
-XX / NetStorage
-   \ "netstorage"
-Storage> netstorage
-```
+   ```text
+   Type of storage to configure.
+   Enter a string value. Press Enter for the default ("").
+   Choose a number from below, or type in your own value
+   XX / NetStorage
+      \ "netstorage"
+   Storage> netstorage
+   ```

-5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS, which is the default. HTTP is provided primarily for debugging purposes.
+5. Select between the HTTP or HTTPS protocol. Most users should choose HTTPS,
+which is the default. HTTP is provided primarily for debugging purposes.

+   ```text
+   Enter a string value. Press Enter for the default ("").
+   Choose a number from below, or type in your own value
+   1 / HTTP protocol
+      \ "http"
+   2 / HTTPS protocol
+      \ "https"
+   protocol> 1
+   ```

-```
-Enter a string value. Press Enter for the default ("").
-Choose a number from below, or type in your own value
- 1 / HTTP protocol
-   \ "http"
- 2 / HTTPS protocol
-   \ "https"
-protocol> 1
-```

-6. Specify your NetStorage host, CP code, and any necessary content paths using this format: `<domain>/<cpcode>/<content>/`
+6. Specify your NetStorage host, CP code, and any necessary content paths using
+this format: `<domain>/<cpcode>/<content>/`

-```
-Enter a string value. Press Enter for the default ("").
-host> baseball-nsu.akamaihd.net/123456/content/
-```
+   ```text
+   Enter a string value. Press Enter for the default ("").
+   host> baseball-nsu.akamaihd.net/123456/content/
+   ```

 7. Set the netstorage account name
-```
-Enter a string value. Press Enter for the default ("").
-account> username
-```

+   ```text
+   Enter a string value. Press Enter for the default ("").
+   account> username
+   ```
+
-8. Set the Netstorage account secret/G2O key which will be used for authentication purposes. Select the `y` option to set your own password then enter your secret.
+8. Set the Netstorage account secret/G2O key which will be used for authentication
+purposes. Select the `y` option to set your own password then enter your secret.
 Note: The secret is stored in the `rclone.conf` file with hex-encoded encryption.

-```
-y) Yes type in my own password
-g) Generate random password
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-```
+   ```text
+   y) Yes type in my own password
+   g) Generate random password
+   y/g> y
+   Enter the password:
+   password:
+   Confirm the password:
+   password:
+   ```

 9. View the summary and confirm your remote configuration.

-```
-[ns1]
-type = netstorage
-protocol = http
-host = baseball-nsu.akamaihd.net/123456/content/
-account = username
-secret = *** ENCRYPTED ***
---------------------
-y) Yes this is OK (default)
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-```
+   ```text
+   [ns1]
+   type = netstorage
+   protocol = http
+   host = baseball-nsu.akamaihd.net/123456/content/
+   account = username
+   secret = *** ENCRYPTED ***
+   --------------------
+   y) Yes this is OK (default)
+   e) Edit this remote
+   d) Delete this remote
+   y/e/d> y
+   ```

 This remote is called `ns1` and can now be used.

 ## Example operations

-Get started with rclone and NetStorage with these examples. For additional rclone commands, visit https://rclone.org/commands/.
+Get started with rclone and NetStorage with these examples. For additional rclone
+commands, visit <https://rclone.org/commands/>.

 ### See contents of a directory in your project

-    rclone lsd ns1:/974012/testing/
+```sh
+rclone lsd ns1:/974012/testing/
+```

 ### Sync the contents local with remote

-    rclone sync . ns1:/974012/testing/
+```sh
+rclone sync . ns1:/974012/testing/
+```

 ### Upload local content to remote
-    rclone copy notes.txt ns1:/974012/testing/
+
+```sh
+rclone copy notes.txt ns1:/974012/testing/
+```

 ### Delete content on remote
-    rclone delete ns1:/974012/testing/notes.txt

-### Move or copy content between CP codes.
+```sh
+rclone delete ns1:/974012/testing/notes.txt
+```

-Your credentials must have access to two CP codes on the same remote. You can't perform operations between different remotes.
+### Move or copy content between CP codes

-    rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
+Your credentials must have access to two CP codes on the same remote.
+You can't perform operations between different remotes.
+
+```sh
+rclone move ns1:/974012/testing/notes.txt ns1:/974450/testing2/
+```

 ## Features

 ### Symlink Support

-The Netstorage backend changes the rclone `--links, -l` behavior. When uploading, instead of creating the .rclonelink file, use the "symlink" API in order to create the corresponding symlink on the remote. The .rclonelink file will not be created, the upload will be intercepted and only the symlink file that matches the source file name with no suffix will be created on the remote.
+The Netstorage backend changes the rclone `--links, -l` behavior. When uploading,
+instead of creating the .rclonelink file, it uses the "symlink" API to create
+the corresponding symlink on the remote. The .rclonelink file will not be created,
+the upload will be intercepted and only the symlink file that matches the source
+file name with no suffix will be created on the remote.

-This will effectively allow commands like copy/copyto, move/moveto and sync to upload from local to remote and download from remote to local directories with symlinks. Due to internal rclone limitations, it is not possible to upload an individual symlink file to any remote backend. You can always use the "backend symlink" command to create a symlink on the NetStorage server, refer to "symlink" section below.
+This will effectively allow commands like copy/copyto, move/moveto and sync to
+upload from local to remote and download from remote to local directories with
+symlinks. Due to internal rclone limitations, it is not possible to upload an
+individual symlink file to any remote backend. You can always use the "backend
+symlink" command to create a symlink on the NetStorage server, refer to the
+"symlink" section below.

-Individual symlink files on the remote can be used with the commands like "cat" to print the destination name, or "delete" to delete symlink, or copy, copy/to and move/moveto to download from the remote to local. Note: individual symlink files on the remote should be specified including the suffix .rclonelink.
+Individual symlink files on the remote can be used with commands like "cat"
+to print the destination name, or "delete" to delete a symlink, or copy/copyto
+and move/moveto to download from the remote to local. Note: individual symlink
+files on the remote should be specified including the suffix .rclonelink.

-**Note**: No file with the suffix .rclonelink should ever exist on the server since it is not possible to actually upload/create a file with .rclonelink suffix with rclone, it can only exist if it is manually created through a non-rclone method on the remote.
+**Note**: No file with the suffix .rclonelink should ever exist on the server +since it is not possible to actually upload/create a file with .rclonelink suffix +with rclone, it can only exist if it is manually created through a non-rclone +method on the remote. ### Implicit vs. Explicit Directories With NetStorage, directories can exist in one of two forms: -1. **Explicit Directory**. This is an actual, physical directory that you have created in a storage group. -2. **Implicit Directory**. This refers to a directory within a path that has not been physically created. For example, during upload of a file, nonexistent subdirectories can be specified in the target path. NetStorage creates these as "implicit." While the directories aren't physically created, they exist implicitly and the noted path is connected with the uploaded file. +1. **Explicit Directory**. This is an actual, physical directory that you have + created in a storage group. +2. **Implicit Directory**. This refers to a directory within a path that has + not been physically created. For example, during upload of a file, nonexistent + subdirectories can be specified in the target path. NetStorage creates these + as "implicit." While the directories aren't physically created, they exist + implicitly and the noted path is connected with the uploaded file. -Rclone will intercept all file uploads and mkdir commands for the NetStorage remote and will explicitly issue the mkdir command for each directory in the uploading path. This will help with the interoperability with the other Akamai services such as SFTP and the Content Management Shell (CMShell). Rclone will not guarantee correctness of operations with implicit directories which might have been created as a result of using an upload API directly. +Rclone will intercept all file uploads and mkdir commands for the NetStorage +remote and will explicitly issue the mkdir command for each directory in the +uploading path. This will help with the interoperability with the other Akamai +services such as SFTP and the Content Management Shell (CMShell). Rclone will +not guarantee correctness of operations with implicit directories which might +have been created as a result of using an upload API directly. ### `--fast-list` / ListR support -NetStorage remote supports the ListR feature by using the "list" NetStorage API action to return a lexicographical list of all objects within the specified CP code, recursing into subdirectories as they're encountered. +NetStorage remote supports the ListR feature by using the "list" NetStorage API +action to return a lexicographical list of all objects within the specified CP +code, recursing into subdirectories as they're encountered. -* **Rclone will use the ListR method for some commands by default**. Commands such as `lsf -R` will use ListR by default. To disable this, include the `--disable listR` option to use the non-recursive method of listing objects. +- **Rclone will use the ListR method for some commands by default**. Commands +such as `lsf -R` will use ListR by default. To disable this, include the +`--disable listR` option to use the non-recursive method of listing objects. -* **Rclone will not use the ListR method for some commands**. Commands such as `sync` don't use ListR by default. To force using the ListR method, include the `--fast-list` option. +- **Rclone will not use the ListR method for some commands**. Commands such as +`sync` don't use ListR by default. To force using the ListR method, include the +`--fast-list` option. 
-There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list). In general, the sync command over an existing deep tree on the remote will run faster with the "--fast-list" flag but with extra memory usage as a side effect. It might also result in higher CPU utilization but the whole task can be completed faster. +There are pros and cons of using the ListR method, refer to [rclone documentation](https://rclone.org/docs/#fast-list). +In general, the sync command over an existing deep tree on the remote will +run faster with the "--fast-list" flag but with extra memory usage as a side effect. +It might also result in higher CPU utilization but the whole task can be completed +faster. -**Note**: There is a known limitation that "lsf -R" will display number of files in the directory and directory size as -1 when ListR method is used. The workaround is to pass "--disable listR" flag if these numbers are important in the output. +**Note**: There is a known limitation that "lsf -R" will display number of files +in the directory and directory size as -1 when ListR method is used. The workaround +is to pass "--disable listR" flag if these numbers are important in the output. ### Purge -NetStorage remote supports the purge feature by using the "quick-delete" NetStorage API action. The quick-delete action is disabled by default for security reasons and can be enabled for the account through the Akamai portal. Rclone will first try to use quick-delete action for the purge command and if this functionality is disabled then will fall back to a standard delete method. +NetStorage remote supports the purge feature by using the "quick-delete" +NetStorage API action. The quick-delete action is disabled by default for security +reasons and can be enabled for the account through the Akamai portal. Rclone +will first try to use quick-delete action for the purge command and if this +functionality is disabled then will fall back to a standard delete method. -**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) for considerations when using "quick-delete". In general, using quick-delete method will not delete the tree immediately and objects targeted for quick-delete may still be accessible. +**Note**: Read the [NetStorage Usage API](https://learn.akamai.com/en-us/webhelp/netstorage/netstorage-http-api-developer-guide/GUID-15836617-9F50-405A-833C-EA2556756A30.html) +for considerations when using "quick-delete". In general, using quick-delete +method will not delete the tree immediately and objects targeted for +quick-delete may still be accessible. {{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/netstorage/netstorage.go then run make backenddocs" >}} ### Standard options diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md index c565db861..6420037d1 100644 --- a/docs/content/onedrive.md +++ b/docs/content/onedrive.md @@ -18,11 +18,13 @@ you through it. Here is an example of how to make a remote called `remote`. 
First run:

-    rclone config
+```sh
+rclone config
+```

 This will guide you through an interactive setup process:

-```
+```text
 e) Edit existing remote
 n) New remote
 d) Delete remote
@@ -110,57 +112,88 @@ Once configured you can then use `rclone` like this,

 List directories in top level of your OneDrive

-    rclone lsd remote:
+```sh
+rclone lsd remote:
+```

 List all the files in your OneDrive

-    rclone ls remote:
+```sh
+rclone ls remote:
+```

 To copy a local directory to an OneDrive directory called backup

-    rclone copy /home/source remote:backup
+```sh
+rclone copy /home/source remote:backup
+```

 ### Getting your own Client ID and Key

-rclone uses a default Client ID when talking to OneDrive, unless a custom `client_id` is specified in the config.
-The default Client ID and Key are shared by all rclone users when performing requests.
+rclone uses a default Client ID when talking to OneDrive, unless a custom
+`client_id` is specified in the config. The default Client ID and Key are
+shared by all rclone users when performing requests.

-You may choose to create and use your own Client ID, in case the default one does not work well for you.
-For example, you might see throttling.
+You may choose to create and use your own Client ID, in case the default one
+does not work well for you. For example, you might see throttling.

 #### Creating Client ID for OneDrive Personal

 To create your own Client ID, please follow these steps:

-1. Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the `Add` menu click `App registration`.
-    * If you have not created an Azure account, you will be prompted to. This is free, but you need to provide a phone number, address, and credit card for identity verification.
-2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use.
-3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards).
-4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`.
-5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`, `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and `Sites.Read.All` (if custom access scopes are configured, select the permissions accordingly). Once selected click `Add permissions` at the bottom.
+1. Open <https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview>
+   and then under the `Add` menu click `App registration`.
+   - If you have not created an Azure account, you will be prompted to. This is free,
+     but you need to provide a phone number, address, and credit card for identity
+     verification.
+2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`,
+   select `Web` in `Redirect URI`, then type (do not copy and paste)
+   `http://localhost:53682/` and click Register. Copy and keep the
+   `Application (client) ID` under the app name for later use.
+3. Under `manage` select `Certificates & secrets`, click `New client secret`.
+   Enter a description (can be anything) and set `Expires` to 24 months.
+   Copy and keep that secret *Value* for later use (you *won't* be able to see
+   this value afterwards).
+4. Under `manage` select `API permissions`, click `Add a permission` and select
+   `Microsoft Graph` then select `delegated permissions`.
+5. Search and select the following permissions: `Files.Read`, `Files.ReadWrite`,
+   `Files.Read.All`, `Files.ReadWrite.All`, `offline_access`, `User.Read` and
+   `Sites.Read.All` (if custom access scopes are configured, select the
+   permissions accordingly). Once selected click `Add permissions` at the bottom.

-Now the application is complete. Run `rclone config` to create or edit a OneDrive remote.
-Supply the app ID and password as Client ID and Secret, respectively. rclone will walk you through the remaining steps.
+Now the application is complete. Run `rclone config` to create or edit a OneDrive
+remote. Supply the app ID and password as Client ID and Secret, respectively.
+rclone will walk you through the remaining steps.
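+
+As a rough sketch (hypothetical values), the resulting remote section in your
+`rclone.conf` then carries the credentials of the registered app:
+
+```ini
+[remote]
+type = onedrive
+client_id = YOUR_APPLICATION_CLIENT_ID
+client_secret = YOUR_CLIENT_SECRET_VALUE
+```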

 The access_scopes option allows you to configure the permissions requested by rclone.
-See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions) for more information about the different scopes.
+See [Microsoft Docs](https://docs.microsoft.com/en-us/graph/permissions-reference#files-permissions)
+for more information about the different scopes.

-The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883). However, if that permission is not assigned, you need to exclude `Sites.Read.All` from your access scopes or set `disable_site_permission` option to true in the advanced options.
+The `Sites.Read.All` permission is required if you need to [search SharePoint sites when configuring the remote](https://github.com/rclone/rclone/pull/5883).
+However, if that permission is not assigned, you need to exclude `Sites.Read.All`
+from your access scopes or set the `disable_site_permission` option to true in the
+advanced options.

 #### Creating Client ID for OneDrive Business

-The steps for OneDrive Personal may or may not work for OneDrive Business, depending on the security settings of the organization.
+The steps for OneDrive Personal may or may not work for OneDrive Business,
+depending on the security settings of the organization.
 A common error is that the publisher of the App is not verified.
-You may try to [verify you account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview), or try to limit the App to your organization only, as shown below.
+You may try to [verify your account](https://docs.microsoft.com/en-us/azure/active-directory/develop/publisher-verification-overview),
+or try to limit the App to your organization only, as shown below.

 1. Make sure to create the App with your business account.
-2. Follow the steps above to create an App. However, we need a different account type here: `Accounts in this organizational directory only (*** - Single tenant)`. Note that you can also change the account type after creating the App.
+2. Follow the steps above to create an App.
However, we need a different account + type here: `Accounts in this organizational directory only (*** - Single tenant)`. + Note that you can also change the account type after creating the App. +3. Find the [tenant ID](https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/active-directory-how-to-find-tenant) + of your organization. 4. In the rclone config, set `auth_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/authorize`. 5. In the rclone config, set `token_url` to `https://login.microsoftonline.com/YOUR_TENANT_ID/oauth2/v2.0/token`. -Note: If you have a special region, you may need a different host in step 4 and 5. Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86). +Note: If you have a special region, you may need a different host in step 4 and 5. +Here are [some hints](https://github.com/rclone/rclone/blob/bc23bf11db1c78c6ebbf8ea538fbebf7058b4176/backend/onedrive/onedrive.go#L86). ### Using OAuth Client Credential flow @@ -170,10 +203,14 @@ that adopting the context of an Azure AD user account. This flow can be enabled by following the steps below: -1. Create the Enterprise App registration in the Azure AD portal and obtain a Client ID and Client Secret as described above. -2. Ensure that the application has the appropriate permissions and they are assigned as *Application Permissions* -3. Configure the remote, ensuring that *Client ID* and *Client Secret* are entered correctly. -4. In the *Advanced Config* section, enter `true` for `client_credentials` and in the `tenant` section enter the tenant ID. +1. Create the Enterprise App registration in the Azure AD portal and obtain a + Client ID and Client Secret as described above. +2. Ensure that the application has the appropriate permissions and they are + assigned as *Application Permissions* +3. Configure the remote, ensuring that *Client ID* and *Client Secret* are + entered correctly. +4. In the *Advanced Config* section, enter `true` for `client_credentials` and + in the `tenant` section enter the tenant ID. When it comes to choosing the type of the connection work with the client credentials flow. In particular the "onedrive" option does not diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md index b38b8d01a..21fba294e 100644 --- a/docs/content/opendrive.md +++ b/docs/content/opendrive.md @@ -14,11 +14,13 @@ Paths may be as deep as required, e.g. `remote:directory/subdirectory`. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: -``` +```text n) New remote d) Delete remote q) Quit config @@ -55,15 +57,21 @@ y/e/d> y List directories in top level of your OpenDrive - rclone lsd remote: +```sh +rclone lsd remote: +``` List all the files in your OpenDrive - rclone ls remote: +```sh +rclone ls remote: +``` To copy a local directory to an OpenDrive directory called backup - rclone copy /home/source remote:backup +```sh +rclone copy /home/source remote:backup +``` ### Modification times and hashes @@ -99,7 +107,6 @@ These only get replaced if they are the first or last character in the name: | VT | 0x0B | ␋ | | CR | 0x0D | ␍ | - Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8), as they can't be used in JSON strings. 
diff --git a/docs/content/oracleobjectstorage/_index.md b/docs/content/oracleobjectstorage/_index.md index 73d484986..c1eceec4e 100644 --- a/docs/content/oracleobjectstorage/_index.md +++ b/docs/content/oracleobjectstorage/_index.md @@ -6,30 +6,34 @@ versionIntroduced: "v1.60" --- # {{< icon "fa fa-cloud" >}} Oracle Object Storage + - [Oracle Object Storage Overview](https://docs.oracle.com/en-us/iaas/Content/Object/Concepts/objectstorageoverview.htm) - [Oracle Object Storage FAQ](https://www.oracle.com/cloud/storage/object-storage/faq/) - [Oracle Object Storage Limits](https://docs.oracle.com/en-us/iaas/Content/Resources/Assets/whitepapers/oci-object-storage-best-practices.pdf) -Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command.) You may put subdirectories in -too, e.g. `remote:bucket/path/to/dir`. +Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). +You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`. Sample command to transfer local artifacts to remote:bucket in oracle object storage: -`rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv` +```sh +rclone -vvv --progress --stats-one-line --max-stats-groups 10 --log-format date,time,UTC,longfile --fast-list --buffer-size 256Mi --oos-no-check-bucket --oos-upload-cutoff 10Mi --multi-thread-cutoff 16Mi --multi-thread-streams 3000 --transfers 3000 --checkers 64 --retries 2 --oos-chunk-size 10Mi --oos-upload-concurrency 10000 --oos-attempt-resume-upload --oos-leave-parts-on-error sync ./artifacts remote:bucket -vv +``` ## Configuration -Here is an example of making an oracle object storage configuration. `rclone config` walks you -through it. +Here is an example of making an oracle object storage configuration. `rclone config` +walks you through it. Here is an example of how to make a remote called `remote`. First run: - rclone config +```sh +rclone config +``` This will guide you through an interactive setup process: - -``` +```text n) New remote d) Delete remote r) Rename remote @@ -133,16 +137,22 @@ y/e/d> y See all buckets - rclone lsd remote: +```sh +rclone lsd remote: +``` Create a new bucket - rclone mkdir remote:bucket +```sh +rclone mkdir remote:bucket +``` List the contents of a bucket - rclone ls remote:bucket - rclone ls remote:bucket --max-depth 1 +```sh +rclone ls remote:bucket +rclone ls remote:bucket --max-depth 1 +``` ## Authentication Providers @@ -152,102 +162,128 @@ These choices can be specified in the rclone config file. Rclone supports the following OCI authentication provider. 
- User Principal
- Instance Principal
- Resource Principal
- Workload Identity
- No authentication
+```text
+User Principal
+Instance Principal
+Resource Principal
+Workload Identity
+No authentication
+```

 ### User Principal

 Sample rclone config file for Authentication Provider User Principal:

-    [oos]
-    type = oracleobjectstorage
-    namespace = id34
-    compartment = ocid1.compartment.oc1..aaba
-    region = us-ashburn-1
-    provider = user_principal_auth
-    config_file = /home/opc/.oci/config
-    config_profile = Default
+```ini
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = user_principal_auth
+config_file = /home/opc/.oci/config
+config_profile = Default
+```

 Advantages:
-- One can use this method from any server within OCI or on-premises or from other cloud provider.
+
+- One can use this method from any server within OCI or on-premises or from
+  other cloud providers.

 Considerations:
-- you need to configure user’s privileges / policy to allow access to object storage
+
+- You need to configure the user’s privileges / policy to allow access to object
+  storage
 - Overhead of managing users and keys.
-- If the user is deleted, the config file will no longer work and may cause automation regressions that use the user's credentials.
+- If the user is deleted, the config file will no longer work and may cause
+  automation regressions that use the user's credentials.
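+
+For reference, the file pointed to by `config_file` uses the standard OCI SDK
+configuration format; a minimal sketch (hypothetical OCIDs, fingerprint and key
+path) matching the `Default` profile above might look like:
+
+```ini
+[Default]
+user=ocid1.user.oc1..aaaa
+fingerprint=aa:bb:cc:dd:ee:ff
+key_file=/home/opc/.oci/oci_api_key.pem
+tenancy=ocid1.tenancy.oc1..aaaa
+region=us-ashburn-1
+```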

-### Instance Principal
+### Instance Principal

-An OCI compute instance can be authorized to use rclone by using it's identity and certificates as an instance principal.
-With this approach no credentials have to be stored and managed.
+An OCI compute instance can be authorized to use rclone by using its identity
+and certificates as an instance principal. With this approach no credentials
+have to be stored and managed.

 Sample rclone configuration file for Authentication Provider Instance Principal:

-    [opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
-    [oos]
-    type = oracleobjectstorage
-    namespace = idfn
-    compartment = ocid1.compartment.oc1..aak7a
-    region = us-ashburn-1
-    provider = instance_principal_auth
+```sh
+[opc@rclone ~]$ cat ~/.config/rclone/rclone.conf
+[oos]
+type = oracleobjectstorage
+namespace = idfn
+compartment = ocid1.compartment.oc1..aak7a
+region = us-ashburn-1
+provider = instance_principal_auth
+```

 Advantages:

-- With instance principals, you don't need to configure user credentials and transfer/ save it to disk in your compute
-  instances or rotate the credentials.
+- With instance principals, you don't need to configure user credentials,
+  transfer/save them to disk in your compute instances, or rotate the credentials.
 - You don’t need to deal with users and keys.
-- Greatly helps in automation as you don't have to manage access keys, user private keys, storing them in vault,
-  using kms etc.
+- Greatly helps in automation as you don't have to manage access keys, user
+  private keys, storing them in vault, using kms etc.

 Considerations:

-- You need to configure a dynamic group having this instance as member and add policy to read object storage to that
-  dynamic group.
+- You need to configure a dynamic group having this instance as member and add
+  policy to read object storage to that dynamic group.
 - Everyone who has access to this machine can execute the CLI commands.
-- It is applicable for oci compute instances only. It cannot be used on external instance or resources.
+- It is applicable for OCI compute instances only. It cannot be used on external
+  instances or resources.

 ### Resource Principal

-Resource principal auth is very similar to instance principal auth but used for resources that are not
-compute instances such as [serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
-To use resource principal ensure Rclone process is started with these environment variables set in its process.
+Resource principal auth is very similar to instance principal auth but used for
+resources that are not compute instances such as
+[serverless functions](https://docs.oracle.com/en-us/iaas/Content/Functions/Concepts/functionsoverview.htm).
+To use resource principal, ensure the Rclone process is started with these
+environment variables set in its process.

-    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
-    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
-    export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
-    export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+```sh
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+export OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM=/usr/share/model-server/key.pem
+export OCI_RESOURCE_PRINCIPAL_RPST=/usr/share/model-server/security_token
+```

 Sample rclone configuration file for Authentication Provider Resource Principal:

-    [oos]
-    type = oracleobjectstorage
-    namespace = id34
-    compartment = ocid1.compartment.oc1..aaba
-    region = us-ashburn-1
-    provider = resource_principal_auth
+```ini
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = resource_principal_auth
+```

 ### Workload Identity

-Workload Identity auth may be used when running Rclone from Kubernetes pod on a Container Engine for Kubernetes (OKE) cluster.
-For more details on configuring Workload Identity, see [Granting Workloads Access to OCI Resources](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm).
-To use workload identity, ensure Rclone is started with these environment variables set in its process.
-    export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
-    export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+Workload Identity auth may be used when running Rclone from a Kubernetes pod on
+a Container Engine for Kubernetes (OKE) cluster. For more details on configuring
+Workload Identity, see [Granting Workloads Access to OCI Resources](https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contenggrantingworkloadaccesstoresources.htm).
+To use workload identity, ensure Rclone is started with these environment
+variables set in its process.
+
+```sh
+export OCI_RESOURCE_PRINCIPAL_VERSION=2.2
+export OCI_RESOURCE_PRINCIPAL_REGION=us-ashburn-1
+```

 ### No authentication

 Public buckets do not require any authentication mechanism to read objects.
 Sample rclone configuration file for No authentication:
-
-    [oos]
-    type = oracleobjectstorage
-    namespace = id34
-    compartment = ocid1.compartment.oc1..aaba
-    region = us-ashburn-1
-    provider = no_auth
+
+```ini
+[oos]
+type = oracleobjectstorage
+namespace = id34
+compartment = ocid1.compartment.oc1..aaba
+region = us-ashburn-1
+provider = no_auth
+```
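+
+With `provider = no_auth`, a public bucket can then be read without any
+credentials; for example (a sketch, reusing the `oos` remote name from the
+sample above and a hypothetical bucket name):
+
+```sh
+rclone ls oos:my-public-bucket
+```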

 ### Modification times and hashes

 The modification time is stored as metadata on the object as
 If the modification time needs to be updated rclone will attempt to perform a server
 side copy to update the modification if the object can be copied in a single part.
-In the case the object is larger than 5Gb, the object will be uploaded rather than copied.
+In the case the object is larger than 5Gb, the object will be uploaded rather than
+copied.

-Note that reading this from the object takes an additional `HEAD` request as the metadata
-isn't returned in object listings.
+Note that reading this from the object takes an additional `HEAD` request as the
+metadata isn't returned in object listings.

 The MD5 hash algorithm is supported.

diff --git a/docs/content/oracleobjectstorage/tutorial_mount.md b/docs/content/oracleobjectstorage/tutorial_mount.md
index 4e69b8c5f..e24c50f17 100644
--- a/docs/content/oracleobjectstorage/tutorial_mount.md
+++ b/docs/content/oracleobjectstorage/tutorial_mount.md
@@ -3,23 +3,25 @@ title: "Oracle Object Storage Mount"
 description: "Oracle Object Storage mounting tutorial"
 ---

-# {{< icon "fa fa-cloud" >}} Mount Buckets and Expose via NFS Tutorial
-This runbook shows how to [mount](/commands/rclone_mount/) *Oracle Object Storage* buckets as local file system in
-OCI compute Instance using rclone tool.
+# {{< icon "fa fa-cloud" >}} Mount Buckets and Expose via NFS Tutorial

-You will also learn how to export the rclone mounts as NFS mount, so that other NFS client can access them.
+This runbook shows how to [mount](/commands/rclone_mount/) *Oracle Object Storage*
+buckets as a local file system in an OCI compute instance using the rclone tool.

-Usage Pattern :
+You will also learn how to export the rclone mounts as an NFS mount, so that
+other NFS clients can access them.
+
+Usage Pattern:

 NFS Client --> NFS Server --> RClone Mount --> OCI Object Storage

 ## Step 1 : Install Rclone

 In oracle linux 8, Rclone can be installed from
-[OL8_Developer](https://yum.oracle.com/repo/OracleLinux/OL8/developer/x86_64/index.html) Yum Repo, Please enable the
-repo if not enabled already.
+[OL8_Developer](https://yum.oracle.com/repo/OracleLinux/OL8/developer/x86_64/index.html)
+Yum Repo. Please enable the repo if not enabled already.

-```shell
+```sh
 [opc@base-inst-boot ~]$ sudo yum-config-manager --enable ol8_developer
 [opc@base-inst-boot ~]$ sudo yum install -y rclone
 [opc@base-inst-boot ~]$ sudo yum install -y fuse
@@ -42,67 +44,68 @@ License     : MIT
 Description : Rclone is a command line program to sync files and directories
             : to and from various cloud services.
 ```

-To run it as a mount helper you should symlink rclone binary to /sbin/mount.rclone and optionally /usr/bin/rclonefs,
-e.g. ln -s /usr/bin/rclone /sbin/mount.rclone. rclone will detect it and translate command-line arguments appropriately.
+To run it as a mount helper you should symlink the rclone binary to /sbin/mount.rclone
+and optionally /usr/bin/rclonefs, e.g. `ln -s /usr/bin/rclone /sbin/mount.rclone`.
+rclone will detect it and translate command-line arguments appropriately.

-```shell
+```sh
 ln -s /usr/bin/rclone /sbin/mount.rclone
 ```

 ## Step 2: Setup Rclone Configuration file

-Let's assume you want to access 3 buckets from the oci compute instance using instance principal provider as means of
-authenticating with object storage service.
+Let's assume you want to access 3 buckets from the OCI compute instance using
+the instance principal provider as the means of authenticating with the object
+storage service.

 - namespace-a, bucket-a,
 - namespace-b, bucket-b,
 - namespace-c, bucket-c

-Rclone configuration file needs to have 3 remote sections, one section of each of above 3 buckets. Create a
-configuration file in a accessible location that rclone program can read.
-
-```shell
+The rclone configuration file needs to have 3 remote sections, one section for
+each of the above 3 buckets. Create a configuration file in an accessible
+location that the rclone program can read.

+```sh
 [opc@base-inst-boot ~]$ mkdir -p /etc/rclone
 [opc@base-inst-boot ~]$ sudo touch /etc/rclone/rclone.conf
-
-
+
+
 # add below contents to /etc/rclone/rclone.conf
 [opc@base-inst-boot ~]$ cat /etc/rclone/rclone.conf
-
-
+
+
 [ossa]
 type = oracleobjectstorage
 provider = instance_principal_auth
 namespace = namespace-a
 compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-a
 region = us-ashburn-1
-
+
 [ossb]
 type = oracleobjectstorage
 provider = instance_principal_auth
 namespace = namespace-b
 compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-b
 region = us-ashburn-1
-
-
+
+
 [ossc]
 type = oracleobjectstorage
 provider = instance_principal_auth
 namespace = namespace-c
 compartment = ocid1.compartment.oc1..aaaaaaaa...compartment-c
 region = us-ashburn-1
-
+
 # List remotes
 [opc@base-inst-boot ~]$ rclone --config /etc/rclone/rclone.conf listremotes
 ossa:
 ossb:
 ossc:
-
+
 # Now please ensure you do not see below errors while listing the bucket,
 # i.e you should fix the settings to see if namespace, compartment, bucket name are all correct.
 # and you must have a dynamic group policy to allow the instance to use object-family in compartment.
-
+
 [opc@base-inst-boot ~]$ rclone --config /etc/rclone/rclone.conf ls ossa:
 2023/04/07 19:09:21 Failed to ls: Error returned by ObjectStorage Service. Http Status Code: 404. Error Code: NamespaceNotFound. Opc request id: iad-1:kVVAb0knsVXDvu9aHUGHRs3gSNBOFO2_334B6co82LrPMWo2lM5PuBKNxJOTmZsS. Message: You do not have authorization to perform this request, or the requested resource could not be found. Operation Name: ListBuckets
@@ -117,49 +120,56 @@ If you are unable to resolve this ObjectStorage issue, please contact Oracle sup
 ```

-## Step 3: Setup Dynamic Group and Add IAM Policy.
+## Step 3: Setup Dynamic Group and Add IAM Policy

-Just like a human user has an identity identified by its USER-PRINCIPAL, every OCI compute instance is also a robotic
-user identified by its INSTANCE-PRINCIPAL. The instance principal key is automatically fetched by rclone/with-oci-sdk
+Just like a human user has an identity identified by its USER-PRINCIPAL, every
+OCI compute instance is also a robotic user identified by its INSTANCE-PRINCIPAL.
+The instance principal key is automatically fetched by rclone/with-oci-sdk
 from instance-metadata to make calls to object storage.

 Similar to [user-group](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managinggroups.htm),
 [instance groups](https://docs.oracle.com/en-us/iaas/Content/Identity/Tasks/managingdynamicgroups.htm)
 is known as dynamic-group in IAM.

-Create a dynamic group say rclone-dynamic-group that the oci compute instance becomes a member of the below group
-says all instances belonging to compartment a...c is member of this dynamic-group.
+Create a dynamic group, say rclone-dynamic-group, that the OCI compute instance
+becomes a member of. The below group definition says all instances belonging to
+compartments a...c are members of this dynamic-group.

-```shell
-any {instance.compartment.id = '<compartment_ocid_a>',
-    instance.compartment.id = '<compartment_ocid_b>',
-    instance.compartment.id = '<compartment_ocid_c>'
-   }
+```sh
+any {instance.compartment.id = '<compartment_ocid_a>',
+     instance.compartment.id = '<compartment_ocid_b>',
+     instance.compartment.id = '<compartment_ocid_c>'
+    }
+```

-Now that you have a dynamic group, you need to add a policy allowing what permissions this dynamic-group has.
-In our case, we want this dynamic-group to access object-storage. So create a policy now.
+Now that you have a dynamic group, you need to add a policy allowing what
+permissions this dynamic-group has. In our case, we want this dynamic-group to
+access object-storage.
So create a policy now.

-```shell
+```sh
 allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-a
 allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-b
 allow dynamic-group rclone-dynamic-group to manage object-family in compartment compartment-c
 ```

-After you add the policy, now ensure the rclone can list files in your bucket, if not please troubleshoot any mistakes
-you did so far. Please note, identity can take upto a minute to ensure policy gets reflected.
+After you add the policy, ensure that rclone can list files in your bucket;
+if not, please troubleshoot any mistakes made so far. Please note, identity
+can take up to a minute for the policy to get reflected.

 ## Step 4: Setup Mount Folders

-Let's assume you have to mount 3 buckets, bucket-a, bucket-b, bucket-c at path /opt/mnt/bucket-a, /opt/mnt/bucket-b,
-/opt/mnt/bucket-c respectively.
+Let's assume you have to mount 3 buckets, bucket-a, bucket-b, bucket-c at path
+/opt/mnt/bucket-a, /opt/mnt/bucket-b, /opt/mnt/bucket-c respectively.

 Create the mount folder and set its ownership to desired user, group.

-```shell
+```sh
 [opc@base-inst-boot ~]$ sudo mkdir /opt/mnt
 [opc@base-inst-boot ~]$ sudo chown -R opc:adm /opt/mnt
 ```

 Set chmod permissions to user, group, others as desired for each mount path

-```shell
+```sh
 [opc@base-inst-boot ~]$ sudo chmod 764 /opt/mnt
 [opc@base-inst-boot ~]$ ls -al /opt/mnt/
 total 0
@@ -179,21 +189,23 @@ drwxrwxr-x. 2 opc opc  6 Apr  7 18:17 bucket-b
 drwxrwxr-x. 2 opc opc  6 Apr  7 18:17 bucket-c
 ```

-## Step 5: Identify Rclone mount CLI configuration settings to use.
+## Step 5: Identify Rclone mount CLI configuration settings to use

-Please read through this [rclone mount](https://rclone.org/commands/rclone_mount/) page completely to really
-understand the mount and its flags, what is rclone
-[virtual file system](https://rclone.org/commands/rclone_mount/#vfs-virtual-file-system) mode settings and
-how to effectively use them for desired Read/Write consistencies.
+Please read through this [rclone mount](https://rclone.org/commands/rclone_mount/)
+page completely to really understand the mount and its flags, what the rclone
+[virtual file system](https://rclone.org/commands/rclone_mount/#vfs-virtual-file-system)
+mode settings are, and how to effectively use them for desired Read/Write
+consistencies.

-Local File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable.
-Object storage can throw several errors like 429, 503, 404 etc. The rclone sync/copy commands cope with this with
-lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads.
-Please Look at the VFS File Caching for solutions to make mount more reliable.
+Local file systems expect things to be 100% reliable, whereas cloud storage
+systems are a long way from 100% reliable. Object storage can throw several
+errors like 429, 503, 404 etc. The rclone sync/copy commands cope with this
+with lots of retries. However rclone mount can't use retries in the same way
+without making local copies of the uploads. Please look at the VFS File Caching
+for solutions to make mount more reliable.
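+
+As a quick reference, a copy-pasteable sketch of such a mount with a minimal
+set of flags (hypothetical remote, bucket and paths; each flag is explained in
+the annotated walkthrough that follows):
+
+```sh
+rclone mount ossa:bucket-a /opt/mnt/bucket-a \
+  --config /etc/rclone/rclone.conf \
+  --daemon --allow-other \
+  --vfs-cache-mode writes --cache-dir /tmp/rclone/cache \
+  --tpslimit 50 \
+  --log-level ERROR --log-file /var/log/rclone/oosa-bucket-a.log
+```
+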
First lets understand the rclone mount flags and some global flags for troubleshooting. -```shell - +```sh rclone mount \ ossa:bucket-a \ # Remote:bucket-name /opt/mnt/bucket-a \ # Local mount folder @@ -219,69 +231,79 @@ rclone mount \ --vfs-fast-fingerprint # Use fast (less accurate) fingerprints for change detection. --log-level ERROR \ # log level, can be DEBUG, INFO, ERROR --log-file /var/log/rclone/oosa-bucket-a.log # rclone application log - ``` ### --vfs-cache-mode writes -In this mode files opened for read only are still read directly from the remote, write only and read/write files are -buffered to disk first. This mode should support all normal file system operations. If an upload fails it will be +In this mode files opened for read only are still read directly from the +remote, write only and read/write files are buffered to disk first. This mode +should support all normal file system operations. If an upload fails it will be retried at exponentially increasing intervals up to 1 minute. -VFS cache mode of writes is recommended, so that application can have maximum compatibility of using remote storage -as a local disk, when write is finished, file is closed, it is uploaded to backend remote after vfs-write-back duration -has elapsed. If rclone is quit or dies with files that haven't been uploaded, these will be uploaded next time rclone -is run with the same flags. +VFS cache mode of writes is recommended, so that application can have maximum +compatibility of using remote storage as a local disk, when write is finished, +file is closed, it is uploaded to backend remote after vfs-write-back duration +has elapsed. If rclone is quit or dies with files that haven't been uploaded, +these will be uploaded next time rclone is run with the same flags. ### --tpslimit float -Limit transactions per second to this number. Default is 0 which is used to mean unlimited transactions per second. +Limit transactions per second to this number. Default is 0 which is used to +mean unlimited transactions per second. -A transaction is roughly defined as an API call; its exact meaning will depend on the backend. For HTTP based backends -it is an HTTP PUT/GET/POST/etc and its response. For FTP/SFTP it is a round trip transaction over TCP. +A transaction is roughly defined as an API call; its exact meaning will depend +on the backend. For HTTP based backends it is an HTTP PUT/GET/POST/etc and its +response. For FTP/SFTP it is a round trip transaction over TCP. -For example, to limit rclone to 10 transactions per second use --tpslimit 10, or to 1 transaction every 2 seconds -use --tpslimit 0.5. +For example, to limit rclone to 10 transactions per second use --tpslimit 10, +or to 1 transaction every 2 seconds use --tpslimit 0.5. -Use this when the number of transactions per second from rclone is causing a problem with the cloud storage -provider (e.g. getting you banned or rate limited or throttled). +Use this when the number of transactions per second from rclone is causing a +problem with the cloud storage provider (e.g. getting you banned or rate +limited or throttled). -This can be very useful for rclone mount to control the behaviour of applications using it. Let's guess and say Object -storage allows roughly 100 tps per tenant, so to be on safe side, it will be wise to set this at 50. (tune it to actuals per -region) +This can be very useful for rclone mount to control the behaviour of +applications using it. 
Let's guess and say Object storage allows roughly 100
+tps per tenant, so to be on the safe side, it will be wise to set this at 50
+(tune it to actuals per region).

 ### --vfs-fast-fingerprint

-If you use the --vfs-fast-fingerprint flag then rclone will not include the slow operations in the fingerprint. This
-makes the fingerprinting less accurate but much faster and will improve the opening time of cached files. If you are
-running a vfs cache over local, s3, object storage or swift backends then using this flag is recommended.
+If you use the --vfs-fast-fingerprint flag then rclone will not include the
+slow operations in the fingerprint. This makes the fingerprinting less accurate
+but much faster and will improve the opening time of cached files. If you are
+running a vfs cache over local, s3, object storage or swift backends then using
+this flag is recommended.

-Various parts of the VFS use fingerprinting to see if a local file copy has changed relative to a remote file.
-Fingerprints are made from:
+Various parts of the VFS use fingerprinting to see if a local file copy has
+changed relative to a remote file. Fingerprints are made from:
+
 - size
 - modification time
 - hash
 where available on an object.

-
 ## Step 6: Mounting Options, Use Any one option

 ### Step 6a: Run as a Service Daemon: Configure FSTAB entry for Rclone mount

-Add this entry in /etc/fstab :
+Add this entry in /etc/fstab:

-```shell
+```sh
 ossa:bucket-a /opt/mnt/bucket-a rclone rw,umask=0117,nofail,_netdev,args2env,config=/etc/rclone/rclone.conf,uid=1000,gid=4,
 file_perms=0760,dir_perms=0760,allow_other,vfs_cache_mode=writes,cache_dir=/tmp/rclone/cache 0 0
 ```

-IMPORTANT: Please note in fstab entry arguments are specified as underscore instead of dash,
-example: vfs_cache_mode=writes instead of vfs-cache-mode=writes
-Rclone in the mount helper mode will split -o argument(s) by comma, replace _ by - and prepend -- to
-get the command-line flags. Options containing commas or spaces can be wrapped in single or double quotes.
-Any inner quotes inside outer quotes of the same type should be doubled.
+IMPORTANT: Please note that in the fstab entry arguments are specified with an
+underscore instead of a dash, for example: vfs_cache_mode=writes instead of
+vfs-cache-mode=writes. Rclone in the mount helper mode will split -o argument(s)
+by comma, replace `_` by `-` and prepend `--` to get the command-line flags.
+Options containing commas or spaces can be wrapped in single or double quotes.
+Any inner quotes inside outer quotes of the same type should be doubled.

-then run sudo mount -av
+Then run sudo mount -av

-```shell
+```sh
 [opc@base-inst-boot ~]$ sudo mount -av

 /                        : ignored
 /boot                    : already mounted

 /dev/shm                 : already mounted
 none                     : ignored
 /opt/mnt/bucket-a        : already mounted # This is the bucket mounted information, running mount -av again and again is idempotent.
 ```
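+
+Whether you use the fstab entry above or the systemd unit below, you can verify
+that the rclone mounts are active with `findmnt` (the same check the nanny
+script later in this tutorial relies on):
+
+```sh
+findmnt -t fuse.rclone
+```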

 ## Step 6b: Run as a Service Daemon: Configure systemd entry for Rclone mount

-If you are familiar with configuring systemd unit files, you can also configure the each rclone mount into a
-systemd units file.
-various examples in git search: https://github.com/search?l=Shell&q=rclone+unit&type=Code
+If you are familiar with configuring systemd unit files, you can also configure
+each rclone mount into a systemd unit file.
+Various examples in git search: <https://github.com/search?l=Shell&q=rclone+unit&type=Code>

```sh
tee "/etc/systemd/system/rclonebucketa.service" > /dev/null <&1 | grep -i 'Transport endpoint is not connected' | awk '{print ""$2"" }' | tr -d \:)
 rclone_list=$(findmnt -t fuse.rclone -n 2>&1 | awk '{print ""$1"" }' | tr -d \:)
@@ -340,10 +366,11 @@ do
   sudo fusermount -uz "$directory"
 done
 sudo mount -av
-
 ```
+
Script to idempotently add a Cron job to babysit the mount paths every 5 minutes

-```shell
+
+```sh
 echo "Creating rclone nanny cron job."
 croncmd="/etc/rclone/scripts/rclone_nanny_script.sh"
 cronjob="*/5 * * * * $croncmd"
@@ -353,55 +380,59 @@ echo "Finished creating rclone nanny cron job."
 ```

Ensure the crontab is added, so that the above nanny script runs every 5 minutes.

-```shell
+
+```sh
 [opc@base-inst-boot ~]$ sudo crontab -l
 */5 * * * * /etc/rclone/scripts/rclone_nanny_script.sh
-[opc@base-inst-boot ~]$
+[opc@base-inst-boot ~]$
 ```

 ## Step 8: Optional: Setup NFS server to access the mount points of rclone

-Let's say you want to make the rclone mount path /opt/mnt/bucket-a available as a NFS server export so that other
-clients can access it by using a NFS client.
+Let's say you want to make the rclone mount path /opt/mnt/bucket-a available
+as an NFS server export so that other clients can access it by using an NFS client.

 ### Step 8a : Setup NFS server

 Install NFS Utils

-```shell
+
+```sh
 sudo yum install -y nfs-utils
 ```

-Export the desired directory via NFS Server in the same machine where rclone has mounted to, ensure NFS service has
-desired permissions to read the directory. If it runs as root, then it will have permissions for sure, but if it runs
+Export the desired directory via NFS Server in the same machine where rclone
+has mounted to, ensure the NFS service has desired permissions to read the directory.
+If it runs as root, then it will have permissions for sure, but if it runs
 as a separate user then ensure that user has the necessary privileges.

-```shell
+
+```sh
 # this gives opc user and adm (administrators group) ownership to the path, so any user belonging to adm group will be able to access the files.
 [opc@tools ~]$ sudo chown -R opc:adm /opt/mnt/bucket-a/
 [opc@tools ~]$ sudo chmod 764 /opt/mnt/bucket-a/

 # Now export the mount path of rclone for exposing via nfs server
 # There are various nfs export options that you should keep per desired usage.
 # Syntax is
 # (