diff --git a/docs/content/azureblob.md b/docs/content/azureblob.md index 08c576c23..26b79c8b5 100644 --- a/docs/content/azureblob.md +++ b/docs/content/azureblob.md @@ -1044,7 +1044,7 @@ Properties: ### Custom upload headers -You can set custom upload headers with the `--header-upload` flag. +You can set custom upload headers with the `--header-upload` flag. - Cache-Control - Content-Disposition @@ -1053,19 +1053,21 @@ You can set custom upload headers with the `--header-upload` flag. - Content-Type - X-MS-Tags -Eg `--header-upload "Content-Type: text/potato"` or `--header-upload "X-MS-Tags: foo=bar"` +Eg `--header-upload "Content-Type: text/potato"` or +`--header-upload "X-MS-Tags: foo=bar"`. ## Limitations MD5 sums are only uploaded with chunked files if the source has an MD5 sum. This will always be the case for a local to azure copy. -`rclone about` is not supported by the Microsoft Azure Blob storage backend. Backends without -this capability cannot determine free space for an rclone mount or -use policy `mfs` (most free space) as a member of an rclone union +`rclone about` is not supported by the Microsoft Azure Blob storage backend. +Backends without this capability cannot determine free space for an rclone +mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). ## Azure Storage Emulator Support diff --git a/docs/content/b2.md b/docs/content/b2.md index 5efec4cc6..08ec37fe0 100644 --- a/docs/content/b2.md +++ b/docs/content/b2.md @@ -793,6 +793,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/box.md b/docs/content/box.md index 48452b10b..397123c19 100644 --- a/docs/content/box.md +++ b/docs/content/box.md @@ -520,14 +520,16 @@ Reverse Solidus). Box only supports filenames up to 255 characters in length. -Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) that sometimes reduce the speed of rclone. +Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) +that sometimes reduce the speed of rclone. `rclone about` is not supported by the Box backend. Backends without this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). 
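To make the `--header-upload` examples from the azureblob section above concrete, here is a minimal sketch of a copy that sets two of the supported headers on upload; `azblob:container` and the local path are placeholders, and the flag can be repeated once per header:

```console
rclone copy /path/to/files azblob:container \
  --header-upload "Cache-Control: max-age=3600" \
  --header-upload "X-MS-Tags: project=docs"
```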
## Get your own Box App ID diff --git a/docs/content/drive.md b/docs/content/drive.md index 5153f89f5..b11a30049 100644 --- a/docs/content/drive.md +++ b/docs/content/drive.md @@ -1870,7 +1870,12 @@ second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google. -It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second so it is recommended to stay under that number as if you use more than that, it will cause rclone to rate limit and make things slower. +It is strongly recommended to use your own client ID as the default +rclone ID is heavily used. If you have multiple services running, it +is recommended to use an API key for each service. The default Google +quota is 10 transactions per second so it is recommended to stay under +that number as if you use more than that, it will cause rclone to rate +limit and make things slower. Here is how to create your own Google Drive client ID for rclone: @@ -1888,37 +1893,42 @@ be the same account as the Google Drive you want to access) credentials", which opens the wizard). 5. If you already configured an "Oauth Consent Screen", then skip -to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button +to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then click "Get started". On the next screen, enter an "Application name" -("rclone" is OK); enter "User Support Email" (your own email is OK); +("rclone" is OK); enter "User Support Email" (your own email is OK); Next, under Audience select "External". Next enter your own contact information, agree to terms and click "Create". You should now see rclone (or your project name) in a box in the top left of the screen. (PS: if you are a GSuite user, you could also select "Internal" instead -of "External" above, but this will restrict API use to Google Workspace -users in your organisation). +of "External" above, but this will restrict API use to Google Workspace +users in your organisation). You will also have to add [some scopes](https://developers.google.com/drive/api/guides/api-specific-auth), including - `https://www.googleapis.com/auth/docs` - `https://www.googleapis.com/auth/drive` in order to be able to edit, create and delete files with RClone. - - `https://www.googleapis.com/auth/drive.metadata.readonly` which you may also want to add. + - `https://www.googleapis.com/auth/drive.metadata.readonly` which you may + also want to add. - To do this, click Data Access on the left side panel, click "add or remove scopes" and select the three above and press update or go to the "Manually add scopes" text box (scroll down) and enter "https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly", press add to table then update. + To do this, click Data Access on the left side panel, click "add or + remove scopes" and select the three above and press update or go to the + "Manually add scopes" text box (scroll down) and enter + "https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly", press add to table then update. - You should now see the three scopes on your Data access page. Now press save at the bottom! 
+ You should now see the three scopes on your Data access page. Now press save + at the bottom! 6. After adding scopes, click Audience Scroll down and click "+ Add users". Add yourself as a test user and press save. -7. Go to Overview on the left panel, click "Create OAuth client". Choose an application type of "Desktop app" and click "Create". (the default name is fine) +7. Go to Overview on the left panel, click "Create OAuth client". Choose + an application type of "Desktop app" and click "Create". (the default name is fine) 8. It will show you a client ID and client secret. Make a note of these. - - (If you selected "External" at Step 5 continue to Step 9. + (If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.) @@ -1941,9 +1951,10 @@ testing mode would also be sufficient. (Thanks to @balazer on github for these instructions.) -Sometimes, creation of an OAuth consent in Google API Console fails due to an error message -“The request failed because changes to one of the field of the resource is not supported”. -As a convenient workaround, the necessary Google Drive API key can be created on the -[Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) page. -Just push the Enable the Drive API button to receive the Client ID and Secret. +Sometimes, creation of an OAuth consent in Google API Console fails due to an +error message "The request failed because changes to one of the field of the +resource is not supported". As a convenient workaround, the necessary Google +Drive API key can be created on the +[Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python) +page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console. diff --git a/docs/content/fichier.md b/docs/content/fichier.md index f01c532bb..d6ff62df6 100644 --- a/docs/content/fichier.md +++ b/docs/content/fichier.md @@ -225,5 +225,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/ftp.md b/docs/content/ftp.md index 137b82fb7..0c894481e 100644 --- a/docs/content/ftp.md +++ b/docs/content/ftp.md @@ -577,7 +577,8 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). 
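As a quick illustration of the `rclone about` limitation noted above for the FTP backend (and repeated for several other backends in this changeset), compare a backend that can report usage with one that cannot; `myftp:` is a hypothetical remote name:

```console
# The local filesystem implements about and reports total/free space
rclone about /home
# An FTP remote returns an error because the backend cannot report usage,
# so an rclone union containing it cannot use the mfs policy
rclone about myftp:
```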
The implementation of : `--dump headers`, `--dump bodies`, `--dump auth` for debugging isn't the same as diff --git a/docs/content/googlecloudstorage.md b/docs/content/googlecloudstorage.md index f3eef23cd..a27e4bb65 100644 --- a/docs/content/googlecloudstorage.md +++ b/docs/content/googlecloudstorage.md @@ -827,5 +827,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/googlephotos.md b/docs/content/googlephotos.md index a440c84c6..55adba75b 100644 --- a/docs/content/googlephotos.md +++ b/docs/content/googlephotos.md @@ -571,7 +571,11 @@ When Images are downloaded this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115). -**The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort** +**The current google API does not allow photos to be downloaded at original +resolution. This is very important if you are, for example, relying on +"Google Photos" as a backup of your photos. You will not be able to use +rclone to redownload original images. You could use 'google takeout' +to recover the original photos as a last resort** **NB** you **can** use the [--gphotos-proxy](#gphotos-proxy) flag to use a headless browser to download images in full resolution. @@ -658,7 +662,7 @@ client_id stops working) then you can make your own. Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id). You will need these scopes instead of the drive ones detailed: -``` +```text https://www.googleapis.com/auth/photoslibrary.appendonly https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata diff --git a/docs/content/hidrive.md b/docs/content/hidrive.md index cbe69aa6b..c52dd9fdc 100644 --- a/docs/content/hidrive.md +++ b/docs/content/hidrive.md @@ -468,10 +468,10 @@ HiDrive is able to store symbolic links (*symlinks*) by design, for example, when unpacked from a zip archive. There exists no direct mechanism to manage native symlinks in remotes. -As such this implementation has chosen to ignore any native symlinks present in the remote. -rclone will not be able to access or show any symlinks stored in the hidrive-remote. -This means symlinks cannot be individually removed, copied, or moved, -except when removing, copying, or moving the parent folder. +As such this implementation has chosen to ignore any native symlinks present in +the remote. rclone will not be able to access or show any symlinks stored in +the hidrive-remote. This means symlinks cannot be individually removed, copied, +or moved, except when removing, copying, or moving the parent folder. 
*This does not affect the `.rclonelink`-files that rclone uses to encode and store symbolic links.* diff --git a/docs/content/http.md b/docs/content/http.md index 5a79ac8ba..b28edc659 100644 --- a/docs/content/http.md +++ b/docs/content/http.md @@ -296,5 +296,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/jottacloud.md b/docs/content/jottacloud.md index 16047686d..b8d15e88c 100644 --- a/docs/content/jottacloud.md +++ b/docs/content/jottacloud.md @@ -589,12 +589,14 @@ See the [metadata](/docs/#metadata) docs for more info. Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc". -There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical -looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead. +There are quite a few characters that can't be in Jottacloud file names. +Rclone will map these names to and from an identical looking unicode +equivalent. For example if a file has a ? in it will be mapped to ? instead. Jottacloud only supports filenames up to 255 characters in length. ## Troubleshooting -Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove -operations to previously deleted paths to fail. Emptying the trash should help in such cases. +Jottacloud exhibits some inconsistent behaviours regarding deleted files and +folders which may cause Copy, Move and DirMove operations to previously +deleted paths to fail. Emptying the trash should help in such cases. diff --git a/docs/content/koofr.md b/docs/content/koofr.md index 2acedc1d3..93f6b53f2 100644 --- a/docs/content/koofr.md +++ b/docs/content/koofr.md @@ -244,12 +244,13 @@ Note that Koofr is case insensitive so you can't have a file called ### Koofr -This is the original [Koofr](https://koofr.eu) storage provider used as main example and described in the [configuration](#configuration) section above. +This is the original [Koofr](https://koofr.eu) storage provider used as main +example and described in the [configuration](#configuration) section above. ### Digi Storage -[Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud storage service run by [Digi.ro](https://www.digi.ro/) that -provides a Koofr API. +[Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud +storage service run by [Digi.ro](https://www.digi.ro/) that provides a Koofr API. Here is an example of how to make a remote called `ds`. First run: @@ -318,9 +319,11 @@ y/e/d> y ### Other -You may also want to use another, public or private storage provider that runs a Koofr API compatible service, by simply providing the base URL to connect to. +You may also want to use another, public or private storage provider that +runs a Koofr API compatible service, by simply providing the base URL to +connect to. -Here is an example of how to make a remote called `other`. First run: +Here is an example of how to make a remote called `other`. 
First run: ```console rclone config diff --git a/docs/content/mega.md b/docs/content/mega.md index cf5b4cd4f..1a7efff58 100644 --- a/docs/content/mega.md +++ b/docs/content/mega.md @@ -310,13 +310,18 @@ Properties: ### Process `killed` -On accounts with large files or something else, memory usage can significantly increase when executing list/sync instructions. When running on cloud providers (like AWS with EC2), check if the instance type has sufficient memory/CPU to execute the commands. Use the resource monitoring tools to inspect after sending the commands. Look [at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4). +On accounts with large files or something else, memory usage can significantly +increase when executing list/sync instructions. When running on cloud providers +(like AWS with EC2), check if the instance type has sufficient memory/CPU to +execute the commands. Use the resource monitoring tools to inspect after sending +the commands. Look [at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4). ## Limitations -This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) which is an opensource +This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega) +which is an opensource go library implementing the Mega API. There doesn't appear to be any -documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) source code -so there are likely quite a few errors still remaining in this library. +documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk) +source code so there are likely quite a few errors still remaining in this library. Mega allows duplicate files which may confuse rclone. diff --git a/docs/content/onedrive.md b/docs/content/onedrive.md index d1197080c..7e450fd4a 100644 --- a/docs/content/onedrive.md +++ b/docs/content/onedrive.md @@ -1000,25 +1000,36 @@ See the [metadata](/docs/#metadata) docs for more info. ### Impersonate other users as Admin -Unlike Google Drive and impersonating any domain user via service accounts, OneDrive requires you to authenticate as an admin account, and manually setup a remote per user you wish to impersonate. +Unlike Google Drive and impersonating any domain user via service accounts, +OneDrive requires you to authenticate as an admin account, and manually setup +a remote per user you wish to impersonate. -1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user you need to "impersonate" and go to the OneDrive section. There is a heading called "Get access to files", you need to click to create the link, this creates the link of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` but also changes the permissions so you your admin user has access. +1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user + you need to "impersonate" and go to the OneDrive section. There is a heading + called "Get access to files", you need to click to create the link, this + creates the link of the format + `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/` but also + changes the permissions so you your admin user has access. 2. 
Then in powershell run the following commands: -```console -Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force -Import-Module Microsoft.Graph.Files -Connect-MgGraph -Scopes "Files.ReadWrite.All" -# Follow the steps to allow access to your admin user -# Then run this for each user you want to impersonate to get the Drive ID -Get-MgUserDefaultDrive -UserId '{emailaddress}' -# This will give you output of the format: -# Name Id DriveType CreatedDateTime -# ---- -- --------- --------------- -# OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm -``` -3. Then in rclone add a onedrive remote type, and use the `Type in driveID` with the DriveID you got in the previous step. One remote per user. It will then confirm the drive ID, and hopefully give you a message of `Found drive "root" of type "business"` and then include the URL of the format `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents` + ```console + Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force + Import-Module Microsoft.Graph.Files + Connect-MgGraph -Scopes "Files.ReadWrite.All" + # Follow the steps to allow access to your admin user + # Then run this for each user you want to impersonate to get the Drive ID + Get-MgUserDefaultDrive -UserId '{emailaddress}' + # This will give you output of the format: + # Name Id DriveType CreatedDateTime + # ---- -- --------- --------------- + # OneDrive b!XYZ123 business 14/10/2023 1:00:58 pm + ``` +3. Then in rclone add a onedrive remote type, and use the `Type in driveID` + with the DriveID you got in the previous step. One remote per user. It will + then confirm the drive ID, and hopefully give you a message of + `Found drive "root" of type "business"` and then include the URL of the format + `https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents` ## Limitations @@ -1040,11 +1051,16 @@ in it will be mapped to `?` instead. ### File sizes -The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). +The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive +for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize). ### Path length -The entire path, including the file name, must contain fewer than 400 characters for OneDrive, OneDrive for Business and SharePoint Online. If you are encrypting file and folder names with rclone, you may want to pay attention to this limitation because the encrypted names are typically longer than the original ones. +The entire path, including the file name, must contain fewer than 400 +characters for OneDrive, OneDrive for Business and SharePoint Online. If you +are encrypting file and folder names with rclone, you may want to pay attention +to this limitation because the encrypted names are typically longer than the +original ones. ### Number of files @@ -1053,7 +1069,8 @@ OneDrive seems to be OK with at least 50,000 files in a folder, but at list files: UnknownError:`. See [#2707](https://github.com/rclone/rclone/issues/2707) for more info. 
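Tying back to the impersonation steps above: once a per-user remote has been created from the Drive ID returned by `Get-MgUserDefaultDrive`, a quick sanity check is to list it and query its quota. This is only a sketch; the remote name `onedrive-user1` is hypothetical:

```console
# List the top level of the impersonated user's drive
rclone lsd onedrive-user1:
# OneDrive supports about, so this reports the user's quota
rclone about onedrive-user1:
```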
-An official document about the limitations for different types of OneDrive can be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). +An official document about the limitations for different types of OneDrive can +be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa). ## Versions @@ -1089,24 +1106,30 @@ command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting: -1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you haven't installed this already) +1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you + haven't installed this already) 2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking` -3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will prompt for your credentials) +3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM` (replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will + prompt for your credentials) 4. `Set-SPOTenant -EnableMinimumVersionRequirement $False` 5. `Disconnect-SPOService` (to disconnect from the server) -*Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.* +*Below are the steps for normal users to disable versioning. If you don't see +the "No Versioning" option, make sure the above requirements are met.* User [Weropol](https://github.com/Weropol) has found a method to disable versioning on OneDrive -1. Open the settings menu by clicking on the gear symbol at the top of the OneDrive Business page. +1. Open the settings menu by clicking on the gear symbol at the top of the + OneDrive Business page. 2. Click Site settings. -3. Once on the Site settings page, navigate to Site Administration > Site libraries and lists. +3. Once on the Site settings page, navigate to Site Administration > Site libraries + and lists. 4. Click Customize "Documents". 5. Click General Settings > Versioning Settings. 6. Under Document Version History select the option No versioning. -Note: This will disable the creation of new file versions, but will not remove any previous versions. Your documents are safe. + Note: This will disable the creation of new file versions, but will not remove + any previous versions. Your documents are safe. 7. Apply the changes by clicking OK. 8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag) 9. Restore the versioning settings after using rclone. (Optional) @@ -1120,20 +1143,25 @@ querying each file for versions it can be quite slow. Rclone does `--checkers` tests in parallel. The command also supports `--interactive`/`i` or `--dry-run` which is a great way to see what it would do. 
- rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir - rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir +```text +rclone cleanup --interactive remote:path/subdir # interactively remove all old version for path/subdir +rclone cleanup remote:path/subdir # unconditionally remove all old version for path/subdir +``` **NB** Onedrive personal can't currently delete versions -## Troubleshooting ## +## Troubleshooting ### Excessive throttling or blocked on SharePoint -If you experience excessive throttling or is being blocked on SharePoint then it may help to set the user agent explicitly with a flag like this: `--user-agent "ISV|rclone.org|rclone/v1.55.1"` +If you experience excessive throttling or is being blocked on SharePoint then +it may help to set the user agent explicitly with a flag like this: +`--user-agent "ISV|rclone.org|rclone/v1.55.1"` -The specific details can be found in the Microsoft document: [Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) +The specific details can be found in the Microsoft document: +[Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling) -### Unexpected file size/hash differences on Sharepoint #### +### Unexpected file size/hash differences on Sharepoint It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631) @@ -1144,57 +1172,66 @@ report inconsistent file sizes. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments: -``` +```text --ignore-checksum --ignore-size ``` Alternatively, if you have write access to the OneDrive files, it may be possible to fix this problem for certain files, by attempting the steps below. Open the web interface for [OneDrive](https://onedrive.live.com) and find the -affected files (which will be in the error messages/log for rclone). Simply click on -each of these files, causing OneDrive to open them on the web. This will cause each -file to be converted in place to a format that is functionally equivalent +affected files (which will be in the error messages/log for rclone). Simply click +on each of these files, causing OneDrive to open them on the web. This will cause +each file to be converted in place to a format that is functionally equivalent but which will no longer trigger the size discrepancy. Once all problematic files are converted you will no longer need the ignore options above. -### Replacing/deleting existing files on Sharepoint gets "item not found" #### +### Replacing/deleting existing files on Sharepoint gets "item not found" It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to -mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). As a workaround, you may use +mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.). 
+As a workaround, you may use the `--backup-dir ` command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory `rclone-backup-dir` on backend `mysharepoint`, you may use: -``` +```text --backup-dir mysharepoint:rclone-backup-dir ``` -### access\_denied (AADSTS65005) #### +### access\_denied (AADSTS65005) -``` +```text Error: access_denied Code: AADSTS65005 Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned. ``` -This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins. +This means that rclone can't use the OneDrive for Business API with your account. +You can't do much about it, maybe write an email to your admins. -However, there are other ways to interact with your OneDrive account. Have a look at the WebDAV backend: https://rclone.org/webdav/#sharepoint +However, there are other ways to interact with your OneDrive account. Have a look +at the WebDAV backend: -### invalid\_grant (AADSTS50076) #### +### invalid\_grant (AADSTS50076) -``` +```text Error: invalid_grant Code: AADSTS50076 Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'. ``` -If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run `rclone config`, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: `Already have a token - refresh?`. For this question, answer `y` and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend. +If you see the error above after enabling multi-factor authentication for your +account, you can fix it by refreshing your OAuth refresh token. To do that, run +`rclone config`, and choose to edit your OneDrive backend. Then, you don't need +to actually make any changes until you reach this question: +`Already have a token - refresh?`. For this question, answer `y` and go through +the process to refresh your token, just like the first time the backend is +configured. After this, rclone should work again for this backend. -### Invalid request when making public links #### +### Invalid request when making public links On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid request" error. A possible cause is that the organisation admin didn't allow @@ -1205,46 +1242,64 @@ permissions as an admin, take a look at the docs: ### Can not access `Shared` with me files -Shared with me files is not supported by rclone [currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround: +Shared with me files is not supported by rclone +[currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround: 1. Visit [https://onedrive.live.com](https://onedrive.live.com/) 2. 
Right click a item in `Shared`, then click `Add shortcut to My files` in the context ![make_shortcut](https://user-images.githubusercontent.com/60313789/206118040-7e762b3b-aa61-41a1-8649-cc18889f3572.png "Screenshot (Shared with me)") -3. The shortcut will appear in `My files`, you can access it with rclone, it behaves like a normal folder/file. +3. The shortcut will appear in `My files`, you can access it with rclone, it + behaves like a normal folder/file. ![in_my_files](https://i.imgur.com/0S8H3li.png "Screenshot (My Files)") ![rclone_mount](https://i.imgur.com/2Iq66sW.png "Screenshot (rclone mount)") ### Live Photos uploaded from iOS (small video clips in .heic files) -The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) -of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. -The usage and download of these uploaded Live Photos is unfortunately still work-in-progress -and this introduces several issues when copying, synchronising and mounting – both in rclone and in the native OneDrive client on Windows. +The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452) +of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020. +The usage and download of these uploaded Live Photos is unfortunately still +work-in-progress and this introduces several issues when copying, synchronising +and mounting – both in rclone and in the native OneDrive client on Windows. -The root cause can easily be seen if you locate one of your Live Photos in the OneDrive web interface. -Then download the photo from the web interface. You will then see that the size of downloaded .heic file is smaller than the size displayed in the web interface. -The downloaded file is smaller because it only contains a single frame (still photo) extracted from the Live Photo (movie) stored in OneDrive. +The root cause can easily be seen if you locate one of your Live Photos in the +OneDrive web interface. Then download the photo from the web interface. You +will then see that the size of downloaded .heic file is smaller than the size +displayed in the web interface. The downloaded file is smaller because it only +contains a single frame (still photo) extracted from the Live Photo (movie) +stored in OneDrive. -The different sizes will cause `rclone copy/sync` to repeatedly recopy unmodified photos something like this: +The different sizes will cause `rclone copy/sync` to repeatedly recopy +unmodified photos something like this: - DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) - DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK - INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) +```text +DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667) +DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK +INFO : 20230203_123826234_iOS.heic: Copied (replaced existing) +``` -These recopies can be worked around by adding `--ignore-size`. Please note that this workaround only syncs the still-picture not the movie clip, +These recopies can be worked around by adding `--ignore-size`. Please note that +this workaround only syncs the still-picture not the movie clip, and relies on modification dates being correctly updated on all files in all situations. 
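For example, a sync that applies the `--ignore-size` workaround described above might look like this; the local path and the remote name `onedrive:` are placeholders:

```console
rclone sync ~/Pictures onedrive:Pictures --ignore-size
```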
-The different sizes will also cause `rclone check` to report size errors something like this: +The different sizes will also cause `rclone check` to report size errors something +like this: - ERROR : 20230203_123826234_iOS.heic: sizes differ +```text +ERROR : 20230203_123826234_iOS.heic: sizes differ +``` These check errors can be suppressed by adding `--ignore-size`. -The different sizes will also cause `rclone mount` to fail downloading with an error something like this: +The different sizes will also cause `rclone mount` to fail downloading with an +error something like this: - ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF +```text +ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF +``` or like this when using `--cache-mode=full`: - INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: - ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +```text +INFO : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable: +``` diff --git a/docs/content/opendrive.md b/docs/content/opendrive.md index dfac0aae1..5e9f2127c 100644 --- a/docs/content/opendrive.md +++ b/docs/content/opendrive.md @@ -217,6 +217,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/protondrive.md b/docs/content/protondrive.md index 011e2eb23..1ee237fd7 100644 --- a/docs/content/protondrive.md +++ b/docs/content/protondrive.md @@ -355,25 +355,25 @@ Properties: ## Limitations -This backend uses the -[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which -is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a +This backend uses the +[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which +is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a fork of the [official repo](https://github.com/ProtonMail/go-proton-api). -There is no official API documentation available from Proton Drive. But, thanks -to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) -and the web, iOS, and Android client codebases, we don't need to completely -reverse engineer the APIs by observing the web client traffic! +There is no official API documentation available from Proton Drive. But, thanks +to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api) +and the web, iOS, and Android client codebases, we don't need to completely +reverse engineer the APIs by observing the web client traffic! 
-[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic -building blocks of API calls and error handling, such as 429 exponential -back-off, but it is pretty much just a barebone interface to the Proton API. -For example, the encryption and decryption of the Proton Drive file are not -provided in this library. +[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic +building blocks of API calls and error handling, such as 429 exponential +back-off, but it is pretty much just a barebone interface to the Proton API. +For example, the encryption and decryption of the Proton Drive file are not +provided in this library. -The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on -top of this quickly. This codebase handles the intricate tasks before and after -calling Proton APIs, particularly the complex encryption scheme, allowing -developers to implement features for other software on top of this codebase. -There are likely quite a few errors in this library, as there isn't official -documentation available. +The Proton-API-Bridge, attempts to bridge the gap, so rclone can be built on +top of this quickly. This codebase handles the intricate tasks before and after +calling Proton APIs, particularly the complex encryption scheme, allowing +developers to implement features for other software on top of this codebase. +There are likely quite a few errors in this library, as there isn't official +documentation available. diff --git a/docs/content/qingstor.md b/docs/content/qingstor.md index 9b0ae5031..f523b6827 100644 --- a/docs/content/qingstor.md +++ b/docs/content/qingstor.md @@ -341,5 +341,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/quatrix.md b/docs/content/quatrix.md index 4f0a460f6..33157fded 100644 --- a/docs/content/quatrix.md +++ b/docs/content/quatrix.md @@ -282,10 +282,13 @@ Properties: ## Storage usage -The storage usage in Quatrix is restricted to the account during the purchase. You can restrict any user with a smaller storage limit. -The account limit is applied if the user has no custom storage limit. Once you've reached the limit, the upload of files will fail. -This can be fixed by freeing up the space or increasing the quota. +The storage usage in Quatrix is restricted to the account during the purchase. +You can restrict any user with a smaller storage limit. The account limit is +applied if the user has no custom storage limit. Once you've reached the limit, +the upload of files will fail. This can be fixed by freeing up the space or +increasing the quota. ## Server-side operations -Quatrix supports server-side operations (copy and move). In case of conflict, files are overwritten during server-side operation. +Quatrix supports server-side operations (copy and move). In case of conflict, +files are overwritten during server-side operation. 
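As a small example of the server-side behaviour described in the Quatrix section above, a copy between two paths on the same remote (hypothetically named `quatrix:`) is performed server-side, and a conflicting file at the destination would be overwritten:

```console
rclone copy quatrix:projects/report.docx quatrix:archive/
```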
diff --git a/docs/content/rc.md b/docs/content/rc.md index 2d7b6c818..74e0dc31f 100644 --- a/docs/content/rc.md +++ b/docs/content/rc.md @@ -2298,7 +2298,6 @@ This takes the following parameters This returns an empty result on success, or an error. - This command takes an "fs" parameter. If this parameter is not supplied and if there is only one VFS in use then that VFS will be used. If there is more than one VFS in use then the "fs" parameter diff --git a/docs/content/s3.md b/docs/content/s3.md index 12a254d2b..a630f467e 100644 --- a/docs/content/s3.md +++ b/docs/content/s3.md @@ -2530,7 +2530,7 @@ If you want to use rclone to access a public bucket, configure with a blank `access_key_id` and `secret_access_key`. Your config should end up looking like this: -``` +```ini [anons3] type = s3 provider = AWS @@ -2538,19 +2538,24 @@ provider = AWS Then use it as normal with the name of the public bucket, e.g. - rclone lsd anons3:1000genomes +```console +rclone lsd anons3:1000genomes +``` You will be able to list and copy data but not upload it. You can also do this entirely on the command line - rclone lsd :s3,provider=AWS:1000genomes +```console +rclone lsd :s3,provider=AWS:1000genomes +``` ## Providers ### AWS S3 -This is the provider used as main example and described in the [configuration](#configuration) section above. +This is the provider used as main example and described in the [configuration](#configuration) +section above. ### AWS Directory Buckets @@ -2583,7 +2588,8 @@ does not support query parameter based authentication. With rclone v1.59 or later setting `upload_cutoff` should not be necessary. eg. -``` + +```ini [snowball] type = s3 provider = Other @@ -2707,9 +2713,11 @@ y/e/d> y ### ArvanCloud {#arvan-cloud} -[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud Object Storage goes beyond the limited traditional file storage. -It gives you access to backup and archived files and allows sharing. -Files like profile image in the app, images sent by users or scanned documents can be stored securely and easily in our Object Storage service. +[ArvanCloud](https://www.arvancloud.com/en/products/cloud-storage) ArvanCloud +Object Storage goes beyond the limited traditional file storage. +It gives you access to backup and archived files and allows sharing. +Files like profile image in the app, images sent by users or scanned documents +can be stored securely and easily in our Object Storage service. ArvanCloud provides an S3 interface which can be configured for use with rclone like this. @@ -2798,7 +2806,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [ArvanCloud] type = s3 provider = ArvanCloud @@ -2823,8 +2831,7 @@ To use rclone with Ceph, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config: - -``` +```ini [ceph] type = s3 provider = Ceph @@ -2853,7 +2860,7 @@ only write `/` in the secret access key. Eg the dump from Ceph looks something like this (irrelevant keys removed). -``` +```json { "user_id": "xxx", "display_name": "xxxx", @@ -3210,7 +3217,7 @@ y/e/d> y This will leave your config looking something like: -``` +```ini [r2] type = s3 provider = Cloudflare @@ -3238,15 +3245,20 @@ appear in the metadata on Cloudflare. ### Cubbit DS3 {#Cubbit} -[Cubbit Object Storage](https://www.cubbit.io/ds3-cloud) is a geo-distributed cloud object storage platform. 
+[Cubbit Object Storage](https://www.cubbit.io/ds3-cloud) is a geo-distributed +cloud object storage platform. -To connect to Cubbit DS3 you will need an access key and secret key pair. You can follow this [guide](https://docs.cubbit.io/getting-started/quickstart#api-keys) to retrieve these keys. They will be needed when prompted by `rclone config`. +To connect to Cubbit DS3 you will need an access key and secret key pair. You +can follow this [guide](https://docs.cubbit.io/getting-started/quickstart#api-keys) +to retrieve these keys. They will be needed when prompted by `rclone config`. -Default region will correspond to `eu-west-1` and the endpoint has to be specified as `s3.cubbit.eu`. +Default region will correspond to `eu-west-1` and the endpoint has to be specified +as `s3.cubbit.eu`. -Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below: +Going through the whole process of creating a new remote by running `rclone config`, +each prompt should be answered as shown below: -``` +```console name> cubbit-ds3 (or any name you like) Storage> s3 provider> Cubbit @@ -3260,7 +3272,7 @@ acl> The resulting configuration file should look like: -``` +```ini [cubbit-ds3] type = s3 provider = Cubbit @@ -3270,24 +3282,33 @@ region = eu-west-1 endpoint = s3.cubbit.eu ``` -You can then start using Cubbit DS3 with rclone. For example, to create a new bucket and copy files into it, you can run: +You can then start using Cubbit DS3 with rclone. For example, to create a new +bucket and copy files into it, you can run: -``` +```console rclone mkdir cubbit-ds3:my-bucket rclone copy /path/to/files cubbit-ds3:my-bucket ``` ### DigitalOcean Spaces -[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean. +[Spaces](https://www.digitalocean.com/products/object-storage/) is an +[S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) +object storage service from cloud provider DigitalOcean. -To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`. +To connect to DigitalOcean Spaces you will need an access key and secret key. +These can be retrieved on the [Applications & API](https://cloud.digitalocean.com/settings/api/tokens) +page of the DigitalOcean control panel. They will be needed when prompted by +`rclone config` for your `access_key_id` and `secret_access_key`. -When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings. +When prompted for a `region` or `location_constraint`, press enter to use the +default value. The region must be included in the `endpoint` setting (e.g. +`nyc3.digitaloceanspaces.com`). The default values can be used for other settings. 
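If you prefer to skip the interactive walkthrough that follows, recent rclone versions can create the same Spaces remote non-interactively with `rclone config create`; the key and endpoint values below are placeholders:

```console
rclone config create spaces s3 provider=DigitalOcean \
  access_key_id=YOUR_ACCESS_KEY secret_access_key=YOUR_SECRET_KEY \
  endpoint=nyc3.digitaloceanspaces.com
```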
-Going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below: +Going through the whole process of creating a new remote by running `rclone config`, +each prompt should be answered as shown below: -``` +```console Storage> s3 env_auth> 1 access_key_id> YOUR_ACCESS_KEY @@ -3301,7 +3322,7 @@ storage_class> The resulting configuration file should look like: -``` +```ini [spaces] type = s3 provider = DigitalOcean @@ -3318,7 +3339,7 @@ storage_class = Once configured, you can create a new Space and begin copying files. For example: -``` +```console rclone mkdir spaces:my-new-space rclone copy /path/to/files spaces:my-new-space ``` @@ -3332,7 +3353,7 @@ To use rclone with Dreamhost, configure as above but leave the region blank and set the endpoint. You should end up with something like this in your config: -``` +```ini [dreamobjects] type = s3 provider = DreamHost @@ -3416,7 +3437,7 @@ y/n> n And the config generated will end up looking like this: -``` +```ini [exaba] type = s3 provider = Exaba @@ -3427,11 +3448,14 @@ endpoint = http://127.0.0.1:9000/ ### Google Cloud Storage -[GoogleCloudStorage](https://cloud.google.com/storage/docs) is an [S3-interoperable](https://cloud.google.com/storage/docs/interoperability) object storage service from Google Cloud Platform. +[GoogleCloudStorage](https://cloud.google.com/storage/docs) is an +[S3-interoperable](https://cloud.google.com/storage/docs/interoperability) object +storage service from Google Cloud Platform. -To connect to Google Cloud Storage you will need an access key and secret key. These can be retrieved by creating an [HMAC key](https://cloud.google.com/storage/docs/authentication/managing-hmackeys). +To connect to Google Cloud Storage you will need an access key and secret key. +These can be retrieved by creating an [HMAC key](https://cloud.google.com/storage/docs/authentication/managing-hmackeys). -``` +```ini [gs] type = s3 provider = GCS @@ -3440,9 +3464,12 @@ secret_access_key = your_secret_key endpoint = https://storage.googleapis.com ``` -**Note** that `--s3-versions` does not work with GCS when it needs to do directory paging. Rclone will return the error: +**Note** that `--s3-versions` does not work with GCS when it needs to do +directory paging. Rclone will return the error: - s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker +```text +s3 protocol error: received versions listing with IsTruncated set with no NextKeyMarker +``` This is Google bug [#312292516](https://issuetracker.google.com/u/0/issues/312292516). @@ -3451,11 +3478,13 @@ This is Google bug [#312292516](https://issuetracker.google.com/u/0/issues/31229 Here is an example of making a [Hetzner Object Storage](https://www.hetzner.com/storage/object-storage/) configuration. First run: - rclone config +```console +rclone config +``` This will guide you through an interactive setup process. -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -3578,7 +3607,7 @@ e/n/d/r/c/s/q> This will leave the config file looking like this. -``` +```ini [my-hetzner] type = s3 provider = Hetzner @@ -3589,13 +3618,16 @@ endpoint = hel1.your-objectstorage.com acl = private ``` - ### Huawei OBS {#huawei-obs} -Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use cloud storage that lets you store virtually any volume of unstructured data in any format and access it from anywhere. 
+Object Storage Service (OBS) provides stable, secure, efficient, and easy-to-use +cloud storage that lets you store virtually any volume of unstructured data in +any format and access it from anywhere. -OBS provides an S3 interface, you can copy and modify the following configuration and add it to your rclone configuration file. -``` +OBS provides an S3 interface, you can copy and modify the following configuration +and add it to your rclone configuration file. + +```ini [obs] type = s3 provider = HuaweiOBS @@ -3607,6 +3639,7 @@ acl = private ``` Or you can also configure via the interactive command line: + ```text No remotes found, make a new one\? n) New remote @@ -3720,196 +3753,223 @@ e/n/d/r/c/s/q> q ### IBM COS (S3) -Information stored with IBM Cloud Object Storage is encrypted and dispersed across multiple geographic locations, and accessed through an implementation of the S3 API. This service makes use of the distributed storage technologies provided by IBM’s Cloud Object Storage System (formerly Cleversafe). For more information visit: (http://www.ibm.com/cloud/object-storage) +Information stored with IBM Cloud Object Storage is encrypted and dispersed across +multiple geographic locations, and accessed through an implementation of the S3 API. +This service makes use of the distributed storage technologies provided by IBM’s +Cloud Object Storage System (formerly Cleversafe). For more information visit: + To configure access to IBM COS S3, follow the steps below: 1. Run rclone config and select n for a new remote. -``` - 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults - No remotes found, make a new one? - n) New remote - s) Set configuration password - q) Quit config - n/s/q> n -``` + + ```text + 2018/02/14 14:13:11 NOTICE: Config file "C:\\Users\\a\\.config\\rclone\\rclone.conf" not found - using defaults + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` 2. Enter the name for the configuration -``` - name> -``` + + ```text + name> + ``` 3. Select "s3" storage. -``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ "s3" -[snip] -Storage> s3 -``` + + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" + [snip] + Storage> s3 + ``` 4. Select IBM COS as the S3 Storage Provider. -``` -Choose the S3 provider. -Choose a number from below, or type in your own value - 1 / Choose this option to configure Storage to AWS S3 - \ "AWS" - 2 / Choose this option to configure Storage to Ceph Systems - \ "Ceph" - 3 / Choose this option to configure Storage to Dreamhost - \ "Dreamhost" - 4 / Choose this option to the configure Storage to IBM COS S3 - \ "IBMCOS" - 5 / Choose this option to the configure Storage to Minio - \ "Minio" - Provider>4 -``` + + ```text + Choose the S3 provider. + Choose a number from below, or type in your own value + 1 / Choose this option to configure Storage to AWS S3 + \ "AWS" + 2 / Choose this option to configure Storage to Ceph Systems + \ "Ceph" + 3 / Choose this option to configure Storage to Dreamhost + \ "Dreamhost" + 4 / Choose this option to the configure Storage to IBM COS S3 + \ "IBMCOS" + 5 / Choose this option to the configure Storage to Minio + \ "Minio" + Provider>4 + ``` 5. Enter the Access Key and Secret. 
-``` - AWS Access Key ID - leave blank for anonymous access or runtime credentials. - access_key_id> <> - AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. - secret_access_key> <> -``` -6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the option below. For On Premise IBM COS, enter an endpoint address. -``` - Endpoint for IBM COS S3 API. - Specify if using an IBM COS On Premise. - Choose a number from below, or type in your own value - 1 / US Cross Region Endpoint - \ "s3-api.us-geo.objectstorage.softlayer.net" - 2 / US Cross Region Dallas Endpoint - \ "s3-api.dal.us-geo.objectstorage.softlayer.net" - 3 / US Cross Region Washington DC Endpoint - \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" - 4 / US Cross Region San Jose Endpoint - \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" - 5 / US Cross Region Private Endpoint - \ "s3-api.us-geo.objectstorage.service.networklayer.com" - 6 / US Cross Region Dallas Private Endpoint - \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" - 7 / US Cross Region Washington DC Private Endpoint - \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" - 8 / US Cross Region San Jose Private Endpoint - \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" - 9 / US Region East Endpoint - \ "s3.us-east.objectstorage.softlayer.net" - 10 / US Region East Private Endpoint - \ "s3.us-east.objectstorage.service.networklayer.com" - 11 / US Region South Endpoint -[snip] - 34 / Toronto Single Site Private Endpoint - \ "s3.tor01.objectstorage.service.networklayer.com" - endpoint>1 -``` + ```text + AWS Access Key ID - leave blank for anonymous access or runtime credentials. + access_key_id> <> + AWS Secret Access Key (password) - leave blank for anonymous access or runtime credentials. + secret_access_key> <> + ``` +6. Specify the endpoint for IBM COS. For Public IBM COS, choose from the option +below. For On Premise IBM COS, enter an endpoint address. -7. Specify a IBM COS Location Constraint. The location constraint must match endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection from this list, hit enter -``` - 1 / US Cross Region Standard - \ "us-standard" - 2 / US Cross Region Vault - \ "us-vault" - 3 / US Cross Region Cold - \ "us-cold" - 4 / US Cross Region Flex - \ "us-flex" - 5 / US East Region Standard - \ "us-east-standard" - 6 / US East Region Vault - \ "us-east-vault" - 7 / US East Region Cold - \ "us-east-cold" - 8 / US East Region Flex - \ "us-east-flex" - 9 / US South Region Standard - \ "us-south-standard" - 10 / US South Region Vault - \ "us-south-vault" -[snip] - 32 / Toronto Flex - \ "tor01-flex" -location_constraint>1 -``` + ```text + Endpoint for IBM COS S3 API. + Specify if using an IBM COS On Premise. 
+ Choose a number from below, or type in your own value + 1 / US Cross Region Endpoint + \ "s3-api.us-geo.objectstorage.softlayer.net" + 2 / US Cross Region Dallas Endpoint + \ "s3-api.dal.us-geo.objectstorage.softlayer.net" + 3 / US Cross Region Washington DC Endpoint + \ "s3-api.wdc-us-geo.objectstorage.softlayer.net" + 4 / US Cross Region San Jose Endpoint + \ "s3-api.sjc-us-geo.objectstorage.softlayer.net" + 5 / US Cross Region Private Endpoint + \ "s3-api.us-geo.objectstorage.service.networklayer.com" + 6 / US Cross Region Dallas Private Endpoint + \ "s3-api.dal-us-geo.objectstorage.service.networklayer.com" + 7 / US Cross Region Washington DC Private Endpoint + \ "s3-api.wdc-us-geo.objectstorage.service.networklayer.com" + 8 / US Cross Region San Jose Private Endpoint + \ "s3-api.sjc-us-geo.objectstorage.service.networklayer.com" + 9 / US Region East Endpoint + \ "s3.us-east.objectstorage.softlayer.net" + 10 / US Region East Private Endpoint + \ "s3.us-east.objectstorage.service.networklayer.com" + 11 / US Region South Endpoint + [snip] + 34 / Toronto Single Site Private Endpoint + \ "s3.tor01.objectstorage.service.networklayer.com" + endpoint>1 + ``` -8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the canned ACLs. -``` -Canned ACL used when creating buckets and/or storing objects in S3. -For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl -Choose a number from below, or type in your own value - 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS - \ "private" - 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS - \ "public-read" - 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS - \ "public-read-write" - 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS - \ "authenticated-read" -acl> 1 -``` +7. Specify a IBM COS Location Constraint. The location constraint must match +endpoint when using IBM Cloud Public. For on-prem COS, do not make a selection +from this list, hit enter -9. Review the displayed configuration and accept to save the "remote" then quit. The config file should look like this -``` - [xxx] - type = s3 - Provider = IBMCOS - access_key_id = xxx - secret_access_key = yyy - endpoint = s3-api.us-geo.objectstorage.softlayer.net - location_constraint = us-standard - acl = private -``` + ```text + 1 / US Cross Region Standard + \ "us-standard" + 2 / US Cross Region Vault + \ "us-vault" + 3 / US Cross Region Cold + \ "us-cold" + 4 / US Cross Region Flex + \ "us-flex" + 5 / US East Region Standard + \ "us-east-standard" + 6 / US East Region Vault + \ "us-east-vault" + 7 / US East Region Cold + \ "us-east-cold" + 8 / US East Region Flex + \ "us-east-flex" + 9 / US South Region Standard + \ "us-south-standard" + 10 / US South Region Vault + \ "us-south-vault" + [snip] + 32 / Toronto Flex + \ "tor01-flex" + location_constraint>1 + ``` + +8. Specify a canned ACL. IBM Cloud (Storage) supports "public-read" and "private". +IBM Cloud(Infra) supports all the canned ACLs. On-Premise COS supports all the +canned ACLs. 
+ + ```text + Canned ACL used when creating buckets and/or storing objects in S3. + For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl + Choose a number from below, or type in your own value + 1 / Owner gets FULL_CONTROL. No one else has access rights (default). This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise COS + \ "private" + 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access. This acl is available on IBM Cloud (Infra), IBM Cloud (Storage), On-Premise IBM COS + \ "public-read" + 3 / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access. This acl is available on IBM Cloud (Infra), On-Premise IBM COS + \ "public-read-write" + 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access. Not supported on Buckets. This acl is available on IBM Cloud (Infra) and On-Premise IBM COS + \ "authenticated-read" + acl> 1 + ``` + +9. Review the displayed configuration and accept to save the "remote" then quit. +The config file should look like this + + ```ini + [xxx] + type = s3 + Provider = IBMCOS + access_key_id = xxx + secret_access_key = yyy + endpoint = s3-api.us-geo.objectstorage.softlayer.net + location_constraint = us-standard + acl = private + ``` 10. Execute rclone commands -``` - 1) Create a bucket. - rclone mkdir IBM-COS-XREGION:newbucket - 2) List available buckets. - rclone lsd IBM-COS-XREGION: - -1 2017-11-08 21:16:22 -1 test - -1 2018-02-14 20:16:39 -1 newbucket - 3) List contents of a bucket. - rclone ls IBM-COS-XREGION:newbucket - 18685952 test.exe - 4) Copy a file from local to remote. - rclone copy /Users/file.txt IBM-COS-XREGION:newbucket - 5) Copy a file from remote to local. - rclone copy IBM-COS-XREGION:newbucket/file.txt . - 6) Delete a file on remote. - rclone delete IBM-COS-XREGION:newbucket/file.txt -``` -#### IBM IAM authentication -If using IBM IAM authentication with IBM API KEY you need to fill in these additional parameters + ```text + 1) Create a bucket. + rclone mkdir IBM-COS-XREGION:newbucket + 2) List available buckets. + rclone lsd IBM-COS-XREGION: + -1 2017-11-08 21:16:22 -1 test + -1 2018-02-14 20:16:39 -1 newbucket + 3) List contents of a bucket. + rclone ls IBM-COS-XREGION:newbucket + 18685952 test.exe + 4) Copy a file from local to remote. + rclone copy /Users/file.txt IBM-COS-XREGION:newbucket + 5) Copy a file from remote to local. + rclone copy IBM-COS-XREGION:newbucket/file.txt . + 6) Delete a file on remote. + rclone delete IBM-COS-XREGION:newbucket/file.txt + ``` + +#### IBM IAM authentication + +If using IBM IAM authentication with IBM API KEY you need to fill in these +additional parameters + 1. Select false for env_auth 2. Leave `access_key_id` and `secret_access_key` blank -3. Paste your `ibm_api_key` -``` -Option ibm_api_key. -IBM API Key to be used to obtain IAM token -Enter a value of type string. Press Enter for the default (1). -ibm_api_key> -``` +3. Paste your `ibm_api_key` + + ```text + Option ibm_api_key. + IBM API Key to be used to obtain IAM token + Enter a value of type string. Press Enter for the default (1). + ibm_api_key> + ``` + 4. Paste your `ibm_resource_instance_id` -``` -Option ibm_resource_instance_id. -IBM service instance id -Enter a value of type string. Press Enter for the default (2). -ibm_resource_instance_id> -``` + + ```text + Option ibm_resource_instance_id. + IBM service instance id + Enter a value of type string. Press Enter for the default (2). + ibm_resource_instance_id> + ``` + 5. 
In advanced settings type true for `v2_auth` -``` -Option v2_auth. -If true use v2 authentication. -If this is false (the default) then rclone will use v4 authentication. -If it is set then rclone will use v2 authentication. -Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. -Enter a boolean value (true or false). Press Enter for the default (true). -v2_auth> -``` + + ```text + Option v2_auth. + If true use v2 authentication. + If this is false (the default) then rclone will use v4 authentication. + If it is set then rclone will use v2 authentication. + Use this only if v4 signatures don't work, e.g. pre Jewel/v10 CEPH. + Enter a boolean value (true or false). Press Enter for the default (true). + v2_auth> + ``` ### IDrive e2 {#idrive-e2} @@ -4024,18 +4084,22 @@ y/e/d> y ``` ### Intercolo Object Storage {#intercolo} -[Intercolo Object Storage](https://intercolo.de/object-storage) offers -GDPR-compliant, transparently priced, S3-compatible + +[Intercolo Object Storage](https://intercolo.de/object-storage) offers +GDPR-compliant, transparently priced, S3-compatible cloud storage hosted in Frankfurt, Germany. Here's an example of making a configuration for Intercolo. First run: -``` + +```console rclone config ``` + This will guide you through an interactive setup process. -``` + +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -4140,7 +4204,8 @@ y/e/d> y ``` This will leave the config file looking like this. -``` + +```ini [intercolo] type = s3 provider = Intercolo @@ -4152,19 +4217,24 @@ endpoint = de-fra.i3storage.com ### IONOS Cloud {#ionos} -[IONOS S3 Object Storage](https://cloud.ionos.com/storage/object-storage) is a service offered by IONOS for storing and accessing unstructured data. -To connect to the service, you will need an access key and a secret key. These can be found in the [Data Center Designer](https://dcd.ionos.com/), by selecting **Manager resources** > **Object Storage Key Manager**. +[IONOS S3 Object Storage](https://cloud.ionos.com/storage/object-storage) is a +service offered by IONOS for storing and accessing unstructured data. +To connect to the service, you will need an access key and a secret key. These +can be found in the [Data Center Designer](https://dcd.ionos.com/), by +selecting **Manager resources** > **Object Storage Key Manager**. +Here is an example of a configuration. First, run `rclone config`. This will +walk you through an interactive setup process. Type `n` to add the new remote, +and then enter a name: -Here is an example of a configuration. First, run `rclone config`. This will walk you through an interactive setup process. Type `n` to add the new remote, and then enter a name: - -``` +```text Enter name for new remote. name> ionos-fra ``` Type `s3` to choose the connection type: -``` + +```text Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. @@ -4176,7 +4246,8 @@ Storage> s3 ``` Type `IONOS`: -``` + +```text Option provider. Choose your S3 provider. Choose a number from below, or type in your own value. @@ -4189,7 +4260,8 @@ provider> IONOS ``` Press Enter to choose the default option `Enter AWS credentials in the next step`: -``` + +```text Option env_auth. Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. @@ -4202,8 +4274,11 @@ Press Enter for the default (false). env_auth> ``` -Enter your Access Key and Secret key. 
These can be retrieved in the [Data Center Designer](https://dcd.ionos.com/), click on the menu “Manager resources” / "Object Storage Key Manager". -``` +Enter your Access Key and Secret key. These can be retrieved in the +[Data Center Designer](https://dcd.ionos.com/), click on the menu +"Manager resources" / "Object Storage Key Manager". + +```text Option access_key_id. AWS Access Key ID. Leave blank for anonymous access or runtime credentials. @@ -4218,7 +4293,8 @@ secret_access_key> YOUR_SECRET_KEY ``` Choose the region where your bucket is located: -``` + +```text Option region. Region where your bucket will be created and your data stored. Choose a number from below, or type in your own value. @@ -4233,7 +4309,8 @@ region> 2 ``` Choose the endpoint from the same region: -``` + +```text Option endpoint. Endpoint for IONOS S3 Object Storage. Specify the endpoint from the same region. @@ -4249,7 +4326,8 @@ endpoint> 1 ``` Press Enter to choose the default option or choose the desired ACL setting: -``` + +```text Option acl. Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. @@ -4267,7 +4345,8 @@ acl> ``` Press Enter to skip the advanced config: -``` + +```text Edit advanced config? y) Yes n) No (default) @@ -4275,7 +4354,8 @@ y/n> ``` Press Enter to save the configuration, and then `q` to quit the configuration process: -``` + +```text Configuration complete. Options: - type: s3 @@ -4292,143 +4372,155 @@ y/e/d> y Done! Now you can try some commands (for macOS, use `./rclone` instead of `rclone`). -1) Create a bucket (the name must be unique within the whole IONOS S3) -``` -rclone mkdir ionos-fra:my-bucket -``` -2) List available buckets -``` -rclone lsd ionos-fra: -``` -4) Copy a file from local to remote -``` -rclone copy /Users/file.txt ionos-fra:my-bucket -``` -3) List contents of a bucket -``` -rclone ls ionos-fra:my-bucket -``` -5) Copy a file from remote to local -``` -rclone copy ionos-fra:my-bucket/file.txt -``` +1) Create a bucket (the name must be unique within the whole IONOS S3) + + ```console + rclone mkdir ionos-fra:my-bucket + ``` + +2) List available buckets + + ```console + rclone lsd ionos-fra: + ``` + +3) Copy a file from local to remote + + ```console + rclone copy /Users/file.txt ionos-fra:my-bucket + ``` + +4) List contents of a bucket + + ```console + rclone ls ionos-fra:my-bucket + ``` + +5) Copy a file from remote to local + + ```console + rclone copy ionos-fra:my-bucket/file.txt + ``` ### Leviia Cloud Object Storage {#leviia} -[Leviia Object Storage](https://www.leviia.com/object-storage/), backup and secure your data in a 100% French cloud, independent of GAFAM.. +[Leviia Object Storage](https://www.leviia.com/object-storage/), backup and secure +your data in a 100% French cloud, independent of GAFAM.. To configure access to Leviia, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. -``` -rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` + ```text + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` 2. Give the name of the configuration. For example, name it 'leviia'. -``` -name> leviia -``` + ```text + name> leviia + ``` 3. Select `s3` storage. 
-``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ (s3) -[snip] -Storage> s3 -``` + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + [snip] + Storage> s3 + ``` 4. Select `Leviia` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" -[snip] -15 / Leviia Object Storage - \ (Leviia) -[snip] -provider> Leviia -``` + + ```text + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 15 / Leviia Object Storage + \ (Leviia) + [snip] + provider> Leviia + ``` 5. Enter your SecretId and SecretKey of Leviia. -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> ZnIx.xxxxxxxxxxxxxxx -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` + ```text + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> ZnIx.xxxxxxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx + ``` 6. Select endpoint for Leviia. -``` - / The default endpoint - 1 | Leviia. - \ (s3.leviia.com) -[snip] -endpoint> 1 -``` + ```text + / The default endpoint + 1 | Leviia. + \ (s3.leviia.com) + [snip] + endpoint> 1 + ``` + 7. Choose acl. -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) -[snip] -acl> 1 -Edit advanced config? 
(y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[leviia] -- type: s3 -- provider: Leviia -- access_key_id: ZnIx.xxxxxxx -- secret_access_key: xxxxxxxx -- endpoint: s3.leviia.com -- acl: private --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -Current remotes: - -Name Type -==== ==== -leviia s3 -``` + ```text + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + [snip] + acl> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [leviia] + - type: s3 + - provider: Leviia + - access_key_id: ZnIx.xxxxxxx + - secret_access_key: xxxxxxxx + - endpoint: s3.leviia.com + - acl: private + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + leviia s3 + ``` ### Liara {#liara-cloud} @@ -4518,7 +4610,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [Liara] type = s3 provider = Liara @@ -4681,7 +4773,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [linode] type = s3 provider = Linode @@ -4800,7 +4892,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [magalu] type = s3 provider = Magalu @@ -4912,7 +5004,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [megas4] type = s3 provider = Mega @@ -4923,15 +5015,17 @@ endpoint = s3.eu-central-1.s4.mega.io ### Minio -[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops. +[Minio](https://minio.io/) is an object storage server built for cloud application +developers and devops. -It is very easy to install and provides an S3 compatible server which can be used by rclone. +It is very easy to install and provides an S3 compatible server which can be used +by rclone. To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide). When it configures itself Minio will print something like this -``` +```text Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000 AccessKey: USWUXHGYZQYFYFFIT3RE SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 @@ -4957,7 +5051,7 @@ Drive Capacity: 26 GiB Free, 165 GiB Total These details need to go into `rclone config` like this. Note that it is important to put the region in as stated above. -``` +```text env_auth> 1 access_key_id> USWUXHGYZQYFYFFIT3RE secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03 @@ -4969,7 +5063,7 @@ server_side_encryption> Which makes the config file look like this -``` +```ini [minio] type = s3 provider = Minio @@ -4984,7 +5078,7 @@ server_side_encryption = So once set up, for example, to copy files into a bucket -``` +```console rclone copy /path/to/files minio:bucket ``` @@ -4996,11 +5090,15 @@ setting the provider `Netease`. This will automatically set ### Outscale -[OUTSCALE Object Storage (OOS)](https://en.outscale.com/storage/outscale-object-storage/) is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, a brand of Dassault Systèmes. 
For more information about OOS, see the [official documentation](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html). +[OUTSCALE Object Storage (OOS)](https://en.outscale.com/storage/outscale-object-storage/) +is an enterprise-grade, S3-compatible storage service provided by OUTSCALE, +a brand of Dassault Systèmes. For more information about OOS, see the +[official documentation](https://docs.outscale.com/en/userguide/OUTSCALE-Object-Storage-OOS.html). -Here is an example of an OOS configuration that you can paste into your rclone configuration file: +Here is an example of an OOS configuration that you can paste into your rclone +configuration file: -``` +```ini [outscale] type = s3 provider = Outscale @@ -5022,12 +5120,12 @@ q) Quit config n/s/q> n ``` -``` +```text Enter name for new remote. name> outscale ``` -``` +```text Option Storage. Type of storage to configure. Choose a number from below, or type in your own value. @@ -5038,7 +5136,7 @@ Choose a number from below, or type in your own value. Storage> outscale ``` -``` +```text Option provider. Choose your S3 provider. Choose a number from below, or type in your own value. @@ -5050,7 +5148,7 @@ XX / OUTSCALE Object Storage (OOS) provider> Outscale ``` -``` +```text Option env_auth. Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. @@ -5063,7 +5161,7 @@ Press Enter for the default (false). env_auth> ``` -``` +```text Option access_key_id. AWS Access Key ID. Leave blank for anonymous access or runtime credentials. @@ -5071,7 +5169,7 @@ Enter a value. Press Enter to leave empty. access_key_id> ABCDEFGHIJ0123456789 ``` -``` +```text Option secret_access_key. AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials. @@ -5079,7 +5177,7 @@ Enter a value. Press Enter to leave empty. secret_access_key> XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX ``` -``` +```text Option region. Region where your bucket will be created and your data stored. Choose a number from below, or type in your own value. @@ -5097,7 +5195,7 @@ Press Enter to leave empty. region> 1 ``` -``` +```text Option endpoint. Endpoint for S3 API. Required when using an S3 clone. @@ -5116,7 +5214,7 @@ Press Enter to leave empty. endpoint> 1 ``` -``` +```text Option acl. Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. @@ -5134,14 +5232,14 @@ Press Enter to leave empty. acl> 1 ``` -``` +```text Edit advanced config? y) Yes n) No (default) y/n> n ``` -``` +```text Configuration complete. Options: - type: s3 @@ -5159,11 +5257,13 @@ y/e/d> y ### OVHcloud {#ovhcloud} [OVHcloud Object Storage](https://www.ovhcloud.com/en-ie/public-cloud/object-storage/) -is an S3-compatible general-purpose object storage platform available in all OVHcloud regions. -To use the platform, you will need an access key and secret key. To know more about it and how -to interact with the platform, take a look at the [documentation](https://ovh.to/8stqhuo). +is an S3-compatible general-purpose object storage platform available in all +OVHcloud regions. To use the platform, you will need an access key and secret key. +To know more about it and how to interact with the platform, take a look at the +[documentation](https://ovh.to/8stqhuo). 
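+
+Once a remote is in place (the walkthrough below ends up with one called
+`ovhcloud-rbx`), day-to-day use is the same as with any other S3 remote.
+For example, to create a bucket (here given the hypothetical name
+`my-new-bucket`) and copy files into it:
+
+```console
+rclone mkdir ovhcloud-rbx:my-new-bucket
+rclone copy /path/to/files ovhcloud-rbx:my-new-bucket
+```
+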
-Here is an example of making an OVHcloud Object Storage configuration with `rclone config`: +Here is an example of making an OVHcloud Object Storage configuration with +`rclone config`: ```text No remotes found, make a new one\? @@ -5344,7 +5444,7 @@ y/e/d> y Your configuration file should now look like this: -``` +```ini [ovhcloud-rbx] type = s3 provider = OVHcloud @@ -5355,7 +5455,6 @@ endpoint = s3.rbx.io.cloud.ovh.net acl = private ``` - ### Petabox Here is an example of making a [Petabox](https://petabox.io/) @@ -5506,7 +5605,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [My Petabox Storage] type = s3 provider = Petabox @@ -5518,13 +5617,15 @@ endpoint = s3.petabox.io ### Pure Storage FlashBlade -[Pure Storage FlashBlade](https://www.purestorage.com/products/unstructured-data-storage.html) is a high performance S3-compatible object store. +[Pure Storage FlashBlade](https://www.purestorage.com/products/unstructured-data-storage.html) +is a high performance S3-compatible object store. FlashBlade supports most modern S3 features including: - ListObjectsV2 - Multipart uploads with AWS-compatible ETags -- Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer support (Purity//FB 4.4.2+) +- Advanced checksum algorithms (SHA256, CRC32, CRC32C) with trailer support + (Purity//FB 4.4.2+) - Object versioning and lifecycle management - Virtual hosted-style requests (requires DNS configuration) @@ -5617,7 +5718,7 @@ y/e/d> y This results in the following configuration being stored in `~/.config/rclone/rclone.conf`: -``` +```ini [flashblade] type = s3 provider = FlashBlade @@ -5626,210 +5727,216 @@ secret_access_key = SECRET_ACCESS_KEY endpoint = https://s3.flashblade.example.com ``` -Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style requests, -ensure proper DNS configuration: subdomains of the endpoint hostname should resolve to a -FlashBlade data VIP. For example, if your endpoint is `https://s3.flashblade.example.com`, +Note: The FlashBlade endpoint should be the S3 data VIP. For virtual-hosted style +requests, ensure proper DNS configuration: subdomains of the endpoint hostname should +resolve to a FlashBlade data VIP. For example, if your endpoint is `https://s3.flashblade.example.com`, then `bucket-name.s3.flashblade.example.com` should also resolve to the data VIP. ### Qiniu Cloud Object Storage (Kodo) {#qiniu} -[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo), a completely independent-researched core technology which is proven by repeated customer experience has occupied absolute leading market leader position. Kodo can be widely applied to mass data management. +[Qiniu Cloud Object Storage (Kodo)](https://www.qiniu.com/en/products/kodo), a +completely independent-researched core technology which is proven by repeated +customer experience has occupied absolute leading market leader position. Kodo +can be widely applied to mass data management. To configure access to Qiniu Kodo, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. -``` -rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` + ```text + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` 2. Give the name of the configuration. For example, name it 'qiniu'. -``` -name> qiniu -``` + ```text + name> qiniu + ``` 3. Select `s3` storage. 
-``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ (s3) -[snip] -Storage> s3 -``` + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ (s3) + [snip] + Storage> s3 + ``` 4. Select `Qiniu` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" -[snip] -22 / Qiniu Object Storage (Kodo) - \ (Qiniu) -[snip] -provider> Qiniu -``` + + ```text + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 22 / Qiniu Object Storage (Kodo) + \ (Qiniu) + [snip] + provider> Qiniu + ``` 5. Enter your SecretId and SecretKey of Qiniu Kodo. -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> AKIDxxxxxxxxxx -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` + ```text + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> AKIDxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + secret_access_key> xxxxxxxxxxx + ``` 6. Select endpoint for Qiniu Kodo. This is the standard endpoint for different region. -``` - / The default endpoint - a good choice if you are unsure. - 1 | East China Region 1. - | Needs location constraint cn-east-1. - \ (cn-east-1) - / East China Region 2. - 2 | Needs location constraint cn-east-2. - \ (cn-east-2) - / North China Region 1. - 3 | Needs location constraint cn-north-1. - \ (cn-north-1) - / South China Region 1. - 4 | Needs location constraint cn-south-1. - \ (cn-south-1) - / North America Region. - 5 | Needs location constraint us-north-1. - \ (us-north-1) - / Southeast Asia Region 1. - 6 | Needs location constraint ap-southeast-1. - \ (ap-southeast-1) - / Northeast Asia Region 1. - 7 | Needs location constraint ap-northeast-1. - \ (ap-northeast-1) -[snip] -endpoint> 1 - -Option endpoint. -Endpoint for Qiniu Object Storage. -Choose a number from below, or type in your own value. -Press Enter to leave empty. 
- 1 / East China Endpoint 1 - \ (s3-cn-east-1.qiniucs.com) - 2 / East China Endpoint 2 - \ (s3-cn-east-2.qiniucs.com) - 3 / North China Endpoint 1 - \ (s3-cn-north-1.qiniucs.com) - 4 / South China Endpoint 1 - \ (s3-cn-south-1.qiniucs.com) - 5 / North America Endpoint 1 - \ (s3-us-north-1.qiniucs.com) - 6 / Southeast Asia Endpoint 1 - \ (s3-ap-southeast-1.qiniucs.com) - 7 / Northeast Asia Endpoint 1 - \ (s3-ap-northeast-1.qiniucs.com) -endpoint> 1 - -Option location_constraint. -Location constraint - must be set to match the Region. -Used when creating buckets only. -Choose a number from below, or type in your own value. -Press Enter to leave empty. - 1 / East China Region 1 - \ (cn-east-1) - 2 / East China Region 2 - \ (cn-east-2) - 3 / North China Region 1 - \ (cn-north-1) - 4 / South China Region 1 - \ (cn-south-1) - 5 / North America Region 1 - \ (us-north-1) - 6 / Southeast Asia Region 1 - \ (ap-southeast-1) - 7 / Northeast Asia Region 1 - \ (ap-northeast-1) -location_constraint> 1 -``` + ```text + / The default endpoint - a good choice if you are unsure. + 1 | East China Region 1. + | Needs location constraint cn-east-1. + \ (cn-east-1) + / East China Region 2. + 2 | Needs location constraint cn-east-2. + \ (cn-east-2) + / North China Region 1. + 3 | Needs location constraint cn-north-1. + \ (cn-north-1) + / South China Region 1. + 4 | Needs location constraint cn-south-1. + \ (cn-south-1) + / North America Region. + 5 | Needs location constraint us-north-1. + \ (us-north-1) + / Southeast Asia Region 1. + 6 | Needs location constraint ap-southeast-1. + \ (ap-southeast-1) + / Northeast Asia Region 1. + 7 | Needs location constraint ap-northeast-1. + \ (ap-northeast-1) + [snip] + endpoint> 1 + + Option endpoint. + Endpoint for Qiniu Object Storage. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / East China Endpoint 1 + \ (s3-cn-east-1.qiniucs.com) + 2 / East China Endpoint 2 + \ (s3-cn-east-2.qiniucs.com) + 3 / North China Endpoint 1 + \ (s3-cn-north-1.qiniucs.com) + 4 / South China Endpoint 1 + \ (s3-cn-south-1.qiniucs.com) + 5 / North America Endpoint 1 + \ (s3-us-north-1.qiniucs.com) + 6 / Southeast Asia Endpoint 1 + \ (s3-ap-southeast-1.qiniucs.com) + 7 / Northeast Asia Endpoint 1 + \ (s3-ap-northeast-1.qiniucs.com) + endpoint> 1 + + Option location_constraint. + Location constraint - must be set to match the Region. + Used when creating buckets only. + Choose a number from below, or type in your own value. + Press Enter to leave empty. + 1 / East China Region 1 + \ (cn-east-1) + 2 / East China Region 2 + \ (cn-east-2) + 3 / North China Region 1 + \ (cn-north-1) + 4 / South China Region 1 + \ (cn-south-1) + 5 / North America Region 1 + \ (us-north-1) + 6 / Southeast Asia Region 1 + \ (ap-southeast-1) + 7 / Northeast Asia Region 1 + \ (ap-northeast-1) + location_constraint> 1 + ``` 7. Choose acl and storage class. -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - / Owner gets FULL_CONTROL. - 1 | No one else has access rights (default). - \ (private) - / Owner gets FULL_CONTROL. - 2 | The AllUsers group gets READ access. - \ (public-read) -[snip] -acl> 2 -The storage class to use when storing new objects in Tencent COS. -Enter a string value. Press Enter for the default (""). 
-Choose a number from below, or type in your own value - 1 / Standard storage class - \ (STANDARD) - 2 / Infrequent access storage mode - \ (LINE) - 3 / Archive storage mode - \ (GLACIER) - 4 / Deep archive storage mode - \ (DEEP_ARCHIVE) -[snip] -storage_class> 1 -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[qiniu] -- type: s3 -- provider: Qiniu -- access_key_id: xxx -- secret_access_key: xxx -- region: cn-east-1 -- endpoint: s3-cn-east-1.qiniucs.com -- location_constraint: cn-east-1 -- acl: public-read -- storage_class: STANDARD --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -Current remotes: - -Name Type -==== ==== -qiniu s3 -``` + ```text + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + / Owner gets FULL_CONTROL. + 1 | No one else has access rights (default). + \ (private) + / Owner gets FULL_CONTROL. + 2 | The AllUsers group gets READ access. + \ (public-read) + [snip] + acl> 2 + The storage class to use when storing new objects in Tencent COS. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Standard storage class + \ (STANDARD) + 2 / Infrequent access storage mode + \ (LINE) + 3 / Archive storage mode + \ (GLACIER) + 4 / Deep archive storage mode + \ (DEEP_ARCHIVE) + [snip] + storage_class> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [qiniu] + - type: s3 + - provider: Qiniu + - access_key_id: xxx + - secret_access_key: xxx + - region: cn-east-1 + - endpoint: s3-cn-east-1.qiniucs.com + - location_constraint: cn-east-1 + - acl: public-read + - storage_class: STANDARD + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + qiniu s3 + ``` ### FileLu S5 {#filelu-s5} -[FileLu S5 Object Storage](https://s5lu.com) is an S3-compatible object storage system. -It provides multiple region options (Global, US-East, EU-Central, AP-Southeast, and ME-Central) while using a single endpoint (`s5lu.com`). -FileLu S5 is designed for scalability, security, and simplicity, with predictable pricing and no hidden charges for data transfers or API requests. +[FileLu S5 Object Storage](https://s5lu.com) is an S3-compatible object storage +system. It provides multiple region options (Global, US-East, EU-Central, +AP-Southeast, and ME-Central) while using a single endpoint (`s5lu.com`). +FileLu S5 is designed for scalability, security, and simplicity, with predictable +pricing and no hidden charges for data transfers or API requests. Here is an example of making a configuration. First run: @@ -5929,7 +6036,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [s5lu] type = s3 provider = FileLu @@ -5940,14 +6047,17 @@ endpoint = s5lu.com ### Rabata {#Rabata} -[Rabata](https://rabata.io) is an S3-compatible secure cloud storage service that offers flat, transparent pricing (no API request fees) -while supporting standard S3 APIs. It is suitable for backup, application storage,media workflows, and archive use cases. 
+[Rabata](https://rabata.io) is an S3-compatible secure cloud storage service that +offers flat, transparent pricing (no API request fees) while supporting standard +S3 APIs. It is suitable for backup, application storage,media workflows, and +archive use cases. -Server side copy is not implemented with Rabata, also meaning modification time of objects cannot be updated. +Server side copy is not implemented with Rabata, also meaning modification time +of objects cannot be updated. Rclone config: -``` +```text rclone config No remotes found, make a new one? n) New remote @@ -6065,16 +6175,20 @@ rabata s3 ### RackCorp {#RackCorp} -[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 compatible object storage platform from your friendly cloud provider RackCorp. -The service is fast, reliable, well priced and located in many strategic locations unserviced by others, to ensure you can maintain data sovereignty. +[RackCorp Object Storage](https://www.rackcorp.com/storage/s3storage) is an S3 +compatible object storage platform from your friendly cloud provider RackCorp. +The service is fast, reliable, well priced and located in many strategic +locations unserviced by others, to ensure you can maintain data sovereignty. -Before you can use RackCorp Object Storage, you'll need to "[sign up](https://www.rackcorp.com/signup)" for an account on our "[portal](https://portal.rackcorp.com)". -Next you can create an `access key`, a `secret key` and `buckets`, in your location of choice with ease. -These details are required for the next steps of configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`. +Before you can use RackCorp Object Storage, you'll need to +[sign up](https://www.rackcorp.com/signup) for an account on our [portal](https://portal.rackcorp.com). +Next you can create an `access key`, a `secret key` and `buckets`, in your +location of choice with ease. These details are required for the next steps of +configuration, when `rclone config` asks for your `access_key_id` and `secret_access_key`. Your config should end up looking a bit like this: -``` +```ini [RCS3-demo-config] type = s3 provider = RackCorp @@ -6093,13 +6207,13 @@ Rclone can serve any remote over the S3 protocol. For details see the For example, to serve `remote:path` over s3, run the server like this: -``` +```console rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path ``` This will be compatible with an rclone remote which is defined like this: -``` +```ini [serves3] type = s3 provider = Rclone @@ -6114,12 +6228,15 @@ Note that setting `use_multipart_uploads = false` is to work around ### Scaleway -[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform allows you to store anything from backups, logs and web assets to documents and photos. -Files can be dropped from the Scaleway console or transferred through our API and CLI or using any S3-compatible tool. +[Scaleway](https://www.scaleway.com/object-storage/) The Object Storage platform +allows you to store anything from backups, logs and web assets to documents and photos. +Files can be dropped from the Scaleway console or transferred through our API and +CLI or using any S3-compatible tool. 
-Scaleway provides an S3 interface which can be configured for use with rclone like this: +Scaleway provides an S3 interface which can be configured for use with rclone +like this: -``` +```ini [scaleway] type = s3 provider = Scaleway @@ -6135,19 +6252,25 @@ chunk_size = 5M copy_cutoff = 5M ``` -[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`. -So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above) +[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the +low-cost S3 Glacier alternative from Scaleway and it works the same way as on +S3 by accepting the "GLACIER" `storage_class`. So you can configure your remote +with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. +Don't forget that in this state you can't read files back after, you will need +to restore them to "STANDARD" storage_class first before being able to read +them (see "restore" section above) ### Seagate Lyve Cloud {#lyve} -[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is an S3 -compatible object storage platform from [Seagate](https://seagate.com/) intended for enterprise use. +[Seagate Lyve Cloud](https://www.seagate.com/gb/en/services/cloud/storage/) is +an S3 compatible object storage platform from [Seagate](https://seagate.com/) +intended for enterprise use. Here is a config run through for a remote called `remote` - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first. -``` +```console $ rclone config No remotes found, make a new one? n) New remote @@ -6159,7 +6282,7 @@ name> remote Choose `s3` backend -``` +```text Type of storage to configure. Choose a number from below, or type in your own value. [snip] @@ -6171,7 +6294,7 @@ Storage> s3 Choose `LyveCloud` as S3 provider -``` +```text Choose your S3 provider. Choose a number from below, or type in your own value. Press Enter to leave empty. @@ -6182,9 +6305,10 @@ XX / Seagate Lyve Cloud provider> LyveCloud ``` -Take the default (just press enter) to enter access key and secret in the config file. +Take the default (just press enter) to enter access key and secret in the +config file. -``` +```text Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank. Choose a number from below, or type in your own boolean value (true or false). @@ -6196,14 +6320,14 @@ Press Enter for the default (false). env_auth> ``` -``` +```text AWS Access Key ID. Leave blank for anonymous access or runtime credentials. Enter a value. Press Enter to leave empty. access_key_id> XXX ``` -``` +```text AWS Secret Access Key (password). Leave blank for anonymous access or runtime credentials. Enter a value. Press Enter to leave empty. @@ -6212,7 +6336,7 @@ secret_access_key> YYY Leave region blank -``` +```text Region to connect to. Leave blank if you are using an S3 clone and you don't have a region. Choose a number from below, or type in your own value. @@ -6228,7 +6352,7 @@ region> Enter your Lyve Cloud endpoint. 
This field cannot be kept empty. -``` +```text Endpoint for Lyve Cloud S3 API. Required when using an S3 clone. Please type in your LyveCloud endpoint. @@ -6241,7 +6365,7 @@ endpoint> s3.us-west-1.global.lyve.seagate.com Leave location constraint blank -``` +```text Location constraint - must be set to match the Region. Leave blank if not sure. Used when creating buckets only. Enter a value. Press Enter to leave empty. @@ -6250,7 +6374,7 @@ location_constraint> Choose default ACL (`private`). -``` +```text Canned ACL used when creating buckets and storing or copying objects. This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too. For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl @@ -6267,7 +6391,7 @@ acl> And the config file should end up looking like this: -``` +```ini [remote] type = s3 provider = LyveCloud @@ -6278,14 +6402,16 @@ endpoint = s3.us-east-1.lyvecloud.seagate.com ### SeaweedFS -[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage system for -blobs, objects, files, and data lake, with O(1) disk seek and a scalable file metadata store. -It has an S3 compatible object storage interface. SeaweedFS can also act as a -[gateway to remote S3 compatible object store](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage) -to cache data and metadata with asynchronous write back, for fast local speed and minimize access cost. +[SeaweedFS](https://github.com/chrislusf/seaweedfs/) is a distributed storage +system for blobs, objects, files, and data lake, with O(1) disk seek and a +scalable file metadata store. It has an S3 compatible object storage interface. +SeaweedFS can also act as a [gateway to remote S3 compatible object store](https://github.com/chrislusf/seaweedfs/wiki/Gateway-to-Remote-Object-Storage) +to cache data and metadata with asynchronous write back, for fast local speed +and minimize access cost. Assuming the SeaweedFS are configured with `weed shell` as such: -``` + +```text > s3.bucket.create -name foo > s3.configure -access_key=any -secret_key=any -buckets=foo -user=me -actions=Read,Write,List,Tagging,Admin -apply { @@ -6310,10 +6436,10 @@ Assuming the SeaweedFS are configured with `weed shell` as such: } ``` -To use rclone with SeaweedFS, above configuration should end up with something like this in -your config: +To use rclone with SeaweedFS, above configuration should end up with something +like this in your config: -``` +```ini [seaweedfs_s3] type = s3 provider = SeaweedFS @@ -6324,7 +6450,7 @@ endpoint = localhost:8333 So once set up, for example to copy files into a bucket -``` +```console rclone copy /path/to/files seaweedfs_s3:foo ``` @@ -6437,7 +6563,7 @@ y/e/d> y And your config should end up looking like this: -``` +```ini [selectel] type = s3 provider = Selectel @@ -6448,13 +6574,14 @@ endpoint = s3.ru-1.storage.selcloud.ru ``` ### Servercore {#servercore} + [Servercore Object Storage](https://servercore.io/object-storage/) is an S3 compatible object storage system that provides scalable and secure storage solutions for businesses of all sizes. rclone config example: -``` +```text No remotes found, make a new one\? 
n) New remote s) Set configuration password @@ -6561,15 +6688,15 @@ y/e/d> y ### Spectra Logic {#spectralogic} [Spectra Logic](https://www.spectralogic.com/blackpearl-nearline-object-gateway) -is an on-prem S3-compatible object storage gateway that exposes local object storage and -policy-tiers data to Spectra tape and public clouds under a single namespace for -backup and archiving. +is an on-prem S3-compatible object storage gateway that exposes local object +storage and policy-tiers data to Spectra tape and public clouds under a single +namespace for backup and archiving. The S3 compatible gateway is configured using `rclone config` with a type of `s3` and with a provider name of `SpectraLogic`. Here is an example run of the configurator. -``` +```text No remotes found, make a new one? n) New remote s) Set configuration password @@ -6648,7 +6775,7 @@ y/e/d> y And your config should end up looking like this: -``` +```ini [spectratest] type = s3 provider = SpectraLogic @@ -6666,7 +6793,7 @@ The S3 compatible gateway is configured using `rclone config` with a type of `s3` and with a provider name of `Storj`. Here is an example run of the configurator. -``` +```text Type of storage to configure. Storage> s3 Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). @@ -6728,11 +6855,14 @@ This has the following consequences: - Using `rclone rcat` will fail as the metadata doesn't match after upload - Uploading files with `rclone mount` will fail for the same reason - - This can worked around by using `--vfs-cache-mode writes` or `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large + - This can worked around by using `--vfs-cache-mode writes` or + `--vfs-cache-mode full` or setting `--s3-upload-cutoff` large - Files uploaded via a multipart upload won't have their modtimes - - This will mean that `rclone sync` will likely keep trying to upload files bigger than `--s3-upload-cutoff` - - This can be worked around with `--checksum` or `--size-only` or setting `--s3-upload-cutoff` large - - The maximum value for `--s3-upload-cutoff` is 5GiB though + - This will mean that `rclone sync` will likely keep trying to upload + files bigger than `--s3-upload-cutoff` + - This can be worked around with `--checksum` or `--size-only` or + setting `--s3-upload-cutoff` large + - The maximum value for `--s3-upload-cutoff` is 5GiB though One general purpose workaround is to set `--s3-upload-cutoff 5G`. This means that rclone will upload files smaller than 5GiB as single parts. @@ -6760,7 +6890,9 @@ For more detailed comparison please check the documentation of the ### Synology C2 Object Storage {#synology-c2} -[Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) provides a secure, S3-compatible, and cost-effective cloud storage solution without API request, download fees, and deletion penalty. +[Synology C2 Object Storage](https://c2.synology.com/en-global/object-storage/overview) +provides a secure, S3-compatible, and cost-effective cloud storage solution +without API request, download fees, and deletion penalty. The S3 compatible gateway is configured using `rclone config` with a type of `s3` and with a provider name of `Synology`. Here is an example @@ -6768,7 +6900,7 @@ run of the configurator. 
First run: -``` +```console rclone config ``` @@ -6893,130 +7025,133 @@ y/e/d> y ### Tencent COS {#tencent-cos} -[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) is a distributed storage service offered by Tencent Cloud for unstructured data. It is secure, stable, massive, convenient, low-delay and low-cost. +[Tencent Cloud Object Storage (COS)](https://intl.cloud.tencent.com/product/cos) +is a distributed storage service offered by Tencent Cloud for unstructured data. +It is secure, stable, massive, convenient, low-delay and low-cost. To configure access to Tencent COS, follow the steps below: 1. Run `rclone config` and select `n` for a new remote. -``` -rclone config -No remotes found, make a new one? -n) New remote -s) Set configuration password -q) Quit config -n/s/q> n -``` + ```text + rclone config + No remotes found, make a new one? + n) New remote + s) Set configuration password + q) Quit config + n/s/q> n + ``` 2. Give the name of the configuration. For example, name it 'cos'. -``` -name> cos -``` + ```text + name> cos + ``` 3. Select `s3` storage. -``` -Choose a number from below, or type in your own value -[snip] -XX / Amazon S3 Compliant Storage Providers including AWS, ... - \ "s3" -[snip] -Storage> s3 -``` + ```text + Choose a number from below, or type in your own value + [snip] + XX / Amazon S3 Compliant Storage Providers including AWS, ... + \ "s3" + [snip] + Storage> s3 + ``` 4. Select `TencentCOS` provider. -``` -Choose a number from below, or type in your own value -1 / Amazon Web Services (AWS) S3 - \ "AWS" -[snip] -11 / Tencent Cloud Object Storage (COS) - \ "TencentCOS" -[snip] -provider> TencentCOS -``` + + ```text + Choose a number from below, or type in your own value + 1 / Amazon Web Services (AWS) S3 + \ "AWS" + [snip] + 11 / Tencent Cloud Object Storage (COS) + \ "TencentCOS" + [snip] + provider> TencentCOS + ``` 5. Enter your SecretId and SecretKey of Tencent Cloud. -``` -Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). -Only applies if access_key_id and secret_access_key is blank. -Enter a boolean value (true or false). Press Enter for the default ("false"). -Choose a number from below, or type in your own value - 1 / Enter AWS credentials in the next step - \ "false" - 2 / Get AWS credentials from the environment (env vars or IAM) - \ "true" -env_auth> 1 -AWS Access Key ID. -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -access_key_id> AKIDxxxxxxxxxx -AWS Secret Access Key (password) -Leave blank for anonymous access or runtime credentials. -Enter a string value. Press Enter for the default (""). -secret_access_key> xxxxxxxxxxx -``` + ```text + Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). + Only applies if access_key_id and secret_access_key is blank. + Enter a boolean value (true or false). Press Enter for the default ("false"). + Choose a number from below, or type in your own value + 1 / Enter AWS credentials in the next step + \ "false" + 2 / Get AWS credentials from the environment (env vars or IAM) + \ "true" + env_auth> 1 + AWS Access Key ID. + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). + access_key_id> AKIDxxxxxxxxxx + AWS Secret Access Key (password) + Leave blank for anonymous access or runtime credentials. + Enter a string value. Press Enter for the default (""). 
+ secret_access_key> xxxxxxxxxxx + ``` 6. Select endpoint for Tencent COS. This is the standard endpoint for different region. -``` - 1 / Beijing Region. - \ "cos.ap-beijing.myqcloud.com" - 2 / Nanjing Region. - \ "cos.ap-nanjing.myqcloud.com" - 3 / Shanghai Region. - \ "cos.ap-shanghai.myqcloud.com" - 4 / Guangzhou Region. - \ "cos.ap-guangzhou.myqcloud.com" -[snip] -endpoint> 4 -``` + ```text + 1 / Beijing Region. + \ "cos.ap-beijing.myqcloud.com" + 2 / Nanjing Region. + \ "cos.ap-nanjing.myqcloud.com" + 3 / Shanghai Region. + \ "cos.ap-shanghai.myqcloud.com" + 4 / Guangzhou Region. + \ "cos.ap-guangzhou.myqcloud.com" + [snip] + endpoint> 4 + ``` 7. Choose acl and storage class. -``` -Note that this ACL is applied when server-side copying objects as S3 -doesn't copy the ACL from the source but rather writes a fresh one. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Owner gets Full_CONTROL. No one else has access rights (default). - \ "default" -[snip] -acl> 1 -The storage class to use when storing new objects in Tencent COS. -Enter a string value. Press Enter for the default (""). -Choose a number from below, or type in your own value - 1 / Default - \ "" -[snip] -storage_class> 1 -Edit advanced config? (y/n) -y) Yes -n) No (default) -y/n> n -Remote config --------------------- -[cos] -type = s3 -provider = TencentCOS -env_auth = false -access_key_id = xxx -secret_access_key = xxx -endpoint = cos.ap-guangzhou.myqcloud.com -acl = default --------------------- -y) Yes this is OK (default) -e) Edit this remote -d) Delete this remote -y/e/d> y -Current remotes: - -Name Type -==== ==== -cos s3 -``` + ```text + Note that this ACL is applied when server-side copying objects as S3 + doesn't copy the ACL from the source but rather writes a fresh one. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Owner gets Full_CONTROL. No one else has access rights (default). + \ "default" + [snip] + acl> 1 + The storage class to use when storing new objects in Tencent COS. + Enter a string value. Press Enter for the default (""). + Choose a number from below, or type in your own value + 1 / Default + \ "" + [snip] + storage_class> 1 + Edit advanced config? (y/n) + y) Yes + n) No (default) + y/n> n + Remote config + -------------------- + [cos] + type = s3 + provider = TencentCOS + env_auth = false + access_key_id = xxx + secret_access_key = xxx + endpoint = cos.ap-guangzhou.myqcloud.com + acl = default + -------------------- + y) Yes this is OK (default) + e) Edit this remote + d) Delete this remote + y/e/d> y + Current remotes: + + Name Type + ==== ==== + cos s3 + ``` ### Wasabi @@ -7116,7 +7251,7 @@ y/e/d> y This will leave the config file looking like this. -``` +```ini [wasabi] type = s3 provider = Wasabi @@ -7133,15 +7268,17 @@ storage_class = ### Zata Object Storage {#Zata} -[Zata Object Storage](https://zata.ai/) provides a secure, S3-compatible cloud storage solution designed for scalability and performance, ideal for a variety of data storage needs. +[Zata Object Storage](https://zata.ai/) provides a secure, S3-compatible cloud +storage solution designed for scalability and performance, ideal for a variety +of data storage needs. 
First run: -``` +```console rclone config ``` -``` +```text This will guide you through an interactive setup process: e) Edit existing remote @@ -7270,10 +7407,11 @@ y) Yes this is OK (default) e) Edit this remote d) Delete this remote y/e/d> +``` -``` This will leave the config file looking like this. -``` + +```ini [my zata storage] type = s3 provider = Zata @@ -7281,7 +7419,6 @@ access_key_id = xxx secret_access_key = xxx region = us-east-1 endpoint = idr01.zata.ai - ``` ## Memory usage {#memory} @@ -7313,6 +7450,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/sharefile.md b/docs/content/sharefile.md index 650b9c4d5..9f0d6b2cb 100644 --- a/docs/content/sharefile.md +++ b/docs/content/sharefile.md @@ -352,5 +352,5 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) - +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). diff --git a/docs/content/storj.md b/docs/content/storj.md index c133b1904..07b0f5210 100644 --- a/docs/content/storj.md +++ b/docs/content/storj.md @@ -480,10 +480,23 @@ this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote. -See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) +See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) +and [rclone about](https://rclone.org/commands/rclone_about/). ## Known issues -If you get errors like `too many open files` this usually happens when the default `ulimit` for system max open files is exceeded. Native Storj protocol opens a large number of TCP connections (each of which is counted as an open file). For a single upload stream you can expect 110 TCP connections to be opened. For a single download stream you can expect 35. This batch of connections will be opened for every 64 MiB segment and you should also expect TCP connections to be reused. If you do many transfers you eventually open a connection to most storage nodes (thousands of nodes). +If you get errors like `too many open files` this usually happens when the +default `ulimit` for system max open files is exceeded. Native Storj protocol +opens a large number of TCP connections (each of which is counted as an open +file). For a single upload stream you can expect 110 TCP connections to be +opened. For a single download stream you can expect 35. This batch of +connections will be opened for every 64 MiB segment and you should also +expect TCP connections to be reused. If you do many transfers you eventually +open a connection to most storage nodes (thousands of nodes). 
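+
+You can check the limit that applies to the shell you run rclone from with
+`ulimit -n` (a quick sanity check; on many Linux distributions the default
+soft limit is 1024, which heavy Storj transfers can exceed):
+
+```console
+ulimit -n
+```
+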
-To fix these, please raise your system limits. You can do this issuing a `ulimit -n 65536` just before you run rclone. To change the limits more permanently you can add this to your shell startup script, e.g. `$HOME/.bashrc`, or change the system-wide configuration, usually `/etc/sysctl.conf` and/or `/etc/security/limits.conf`, but please refer to your operating system manual.
+To fix these, please raise your system limits. You can do this by issuing
+`ulimit -n 65536` just before you run rclone. To change the limits more
+permanently, you can add this to your shell startup script,
+e.g. `$HOME/.bashrc`, or change the system-wide configuration,
+usually `/etc/sysctl.conf` and/or `/etc/security/limits.conf`, but please
+refer to your operating system manual.
diff --git a/docs/content/sugarsync.md b/docs/content/sugarsync.md
index 8466a7198..37f69fab0 100644
--- a/docs/content/sugarsync.md
+++ b/docs/content/sugarsync.md
@@ -300,5 +300,5 @@ this capability cannot determine free space for an rclone mount or
 use policy `mfs` (most free space) as a member of an rclone union
 remote.
 
-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
-
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
+and [rclone about](https://rclone.org/commands/rclone_about/).
diff --git a/docs/content/swift.md b/docs/content/swift.md
index fd5e0c2ca..8011e5878 100644
--- a/docs/content/swift.md
+++ b/docs/content/swift.md
@@ -721,16 +721,23 @@ setting up a swift remote.
 
 ## OVH Cloud Archive
 
-To use rclone with OVH cloud archive, first use `rclone config` to set up a `swift` backend with OVH, choosing `pca` as the `storage_policy`.
+To use rclone with OVH cloud archive, first use `rclone config` to set up a
+`swift` backend with OVH, choosing `pca` as the `storage_policy`.
 
 ### Uploading Objects
 
-Uploading objects to OVH cloud archive is no different to object storage, you just simply run the command you like (move, copy or sync) to upload the objects. Once uploaded the objects will show in a "Frozen" state within the OVH control panel.
+Uploading objects to OVH cloud archive is no different to normal object
+storage: simply run the command you like (move, copy or sync) to upload the
+objects. Once uploaded, the objects will show in a "Frozen" state within the OVH control panel.
 
 ### Retrieving Objects
 
-To retrieve objects use `rclone copy` as normal. If the objects are in a frozen state then rclone will ask for them all to be unfrozen and it will wait at the end of the output with a message like the following:
+To retrieve objects, use `rclone copy` as normal. If the objects are in a frozen
+state then rclone will ask for them all to be unfrozen and it will wait at the
+end of the output with a message like the following:
 
-`2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)`
+```text
+2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)
+```
 
 Rclone will wait for the time specified then retry the copy.
diff --git a/docs/content/ulozto.md b/docs/content/ulozto.md
index 39e7fd098..c57827c93 100644
--- a/docs/content/ulozto.md
+++ b/docs/content/ulozto.md
@@ -278,4 +278,5 @@ exposed in the API.
 Backends without this capability cannot determine free space for an rclone mount
 or use policy `mfs` (most free space) as a member of an rclone union remote.
 
-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
+and [rclone about](https://rclone.org/commands/rclone_about/).
diff --git a/docs/content/webdav.md b/docs/content/webdav.md
index b4343711d..c9f7ec39d 100644
--- a/docs/content/webdav.md
+++ b/docs/content/webdav.md
@@ -378,7 +378,9 @@ ownCloud supports modified times using the `X-OC-Mtime` header.
 
 This is configured in an identical way to ownCloud. Note that
 Nextcloud initially did not support streaming of files (`rcat`) whereas
-ownCloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365) seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud Server v19).
+ownCloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365)
+seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud
+Server v19).
 
 ### ownCloud Infinite Scale
 
@@ -421,7 +423,7 @@ Set the `vendor` to `sharepoint`.
 
 Your config file should look like this:
 
-```
+```ini
 [sharepoint]
 type = webdav
 url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
@@ -432,17 +434,19 @@ pass = encryptedpassword
 
 ### Sharepoint with NTLM Authentication
 
-Use this option in case your (hosted) Sharepoint is not tied to OneDrive accounts and uses NTLM authentication.
+Use this option if your (hosted) Sharepoint is not tied to OneDrive accounts
+and uses NTLM authentication.
 
-To get the `url` configuration, similarly to the above, first navigate to the desired directory in your browser to get the URL,
-then strip everything after the name of the opened directory.
+To get the `url` configuration, similarly to the above, first navigate to the
+desired directory in your browser to get the URL, then strip everything after
+the name of the opened directory.
 
 Example: If the URL is:
-https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx
+<https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx>
 
 The configuration to use would be:
-https://example.sharepoint.com/sites/12345/Documents
+<https://example.sharepoint.com/sites/12345/Documents>
 
 Set the `vendor` to `sharepoint-ntlm`.
 
 NTLM uses domain and user name combination for authentication,
@@ -451,7 +455,7 @@ set `user` to `DOMAIN\username`.
 
 Your config file should look like this:
 
-```
+```ini
 [sharepoint]
 type = webdav
 url = https://[YOUR-DOMAIN]/some-path-to/Documents
@@ -462,11 +466,15 @@ pass = encryptedpassword
 
 #### Required Flags for SharePoint
 
-As SharePoint does some special things with uploaded documents, you won't be able to use the documents size or the documents hash to compare if a file has been changed since the upload / which file is newer.
+As SharePoint does some special things with uploaded documents, you won't be
+able to use the document's size or the document's hash to tell whether a file
+has changed since the upload or which file is newer.
 
-For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents:
+For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.)
+from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure +Rclone uses the "Last Modified" datetime property to compare your documents: -``` +```text --ignore-size --ignore-checksum --update ``` @@ -477,7 +485,6 @@ Read [rclone serve webdav](commands/rclone_serve_webdav/) for more details. rclone serve supports modified times using the `X-OC-Mtime` header. - ### dCache dCache is a storage system that supports many protocols and @@ -493,7 +500,7 @@ password, instead enter your Macaroon as the `bearer_token`. The config will end up looking something like this. -``` +```ini [dcache] type = webdav url = https://dcache... @@ -503,8 +510,9 @@ pass = bearer_token = your-macaroon ``` -There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) that -obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config file. +There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon) +that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config +file. Macaroons may also be obtained from the dCacheView web-browser/JavaScript client that comes with dCache. diff --git a/docs/content/yandex.md b/docs/content/yandex.md index 99331195d..b181c2fa7 100644 --- a/docs/content/yandex.md +++ b/docs/content/yandex.md @@ -274,7 +274,9 @@ to upload a 30 GiB file set a timeout of `2 * 30 = 60m`, that is `--timeout 60m`. Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription. -Token generation will work without a mail account, but Rclone won't be able to complete any actions. -``` +Token generation will work without a mail account, but Rclone won't be able to +complete any actions. + +```text [403 - DiskUnsupportedUserAccountTypeError] User account type is not supported. ``` diff --git a/docs/content/zoho.md b/docs/content/zoho.md index 7cddc1c94..225849019 100644 --- a/docs/content/zoho.md +++ b/docs/content/zoho.md @@ -290,12 +290,15 @@ Properties: ## Setting up your own client_id -For Zoho we advise you to set up your own client_id. To do so you have to complete the following steps. +For Zoho we advise you to set up your own client_id. To do so you have to +complete the following steps. 1. Log in to the [Zoho API Console](https://api-console.zoho.com) -2. Create a new client of type "Server-based Application". The name and website don't matter, but you must add the redirect URL `http://localhost:53682/`. +2. Create a new client of type "Server-based Application". The name and website +don't matter, but you must add the redirect URL `http://localhost:53682/`. -3. Once the client is created, you can go to the settings tab and enable it in other regions. +3. Once the client is created, you can go to the settings tab and enable it in +other regions. The client id and client secret can now be used with rclone.
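+
+A remote configured with your own client_id might end up looking something
+like this minimal sketch (the values are placeholders; rclone will also add an
+OAuth `token` entry once you have authorized the client):
+
+```ini
+[zoho]
+type = zoho
+client_id = your_client_id
+client_secret = your_client_secret
+```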