docs: fix various markdownlint issues
@@ -1044,7 +1044,7 @@ Properties:

### Custom upload headers

You can set custom upload headers with the `--header-upload` flag.

- Cache-Control
- Content-Disposition
@@ -1053,19 +1053,21 @@ You can set custom upload headers with the `--header-upload` flag.
- Content-Type
- X-MS-Tags

Eg `--header-upload "Content-Type: text/potato"` or
`--header-upload "X-MS-Tags: foo=bar"`.
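
For illustration only, a possible invocation combining two of the headers listed above (the remote name `azblob:` and the container/path are placeholders, not taken from this page):

```console
rclone copy ./report.pdf azblob:container/docs \
  --header-upload "Cache-Control: max-age=3600" \
  --header-upload "X-MS-Tags: project=docs"
```
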
## Limitations

MD5 sums are only uploaded with chunked files if the source has an MD5
sum. This will always be the case for a local to azure copy.

`rclone about` is not supported by the Microsoft Azure Blob storage backend.
Backends without this capability cannot determine free space for an rclone
mount or use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).

## Azure Storage Emulator Support
@@ -793,6 +793,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -520,14 +520,16 @@ Reverse Solidus).

Box only supports filenames up to 255 characters in length.

Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/)
that sometimes reduce the speed of rclone.

`rclone about` is not supported by the Box backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).

## Get your own Box App ID
@@ -1870,7 +1870,12 @@ second that each client_id can do set by Google. rclone already has a
high quota and I will continue to make sure it is high enough by
contacting Google.

It is strongly recommended to use your own client ID as the default
rclone ID is heavily used. If you have multiple services running, it
is recommended to use an API key for each service. The default Google
quota is 10 transactions per second so it is recommended to stay under
that number, as exceeding it will cause rclone to rate limit and make
things slower.

Here is how to create your own Google Drive client ID for rclone:

@@ -1888,37 +1893,42 @@ be the same account as the Google Drive you want to access)
credentials", which opens the wizard).

5. If you already configured an "Oauth Consent Screen", then skip
to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button
(near the top right corner of the right panel), then click "Get started".
On the next screen, enter an "Application name"
("rclone" is OK); enter "User Support Email" (your own email is OK).
Next, under Audience select "External", then enter your own contact information,
agree to terms and click "Create". You should now see rclone (or your project name)
in a box in the top left of the screen.

(PS: if you are a GSuite user, you could also select "Internal" instead
of "External" above, but this will restrict API use to Google Workspace
users in your organisation).

You will also have to add [some scopes](https://developers.google.com/drive/api/guides/api-specific-auth),
including:

- `https://www.googleapis.com/auth/docs`
- `https://www.googleapis.com/auth/drive` in order to be able to edit,
create and delete files with rclone.
- `https://www.googleapis.com/auth/drive.metadata.readonly` which you may
also want to add.

To do this, click Data Access on the left side panel, click "add or
remove scopes", select the three above and press update; or go to the
"Manually add scopes" text box (scroll down), enter
"https://www.googleapis.com/auth/docs,https://www.googleapis.com/auth/drive,https://www.googleapis.com/auth/drive.metadata.readonly",
press add to table, then update.

You should now see the three scopes on your Data access page. Now press save
at the bottom!

6. After adding scopes, click Audience.
Scroll down and click "+ Add users". Add yourself as a test user and press save.

7. Go to Overview on the left panel, click "Create OAuth client". Choose
an application type of "Desktop app" and click "Create". (the default name is fine)

8. It will show you a client ID and client secret. Make a note of these.

(If you selected "External" at Step 5 continue to Step 9.
If you chose "Internal" you don't need to publish and can skip straight to
Step 10 but your destination drive must be part of the same Google Workspace.)

@@ -1941,9 +1951,10 @@ testing mode would also be sufficient.

(Thanks to @balazer on github for these instructions.)

Sometimes, creation of an OAuth consent in Google API Console fails due to an
error message "The request failed because changes to one of the field of the
resource is not supported". As a convenient workaround, the necessary Google
Drive API key can be created on the
[Python Quickstart](https://developers.google.com/drive/api/v3/quickstart/python)
page. Just push the Enable the Drive API button to receive the Client ID and Secret.
Note that it will automatically create a new project in the API Console.
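
Once you have the client ID and secret, they are normally entered when creating or editing the Drive remote with `rclone config`. As a rough sketch only (the remote name `gdrive` and the placeholder values are assumptions, and the token is written by the OAuth flow rather than by hand), the resulting config section looks something like this:

```ini
[gdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive
# token = {...}  added automatically after the browser authorisation step
```
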
@@ -225,5 +225,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -577,7 +577,8 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).

The implementation of `--dump headers`,
`--dump bodies`, `--dump auth` for debugging isn't the same as

@@ -827,5 +827,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -571,7 +571,11 @@ When Images are downloaded this strips EXIF location (according to the
docs and my tests). This is a limitation of the Google Photos API and
is covered by [bug #112096115](https://issuetracker.google.com/issues/112096115).

**The current google API does not allow photos to be downloaded at original
resolution. This is very important if you are, for example, relying on
"Google Photos" as a backup of your photos. You will not be able to use
rclone to redownload original images. You could use 'google takeout'
to recover the original photos as a last resort**

**NB** you **can** use the [--gphotos-proxy](#gphotos-proxy) flag to use a
headless browser to download images in full resolution.

@@ -658,7 +662,7 @@ client_id stops working) then you can make your own.
Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id).
You will need these scopes instead of the drive ones detailed:

```text
https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata
@@ -468,10 +468,10 @@ HiDrive is able to store symbolic links (*symlinks*) by design,
for example, when unpacked from a zip archive.

There exists no direct mechanism to manage native symlinks in remotes.
As such this implementation has chosen to ignore any native symlinks present in
the remote. rclone will not be able to access or show any symlinks stored in
the hidrive-remote. This means symlinks cannot be individually removed, copied,
or moved, except when removing, copying, or moving the parent folder.

*This does not affect the `.rclonelink`-files
that rclone uses to encode and store symbolic links.*
@@ -296,5 +296,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -589,12 +589,14 @@ See the [metadata](/docs/#metadata) docs for more info.
Note that Jottacloud is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".

There are quite a few characters that can't be in Jottacloud file names.
Rclone will map these names to and from an identical looking unicode
equivalent. For example, if a file has a ? in it, it will be mapped to ？ instead.

Jottacloud only supports filenames up to 255 characters in length.

## Troubleshooting

Jottacloud exhibits some inconsistent behaviours regarding deleted files and
folders which may cause Copy, Move and DirMove operations to previously
deleted paths to fail. Emptying the trash should help in such cases.
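
For illustration (the remote name `jotta:` is an assumption), the trash can be emptied with the cleanup command:

```console
rclone cleanup jotta:
```
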
@@ -244,12 +244,13 @@ Note that Koofr is case insensitive so you can't have a file called

### Koofr

This is the original [Koofr](https://koofr.eu) storage provider used as the main
example and described in the [configuration](#configuration) section above.

### Digi Storage

[Digi Storage](https://www.digi.ro/servicii/online/digi-storage) is a cloud
storage service run by [Digi.ro](https://www.digi.ro/) that provides a Koofr API.

Here is an example of how to make a remote called `ds`. First run:

@@ -318,9 +319,11 @@ y/e/d> y

### Other

You may also want to use another, public or private storage provider that
runs a Koofr API compatible service, by simply providing the base URL to
connect to.

Here is an example of how to make a remote called `other`. First run:

```console
rclone config
@@ -310,13 +310,18 @@ Properties:

### Process `killed`

On accounts with large files or a large number of files, memory usage can
significantly increase when executing list/sync instructions. When running on
cloud providers (like AWS with EC2), check if the instance type has sufficient
memory/CPU to execute the commands. Use the resource monitoring tools to
inspect after sending the commands. Look [at this issue](https://forum.rclone.org/t/rclone-with-mega-appears-to-work-only-in-some-accounts/40233/4).

## Limitations

This backend uses the [go-mega go library](https://github.com/t3rm1n4l/go-mega),
an open source go library implementing the Mega API. There doesn't appear to be any
documentation for the mega protocol beyond the [mega C++ SDK](https://github.com/meganz/sdk)
source code, so there are likely quite a few errors still remaining in this library.

Mega allows duplicate files which may confuse rclone.
@@ -1000,25 +1000,36 @@ See the [metadata](/docs/#metadata) docs for more info.

### Impersonate other users as Admin

Unlike Google Drive and impersonating any domain user via service accounts,
OneDrive requires you to authenticate as an admin account, and manually set up
a remote per user you wish to impersonate.

1. In [Microsoft 365 Admin Center](https://admin.microsoft.com), open each user
you need to "impersonate" and go to the OneDrive section. There is a heading
called "Get access to files"; click it to create a link of the format
`https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/`, which also
changes the permissions so your admin user has access.
2. Then in PowerShell run the following commands:

```console
Install-Module Microsoft.Graph -Scope CurrentUser -Repository PSGallery -Force
Import-Module Microsoft.Graph.Files
Connect-MgGraph -Scopes "Files.ReadWrite.All"
# Follow the steps to allow access to your admin user
# Then run this for each user you want to impersonate to get the Drive ID
Get-MgUserDefaultDrive -UserId '{emailaddress}'
# This will give you output of the format:
# Name     Id       DriveType CreatedDateTime
# ----     --       --------- ---------------
# OneDrive b!XYZ123 business  14/10/2023 1:00:58 pm
```

3. Then in rclone add a onedrive remote type, and use the `Type in driveID`
with the DriveID you got in the previous step. One remote per user. It will
then confirm the drive ID, and hopefully give you a message of
`Found drive "root" of type "business"` and then include the URL of the format
`https://{tenant}-my.sharepoint.com/personal/{user_name_domain_tld}/Documents`
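
As a rough illustration of the result (the remote name below is an assumption, the drive ID is the example value from the output above, and the token is written by `rclone config` rather than by hand), each per-user remote ends up with a config section roughly like:

```ini
[onedrive-user1]
type = onedrive
drive_id = b!XYZ123
drive_type = business
# token = {...}  filled in automatically during rclone config
```
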
## Limitations

@@ -1040,11 +1051,16 @@ in it will be mapped to `?` instead.

### File sizes

The largest allowed file size is 250 GiB for both OneDrive Personal and OneDrive
for Business [(Updated 13 Jan 2021)](https://support.microsoft.com/en-us/office/invalid-file-names-and-file-types-in-onedrive-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa?ui=en-us&rs=en-us&ad=us#individualfilesize).

### Path length

The entire path, including the file name, must contain fewer than 400
characters for OneDrive, OneDrive for Business and SharePoint Online. If you
are encrypting file and folder names with rclone, you may want to pay attention
to this limitation because the encrypted names are typically longer than the
original ones.

### Number of files
@@ -1053,7 +1069,8 @@ OneDrive seems to be OK with at least 50,000 files in a folder, but at
list files: UnknownError:`. See
[#2707](https://github.com/rclone/rclone/issues/2707) for more info.

An official document about the limitations for different types of OneDrive can
be found [here](https://support.office.com/en-us/article/invalid-file-names-and-file-types-in-onedrive-onedrive-for-business-and-sharepoint-64883a5d-228e-48f5-b3d2-eb39e07630fa).

## Versions
@@ -1089,24 +1106,30 @@ command is required to be run by a SharePoint admin. If you are an
admin, you can run these commands in PowerShell to change that
setting:

1. `Install-Module -Name Microsoft.Online.SharePoint.PowerShell` (in case you
haven't installed this already)
2. `Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking`
3. `Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM`
(replacing `YOURSITE`, `YOU`, `YOURSITE.COM` with the actual values; this will
prompt for your credentials)
4. `Set-SPOTenant -EnableMinimumVersionRequirement $False`
5. `Disconnect-SPOService` (to disconnect from the server)

*Below are the steps for normal users to disable versioning. If you don't see
the "No Versioning" option, make sure the above requirements are met.*

User [Weropol](https://github.com/Weropol) has found a method to disable
versioning on OneDrive:

1. Open the settings menu by clicking on the gear symbol at the top of the
OneDrive Business page.
2. Click Site settings.
3. Once on the Site settings page, navigate to Site Administration > Site libraries
and lists.
4. Click Customize "Documents".
5. Click General Settings > Versioning Settings.
6. Under Document Version History select the option No versioning.
Note: This will disable the creation of new file versions, but will not remove
any previous versions. Your documents are safe.
7. Apply the changes by clicking OK.
8. Use rclone to upload or modify files. (I also use the --no-update-modtime flag)
9. Restore the versioning settings after using rclone. (Optional)
@@ -1120,20 +1143,25 @@ querying each file for versions it can be quite slow. Rclone does
`--checkers` tests in parallel. The command also supports `--interactive`/`i`
or `--dry-run` which is a great way to see what it would do.

```text
rclone cleanup --interactive remote:path/subdir # interactively remove all old versions for path/subdir
rclone cleanup remote:path/subdir               # unconditionally remove all old versions for path/subdir
```

**NB** Onedrive personal can't currently delete versions

## Troubleshooting

### Excessive throttling or blocked on SharePoint

If you experience excessive throttling or are being blocked on SharePoint then
it may help to set the user agent explicitly with a flag like this:
`--user-agent "ISV|rclone.org|rclone/v1.55.1"`

The specific details can be found in the Microsoft document:
[Avoid getting throttled or blocked in SharePoint Online](https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online#how-to-decorate-your-http-traffic-to-avoid-throttling)
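
For illustration (the remote name `onedrive:` and the paths are placeholders; the user agent string is the one suggested above):

```console
rclone sync ./reports onedrive:reports --user-agent "ISV|rclone.org|rclone/v1.55.1"
```
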
### Unexpected file size/hash differences on Sharepoint

It is a
[known](https://github.com/OneDrive/onedrive-api-docs/issues/935#issuecomment-441741631)
@@ -1144,57 +1172,66 @@ report inconsistent file sizes. To use rclone with such
affected files on Sharepoint, you
may disable these checks with the following command line arguments:

```text
--ignore-checksum --ignore-size
```

Alternatively, if you have write access to the OneDrive files, it may be possible
to fix this problem for certain files, by attempting the steps below.
Open the web interface for [OneDrive](https://onedrive.live.com) and find the
affected files (which will be in the error messages/log for rclone). Simply click
on each of these files, causing OneDrive to open them on the web. This will cause
each file to be converted in place to a format that is functionally equivalent
but which will no longer trigger the size discrepancy. Once all problematic files
are converted you will no longer need the ignore options above.
### Replacing/deleting existing files on Sharepoint gets "item not found"

It is a [known](https://github.com/OneDrive/onedrive-api-docs/issues/1068) issue
that Sharepoint (not OneDrive or OneDrive for Business) may return "item not
found" errors when users try to replace or delete uploaded files; this seems to
mainly affect Office files (.docx, .xlsx, etc.) and web files (.html, .aspx, etc.).
As a workaround, you may use
the `--backup-dir <BACKUP_DIR>` command line argument so rclone moves the
files to be replaced/deleted into a given backup directory (instead of directly
replacing/deleting them). For example, to instruct rclone to move the files into
the directory `rclone-backup-dir` on backend `mysharepoint`, you may use:

```text
--backup-dir mysharepoint:rclone-backup-dir
```
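
For illustration, a full invocation might look like this (the source and destination paths are placeholders; `mysharepoint` and `rclone-backup-dir` are the names used above):

```console
rclone sync ./Documents mysharepoint:Documents --backup-dir mysharepoint:rclone-backup-dir
```
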
### access\_denied (AADSTS65005)

```text
Error: access_denied
Code: AADSTS65005
Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
```

This means that rclone can't use the OneDrive for Business API with your account.
You can't do much about it, maybe write an email to your admins.

However, there are other ways to interact with your OneDrive account. Have a look
at the WebDAV backend: <https://rclone.org/webdav/#sharepoint>
### invalid\_grant (AADSTS50076)

```text
Error: invalid_grant
Code: AADSTS50076
Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
```

If you see the error above after enabling multi-factor authentication for your
account, you can fix it by refreshing your OAuth refresh token. To do that, run
`rclone config`, and choose to edit your OneDrive backend. Then, you don't need
to actually make any changes until you reach this question:
`Already have a token - refresh?`. For this question, answer `y` and go through
the process to refresh your token, just like the first time the backend is
configured. After this, rclone should work again for this backend.

### Invalid request when making public links
On Sharepoint and OneDrive for Business, `rclone link` may return an "Invalid
request" error. A possible cause is that the organisation admin didn't allow
@@ -1205,46 +1242,64 @@ permissions as an admin, take a look at the docs:

### Can not access `Shared` with me files

Shared with me files are not supported by rclone
[currently](https://github.com/rclone/rclone/issues/4062), but there is a workaround:

1. Visit [https://onedrive.live.com](https://onedrive.live.com/)
2. Right click an item in `Shared`, then click `Add shortcut to My files` in the context
menu.
3. The shortcut will appear in `My files`, you can access it with rclone, it
behaves like a normal folder/file.
### Live Photos uploaded from iOS (small video clips in .heic files)

The iOS OneDrive app introduced [upload and storage](https://techcommunity.microsoft.com/t5/microsoft-onedrive-blog/live-photos-come-to-onedrive/ba-p/1953452)
of [Live Photos](https://support.apple.com/en-gb/HT207310) in 2020.
The usage and download of these uploaded Live Photos is unfortunately still
work-in-progress and this introduces several issues when copying, synchronising
and mounting – both in rclone and in the native OneDrive client on Windows.

The root cause can easily be seen if you locate one of your Live Photos in the
OneDrive web interface. Then download the photo from the web interface. You
will then see that the size of the downloaded .heic file is smaller than the size
displayed in the web interface. The downloaded file is smaller because it only
contains a single frame (still photo) extracted from the Live Photo (movie)
stored in OneDrive.

The different sizes will cause `rclone copy/sync` to repeatedly recopy
unmodified photos something like this:

```text
DEBUG : 20230203_123826234_iOS.heic: Sizes differ (src 4470314 vs dst 1298667)
DEBUG : 20230203_123826234_iOS.heic: sha1 = fc2edde7863b7a7c93ca6771498ac797f8460750 OK
INFO  : 20230203_123826234_iOS.heic: Copied (replaced existing)
```

These recopies can be worked around by adding `--ignore-size`. Please note that
this workaround only syncs the still picture, not the movie clip,
and relies on modification dates being correctly updated on all files in all situations.

The different sizes will also cause `rclone check` to report size errors something
like this:

```text
ERROR : 20230203_123826234_iOS.heic: sizes differ
```

These check errors can be suppressed by adding `--ignore-size`.

The different sizes will also cause `rclone mount` to fail downloading with an
error something like this:

```text
ERROR : 20230203_123826234_iOS.heic: ReadFileHandle.Read error: low level retry 1/10: unexpected EOF
```

or like this when using `--cache-mode=full`:

```text
INFO  : 20230203_123826234_iOS.heic: vfs cache: downloader: error count now 1: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
ERROR : 20230203_123826234_iOS.heic: vfs cache: failed to download: vfs reader: failed to write to cache file: 416 Requested Range Not Satisfiable:
```
@@ -217,6 +217,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -355,25 +355,25 @@ Properties:

## Limitations

This backend uses the
[Proton-API-Bridge](https://github.com/henrybear327/Proton-API-Bridge), which
is based on [go-proton-api](https://github.com/henrybear327/go-proton-api), a
fork of the [official repo](https://github.com/ProtonMail/go-proton-api).

There is no official API documentation available from Proton Drive. But, thanks
to Proton open sourcing [proton-go-api](https://github.com/ProtonMail/go-proton-api)
and the web, iOS, and Android client codebases, we don't need to completely
reverse engineer the APIs by observing the web client traffic!

[proton-go-api](https://github.com/ProtonMail/go-proton-api) provides the basic
building blocks of API calls and error handling, such as 429 exponential
back-off, but it is pretty much just a barebone interface to the Proton API.
For example, the encryption and decryption of the Proton Drive file are not
provided in this library.

The Proton-API-Bridge attempts to bridge the gap, so rclone can be built on
top of this quickly. This codebase handles the intricate tasks before and after
calling Proton APIs, particularly the complex encryption scheme, allowing
developers to implement features for other software on top of this codebase.
There are likely quite a few errors in this library, as there isn't official
documentation available.
@@ -341,5 +341,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -282,10 +282,13 @@ Properties:

## Storage usage

The storage usage in Quatrix is restricted to the account during the purchase.
You can restrict any user with a smaller storage limit. The account limit is
applied if the user has no custom storage limit. Once you've reached the limit,
the upload of files will fail. This can be fixed by freeing up the space or
increasing the quota.

## Server-side operations

Quatrix supports server-side operations (copy and move). In case of conflict,
files are overwritten during server-side operation.
@@ -2298,7 +2298,6 @@ This takes the following parameters

This returns an empty result on success, or an error.

This command takes an "fs" parameter. If this parameter is not
supplied and if there is only one VFS in use then that VFS will be
used. If there is more than one VFS in use then the "fs" parameter
docs/content/s3.md: file diff suppressed because it is too large (1574 lines changed).
@@ -352,5 +352,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -480,10 +480,23 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).

## Known issues

If you get errors like `too many open files` this usually happens when the
default `ulimit` for system max open files is exceeded. Native Storj protocol
opens a large number of TCP connections (each of which is counted as an open
file). For a single upload stream you can expect 110 TCP connections to be
opened. For a single download stream you can expect 35. This batch of
connections will be opened for every 64 MiB segment and you should also
expect TCP connections to be reused. If you do many transfers you eventually
open a connection to most storage nodes (thousands of nodes).

To fix these, please raise your system limits. You can do this by issuing a
`ulimit -n 65536` just before you run rclone. To change the limits more
permanently you can add this to your shell startup script,
e.g. `$HOME/.bashrc`, or change the system-wide configuration,
usually `/etc/sysctl.conf` and/or `/etc/security/limits.conf`, but please
refer to your operating system manual.
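
For illustration (the remote name `storj:`, bucket and paths are placeholders), raising the limit for the current shell session before a transfer could look like this:

```console
# Raise the open-file limit for this shell session, then run rclone in the same session
ulimit -n 65536
rclone copy /local/backups storj:my-bucket/backups
```
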
@@ -300,5 +300,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -721,16 +721,23 @@ setting up a swift remote.

## OVH Cloud Archive

To use rclone with OVH cloud archive, first use `rclone config` to set up a
`swift` backend with OVH, choosing `pca` as the `storage_policy`.

### Uploading Objects

Uploading objects to OVH cloud archive is no different to object storage; you
simply run the command you like (move, copy or sync) to upload the objects.
Once uploaded the objects will show in a "Frozen" state within the OVH control panel.
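
For illustration (the remote name `ovh-pca:` and the container name are placeholders for whatever you configured above):

```console
rclone copy /data/archive-2024 ovh-pca:archive-container
```
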
### Retrieving Objects

To retrieve objects use `rclone copy` as normal. If the objects are in a frozen
state then rclone will ask for them all to be unfrozen and it will wait at the
end of the output with a message like the following:

```text
2019/03/23 13:06:33 NOTICE: Received retry after error - sleeping until 2019-03-23T13:16:33.481657164+01:00 (9m59.99985121s)
```

Rclone will wait for the time specified then retry the copy.
@@ -278,4 +278,5 @@ exposed in the API. Backends without this capability cannot determine
free space for an rclone mount or use policy `mfs` (most free space)
as a member of an rclone union remote.

See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
@@ -378,7 +378,9 @@ ownCloud supports modified times using the `X-OC-Mtime` header.

This is configured in an identical way to ownCloud. Note that
Nextcloud initially did not support streaming of files (`rcat`) whereas
ownCloud did, but [this](https://github.com/nextcloud/nextcloud-snap/issues/365)
seems to be fixed as of 2020-11-27 (tested with rclone v1.53.1 and Nextcloud
Server v19).

### ownCloud Infinite Scale
@@ -421,7 +423,7 @@ Set the `vendor` to `sharepoint`.

Your config file should look like this:

```ini
[sharepoint]
type = webdav
url = https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents
@@ -432,17 +434,19 @@ pass = encryptedpassword

### Sharepoint with NTLM Authentication

Use this option in case your (hosted) Sharepoint is not tied to OneDrive
accounts and uses NTLM authentication.

To get the `url` configuration, similarly to the above, first navigate to the
desired directory in your browser to get the URL, then strip everything after
the name of the opened directory.

Example:
If the URL is:
<https://example.sharepoint.com/sites/12345/Documents/Forms/AllItems.aspx>

The configuration to use would be:
<https://example.sharepoint.com/sites/12345/Documents>

Set the `vendor` to `sharepoint-ntlm`.
@@ -451,7 +455,7 @@ set `user` to `DOMAIN\username`.

Your config file should look like this:

```ini
[sharepoint]
type = webdav
url = https://[YOUR-DOMAIN]/some-path-to/Documents
@@ -462,11 +466,15 @@ pass = encryptedpassword

#### Required Flags for SharePoint

As SharePoint does some special things with uploaded documents, you won't be
able to use the document's size or hash to compare if a file has
been changed since the upload / which file is newer.

For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.)
from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure
Rclone uses the "Last Modified" datetime property to compare your documents:

```text
--ignore-size --ignore-checksum --update
```
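
For illustration, appended to a sync against the `[sharepoint]` remote configured above (the local and remote paths are placeholders):

```console
rclone sync ./reports sharepoint:Documents/reports --ignore-size --ignore-checksum --update
```
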
@@ -477,7 +485,6 @@ Read [rclone serve webdav](commands/rclone_serve_webdav/) for more details.

rclone serve supports modified times using the `X-OC-Mtime` header.

### dCache

dCache is a storage system that supports many protocols and

@@ -493,7 +500,7 @@ password, instead enter your Macaroon as the `bearer_token`.

The config will end up looking something like this.

```ini
[dcache]
type = webdav
url = https://dcache...
@@ -503,8 +510,9 @@ pass =
bearer_token = your-macaroon
```

There is a [script](https://github.com/sara-nl/GridScripts/blob/master/get-macaroon)
that obtains a Macaroon from a dCache WebDAV endpoint, and creates an rclone config
file.

Macaroons may also be obtained from the dCacheView
web-browser/JavaScript client that comes with dCache.
@@ -274,7 +274,9 @@ to upload a 30 GiB file set a timeout of `2 * 30 = 60m`, that is
`--timeout 60m`.
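
For illustration (the remote name `yandex:` and the file are placeholders), a large upload with the raised timeout could look like:

```console
rclone copy ./backup-30GiB.tar yandex:backups --timeout 60m
```
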
Having a Yandex Mail account is mandatory to use the Yandex.Disk subscription.
Token generation will work without a mail account, but Rclone won't be able to
complete any actions.

```text
[403 - DiskUnsupportedUserAccountTypeError] User account type is not supported.
```
@@ -290,12 +290,15 @@ Properties:

## Setting up your own client_id

For Zoho we advise you to set up your own client_id. To do so you have to
complete the following steps.

1. Log in to the [Zoho API Console](https://api-console.zoho.com)

2. Create a new client of type "Server-based Application". The name and website
don't matter, but you must add the redirect URL `http://localhost:53682/`.

3. Once the client is created, you can go to the settings tab and enable it in
other regions.

The client id and client secret can now be used with rclone.
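
As a rough sketch of where these values end up (the remote name and placeholder values are assumptions; the region and token are filled in during `rclone config`), the resulting remote looks something like:

```ini
[zoho]
type = zoho
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
# region and token are set during rclone config
```
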