Mirror of https://github.com/rclone/rclone.git (synced 2026-01-06 10:33:34 +00:00)

Compare commits (21 commits)
| SHA1 |
|---|
| f2d16ab4c5 |
| c0fc4fe0ca |
| 669b2f2669 |
| e1ba10a86e |
| 022442cf58 |
| 5cc4488294 |
| ec9566c5c3 |
| f6976eb4c4 |
| c242c00799 |
| bf954b74ff |
| 88f0770d0a |
| 41d905c9b0 |
| 300a063b5e |
| 61bf29ed5e |
| 3191717572 |
| 961dfe97b5 |
| 22612b4b38 |
| b9927461c3 |
| 6d04be99f2 |
| 06ae0dfa54 |
| 912f29b5b8 |
MANUAL.html (generated, 57 changes)

@@ -81,7 +81,7 @@
 <header id="title-block-header">
 <h1 class="title">rclone(1) User Manual</h1>
 <p class="author">Nick Craig-Wood</p>
-<p class="date">Sep 24, 2024</p>
+<p class="date">Nov 15, 2024</p>
 </header>
 <h1 id="rclone-syncs-your-files-to-cloud-storage">Rclone syncs your files to cloud storage</h1>
 <p><img width="50%" src="https://rclone.org/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" ></p>
@@ -8323,12 +8323,14 @@ file2.avi</code></pre>
 <p>Adds path/file names to an rclone command based on rules in a named file. The file contains a list of remarks and pattern rules. Include rules start with <code>+</code> and exclude rules with <code>-</code>. <code>!</code> clears existing rules. Rules are processed in the order they are defined.</p>
 <p>This flag can be repeated. See above for the order filter flags are processed in.</p>
 <p>Arrange the order of filter rules with the most restrictive first and work down.</p>
+<p>Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. <em>Use <code>-vv --dump filters</code> to see how they appear in the final regexp.</em></p>
 <p>E.g. for <code>filter-file.txt</code>:</p>
 <pre><code># a sample filter rule file
 - secret*.jpg
 + *.jpg
 + *.png
 + file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
 - /dir/Trash/**
 + /dir/**
 # exclude everything else
@@ -10418,7 +10420,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
 <tr class="odd">
 <td>pCloud</td>
 <td style="text-align: center;">MD5, SHA1 ⁷</td>
-<td style="text-align: center;">R</td>
+<td style="text-align: center;">R/W</td>
 <td style="text-align: center;">No</td>
 <td style="text-align: center;">No</td>
 <td style="text-align: center;">W</td>
@@ -11968,7 +11970,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
 --tpslimit float Limit HTTP transactions per second to this
 --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
 --use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")</code></pre>
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")</code></pre>
 <h2 id="performance">Performance</h2>
 <p>Flags helpful for increasing performance.</p>
 <pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -17168,7 +17170,7 @@ acl = private
 upload_cutoff = 5M
 chunk_size = 5M
 copy_cutoff = 5M</code></pre>
-<p><a href="https://www.online.net/en/storage/c14-cold-storage">C14 Cold Storage</a> is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" <code>storage_class</code>. So you can configure your remote with the <code>storage_class = GLACIER</code> option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)</p>
+<p><a href="https://www.scaleway.com/en/glacier-cold-storage/">Scaleway Glacier</a> is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" <code>storage_class</code>. So you can configure your remote with the <code>storage_class = GLACIER</code> option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)</p>
 <h3 id="lyve">Seagate Lyve Cloud</h3>
 <p><a href="https://www.seagate.com/gb/en/services/cloud/storage/">Seagate Lyve Cloud</a> is an S3 compatible object storage platform from <a href="https://seagate.com/">Seagate</a> intended for enterprise use.</p>
 <p>Here is a config run through for a remote called <code>remote</code> - you may choose a different name of course. Note that to create an access key and secret key you will need to create a service account first.</p>
@@ -24613,7 +24615,7 @@ rclone backend copyid drive: ID1 path1 ID2 path2</code></pre>
 <li><p>Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".</p></li>
 <li><p>Choose an application type of "Desktop app" and click "Create". (the default name is fine)</p></li>
 <li><p>It will show you a client ID and client secret. Make a note of these.</p>
-<p>(If you selected "External" at Step 5 continue to Step 9. If you chose "Internal" you don't need to publish and can skip straight to Step 10 but your destination drive must be part of the same Google Workspace.)</p></li>
+<p>(If you selected "External" at Step 5 continue to Step 10. If you chose "Internal" you don't need to publish and can skip straight to Step 11 but your destination drive must be part of the same Google Workspace.)</p></li>
 <li><p>Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm. You will also want to add yourself as a test user.</p></li>
 <li><p>Provide the noted client ID and client secret to rclone.</p></li>
 </ol>
@@ -36964,6 +36966,51 @@ $ tree /tmp/c
 <li>"error": return an error based on option value</li>
 </ul>
 <h1 id="changelog-1">Changelog</h1>
+<h2 id="v1.68.2---2024-11-15">v1.68.2 - 2024-11-15</h2>
+<p><a href="https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2">See commits</a></p>
+<ul>
+<li>Security fixes
+<ul>
+<li>local backend: CVE-2024-52522: fix permission and ownership on symlinks with <code>--links</code> and <code>--metadata</code> (Nick Craig-Wood)
+<ul>
+<li>Only affects users using <code>--metadata</code> and <code>--links</code> and copying files to the local backend</li>
+<li>See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv</li>
+</ul></li>
+<li>build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+<ul>
+<li>This is an issue in a dependency which is used for JWT certificates</li>
+<li>See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r</li>
+</ul></li>
+</ul></li>
+<li>Bug Fixes
+<ul>
+<li>accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)</li>
+<li>bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)</li>
+<li>dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)</li>
+<li>serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)</li>
+<li>doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)</li>
+</ul></li>
+<li>Local
+<ul>
+<li>Fix permission and ownership on symlinks with <code>--links</code> and <code>--metadata</code> (Nick Craig-Wood)</li>
+<li>Fix <code>--copy-links</code> on macOS when cloning (nielash)</li>
+</ul></li>
+<li>Onedrive
+<ul>
+<li>Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)</li>
+</ul></li>
+<li>Pikpak
+<ul>
+<li>Fix cid/gcid calculations for fs.OverrideRemote (wiserain)</li>
+<li>Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)</li>
+</ul></li>
+<li>S3
+<ul>
+<li>Fix crash when using <code>--s3-download-url</code> after migration to SDKv2 (Nick Craig-Wood)</li>
+<li>Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)</li>
+<li>Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)</li>
+</ul></li>
+</ul>
 <h2 id="v1.68.1---2024-09-24">v1.68.1 - 2024-09-24</h2>
 <p><a href="https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1">See commits</a></p>
 <ul>
MANUAL.md (generated, 47 changes)

@@ -1,6 +1,6 @@
 % rclone(1) User Manual
 % Nick Craig-Wood
-% Sep 24, 2024
+% Nov 15, 2024
 
 # Rclone syncs your files to cloud storage
 
@@ -16970,6 +16970,8 @@ processed in.
 Arrange the order of filter rules with the most restrictive first and
 work down.
 
+Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. _Use `-vv --dump filters` to see how they appear in the final regexp._
+
 E.g. for `filter-file.txt`:
 
 # a sample filter rule file
@@ -16977,6 +16979,7 @@ E.g. for `filter-file.txt`:
 + *.jpg
 + *.png
 + file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
 - /dir/Trash/**
 + /dir/**
 # exclude everything else
@@ -19787,7 +19790,7 @@ Here is an overview of the major features of each cloud storage system.
 | OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
 | OpenStack Swift | MD5 | R/W | No | No | R/W | - |
 | Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
-| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
+| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
 | PikPak | MD5 | R | No | No | R | - |
 | Pixeldrain | SHA256 | R/W | No | No | R | RW |
 | premiumize.me | - | - | Yes | No | R | - |
@@ -20494,7 +20497,7 @@ Flags for general networking and HTTP stuff.
 --tpslimit float Limit HTTP transactions per second to this
 --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
 --use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
 ```
 
 
@@ -27833,8 +27836,8 @@ chunk_size = 5M
 copy_cutoff = 5M
 ```
 
-[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
+[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
-So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
+So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
 
 ### Seagate Lyve Cloud {#lyve}
 
@@ -37898,9 +37901,9 @@ then select "OAuth client ID".
 
 9. It will show you a client ID and client secret. Make a note of these.
 
-(If you selected "External" at Step 5 continue to Step 9.
+(If you selected "External" at Step 5 continue to Step 10.
 If you chose "Internal" you don't need to publish and can skip straight to
-Step 10 but your destination drive must be part of the same Google Workspace.)
+Step 11 but your destination drive must be part of the same Google Workspace.)
 
 10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
 You will also want to add yourself as a test user.
@@ -54612,6 +54615,36 @@ Options:
 
 # Changelog
 
+## v1.68.2 - 2024-11-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+
+* Security fixes
+* local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+* Only affects users using `--metadata` and `--links` and copying files to the local backend
+* See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+* build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+* This is an issue in a dependency which is used for JWT certificates
+* See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+* Bug Fixes
+* accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
+* bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
+* dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+* serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
+* doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+* Local
+* Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+* Fix `--copy-links` on macOS when cloning (nielash)
+* Onedrive
+* Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+* Pikpak
+* Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+* Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
+* S3
+* Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
+* Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
+* Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
 ## v1.68.1 - 2024-09-24
 
 [See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
MANUAL.txt (generated, 71 changes)

@@ -1,6 +1,6 @@
 rclone(1) User Manual
 Nick Craig-Wood
-Sep 24, 2024
+Nov 15, 2024
 
 Rclone syncs your files to cloud storage
 
@@ -16428,6 +16428,10 @@ processed in.
 Arrange the order of filter rules with the most restrictive first and
 work down.
 
+Lines starting with # or ; are ignored, and can be used to write
+comments. Inline comments are not supported. Use -vv --dump filters to
+see how they appear in the final regexp.
+
 E.g. for filter-file.txt:
 
 # a sample filter rule file
@@ -16435,6 +16439,7 @@ E.g. for filter-file.txt:
 + *.jpg
 + *.png
 + file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
 - /dir/Trash/**
 + /dir/**
 # exclude everything else
@@ -19269,7 +19274,7 @@ Here is an overview of the major features of each cloud storage system.
 OpenDrive MD5 R/W Yes Partial ⁸ - -
 OpenStack Swift MD5 R/W No No R/W -
 Oracle Object Storage MD5 R/W No No R/W -
-pCloud MD5, SHA1 ⁷ R No No W -
+pCloud MD5, SHA1 ⁷ R/W No No W -
 PikPak MD5 R No No R -
 Pixeldrain SHA256 R/W No No R RW
 premiumize.me - - Yes No R -
@@ -20076,7 +20081,7 @@ Flags for general networking and HTTP stuff.
 --tpslimit float Limit HTTP transactions per second to this
 --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
 --use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default "rclone/v1.68.1")
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.68.2")
 
 Performance
 
@@ -27364,13 +27369,13 @@ rclone like this:
 chunk_size = 5M
 copy_cutoff = 5M
 
-C14 Cold Storage is the low-cost S3 Glacier alternative from Scaleway
+Scaleway Glacier is the low-cost S3 Glacier alternative from Scaleway
 and it works the same way as on S3 by accepting the "GLACIER"
 storage_class. So you can configure your remote with the
-storage_class = GLACIER option to upload directly to C14. Don't forget
+storage_class = GLACIER option to upload directly to Scaleway Glacier.
-that in this state you can't read files back after, you will need to
+Don't forget that in this state you can't read files back after, you
-restore them to "STANDARD" storage_class first before being able to read
+will need to restore them to "STANDARD" storage_class first before being
-them (see "restore" section above)
+able to read them (see "restore" section above)
 
 Seagate Lyve Cloud
 
@@ -37324,9 +37329,9 @@ Here is how to create your own Google Drive client ID for rclone:
 9. It will show you a client ID and client secret. Make a note of
 these.
 
-(If you selected "External" at Step 5 continue to Step 9. If you
+(If you selected "External" at Step 5 continue to Step 10. If you
 chose "Internal" you don't need to publish and can skip straight to
-Step 10 but your destination drive must be part of the same Google
+Step 11 but your destination drive must be part of the same Google
 Workspace.)
 
 10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and
@@ -54290,6 +54295,52 @@ Options:
 
 Changelog
 
+v1.68.2 - 2024-11-15
+
+See commits
+
+- Security fixes
+- local backend: CVE-2024-52522: fix permission and ownership on
+symlinks with --links and --metadata (Nick Craig-Wood)
+- Only affects users using --metadata and --links and copying
+files to the local backend
+- See
+https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+- build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
+(dependabot)
+- This is an issue in a dependency which is used for JWT
+certificates
+- See
+https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+- Bug Fixes
+- accounting: Fix wrong message on SIGUSR2 to enable/disable
+bwlimit (Nick Craig-Wood)
+- bisync: Fix output capture restoring the wrong output for logrus
+(Dimitrios Slamaris)
+- dlna: Fix loggingResponseWriter disregarding log level (Simon
+Bos)
+- serve s3: Fix excess locking which was making serve s3 single
+threaded (Nick Craig-Wood)
+- doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy
+Bush)
+- Local
+- Fix permission and ownership on symlinks with --links and
+--metadata (Nick Craig-Wood)
+- Fix --copy-links on macOS when cloning (nielash)
+- Onedrive
+- Fix Retry-After handling to look at 503 errors also (Nick
+Craig-Wood)
+- Pikpak
+- Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+- Fix fatal crash on startup with token that can't be refreshed
+(Nick Craig-Wood)
+- S3
+- Fix crash when using --s3-download-url after migration to SDKv2
+(Nick Craig-Wood)
+- Storj provider: fix server-side copy of files bigger than 5GB
+(Kaloyan Raev)
+- Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
 v1.68.1 - 2024-09-24
 
 See commits
@@ -6,6 +6,7 @@ package local
 import (
 "context"
 "fmt"
+"path/filepath"
 "runtime"
 
 "github.com/go-darwin/apfs"
@@ -22,7 +23,7 @@ import (
 //
 // If it isn't possible then return fs.ErrorCantCopy
 func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
-if runtime.GOOS != "darwin" || f.opt.TranslateSymlinks || f.opt.NoClone {
+if runtime.GOOS != "darwin" || f.opt.NoClone {
 return nil, fs.ErrorCantCopy
 }
 srcObj, ok := src.(*Object)
@@ -30,6 +31,9 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 fs.Debugf(src, "Can't clone - not same remote type")
 return nil, fs.ErrorCantCopy
 }
+if f.opt.TranslateSymlinks && srcObj.translatedLink { // in --links mode, use cloning only for regular files
+return nil, fs.ErrorCantCopy
+}
 
 // Fetch metadata if --metadata is in use
 meta, err := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
@@ -44,11 +48,18 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
 return nil, err
 }
 
-err = Clone(srcObj.path, f.localPath(remote))
+srcPath := srcObj.path
+if f.opt.FollowSymlinks { // in --copy-links mode, find the real file being pointed to and pass that in instead
+srcPath, err = filepath.EvalSymlinks(srcPath)
+if err != nil {
+return nil, err
+}
+}
+
+err = Clone(srcPath, f.localPath(remote))
 if err != nil {
 return nil, err
 }
-fs.Debugf(remote, "server-side cloned!")
 
 // Set metadata if --metadata is in use
 if meta != nil {
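For context on the `--copy-links` change above, here is a minimal, standalone sketch (not part of the diff; file paths are purely illustrative) of how `filepath.EvalSymlinks` resolves a link to the real file before a clone is attempted:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Create a target file and a symlink pointing at it (illustrative paths).
	_ = os.WriteFile("/tmp/target.txt", []byte("hello"), 0o644)
	_ = os.Symlink("/tmp/target.txt", "/tmp/link.txt")

	// EvalSymlinks returns the path with all symlinks resolved, which is
	// the path the patched Copy hands to Clone in --copy-links mode.
	resolved, err := filepath.EvalSymlinks("/tmp/link.txt")
	if err != nil {
		fmt.Println("resolve failed:", err)
		return
	}
	fmt.Println("cloning from", resolved) // e.g. /tmp/target.txt (or /private/tmp/... on macOS)
}
```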
backend/local/lchmod.go (new file, 16 lines)

@@ -0,0 +1,16 @@
+//go:build windows || plan9 || js || linux
+
+package local
+
+import "os"
+
+const haveLChmod = false
+
+// lChmod changes the mode of the named file to mode. If the file is a symbolic
+// link, it changes the link, not the target. If there is an error,
+// it will be of type *PathError.
+func lChmod(name string, mode os.FileMode) error {
+	// Can't do this safely on this OS - chmoding a symlink always
+	// changes the destination.
+	return nil
+}
backend/local/lchmod_unix.go (new file, 41 lines)

@@ -0,0 +1,41 @@
+//go:build !windows && !plan9 && !js && !linux
+
+package local
+
+import (
+	"os"
+	"syscall"
+
+	"golang.org/x/sys/unix"
+)
+
+const haveLChmod = true
+
+// syscallMode returns the syscall-specific mode bits from Go's portable mode bits.
+//
+// Borrowed from the syscall source since it isn't public.
+func syscallMode(i os.FileMode) (o uint32) {
+	o |= uint32(i.Perm())
+	if i&os.ModeSetuid != 0 {
+		o |= syscall.S_ISUID
+	}
+	if i&os.ModeSetgid != 0 {
+		o |= syscall.S_ISGID
+	}
+	if i&os.ModeSticky != 0 {
+		o |= syscall.S_ISVTX
+	}
+	return o
+}
+
+// lChmod changes the mode of the named file to mode. If the file is a symbolic
+// link, it changes the link, not the target. If there is an error,
+// it will be of type *PathError.
+func lChmod(name string, mode os.FileMode) error {
+	// NB linux does not support AT_SYMLINK_NOFOLLOW as a parameter to fchmodat
+	// and returns ENOTSUP if you try, so we don't support this on linux
+	if e := unix.Fchmodat(unix.AT_FDCWD, name, syscallMode(mode), unix.AT_SYMLINK_NOFOLLOW); e != nil {
+		return &os.PathError{Op: "lChmod", Path: name, Err: e}
+	}
+	return nil
+}
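A minimal sketch (not part of the diff; paths are illustrative) of the underlying technique the new helper relies on: `fchmodat` with `AT_SYMLINK_NOFOLLOW` changes the mode of the link itself rather than its target.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

func main() {
	// Create a file and a symlink to it (illustrative paths).
	_ = os.WriteFile("/tmp/file.txt", []byte("data"), 0o644)
	_ = os.Symlink("/tmp/file.txt", "/tmp/file-link")

	// Change the mode of the link itself, not its target. This mirrors the
	// call made by lChmod above; on Linux the kernel rejects
	// AT_SYMLINK_NOFOLLOW for fchmodat, which is why the stub build
	// (lchmod.go) sets haveLChmod = false there.
	if err := unix.Fchmodat(unix.AT_FDCWD, "/tmp/file-link", 0o700, unix.AT_SYMLINK_NOFOLLOW); err != nil {
		fmt.Println("fchmodat:", err)
	}
}
```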
@@ -1,4 +1,4 @@
-//go:build windows || plan9 || js
+//go:build plan9 || js
 
 package local
 
backend/local/lchtimes_windows.go (new file, 19 lines)

@@ -0,0 +1,19 @@
+//go:build windows
+
+package local
+
+import (
+	"time"
+)
+
+const haveLChtimes = true
+
+// lChtimes changes the access and modification times of the named
+// link, similar to the Unix utime() or utimes() functions.
+//
+// The underlying filesystem may truncate or round the values to a
+// less precise time unit.
+// If there is an error, it will be of type *PathError.
+func lChtimes(name string, atime time.Time, mtime time.Time) error {
+	return setTimes(name, atime, mtime, time.Time{}, true)
+}
@@ -73,7 +73,6 @@ func TestUpdatingCheck(t *testing.T) {
 r.WriteFile(filePath, "content updated", time.Now())
 _, err = in.Read(buf)
 require.NoError(t, err)
-
 }
 
 // Test corrupted on transfer
@@ -224,7 +223,7 @@ func TestHashOnUpdate(t *testing.T) {
 assert.Equal(t, "9a0364b9e99bb480dd25e1f0284c8555", md5)
 
 // Reupload it with different contents but same size and timestamp
-var b = bytes.NewBufferString("CONTENT")
+b := bytes.NewBufferString("CONTENT")
 src := object.NewStaticObjectInfo(filePath, when, int64(b.Len()), true, nil, f)
 err = o.Update(ctx, b, src)
 require.NoError(t, err)
@@ -269,22 +268,66 @@ func TestMetadata(t *testing.T) {
 r := fstest.NewRun(t)
 const filePath = "metafile.txt"
 when := time.Now()
-const dayLength = len("2001-01-01")
-whenRFC := when.Format(time.RFC3339Nano)
 r.WriteFile(filePath, "metadata file contents", when)
 f := r.Flocal.(*Fs)
 
+// Set fs into "-l" / "--links" mode
+f.opt.TranslateSymlinks = true
+
+// Write a symlink to the file
+symlinkPath := "metafile-link.txt"
+osSymlinkPath := filepath.Join(f.root, symlinkPath)
+symlinkPath += linkSuffix
+require.NoError(t, os.Symlink(filePath, osSymlinkPath))
+symlinkModTime := fstest.Time("2002-02-03T04:05:10.123123123Z")
+require.NoError(t, lChtimes(osSymlinkPath, symlinkModTime, symlinkModTime))
+
 // Get the object
 obj, err := f.NewObject(ctx, filePath)
 require.NoError(t, err)
 o := obj.(*Object)
 
+// Get the symlink object
+symlinkObj, err := f.NewObject(ctx, symlinkPath)
+require.NoError(t, err)
+symlinkO := symlinkObj.(*Object)
+
+// Record metadata for o
+oMeta, err := o.Metadata(ctx)
+require.NoError(t, err)
+
+// Test symlink first to check it doesn't mess up file
+t.Run("Symlink", func(t *testing.T) {
+testMetadata(t, r, symlinkO, symlinkModTime)
+})
+
+// Read it again
+oMetaNew, err := o.Metadata(ctx)
+require.NoError(t, err)
+
+// Check that operating on the symlink didn't change the file it was pointing to
+// See: https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+assert.Equal(t, oMeta, oMetaNew, "metadata setting on symlink messed up file")
+
+// Now run the same tests on the file
+t.Run("File", func(t *testing.T) {
+testMetadata(t, r, o, when)
+})
+}
+
+func testMetadata(t *testing.T, r *fstest.Run, o *Object, when time.Time) {
+ctx := context.Background()
+whenRFC := when.Format(time.RFC3339Nano)
+const dayLength = len("2001-01-01")
+
+f := r.Flocal.(*Fs)
 features := f.Features()
 
-var hasXID, hasAtime, hasBtime bool
+var hasXID, hasAtime, hasBtime, canSetXattrOnLinks bool
 switch runtime.GOOS {
 case "darwin", "freebsd", "netbsd", "linux":
 hasXID, hasAtime, hasBtime = true, true, true
+canSetXattrOnLinks = runtime.GOOS != "linux"
 case "openbsd", "solaris":
 hasXID, hasAtime = true, true
 case "windows":
@@ -307,6 +350,10 @@ func TestMetadata(t *testing.T) {
 require.NoError(t, err)
 assert.Nil(t, m)
 
+if !canSetXattrOnLinks && o.translatedLink {
+t.Skip("Skip remainder of test as can't set xattr on symlinks on this OS")
+}
+
 inM := fs.Metadata{
 "potato": "chips",
 "cabbage": "soup",
@@ -321,18 +368,21 @@ func TestMetadata(t *testing.T) {
 })
 
 checkTime := func(m fs.Metadata, key string, when time.Time) {
+t.Helper()
 mt, ok := o.parseMetadataTime(m, key)
 assert.True(t, ok)
 dt := mt.Sub(when)
 precision := time.Second
-assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v", key, dt, precision))
+assert.True(t, dt >= -precision && dt <= precision, fmt.Sprintf("%s: dt %v outside +/- precision %v want %v got %v", key, dt, precision, mt, when))
 }
 
 checkInt := func(m fs.Metadata, key string, base int) int {
+t.Helper()
 value, ok := o.parseMetadataInt(m, key, base)
 assert.True(t, ok)
 return value
 }
 
 t.Run("Read", func(t *testing.T) {
 m, err := o.Metadata(ctx)
 require.NoError(t, err)
@@ -342,13 +392,12 @@ func TestMetadata(t *testing.T) {
 checkInt(m, "mode", 8)
 checkTime(m, "mtime", when)
 
-assert.Equal(t, len(whenRFC), len(m["mtime"]))
 assert.Equal(t, whenRFC[:dayLength], m["mtime"][:dayLength])
 
-if hasAtime {
+if hasAtime && !o.translatedLink { // symlinks generally don't record atime
 checkTime(m, "atime", when)
 }
-if hasBtime {
+if hasBtime && !o.translatedLink { // symlinks generally don't record btime
 checkTime(m, "btime", when)
 }
 if hasXID {
@@ -372,6 +421,10 @@ func TestMetadata(t *testing.T) {
 "mode": "0767",
 "potato": "wedges",
 }
+if !canSetXattrOnLinks && o.translatedLink {
+// Don't change xattr if not supported on symlinks
+delete(newM, "potato")
+}
 err := o.writeMetadata(newM)
 require.NoError(t, err)
 
@@ -381,7 +434,11 @@ func TestMetadata(t *testing.T) {
 
 mode := checkInt(m, "mode", 8)
 if runtime.GOOS != "windows" {
-assert.Equal(t, 0767, mode&0777, fmt.Sprintf("mode wrong - expecting 0767 got 0%o", mode&0777))
+expectedMode := 0767
+if o.translatedLink && runtime.GOOS == "linux" {
+expectedMode = 0777 // perms of symlinks always read as 0777 on linux
+}
+assert.Equal(t, expectedMode, mode&0777, fmt.Sprintf("mode wrong - expecting 0%o got 0%o", expectedMode, mode&0777))
 }
 
 checkTime(m, "mtime", newMtime)
@@ -391,11 +448,10 @@ func TestMetadata(t *testing.T) {
 if haveSetBTime {
 checkTime(m, "btime", newBtime)
 }
-if xattrSupported {
+if xattrSupported && (canSetXattrOnLinks || !o.translatedLink) {
 assert.Equal(t, "wedges", m["potato"])
 }
 })
-
 }
 
 func TestFilter(t *testing.T) {
@@ -572,4 +628,35 @@ func TestCopySymlink(t *testing.T) {
 linkContents, err := os.Readlink(dstPath)
 require.NoError(t, err)
 assert.Equal(t, "file.txt", linkContents)
+
+// Set fs into "-L/--copy-links" mode
+f.opt.FollowSymlinks = true
+f.opt.TranslateSymlinks = false
+f.lstat = os.Stat
+
+// Create dst
+require.NoError(t, f.Mkdir(ctx, "dst2"))
+
+// Do copy from src into dst
+src, err = f.NewObject(ctx, "src/link.txt")
+require.NoError(t, err)
+require.NotNil(t, src)
+dst, err = operations.Copy(ctx, f, nil, "dst2/link.txt", src)
+require.NoError(t, err)
+require.NotNil(t, dst)
+
+// Test that we made a NON-symlink and it has the right contents
+dstPath = filepath.Join(r.LocalName, "dst2", "link.txt")
+fi, err := os.Lstat(dstPath)
+require.NoError(t, err)
+assert.True(t, fi.Mode()&os.ModeSymlink == 0)
+want := fstest.NewItem("dst2/link.txt", "hello world", when)
+fstest.CompareItems(t, []fs.DirEntry{dst}, []fstest.Item{want}, nil, f.precision, "")
+
+// Test that copying a normal file also works
+dst, err = operations.Copy(ctx, f, nil, "dst2/file.txt", dst)
+require.NoError(t, err)
+require.NotNil(t, dst)
+want = fstest.NewItem("dst2/file.txt", "hello world", when)
+fstest.CompareItems(t, []fs.DirEntry{dst}, []fstest.Item{want}, nil, f.precision, "")
 }
@@ -105,7 +105,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
 }
 if haveSetBTime {
 if btimeOK {
-err = setBTime(o.path, btime)
+if o.translatedLink {
+err = lsetBTime(o.path, btime)
+} else {
+err = setBTime(o.path, btime)
+}
 if err != nil {
 outErr = fmt.Errorf("failed to set birth (creation) time: %w", err)
 }
@@ -121,7 +125,11 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
 if runtime.GOOS == "windows" || runtime.GOOS == "plan9" {
 fs.Debugf(o, "Ignoring request to set ownership %o.%o on this OS", gid, uid)
 } else {
-err = os.Chown(o.path, uid, gid)
+if o.translatedLink {
+err = os.Lchown(o.path, uid, gid)
+} else {
+err = os.Chown(o.path, uid, gid)
+}
 if err != nil {
 outErr = fmt.Errorf("failed to change ownership: %w", err)
 }
@@ -132,7 +140,16 @@ func (o *Object) writeMetadataToFile(m fs.Metadata) (outErr error) {
 if mode >= 0 {
 umode := uint(mode)
 if umode <= math.MaxUint32 {
-err = os.Chmod(o.path, os.FileMode(umode))
+if o.translatedLink {
+if haveLChmod {
+err = lChmod(o.path, os.FileMode(umode))
+} else {
+fs.Debugf(o, "Unable to set mode %v on a symlink on this OS", os.FileMode(umode))
+err = nil
+}
+} else {
+err = os.Chmod(o.path, os.FileMode(umode))
+}
 if err != nil {
 outErr = fmt.Errorf("failed to change permissions: %w", err)
 }
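The ownership part of the fix above swaps `os.Chown` for `os.Lchown` when the object is a translated symlink. A minimal standalone sketch (not part of the diff; paths and IDs are illustrative) of the difference between the two calls:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Illustrative paths; run as a user allowed to chown these files.
	_ = os.WriteFile("/tmp/data.txt", []byte("x"), 0o644)
	_ = os.Symlink("/tmp/data.txt", "/tmp/data-link")

	uid, gid := os.Getuid(), os.Getgid()

	// os.Chown follows the symlink and changes /tmp/data.txt itself.
	if err := os.Chown("/tmp/data-link", uid, gid); err != nil {
		fmt.Println("chown:", err)
	}

	// os.Lchown changes the ownership of the link, which is what the
	// patched writeMetadataToFile does when o.translatedLink is set.
	if err := os.Lchown("/tmp/data-link", uid, gid); err != nil {
		fmt.Println("lchown:", err)
	}
}
```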
@@ -13,3 +13,9 @@ func setBTime(name string, btime time.Time) error {
 // Does nothing
 return nil
 }
+
+// lsetBTime changes the birth time of the link passed in
+func lsetBTime(name string, btime time.Time) error {
+// Does nothing
+return nil
+}
@@ -9,15 +9,20 @@ import (
 
 const haveSetBTime = true
 
-// setBTime sets the birth time of the file passed in
+// setTimes sets any of atime, mtime or btime
-func setBTime(name string, btime time.Time) (err error) {
+// if link is set it sets a link rather than the target
+func setTimes(name string, atime, mtime, btime time.Time, link bool) (err error) {
 pathp, err := syscall.UTF16PtrFromString(name)
 if err != nil {
 return err
 }
+fileFlag := uint32(syscall.FILE_FLAG_BACKUP_SEMANTICS)
+if link {
+fileFlag |= syscall.FILE_FLAG_OPEN_REPARSE_POINT
+}
 h, err := syscall.CreateFile(pathp,
 syscall.FILE_WRITE_ATTRIBUTES, syscall.FILE_SHARE_WRITE, nil,
-syscall.OPEN_EXISTING, syscall.FILE_FLAG_BACKUP_SEMANTICS, 0)
+syscall.OPEN_EXISTING, fileFlag, 0)
 if err != nil {
 return err
 }
@@ -27,6 +32,28 @@ func setBTime(name string, btime time.Time) (err error) {
 err = closeErr
 }
 }()
-bFileTime := syscall.NsecToFiletime(btime.UnixNano())
+var patime, pmtime, pbtime *syscall.Filetime
-return syscall.SetFileTime(h, &bFileTime, nil, nil)
+if !atime.IsZero() {
+t := syscall.NsecToFiletime(atime.UnixNano())
+patime = &t
+}
+if !mtime.IsZero() {
+t := syscall.NsecToFiletime(mtime.UnixNano())
+pmtime = &t
+}
+if !btime.IsZero() {
+t := syscall.NsecToFiletime(btime.UnixNano())
+pbtime = &t
+}
+return syscall.SetFileTime(h, pbtime, patime, pmtime)
+}
+
+// setBTime sets the birth time of the file passed in
+func setBTime(name string, btime time.Time) (err error) {
+return setTimes(name, time.Time{}, time.Time{}, btime, false)
+}
+
+// lsetBTime changes the birth time of the link passed in
+func lsetBTime(name string, btime time.Time) error {
+return setTimes(name, time.Time{}, time.Time{}, btime, true)
 }
@@ -202,9 +202,14 @@ type SharingLinkType struct {
 type LinkType string
 
 const (
-ViewLinkType LinkType = "view" // ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
+// ViewLinkType (role: read) A view-only sharing link, allowing read-only access.
-EditLinkType LinkType = "edit" // EditLinkType (role: write) An edit sharing link, allowing read-write access.
+ViewLinkType LinkType = "view"
-EmbedLinkType LinkType = "embed" // EmbedLinkType (role: read) A view-only sharing link that can be used to embed content into a host webpage. Embed links are not available for OneDrive for Business or SharePoint.
+// EditLinkType (role: write) An edit sharing link, allowing read-write access.
+EditLinkType LinkType = "edit"
+// EmbedLinkType (role: read) A view-only sharing link that can be used to embed
+// content into a host webpage. Embed links are not available for OneDrive for
+// Business or SharePoint.
+EmbedLinkType LinkType = "embed"
 )
 
 // LinkScope represents the scope of the link represented by this permission.
@@ -212,9 +217,12 @@ const (
 type LinkScope string
 
 const (
-AnonymousScope LinkScope = "anonymous" // AnonymousScope = Anyone with the link has access, without needing to sign in. This may include people outside of your organization.
+// AnonymousScope = Anyone with the link has access, without needing to sign in.
-OrganizationScope LinkScope = "organization" // OrganizationScope = Anyone signed into your organization (tenant) can use the link to get access. Only available in OneDrive for Business and SharePoint.
+// This may include people outside of your organization.
+AnonymousScope LinkScope = "anonymous"
+// OrganizationScope = Anyone signed into your organization (tenant) can use the
+// link to get access. Only available in OneDrive for Business and SharePoint.
+OrganizationScope LinkScope = "organization"
 )
 
 // PermissionsType provides information about a sharing permission granted for a DriveItem resource.
@@ -236,10 +244,14 @@ type PermissionsType struct {
 type Role string
 
 const (
-ReadRole Role = "read" // ReadRole provides the ability to read the metadata and contents of the item.
+// ReadRole provides the ability to read the metadata and contents of the item.
-WriteRole Role = "write" // WriteRole provides the ability to read and modify the metadata and contents of the item.
+ReadRole Role = "read"
-OwnerRole Role = "owner" // OwnerRole represents the owner role for SharePoint and OneDrive for Business.
+// WriteRole provides the ability to read and modify the metadata and contents of the item.
-MemberRole Role = "member" // MemberRole represents the member role for SharePoint and OneDrive for Business.
+WriteRole Role = "write"
+// OwnerRole represents the owner role for SharePoint and OneDrive for Business.
+OwnerRole Role = "owner"
+// MemberRole represents the member role for SharePoint and OneDrive for Business.
+MemberRole Role = "member"
 )
 
 // PermissionsResponse is the response to the list permissions method
@@ -827,7 +827,7 @@ func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, err
 retry = true
 fs.Debugf(nil, "HTTP 401: Unable to initialize RPS. Trying again.")
 }
-case 429: // Too Many Requests.
+case 429, 503: // Too Many Requests, Server Too Busy
 // see https://docs.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online
 if values := resp.Header["Retry-After"]; len(values) == 1 && values[0] != "" {
 retryAfter, parseErr := strconv.Atoi(values[0])
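A minimal standalone sketch of the idea behind the change above: treat 429 and 503 the same way and honour a numeric Retry-After header when one is present. This is a simplified illustration, not the real `shouldRetry` from the onedrive backend; `retryDelay` is a hypothetical helper name.

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryDelay reports whether the response should be retried and, if the
// server sent a numeric Retry-After header, how long to wait first.
func retryDelay(resp *http.Response) (bool, time.Duration) {
	switch resp.StatusCode {
	case 429, 503: // Too Many Requests, Server Too Busy
		if values := resp.Header["Retry-After"]; len(values) == 1 && values[0] != "" {
			if seconds, err := strconv.Atoi(values[0]); err == nil {
				return true, time.Duration(seconds) * time.Second
			}
		}
		return true, 0 // retry with default backoff
	}
	return false, 0
}

func main() {
	resp := &http.Response{StatusCode: 503, Header: http.Header{"Retry-After": []string{"30"}}}
	retry, wait := retryDelay(resp)
	fmt.Println(retry, wait) // true 30s
}
```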
@@ -378,11 +378,23 @@ func calcGcid(r io.Reader, size int64) (string, error) {
 return hex.EncodeToString(totalHash.Sum(nil)), nil
 }
 
+// unWrapObjectInfo returns the underlying Object unwrapped as much as
+// possible or nil even if it is an OverrideRemote
+func unWrapObjectInfo(oi fs.ObjectInfo) fs.Object {
+if o, ok := oi.(fs.Object); ok {
+return fs.UnWrapObject(o)
+} else if do, ok := oi.(*fs.OverrideRemote); ok {
+// Unwrap if it is an operations.OverrideRemote
+return do.UnWrap()
+}
+return nil
+}
+
 // calcCid calculates Cid from source
 //
 // Cid is a simplified version of Gcid
 func calcCid(ctx context.Context, src fs.ObjectInfo) (cid string, err error) {
-srcObj := fs.UnWrapObjectInfo(src)
+srcObj := unWrapObjectInfo(src)
 if srcObj == nil {
 return "", fmt.Errorf("failed to unwrap object from src: %s", src)
 }
@@ -561,6 +561,7 @@ func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, err
 if strings.Contains(err.Error(), "invalid_grant") {
 return f, f.reAuthorize(ctx)
 }
+return nil, err
 }
 
 return f, nil
@@ -1773,7 +1774,7 @@ func (o *Object) upload(ctx context.Context, in io.Reader, src fs.ObjectInfo, wi
 gcid, err := o.fs.getGcid(ctx, src)
 if err != nil || gcid == "" {
 fs.Debugf(o, "calculating gcid: %v", err)
-if srcObj := fs.UnWrapObjectInfo(src); srcObj != nil && srcObj.Fs().Features().IsLocal {
+if srcObj := unWrapObjectInfo(src); srcObj != nil && srcObj.Fs().Features().IsLocal {
 // No buffering; directly calculate gcid from source
 rc, err := srcObj.Open(ctx)
 if err != nil {
@@ -3368,6 +3368,10 @@ func setQuirks(opt *Options) {
 			opt.ChunkSize = 64 * fs.Mebi
 		}
 		useAlreadyExists = false // returns BucketAlreadyExists
+		// Storj doesn't support multi-part server side copy:
+		// https://github.com/storj/roadmap/issues/40
+		// So make cutoff very large which it does support
+		opt.CopyCutoff = math.MaxInt64
 	case "Synology":
 		useMultipartEtag = false
 		useAlreadyExists = false // untested
@@ -5746,7 +5750,7 @@ func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options
 		ContentEncoding: header("Content-Encoding"),
 		ContentLanguage: header("Content-Language"),
 		ContentType:     header("Content-Type"),
-		StorageClass:    types.StorageClass(*header("X-Amz-Storage-Class")),
+		StorageClass:    types.StorageClass(deref(header("X-Amz-Storage-Class"))),
 	}
 	o.setMetaData(&head)
 	return resp.Body, err
@@ -5940,8 +5944,8 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
 		chunkSize:            int64(chunkSize),
 		size:                 size,
 		f:                    f,
-		bucket:               mOut.Bucket,
-		key:                  mOut.Key,
+		bucket:               ui.req.Bucket,
+		key:                  ui.req.Key,
 		uploadID:             mOut.UploadId,
 		multiPartUploadInput: &mReq,
 		completedParts:       make([]types.CompletedPart, 0),
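The `downloadFromURL` hunk swaps a direct pointer dereference for a `deref(...)` call, which is what stops the crash listed in the changelog for `--s3-download-url` after the SDKv2 migration when the `X-Amz-Storage-Class` header is absent. A hedged sketch of what such a nil-safe helper looks like (the actual helper in the S3 backend may be defined differently):

```go
package main

import "fmt"

// deref returns the pointed-to value, or the zero value when the pointer
// is nil -- instead of panicking like a bare *p would.
func deref[T any](p *T) T {
	if p == nil {
		var zero T
		return zero
	}
	return *p
}

func main() {
	var missing *string                 // e.g. the X-Amz-Storage-Class header was not returned
	fmt.Printf("%q\n", deref(missing))  // "" -- no crash
	present := "GLACIER"
	fmt.Printf("%q\n", deref(&present)) // "GLACIER"
}
```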
@@ -5,20 +5,13 @@ import (
 	"bytes"
 	"log"
 
-	"github.com/rclone/rclone/fs"
 	"github.com/sirupsen/logrus"
 )
 
 // CaptureOutput runs a function capturing its output.
 func CaptureOutput(fun func()) []byte {
 	logSave := log.Writer()
-	logrusSave := logrus.StandardLogger().Writer()
-	defer func() {
-		err := logrusSave.Close()
-		if err != nil {
-			fs.Errorf(nil, "error closing logrusSave: %v", err)
-		}
-	}()
+	logrusSave := logrus.StandardLogger().Out
 	buf := &bytes.Buffer{}
 	log.SetOutput(buf)
 	logrus.SetOutput(buf)
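The bisync fix above saves `logrus.StandardLogger().Out` (the logger's current destination) instead of `.Writer()`, which returns a new pipe rather than the original writer, so the later restore pointed logrus at the wrong output (the "Fix output capture restoring the wrong output for logrus" changelog entry). A minimal sketch of the corrected save/capture/restore pattern, simplified from the real `CaptureOutput`, which also handles the standard `log` package:

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/sirupsen/logrus"
)

// captureLogrus runs fun while logrus writes into a buffer, then restores
// the logger's original destination.
func captureLogrus(fun func()) []byte {
	saved := logrus.StandardLogger().Out // the real original writer
	buf := &bytes.Buffer{}
	logrus.SetOutput(buf)
	defer logrus.SetOutput(saved) // restore the original output, not a pipe
	fun()
	return buf.Bytes()
}

func main() {
	out := captureLogrus(func() { logrus.Info("hello") })
	fmt.Printf("captured %d bytes\n", len(out))
}
```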
@@ -66,7 +66,8 @@ func quotePath(path string) string {
 	return escapePath(path, true)
 }
 
-var Colors bool // Colors controls whether terminal colors are enabled
+// Colors controls whether terminal colors are enabled
+var Colors bool
 
 // Color handles terminal colors for bisync
 func Color(style string, s string) string {
@@ -107,7 +107,7 @@ func (lrw *loggingResponseWriter) logRequest(code int, err interface{}) {
 		err = ""
 	}
 
-	fs.LogPrintf(level, lrw.request.URL, "%s %s %d %s %s",
+	fs.LogLevelPrintf(level, lrw.request.URL, "%s %s %d %s %s",
 		lrw.request.RemoteAddr, lrw.request.Method, code,
 		lrw.request.Header.Get("SOAPACTION"), err)
 }
@@ -5,6 +5,36 @@ description: "Rclone Changelog"
 
 # Changelog
 
+## v1.68.2 - 2024-11-15
+
+[See commits](https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+
+* Security fixes
+    * local backend: CVE-2024-52522: fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+        * Only affects users using `--metadata` and `--links` and copying files to the local backend
+        * See https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+    * build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1 (dependabot)
+        * This is an issue in a dependency which is used for JWT certificates
+        * See https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+* Bug Fixes
+    * accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick Craig-Wood)
+    * bisync: Fix output capture restoring the wrong output for logrus (Dimitrios Slamaris)
+    * dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+    * serve s3: Fix excess locking which was making serve s3 single threaded (Nick Craig-Wood)
+    * doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+* Local
+    * Fix permission and ownership on symlinks with `--links` and `--metadata` (Nick Craig-Wood)
+    * Fix `--copy-links` on macOS when cloning (nielash)
+* Onedrive
+    * Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+* Pikpak
+    * Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+    * Fix fatal crash on startup with token that can't be refreshed (Nick Craig-Wood)
+* S3
+    * Fix crash when using `--s3-download-url` after migration to SDKv2 (Nick Craig-Wood)
+    * Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan Raev)
+    * Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+
 ## v1.68.1 - 2024-09-24
 
 [See commits](https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)
@@ -929,7 +929,7 @@ rclone [flags]
       --use-json-log                Use json log format
       --use-mmap                    Use mmap allocator (see docs)
       --use-server-modtime          Use server modified time instead of object metadata
-      --user-agent string           Set the user-agent to a specified string (default "rclone/v1.68.1")
+      --user-agent string           Set the user-agent to a specified string (default "rclone/v1.68.2")
  -v, --verbose count                Print lots more stuff (repeat for more)
  -V, --version                      Print the version number
       --webdav-bearer-token string  Bearer token instead of user/pass (e.g. a Macaroon)
@@ -1809,9 +1809,9 @@ then select "OAuth client ID".
 
 9. It will show you a client ID and client secret. Make a note of these.
 
-(If you selected "External" at Step 5 continue to Step 9.
+(If you selected "External" at Step 5 continue to Step 10.
 If you chose "Internal" you don't need to publish and can skip straight to
-Step 10 but your destination drive must be part of the same Google Workspace.)
+Step 11 but your destination drive must be part of the same Google Workspace.)
 
 10. Go to "Oauth consent screen" and then click "PUBLISH APP" button and confirm.
 You will also want to add yourself as a test user.
@@ -505,6 +505,8 @@ processed in.
 Arrange the order of filter rules with the most restrictive first and
 work down.
 
+Lines starting with # or ; are ignored, and can be used to write comments. Inline comments are not supported. _Use `-vv --dump filters` to see how they appear in the final regexp._
+
 E.g. for `filter-file.txt`:
 
     # a sample filter rule file
@@ -512,6 +514,7 @@ E.g. for `filter-file.txt`:
     + *.jpg
     + *.png
     + file2.avi
+    - /dir/tmp/** # WARNING! This text will be treated as part of the path.
     - /dir/Trash/**
     + /dir/**
     # exclude everything else
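For context (not part of the diff itself): a filter file like the sample above is normally applied with `--filter-from`, and the `--dump filters` flag mentioned in the new paragraph prints the compiled regular expressions, which makes the inline-comment pitfall easy to spot, e.g. `rclone lsf remote:path --filter-from filter-file.txt -vv --dump filters` (an illustrative command; any listing or sync command accepts the same flags).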
@@ -115,7 +115,7 @@ Flags for general networking and HTTP stuff.
       --tpslimit float       Limit HTTP transactions per second to this
       --tpslimit-burst int   Max burst of transactions for --tpslimit (default 1)
       --use-cookies          Enable session cookiejar
-      --user-agent string    Set the user-agent to a specified string (default "rclone/v1.68.1")
+      --user-agent string    Set the user-agent to a specified string (default "rclone/v1.68.2")
 ```
 
 
@@ -46,7 +46,7 @@ Here is an overview of the major features of each cloud storage system.
 | OpenDrive | MD5 | R/W | Yes | Partial ⁸ | - | - |
 | OpenStack Swift | MD5 | R/W | No | No | R/W | - |
 | Oracle Object Storage | MD5 | R/W | No | No | R/W | - |
-| pCloud | MD5, SHA1 ⁷ | R | No | No | W | - |
+| pCloud | MD5, SHA1 ⁷ | R/W | No | No | W | - |
 | PikPak | MD5 | R | No | No | R | - |
 | Pixeldrain | SHA256 | R/W | No | No | R | RW |
 | premiumize.me | - | - | Yes | No | R | - |
@@ -3476,8 +3476,8 @@ chunk_size = 5M
 copy_cutoff = 5M
 ```
 
-[C14 Cold Storage](https://www.online.net/en/storage/c14-cold-storage) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
-So you can configure your remote with the `storage_class = GLACIER` option to upload directly to C14. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
+[Scaleway Glacier](https://www.scaleway.com/en/glacier-cold-storage/) is the low-cost S3 Glacier alternative from Scaleway and it works the same way as on S3 by accepting the "GLACIER" `storage_class`.
+So you can configure your remote with the `storage_class = GLACIER` option to upload directly to Scaleway Glacier. Don't forget that in this state you can't read files back after, you will need to restore them to "STANDARD" storage_class first before being able to read them (see "restore" section above)
 
 ### Seagate Lyve Cloud {#lyve}
 
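The documentation change above only renames C14 Cold Storage to Scaleway Glacier; the way to use it is unchanged. As a rough sketch of such a remote (the remote name, region and endpoint values here are illustrative -- check the Scaleway section of the S3 docs for the exact settings for your account):

```
[scaleway-glacier]
type = s3
provider = Scaleway
region = fr-par
endpoint = s3.fr-par.scw.cloud
storage_class = GLACIER
```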
@@ -61,3 +61,4 @@ Thank you very much to our sponsors:
 {{< sponsor src="/img/logos/warp.svg" width="300" height="200" title="Visit our sponsor warp.dev" link="https://www.warp.dev/?utm_source=rclone&utm_medium=referral&utm_campaign=rclone_20231103">}}
 {{< sponsor src="/img/logos/sia.svg" width="200" height="200" title="Visit our sponsor sia" link="https://sia.tech">}}
 {{< sponsor src="/img/logos/route4me.svg" width="400" height="200" title="Visit our sponsor Route4Me" link="https://route4me.com/">}}
+{{< sponsor src="/img/logos/rcloneview.svg" width="300" height="200" title="Visit our sponsor RcloneView" link="https://rcloneview.com/">}}
@@ -1 +1 @@
-v1.68.1
+v1.68.2
@@ -41,7 +41,12 @@ type tokenBucket struct {
 //
 // Call with lock held
 func (bs *buckets) _isOff() bool { //nolint:unused // Don't include unused when running golangci-lint in case its on windows where this is not called
-	return bs[0] == nil
+	for i := range bs {
+		if bs[i] != nil {
+			return false
+		}
+	}
+	return true
 }
 
 // Disable the limits
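The accounting change above makes `_isOff` report the limiter as off only when every bucket is nil, rather than keying off `bs[0]` alone; this is presumably what sits behind the "Fix wrong message on SIGUSR2 to enable/disable bwlimit" changelog entry. The check itself, as a tiny standalone sketch over a plain slice instead of rclone's internal buckets type:

```go
package main

import "fmt"

// isOff reports whether every bucket in the slice is nil, i.e. the
// limiter is fully disabled -- mirroring the patched _isOff logic.
func isOff(buckets []*int) bool {
	for i := range buckets {
		if buckets[i] != nil {
			return false
		}
	}
	return true
}

func main() {
	one := 1
	fmt.Println(isOff([]*int{nil, nil}))  // true: no active buckets
	fmt.Println(isOff([]*int{nil, &one})) // false: at least one bucket active
}
```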
@@ -1,4 +1,4 @@
 package fs
 
 // VersionTag of rclone
-var VersionTag = "v1.68.1"
+var VersionTag = "v1.68.2"
4 go.mod
@@ -59,7 +59,7 @@ require (
 	github.com/prometheus/client_golang v1.19.1
 	github.com/putdotio/go-putio/putio v0.0.0-20200123120452-16d982cac2b8
 	github.com/quasilyte/go-ruleguard/dsl v0.3.22
-	github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87
+	github.com/rclone/gofakes3 v0.0.3
 	github.com/rfjakob/eme v1.1.2
 	github.com/rivo/uniseg v0.4.7
 	github.com/rogpeppe/go-internal v1.12.0
@@ -223,7 +223,7 @@ require (
 require (
 	github.com/Microsoft/go-winio v0.6.1 // indirect
 	github.com/ProtonMail/go-crypto v1.0.0
-	github.com/golang-jwt/jwt/v4 v4.5.0
+	github.com/golang-jwt/jwt/v4 v4.5.1
 	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
 	github.com/pkg/xattr v0.4.9
 	golang.org/x/mobile v0.0.0-20240716161057-1ad2df20a8b6
8 go.sum
@@ -275,8 +275,8 @@ github.com/gofrs/flock v0.8.1 h1:+gYjHKf32LDeiEEFhQaotPbLuUXjY5ZqxKgXy7n59aw=
 github.com/gofrs/flock v0.8.1/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
 github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
 github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
-github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg=
-github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
+github.com/golang-jwt/jwt/v4 v4.5.1 h1:JdqV9zKUdtaa9gdPlywC3aeoEsR681PlKC+4F5gQgeo=
+github.com/golang-jwt/jwt/v4 v4.5.1/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0=
 github.com/golang-jwt/jwt/v5 v5.2.1 h1:OuVbFODueb089Lh128TAcimifWaLhJwVflnrgM17wHk=
 github.com/golang-jwt/jwt/v5 v5.2.1/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk=
 github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
@@ -519,8 +519,8 @@ github.com/quic-go/quic-go v0.40.1 h1:X3AGzUNFs0jVuO3esAGnTfvdgvL4fq655WaOi1snv1
 github.com/quic-go/quic-go v0.40.1/go.mod h1:PeN7kuVJ4xZbxSv/4OX6S1USOX8MJvydwpTx31vx60c=
 github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93 h1:UVArwN/wkKjMVhh2EQGC0tEc1+FqiLlvYXY5mQ2f8Wg=
 github.com/rasky/go-xdr v0.0.0-20170124162913-1a41d1a06c93/go.mod h1:Nfe4efndBz4TibWycNE+lqyJZiMX4ycx+QKV8Ta0f/o=
-github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87 h1:0YRo2aYhE+SCZsjWYMFe8zLD18xieXy7wQ8M9Ywcr/g=
-github.com/rclone/gofakes3 v0.0.3-0.20240807151802-e80146f8de87/go.mod h1:z7+o2VUwitO0WuVHReQlOW9jZ03LpeJ0PUFSULyTIds=
+github.com/rclone/gofakes3 v0.0.3 h1:0sKCxJ8TUUAG5KXGuc/fcDKGnzB/j6IjNQui9ntIZPo=
+github.com/rclone/gofakes3 v0.0.3/go.mod h1:z7+o2VUwitO0WuVHReQlOW9jZ03LpeJ0PUFSULyTIds=
 github.com/relvacode/iso8601 v1.3.0 h1:HguUjsGpIMh/zsTczGN3DVJFxTU/GX+MMmzcKoMO7ko=
 github.com/relvacode/iso8601 v1.3.0/go.mod h1:FlNp+jz+TXpyRqgmM7tnzHHzBnz776kmAH2h3sZCn0I=
 github.com/rfjakob/eme v1.1.2 h1:SxziR8msSOElPayZNFfQw4Tjx/Sbaeeh3eRvrHVMUs4=
@@ -17,7 +17,8 @@ import (
 )
 
 const (
-	BufferSize           = 1024 * 1024     // BufferSize is the default size of the pages used in the reader
+	// BufferSize is the default size of the pages used in the reader
+	BufferSize           = 1024 * 1024
 	bufferCacheSize      = 64              // max number of buffers to keep in cache
 	bufferCacheFlushTime = 5 * time.Second // flush the cached buffers after this long
 )
103 rclone.1 generated
@@ -1,7 +1,7 @@
 .\"t
 .\" Automatically generated by Pandoc 2.9.2.1
 .\"
-.TH "rclone" "1" "Sep 24, 2024" "User Manual" ""
+.TH "rclone" "1" "Nov 15, 2024" "User Manual" ""
 .hy
 .SH Rclone syncs your files to cloud storage
 .PP
@@ -20955,6 +20955,12 @@ See above for the order filter flags are processed in.
 Arrange the order of filter rules with the most restrictive first and
 work down.
 .PP
+Lines starting with # or ; are ignored, and can be used to write
+comments.
+Inline comments are not supported.
+\f[I]Use \f[CI]-vv --dump filters\f[I] to see how they appear in the
+final regexp.\f[R]
+.PP
 E.g.
 for \f[C]filter-file.txt\f[R]:
 .IP
@@ -20965,6 +20971,7 @@ for \f[C]filter-file.txt\f[R]:
 + *.jpg
 + *.png
 + file2.avi
+- /dir/tmp/** # WARNING! This text will be treated as part of the path.
 - /dir/Trash/**
 + /dir/**
 # exclude everything else
@@ -24965,7 +24972,7 @@ pCloud
 T}@T{
 MD5, SHA1 \[u2077]
 T}@T{
-R
+R/W
 T}@T{
 No
 T}@T{
@@ -27719,7 +27726,7 @@ Flags for general networking and HTTP stuff.
       --tpslimit float       Limit HTTP transactions per second to this
       --tpslimit-burst int   Max burst of transactions for --tpslimit (default 1)
       --use-cookies          Enable session cookiejar
-      --user-agent string    Set the user-agent to a specified string (default \[dq]rclone/v1.68.1\[dq])
+      --user-agent string    Set the user-agent to a specified string (default \[dq]rclone/v1.68.2\[dq])
 \f[R]
 .fi
 .SS Performance
@@ -37393,11 +37400,12 @@ copy_cutoff = 5M
 \f[R]
 .fi
 .PP
-C14 Cold Storage (https://www.online.net/en/storage/c14-cold-storage) is
+Scaleway Glacier (https://www.scaleway.com/en/glacier-cold-storage/) is
 the low-cost S3 Glacier alternative from Scaleway and it works the same
 way as on S3 by accepting the \[dq]GLACIER\[dq] \f[C]storage_class\f[R].
 So you can configure your remote with the
-\f[C]storage_class = GLACIER\f[R] option to upload directly to C14.
+\f[C]storage_class = GLACIER\f[R] option to upload directly to Scaleway
+Glacier.
 Don\[aq]t forget that in this state you can\[aq]t read files back after,
 you will need to restore them to \[dq]STANDARD\[dq] storage_class first
 before being able to read them (see \[dq]restore\[dq] section above)
@@ -50216,9 +50224,9 @@ It will show you a client ID and client secret.
 Make a note of these.
 .RS 4
 .PP
-(If you selected \[dq]External\[dq] at Step 5 continue to Step 9.
+(If you selected \[dq]External\[dq] at Step 5 continue to Step 10.
 If you chose \[dq]Internal\[dq] you don\[aq]t need to publish and can
-skip straight to Step 10 but your destination drive must be part of the
+skip straight to Step 11 but your destination drive must be part of the
 same Google Workspace.)
 .RE
 .IP "10." 4
@@ -72522,6 +72530,87 @@ Options:
 .IP \[bu] 2
 \[dq]error\[dq]: return an error based on option value
 .SH Changelog
+.SS v1.68.2 - 2024-11-15
+.PP
+See commits (https://github.com/rclone/rclone/compare/v1.68.1...v1.68.2)
+.IP \[bu] 2
+Security fixes
+.RS 2
+.IP \[bu] 2
+local backend: CVE-2024-52522: fix permission and ownership on symlinks
+with \f[C]--links\f[R] and \f[C]--metadata\f[R] (Nick Craig-Wood)
+.RS 2
+.IP \[bu] 2
+Only affects users using \f[C]--metadata\f[R] and \f[C]--links\f[R] and
+copying files to the local backend
+.IP \[bu] 2
+See
+https://github.com/rclone/rclone/security/advisories/GHSA-hrxh-9w67-g4cv
+.RE
+.IP \[bu] 2
+build: bump github.com/golang-jwt/jwt/v4 from 4.5.0 to 4.5.1
+(dependabot)
+.RS 2
+.IP \[bu] 2
+This is an issue in a dependency which is used for JWT certificates
+.IP \[bu] 2
+See
+https://github.com/golang-jwt/jwt/security/advisories/GHSA-29wx-vh33-7x7r
+.RE
+.RE
+.IP \[bu] 2
+Bug Fixes
+.RS 2
+.IP \[bu] 2
+accounting: Fix wrong message on SIGUSR2 to enable/disable bwlimit (Nick
+Craig-Wood)
+.IP \[bu] 2
+bisync: Fix output capture restoring the wrong output for logrus
+(Dimitrios Slamaris)
+.IP \[bu] 2
+dlna: Fix loggingResponseWriter disregarding log level (Simon Bos)
+.IP \[bu] 2
+serve s3: Fix excess locking which was making serve s3 single threaded
+(Nick Craig-Wood)
+.IP \[bu] 2
+doc fixes (Nick Craig-Wood, tgfisher, Alexandre Hamez, Randy Bush)
+.RE
+.IP \[bu] 2
+Local
+.RS 2
+.IP \[bu] 2
+Fix permission and ownership on symlinks with \f[C]--links\f[R] and
+\f[C]--metadata\f[R] (Nick Craig-Wood)
+.IP \[bu] 2
+Fix \f[C]--copy-links\f[R] on macOS when cloning (nielash)
+.RE
+.IP \[bu] 2
+Onedrive
+.RS 2
+.IP \[bu] 2
+Fix Retry-After handling to look at 503 errors also (Nick Craig-Wood)
+.RE
+.IP \[bu] 2
+Pikpak
+.RS 2
+.IP \[bu] 2
+Fix cid/gcid calculations for fs.OverrideRemote (wiserain)
+.IP \[bu] 2
+Fix fatal crash on startup with token that can\[aq]t be refreshed (Nick
+Craig-Wood)
+.RE
+.IP \[bu] 2
+S3
+.RS 2
+.IP \[bu] 2
+Fix crash when using \f[C]--s3-download-url\f[R] after migration to
+SDKv2 (Nick Craig-Wood)
+.IP \[bu] 2
+Storj provider: fix server-side copy of files bigger than 5GB (Kaloyan
+Raev)
+.IP \[bu] 2
+Fix multitenant multipart uploads with CEPH (Nick Craig-Wood)
+.RE
 .SS v1.68.1 - 2024-09-24
 .PP
 See commits (https://github.com/rclone/rclone/compare/v1.68.0...v1.68.1)