mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


39 Commits

Author SHA1 Message Date
Nick Craig-Wood
24739b56d5 fs: allow global variables to be overridden or set on backend creation
This allows backend config to contain

- `override.var` - set var during remote creation only
- `global.var` - set var in the global config permanently

Fixes #8563
2025-07-23 15:27:52 +01:00
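As a sketch of the syntax this commit describes (the remote name, type, and proxy value are hypothetical), a backend config might contain:

```
[myremote]
type = ftp
host = ftp.example.com
# hypothetical value: applied only while this remote is created
override.http_proxy = http://proxy.example.com:3128
# would instead change the global config permanently:
# global.http_proxy = http://proxy.example.com:3128
```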
Nick Craig-Wood
a9178cab8c fs: allow setting of --http_proxy from command line
This in turn allows `override.http_proxy` to be set in backend configs
to set an http proxy for a single backend.
2025-07-23 15:26:55 +01:00
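A sketch of both forms (the proxy URL and remote name are hypothetical): the flag set for a whole run, and the per-backend override this commit enables via a connection string:

```
# hypothetical proxy URL; --http_proxy applies to the whole invocation
rclone lsd myremote: --http_proxy http://proxy.example.com:3128

# or scoped to a single backend via the override mechanism
rclone lsd "myremote,override.http_proxy=http://proxy.example.com:3128:"
```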
Nick Craig-Wood
4133a197bc Version v1.70.3 2025-07-09 10:51:25 +01:00
Nick Craig-Wood
a30a4909fe azureblob: fix server side copy error "requires exactly one scope"
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error

    ManagedIdentityCredential.GetToken() requires exactly one scope

when doing server side copies.

This was introduced in:

3a5ddfcd3c azureblob: implement multipart server side copy

This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.

Fixes #8662
2025-07-09 10:32:12 +01:00
albertony
cdc6d22929 docs: explain the json log format in more detail 2025-07-09 10:32:12 +01:00
albertony
e319406f52 check: fix difference report (was reporting error counts) 2025-07-09 10:32:12 +01:00
Nick Craig-Wood
ac54cccced linkbox: fix upload error "user upload file not exist"
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.

This is problematic for several reasons:

1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets uploaded

This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.

It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.

See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
2025-07-09 10:32:12 +01:00
Nick Craig-Wood
4c4d366e29 march: fix deadlock when using --no-traverse - fixes #8656
This occurred whenever there were more than 100 files in the source due
to the output channel filling up.

The fix is not to use list.NewSorter but take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.

In order not to cause another deadlock, we need to send nil dst
objects which is safe since this adjusts the termination conditions
for the channels.

Thanks to @jeremy for the test script the Go tests are based on.
2025-07-09 10:32:12 +01:00
wiserain
64fc3d05ae pikpak: improve error handling for missing links and unrecoverable 500s
This commit improves error handling in two specific scenarios:

* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength

* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid." These errors are non-recoverable, so retrying is futile.
2025-07-09 10:32:12 +01:00
WeidiDeng
90386efeb1 webdav: fix setting modtime to that of local object instead of remote
In this commit the source of the modtime got changed to the wrong object by accident

0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support

This reverts that change and fixes the integration tests.
2025-07-09 10:32:12 +01:00
Davide Bizzarri
5f78b47295 fix: b2 versionAt read metadata 2025-07-09 10:32:12 +01:00
Nick Craig-Wood
775ee90fa5 Start v1.70.3-DEV development 2025-07-02 15:36:43 +01:00
Nick Craig-Wood
444392bf9c docs: fix filescom/filelu link mixup
See: https://forum.rclone.org/t/a-small-bug-in-rclone-documentation/51774
2025-07-02 15:35:18 +01:00
Nick Craig-Wood
d36259749f docs: update link for filescom 2025-06-30 11:10:31 +01:00
Nick Craig-Wood
4010380ea8 Version v1.70.2 2025-06-27 12:30:18 +01:00
Ali Zein Yousuf
c138e52a57 docs: update client ID instructions to current Azure AD portal - fixes #8027 2025-06-27 12:23:00 +01:00
necaran
e22ce597ad mega: fix tls handshake failure - fixes #8565
The cipher suites used by Mega's storage endpoints: https://github.com/meganz/webclient/issues/103
are no longer supported by default since Go 1.22: https://tip.golang.org/doc/go1.22#minor_library_changes
This therefore assigns the cipher suites explicitly to include the one Mega needs.
2025-06-26 17:06:03 +01:00
Nick Craig-Wood
79bd9e7913 pacer: fix nil pointer deref in RetryError - fixes #8077
Before this change, if RetryAfterError was called with a nil err, then
its Error method would return this when wrapped in a fmt.Errorf
statement

    error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference))

Looking at the code, it looks like RetryAfterError will usually be
called with a nil pointer, so this patch makes sure it has a sensible
error.
2025-06-26 17:05:37 +01:00
nielash
32f9393ac8 convmv: fix moving to unicode-equivalent name - fixes #8634
Before this change, using convmv to convert filenames between NFD and NFC could
fail on certain backends (such as onedrive) that were insensitive to the
difference. This change fixes the issue by extending the existing
needsMoveCaseInsensitive logic for use in this scenario.
2025-06-26 17:05:37 +01:00
nielash
f97c876eb1 convmv: make --dry-run logs less noisy
Before this change, convmv dry runs would log a SkipDestructive message for
every single object, even objects that would not really be moved during a real
run. This made it quite difficult to tell what would actually happen during the
real run. This change fixes that by returning silently in such cases (as would
happen during a real run.)
2025-06-26 17:05:37 +01:00
nielash
9b43836e19 sync: avoid copying dir metadata to itself
In convmv, src and dst can point to the same directory. Unless a dir's name is
changing, we should leave it alone and not attempt to copy its metadata to
itself.
2025-06-26 17:05:37 +01:00
Nick Craig-Wood
ff817e8764 combine: fix directory not found errors with ListP interface - Fixes #8627
In

b1d774c2e3 combine: implement ListP interface

We introduced the ListP interface to the combine backend. This was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.
2025-06-26 17:05:37 +01:00
Nick Craig-Wood
3c63dec849 local: fix --skip-links on Windows when skipping Junction points
Due to a change in Go which was enabled by the `go 1.22` in `go.mod`,
rclone has stopped skipping junction points ("My Documents" in
particular) if `--skip-links` is set on Windows.

This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.

This fix now skips os.ModeIrregular objects if --skip-links is set on
Windows only.

Fixes #8561
See: https://github.com/golang/go/issues/73827
2025-06-26 17:05:37 +01:00
dependabot[bot]
33876c5806 build: bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93
See: https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-26 17:05:37 +01:00
Nick Craig-Wood
fa3b444341 log: fix deadlock when using systemd logging - fixes #8621
In this commit the logging system was re-worked

dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog

Unfortunately the systemd logging was still using the plain log
package and this caused a deadlock as it was recursively calling the
logging package.

The fix was to use the dedicated systemd journal logging routines in
the process removing a TODO!
2025-06-26 17:05:37 +01:00
Nick Craig-Wood
e5fc424955 docs: googlephotos: detail how to make your own client_id - fixes #8622 2025-06-26 17:05:37 +01:00
Nick Craig-Wood
06badeffa3 pikpak: fix uploads fail with "aws-chunked encoding is not supported" error
This downgrades the AWS SDK slightly (this is still an upgrade from
rclone v1.69.3) to work around a breakage in the upstream SDK when
used with pikpak. This isn't a long term solution - either they will
fix it upstream or we will implement a workaround.

See: https://github.com/aws/aws-sdk-go-v2/issues/3007
See: #8629
2025-06-26 16:58:02 +01:00
Nick Craig-Wood
eb71d1be18 Start v1.70.2-DEV development 2025-06-25 16:40:16 +01:00
Nick Craig-Wood
7506a3c84c docs: Remove Warp as a sponsor 2025-06-25 16:38:25 +01:00
Nick Craig-Wood
831abd3406 docs: add files.com as a Gold sponsor 2025-06-25 16:38:25 +01:00
Nick Craig-Wood
9c08cd80c7 docs: add links to SecureBuild docker image 2025-06-25 16:38:25 +01:00
Nick Craig-Wood
948db193a2 Version v1.70.1 2025-06-19 11:48:30 +01:00
Ed Craig-Wood
72bc3f5079 docs: DOI grammar error 2025-06-19 11:36:27 +01:00
albertony
bf8a428fbd docs: lib/transform: cleanup formatting 2025-06-19 11:36:27 +01:00
albertony
05cc6f829b lib/transform: avoid empty charmap entry 2025-06-19 11:36:27 +01:00
jinjingroad
af73833773 chore: fix function name
Signed-off-by: jinjingroad <jinjingroad@sina.com>
2025-06-19 11:36:27 +01:00
Nick Craig-Wood
3167a63780 convmv: fix spurious "error running command echo" on Windows
Before this change the help for convmv was generated by running the
examples each time rclone started up. Unfortunately this involved
running the echo command which did not work on Windows.

This pre-generates the help into `transform.md` and embeds it. It can
be re-generated with `go generate` which is a better solution.

See: https://forum.rclone.org/t/invoke-of-1-70-0-complains-of-echo-not-found/51618
2025-06-19 11:36:27 +01:00
Ed Craig-Wood
1d9795daa6 docs: client-credentials is not supported by all backends 2025-06-19 11:36:27 +01:00
Nick Craig-Wood
03ea89adf0 Start v1.70.1-DEV development 2025-06-19 11:35:18 +01:00
72 changed files with 1827 additions and 329 deletions

MANUAL.html generated

@@ -81,7 +81,7 @@
<header id="title-block-header"> <header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1> <h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p> <p class="author">Nick Craig-Wood</p>
<p class="date">Jun 17, 2025</p> <p class="date">Jul 09, 2025</p>
</header> </header>
<h1 id="name">NAME</h1> <h1 id="name">NAME</h1>
<p>rclone - manage files on cloud storage</p> <p>rclone - manage files on cloud storage</p>
@@ -222,8 +222,8 @@ Use &quot;rclone help backends&quot; for a list of supported services.
<li>Dropbox</li> <li>Dropbox</li>
<li>Enterprise File Fabric</li> <li>Enterprise File Fabric</li>
<li>Fastmail Files</li> <li>Fastmail Files</li>
<li>Files.com</li>
<li>FileLu Cloud Storage</li> <li>FileLu Cloud Storage</li>
<li>Files.com</li>
<li>FlashBlade</li> <li>FlashBlade</li>
<li>FTP</li> <li>FTP</li>
<li>Gofile</li> <li>Gofile</li>
@@ -426,6 +426,7 @@ choco install rclone</code></pre>
<p><a href="https://repology.org/project/rclone/versions"><img src="https://repology.org/badge/vertical-allrepos/rclone.svg?columns=3" alt="Packaging status" /></a></p> <p><a href="https://repology.org/project/rclone/versions"><img src="https://repology.org/badge/vertical-allrepos/rclone.svg?columns=3" alt="Packaging status" /></a></p>
<h2 id="docker">Docker installation</h2> <h2 id="docker">Docker installation</h2>
<p>The rclone developers maintain a <a href="https://hub.docker.com/r/rclone/rclone">docker image for rclone</a>.</p> <p>The rclone developers maintain a <a href="https://hub.docker.com/r/rclone/rclone">docker image for rclone</a>.</p>
<p><strong>Note:</strong> We also now offer a paid version of rclone with enterprise-grade security and zero CVEs through our partner <a href="https://securebuild.com/blog/introducing-securebuild">SecureBuild</a>. If you are interested, check out their website and the <a href="https://securebuild.com/images/rclone">Rclone SecureBuild Image</a>.</p>
<p>These images are built as part of the release process based on a minimal Alpine Linux.</p> <p>These images are built as part of the release process based on a minimal Alpine Linux.</p>
<p>The <code>:latest</code> tag will always point to the latest stable release. You can use the <code>:beta</code> tag to get the latest build from master. You can also use version tags, e.g. <code>:1.49.1</code>, <code>:1.49</code> or <code>:1</code>.</p> <p>The <code>:latest</code> tag will always point to the latest stable release. You can use the <code>:beta</code> tag to get the latest build from master. You can also use version tags, e.g. <code>:1.49.1</code>, <code>:1.49</code> or <code>:1</code>.</p>
<pre><code>$ docker pull rclone/rclone:latest <pre><code>$ docker pull rclone/rclone:latest
@@ -2682,15 +2683,15 @@ X-User-Defined </code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,command=echo&quot; <pre><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,command=echo&quot;
// Output: stories/The Quick Brown Fox!.txt</code></pre> // Output: stories/The Quick Brown Fox!.txt</code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{YYYYMMDD}&quot; <pre><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{YYYYMMDD}&quot;
// Output: stories/The Quick Brown Fox!-20250617</code></pre> // Output: stories/The Quick Brown Fox!-20250618</code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{macfriendlytime}&quot; <pre><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{macfriendlytime}&quot;
// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM</code></pre> // Output: stories/The Quick Brown Fox!-2025-06-18 0148PM</code></pre>
<pre><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,regex=[\\.\\w]/ab&quot; <pre><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,regex=[\\.\\w]/ab&quot;
// Output: ababababababab/ababab ababababab ababababab ababab!abababab</code></pre> // Output: ababababababab/ababab ababababab ababababab ababab!abababab</code></pre>
<p>Multiple transformations can be used in sequence, applied in the order they are specified on the command line.</p> <p>Multiple transformations can be used in sequence, applied in the order they are specified on the command line.</p>
<p>The <code>--name-transform</code> flag is also available in <code>sync</code>, <code>copy</code>, and <code>move</code>.</p> <p>The <code>--name-transform</code> flag is also available in <code>sync</code>, <code>copy</code>, and <code>move</code>.</p>
<h1 id="files-vs-directories">Files vs Directories</h1> <h1 id="files-vs-directories">Files vs Directories</h1>
<p>By default <code>--name-transform</code> will only apply to file names. The means only the leaf file name will be transformed. However some of the transforms would be better applied to the whole path or just directories. To choose which which part of the file path is affected some tags can be added to the <code>--name-transform</code></p> <p>By default <code>--name-transform</code> will only apply to file names. The means only the leaf file name will be transformed. However some of the transforms would be better applied to the whole path or just directories. To choose which which part of the file path is affected some tags can be added to the <code>--name-transform</code>.</p>
<table> <table>
<colgroup> <colgroup>
<col style="width: 50%" /> <col style="width: 50%" />
@@ -2718,7 +2719,7 @@ X-User-Defined </code></pre>
</tbody> </tbody>
</table> </table>
<p>This is used by adding the tag into the transform name like this: <code>--name-transform file,prefix=ABC</code> or <code>--name-transform dir,prefix=DEF</code>.</p> <p>This is used by adding the tag into the transform name like this: <code>--name-transform file,prefix=ABC</code> or <code>--name-transform dir,prefix=DEF</code>.</p>
<p>For some conversions using all is more likely to be useful, for example <code>--name-transform all,nfc</code></p> <p>For some conversions using all is more likely to be useful, for example <code>--name-transform all,nfc</code>.</p>
<p>Note that <code>--name-transform</code> may not add path separators <code>/</code> to the name. This will cause an error.</p> <p>Note that <code>--name-transform</code> may not add path separators <code>/</code> to the name. This will cause an error.</p>
<h1 id="ordering-and-conflicts">Ordering and Conflicts</h1> <h1 id="ordering-and-conflicts">Ordering and Conflicts</h1>
<ul> <ul>
@@ -2739,16 +2740,7 @@ X-User-Defined </code></pre>
</ul> </ul>
<h1 id="race-conditions-and-non-deterministic-behavior">Race Conditions and Non-Deterministic Behavior</h1> <h1 id="race-conditions-and-non-deterministic-behavior">Race Conditions and Non-Deterministic Behavior</h1>
<p>Some transformations, such as <code>replace=old:new</code>, may introduce conflicts where multiple source files map to the same destination name. This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these. * If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic. * Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.</p> <p>Some transformations, such as <code>replace=old:new</code>, may introduce conflicts where multiple source files map to the same destination name. This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these. * If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic. * Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.</p>
<ul> <p>To minimize risks, users should: * Carefully review transformations that may introduce conflicts. * Use <code>--dry-run</code> to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations). * Avoid transformations that cause multiple distinct source files to map to the same destination name. * Consider disabling concurrency with <code>--transfers=1</code> if necessary. * Certain transformations (e.g. <code>prefix</code>) will have a multiplying effect every time they are used. Avoid these when using <code>bisync</code>.</p>
<li>To minimize risks, users should:
<ul>
<li>Carefully review transformations that may introduce conflicts.</li>
<li>Use <code>--dry-run</code> to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).</li>
<li>Avoid transformations that cause multiple distinct source files to map to the same destination name.</li>
<li>Consider disabling concurrency with <code>--transfers=1</code> if necessary.</li>
<li>Certain transformations (e.g. <code>prefix</code>) will have a multiplying effect every time they are used. Avoid these when using <code>bisync</code>.</li>
</ul></li>
</ul>
<pre><code>rclone convmv dest:path --name-transform XXX [flags]</code></pre> <pre><code>rclone convmv dest:path --name-transform XXX [flags]</code></pre>
<h2 id="options-48">Options</h2> <h2 id="options-48">Options</h2>
<pre><code> --create-empty-src-dirs Create empty source dirs on destination after move <pre><code> --create-empty-src-dirs Create empty source dirs on destination after move
@@ -8266,7 +8258,8 @@ y/n/s/!/q&gt; n</code></pre>
<pre><code>--log-file rclone.log --log-level DEBUG --windows-event-log ERROR</code></pre> <pre><code>--log-file rclone.log --log-level DEBUG --windows-event-log ERROR</code></pre>
<p>This option is only supported Windows platforms.</p> <p>This option is only supported Windows platforms.</p>
<h3 id="use-json-log">--use-json-log</h3> <h3 id="use-json-log">--use-json-log</h3>
<p>This switches the log format to JSON for rclone. The fields of JSON log are <code>level</code>, <code>msg</code>, <code>source</code>, <code>time</code>. The JSON logs will be printed on a single line, but are shown expanded here for clarity.</p> <p>This switches the log format to JSON. The log messages are then streamed as individual JSON objects, with fields: <code>level</code>, <code>msg</code>, <code>source</code>, and <code>time</code>. The resulting format is what is sometimes referred to as <a href="https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON">newline-delimited JSON</a> (NDJSON), or JSON Lines (JSONL). This is well suited for processing by traditional line-oriented tools and shell pipelines, but a complete log file is not strictly valid JSON and needs a parser that can handle it.</p>
<p>The JSON logs will be printed on a single line, but are shown expanded here for clarity.</p>
<div class="sourceCode" id="cb654"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb654-1"><a href="#cb654-1" aria-hidden="true"></a><span class="fu">{</span></span> <div class="sourceCode" id="cb654"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb654-1"><a href="#cb654-1" aria-hidden="true"></a><span class="fu">{</span></span>
<span id="cb654-2"><a href="#cb654-2" aria-hidden="true"></a> <span class="dt">&quot;time&quot;</span><span class="fu">:</span> <span class="st">&quot;2025-05-13T17:30:51.036237518+01:00&quot;</span><span class="fu">,</span></span> <span id="cb654-2"><a href="#cb654-2" aria-hidden="true"></a> <span class="dt">&quot;time&quot;</span><span class="fu">:</span> <span class="st">&quot;2025-05-13T17:30:51.036237518+01:00&quot;</span><span class="fu">,</span></span>
<span id="cb654-3"><a href="#cb654-3" aria-hidden="true"></a> <span class="dt">&quot;level&quot;</span><span class="fu">:</span> <span class="st">&quot;debug&quot;</span><span class="fu">,</span></span> <span id="cb654-3"><a href="#cb654-3" aria-hidden="true"></a> <span class="dt">&quot;level&quot;</span><span class="fu">:</span> <span class="st">&quot;debug&quot;</span><span class="fu">,</span></span>
@@ -13239,7 +13232,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--tpslimit float Limit HTTP transactions per second to this --tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar --use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.70.0&quot;)</code></pre> --user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.70.3&quot;)</code></pre>
<h2 id="performance">Performance</h2> <h2 id="performance">Performance</h2>
<p>Flags helpful for increasing performance.</p> <p>Flags helpful for increasing performance.</p>
<pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi) <pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -13661,6 +13654,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to --ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s) --ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don&#39;t check the upload is OK --ftp-no-check-upload Don&#39;t check the upload is OK
@@ -21368,6 +21362,7 @@ y/e/d&gt; y</code></pre>
<h4 id="box-client-credentials">--box-client-credentials</h4> <h4 id="box-client-credentials">--box-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -22622,6 +22617,7 @@ y/e/d&gt; y</code></pre>
<h4 id="sharefile-client-credentials">--sharefile-client-credentials</h4> <h4 id="sharefile-client-credentials">--sharefile-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -23395,7 +23391,7 @@ upstreams = &quot;My Drive=My Drive:&quot; &quot;Test Drive=Test Drive:&quot;</c
<p>See the <a href="https://rclone.org/docs/#metadata">metadata</a> docs for more info.</p> <p>See the <a href="https://rclone.org/docs/#metadata">metadata</a> docs for more info.</p>
<h1 id="doi">DOI</h1> <h1 id="doi">DOI</h1>
<p>The DOI remote is a read only remote for reading files from digital object identifiers (DOI).</p> <p>The DOI remote is a read only remote for reading files from digital object identifiers (DOI).</p>
<p>Currently, the DOI backend supports supports DOIs hosted with: - <a href="https://inveniosoftware.org/products/rdm/">InvenioRDM</a> - <a href="https://zenodo.org">Zenodo</a> - <a href="https://data.caltech.edu">CaltechDATA</a> - <a href="https://inveniosoftware.org/showcase/">Other InvenioRDM repositories</a> - <a href="https://dataverse.org">Dataverse</a> - <a href="https://dataverse.harvard.edu">Harvard Dataverse</a> - <a href="https://dataverse.org/installations">Other Dataverse repositories</a></p> <p>Currently, the DOI backend supports DOIs hosted with: - <a href="https://inveniosoftware.org/products/rdm/">InvenioRDM</a> - <a href="https://zenodo.org">Zenodo</a> - <a href="https://data.caltech.edu">CaltechDATA</a> - <a href="https://inveniosoftware.org/showcase/">Other InvenioRDM repositories</a> - <a href="https://dataverse.org">Dataverse</a> - <a href="https://dataverse.harvard.edu">Harvard Dataverse</a> - <a href="https://dataverse.org/installations">Other Dataverse repositories</a></p>
<p>Paths are specified as <code>remote:path</code></p> <p>Paths are specified as <code>remote:path</code></p>
<p>Paths may be as deep as required, e.g. <code>remote:directory/subdirectory</code>.</p> <p>Paths may be as deep as required, e.g. <code>remote:directory/subdirectory</code>.</p>
<h2 id="configuration-12">Configuration</h2> <h2 id="configuration-12">Configuration</h2>
@@ -23756,6 +23752,7 @@ y/e/d&gt; y</code></pre>
<h4 id="dropbox-client-credentials">--dropbox-client-credentials</h4> <h4 id="dropbox-client-credentials">--dropbox-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -24724,6 +24721,16 @@ rclone lsf :ftp,host=speedtest.tele2.net,user=anonymous,pass=IXs2wc8OJOz7SYLBk47
<li>Type: string</li> <li>Type: string</li>
<li>Required: false</li> <li>Required: false</li>
</ul> </ul>
<h4 id="ftp-http-proxy">--ftp-http-proxy</h4>
<p>URL for HTTP CONNECT proxy</p>
<p>Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.</p>
<p>Properties:</p>
<ul>
<li>Config: http_proxy</li>
<li>Env Var: RCLONE_FTP_HTTP_PROXY</li>
<li>Type: string</li>
<li>Required: false</li>
</ul>
<h4 id="ftp-no-check-upload">--ftp-no-check-upload</h4> <h4 id="ftp-no-check-upload">--ftp-no-check-upload</h4>
<p>Don't check the upload is OK</p> <p>Don't check the upload is OK</p>
<p>Normally rclone will try to check the upload exists after it has uploaded a file to make sure the size and modification time are as expected.</p> <p>Normally rclone will try to check the upload exists after it has uploaded a file to make sure the size and modification time are as expected.</p>
@@ -25651,6 +25658,7 @@ ya29.c.c0ASRK0GbAFEewXD [truncated]</code></pre>
<h4 id="gcs-client-credentials">--gcs-client-credentials</h4> <h4 id="gcs-client-credentials">--gcs-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -26354,6 +26362,7 @@ trashed=false and &#39;c&#39; in parents</code></pre>
<h4 id="drive-client-credentials">--drive-client-credentials</h4> <h4 id="drive-client-credentials">--drive-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -27429,6 +27438,7 @@ y/e/d&gt; y</code></pre>
<h4 id="gphotos-client-credentials">--gphotos-client-credentials</h4> <h4 id="gphotos-client-credentials">--gphotos-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -27596,6 +27606,13 @@ y/e/d&gt; y</code></pre>
<p>Rclone cannot delete files anywhere except under <code>album</code>.</p> <p>Rclone cannot delete files anywhere except under <code>album</code>.</p>
<h3 id="deleting-albums">Deleting albums</h3> <h3 id="deleting-albums">Deleting albums</h3>
<p>The Google Photos API does not support deleting albums - see <a href="https://issuetracker.google.com/issues/135714733">bug #135714733</a>.</p> <p>The Google Photos API does not support deleting albums - see <a href="https://issuetracker.google.com/issues/135714733">bug #135714733</a>.</p>
<h2 id="making-your-own-client_id-1">Making your own client_id</h2>
<p>When you use rclone with Google Photos in its default configuration you are using rclone's client_id. This is shared between all rclone users. Google sets a global rate limit on the number of queries per second that each client_id can make.</p>
<p>If there is a problem with this client_id (e.g. the quota is too low or the client_id stops working) then you can make your own.</p>
<p>Please follow the steps in <a href="https://rclone.org/drive/#making-your-own-client-id">the Google Drive docs</a>. You will need these scopes instead of the Drive ones detailed there:</p>
<pre><code>https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata</code></pre>
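<p>Once created, the new credentials can be used in an existing remote by adding them to its section of the config file. A minimal sketch, assuming a remote named <code>gphotos</code> and placeholder values:</p>
<pre><code>[gphotos]
type = google photos
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET</code></pre>
<p>After changing <code>client_id</code>/<code>client_secret</code> you will need to re-authorize the remote, e.g. with <code>rclone config reconnect gphotos:</code>.</p>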
<h1 id="hasher">Hasher</h1>
<p>Hasher is a special overlay backend to create remotes which handle checksums for other remotes. Its main functions include:</p>
<ul>
<li>Emulate hash types unimplemented by backends</li>
<li>Cache checksums to help with slow hashing of large local or (S)FTP files</li>
<li>Warm up checksum cache from external SUM files</li>
</ul>
<h2 id="getting-started-1">Getting started</h2>
@@ -28151,6 +28168,7 @@ rclone lsd remote:/users/test/path</code></pre>
<h4 id="hidrive-client-credentials">--hidrive-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -29403,6 +29421,7 @@ y/e/d&gt; y</code></pre>
<h4 id="jottacloud-client-credentials">--jottacloud-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -30162,6 +30181,7 @@ y/e/d&gt; y</code></pre>
<h4 id="mailru-client-credentials">--mailru-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -32064,7 +32084,10 @@ y/e/d&gt; y</code></pre>
<h4 id="creating-client-id-for-onedrive-personal">Creating Client ID for OneDrive Personal</h4>
<p>To create your own Client ID, please follow these steps:</p>
<ol type="1">
<li>Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the <code>Add</code> menu click <code>App registration</code>.
<ul>
<li>If you have not created an Azure account, you will be prompted to. This is free, but you need to provide a phone number, address, and credit card for identity verification.</li>
</ul></li>
<li>Enter a name for your app, choose account type <code>Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)</code>, select <code>Web</code> in <code>Redirect URI</code>, then type (do not copy and paste) <code>http://localhost:53682/</code> and click Register. Copy and keep the <code>Application (client) ID</code> under the app name for later use.</li>
<li>Under <code>manage</code> select <code>Certificates &amp; secrets</code>, click <code>New client secret</code>. Enter a description (can be anything) and set <code>Expires</code> to 24 months. Copy and keep that secret <em>Value</em> for later use (you <em>won't</em> be able to see this value afterwards).</li>
<li>Under <code>manage</code> select <code>API permissions</code>, click <code>Add a permission</code> and select <code>Microsoft Graph</code> then select <code>delegated permissions</code>.</li>
@@ -32302,6 +32325,7 @@ y/e/d&gt; y</code></pre>
<h4 id="onedrive-client-credentials">--onedrive-client-credentials</h4>
<p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p>
<ul>
<li>Config: client_credentials</li>
@@ -32635,75 +32659,75 @@ rclone rc vfs/refresh recursive=true</code></pre>
<p>Permissions are also supported, if <code>--onedrive-metadata-permissions</code> is set. The accepted values for <code>--onedrive-metadata-permissions</code> are "<code>read</code>", "<code>write</code>", "<code>read,write</code>", and "<code>off</code>" (the default). "<code>write</code>" supports adding new permissions, updating the "role" of existing permissions, and removing permissions. Updating and removing require the Permission ID to be known, so it is recommended to use "<code>read,write</code>" instead of "<code>write</code>" if you wish to update/remove permissions.</p>
<p>Permissions are read/written in JSON format using the same schema as the <a href="https://learn.microsoft.com/en-us/onedrive/developer/rest-api/resources/permission?view=odsp-graph-online">OneDrive API</a>, which differs slightly between OneDrive Personal and Business.</p>
<p>Example for OneDrive Personal:</p>
<div class="sourceCode" id="cb1406"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1406-1"><a href="#cb1406-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1406-2"><a href="#cb1406-2" aria-hidden="true"></a>  <span class="fu">{</span></span>
<span id="cb1406-3"><a href="#cb1406-3" aria-hidden="true"></a>    <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;1234567890ABC!123&quot;</span><span class="fu">,</span></span>
<span id="cb1406-4"><a href="#cb1406-4" aria-hidden="true"></a>    <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-5"><a href="#cb1406-5" aria-hidden="true"></a>      <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-6"><a href="#cb1406-6" aria-hidden="true"></a>        <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1406-7"><a href="#cb1406-7" aria-hidden="true"></a>      <span class="fu">},</span></span>
<span id="cb1406-8"><a href="#cb1406-8" aria-hidden="true"></a>      <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1406-9"><a href="#cb1406-9" aria-hidden="true"></a>      <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1406-10"><a href="#cb1406-10" aria-hidden="true"></a>    <span class="fu">},</span></span>
<span id="cb1406-11"><a href="#cb1406-11" aria-hidden="true"></a>    <span class="dt">&quot;invitation&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-12"><a href="#cb1406-12" aria-hidden="true"></a>      <span class="dt">&quot;email&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1406-13"><a href="#cb1406-13" aria-hidden="true"></a>    <span class="fu">},</span></span>
<span id="cb1406-14"><a href="#cb1406-14" aria-hidden="true"></a>    <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1406-15"><a href="#cb1406-15" aria-hidden="true"></a>      <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://1drv.ms/t/s!1234567890ABC&quot;</span></span>
<span id="cb1406-16"><a href="#cb1406-16" aria-hidden="true"></a>    <span class="fu">},</span></span>
<span id="cb1406-17"><a href="#cb1406-17" aria-hidden="true"></a>    <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1406-18"><a href="#cb1406-18" aria-hidden="true"></a>      <span class="st">&quot;read&quot;</span></span>
<span id="cb1406-19"><a href="#cb1406-19" aria-hidden="true"></a>    <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1406-20"><a href="#cb1406-20" aria-hidden="true"></a>    <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;s!1234567890ABC&quot;</span></span>
<span id="cb1406-21"><a href="#cb1406-21" aria-hidden="true"></a>  <span class="fu">}</span></span>
<span id="cb1406-22"><a href="#cb1406-22" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<p>Example for OneDrive Business:</p>
<div class="sourceCode" id="cb1407"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1407-1"><a href="#cb1407-1" aria-hidden="true"></a><span class="ot">[</span></span>
<span id="cb1407-2"><a href="#cb1407-2" aria-hidden="true"></a>  <span class="fu">{</span></span>
<span id="cb1407-3"><a href="#cb1407-3" aria-hidden="true"></a>    <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;48d31887-5fad-4d73-a9f5-3c356e68a038&quot;</span><span class="fu">,</span></span>
<span id="cb1407-4"><a href="#cb1407-4" aria-hidden="true"></a>    <span class="dt">&quot;grantedToIdentities&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1407-5"><a href="#cb1407-5" aria-hidden="true"></a>      <span class="fu">{</span></span>
<span id="cb1407-6"><a href="#cb1407-6" aria-hidden="true"></a>        <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-7"><a href="#cb1407-7" aria-hidden="true"></a>          <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;ryan@contoso.com&quot;</span></span>
<span id="cb1407-8"><a href="#cb1407-8" aria-hidden="true"></a>        <span class="fu">},</span></span>
<span id="cb1407-9"><a href="#cb1407-9" aria-hidden="true"></a>        <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1407-10"><a href="#cb1407-10" aria-hidden="true"></a>        <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1407-11"><a href="#cb1407-11" aria-hidden="true"></a>      <span class="fu">}</span></span>
<span id="cb1407-12"><a href="#cb1407-12" aria-hidden="true"></a>    <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1407-13"><a href="#cb1407-13" aria-hidden="true"></a>    <span class="dt">&quot;link&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-14"><a href="#cb1407-14" aria-hidden="true"></a>      <span class="dt">&quot;type&quot;</span><span class="fu">:</span> <span class="st">&quot;view&quot;</span><span class="fu">,</span></span>
<span id="cb1407-15"><a href="#cb1407-15" aria-hidden="true"></a>      <span class="dt">&quot;scope&quot;</span><span class="fu">:</span> <span class="st">&quot;users&quot;</span><span class="fu">,</span></span>
<span id="cb1407-16"><a href="#cb1407-16" aria-hidden="true"></a> <span class="dt">&quot;webUrl&quot;</span><span class="fu">:</span> <span class="st">&quot;https://contoso.sharepoint.com/:w:/t/design/a577ghg9hgh737613bmbjf839026561fmzhsr85ng9f3hjck2t5s&quot;</span></span>
<span id="cb1407-17"><a href="#cb1407-17" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1407-18"><a href="#cb1407-18" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1407-19"><a href="#cb1407-19" aria-hidden="true"></a> <span class="st">&quot;read&quot;</span></span>
<span id="cb1407-20"><a href="#cb1407-20" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1407-21"><a href="#cb1407-21" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;u!LKj1lkdlals90j1nlkascl&quot;</span></span>
<span id="cb1407-22"><a href="#cb1407-22" aria-hidden="true"></a> <span class="fu">}</span><span class="ot">,</span></span>
<span id="cb1407-23"><a href="#cb1407-23" aria-hidden="true"></a> <span class="fu">{</span></span>
<span id="cb1407-24"><a href="#cb1407-24" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;5D33DD65C6932946&quot;</span><span class="fu">,</span></span>
<span id="cb1407-25"><a href="#cb1407-25" aria-hidden="true"></a> <span class="dt">&quot;grantedTo&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-26"><a href="#cb1407-26" aria-hidden="true"></a> <span class="dt">&quot;user&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1407-27"><a href="#cb1407-27" aria-hidden="true"></a> <span class="dt">&quot;displayName&quot;</span><span class="fu">:</span> <span class="st">&quot;John Doe&quot;</span><span class="fu">,</span></span>
<span id="cb1407-28"><a href="#cb1407-28" aria-hidden="true"></a> <span class="dt">&quot;id&quot;</span><span class="fu">:</span> <span class="st">&quot;efee1b77-fb3b-4f65-99d6-274c11914d12&quot;</span></span>
<span id="cb1407-29"><a href="#cb1407-29" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1407-30"><a href="#cb1407-30" aria-hidden="true"></a> <span class="dt">&quot;application&quot;</span><span class="fu">:</span> <span class="fu">{},</span></span>
<span id="cb1407-31"><a href="#cb1407-31" aria-hidden="true"></a> <span class="dt">&quot;device&quot;</span><span class="fu">:</span> <span class="fu">{}</span></span>
<span id="cb1407-32"><a href="#cb1407-32" aria-hidden="true"></a> <span class="fu">},</span></span>
<span id="cb1407-33"><a href="#cb1407-33" aria-hidden="true"></a> <span class="dt">&quot;roles&quot;</span><span class="fu">:</span> <span class="ot">[</span></span>
<span id="cb1407-34"><a href="#cb1407-34" aria-hidden="true"></a> <span class="st">&quot;owner&quot;</span></span>
<span id="cb1407-35"><a href="#cb1407-35" aria-hidden="true"></a> <span class="ot">]</span><span class="fu">,</span></span>
<span id="cb1407-36"><a href="#cb1407-36" aria-hidden="true"></a> <span class="dt">&quot;shareId&quot;</span><span class="fu">:</span> <span class="st">&quot;FWxc1lasfdbEAGM5fI7B67aB5ZMPDMmQ11U&quot;</span></span>
<span id="cb1407-37"><a href="#cb1407-37" aria-hidden="true"></a> <span class="fu">}</span></span>
<span id="cb1407-38"><a href="#cb1407-38" aria-hidden="true"></a><span class="ot">]</span></span></code></pre></div>
<p>To write permissions, pass in a "permissions" metadata key using this same format. The <a href="https://rclone.org/docs/#metadata-mapper"><code>--metadata-mapper</code></a> tool can be very helpful for this.</p>
<p>When adding permissions, an email address can be provided in the <code>User.ID</code> or <code>DisplayName</code> properties of <code>grantedTo</code> or <code>grantedToIdentities</code>. Alternatively, an ObjectID can be provided in <code>User.ID</code>. At least one valid recipient must be provided in order to add a permission for a user. Creating a Public Link is also supported, if <code>Link.Scope</code> is set to <code>"anonymous"</code>.</p>
<p>Example request to add a "read" permission with <code>--metadata-mapper</code>:</p>
<div class="sourceCode" id="cb1408"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb1408-1"><a href="#cb1408-1" aria-hidden="true"></a><span class="fu">{</span></span>
<span id="cb1408-2"><a href="#cb1408-2" aria-hidden="true"></a>  <span class="dt">&quot;Metadata&quot;</span><span class="fu">:</span> <span class="fu">{</span></span>
<span id="cb1408-3"><a href="#cb1408-3" aria-hidden="true"></a>    <span class="dt">&quot;permissions&quot;</span><span class="fu">:</span> <span class="st">&quot;[{</span><span class="ch">\&quot;</span><span class="st">grantedToIdentities</span><span class="ch">\&quot;</span><span class="st">:[{</span><span class="ch">\&quot;</span><span class="st">user</span><span class="ch">\&quot;</span><span class="st">:{</span><span class="ch">\&quot;</span><span class="st">id</span><span class="ch">\&quot;</span><span class="st">:</span><span class="ch">\&quot;</span><span class="st">ryan@contoso.com</span><span class="ch">\&quot;</span><span class="st">}}],</span><span class="ch">\&quot;</span><span class="st">roles</span><span class="ch">\&quot;</span><span class="st">:[</span><span class="ch">\&quot;</span><span class="st">read</span><span class="ch">\&quot;</span><span class="st">]}]&quot;</span></span>
<span id="cb1408-4"><a href="#cb1408-4" aria-hidden="true"></a>  <span class="fu">}</span></span>
<span id="cb1408-5"><a href="#cb1408-5" aria-hidden="true"></a><span class="fu">}</span></span></code></pre></div>
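<p>A <code>--metadata-mapper</code> program receives a JSON blob on stdin and prints the (possibly modified) blob on stdout; only the <code>Metadata</code> key of the output is used. The following Python sketch produces the request shown above (the file name, helper name, and <code>--map</code> flag are illustrative, not part of rclone). Note that the <code>permissions</code> value must itself be a JSON-encoded <em>string</em>:</p>

```python
#!/usr/bin/env python3
# Sketch of a --metadata-mapper program. rclone pipes a JSON object to
# stdin and reads the modified object back from stdout.
import json
import sys


def add_read_permission(blob, email):
    """Return a copy of the mapper input whose Metadata gains a 'read'
    permission for the given email address. The permissions value is a
    JSON-encoded string, as in the example above."""
    out = dict(blob)
    metadata = dict(out.get("Metadata") or {})
    metadata["permissions"] = json.dumps([{
        "grantedToIdentities": [{"user": {"id": email}}],
        "roles": ["read"],
    }])
    out["Metadata"] = metadata
    return out


if __name__ == "__main__" and sys.argv[1:] == ["--map"]:
    # Only map when invoked with the (illustrative) --map flag.
    print(json.dumps(add_read_permission(json.load(sys.stdin), "ryan@contoso.com")))
```

<p>Such a script could then be wired in with something like <code>rclone copy file.txt remote:dir --metadata --onedrive-metadata-permissions write --metadata-mapper "python3 mapper.py --map"</code> (paths and names illustrative).</p>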
<p>Note that adding a permission can fail if a conflicting permission already exists for the file/folder.</p> <p>Note that adding a permission can fail if a conflicting permission already exists for the file/folder.</p>
<p>To update an existing permission, include both the Permission ID and the new <code>roles</code> to be assigned. <code>roles</code> is the only property that can be changed.</p> <p>To update an existing permission, include both the Permission ID and the new <code>roles</code> to be assigned. <code>roles</code> is the only property that can be changed.</p>
<p>To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the <code>owner</code> role will be ignored, as it cannot be removed.</p> <p>To remove permissions, pass in a blob containing only the permissions you wish to keep (which can be empty, to remove all.) Note that the <code>owner</code> role will be ignored, as it cannot be removed.</p>
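<p>For example, to change an existing permission to the <code>write</code> role, the metadata blob could look like the following (the permission ID shown here is illustrative, not taken from a real listing):</p>
<pre><code>{
  &quot;permissions&quot;: &quot;[{\&quot;id\&quot;:\&quot;1234567890ABC\&quot;,\&quot;roles\&quot;:[\&quot;write\&quot;]}]&quot;
}</code></pre>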
@@ -35077,6 +35101,7 @@ y/e/d&gt; y</code></pre>
<h4 id="pcloud-client-credentials">--pcloud-client-credentials</h4> <h4 id="pcloud-client-credentials">--pcloud-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -35686,6 +35711,7 @@ y/e/d&gt; </code></pre>
<h4 id="premiumizeme-client-credentials">--premiumizeme-client-credentials</h4> <h4 id="premiumizeme-client-credentials">--premiumizeme-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -36077,6 +36103,7 @@ e/n/d/r/c/s/q&gt; q</code></pre>
<h4 id="putio-client-credentials">--putio-client-credentials</h4> <h4 id="putio-client-credentials">--putio-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -39099,6 +39126,7 @@ y/e/d&gt; y</code></pre>
<h4 id="yandex-client-credentials">--yandex-client-credentials</h4> <h4 id="yandex-client-credentials">--yandex-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -39322,6 +39350,7 @@ y/e/d&gt; </code></pre>
<h4 id="zoho-client-credentials">--zoho-client-credentials</h4> <h4 id="zoho-client-credentials">--zoho-client-credentials</h4>
<p>Use client credentials OAuth flow.</p> <p>Use client credentials OAuth flow.</p>
<p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p> <p>This will use the OAUTH2 client Credentials Flow as described in RFC 6749.</p>
<p>Note that this option is NOT supported by all backends.</p>
<p>Properties:</p> <p>Properties:</p>
<ul> <ul>
<li>Config: client_credentials</li> <li>Config: client_credentials</li>
@@ -40062,6 +40091,75 @@ $ tree /tmp/c
<li>"error": return an error based on option value</li> <li>"error": return an error based on option value</li>
</ul> </ul>
<h1 id="changelog-1">Changelog</h1> <h1 id="changelog-1">Changelog</h1>
<h2 id="v1.70.3---2025-07-09">v1.70.3 - 2025-07-09</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>check: Fix difference report (was reporting error counts) (albertony)</li>
<li>march: Fix deadlock when using <code>--no-traverse</code> (Nick Craig-Wood)</li>
<li>doc fixes (albertony, Nick Craig-Wood)</li>
</ul></li>
<li>Azure Blob
<ul>
<li>Fix server side copy error "requires exactly one scope" (Nick Craig-Wood)</li>
</ul></li>
<li>B2
<ul>
<li>Fix finding objects when using <code>--b2-version-at</code> (Davide Bizzarri)</li>
</ul></li>
<li>Linkbox
<ul>
<li>Fix upload error "user upload file not exist" (Nick Craig-Wood)</li>
</ul></li>
<li>Pikpak
<ul>
<li>Improve error handling for missing links and unrecoverable 500s (wiserain)</li>
</ul></li>
<li>WebDAV
<ul>
<li>Fix setting modtime to that of local object instead of remote (WeidiDeng)</li>
</ul></li>
</ul>
<h2 id="v1.70.2---2025-06-27">v1.70.2 - 2025-06-27</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>convmv: Make --dry-run logs less noisy (nielash)</li>
<li>sync: Avoid copying dir metadata to itself (nielash)</li>
<li>build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93 (dependabot[bot])</li>
<li>convmv: Fix moving to unicode-equivalent name (nielash)</li>
<li>log: Fix deadlock when using systemd logging (Nick Craig-Wood)</li>
<li>pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)</li>
<li>doc fixes (Ali Zein Yousuf, Nick Craig-Wood)</li>
</ul></li>
<li>Local
<ul>
<li>Fix --skip-links on Windows when skipping Junction points (Nick Craig-Wood)</li>
</ul></li>
<li>Combine
<ul>
<li>Fix directory not found errors with ListP interface (Nick Craig-Wood)</li>
</ul></li>
<li>Mega
<ul>
<li>Fix tls handshake failure (necaran)</li>
</ul></li>
<li>Pikpak
<ul>
<li>Fix uploads fail with "aws-chunked encoding is not supported" error (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.70.1---2025-06-19">v1.70.1 - 2025-06-19</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.70.0...v1.70.1">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>convmv: Fix spurious "error running command echo" on Windows (Nick Craig-Wood)</li>
<li>doc fixes (albertony, Ed Craig-Wood, jinjingroad)</li>
</ul></li>
</ul>
<h2 id="v1.70.0---2025-06-17">v1.70.0 - 2025-06-17</h2> <h2 id="v1.70.0---2025-06-17">v1.70.0 - 2025-06-17</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0">See commits</a></p> <p><a href="https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0">See commits</a></p>
<ul> <ul>

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual % rclone(1) User Manual
% Nick Craig-Wood % Nick Craig-Wood
% Jun 17, 2025 % Jul 09, 2025
# NAME # NAME
@@ -192,8 +192,8 @@ WebDAV or S3, that work out of the box.)
- Dropbox - Dropbox
- Enterprise File Fabric - Enterprise File Fabric
- Fastmail Files - Fastmail Files
- Files.com
- FileLu Cloud Storage - FileLu Cloud Storage
- Files.com
- FlashBlade - FlashBlade
- FTP - FTP
- Gofile - Gofile
@@ -512,6 +512,12 @@ package is here.
The rclone developers maintain a [docker image for rclone](https://hub.docker.com/r/rclone/rclone). The rclone developers maintain a [docker image for rclone](https://hub.docker.com/r/rclone/rclone).
**Note:** We also now offer a paid version of rclone with
enterprise-grade security and zero CVEs through our partner
[SecureBuild](https://securebuild.com/blog/introducing-securebuild).
If you are interested, check out their website and the [Rclone
SecureBuild Image](https://securebuild.com/images/rclone).
These images are built as part of the release process based on a These images are built as part of the release process based on a
minimal Alpine Linux. minimal Alpine Linux.
@@ -4452,12 +4458,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
``` ```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250617 // Output: stories/The Quick Brown Fox!-20250618
``` ```
``` ```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM // Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
``` ```
``` ```
@@ -4465,17 +4471,15 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\
// Output: ababababababab/ababab ababababab ababababab ababab!abababab // Output: ababababababab/ababab ababababab ababababab ababab!abababab
``` ```
Multiple transformations can be used in sequence, applied in the order they are specified on the command line. Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
The `--name-transform` flag is also available in `sync`, `copy`, and `move`. The `--name-transform` flag is also available in `sync`, `copy`, and `move`.
# Files vs Directories ## Files vs Directories
By default `--name-transform` will only apply to file names. This means only the leaf file name will be transformed. By default `--name-transform` will only apply to file names. This means only the leaf file name will be transformed.
However some of the transforms would be better applied to the whole path or just directories. However some of the transforms would be better applied to the whole path or just directories.
To choose which part of the file path is affected some tags can be added to the `--name-transform` To choose which part of the file path is affected some tags can be added to the `--name-transform`.
| Tag | Effect | | Tag | Effect |
|------|------| |------|------|
@@ -4485,11 +4489,11 @@ To choose which which part of the file path is affected some tags can be added t
This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`. This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`.
For some conversions using all is more likely to be useful, for example `--name-transform all,nfc` For some conversions using all is more likely to be useful, for example `--name-transform all,nfc`.
Note that `--name-transform` may not add path separators `/` to the name. This will cause an error. Note that `--name-transform` may not add path separators `/` to the name. This will cause an error.
# Ordering and Conflicts ## Ordering and Conflicts
* Transformations will be applied in the order specified by the user. * Transformations will be applied in the order specified by the user.
* If the `file` tag is in use (the default) then only the leaf name of files will be transformed. * If the `file` tag is in use (the default) then only the leaf name of files will be transformed.
@@ -4504,14 +4508,14 @@ user, allowing for intentional use cases (e.g., trimming one prefix before addin
* Users should be aware that certain combinations may lead to unexpected results and should verify * Users should be aware that certain combinations may lead to unexpected results and should verify
transformations using `--dry-run` before execution. transformations using `--dry-run` before execution.
# Race Conditions and Non-Deterministic Behavior ## Race Conditions and Non-Deterministic Behavior
Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name. Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name.
This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these. This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic. * If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results. * Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
* To minimize risks, users should: To minimize risks, users should:
* Carefully review transformations that may introduce conflicts. * Carefully review transformations that may introduce conflicts.
* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations). * Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name. * Avoid transformations that cause multiple distinct source files to map to the same destination name.
@@ -16328,9 +16332,16 @@ This option is only supported Windows platforms.
### --use-json-log ### ### --use-json-log ###
This switches the log format to JSON for rclone. The fields of JSON This switches the log format to JSON. The log messages are then
log are `level`, `msg`, `source`, `time`. The JSON logs will be streamed as individual JSON objects, with fields: `level`, `msg`, `source`,
printed on a single line, but are shown expanded here for clarity. and `time`. The resulting format is what is sometimes referred to as
[newline-delimited JSON](https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON)
(NDJSON), or JSON Lines (JSONL). This is well suited for processing by
traditional line-oriented tools and shell pipelines, but a complete log
file is not strictly valid JSON and needs a parser that can handle it.
The JSON logs will be printed on a single line, but are shown expanded
here for clarity.
```json ```json
{ {
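Because each log line is a complete JSON object, the stream can be post-processed line by line without a streaming JSON parser. A minimal sketch in Python (the log field values below are illustrative, not captured from a real run):

```python
import json

# Two sample NDJSON log lines, shaped like rclone's --use-json-log output
# (field values here are illustrative).
raw = "\n".join([
    '{"time":"2025-05-13T17:30:51.036237518+01:00","level":"error","msg":"upload failed","source":"operations.go:1234"}',
    '{"time":"2025-05-13T17:30:52.000000000+01:00","level":"info","msg":"transferred ok","source":"operations.go:1234"}',
])

def errors_only(text: str) -> list[str]:
    """Parse newline-delimited JSON and keep only error-level messages."""
    records = (json.loads(line) for line in text.splitlines() if line.strip())
    return [r["msg"] for r in records if r["level"] == "error"]

print(errors_only(raw))  # ['upload failed']
```

The same filtering could be done in a shell pipeline with a line-oriented JSON tool such as `jq`.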
@@ -22400,7 +22411,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this --tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar --use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0") --user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
``` ```
@@ -22882,6 +22893,7 @@ Backend-only flags (these can be set in the config file also).
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to --ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s) --ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don't check the upload is OK --ftp-no-check-upload Don't check the upload is OK
@@ -33549,6 +33561,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -35368,6 +35382,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -36631,7 +36647,7 @@ See the [metadata](https://rclone.org/docs/#metadata) docs for more info.
The DOI remote is a read only remote for reading files from digital object identifiers (DOI). The DOI remote is a read only remote for reading files from digital object identifiers (DOI).
Currently, the DOI backend supports supports DOIs hosted with: Currently, the DOI backend supports DOIs hosted with:
- [InvenioRDM](https://inveniosoftware.org/products/rdm/) - [InvenioRDM](https://inveniosoftware.org/products/rdm/)
- [Zenodo](https://zenodo.org) - [Zenodo](https://zenodo.org)
- [CaltechDATA](https://data.caltech.edu) - [CaltechDATA](https://data.caltech.edu)
@@ -37110,6 +37126,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -38548,6 +38566,20 @@ Properties:
- Type: string - Type: string
- Required: false - Required: false
#### --ftp-http-proxy
URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Properties:
- Config: http_proxy
- Env Var: RCLONE_FTP_HTTP_PROXY
- Type: string
- Required: false
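As a sketch, the new option can be set in a config file like this (the remote name and proxy URL are illustrative):

```
[myftp]
type = ftp
host = ftp.example.com
http_proxy = http://proxy.example.com:3128
```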
#### --ftp-no-check-upload #### --ftp-no-check-upload
Don't check the upload is OK Don't check the upload is OK
@@ -39591,6 +39623,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -40404,6 +40438,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -41939,6 +41975,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -42246,6 +42284,25 @@ Rclone cannot delete files anywhere except under `album`.
The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733). The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733).
## Making your own client_id
When you use rclone with Google photos in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit on the number of queries per
second that each client_id can do set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id).
You will need these scopes instead of the drive ones detailed:
```
https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata
```
# Hasher # Hasher
Hasher is a special overlay backend to create remotes which handle Hasher is a special overlay backend to create remotes which handle
@@ -43128,6 +43185,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -44697,6 +44756,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -45573,6 +45634,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -48306,7 +48369,8 @@ For example, you might see throttling.
To create your own Client ID, please follow these steps: To create your own Client ID, please follow these steps:
1. Open https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade and then click `New registration`. 1. Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the `Add` menu click `App registration`.
* If you have not created an Azure account, you will be prompted to. This is free, but you need to provide a phone number, address, and credit card for identity verification.
2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use. 2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use.
3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards). 3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards).
4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`. 4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`.
@@ -48559,6 +48623,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -52187,6 +52253,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -52988,6 +53056,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -53584,6 +53654,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -58004,6 +58076,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -58304,6 +58378,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749. This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -59071,6 +59147,54 @@ Options:
# Changelog # Changelog
## v1.70.3 - 2025-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3)
* Bug Fixes
* check: Fix difference report (was reporting error counts) (albertony)
* march: Fix deadlock when using `--no-traverse` (Nick Craig-Wood)
* doc fixes (albertony, Nick Craig-Wood)
* Azure Blob
* Fix server side copy error "requires exactly one scope" (Nick Craig-Wood)
* B2
* Fix finding objects when using `--b2-version-at` (Davide Bizzarri)
* Linkbox
* Fix upload error "user upload file not exist" (Nick Craig-Wood)
* Pikpak
* Improve error handling for missing links and unrecoverable 500s (wiserain)
* WebDAV
* Fix setting modtime to that of local object instead of remote (WeidiDeng)
## v1.70.2 - 2025-06-27
[See commits](https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2)
* Bug Fixes
* convmv: Make --dry-run logs less noisy (nielash)
* sync: Avoid copying dir metadata to itself (nielash)
* build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93 (dependabot[bot])
* convmv: Fix moving to unicode-equivalent name (nielash)
* log: Fix deadlock when using systemd logging (Nick Craig-Wood)
* pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)
* doc fixes (Ali Zein Yousuf, Nick Craig-Wood)
* Local
* Fix --skip-links on Windows when skipping Junction points (Nick Craig-Wood)
* Combine
* Fix directory not found errors with ListP interface (Nick Craig-Wood)
* Mega
* Fix tls handshake failure (necaran)
* Pikpak
* Fix uploads fail with "aws-chunked encoding is not supported" error (Nick Craig-Wood)
## v1.70.1 - 2025-06-19
[See commits](https://github.com/rclone/rclone/compare/v1.70.0...v1.70.1)
* Bug Fixes
* convmv: Fix spurious "error running command echo" on Windows (Nick Craig-Wood)
* doc fixes (albertony, Ed Craig-Wood, jinjingroad)
## v1.70.0 - 2025-06-17 ## v1.70.0 - 2025-06-17
[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0) [See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0)

MANUAL.txt generated

@@ -1,6 +1,6 @@
rclone(1) User Manual rclone(1) User Manual
Nick Craig-Wood Nick Craig-Wood
Jun 17, 2025 Jul 09, 2025
NAME NAME
@@ -179,8 +179,8 @@ S3, that work out of the box.)
- Dropbox - Dropbox
- Enterprise File Fabric - Enterprise File Fabric
- Fastmail Files - Fastmail Files
- Files.com
- FileLu Cloud Storage - FileLu Cloud Storage
- Files.com
- FlashBlade - FlashBlade
- FTP - FTP
- Gofile - Gofile
@@ -491,6 +491,10 @@ Docker installation
The rclone developers maintain a docker image for rclone. The rclone developers maintain a docker image for rclone.
Note: We also now offer a paid version of rclone with enterprise-grade
security and zero CVEs through our partner SecureBuild. If you are
interested, check out their website and the Rclone SecureBuild Image.
These images are built as part of the release process based on a minimal These images are built as part of the release process based on a minimal
Alpine Linux. Alpine Linux.
@@ -4145,10 +4149,10 @@ Examples:
// Output: stories/The Quick Brown Fox!.txt // Output: stories/The Quick Brown Fox!.txt
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}" rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250617 // Output: stories/The Quick Brown Fox!-20250618
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}" rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM // Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab" rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab // Output: ababababababab/ababab ababababab ababababab ababab!abababab
@@ -4164,7 +4168,7 @@ By default --name-transform will only apply to file names. The means
only the leaf file name will be transformed. However some of the only the leaf file name will be transformed. However some of the
transforms would be better applied to the whole path or just transforms would be better applied to the whole path or just
directories. To choose which part of the file path is affected directories. To choose which part of the file path is affected
some tags can be added to the --name-transform some tags can be added to the --name-transform.
----------------------------------------------------------------------- -----------------------------------------------------------------------
Tag Effect Tag Effect
@@ -4184,7 +4188,7 @@ This is used by adding the tag into the transform name like this:
--name-transform file,prefix=ABC or --name-transform dir,prefix=DEF. --name-transform file,prefix=ABC or --name-transform dir,prefix=DEF.
For some conversions using all is more likely to be useful, for example For some conversions using all is more likely to be useful, for example
--name-transform all,nfc --name-transform all,nfc.
Note that --name-transform may not add path separators / to the name. Note that --name-transform may not add path separators / to the name.
This will cause an error. This will cause an error.
@@ -4223,16 +4227,14 @@ be non-deterministic. * Running rclone check after a sync using such
transformations may erroneously report missing or differing files due to transformations may erroneously report missing or differing files due to
overwritten results. overwritten results.
- To minimize risks, users should: To minimize risks, users should: * Carefully review transformations that
- Carefully review transformations that may introduce conflicts. may introduce conflicts. * Use --dry-run to inspect changes before
- Use --dry-run to inspect changes before executing a sync (but executing a sync (but keep in mind that it won't show the effect of
keep in mind that it won't show the effect of non-deterministic non-deterministic transformations). * Avoid transformations that cause
transformations). multiple distinct source files to map to the same destination name. *
- Avoid transformations that cause multiple distinct source files Consider disabling concurrency with --transfers=1 if necessary. *
to map to the same destination name. Certain transformations (e.g. prefix) will have a multiplying effect
- Consider disabling concurrency with --transfers=1 if necessary. every time they are used. Avoid these when using bisync.
- Certain transformations (e.g. prefix) will have a multiplying
effect every time they are used. Avoid these when using bisync.
rclone convmv dest:path --name-transform XXX [flags] rclone convmv dest:path --name-transform XXX [flags]
@@ -15795,9 +15797,16 @@ This option is only supported Windows platforms.
--use-json-log --use-json-log
This switches the log format to JSON for rclone. The fields of JSON log This switches the log format to JSON. The log messages are then streamed
are level, msg, source, time. The JSON logs will be printed on a single as individual JSON objects, with fields: level, msg, source, and time.
line, but are shown expanded here for clarity. The resulting format is what is sometimes referred to as
newline-delimited JSON (NDJSON), or JSON Lines (JSONL). This is well
suited for processing by traditional line-oriented tools and shell
pipelines, but a complete log file is not strictly valid JSON and needs
a parser that can handle it.
The JSON logs will be printed on a single line, but are shown expanded
here for clarity.
{ {
"time": "2025-05-13T17:30:51.036237518+01:00", "time": "2025-05-13T17:30:51.036237518+01:00",
@@ -21961,7 +21970,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this --tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1) --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar --use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.0") --user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
Performance Performance
@@ -22413,6 +22422,7 @@ Backend-only flags (these can be set in the config file also).
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS) --ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to --ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s) --ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server --ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don't check the upload is OK --ftp-no-check-upload Don't check the upload is OK
@@ -32909,6 +32919,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -34747,6 +34759,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -35978,9 +35992,9 @@ DOI
The DOI remote is a read only remote for reading files from digital The DOI remote is a read only remote for reading files from digital
object identifiers (DOI). object identifiers (DOI).
Currently, the DOI backend supports supports DOIs hosted with: - Currently, the DOI backend supports DOIs hosted with: - InvenioRDM -
InvenioRDM - Zenodo - CaltechDATA - Other InvenioRDM repositories - Zenodo - CaltechDATA - Other InvenioRDM repositories - Dataverse -
Dataverse - Harvard Dataverse - Other Dataverse repositories Harvard Dataverse - Other Dataverse repositories
Paths are specified as remote:path Paths are specified as remote:path
@@ -36438,6 +36452,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -37853,6 +37869,20 @@ Properties:
- Type: string - Type: string
- Required: false - Required: false
--ftp-http-proxy
URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
verb.
Properties:
- Config: http_proxy
- Env Var: RCLONE_FTP_HTTP_PROXY
- Type: string
- Required: false
--ftp-no-check-upload --ftp-no-check-upload
Don't check the upload is OK Don't check the upload is OK
@@ -38891,6 +38921,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -39745,6 +39777,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -41311,6 +41345,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -41617,6 +41653,23 @@ Deleting albums
The Google Photos API does not support deleting albums - see bug The Google Photos API does not support deleting albums - see bug
#135714733. #135714733.
Making your own client_id
When you use rclone with Google photos in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit on the number of queries per second
that each client_id can do set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
Please follow the steps in the google drive docs. You will need these
scopes instead of the drive ones detailed:
https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata
Hasher Hasher
Hasher is a special overlay backend to create remotes which handle Hasher is a special overlay backend to create remotes which handle
@@ -42487,6 +42540,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -44163,6 +44218,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -45064,6 +45121,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -47830,8 +47889,11 @@ Creating Client ID for OneDrive Personal
To create your own Client ID, please follow these steps: To create your own Client ID, please follow these steps:
1. Open 1. Open
https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview
and then click New registration. and then under the Add menu click App registration.
- If you have not created an Azure account, you will be prompted
to. This is free, but you need to provide a phone number,
address, and credit card for identity verification.
2. Enter a name for your app, choose account type 2. Enter a name for your app, choose account type
Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox), Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox),
select Web in Redirect URI, then type (do not copy and paste) select Web in Redirect URI, then type (do not copy and paste)
@@ -48114,6 +48176,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -51840,6 +51904,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -52644,6 +52710,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -53228,6 +53296,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -57679,6 +57749,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -57974,6 +58046,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
Note that this option is NOT supported by all backends.
Properties: Properties:
- Config: client_credentials - Config: client_credentials
@@ -58734,6 +58808,63 @@ Options:
Changelog Changelog
v1.70.3 - 2025-07-09
See commits
- Bug Fixes
- check: Fix difference report (was reporting error counts)
(albertony)
- march: Fix deadlock when using --no-traverse (Nick Craig-Wood)
- doc fixes (albertony, Nick Craig-Wood)
- Azure Blob
- Fix server side copy error "requires exactly one scope" (Nick
Craig-Wood)
- B2
- Fix finding objects when using --b2-version-at (Davide Bizzarri)
- Linkbox
- Fix upload error "user upload file not exist" (Nick Craig-Wood)
- Pikpak
- Improve error handling for missing links and unrecoverable 500s
(wiserain)
- WebDAV
- Fix setting modtime to that of local object instead of remote
(WeidiDeng)
v1.70.2 - 2025-06-27
See commits
- Bug Fixes
- convmv: Make --dry-run logs less noisy (nielash)
- sync: Avoid copying dir metadata to itself (nielash)
- build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix
GHSA-vrw8-fxc6-2r93 (dependabot[bot])
- convmv: Fix moving to unicode-equivalent name (nielash)
- log: Fix deadlock when using systemd logging (Nick Craig-Wood)
- pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)
- doc fixes (Ali Zein Yousuf, Nick Craig-Wood)
- Local
- Fix --skip-links on Windows when skipping Junction points (Nick
Craig-Wood)
- Combine
- Fix directory not found errors with ListP interface (Nick
Craig-Wood)
- Mega
- Fix tls handshake failure (necaran)
- Pikpak
- Fix uploads fail with "aws-chunked encoding is not supported"
error (Nick Craig-Wood)
v1.70.1 - 2025-06-19
See commits
- Bug Fixes
- convmv: Fix spurious "error running command echo" on Windows
(Nick Craig-Wood)
- doc fixes (albertony, Ed Craig-Wood, jinjingroad)
v1.70.0 - 2025-06-17 v1.70.0 - 2025-06-17
See commits See commits

View File

@@ -39,6 +39,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/) * Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/) * Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files) * Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* FileLu [:page_facing_up:](https://rclone.org/filelu/)
* Files.com [:page_facing_up:](https://rclone.org/filescom/) * Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade) * FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
* FTP [:page_facing_up:](https://rclone.org/ftp/) * FTP [:page_facing_up:](https://rclone.org/ftp/)

View File

@@ -1 +1 @@
v1.70.0 v1.70.3

View File

@@ -72,6 +72,7 @@ const (
emulatorAccount = "devstoreaccount1" emulatorAccount = "devstoreaccount1"
emulatorAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==" emulatorAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1" emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1"
sasCopyValidity = time.Hour // how long SAS should last when doing server side copy
) )
var ( var (
@@ -559,6 +560,11 @@ type Fs struct {
pacer *fs.Pacer // To pace and retry the API calls pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency uploadToken *pacer.TokenDispenser // control concurrency
publicAccess container.PublicAccessType // Container Public Access Level publicAccess container.PublicAccessType // Container Public Access Level
// user delegation cache
userDelegationMu sync.Mutex
userDelegation *service.UserDelegationCredential
userDelegationExpiry time.Time
} }
// Object describes an azure object // Object describes an azure object
@@ -1688,6 +1694,38 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.deleteContainer(ctx, container) return f.deleteContainer(ctx, container)
} }
// Get a user delegation which is valid for at least sasCopyValidity
//
// This value is cached in f
func (f *Fs) getUserDelegation(ctx context.Context) (*service.UserDelegationCredential, error) {
f.userDelegationMu.Lock()
defer f.userDelegationMu.Unlock()
if f.userDelegation != nil && time.Until(f.userDelegationExpiry) > sasCopyValidity {
return f.userDelegation, nil
}
// Validity window
start := time.Now().UTC()
expiry := start.Add(2 * sasCopyValidity)
startStr := start.Format(time.RFC3339)
expiryStr := expiry.Format(time.RFC3339)
// Acquire user delegation key from the service client
info := service.KeyInfo{
Start: &startStr,
Expiry: &expiryStr,
}
userDelegationKey, err := f.svc.GetUserDelegationCredential(ctx, info, nil)
if err != nil {
return nil, fmt.Errorf("failed to get user delegation key: %w", err)
}
f.userDelegation = userDelegationKey
f.userDelegationExpiry = expiry
return f.userDelegation, nil
}
// getAuth gets auth to copy o. // getAuth gets auth to copy o.
// //
// tokenOK is used to signal that token based auth (Microsoft Entra // tokenOK is used to signal that token based auth (Microsoft Entra
@@ -1699,7 +1737,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// URL (not a SAS) and token will be empty. // URL (not a SAS) and token will be empty.
// //
// If tokenOK is true it may also return a token for the auth. // If tokenOK is true it may also return a token for the auth.
func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL string, token *string, err error) { func (o *Object) getAuth(ctx context.Context, noAuth bool) (srcURL string, err error) {
f := o.fs f := o.fs
srcBlobSVC := o.getBlobSVC() srcBlobSVC := o.getBlobSVC()
srcURL = srcBlobSVC.URL() srcURL = srcBlobSVC.URL()
@@ -1708,29 +1746,47 @@ func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL
case noAuth: case noAuth:
// If same storage account then no auth needed // If same storage account then no auth needed
case f.cred != nil: case f.cred != nil:
if !tokenOK { // Generate a User Delegation SAS URL using Azure AD credentials
return srcURL, token, errors.New("not supported: Microsoft Entra ID") userDelegationKey, err := f.getUserDelegation(ctx)
}
options := policy.TokenRequestOptions{}
accessToken, err := f.cred.GetToken(ctx, options)
if err != nil { if err != nil {
return srcURL, token, fmt.Errorf("failed to create access token: %w", err) return "", fmt.Errorf("sas creation: %w", err)
} }
token = &accessToken.Token
// Build the SAS values
perms := sas.BlobPermissions{Read: true}
container, containerPath := o.split()
start := time.Now().UTC()
expiry := start.Add(sasCopyValidity)
vals := sas.BlobSignatureValues{
StartTime: start,
ExpiryTime: expiry,
Permissions: perms.String(),
ContainerName: container,
BlobName: containerPath,
}
// Sign with the delegation key
queryParameters, err := vals.SignWithUserDelegation(userDelegationKey)
if err != nil {
return "", fmt.Errorf("signing SAS with user delegation failed: %w", err)
}
// Append the SAS to the URL
srcURL = srcBlobSVC.URL() + "?" + queryParameters.Encode()
case f.sharedKeyCred != nil: case f.sharedKeyCred != nil:
// Generate a short lived SAS URL if using shared key credentials // Generate a short lived SAS URL if using shared key credentials
expiry := time.Now().Add(time.Hour) expiry := time.Now().Add(sasCopyValidity)
sasOptions := blob.GetSASURLOptions{} sasOptions := blob.GetSASURLOptions{}
srcURL, err = srcBlobSVC.GetSASURL(sas.BlobPermissions{Read: true}, expiry, &sasOptions) srcURL, err = srcBlobSVC.GetSASURL(sas.BlobPermissions{Read: true}, expiry, &sasOptions)
if err != nil { if err != nil {
return srcURL, token, fmt.Errorf("failed to create SAS URL: %w", err) return srcURL, fmt.Errorf("failed to create SAS URL: %w", err)
} }
case f.anonymous || f.opt.SASURL != "": case f.anonymous || f.opt.SASURL != "":
// If using a SASURL or anonymous, no need for any extra auth // If using a SASURL or anonymous, no need for any extra auth
default: default:
return srcURL, token, errors.New("unknown authentication type") return srcURL, errors.New("unknown authentication type")
} }
return srcURL, token, nil return srcURL, nil
} }
// Do multipart parallel copy. // Do multipart parallel copy.
@@ -1751,7 +1807,7 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
o.fs = f o.fs = f
o.remote = remote o.remote = remote
srcURL, token, err := src.getAuth(ctx, true, false) srcURL, err := src.getAuth(ctx, false)
if err != nil { if err != nil {
return nil, fmt.Errorf("multipart copy: %w", err) return nil, fmt.Errorf("multipart copy: %w", err)
} }
@@ -1795,7 +1851,8 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
Count: partSize, Count: partSize,
}, },
// Specifies the authorization scheme and signature for the copy source. // Specifies the authorization scheme and signature for the copy source.
CopySourceAuthorization: token, // We use SAS URLs as this doesn't seem to work always
// CopySourceAuthorization: token,
// CPKInfo *blob.CPKInfo // CPKInfo *blob.CPKInfo
// CPKScopeInfo *blob.CPKScopeInfo // CPKScopeInfo *blob.CPKScopeInfo
} }
@@ -1865,7 +1922,7 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
dstBlobSVC := f.getBlobSVC(dstContainer, dstPath) dstBlobSVC := f.getBlobSVC(dstContainer, dstPath)
// Get the source auth - none needed for same storage account // Get the source auth - none needed for same storage account
srcURL, _, err := src.getAuth(ctx, false, f == src.fs) srcURL, err := src.getAuth(ctx, f == src.fs)
if err != nil { if err != nil {
return nil, fmt.Errorf("single part copy: source auth: %w", err) return nil, fmt.Errorf("single part copy: source auth: %w", err)
} }

View File

@@ -922,7 +922,7 @@ func (o *Object) setMetadata(resp *file.GetPropertiesResponse) {
} }
} }
// readMetaData gets the metadata if it hasn't already been fetched // getMetadata gets the metadata if it hasn't already been fetched
func (o *Object) getMetadata(ctx context.Context) error { func (o *Object) getMetadata(ctx context.Context) error {
resp, err := o.fileClient().GetProperties(ctx, nil) resp, err := o.fileClient().GetProperties(ctx, nil)
if err != nil { if err != nil {

View File

@@ -1673,6 +1673,21 @@ func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
return o.getMetaDataListing(ctx) return o.getMetaDataListing(ctx)
} }
} }
// If using versionAt we need to list to find the correct version.
if o.fs.opt.VersionAt.IsSet() {
info, err := o.getMetaDataListing(ctx)
if err != nil {
return nil, err
}
if info.Action == "hide" {
// Return object not found error if the current version is deleted.
return nil, fs.ErrorObjectNotFound
}
return info, nil
}
_, info, err = o.getOrHead(ctx, "HEAD", nil) _, info, err = o.getOrHead(ctx, "HEAD", nil)
return info, err return info, err
} }

View File

@@ -446,14 +446,14 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
t.Run("List", func(t *testing.T) { t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want) fstest.CheckListing(t, f, test.want)
}) })
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) { t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName) gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr) assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil { if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size()) assert.Equal(t, test.wantSize, gotObj.Size())
// } }
//}) })
}) })
} }
}) })

View File

@@ -858,7 +858,7 @@ func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) e
} }
return wrappedCallback(entries) return wrappedCallback(entries)
} }
return listP(ctx, dir, wrappedCallback) return listP(ctx, uRemote, wrappedCallback)
} }
// ListR lists the objects and directories of the Fs starting // ListR lists the objects and directories of the Fs starting

View File

@@ -617,16 +617,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
case 1: case 1:
// upload file using link from first step // upload file using link from first step
var res *http.Response var res *http.Response
var location string
// Check to see if we are being redirected
opts := &rest.Opts{
Method: "HEAD",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
NoRedirect: true,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)
return o.fs.shouldRetry(ctx, res, err)
})
if res != nil {
location = res.Header.Get("Location")
if location != "" {
// set the URL to the new Location
opts.RootURL = location
err = nil
}
}
if err != nil {
return fmt.Errorf("head upload URL: %w", err)
}
file := io.MultiReader(bytes.NewReader(first10mBytes), in) file := io.MultiReader(bytes.NewReader(first10mBytes), in)
opts := &rest.Opts{ opts.Method = "PUT"
Method: "PUT", opts.Body = file
RootURL: getFirstStepResult.Data.SignURL, opts.ContentLength = &size
Options: options,
Body: file,
ContentLength: &size,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts) res, err = o.fs.srv.Call(ctx, opts)

View File

@@ -1201,7 +1201,15 @@ func (o *Object) Storable() bool {
o.fs.objectMetaMu.RLock() o.fs.objectMetaMu.RLock()
mode := o.mode mode := o.mode
o.fs.objectMetaMu.RUnlock() o.fs.objectMetaMu.RUnlock()
if mode&os.ModeSymlink != 0 && !o.fs.opt.TranslateSymlinks {
// On Windows items with os.ModeIrregular are likely Junction
// points so we treat them as symlinks for the purpose of ignoring them.
// https://github.com/golang/go/issues/73827
symlinkFlag := os.ModeSymlink
if runtime.GOOS == "windows" {
symlinkFlag |= os.ModeIrregular
}
if mode&symlinkFlag != 0 && !o.fs.opt.TranslateSymlinks {
if !o.fs.opt.SkipSymlinks { if !o.fs.opt.SkipSymlinks {
fs.Logf(o, "Can't follow symlink without -L/--copy-links") fs.Logf(o, "Can't follow symlink without -L/--copy-links")
} }

View File

@@ -17,9 +17,11 @@ Improvements:
import ( import (
"context" "context"
"crypto/tls"
"errors" "errors"
"fmt" "fmt"
"io" "io"
"net/http"
"path" "path"
"slices" "slices"
"strings" "strings"
@@ -216,7 +218,25 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
defer megaCacheMu.Unlock() defer megaCacheMu.Unlock()
srv := megaCache[opt.User] srv := megaCache[opt.User]
if srv == nil { if srv == nil {
srv = mega.New().SetClient(fshttp.NewClient(ctx)) // srv = mega.New().SetClient(fshttp.NewClient(ctx))
// Workaround for Mega's use of insecure cipher suites which are no longer supported by default since Go 1.22.
// Relevant issues:
// https://github.com/rclone/rclone/issues/8565
// https://github.com/meganz/webclient/issues/103
clt := fshttp.NewClient(ctx)
clt.Transport = fshttp.NewTransportCustom(ctx, func(t *http.Transport) {
var ids []uint16
// Read default ciphers
for _, cs := range tls.CipherSuites() {
ids = append(ids, cs.ID)
}
// Insecure but Mega uses TLS_RSA_WITH_AES_128_GCM_SHA256 for storage endpoints
// (e.g. https://gfs302n114.userstorage.mega.co.nz) as of June 18, 2025.
t.TLSClientConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
})
srv = mega.New().SetClient(clt)
srv.SetRetries(ci.LowLevelRetries) // let mega do the low level retries srv.SetRetries(ci.LowLevelRetries) // let mega do the low level retries
srv.SetHTTPS(opt.UseHTTPS) srv.SetHTTPS(opt.UseHTTPS)
srv.SetLogger(func(format string, v ...any) { srv.SetLogger(func(format string, v ...any) {

View File

@@ -155,6 +155,7 @@ func (f *Fs) getFile(ctx context.Context, ID string) (info *api.File, err error)
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info) resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
if err == nil && !info.Links.ApplicationOctetStream.Valid() { if err == nil && !info.Links.ApplicationOctetStream.Valid() {
time.Sleep(5 * time.Second)
return true, errors.New("no link") return true, errors.New("no link")
} }
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(ctx, resp, err)

View File

@@ -467,6 +467,11 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
// when a zero-byte file was uploaded with an invalid captcha token // when a zero-byte file was uploaded with an invalid captcha token
f.rst.captcha.Invalidate() f.rst.captcha.Invalidate()
return true, err return true, err
} else if strings.Contains(apiErr.Reason, "idx.shub.mypikpak.com") && apiErr.Code == 500 {
// internal server error: Post "http://idx.shub.mypikpak.com": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
// This typically happens when trying to retrieve a gcid for which no record exists.
// No retry is needed in this case.
return false, err
} }
} }

View File

@@ -1550,7 +1550,7 @@ func (o *Object) extraHeaders(ctx context.Context, src fs.ObjectInfo) map[string
extraHeaders := map[string]string{} extraHeaders := map[string]string{}
if o.fs.useOCMtime || o.fs.hasOCMD5 || o.fs.hasOCSHA1 { if o.fs.useOCMtime || o.fs.hasOCMD5 || o.fs.hasOCSHA1 {
if o.fs.useOCMtime { if o.fs.useOCMtime {
extraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", o.modTime.Unix()) extraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
} }
// Set one upload checksum // Set one upload checksum
// Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5 // Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5

View File

@@ -34,17 +34,15 @@ var commandDefinition = &cobra.Command{
Long: strings.ReplaceAll(` Long: strings.ReplaceAll(`
convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations. convmv supports advanced path name transformations for converting and renaming files and directories by applying prefixes, suffixes, and other alterations.
`+transform.SprintList()+` `+transform.Help()+`Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
The ¡--name-transform¡ flag is also available in ¡sync¡, ¡copy¡, and ¡move¡. The ¡--name-transform¡ flag is also available in ¡sync¡, ¡copy¡, and ¡move¡.
## Files vs Directories ## ## Files vs Directories
By default ¡--name-transform¡ will only apply to file names. This means only the leaf file name will be transformed. By default ¡--name-transform¡ will only apply to file names. This means only the leaf file name will be transformed.
However some of the transforms would be better applied to the whole path or just directories. However some of the transforms would be better applied to the whole path or just directories.
To choose which part of the file path is affected some tags can be added to the ¡--name-transform¡ To choose which part of the file path is affected some tags can be added to the ¡--name-transform¡.
| Tag | Effect | | Tag | Effect |
|------|------| |------|------|
@@ -54,11 +52,11 @@ To choose which which part of the file path is affected some tags can be added t
This is used by adding the tag into the transform name like this: ¡--name-transform file,prefix=ABC¡ or ¡--name-transform dir,prefix=DEF¡. This is used by adding the tag into the transform name like this: ¡--name-transform file,prefix=ABC¡ or ¡--name-transform dir,prefix=DEF¡.
For some conversions using all is more likely to be useful, for example ¡--name-transform all,nfc¡ For some conversions using all is more likely to be useful, for example ¡--name-transform all,nfc¡.
Note that ¡--name-transform¡ may not add path separators ¡/¡ to the name. This will cause an error. Note that ¡--name-transform¡ may not add path separators ¡/¡ to the name. This will cause an error.
## Ordering and Conflicts ## ## Ordering and Conflicts
* Transformations will be applied in the order specified by the user. * Transformations will be applied in the order specified by the user.
* If the ¡file¡ tag is in use (the default) then only the leaf name of files will be transformed. * If the ¡file¡ tag is in use (the default) then only the leaf name of files will be transformed.
@@ -73,14 +71,14 @@ user, allowing for intentional use cases (e.g., trimming one prefix before addin
* Users should be aware that certain combinations may lead to unexpected results and should verify
transformations using `--dry-run` before execution.
## Race Conditions and Non-Deterministic Behavior
Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name.
This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
To minimize risks, users should:
* Carefully review transformations that may introduce conflicts.
* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name.
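To make the collision concrete, here is a minimal Python sketch (an illustration only, not rclone's implementation) of how a `replace=old:new` style transform can map two distinct source names onto one destination name:

```python
def apply_replace(name: str, old: str, new: str) -> str:
    # Minimal stand-in for a replace=old:new name transform.
    return name.replace(old, new)

# Two distinct source files...
sources = ["draft.txt", "final.txt"]
# ...collide once the transform rewrites "draft" to "final".
dests = [apply_replace(n, "draft", "final") for n in sources]
# dests is now ["final.txt", "final.txt"]: which source "wins" at the
# destination depends on transfer order, so the result is non-deterministic.
```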


@@ -141,7 +141,7 @@ func TestTransform(t *testing.T) {
}
// const alphabet = "ƀɀɠʀҠԀڀڠݠހ߀ကႠᄀᄠᅀᆀᇠሠበዠጠᐠᑀᑠᒀᒠᓀᓠᔀᔠᕀᕠᖀᖠᗀᗠᘀᘠᙀᚠᛀកᠠᡀᣀᦀ᧠ᨠᯀᰀᴀ⇠⋀⍀⍠⎀⎠⏀␀─┠╀╠▀■◀◠☀☠♀♠⚀⚠⛀⛠✀✠❀➀➠⠠⡀⡠⢀⢠⣀⣠⤀⤠⥀⥠⦠⨠⩀⪀⪠⫠⬀⬠⭀ⰀⲀⲠⳀⴀⵀ⺠⻀㇀㐀㐠㑀㑠㒀㒠㓀㓠㔀㔠㕀㕠㖀㖠㗀㗠㘀㘠㙀㙠㚀㚠㛀㛠㜀㜠㝀㝠㞀㞠㟀㟠㠀㠠㡀㡠㢀㢠㣀㣠㤀㤠㥀㥠㦀㦠㧀㧠㨀㨠㩀㩠㪀㪠㫀㫠㬀㬠㭀㭠㮀㮠㯀㯠㰀㰠㱀㱠㲀㲠㳀㳠㴀㴠㵀㵠㶀㶠㷀㷠㸀㸠㹀㹠㺀㺠㻀㻠㼀㼠㽀㽠㾀㾠㿀㿠䀀䀠䁀䁠䂀䂠䃀䃠䄀䄠䅀䅠䆀䆠䇀䇠䈀䈠䉀䉠䊀䊠䋀䋠䌀䌠䍀䍠䎀䎠䏀䏠䐀䐠䑀䑠䒀䒠䓀䓠䔀䔠䕀䕠䖀䖠䗀䗠䘀䘠䙀䙠䚀䚠䛀䛠䜀䜠䝀䝠䞀䞠䟀䟠䠀䠠䡀䡠䢀䢠䣀䣠䤀䤠䥀䥠䦀䦠䧀䧠䨀䨠䩀䩠䪀䪠䫀䫠䬀䬠䭀䭠䮀䮠䯀䯠䰀䰠䱀䱠䲀䲠䳀䳠䴀䴠䵀䵠䶀䷀䷠一丠乀习亀亠什仠伀传佀你侀侠俀俠倀倠偀偠傀傠僀僠儀儠兀兠冀冠净几刀删剀剠劀加勀勠匀匠區占厀厠叀叠吀吠呀呠咀咠哀哠唀唠啀啠喀喠嗀嗠嘀嘠噀噠嚀嚠囀因圀圠址坠垀垠埀埠堀堠塀塠墀墠壀壠夀夠奀奠妀妠姀姠娀娠婀婠媀媠嫀嫠嬀嬠孀孠宀宠寀寠尀尠局屠岀岠峀峠崀崠嵀嵠嶀嶠巀巠帀帠幀幠庀庠廀廠开张彀彠往徠忀忠怀怠恀恠悀悠惀惠愀愠慀慠憀憠懀懠戀戠所扠技抠拀拠挀挠捀捠掀掠揀揠搀搠摀摠撀撠擀擠攀攠敀敠斀斠旀无昀映晀晠暀暠曀曠最朠杀杠枀枠柀柠栀栠桀桠梀梠检棠椀椠楀楠榀榠槀槠樀樠橀橠檀檠櫀櫠欀欠歀歠殀殠毀毠氀氠汀池沀沠泀泠洀洠浀浠涀涠淀淠渀渠湀湠満溠滀滠漀漠潀潠澀澠激濠瀀瀠灀灠炀炠烀烠焀焠煀煠熀熠燀燠爀爠牀牠犀犠狀狠猀猠獀獠玀玠珀珠琀琠瑀瑠璀璠瓀瓠甀甠畀畠疀疠痀痠瘀瘠癀癠皀皠盀盠眀眠着睠瞀瞠矀矠砀砠础硠碀碠磀磠礀礠祀祠禀禠秀秠稀稠穀穠窀窠竀章笀笠筀筠简箠節篠簀簠籀籠粀粠糀糠紀素絀絠綀綠緀締縀縠繀繠纀纠绀绠缀缠罀罠羀羠翀翠耀耠聀聠肀肠胀胠脀脠腀腠膀膠臀臠舀舠艀艠芀芠苀苠茀茠荀荠莀莠菀菠萀萠葀葠蒀蒠蓀蓠蔀蔠蕀蕠薀薠藀藠蘀蘠虀虠蚀蚠蛀蛠蜀蜠蝀蝠螀螠蟀蟠蠀蠠血衠袀袠裀裠褀褠襀襠覀覠觀觠言訠詀詠誀誠諀諠謀謠譀譠讀讠诀诠谀谠豀豠貀負賀賠贀贠赀赠趀趠跀跠踀踠蹀蹠躀躠軀軠輀輠轀轠辀辠迀迠退造遀遠邀邠郀郠鄀鄠酀酠醀醠釀釠鈀鈠鉀鉠銀銠鋀鋠錀錠鍀鍠鎀鎠鏀鏠鐀鐠鑀鑠钀钠铀铠销锠镀镠門閠闀闠阀阠陀陠隀隠雀雠需霠靀靠鞀鞠韀韠頀頠顀顠颀颠飀飠餀餠饀饠馀馠駀駠騀騠驀驠骀骠髀髠鬀鬠魀魠鮀鮠鯀鯠鰀鰠鱀鱠鲀鲠鳀鳠鴀鴠鵀鵠鶀鶠鷀鷠鸀鸠鹀鹠麀麠黀黠鼀鼠齀齠龀龠ꀀꀠꁀꁠꂀꂠꃀꃠꄀꄠꅀꅠꆀꆠꇀꇠꈀꈠꉀꉠꊀꊠꋀꋠꌀꌠꍀꍠꎀꎠꏀꏠꐀꐠꑀꑠ꒠ꔀꔠꕀꕠꖀꖠꗀꗠꙀꚠꛀ꜀꜠ꝀꞀꡀ測試_Русский___ě_áñ"
const alphabet = "abcdefg123456789"
var extras = []string{"apple", "banana", "appleappleapplebanana", "splitbananasplit"}
@@ -251,3 +251,25 @@ func detectEncoding(s string) string {
}
return "OTHER"
}
func TestUnicodeEquivalence(t *testing.T) {
	r := fstest.NewRun(t)
	defer r.Finalise()
	ctx := context.Background()
	r.Mkdir(ctx, r.Fremote)
	const remote = "Über"
	item := r.WriteObject(ctx, remote, "", t1)
	obj, err := r.Fremote.NewObject(ctx, remote) // can't use r.CheckRemoteListing here as it forces NFC
	require.NoError(t, err)
	require.NotEmpty(t, obj)
	err = transform.SetOptions(ctx, "all,nfc")
	require.NoError(t, err)
	err = sync.Transform(ctx, r.Fremote, true, true)
	assert.NoError(t, err)
	item.Path = norm.NFC.String(item.Path)
	r.CheckRemoteListing(t, []fstest.Item{item}, nil)
}


@@ -123,8 +123,8 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
{{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}}
{{< provider name="Fastmail Files" home="https://www.fastmail.com/" config="/webdav/#fastmail-files" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FileLu Cloud Storage" home="https://filelu.com/" config="/filelu/" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FlashBlade" home="https://www.purestorage.com/products/unstructured-data-storage.html" config="/s3/#pure-storage-flashblade" >}}
{{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}}
{{< provider name="Gofile" home="https://gofile.io/" config="/gofile/" >}}


@@ -390,6 +390,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials


@@ -5,6 +5,54 @@ description: "Rclone Changelog"
# Changelog
## v1.70.3 - 2025-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3)
* Bug Fixes
* check: Fix difference report (was reporting error counts) (albertony)
* march: Fix deadlock when using `--no-traverse` (Nick Craig-Wood)
* doc fixes (albertony, Nick Craig-Wood)
* Azure Blob
* Fix server side copy error "requires exactly one scope" (Nick Craig-Wood)
* B2
* Fix finding objects when using `--b2-version-at` (Davide Bizzarri)
* Linkbox
* Fix upload error "user upload file not exist" (Nick Craig-Wood)
* Pikpak
* Improve error handling for missing links and unrecoverable 500s (wiserain)
* WebDAV
* Fix setting modtime to that of local object instead of remote (WeidiDeng)
## v1.70.2 - 2025-06-27
[See commits](https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2)
* Bug Fixes
* convmv: Make --dry-run logs less noisy (nielash)
* sync: Avoid copying dir metadata to itself (nielash)
* build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93 (dependabot[bot])
* convmv: Fix moving to unicode-equivalent name (nielash)
* log: Fix deadlock when using systemd logging (Nick Craig-Wood)
* pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)
* doc fixes (Ali Zein Yousuf, Nick Craig-Wood)
* Local
* Fix --skip-links on Windows when skipping Junction points (Nick Craig-Wood)
* Combine
* Fix directory not found errors with ListP interface (Nick Craig-Wood)
* Mega
* Fix tls handshake failure (necaran)
* Pikpak
* Fix uploads fail with "aws-chunked encoding is not supported" error (Nick Craig-Wood)
## v1.70.1 - 2025-06-19
[See commits](https://github.com/rclone/rclone/compare/v1.70.0...v1.70.1)
* Bug Fixes
* convmv: Fix spurious "error running command echo" on Windows (Nick Craig-Wood)
* doc fixes (albertony, Ed Craig-Wood, jinjingroad)
## v1.70.0 - 2025-06-17
[See commits](https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0)


@@ -339,6 +339,7 @@ rclone [flags]
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don't check the upload is OK
@@ -997,7 +998,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect


@@ -221,12 +221,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250618
```
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
```
```
@@ -234,17 +234,15 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
```
Multiple transformations can be used in sequence, applied in the order they are specified on the command line.
The `--name-transform` flag is also available in `sync`, `copy`, and `move`.
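Since transformations are applied in the order given, a short Python sketch (hypothetical transforms, not rclone code) shows how the same two operations produce different names depending on sequence:

```python
def apply_all(name, transforms):
    # Apply each transform in the order given, feeding each result forward.
    for t in transforms:
        name = t(name)
    return name

lowercase = str.lower
add_prefix = lambda name: "ABC" + name

print(apply_all("File.TXT", [lowercase, add_prefix]))  # ABCfile.txt
print(apply_all("File.TXT", [add_prefix, lowercase]))  # abcfile.txt
```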
# Files vs Directories
By default `--name-transform` will only apply to file names. This means only the leaf file name will be transformed.
However, some of the transforms would be better applied to the whole path or just directories.
To choose which part of the file path is affected some tags can be added to the `--name-transform`.
| Tag | Effect |
|------|------|
@@ -254,11 +252,11 @@ To choose which which part of the file path is affected some tags can be added t
This is used by adding the tag into the transform name like this: `--name-transform file,prefix=ABC` or `--name-transform dir,prefix=DEF`.
For some conversions using all is more likely to be useful, for example `--name-transform all,nfc`.
Note that `--name-transform` may not add path separators `/` to the name. This will cause an error.
# Ordering and Conflicts
* Transformations will be applied in the order specified by the user.
* If the `file` tag is in use (the default) then only the leaf name of files will be transformed.
@@ -273,14 +271,14 @@ user, allowing for intentional use cases (e.g., trimming one prefix before addin
* Users should be aware that certain combinations may lead to unexpected results and should verify
transformations using `--dry-run` before execution.
# Race Conditions and Non-Deterministic Behavior
Some transformations, such as `replace=old:new`, may introduce conflicts where multiple source files map to the same destination name.
This can lead to race conditions when performing concurrent transfers. It is up to the user to anticipate these.
* If two files from the source are transformed into the same name at the destination, the final state may be non-deterministic.
* Running rclone check after a sync using such transformations may erroneously report missing or differing files due to overwritten results.
To minimize risks, users should:
* Carefully review transformations that may introduce conflicts.
* Use `--dry-run` to inspect changes before executing a sync (but keep in mind that it won't show the effect of non-deterministic transformations).
* Avoid transformations that cause multiple distinct source files to map to the same destination name.


@@ -384,6 +384,103 @@ Do not use single character names on Windows as it creates ambiguity with Window
drives' names, e.g.: remote called `C` is indistinguishable from `C` drive. Rclone
will always assume that single letter name refers to a drive.
## Adding global configuration to a remote
It is possible to add global configuration to the remote configuration which
will be applied just before the remote is created.
This can be done in two ways. The first is to use `override.var = value` in the
config file or the connection string for a temporary change, and the second is
to use `global.var = value` in the config file or connection string for a
permanent change.
This is explained fully below.
### override.var
This is used to override a global variable **just** for the duration of the
remote creation. It won't affect other remotes even if they are created at the
same time.
This is very useful for overriding networking config needed just for that
remote. For example, say you have a remote which needs `--no-check-certificate`
as it is running on test infrastructure without a proper certificate. You could
supply the `--no-check-certificate` flag to rclone, but this will affect **all**
the remotes. To make it just affect this remote you use an override. You could
put this in the config file:
```ini
[remote]
type = XXX
...
override.no_check_certificate = true
```
or use it in the connection string `remote,override.no_check_certificate=true:`
(or just `remote,override.no_check_certificate:`).
Note how the global flag name loses its initial `--` and gets `-` replaced with
`_` and gets an `override.` prefix.
Not all global variables make sense to be overridden like this as the config is
only applied during the remote creation. Here is a non-exhaustive list of ones
which might be useful:
- `bind_addr`
- `ca_cert`
- `client_cert`
- `client_key`
- `connect_timeout`
- `disable_http2`
- `disable_http_keep_alives`
- `dump`
- `expect_continue_timeout`
- `headers`
- `http_proxy`
- `low_level_retries`
- `max_connections`
- `no_check_certificate`
- `no_gzip`
- `timeout`
- `traffic_class`
- `use_cookies`
- `use_server_modtime`
- `user_agent`
An `override.var` will override all other config methods, but **just** for the
duration of the creation of the remote.
### global.var
This is used to set a global variable **for everything**. The global variable is
set just before the remote is created.
This is useful for parameters (eg sync parameters) which can't be set as an
`override`. For example, say you have a remote where you would always like to
use the `--checksum` flag. You could supply the `--checksum` flag to rclone on
every command line, but instead you could put this in the config file:
```ini
[remote]
type = XXX
...
global.checksum = true
```
or use it in the connection string `remote,global.checksum=true:` (or just
`remote,global.checksum:`). This is equivalent to using the `--checksum` flag.
Note how the global flag name loses its initial `--` and gets `-` replaced with
`_` and gets a `global.` prefix.
Any global variable can be set like this and it is exactly equivalent to using
the equivalent flag on the command line. This means it will affect all uses of
rclone.
If two remotes set the same global variable then the first one instantiated will
be overridden by the second one. A `global.var` will override all other config
methods when the remote is created.
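The flag-to-key renaming described above (drop the leading `--`, turn `-` into `_`, then add the `override.` or `global.` prefix) can be sketched in Python (an illustration of the naming rule, not code from rclone):

```python
def config_key(flag: str, prefix: str) -> str:
    # Map a global flag like --no-check-certificate to its config key:
    # strip leading dashes, replace "-" with "_", prepend the prefix.
    return prefix + "." + flag.lstrip("-").replace("-", "_")

print(config_key("--no-check-certificate", "override"))  # override.no_check_certificate
print(config_key("--checksum", "global"))                # global.checksum
```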
Quoting and the shell
---------------------
@@ -1249,6 +1346,15 @@ rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition
See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for
currently supported backends.
### --http-proxy string
Use this option to set an HTTP proxy for all HTTP based services to
use.
Rclone also supports the standard HTTP proxy environment variables
which it will pick up automatically. This is the way the HTTP proxy
will normally be set, but this flag can be used to override it.
### --human-readable ###
Rclone commands output values for sizes (e.g. number of bytes) and
@@ -1545,9 +1651,16 @@ This option is only supported Windows platforms.
### --use-json-log ###
This switches the log format to JSON. The log messages are then
streamed as individual JSON objects, with fields: `level`, `msg`, `source`,
and `time`. The resulting format is what is sometimes referred to as
[newline-delimited JSON](https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON)
(NDJSON), or JSON Lines (JSONL). This is well suited for processing by
traditional line-oriented tools and shell pipelines, but a complete log
file is not strictly valid JSON and needs a parser that can handle it.
The JSON logs will be printed on a single line, but are shown expanded
here for clarity.
```json
{


@@ -8,7 +8,7 @@ versionIntroduced: "?"
The DOI remote is a read only remote for reading files from digital object identifiers (DOI).
Currently, the DOI backend supports DOIs hosted with:
- [InvenioRDM](https://inveniosoftware.org/products/rdm/)
- [Zenodo](https://zenodo.org)
- [CaltechDATA](https://data.caltech.edu)


@@ -9,6 +9,9 @@ type: page
Rclone is a single executable (`rclone`, or `rclone.exe` on Windows) that you can
simply download as a zip archive and extract into a location of your choosing.
See the [install](https://rclone.org/install/) documentation for more details.
We also offer a secure [enterprise-grade, zero CVE docker
image](https://securebuild.com/images/rclone) through our partner
[SecureBuild](https://securebuild.com/blog/introducing-securebuild).
## Release {{% version %}} OS requirements {#osrequirements}


@@ -699,6 +699,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials


@@ -305,6 +305,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials


@@ -141,6 +141,16 @@ e.g.
Note that the FTP backend does not support `ftp_proxy` yet.
You can use the command line argument `--http-proxy` to set the proxy,
and in turn use an override in the config file if you want it set for
a single backend, eg `override.http_proxy = http://...` in the config
file.
The FTP and SFTP backends have their own `http_proxy` settings to
support an HTTP CONNECT proxy
([--ftp-http-proxy](https://rclone.org/ftp/#ftp-http-proxy) and
[--sftp-http-proxy](https://rclone.org/sftp/#sftp-http-proxy)).
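As a rough illustration of the conventional proxy environment variables (`HTTP_PROXY`, `HTTPS_PROXY`, `NO_PROXY`, plus their lowercase variants), here is how Python's standard library reads them; the proxy URL below is made up, and rclone picks up the same variables through Go's HTTP stack:

```python
import os
import urllib.request

# Hypothetical proxy URL, for illustration only.
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:3128"

# getproxies() consults the same conventional environment variables
# that rclone and most HTTP clients honour.
print(urllib.request.getproxies()["https"])  # http://proxy.example.com:3128
```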
### Rclone gives x509: failed to load system roots and no roots provided error ### ### Rclone gives x509: failed to load system roots and no roots provided error ###
This means that `rclone` can't find the SSL root certificates. Likely


@@ -119,7 +119,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
```
@@ -601,6 +601,7 @@ Backend-only flags (these can be set in the config file also).
--ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
--ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
--ftp-host string FTP host to connect to
--ftp-http-proxy string URL for HTTP CONNECT proxy
--ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
--ftp-no-check-certificate Do not verify the TLS certificate of the server
--ftp-no-check-upload Don't check the upload is OK


@@ -433,6 +433,20 @@ Properties:
- Type: string
- Required: false
#### --ftp-http-proxy
URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Properties:
- Config: http_proxy
- Env Var: RCLONE_FTP_HTTP_PROXY
- Type: string
- Required: false
#### --ftp-no-check-upload
Don't check the upload is OK


@@ -679,6 +679,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials


@@ -324,6 +324,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials
@@ -630,3 +632,22 @@ Rclone cannot delete files anywhere except under `album`.
### Deleting albums
The Google Photos API does not support deleting albums - see [bug #135714733](https://issuetracker.google.com/issues/135714733).
## Making your own client_id
When you use rclone with Google photos in its default configuration you
are using rclone's client_id. This is shared between all the rclone
users. There is a global rate limit on the number of queries per
second that each client_id can do, set by Google.
If there is a problem with this client_id (eg quota too low or the
client_id stops working) then you can make your own.
Please follow the steps in [the google drive docs](https://rclone.org/drive/#making-your-own-client-id).
You will need these scopes instead of the drive ones detailed:
```
https://www.googleapis.com/auth/photoslibrary.appendonly
https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata
```


@@ -288,6 +288,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials


@@ -225,6 +225,12 @@ package is here.
The rclone developers maintain a [docker image for rclone](https://hub.docker.com/r/rclone/rclone).
**Note:** We also now offer a paid version of rclone with
enterprise-grade security and zero CVEs through our partner
[SecureBuild](https://securebuild.com/blog/introducing-securebuild).
If you are interested, check out their website and the [Rclone
SecureBuild Image](https://securebuild.com/images/rclone).
These images are built as part of the release process based on a
minimal Alpine Linux.


@@ -383,6 +383,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials


@@ -300,6 +300,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
Note that this option is NOT supported by all backends.
Properties:
- Config: client_credentials


@@ -132,7 +132,8 @@ For example, you might see throttling.
To create your own Client ID, please follow these steps:
1. Open https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/~/Overview and then under the `Add` menu click `App registration`.
* If you have not created an Azure account, you will be prompted to. This is free, but you need to provide a phone number, address, and credit card for identity verification.
2. Enter a name for your app, choose account type `Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)`, select `Web` in `Redirect URI`, then type (do not copy and paste) `http://localhost:53682/` and click Register. Copy and keep the `Application (client) ID` under the app name for later use.
3. Under `manage` select `Certificates & secrets`, click `New client secret`. Enter a description (can be anything) and set `Expires` to 24 months. Copy and keep that secret _Value_ for later use (you _won't_ be able to see this value afterwards).
4. Under `manage` select `API permissions`, click `Add a permission` and select `Microsoft Graph` then select `delegated permissions`.
@@ -385,6 +386,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Note that this option is NOT supported by all backends.
 Properties:
 - Config: client_credentials


@@ -230,6 +230,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Note that this option is NOT supported by all backends.
 Properties:
 - Config: client_credentials


@@ -195,6 +195,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Note that this option is NOT supported by all backends.
 Properties:
 - Config: client_credentials


@@ -192,6 +192,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Note that this option is NOT supported by all backends.
 Properties:
 - Config: client_credentials


@@ -252,6 +252,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Note that this option is NOT supported by all backends.
 Properties:
 - Config: client_credentials


@@ -57,7 +57,7 @@ off donation.
 Thank you very much to our sponsors:
 {{< sponsor src="/img/logos/idrive_e2.svg" width="300" height="200" title="Visit our sponsor IDrive e2" link="https://www.idrive.com/e2/?refer=rclone">}}
-{{< sponsor src="/img/logos/warp.svg" width="285" height="200" title="Visit our sponsor warp.dev" link="https://www.warp.dev/?utm_source=rclone&utm_medium=referral&utm_campaign=rclone_20231103">}}
+{{< sponsor src="/img/logos/filescom-enterprise-grade-workflows.png" width="300" height="200" title="Start Your Free Trial Today" link="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone">}}
 {{< sponsor src="/img/logos/sia.svg" width="200" height="200" title="Visit our sponsor sia" link="https://sia.tech">}}
 {{< sponsor src="/img/logos/route4me.svg" width="400" height="200" title="Visit our sponsor Route4Me" link="https://route4me.com/">}}
 {{< sponsor src="/img/logos/rcloneview.svg" width="300" height="200" title="Visit our sponsor RcloneView" link="https://rcloneview.com/">}}


@@ -192,6 +192,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Note that this option is NOT supported by all backends.
 Properties:
 - Config: client_credentials


@@ -230,6 +230,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC 6749.
+Note that this option is NOT supported by all backends.
 Properties:
 - Config: client_credentials


@@ -19,6 +19,15 @@
     </div>
   </div>
+  <div class="card">
+    <div class="card-header" style="padding: 5px 15px;">
+      Gold Sponsor
+    </div>
+    <div class="card-body">
+      <a href="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone" target="_blank" rel="noopener" title="Start Your Free Trial Today"><img style="max-width: 100%; height: auto;" src="/img/logos/filescom-enterprise-grade-workflows.png"></a><br />
+    </div>
+  </div>
 {{if .IsHome}}
   <div class="card">
     <div class="card-header" style="padding: 5px 15px;">


@@ -66,8 +66,8 @@
 <a class="dropdown-item" href="/koofr/#digi-storage"><i class="fa fa-cloud fa-fw"></i> Digi Storage</a>
 <a class="dropdown-item" href="/dropbox/"><i class="fab fa-dropbox fa-fw"></i> Dropbox</a>
 <a class="dropdown-item" href="/filefabric/"><i class="fa fa-cloud fa-fw"></i> Enterprise File Fabric</a>
-<a class="dropdown-item" href="/filelu/"><i class="fa fa-brands fa-files-pinwheel fa-fw"></i> Files.com</a>
-<a class="dropdown-item" href="/filescom/"><i class="fa fa-cloud fa-fw"></i> FileLu Cloud Storage</a>
+<a class="dropdown-item" href="/filelu/"><i class="fa fa-folder"></i> FileLu Cloud Storage</a>
+<a class="dropdown-item" href="/filescom/"><i class="fa fa-brands fa-files-pinwheel fa-fw"></i> Files.com</a>
 <a class="dropdown-item" href="/ftp/"><i class="fa fa-file fa-fw"></i> FTP</a>
 <a class="dropdown-item" href="/gofile/"><i class="fa fa-folder fa-fw"></i> Gofile</a>
 <a class="dropdown-item" href="/googlecloudstorage/"><i class="fab fa-google fa-fw"></i> Google Cloud Storage</a>


@@ -1 +1 @@
-v1.70.0
+v1.70.3


@@ -555,6 +555,11 @@ var ConfigOptionsInfo = Options{{
 	Default: []string{},
 	Help:    "Transform paths during the copy process.",
 	Groups:  "Copy",
+}, {
+	Name:    "http_proxy",
+	Default: "",
+	Help:    "HTTP proxy URL.",
+	Groups:  "Networking",
 }}

 // ConfigInfo is filesystem config options
@@ -667,6 +672,7 @@ type ConfigInfo struct {
 	MetadataMapper SpaceSepList `config:"metadata_mapper"`
 	MaxConnections int          `config:"max_connections"`
 	NameTransform  []string     `config:"name_transform"`
+	HTTPProxy      string       `config:"http_proxy"`
 }

 func init() {


@@ -6,10 +6,12 @@ import (
 	"context"
 	"crypto/tls"
 	"crypto/x509"
+	"fmt"
 	"net"
 	"net/http"
 	"net/http/cookiejar"
 	"net/http/httputil"
+	"net/url"
 	"os"
 	"sync"
 	"time"
@@ -55,7 +57,18 @@ func NewTransportCustom(ctx context.Context, customize func(*http.Transport)) ht
 	// This also means we get new stuff when it gets added to go
 	t := new(http.Transport)
 	structs.SetDefaults(t, http.DefaultTransport.(*http.Transport))
-	t.Proxy = http.ProxyFromEnvironment
+	if ci.HTTPProxy != "" {
+		proxyURL, err := url.Parse(ci.HTTPProxy)
+		if err != nil {
+			t.Proxy = func(*http.Request) (*url.URL, error) {
+				return nil, fmt.Errorf("failed to set --http-proxy from %q: %w", ci.HTTPProxy, err)
+			}
+		} else {
+			t.Proxy = http.ProxyURL(proxyURL)
+		}
+	} else {
+		t.Proxy = http.ProxyFromEnvironment
+	}
 	t.MaxIdleConnsPerHost = 2 * (ci.Checkers + ci.Transfers + 1)
 	t.MaxIdleConns = 2 * t.MaxIdleConnsPerHost
 	t.TLSHandshakeTimeout = ci.ConnectTimeout


@@ -20,7 +20,7 @@ const (
 var (
 	errInvalidCharacters = errors.New("config name contains invalid characters - may only contain numbers, letters, `_`, `-`, `.`, `+`, `@` and space, while not start with `-` or space, and not end with space")
 	errCantBeEmpty       = errors.New("can't use empty string as a path")
-	errBadConfigParam    = errors.New("config parameters may only contain `0-9`, `A-Z`, `a-z` and `_`")
+	errBadConfigParam    = errors.New("config parameters may only contain `0-9`, `A-Z`, `a-z`, `_` and `.`")
 	errEmptyConfigParam  = errors.New("config parameters can't be empty")
 	errConfigNameEmpty   = errors.New("config name can't be empty")
 	errConfigName        = errors.New("config name needs a trailing `:`")
@@ -79,7 +79,8 @@ func isConfigParam(c rune) bool {
 	return ((c >= 'a' && c <= 'z') ||
 		(c >= 'A' && c <= 'Z') ||
 		(c >= '0' && c <= '9') ||
-		c == '_')
+		c == '_' ||
+		c == '.')
 }

 // Parsed is returned from Parse with the results of the connection string decomposition

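The one-character change above is what makes dotted keys such as `override.http_proxy` legal in connection strings. A standalone sketch of the rune validator (reimplemented here, with a hypothetical `valid` helper that is not part of rclone) shows the before/after behaviour:

```go
package main

import "fmt"

// isConfigParam mirrors the validator above: config parameter names may
// contain 0-9, A-Z, a-z, `_` and, after this change, `.` so that keys like
// `override.http_proxy` parse in connection strings.
func isConfigParam(c rune) bool {
	return (c >= 'a' && c <= 'z') ||
		(c >= 'A' && c <= 'Z') ||
		(c >= '0' && c <= '9') ||
		c == '_' ||
		c == '.'
}

// valid reports whether every rune of a non-empty name is acceptable.
func valid(s string) bool {
	if s == "" {
		return false
	}
	for _, c := range s {
		if !isConfigParam(c) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(valid("override.http_proxy")) // true - accepted with the `.` rule
	fmt.Println(valid("bad-param"))           // false - `-` is still rejected
}
```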

@@ -5,10 +5,7 @@
 package log

 import (
-	"fmt"
-	"log"
 	"log/slog"
-	"strconv"

 	"github.com/coreos/go-systemd/v22/journal"
 	"github.com/rclone/rclone/fs"
@@ -18,10 +15,8 @@ import (
 func startSystemdLog(handler *OutputHandler) bool {
 	handler.clearFormatFlags(logFormatDate | logFormatTime | logFormatMicroseconds | logFormatUTC | logFormatLongFile | logFormatShortFile | logFormatPid)
 	handler.setFormatFlags(logFormatNoLevel)
-	// TODO: Use the native journal.Print approach rather than a custom implementation
 	handler.SetOutput(func(level slog.Level, text string) {
-		text = fmt.Sprintf("<%s>%-6s: %s", systemdLogPrefix(level), level, text)
-		_ = log.Output(4, text)
+		_ = journal.Print(slogLevelToSystemdPriority(level), "%-6s: %s\n", level, text)
 	})
 	return true
 }
@@ -37,12 +32,12 @@ var slogLevelToSystemdPrefix = map[slog.Level]journal.Priority{
 	slog.LevelDebug: journal.PriDebug,
 }

-func systemdLogPrefix(l slog.Level) string {
+func slogLevelToSystemdPriority(l slog.Level) journal.Priority {
 	prio, ok := slogLevelToSystemdPrefix[l]
 	if !ok {
-		return ""
+		return journal.PriInfo
 	}
-	return strconv.Itoa(int(prio))
+	return prio
 }

 func isJournalStream() bool {


@@ -16,7 +16,6 @@ import (
 	"github.com/rclone/rclone/fs/list"
 	"github.com/rclone/rclone/fs/walk"
 	"github.com/rclone/rclone/lib/transform"
-	"golang.org/x/sync/errgroup"
 	"golang.org/x/text/unicode/norm"
 )
@@ -291,6 +290,7 @@ func (m *March) matchListings(srcChan, dstChan <-chan fs.DirEntry, srcOnly, dstO
 		srcPrev, dstPrev         fs.DirEntry
 		srcPrevName, dstPrevName string
 		src, dst                 fs.DirEntry
+		srcHasMore, dstHasMore   = true, true
 		srcName, dstName         string
 	)
 	srcDone := func() {
@@ -311,14 +311,14 @@
 		}
 		// Reload src and dst if needed - we set them to nil if used
 		if src == nil {
-			src = <-srcChan
+			src, srcHasMore = <-srcChan
 			srcName = m.srcKey(src)
 		}
 		if dst == nil {
-			dst = <-dstChan
+			dst, dstHasMore = <-dstChan
 			dstName = m.dstKey(dst)
 		}
-		if src == nil && dst == nil {
+		if !srcHasMore && !dstHasMore {
 			break
 		}
 		if src != nil && srcPrev != nil {
@@ -419,38 +419,65 @@ func (m *March) processJob(job listDirJob) ([]listDirJob, error) {
 	// If NoTraverse is set, then try to find a matching object
 	// for each item in the srcList to head dst object
 	if m.NoTraverse && !m.NoCheckDest {
+		startedDst = true
+		workers := ci.Checkers
 		originalSrcChan := srcChan
 		srcChan = make(chan fs.DirEntry, 100)
-		ls, err := list.NewSorter(m.Ctx, m.Fdst, list.SortToChan(dstChan), m.dstKey)
-		if err != nil {
-			return nil, err
-		}
-		startedDst = true
+		type matchTask struct {
+			src      fs.DirEntry        // src object to find in destination
+			dstMatch chan<- fs.DirEntry // channel to receive matching dst object or nil
+		}
+		matchTasks := make(chan matchTask, workers)
+		dstMatches := make(chan (<-chan fs.DirEntry), workers)
+		// Create the tasks from the originalSrcChan. These are put into matchTasks for
+		// processing and dstMatches so they can be retrieved in order.
+		go func() {
+			for src := range originalSrcChan {
+				srcChan <- src
+				dstMatch := make(chan fs.DirEntry, 1)
+				matchTasks <- matchTask{
+					src:      src,
+					dstMatch: dstMatch,
+				}
+				dstMatches <- dstMatch
+			}
+			close(matchTasks)
+		}()
+		// Get the tasks from the queue and find a matching object.
+		var workerWg sync.WaitGroup
+		for range workers {
+			workerWg.Add(1)
+			go func() {
+				defer workerWg.Done()
+				for t := range matchTasks {
+					leaf := path.Base(t.src.Remote())
+					dst, err := m.Fdst.NewObject(m.Ctx, path.Join(job.dstRemote, leaf))
+					if err != nil {
+						dst = nil
+					}
+					t.dstMatch <- dst
+				}
+			}()
+		}
+		// Close dstResults when all the workers have finished
+		go func() {
+			workerWg.Wait()
+			close(dstMatches)
+		}()
+		// Read the matches in order and send them to dstChan if found.
 		wg.Add(1)
 		go func() {
 			defer wg.Done()
-			defer ls.CleanUp()
-			g, gCtx := errgroup.WithContext(m.Ctx)
-			g.SetLimit(ci.Checkers)
-			for src := range originalSrcChan {
-				srcChan <- src
-				if srcObj, ok := src.(fs.Object); ok {
-					g.Go(func() error {
-						leaf := path.Base(srcObj.Remote())
-						dstObj, err := m.Fdst.NewObject(gCtx, path.Join(job.dstRemote, leaf))
-						if err == nil {
-							_ = ls.Add(fs.DirEntries{dstObj}) // ignore errors
-						}
-						return nil // ignore errors
-					})
-				}
-			}
-			dstListErr = g.Wait()
-			sendErr := ls.Send()
-			if dstListErr == nil {
-				dstListErr = sendErr
+			for dstMatch := range dstMatches {
+				dst := <-dstMatch
+				// Note that dst may be nil here
+				// We send these on so we don't deadlock the reader
+				dstChan <- dst
 			}
 			close(srcChan)
 			close(dstChan)

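The `srcHasMore`/`dstHasMore` change above exists because receiving from a closed channel returns the zero value, which is indistinguishable from a deliberately sent nil entry (the new code sends nils to keep the reader in lockstep). A minimal sketch of the distinction the comma-ok receive restores:

```go
package main

import "fmt"

// A closed channel yields its zero value forever, so testing the received
// value for nil (the old termination check) cannot tell "closed" apart from
// a genuine nil item. The comma-ok form detects closure reliably.
func main() {
	ch := make(chan any, 1)
	ch <- nil // a real nil entry, as sent for unmatched objects
	close(ch)

	v, ok := <-ch
	fmt.Println(v, ok) // the genuine nil item: ok is true
	v, ok = <-ch
	fmt.Println(v, ok) // the channel is closed: ok is false
}
```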

@@ -7,12 +7,15 @@ import (
 	"crypto/md5"
 	"encoding/base64"
 	"fmt"
+	"maps"
 	"os"
 	"path/filepath"
+	"slices"
 	"strings"
 	"sync"

 	"github.com/rclone/rclone/fs/config/configmap"
+	"github.com/rclone/rclone/fs/config/configstruct"
 	"github.com/rclone/rclone/fs/fspath"
 )
@@ -65,6 +68,10 @@ func NewFs(ctx context.Context, path string) (Fs, error) {
 		overriddenConfig[suffix] = extraConfig
 		overriddenConfigMu.Unlock()
 	}
+	ctx, err = addConfigToContext(ctx, configName, config)
+	if err != nil {
+		return nil, err
+	}
 	f, err := fsInfo.NewFs(ctx, configName, fsPath, config)
 	if f != nil && (err == nil || err == ErrorIsFile) {
 		addReverse(f, fsInfo)
@@ -72,6 +79,54 @@
 	return f, err
 }

+// Add "global" config or "override" to ctx and the global config if required.
+//
+// This looks through keys prefixed with "global." or "override." in
+// config and sets ctx and optionally the global context if "global.".
+func addConfigToContext(ctx context.Context, configName string, config configmap.Getter) (newCtx context.Context, err error) {
+	overrideConfig := make(configmap.Simple)
+	globalConfig := make(configmap.Simple)
+	for i := range ConfigOptionsInfo {
+		opt := &ConfigOptionsInfo[i]
+		globalName := "global." + opt.Name
+		value, isSet := config.Get(globalName)
+		if isSet {
+			// Set both override and global if global
+			overrideConfig[opt.Name] = value
+			globalConfig[opt.Name] = value
+		}
+		overrideName := "override." + opt.Name
+		value, isSet = config.Get(overrideName)
+		if isSet {
+			overrideConfig[opt.Name] = value
+		}
+	}
+	if len(overrideConfig) == 0 && len(globalConfig) == 0 {
+		return ctx, nil
+	}
+	newCtx, ci := AddConfig(ctx)
+	overrideKeys := slices.Collect(maps.Keys(overrideConfig))
+	slices.Sort(overrideKeys)
+	globalKeys := slices.Collect(maps.Keys(globalConfig))
+	slices.Sort(globalKeys)
+	// Set the config in the newCtx
+	err = configstruct.Set(overrideConfig, ci)
+	if err != nil {
+		return ctx, fmt.Errorf("failed to set override config variables %q: %w", overrideKeys, err)
+	}
+	Debugf(configName, "Set overridden config %q for backend startup", overrideKeys)
+	// Set the global context only
+	if len(globalConfig) != 0 {
+		globalCI := GetConfig(context.Background())
+		err = configstruct.Set(globalConfig, globalCI)
+		if err != nil {
+			return ctx, fmt.Errorf("failed to set global config variables %q: %w", globalKeys, err)
+		}
+		Debugf(configName, "Set global config %q at backend startup", overrideKeys)
+	}
+	return newCtx, nil
+}
+
 // ConfigFs makes the config for calling NewFs with.
 //
 // It parses the path which is of the form remote:path

fs/newfs_internal_test.go (new file)

@@ -0,0 +1,55 @@
package fs

import (
	"context"
	"testing"

	"github.com/rclone/rclone/fs/config/configmap"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// When no override/global keys exist, ctx must be returned unchanged.
func TestAddConfigToContext_NoChanges(t *testing.T) {
	ctx := context.Background()
	newCtx, err := addConfigToContext(ctx, "unit-test", configmap.Simple{})
	require.NoError(t, err)
	assert.Equal(t, newCtx, ctx)
}

// A single override.key must create a new ctx, but leave the
// background ctx untouched.
func TestAddConfigToContext_OverrideOnly(t *testing.T) {
	override := configmap.Simple{
		"override.user_agent": "potato",
	}
	ctx := context.Background()
	globalCI := GetConfig(ctx)
	original := globalCI.UserAgent
	newCtx, err := addConfigToContext(ctx, "unit-test", override)
	require.NoError(t, err)
	assert.NotEqual(t, newCtx, ctx)
	assert.Equal(t, original, globalCI.UserAgent)
	ci := GetConfig(newCtx)
	assert.Equal(t, "potato", ci.UserAgent)
}

// A single global.key must create a new ctx and update the
// background/global config.
func TestAddConfigToContext_GlobalOnly(t *testing.T) {
	global := configmap.Simple{
		"global.user_agent": "potato2",
	}
	ctx := context.Background()
	globalCI := GetConfig(ctx)
	original := globalCI.UserAgent
	defer func() {
		globalCI.UserAgent = original
	}()
	newCtx, err := addConfigToContext(ctx, "unit-test", global)
	require.NoError(t, err)
	assert.NotEqual(t, newCtx, ctx)
	assert.Equal(t, "potato2", globalCI.UserAgent)
	ci := GetConfig(newCtx)
	assert.Equal(t, "potato2", ci.UserAgent)
}


@@ -42,4 +42,21 @@ func TestNewFs(t *testing.T) {
 	assert.Equal(t, ":mockfs{S_NHG}:/tmp", fs.ConfigString(f3))
 	assert.Equal(t, ":mockfs,potato='true':/tmp", fs.ConfigStringFull(f3))
+
+	// Check that the overrides work
+	globalCI := fs.GetConfig(ctx)
+	original := globalCI.UserAgent
+	defer func() {
+		globalCI.UserAgent = original
+	}()
+	f4, err := fs.NewFs(ctx, ":mockfs,global.user_agent='julian':/tmp")
+	require.NoError(t, err)
+	assert.Equal(t, ":mockfs", f4.Name())
+	assert.Equal(t, "/tmp", f4.Root())
+	assert.Equal(t, ":mockfs:/tmp", fs.ConfigString(f4))
+	assert.Equal(t, ":mockfs:/tmp", fs.ConfigStringFull(f4))
+	assert.Equal(t, "julian", globalCI.UserAgent)
 }


@@ -249,7 +249,7 @@ func (c *checkMarch) reportResults(ctx context.Context, err error) error {
 		fs.Logf(c.opt.Fsrc, "%d %s missing", c.srcFilesMissing.Load(), entity)
 	}
-	fs.Logf(c.opt.Fdst, "%d differences found", accounting.Stats(ctx).GetErrors())
+	fs.Logf(c.opt.Fdst, "%d differences found", c.differences.Load())
 	if errs := accounting.Stats(ctx).GetErrors(); errs > 0 {
 		fs.Logf(c.opt.Fdst, "%d errors while checking", errs)
 	}


@@ -428,6 +428,10 @@ func move(ctx context.Context, fdst fs.Fs, dst fs.Object, remote string, src fs.
 	origRemote := remote // avoid double-transform on fallback to copy
 	remote = transform.Path(ctx, remote, false)
 	ci := fs.GetConfig(ctx)
+	newDst = dst
+	if ci.DryRun && dst != nil && SameObject(src, dst) && src.Remote() == transform.Path(ctx, dst.Remote(), false) {
+		return // avoid SkipDestructive log for objects that won't really be moved
+	}
 	var tr *accounting.Transfer
 	if isTransfer {
 		tr = accounting.Stats(ctx).NewTransfer(src, fdst)
@@ -440,8 +444,11 @@ func move(ctx context.Context, fdst fs.Fs, dst fs.Object, remote string, src fs.
 		}
 		tr.Done(ctx, err)
 	}()
-	newDst = dst
-	if SkipDestructive(ctx, src, "move") {
+	action := "move"
+	if remote != src.Remote() {
+		action += " to " + remote
+	}
+	if SkipDestructive(ctx, src, action) {
 		in := tr.Account(ctx, nil)
 		in.DryRun(src.Size())
 		return newDst, nil
@@ -1939,6 +1946,9 @@ func MoveBackupDir(ctx context.Context, backupDir fs.Fs, dst fs.Object) (err err
 func needsMoveCaseInsensitive(fdst fs.Fs, fsrc fs.Fs, dstFileName string, srcFileName string, cp bool) bool {
 	dstFilePath := path.Join(fdst.Root(), dstFileName)
 	srcFilePath := path.Join(fsrc.Root(), srcFileName)
+	if !cp && fdst.Name() == fsrc.Name() && dstFileName != srcFileName && norm.NFC.String(dstFilePath) == norm.NFC.String(srcFilePath) {
+		return true
+	}
 	return !cp && fdst.Name() == fsrc.Name() && fdst.Features().CaseInsensitive && dstFileName != srcFileName && strings.EqualFold(dstFilePath, srcFilePath)
 }


@@ -1109,6 +1109,9 @@ func (s *syncCopyMove) markDirModifiedObject(o fs.Object) {
 // be nil.
 func (s *syncCopyMove) copyDirMetadata(ctx context.Context, f fs.Fs, dst fs.Directory, dir string, src fs.Directory) (newDst fs.Directory) {
 	var err error
+	if dst != nil && src.Remote() == dst.Remote() && operations.OverlappingFilterCheck(ctx, s.fdst, s.fsrc) {
+		return nil // src and dst can be the same in convmv
+	}
 	equal := operations.DirsEqual(ctx, src, dst, operations.DirsEqualOpt{ModifyWindow: s.modifyWindow, SetDirModtime: s.setDirModTime, SetDirMetadata: s.setDirMetadata})
 	if !s.setDirModTimeAfter && equal {
 		return nil


@@ -216,6 +216,35 @@ func TestCopyNoTraverse(t *testing.T) {
 	r.CheckRemoteItems(t, file1)
 }

+func TestCopyNoTraverseDeadlock(t *testing.T) {
+	r := fstest.NewRun(t)
+	if !r.Fremote.Features().IsLocal {
+		t.Skip("Only runs on local")
+	}
+	const nFiles = 200
+	t1 := fstest.Time("2001-02-03T04:05:06.499999999Z")
+
+	// Create lots of source files.
+	items := make([]fstest.Item, nFiles)
+	for i := range items {
+		name := fmt.Sprintf("file%d.txt", i)
+		items[i] = r.WriteFile(name, fmt.Sprintf("content%d", i), t1)
+	}
+	r.CheckLocalItems(t, items...)
+
+	// Set --no-traverse
+	ctx, ci := fs.AddConfig(context.Background())
+	ci.NoTraverse = true
+
+	// Initial copy to establish destination.
+	require.NoError(t, CopyDir(ctx, r.Fremote, r.Flocal, false))
+	r.CheckRemoteItems(t, items...)
+
+	// Second copy which shouldn't deadlock
+	require.NoError(t, CopyDir(ctx, r.Flocal, r.Fremote, false))
+	r.CheckRemoteItems(t, items...)
+}
+
 // Now with --check-first
 func TestCopyCheckFirst(t *testing.T) {
 	ctx := context.Background()


@@ -1,4 +1,4 @@
 package fs

 // VersionTag of rclone
-var VersionTag = "v1.70.0"
+var VersionTag = "v1.70.3"

go.mod

@@ -20,8 +20,8 @@ require (
 	github.com/aws/aws-sdk-go-v2 v1.36.3
 	github.com/aws/aws-sdk-go-v2/config v1.29.14
 	github.com/aws/aws-sdk-go-v2/credentials v1.17.67
-	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.77
-	github.com/aws/aws-sdk-go-v2/service/s3 v1.80.0
+	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.49
+	github.com/aws/aws-sdk-go-v2/service/s3 v1.72.3
 	github.com/aws/smithy-go v1.22.3
 	github.com/buengese/sgzip v0.1.1
 	github.com/cloudinary/cloudinary-go/v2 v2.10.0
@@ -33,7 +33,7 @@ require (
 	github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5
 	github.com/gabriel-vasile/mimetype v1.4.9
 	github.com/gdamore/tcell/v2 v2.8.1
-	github.com/go-chi/chi/v5 v5.2.1
+	github.com/go-chi/chi/v5 v5.2.2
 	github.com/go-darwin/apfs v0.0.0-20211011131704-f84b94dbf348
 	github.com/go-git/go-billy/v5 v5.6.2
 	github.com/google/uuid v1.6.0
@@ -91,7 +91,6 @@
 	gopkg.in/validator.v2 v2.0.1
 	gopkg.in/yaml.v3 v3.0.1
 	storj.io/uplink v1.13.1
 )

 require (

go.sum

@@ -120,8 +120,8 @@ github.com/aws/aws-sdk-go-v2/credentials v1.17.67 h1:9KxtdcIA/5xPNQyZRgUSpYOE6j9
 github.com/aws/aws-sdk-go-v2/credentials v1.17.67/go.mod h1:p3C44m+cfnbv763s52gCqrjaqyPikj9Sg47kUVaNZQQ=
 github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.30 h1:x793wxmUWVDhshP8WW2mlnXuFrO4cOd3HLBroh1paFw=
 github.com/aws/aws-sdk-go-v2/feature/ec2/imds v1.16.30/go.mod h1:Jpne2tDnYiFascUEs2AWHJL9Yp7A5ZVy3TNyxaAjD6M=
-github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.77 h1:xaRN9fags7iJznsMEjtcEuON1hGfCZ0y5MVfEMKtrx8=
-github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.77/go.mod h1:lolsiGkT47AZ3DWqtxgEQM/wVMpayi7YWNjl3wHSRx8=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.49 h1:7gss+6H2mrrFtBrkokJRR2TzQD9qkpGA4N6BvIP/pCM=
+github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.17.49/go.mod h1:30PBx0ENoUCJm2AxzgCue8j7KEjb9ci4enxy6CCOjbE=
 github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.34 h1:ZK5jHhnrioRkUNOc+hOgQKlUL5JeC3S6JgLxtQ+Rm0Q=
 github.com/aws/aws-sdk-go-v2/internal/configsources v1.3.34/go.mod h1:p4VfIceZokChbA9FzMbRGz5OV+lekcVtHlPKEO0gSZY=
 github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.6.34 h1:SZwFm17ZUNNg5Np0ioo/gq8Mn6u9w19Mri8DnJ15Jf0=
@@ -138,8 +138,8 @@ github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15 h1:dM9/92u2
 github.com/aws/aws-sdk-go-v2/service/internal/presigned-url v1.12.15/go.mod h1:SwFBy2vjtA0vZbjjaFtfN045boopadnoVPhu4Fv66vY=
 github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15 h1:moLQUoVq91LiqT1nbvzDukyqAlCv89ZmwaHw/ZFlFZg=
 github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.18.15/go.mod h1:ZH34PJUc8ApjBIfgQCFvkWcUDBtl/WTD+uiYHjd8igA=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.80.0 h1:fV4XIU5sn/x8gjRouoJpDVHj+ExJaUk4prYF+eb6qTs=
-github.com/aws/aws-sdk-go-v2/service/s3 v1.80.0/go.mod h1:qbn305Je/IofWBJ4bJz/Q7pDEtnnoInw/dGt71v6rHE=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.72.3 h1:WZOmJfCDV+4tYacLxpiojoAdT5sxTfB3nTqQNtZu+J4=
+github.com/aws/aws-sdk-go-v2/service/s3 v1.72.3/go.mod h1:xMekrnhmJ5aqmyxtmALs7mlvXw5xRh+eYjOjvrIIFJ4=
 github.com/aws/aws-sdk-go-v2/service/sso v1.25.3 h1:1Gw+9ajCV1jogloEv1RRnvfRFia2cL6c9cuKV2Ps+G8=
 github.com/aws/aws-sdk-go-v2/service/sso v1.25.3/go.mod h1:qs4a9T5EMLl/Cajiw2TcbNt2UNo/Hqlyp+GiuG4CFDI=
 github.com/aws/aws-sdk-go-v2/service/ssooidc v1.30.1 h1:hXmVKytPfTy5axZ+fYbR5d0cFmC3JvwLm5kM83luako=
@@ -246,8 +246,8 @@ github.com/gin-contrib/sse v1.0.0 h1:y3bT1mUWUxDpW4JLQg/HnTqV4rozuW4tC9eFKTxYI9E
 github.com/gin-contrib/sse v1.0.0/go.mod h1:zNuFdwarAygJBht0NTKiSi3jRf6RbqeILZ9Sp6Slhe0=
 github.com/gin-gonic/gin v1.10.0 h1:nTuyha1TYqgedzytsKYqna+DfLos46nTv2ygFy86HFU=
 github.com/gin-gonic/gin v1.10.0/go.mod h1:4PMNQiOhvDRa013RKVbsiNwoyezlm2rm0uX/T7kzp5Y=
-github.com/go-chi/chi/v5 v5.2.1 h1:KOIHODQj58PmL80G2Eak4WdvUzjSJSm0vG72crDCqb8=
-github.com/go-chi/chi/v5 v5.2.1/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
+github.com/go-chi/chi/v5 v5.2.2 h1:CMwsvRVTbXVytCk1Wd72Zy1LAsAh9GxMmSNWLHCG618=
+github.com/go-chi/chi/v5 v5.2.2/go.mod h1:L2yAIGWB3H+phAw1NxKwWM+7eUH/lU8pOMm5hHcoops=
 github.com/go-darwin/apfs v0.0.0-20211011131704-f84b94dbf348 h1:JnrjqG5iR07/8k7NqrLNilRsl3s1EPRQEGvbPyOce68=
 github.com/go-darwin/apfs v0.0.0-20211011131704-f84b94dbf348/go.mod h1:Czxo/d1g948LtrALAZdL04TL/HnkopquAjxYUuI02bo=
 github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM=


@@ -155,7 +155,7 @@ var SharedOptions = []fs.Option{{
 }, {
 	Name:    config.ConfigClientCredentials,
 	Default: false,
-	Help:    "Use client credentials OAuth flow.\n\nThis will use the OAUTH2 client Credentials Flow as described in RFC 6749.",
+	Help:    "Use client credentials OAuth flow.\n\nThis will use the OAUTH2 client Credentials Flow as described in RFC 6749.\n\nNote that this option is NOT supported by all backends.",
 	Advanced: true,
 }}


@@ -2,6 +2,8 @@
 package pacer

 import (
+	"errors"
+	"fmt"
 	"sync"
 	"time"
@@ -235,15 +237,22 @@ type retryAfterError struct {
 }

 func (r *retryAfterError) Error() string {
-	return r.error.Error()
+	return fmt.Sprintf("%v: trying again in %v", r.error, r.retryAfter)
 }

 func (r *retryAfterError) Cause() error {
 	return r.error
 }

+func (r *retryAfterError) Unwrap() error {
+	return r.error
+}
+
 // RetryAfterError returns a wrapped error that can be used by Calculator implementations
 func RetryAfterError(err error, retryAfter time.Duration) error {
+	if err == nil {
+		err = errors.New("too many requests")
+	}
 	return &retryAfterError{
 		error:      err,
 		retryAfter: retryAfter,


@@ -2,6 +2,8 @@ package pacer

 import (
 	"errors"
+	"fmt"
+	"strings"
 	"sync"
 	"testing"
 	"time"
@@ -350,3 +352,82 @@ func TestCallParallel(t *testing.T) {
 	assert.Equal(t, 5, called)
 	wait.Broadcast()
 }
+func TestRetryAfterError_NonNilErr(t *testing.T) {
+	orig := errors.New("test failure")
+	dur := 2 * time.Second
+	err := RetryAfterError(orig, dur)
+	rErr, ok := err.(*retryAfterError)
+	if !ok {
+		t.Fatalf("expected *retryAfterError, got %T", err)
+	}
+	if !strings.Contains(err.Error(), "test failure") {
+		t.Errorf("Error() = %q, want it to contain original message", err.Error())
+	}
+	if !strings.Contains(err.Error(), dur.String()) {
+		t.Errorf("Error() = %q, want it to contain retryAfter %v", err.Error(), dur)
+	}
+	if rErr.retryAfter != dur {
+		t.Errorf("retryAfter = %v, want %v", rErr.retryAfter, dur)
+	}
+	if !errors.Is(err, orig) {
+		t.Error("errors.Is(err, orig) = false, want true")
+	}
+}
+
+func TestRetryAfterError_NilErr(t *testing.T) {
+	dur := 5 * time.Second
+	err := RetryAfterError(nil, dur)
+	if !strings.Contains(err.Error(), "too many requests") {
+		t.Errorf("Error() = %q, want it to mention default message", err.Error())
+	}
+	if !strings.Contains(err.Error(), dur.String()) {
+		t.Errorf("Error() = %q, want it to contain retryAfter %v", err.Error(), dur)
+	}
+}
+
+func TestCauseMethod(t *testing.T) {
+	orig := errors.New("underlying")
+	dur := time.Second
+	rErr := RetryAfterError(orig, dur).(*retryAfterError)
+	cause := rErr.Cause()
+	if !errors.Is(cause, orig) {
+		t.Errorf("Cause() does not wrap original: got %v", cause)
+	}
+}
+
+func TestIsRetryAfter_True(t *testing.T) {
+	orig := errors.New("oops")
+	dur := 3 * time.Second
+	err := RetryAfterError(orig, dur)
+	gotDur, ok := IsRetryAfter(err)
+	if !ok {
+		t.Error("IsRetryAfter returned false, want true")
+	}
+	if gotDur != dur {
+		t.Errorf("got %v, want %v", gotDur, dur)
+	}
+}
+
+func TestIsRetryAfter_Nested(t *testing.T) {
+	orig := errors.New("fail")
+	dur := 4 * time.Second
+	retryErr := RetryAfterError(orig, dur)
+	nested := fmt.Errorf("wrapped: %w", retryErr)
+	gotDur, ok := IsRetryAfter(nested)
+	if !ok {
+		t.Error("IsRetryAfter on nested error returned false, want true")
+	}
+	if gotDur != dur {
+		t.Errorf("got %v, want %v", gotDur, dur)
+	}
+}
+
+func TestIsRetryAfter_False(t *testing.T) {
+	if _, ok := IsRetryAfter(errors.New("other")); ok {
+		t.Error("IsRetryAfter = true for non-retry error, want false")
+	}
+}


@@ -14,10 +14,13 @@ var (
 	lock sync.Mutex
 )

+// CharmapChoices is an enum of the character map choices.
+type CharmapChoices = fs.Enum[cmapChoices]
+
 type cmapChoices struct{}

 func (cmapChoices) Choices() []string {
-	choices := make([]string, 1)
+	choices := []string{}
 	i := 0
 	for _, enc := range charmap.All {
 		c, ok := enc.(*charmap.Charmap)
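The one-line fix above matters because `make([]string, 1)` allocates a slice that already contains one empty string, so building the list by appending leaves a spurious blank entry at index 0 (the assumption here being that the choices are built with append, as the surrounding loop suggests):

```go
package main

import "fmt"

func main() {
	// make([]string, 1) yields a slice that already holds one "".
	bad := make([]string, 1)
	bad = append(bad, "ISO-8859-1")
	fmt.Printf("%q\n", bad) // ["" "ISO-8859-1"]

	// []string{} starts empty, so append builds a clean list.
	good := []string{}
	good = append(good, "ISO-8859-1")
	fmt.Printf("%q\n", good) // ["ISO-8859-1"]
}
```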


@@ -1,12 +1,20 @@
-package transform
+// Create the help text for transform
+//
+// Run with go generate (defined in transform.go)
+//
+//go:build none
+
+package main

 import (
 	"context"
 	"fmt"
+	"os"
 	"strings"

 	"github.com/rclone/rclone/fs"
 	"github.com/rclone/rclone/lib/encoder"
+	"github.com/rclone/rclone/lib/transform"
 )

 type commands struct {
@@ -43,7 +51,7 @@ var commandList = []commands{
 	{command: "--name-transform nfd", description: "Converts the file name to NFD Unicode normalization form."},
 	{command: "--name-transform nfkc", description: "Converts the file name to NFKC Unicode normalization form."},
 	{command: "--name-transform nfkd", description: "Converts the file name to NFKD Unicode normalization form."},
-	{command: "--name-transform command=/path/to/my/programfile names.", description: "Executes an external program to transform"},
+	{command: "--name-transform command=/path/to/my/programfile names.", description: "Executes an external program to transform."},
 }

 var examples = []example{
@@ -75,11 +83,11 @@ func (e example) command() string {
 func (e example) output() string {
 	ctx := context.Background()
-	err := SetOptions(ctx, e.flags...)
+	err := transform.SetOptions(ctx, e.flags...)
 	if err != nil {
 		fs.Errorf(nil, "error generating help text: %v", err)
 	}
-	return Path(ctx, e.path, false)
+	return transform.Path(ctx, e.path, false)
 }

 // go run ./ convmv --help
@@ -98,43 +106,50 @@ func commandTable() string {
 	for _, c := range commandList {
 		s += fmt.Sprintf("\n| `%s` | %s |", c.command, c.description)
 	}
-	s += "\n\n\n"
+	s += "\n\n"
 	return s
 }

-var generatingHelpText bool
-
 // SprintList returns the example help text as a string
 func SprintList() string {
-	var algos transformAlgo
-	var charmaps fs.Enum[cmapChoices]
-	generatingHelpText = true
+	var algos transform.Algo
+	var charmaps transform.CharmapChoices
 	s := commandTable()
-	s += fmt.Sprintln("Conversion modes: \n```")
+	s += "Conversion modes:\n\n```\n"
 	for _, v := range algos.Choices() {
-		s += fmt.Sprintln(v + " ")
+		s += v + "\n"
 	}
-	s += fmt.Sprintln("```")
-	s += fmt.Sprintln("Char maps: \n```")
+	s += "```\n\n"
+	s += "Char maps:\n\n```\n"
 	for _, v := range charmaps.Choices() {
-		s += fmt.Sprintln(v + " ")
+		s += v + "\n"
 	}
-	s += fmt.Sprintln("```")
-	s += fmt.Sprintln("Encoding masks: \n```")
+	s += "```\n\n"
+	s += "Encoding masks:\n\n```\n"
 	for _, v := range strings.Split(encoder.ValidStrings(), ", ") {
-		s += fmt.Sprintln(v + " ")
+		s += v + "\n"
 	}
-	s += fmt.Sprintln("```")
+	s += "```\n\n"
 	s += sprintExamples()
-	generatingHelpText = false
 	return s
 }

-// PrintList prints the example help text to stdout
-func PrintList() {
-	fmt.Println(SprintList())
+// Output the help to stdout
+func main() {
+	out := os.Stdout
+	if len(os.Args) > 1 {
+		var err error
+		out, err = os.Create(os.Args[1])
+		if err != nil {
+			fs.Fatalf(nil, "Open output failed: %v", err)
+		}
+		defer out.Close()
+	}
+	fmt.Fprintf(out, "<!--- Docs generated by help.go - use go generate to rebuild - DO NOT EDIT --->\n\n")
+	fmt.Fprintln(out, SprintList())
 }


@@ -11,7 +11,7 @@ import (
 )

 type transform struct {
-	key   transformAlgo // for example, "prefix"
+	key   Algo   // for example, "prefix"
 	value string // for example, "some_prefix_"
 	tag   tag    // file, dir, or all
 }
@@ -171,12 +171,12 @@ func (t *transform) requiresValue() bool {
 	return false
 }

-// transformAlgo describes conversion setting
-type transformAlgo = fs.Enum[transformChoices]
+// Algo describes conversion setting
+type Algo = fs.Enum[transformChoices]

 // Supported transform options
 const (
-	ConvNone transformAlgo = iota
+	ConvNone Algo = iota
 	ConvToNFC
 	ConvToNFD
 	ConvToNFKC


@@ -1,9 +1,12 @@
 // Package transform holds functions for path name transformations
+//
+//go:generate go run gen_help.go transform.md
 package transform

 import (
 	"bytes"
 	"context"
+	_ "embed"
 	"encoding/base64"
 	"errors"
 	"fmt"
@@ -24,6 +27,16 @@ import (
 	"golang.org/x/text/unicode/norm"
 )

+//go:embed transform.md
+var help string
+
+// Help returns the help string cleaned up to simplify appending
+func Help() string {
+	// Chop off auto generated message
+	nl := strings.IndexRune(help, '\n')
+	return strings.TrimSpace(help[nl:]) + "\n\n"
+}
+
 // Path transforms a path s according to the --name-transform options in use
 //
 // If no transforms are in use, s is returned unchanged
@@ -53,7 +66,7 @@ func Path(ctx context.Context, s string, isDir bool) string {
 			fs.Errorf(s, "Failed to transform: %v", err)
 		}
 	}
-	if old != s && !generatingHelpText {
+	if old != s {
 		fs.Debugf(old, "transformed to: %v", s)
 	}
 	if strings.Count(old, "/") != strings.Count(s, "/") {
@@ -181,7 +194,7 @@ func transformPathSegment(s string, t transform) (string, error) {
 	case ConvMacintosh:
 		return encodeWithReplacement(s, charmap.Macintosh), nil
 	case ConvCharmap:
-		var cmapType fs.Enum[cmapChoices]
+		var cmapType CharmapChoices
 		err := cmapType.Set(t.value)
 		if err != nil {
 			return s, err
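The `Help()` function added above strips the first line of the embedded transform.md (the auto-generated "DO NOT EDIT" marker) and normalises surrounding whitespace. A standalone sketch of that trimming logic, with the `//go:embed`'ed file stood in by a constant:

```go
package main

import (
	"fmt"
	"strings"
)

// helpBody stands in for the embedded transform.md, whose first line
// is the auto-generated "DO NOT EDIT" marker.
const helpBody = "<!--- Docs generated by help.go - use go generate to rebuild - DO NOT EDIT --->\n\nReal help text here.\n"

// help drops everything up to the first newline, trims surrounding
// whitespace, and re-adds a trailing blank line so callers can append
// more text cleanly.
func help(s string) string {
	nl := strings.IndexRune(s, '\n')
	return strings.TrimSpace(s[nl:]) + "\n\n"
}

func main() {
	fmt.Print(help(helpBody)) // Real help text here.
}
```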

lib/transform/transform.md Normal file

@@ -0,0 +1,224 @@
<!--- Docs generated by help.go - use go generate to rebuild - DO NOT EDIT --->
| Command | Description |
|------|------|
| `--name-transform prefix=XXXX` | Prepends XXXX to the file name. |
| `--name-transform suffix=XXXX` | Appends XXXX to the file name after the extension. |
| `--name-transform suffix_keep_extension=XXXX` | Appends XXXX to the file name while preserving the original file extension. |
| `--name-transform trimprefix=XXXX` | Removes XXXX if it appears at the start of the file name. |
| `--name-transform trimsuffix=XXXX` | Removes XXXX if it appears at the end of the file name. |
| `--name-transform regex=/pattern/replacement/` | Applies a regex-based transformation. |
| `--name-transform replace=old:new` | Replaces occurrences of old with new in the file name. |
| `--name-transform date={YYYYMMDD}` | Appends or prefixes the specified date format. |
| `--name-transform truncate=N` | Truncates the file name to a maximum of N characters. |
| `--name-transform base64encode` | Encodes the file name in Base64. |
| `--name-transform base64decode` | Decodes a Base64-encoded file name. |
| `--name-transform encoder=ENCODING` | Converts the file name to the specified encoding (e.g., ISO-8859-1, Windows-1252, Macintosh). |
| `--name-transform decoder=ENCODING` | Decodes the file name from the specified encoding. |
| `--name-transform charmap=MAP` | Applies a character mapping transformation. |
| `--name-transform lowercase` | Converts the file name to lowercase. |
| `--name-transform uppercase` | Converts the file name to UPPERCASE. |
| `--name-transform titlecase` | Converts the file name to Title Case. |
| `--name-transform ascii` | Strips non-ASCII characters. |
| `--name-transform url` | URL-encodes the file name. |
| `--name-transform nfc` | Converts the file name to NFC Unicode normalization form. |
| `--name-transform nfd` | Converts the file name to NFD Unicode normalization form. |
| `--name-transform nfkc` | Converts the file name to NFKC Unicode normalization form. |
| `--name-transform nfkd` | Converts the file name to NFKD Unicode normalization form. |
| `--name-transform command=/path/to/my/programfile names.` | Executes an external program to transform |
Conversion modes:
```
none
nfc
nfd
nfkc
nfkd
replace
prefix
suffix
suffix_keep_extension
trimprefix
trimsuffix
index
date
truncate
base64encode
base64decode
encoder
decoder
ISO-8859-1
Windows-1252
Macintosh
charmap
lowercase
uppercase
titlecase
ascii
url
regex
command
```
Char maps:
```
IBM-Code-Page-037
IBM-Code-Page-437
IBM-Code-Page-850
IBM-Code-Page-852
IBM-Code-Page-855
Windows-Code-Page-858
IBM-Code-Page-860
IBM-Code-Page-862
IBM-Code-Page-863
IBM-Code-Page-865
IBM-Code-Page-866
IBM-Code-Page-1047
IBM-Code-Page-1140
ISO-8859-1
ISO-8859-2
ISO-8859-3
ISO-8859-4
ISO-8859-5
ISO-8859-6
ISO-8859-7
ISO-8859-8
ISO-8859-9
ISO-8859-10
ISO-8859-13
ISO-8859-14
ISO-8859-15
ISO-8859-16
KOI8-R
KOI8-U
Macintosh
Macintosh-Cyrillic
Windows-874
Windows-1250
Windows-1251
Windows-1252
Windows-1253
Windows-1254
Windows-1255
Windows-1256
Windows-1257
Windows-1258
X-User-Defined
```
Encoding masks:
```
Asterisk
BackQuote
BackSlash
Colon
CrLf
Ctl
Del
Dollar
Dot
DoubleQuote
Exclamation
Hash
InvalidUtf8
LeftCrLfHtVt
LeftPeriod
LeftSpace
LeftTilde
LtGt
None
Percent
Pipe
Question
Raw
RightCrLfHtVt
RightPeriod
RightSpace
Semicolon
SingleQuote
Slash
SquareBracket
```
Examples:
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,uppercase"
// Output: STORIES/THE QUICK BROWN FOX!.TXT
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,replace=Fox:Turtle" --name-transform "all,replace=Quick:Slow"
// Output: stories/The Slow Brown Turtle!.txt
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,base64encode"
// Output: c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0
```
```
rclone convmv "c3Rvcmllcw==/VGhlIFF1aWNrIEJyb3duIEZveCEudHh0" --name-transform "all,base64decode"
// Output: stories/The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfc"
// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,nfd"
// Output: stories/The Quick Brown 🦊 Fox Went to the Café!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox!.txt" --name-transform "all,ascii"
// Output: stories/The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,trimsuffix=.txt"
// Output: stories/The Quick Brown Fox!
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,prefix=OLD_"
// Output: OLD_stories/OLD_The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,charmap=ISO-8859-7"
// Output: stories/The Quick Brown _ Fox Went to the Caf_!.txt
```
```
rclone convmv "stories/The Quick Brown Fox: A Memoir [draft].txt" --name-transform "all,encoder=Colon,SquareBracket"
// Output: stories/The Quick Brown Fox A Memoir draft.txt
```
```
rclone convmv "stories/The Quick Brown 🦊 Fox Went to the Café!.txt" --name-transform "all,truncate=21"
// Output: stories/The Quick Brown 🦊 Fox
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
// Output: stories/The Quick Brown Fox!.txt
```
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20250618
```
```
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
```
```
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
```

rclone.1 generated

@@ -1,7 +1,7 @@
 .\"t
 .\" Automatically generated by Pandoc 2.9.2.1
 .\"
-.TH "rclone" "1" "Jun 17, 2025" "User Manual" ""
+.TH "rclone" "1" "Jul 09, 2025" "User Manual" ""
 .hy
 .SH NAME
 .PP
@@ -252,10 +252,10 @@ Enterprise File Fabric
 .IP \[bu] 2
 Fastmail Files
 .IP \[bu] 2
-Files.com
-.IP \[bu] 2
 FileLu Cloud Storage
 .IP \[bu] 2
+Files.com
+.IP \[bu] 2
 FlashBlade
 .IP \[bu] 2
 FTP
@@ -747,6 +747,12 @@ status (https://repology.org/badge/vertical-allrepos/rclone.svg?columns=3)] (htt
 The rclone developers maintain a docker image for
 rclone (https://hub.docker.com/r/rclone/rclone).
 .PP
+\f[B]Note:\f[R] We also now offer a paid version of rclone with
+enterprise-grade security and zero CVEs through our partner
+SecureBuild (https://securebuild.com/blog/introducing-securebuild).
+If you are interested, check out their website and the Rclone
+SecureBuild Image (https://securebuild.com/images/rclone).
+.PP
 These images are built as part of the release process based on a minimal
 Alpine Linux.
 .PP
@@ -5572,14 +5578,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a
 .nf
 \f[C]
 rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq]
-// Output: stories/The Quick Brown Fox!-20250617
+// Output: stories/The Quick Brown Fox!-20250618
 \f[R]
 .fi
 .IP
 .nf
 \f[C]
 rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq]
-// Output: stories/The Quick Brown Fox!-2025-06-17 0551PM
+// Output: stories/The Quick Brown Fox!-2025-06-18 0148PM
 \f[R]
 .fi
 .IP
@@ -5602,7 +5608,7 @@ The means only the leaf file name will be transformed.
 However some of the transforms would be better applied to the whole path
 or just directories.
 To choose which which part of the file path is affected some tags can be
-added to the \f[C]--name-transform\f[R]
+added to the \f[C]--name-transform\f[R].
 .PP
 .TS
 tab(@);
@@ -5636,7 +5642,7 @@ This is used by adding the tag into the transform name like this:
 \f[C]--name-transform dir,prefix=DEF\f[R].
 .PP
 For some conversions using all is more likely to be useful, for example
-\f[C]--name-transform all,nfc\f[R]
+\f[C]--name-transform all,nfc\f[R].
 .PP
 Note that \f[C]--name-transform\f[R] may not add path separators
 \f[C]/\f[R] to the name.
@@ -5687,27 +5693,20 @@ destination, the final state may be non-deterministic.
 * Running rclone check after a sync using such transformations may
 erroneously report missing or differing files due to overwritten
 results.
-.IP \[bu] 2
-To minimize risks, users should:
-.RS 2
-.IP \[bu] 2
-Carefully review transformations that may introduce conflicts.
-.IP \[bu] 2
-Use \f[C]--dry-run\f[R] to inspect changes before executing a sync (but
-keep in mind that it won\[aq]t show the effect of non-deterministic
+.PP
+To minimize risks, users should: * Carefully review transformations that
+may introduce conflicts.
+* Use \f[C]--dry-run\f[R] to inspect changes before executing a sync
+(but keep in mind that it won\[aq]t show the effect of non-deterministic
 transformations).
-.IP \[bu] 2
-Avoid transformations that cause multiple distinct source files to map
+* Avoid transformations that cause multiple distinct source files to map
 to the same destination name.
-.IP \[bu] 2
-Consider disabling concurrency with \f[C]--transfers=1\f[R] if
+* Consider disabling concurrency with \f[C]--transfers=1\f[R] if
 necessary.
-.IP \[bu] 2
-Certain transformations (e.g.
+* Certain transformations (e.g.
 \f[C]prefix\f[R]) will have a multiplying effect every time they are
 used.
 Avoid these when using \f[C]bisync\f[R].
-.RE
 .IP
 .nf
 \f[C]
@@ -20152,9 +20151,18 @@ would use
 This option is only supported Windows platforms.
 .SS --use-json-log
 .PP
-This switches the log format to JSON for rclone.
-The fields of JSON log are \f[C]level\f[R], \f[C]msg\f[R],
-\f[C]source\f[R], \f[C]time\f[R].
+This switches the log format to JSON.
+The log messages are then streamed as individual JSON objects, with
+fields: \f[C]level\f[R], \f[C]msg\f[R], \f[C]source\f[R], and
+\f[C]time\f[R].
+The resulting format is what is sometimes referred to as
+newline-delimited
+JSON (https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON)
+(NDJSON), or JSON Lines (JSONL).
+This is well suited for processing by traditional line-oriented tools
+and shell pipelines, but a complete log file is not strictly valid JSON
+and needs a parser that can handle it.
+.PP
 The JSON logs will be printed on a single line, but are shown expanded
 here for clarity.
 .IP
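Because each --use-json-log message is a standalone JSON object on its own line, a consumer decodes line by line rather than parsing the whole file at once. A minimal sketch of such a reader (the struct fields mirror the documented level/msg/source/time keys; the sample line itself is illustrative, not real rclone output):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// logLine mirrors the fields documented for --use-json-log output.
type logLine struct {
	Level  string `json:"level"`
	Msg    string `json:"msg"`
	Source string `json:"source"`
	Time   string `json:"time"`
}

// parseNDJSON decodes one JSON object per non-empty line.
func parseNDJSON(r string) ([]logLine, error) {
	var out []logLine
	sc := bufio.NewScanner(strings.NewReader(r))
	for sc.Scan() {
		if sc.Text() == "" {
			continue
		}
		var l logLine
		if err := json.Unmarshal(sc.Bytes(), &l); err != nil {
			return nil, err
		}
		out = append(out, l)
	}
	return out, sc.Err()
}

func main() {
	sample := `{"level":"info","msg":"Copied","source":"operations.go:123","time":"2025-07-09T10:00:00Z"}` + "\n"
	lines, err := parseNDJSON(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(lines[0].Level, lines[0].Msg) // info Copied
}
```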
@@ -30217,7 +30225,7 @@ Flags for general networking and HTTP stuff.
 --tpslimit float Limit HTTP transactions per second to this
 --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
 --use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.70.0\[dq])
+--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.70.3\[dq])
 \f[R]
 .fi
 .SS Performance
@@ -30699,6 +30707,7 @@ Backend-only flags (these can be set in the config file also).
 --ftp-explicit-tls Use Explicit FTPS (FTP over TLS)
 --ftp-force-list-hidden Use LIST -a to force listing of hidden files and folders. This will disable the use of MLSD
 --ftp-host string FTP host to connect to
+--ftp-http-proxy string URL for HTTP CONNECT proxy
 --ftp-idle-timeout Duration Max time before closing idle connections (default 1m0s)
 --ftp-no-check-certificate Do not verify the TLS certificate of the server
 --ftp-no-check-upload Don\[aq]t check the upload is OK
@@ -44237,6 +44246,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC
 6749.
 .PP
+Note that this option is NOT supported by all backends.
+.PP
 Properties:
 .IP \[bu] 2
 Config: client_credentials
@@ -46539,6 +46550,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC
 6749.
 .PP
+Note that this option is NOT supported by all backends.
+.PP
 Properties:
 .IP \[bu] 2
 Config: client_credentials
@@ -48032,7 +48045,7 @@ See the metadata (https://rclone.org/docs/#metadata) docs for more info.
 The DOI remote is a read only remote for reading files from digital
 object identifiers (DOI).
 .PP
-Currently, the DOI backend supports supports DOIs hosted with: -
+Currently, the DOI backend supports DOIs hosted with: -
 InvenioRDM (https://inveniosoftware.org/products/rdm/) -
 Zenodo (https://zenodo.org) - CaltechDATA (https://data.caltech.edu) -
 Other InvenioRDM repositories (https://inveniosoftware.org/showcase/) -
@@ -48687,6 +48700,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC
 6749.
 .PP
+Note that this option is NOT supported by all backends.
+.PP
 Properties:
 .IP \[bu] 2
 Config: client_credentials
@@ -50455,6 +50470,22 @@ Env Var: RCLONE_FTP_SOCKS_PROXY
 Type: string
 .IP \[bu] 2
 Required: false
+.SS --ftp-http-proxy
+.PP
+URL for HTTP CONNECT proxy
+.PP
+Set this to a URL for an HTTP proxy which supports the HTTP CONNECT
+verb.
+.PP
+Properties:
+.IP \[bu] 2
+Config: http_proxy
+.IP \[bu] 2
+Env Var: RCLONE_FTP_HTTP_PROXY
+.IP \[bu] 2
+Type: string
+.IP \[bu] 2
+Required: false
 .SS --ftp-no-check-upload
 .PP
 Don\[aq]t check the upload is OK
@@ -52019,6 +52050,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC
 6749.
 .PP
+Note that this option is NOT supported by all backends.
+.PP
 Properties:
 .IP \[bu] 2
 Config: client_credentials
@@ -53271,6 +53304,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC
 6749.
 .PP
+Note that this option is NOT supported by all backends.
+.PP
 Properties:
 .IP \[bu] 2
 Config: client_credentials
@@ -55332,6 +55367,8 @@ Use client credentials OAuth flow.
 This will use the OAUTH2 client Credentials Flow as described in RFC
 6749.
 .PP
+Note that this option is NOT supported by all backends.
+.PP
 Properties:
 .IP \[bu] 2
 Config: client_credentials
@@ -55688,6 +55725,28 @@ Rclone cannot delete files anywhere except under \f[C]album\f[R].
 .PP
 The Google Photos API does not support deleting albums - see bug
 #135714733 (https://issuetracker.google.com/issues/135714733).
+.SS Making your own client_id
+.PP
+When you use rclone with Google photos in its default configuration you
+are using rclone\[aq]s client_id.
+This is shared between all the rclone users.
+There is a global rate limit on the number of queries per second that
+each client_id can do set by Google.
+.PP
+If there is a problem with this client_id (eg quota too low or the
+client_id stops working) then you can make your own.
+.PP
+Please follow the steps in the google drive
+docs (https://rclone.org/drive/#making-your-own-client-id).
+You will need these scopes instead of the drive ones detailed:
+.IP
+.nf
+\f[C]
+https://www.googleapis.com/auth/photoslibrary.appendonly
+https://www.googleapis.com/auth/photoslibrary.readonly.appcreateddata
+https://www.googleapis.com/auth/photoslibrary.edit.appcreateddata
+\f[R]
+.fi
 .SH Hasher
 .PP
 Hasher is a special overlay backend to create remotes which handle
@@ -56765,6 +56824,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
.PP .PP
Note that this option is NOT supported by all backends.
.PP
Properties: Properties:
.IP \[bu] 2 .IP \[bu] 2
Config: client_credentials Config: client_credentials
@@ -58970,6 +59031,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC This will use the OAUTH2 client Credentials Flow as described in RFC
6749. 6749.
.PP .PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -60152,6 +60215,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
.PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -63722,8 +63787,14 @@ For example, you might see throttling.
To create your own Client ID, please follow these steps:
.IP "1." 3
Open
https://portal.azure.com/?quickstart=true#view/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/\[ti]/Overview
and then under the \f[C]Add\f[R] menu click \f[C]App registration\f[R].
.RS 4
.IP \[bu] 2
If you have not created an Azure account, you will be prompted to.
This is free, but you need to provide a phone number, address, and
credit card for identity verification.
.RE
.IP "2." 3
Enter a name for your app, choose account type
\f[C]Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)\f[R],
@@ -64168,6 +64239,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
.PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -69155,6 +69228,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
.PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -70143,6 +70218,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
.PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -70838,6 +70915,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
.PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -76486,6 +76565,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
.PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -76862,6 +76943,8 @@ Use client credentials OAuth flow.
This will use the OAUTH2 client Credentials Flow as described in RFC
6749.
.PP
Note that this option is NOT supported by all backends.
.PP
Properties:
.IP \[bu] 2
Config: client_credentials
@@ -78165,6 +78248,113 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.70.3 - 2025-07-09
.PP
See commits (https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
check: Fix difference report (was reporting error counts) (albertony)
.IP \[bu] 2
march: Fix deadlock when using \f[C]--no-traverse\f[R] (Nick Craig-Wood)
.IP \[bu] 2
doc fixes (albertony, Nick Craig-Wood)
.RE
.IP \[bu] 2
Azure Blob
.RS 2
.IP \[bu] 2
Fix server side copy error \[dq]requires exactly one scope\[dq] (Nick
Craig-Wood)
.RE
.IP \[bu] 2
B2
.RS 2
.IP \[bu] 2
Fix finding objects when using \f[C]--b2-version-at\f[R] (Davide
Bizzarri)
.RE
.IP \[bu] 2
Linkbox
.RS 2
.IP \[bu] 2
Fix upload error \[dq]user upload file not exist\[dq] (Nick Craig-Wood)
.RE
.IP \[bu] 2
Pikpak
.RS 2
.IP \[bu] 2
Improve error handling for missing links and unrecoverable 500s
(wiserain)
.RE
.IP \[bu] 2
WebDAV
.RS 2
.IP \[bu] 2
Fix setting modtime to that of local object instead of remote
(WeidiDeng)
.RE
.SS v1.70.2 - 2025-06-27
.PP
See commits (https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
convmv: Make --dry-run logs less noisy (nielash)
.IP \[bu] 2
sync: Avoid copying dir metadata to itself (nielash)
.IP \[bu] 2
build: Bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix
GHSA-vrw8-fxc6-2r93 (dependabot[bot])
.IP \[bu] 2
convmv: Fix moving to unicode-equivalent name (nielash)
.IP \[bu] 2
log: Fix deadlock when using systemd logging (Nick Craig-Wood)
.IP \[bu] 2
pacer: Fix nil pointer deref in RetryError (Nick Craig-Wood)
.IP \[bu] 2
doc fixes (Ali Zein Yousuf, Nick Craig-Wood)
.RE
.IP \[bu] 2
Local
.RS 2
.IP \[bu] 2
Fix --skip-links on Windows when skipping Junction points (Nick
Craig-Wood)
.RE
.IP \[bu] 2
Combine
.RS 2
.IP \[bu] 2
Fix directory not found errors with ListP interface (Nick Craig-Wood)
.RE
.IP \[bu] 2
Mega
.RS 2
.IP \[bu] 2
Fix tls handshake failure (necaran)
.RE
.IP \[bu] 2
Pikpak
.RS 2
.IP \[bu] 2
Fix uploads fail with \[dq]aws-chunked encoding is not supported\[dq]
error (Nick Craig-Wood)
.RE
.SS v1.70.1 - 2025-06-19
.PP
See commits (https://github.com/rclone/rclone/compare/v1.70.0...v1.70.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
convmv: Fix spurious \[dq]error running command echo\[dq] on Windows
(Nick Craig-Wood)
.IP \[bu] 2
doc fixes (albertony, Ed Craig-Wood, jinjingroad)
.RE
.SS v1.70.0 - 2025-06-17
.PP
See commits (https://github.com/rclone/rclone/compare/v1.69.0...v1.70.0)