mirror of https://github.com/rclone/rclone.git synced 2025-12-31 23:53:18 +00:00

Compare commits


14 Commits

Author SHA1 Message Date
Nick Craig-Wood
24739b56d5 fs: allow global variables to be overridden or set on backend creation
This allows backend config to contain

- `override.var` - set var during remote creation only
- `global.var` - set var in the global config permanently
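Combined into one sketch of a config file (the backend `type` is elided as `XXX`, and the variable names are taken from the docs this commit adds):

```ini
[remote]
type = XXX
# override.var - applied only while this remote is being created
override.no_check_certificate = true
# global.var - set in the global config permanently
global.checksum = true
```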

Fixes #8563
2025-07-23 15:27:52 +01:00
Nick Craig-Wood
a9178cab8c fs: allow setting of --http_proxy from command line
This in turn allows `override.http_proxy` to be set in backend configs
to set an http proxy for a single backend.
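As a sketch of the connection string form (the proxy URL is a placeholder; the value is quoted because it contains colons):

```
rclone lsd 'remote,override.http_proxy="http://proxy.example.com:3128":'
```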
2025-07-23 15:26:55 +01:00
Nick Craig-Wood
4133a197bc Version v1.70.3 2025-07-09 10:51:25 +01:00
Nick Craig-Wood
a30a4909fe azureblob: fix server side copy error "requires exactly one scope"
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error

    ManagedIdentityCredential.GetToken() requires exactly one scope

when doing server side copies.

This was introduced in:

3a5ddfcd3c azureblob: implement multipart server side copy

This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.

Fixes #8662
2025-07-09 10:32:12 +01:00
albertony
cdc6d22929 docs: explain the json log format in more detail 2025-07-09 10:32:12 +01:00
albertony
e319406f52 check: fix difference report (was reporting error counts) 2025-07-09 10:32:12 +01:00
Nick Craig-Wood
ac54cccced linkbox: fix upload error "user upload file not exist"
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.

This is problematic for several reasons:

1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported the whole body gets uploaded

This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.

It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.

See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
2025-07-09 10:32:12 +01:00
Nick Craig-Wood
4c4d366e29 march: fix deadlock when using --no-traverse - fixes #8656
This occurred whenever there were more than 100 files in the source due
to the output channel filling up.

The fix is not to use list.NewSorter but to take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.

In order not to cause another deadlock, we need to send nil dst
objects, which is safe since this adjusts the termination conditions
for the channels.

Thanks to @jeremy for the test script the Go tests are based on.
2025-07-09 10:32:12 +01:00
wiserain
64fc3d05ae pikpak: improve error handling for missing links and unrecoverable 500s
This commit improves error handling in two specific scenarios:

* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength

* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid." These errors are non-recoverable, so retrying is futile.
2025-07-09 10:32:12 +01:00
WeidiDeng
90386efeb1 webdav: fix setting modtime to that of local object instead of remote
In the commit below, the source of the modtime was accidentally changed to the wrong object:

0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support

This reverts that change and fixes the integration tests.
2025-07-09 10:32:12 +01:00
Davide Bizzarri
5f78b47295 fix: b2 versionAt read metadata 2025-07-09 10:32:12 +01:00
Nick Craig-Wood
775ee90fa5 Start v1.70.3-DEV development 2025-07-02 15:36:43 +01:00
Nick Craig-Wood
444392bf9c docs: fix filescom/filelu link mixup
See: https://forum.rclone.org/t/a-small-bug-in-rclone-documentation/51774
2025-07-02 15:35:18 +01:00
Nick Craig-Wood
d36259749f docs: update link for filescom 2025-06-30 11:10:31 +01:00
33 changed files with 687 additions and 100 deletions

MANUAL.html generated

@@ -81,7 +81,7 @@
<header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Jun 27, 2025</p>
<p class="date">Jul 09, 2025</p>
</header>
<h1 id="name">NAME</h1>
<p>rclone - manage files on cloud storage</p>
@@ -222,8 +222,8 @@ Use &quot;rclone help backends&quot; for a list of supported services.
<li>Dropbox</li>
<li>Enterprise File Fabric</li>
<li>Fastmail Files</li>
<li>Files.com</li>
<li>FileLu Cloud Storage</li>
<li>Files.com</li>
<li>FlashBlade</li>
<li>FTP</li>
<li>Gofile</li>
@@ -8258,7 +8258,8 @@ y/n/s/!/q&gt; n</code></pre>
<pre><code>--log-file rclone.log --log-level DEBUG --windows-event-log ERROR</code></pre>
<p>This option is only supported Windows platforms.</p>
<h3 id="use-json-log">--use-json-log</h3>
<p>This switches the log format to JSON for rclone. The fields of JSON log are <code>level</code>, <code>msg</code>, <code>source</code>, <code>time</code>. The JSON logs will be printed on a single line, but are shown expanded here for clarity.</p>
<p>This switches the log format to JSON. The log messages are then streamed as individual JSON objects, with fields: <code>level</code>, <code>msg</code>, <code>source</code>, and <code>time</code>. The resulting format is what is sometimes referred to as <a href="https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON">newline-delimited JSON</a> (NDJSON), or JSON Lines (JSONL). This is well suited for processing by traditional line-oriented tools and shell pipelines, but a complete log file is not strictly valid JSON and needs a parser that can handle it.</p>
<p>The JSON logs will be printed on a single line, but are shown expanded here for clarity.</p>
<div class="sourceCode" id="cb654"><pre class="sourceCode json"><code class="sourceCode json"><span id="cb654-1"><a href="#cb654-1" aria-hidden="true"></a><span class="fu">{</span></span>
<span id="cb654-2"><a href="#cb654-2" aria-hidden="true"></a> <span class="dt">&quot;time&quot;</span><span class="fu">:</span> <span class="st">&quot;2025-05-13T17:30:51.036237518+01:00&quot;</span><span class="fu">,</span></span>
<span id="cb654-3"><a href="#cb654-3" aria-hidden="true"></a> <span class="dt">&quot;level&quot;</span><span class="fu">:</span> <span class="st">&quot;debug&quot;</span><span class="fu">,</span></span>
@@ -13231,7 +13232,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.70.2&quot;)</code></pre>
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.70.3&quot;)</code></pre>
<h2 id="performance">Performance</h2>
<p>Flags helpful for increasing performance.</p>
<pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -40090,6 +40091,36 @@ $ tree /tmp/c
<li>"error": return an error based on option value</li>
</ul>
<h1 id="changelog-1">Changelog</h1>
<h2 id="v1.70.3---2025-07-09">v1.70.3 - 2025-07-09</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3">See commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>check: Fix difference report (was reporting error counts) (albertony)</li>
<li>march: Fix deadlock when using <code>--no-traverse</code> (Nick Craig-Wood)</li>
<li>doc fixes (albertony, Nick Craig-Wood)</li>
</ul></li>
<li>Azure Blob
<ul>
<li>Fix server side copy error "requires exactly one scope" (Nick Craig-Wood)</li>
</ul></li>
<li>B2
<ul>
<li>Fix finding objects when using <code>--b2-version-at</code> (Davide Bizzarri)</li>
</ul></li>
<li>Linkbox
<ul>
<li>Fix upload error "user upload file not exist" (Nick Craig-Wood)</li>
</ul></li>
<li>Pikpak
<ul>
<li>Improve error handling for missing links and unrecoverable 500s (wiserain)</li>
</ul></li>
<li>WebDAV
<ul>
<li>Fix setting modtime to that of local object instead of remote (WeidiDeng)</li>
</ul></li>
</ul>
<h2 id="v1.70.2---2025-06-27">v1.70.2 - 2025-06-27</h2>
<p><a href="https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2">See commits</a></p>
<ul>

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Jun 27, 2025
% Jul 09, 2025
# NAME
@@ -192,8 +192,8 @@ WebDAV or S3, that work out of the box.)
- Dropbox
- Enterprise File Fabric
- Fastmail Files
- Files.com
- FileLu Cloud Storage
- Files.com
- FlashBlade
- FTP
- Gofile
@@ -16332,9 +16332,16 @@ This option is only supported Windows platforms.
### --use-json-log ###
This switches the log format to JSON for rclone. The fields of JSON
log are `level`, `msg`, `source`, `time`. The JSON logs will be
printed on a single line, but are shown expanded here for clarity.
This switches the log format to JSON. The log messages are then
streamed as individual JSON objects, with fields: `level`, `msg`, `source`,
and `time`. The resulting format is what is sometimes referred to as
[newline-delimited JSON](https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON)
(NDJSON), or JSON Lines (JSONL). This is well suited for processing by
traditional line-oriented tools and shell pipelines, but a complete log
file is not strictly valid JSON and needs a parser that can handle it.
The JSON logs will be printed on a single line, but are shown expanded
here for clarity.
```json
{
@@ -22404,7 +22411,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.2")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
```
@@ -59140,6 +59147,25 @@ Options:
# Changelog
## v1.70.3 - 2025-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3)
* Bug Fixes
* check: Fix difference report (was reporting error counts) (albertony)
* march: Fix deadlock when using `--no-traverse` (Nick Craig-Wood)
* doc fixes (albertony, Nick Craig-Wood)
* Azure Blob
* Fix server side copy error "requires exactly one scope" (Nick Craig-Wood)
* B2
* Fix finding objects when using `--b2-version-at` (Davide Bizzarri)
* Linkbox
* Fix upload error "user upload file not exist" (Nick Craig-Wood)
* Pikpak
* Improve error handling for missing links and unrecoverable 500s (wiserain)
* WebDAV
* Fix setting modtime to that of local object instead of remote (WeidiDeng)
## v1.70.2 - 2025-06-27
[See commits](https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2)

MANUAL.txt generated

@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Jun 27, 2025
Jul 09, 2025
NAME
@@ -179,8 +179,8 @@ S3, that work out of the box.)
- Dropbox
- Enterprise File Fabric
- Fastmail Files
- Files.com
- FileLu Cloud Storage
- Files.com
- FlashBlade
- FTP
- Gofile
@@ -15797,9 +15797,16 @@ This option is only supported Windows platforms.
--use-json-log
This switches the log format to JSON for rclone. The fields of JSON log
are level, msg, source, time. The JSON logs will be printed on a single
line, but are shown expanded here for clarity.
This switches the log format to JSON. The log messages are then streamed
as individual JSON objects, with fields: level, msg, source, and time.
The resulting format is what is sometimes referred to as
newline-delimited JSON (NDJSON), or JSON Lines (JSONL). This is well
suited for processing by traditional line-oriented tools and shell
pipelines, but a complete log file is not strictly valid JSON and needs
a parser that can handle it.
The JSON logs will be printed on a single line, but are shown expanded
here for clarity.
{
"time": "2025-05-13T17:30:51.036237518+01:00",
@@ -21963,7 +21970,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.2")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
Performance
@@ -58801,6 +58808,29 @@ Options:
Changelog
v1.70.3 - 2025-07-09
See commits
- Bug Fixes
- check: Fix difference report (was reporting error counts)
(albertony)
- march: Fix deadlock when using --no-traverse (Nick Craig-Wood)
- doc fixes (albertony, Nick Craig-Wood)
- Azure Blob
- Fix server side copy error "requires exactly one scope" (Nick
Craig-Wood)
- B2
- Fix finding objects when using --b2-version-at (Davide Bizzarri)
- Linkbox
- Fix upload error "user upload file not exist" (Nick Craig-Wood)
- Pikpak
- Improve error handling for missing links and unrecoverable 500s
(wiserain)
- WebDAV
- Fix setting modtime to that of local object instead of remote
(WeidiDeng)
v1.70.2 - 2025-06-27
See commits


@@ -39,6 +39,7 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
* FileLu [:page_facing_up:](https://rclone.org/filelu/)
* Files.com [:page_facing_up:](https://rclone.org/filescom/)
* FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
* FTP [:page_facing_up:](https://rclone.org/ftp/)


@@ -1 +1 @@
v1.70.2
v1.70.3


@@ -72,6 +72,7 @@ const (
emulatorAccount = "devstoreaccount1"
emulatorAccountKey = "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw=="
emulatorBlobEndpoint = "http://127.0.0.1:10000/devstoreaccount1"
sasCopyValidity = time.Hour // how long SAS should last when doing server side copy
)
var (
@@ -559,6 +560,11 @@ type Fs struct {
pacer *fs.Pacer // To pace and retry the API calls
uploadToken *pacer.TokenDispenser // control concurrency
publicAccess container.PublicAccessType // Container Public Access Level
// user delegation cache
userDelegationMu sync.Mutex
userDelegation *service.UserDelegationCredential
userDelegationExpiry time.Time
}
// Object describes an azure object
@@ -1688,6 +1694,38 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
return f.deleteContainer(ctx, container)
}
// Get a user delegation which is valid for at least sasCopyValidity
//
// This value is cached in f
func (f *Fs) getUserDelegation(ctx context.Context) (*service.UserDelegationCredential, error) {
f.userDelegationMu.Lock()
defer f.userDelegationMu.Unlock()
if f.userDelegation != nil && time.Until(f.userDelegationExpiry) > sasCopyValidity {
return f.userDelegation, nil
}
// Validity window
start := time.Now().UTC()
expiry := start.Add(2 * sasCopyValidity)
startStr := start.Format(time.RFC3339)
expiryStr := expiry.Format(time.RFC3339)
// Acquire user delegation key from the service client
info := service.KeyInfo{
Start: &startStr,
Expiry: &expiryStr,
}
userDelegationKey, err := f.svc.GetUserDelegationCredential(ctx, info, nil)
if err != nil {
return nil, fmt.Errorf("failed to get user delegation key: %w", err)
}
f.userDelegation = userDelegationKey
f.userDelegationExpiry = expiry
return f.userDelegation, nil
}
// getAuth gets auth to copy o.
//
// tokenOK is used to signal that token based auth (Microsoft Entra
@@ -1699,7 +1737,7 @@ func (f *Fs) Purge(ctx context.Context, dir string) error {
// URL (not a SAS) and token will be empty.
//
// If tokenOK is true it may also return a token for the auth.
func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL string, token *string, err error) {
func (o *Object) getAuth(ctx context.Context, noAuth bool) (srcURL string, err error) {
f := o.fs
srcBlobSVC := o.getBlobSVC()
srcURL = srcBlobSVC.URL()
@@ -1708,29 +1746,47 @@ func (o *Object) getAuth(ctx context.Context, tokenOK bool, noAuth bool) (srcURL
case noAuth:
// If same storage account then no auth needed
case f.cred != nil:
if !tokenOK {
return srcURL, token, errors.New("not supported: Microsoft Entra ID")
}
options := policy.TokenRequestOptions{}
accessToken, err := f.cred.GetToken(ctx, options)
// Generate a User Delegation SAS URL using Azure AD credentials
userDelegationKey, err := f.getUserDelegation(ctx)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create access token: %w", err)
return "", fmt.Errorf("sas creation: %w", err)
}
token = &accessToken.Token
// Build the SAS values
perms := sas.BlobPermissions{Read: true}
container, containerPath := o.split()
start := time.Now().UTC()
expiry := start.Add(sasCopyValidity)
vals := sas.BlobSignatureValues{
StartTime: start,
ExpiryTime: expiry,
Permissions: perms.String(),
ContainerName: container,
BlobName: containerPath,
}
// Sign with the delegation key
queryParameters, err := vals.SignWithUserDelegation(userDelegationKey)
if err != nil {
return "", fmt.Errorf("signing SAS with user delegation failed: %w", err)
}
// Append the SAS to the URL
srcURL = srcBlobSVC.URL() + "?" + queryParameters.Encode()
case f.sharedKeyCred != nil:
// Generate a short lived SAS URL if using shared key credentials
expiry := time.Now().Add(time.Hour)
expiry := time.Now().Add(sasCopyValidity)
sasOptions := blob.GetSASURLOptions{}
srcURL, err = srcBlobSVC.GetSASURL(sas.BlobPermissions{Read: true}, expiry, &sasOptions)
if err != nil {
return srcURL, token, fmt.Errorf("failed to create SAS URL: %w", err)
return srcURL, fmt.Errorf("failed to create SAS URL: %w", err)
}
case f.anonymous || f.opt.SASURL != "":
// If using a SASURL or anonymous, no need for any extra auth
default:
return srcURL, token, errors.New("unknown authentication type")
return srcURL, errors.New("unknown authentication type")
}
return srcURL, token, nil
return srcURL, nil
}
// Do multipart parallel copy.
@@ -1751,7 +1807,7 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
o.fs = f
o.remote = remote
srcURL, token, err := src.getAuth(ctx, true, false)
srcURL, err := src.getAuth(ctx, false)
if err != nil {
return nil, fmt.Errorf("multipart copy: %w", err)
}
@@ -1795,7 +1851,8 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
Count: partSize,
},
// Specifies the authorization scheme and signature for the copy source.
CopySourceAuthorization: token,
// We use SAS URLs as this doesn't seem to work always
// CopySourceAuthorization: token,
// CPKInfo *blob.CPKInfo
// CPKScopeInfo *blob.CPKScopeInfo
}
@@ -1865,7 +1922,7 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
dstBlobSVC := f.getBlobSVC(dstContainer, dstPath)
// Get the source auth - none needed for same storage account
srcURL, _, err := src.getAuth(ctx, false, f == src.fs)
srcURL, err := src.getAuth(ctx, f == src.fs)
if err != nil {
return nil, fmt.Errorf("single part copy: source auth: %w", err)
}


@@ -1673,6 +1673,21 @@ func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
return o.getMetaDataListing(ctx)
}
}
// If using versionAt we need to list to find the correct version.
if o.fs.opt.VersionAt.IsSet() {
info, err := o.getMetaDataListing(ctx)
if err != nil {
return nil, err
}
if info.Action == "hide" {
// Return object not found error if the current version is deleted.
return nil, fs.ErrorObjectNotFound
}
return info, nil
}
_, info, err = o.getOrHead(ctx, "HEAD", nil)
return info, err
}


@@ -446,14 +446,14 @@ func (f *Fs) InternalTestVersions(t *testing.T) {
t.Run("List", func(t *testing.T) {
fstest.CheckListing(t, f, test.want)
})
// b2 NewObject doesn't work with VersionAt
//t.Run("NewObject", func(t *testing.T) {
// gotObj, gotErr := f.NewObject(ctx, fileName)
// assert.Equal(t, test.wantErr, gotErr)
// if gotErr == nil {
// assert.Equal(t, test.wantSize, gotObj.Size())
// }
//})
t.Run("NewObject", func(t *testing.T) {
gotObj, gotErr := f.NewObject(ctx, fileName)
assert.Equal(t, test.wantErr, gotErr)
if gotErr == nil {
assert.Equal(t, test.wantSize, gotObj.Size())
}
})
})
}
})


@@ -617,16 +617,36 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
case 1:
// upload file using link from first step
var res *http.Response
var location string
// Check to see if we are being redirected
opts := &rest.Opts{
Method: "HEAD",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
NoRedirect: true,
}
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)
return o.fs.shouldRetry(ctx, res, err)
})
if res != nil {
location = res.Header.Get("Location")
if location != "" {
// set the URL to the new Location
opts.RootURL = location
err = nil
}
}
if err != nil {
return fmt.Errorf("head upload URL: %w", err)
}
file := io.MultiReader(bytes.NewReader(first10mBytes), in)
opts := &rest.Opts{
Method: "PUT",
RootURL: getFirstStepResult.Data.SignURL,
Options: options,
Body: file,
ContentLength: &size,
}
opts.Method = "PUT"
opts.Body = file
opts.ContentLength = &size
err = o.fs.pacer.CallNoRetry(func() (bool, error) {
res, err = o.fs.srv.Call(ctx, opts)


@@ -155,6 +155,7 @@ func (f *Fs) getFile(ctx context.Context, ID string) (info *api.File, err error)
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rst.CallJSON(ctx, &opts, nil, &info)
if err == nil && !info.Links.ApplicationOctetStream.Valid() {
time.Sleep(5 * time.Second)
return true, errors.New("no link")
}
return f.shouldRetry(ctx, resp, err)


@@ -467,6 +467,11 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
// when a zero-byte file was uploaded with an invalid captcha token
f.rst.captcha.Invalidate()
return true, err
} else if strings.Contains(apiErr.Reason, "idx.shub.mypikpak.com") && apiErr.Code == 500 {
// internal server error: Post "http://idx.shub.mypikpak.com": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
// This typically happens when trying to retrieve a gcid for which no record exists.
// No retry is needed in this case.
return false, err
}
}


@@ -1550,7 +1550,7 @@ func (o *Object) extraHeaders(ctx context.Context, src fs.ObjectInfo) map[string
extraHeaders := map[string]string{}
if o.fs.useOCMtime || o.fs.hasOCMD5 || o.fs.hasOCSHA1 {
if o.fs.useOCMtime {
extraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", o.modTime.Unix())
extraHeaders["X-OC-Mtime"] = fmt.Sprintf("%d", src.ModTime(ctx).Unix())
}
// Set one upload checksum
// Owncloud uses one checksum only to check the upload and stores its own SHA1 and MD5


@@ -123,8 +123,8 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
{{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}}
{{< provider name="Fastmail Files" home="https://www.fastmail.com/" config="/webdav/#fastmail-files" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FileLu Cloud Storage" home="https://filelu.com/" config="/filelu/" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FlashBlade" home="https://www.purestorage.com/products/unstructured-data-storage.html" config="/s3/#pure-storage-flashblade" >}}
{{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}}
{{< provider name="Gofile" home="https://gofile.io/" config="/gofile/" >}}


@@ -5,6 +5,25 @@ description: "Rclone Changelog"
# Changelog
## v1.70.3 - 2025-07-09
[See commits](https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3)
* Bug Fixes
* check: Fix difference report (was reporting error counts) (albertony)
* march: Fix deadlock when using `--no-traverse` (Nick Craig-Wood)
* doc fixes (albertony, Nick Craig-Wood)
* Azure Blob
* Fix server side copy error "requires exactly one scope" (Nick Craig-Wood)
* B2
* Fix finding objects when using `--b2-version-at` (Davide Bizzarri)
* Linkbox
* Fix upload error "user upload file not exist" (Nick Craig-Wood)
* Pikpak
* Improve error handling for missing links and unrecoverable 500s (wiserain)
* WebDAV
* Fix setting modtime to that of local object instead of remote (WeidiDeng)
## v1.70.2 - 2025-06-27
[See commits](https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2)


@@ -998,7 +998,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.2")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect


@@ -384,6 +384,103 @@ Do not use single character names on Windows as it creates ambiguity with Window
drives' names, e.g.: remote called `C` is indistinguishable from `C` drive. Rclone
will always assume that single letter name refers to a drive.
## Adding global configuration to a remote
It is possible to add global configuration to the remote configuration which
will be applied just before the remote is created.
This can be done in two ways. The first is to use `override.var = value` in the
config file or the connection string for a temporary change, and the second is
to use `global.var = value` in the config file or connection string for a
permanent change.
This is explained fully below.
### override.var
This is used to override a global variable **just** for the duration of the
remote creation. It won't affect other remotes even if they are created at the
same time.
This is very useful for overriding networking config needed just for that
remote. For example, say you have a remote which needs `--no-check-certificate`
as it is running on test infrastructure without a proper certificate. You could
supply the `--no-check-certificate` flag to rclone, but this will affect **all**
the remotes. To make it just affect this remote you use an override. You could
put this in the config file:
```ini
[remote]
type = XXX
...
override.no_check_certificate = true
```
or use it in the connection string `remote,override.no_check_certificate=true:`
(or just `remote,override.no_check_certificate:`).
Note how the global flag name loses its initial `--`, has `-` replaced with
`_`, and gains an `override.` prefix.
Not all global variables make sense to be overridden like this as the config is
only applied during the remote creation. Here is a non-exhaustive list of ones
which might be useful:
- `bind_addr`
- `ca_cert`
- `client_cert`
- `client_key`
- `connect_timeout`
- `disable_http2`
- `disable_http_keep_alives`
- `dump`
- `expect_continue_timeout`
- `headers`
- `http_proxy`
- `low_level_retries`
- `max_connections`
- `no_check_certificate`
- `no_gzip`
- `timeout`
- `traffic_class`
- `use_cookies`
- `use_server_modtime`
- `user_agent`
An `override.var` will override all other config methods, but **just** for the
duration of the creation of the remote.
### global.var
This is used to set a global variable **for everything**. The global variable is
set just before the remote is created.
This is useful for parameters (eg sync parameters) which can't be set as an
`override`. For example, say you have a remote where you would always like to
use the `--checksum` flag. You could supply the `--checksum` flag to rclone on
every command line, but instead you could put this in the config file:
```ini
[remote]
type = XXX
...
global.checksum = true
```
or use it in the connection string `remote,global.checksum=true:` (or just
`remote,global.checksum:`). This is equivalent to using the `--checksum` flag.
Note how the global flag name loses its initial `--`, has `-` replaced with
`_`, and gains a `global.` prefix.
Any global variable can be set like this and it is exactly equivalent to using
the equivalent flag on the command line. This means it will affect all uses of
rclone.
If two remotes set the same global variable then the first one instantiated will
be overridden by the second one. A `global.var` will override all other config
methods when the remote is created.
Quoting and the shell
---------------------
@@ -1249,6 +1346,15 @@ rclone sync --interactive ~/src s3:test/dst --header-upload "Content-Disposition
See the GitHub issue [here](https://github.com/rclone/rclone/issues/59) for
currently supported backends.
### --http-proxy string
Use this option to set an HTTP proxy for all HTTP based services to
use.
Rclone also supports the standard HTTP proxy environment variables
which it will pick up automatically. This is the way the HTTP proxy
will normally be set, but this flag can be used to override it.
### --human-readable ###
Rclone commands output values for sizes (e.g. number of bytes) and
@@ -1545,9 +1651,16 @@ This option is only supported Windows platforms.
### --use-json-log ###
This switches the log format to JSON for rclone. The fields of JSON
log are `level`, `msg`, `source`, `time`. The JSON logs will be
printed on a single line, but are shown expanded here for clarity.
This switches the log format to JSON. The log messages are then
streamed as individual JSON objects, with fields: `level`, `msg`, `source`,
and `time`. The resulting format is what is sometimes referred to as
[newline-delimited JSON](https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON)
(NDJSON), or JSON Lines (JSONL). This is well suited for processing by
traditional line-oriented tools and shell pipelines, but a complete log
file is not strictly valid JSON and needs a parser that can handle it.
The JSON logs will be printed on a single line, but are shown expanded
here for clarity.
```json
{


@@ -141,6 +141,16 @@ e.g.
Note that the FTP backend does not support `ftp_proxy` yet.
You can use the command line argument `--http-proxy` to set the proxy,
and in turn use an override in the config file if you want it set for
a single backend, eg `override.http_proxy = http://...` in the config
file.
The FTP and SFTP backends have their own `http_proxy` settings to
support an HTTP CONNECT proxy (
[--ftp-http-proxy](https://rclone.org/ftp/#ftp-http-proxy) and
[--sftp-http-proxy](https://rclone.org/ftp/#sftp-http-proxy) )
### Rclone gives x509: failed to load system roots and no roots provided error ###
This means that `rclone` can't find the SSL root certificates. Likely


@@ -119,7 +119,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.2")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.70.3")
```


@@ -57,7 +57,7 @@ off donation.
Thank you very much to our sponsors:
{{< sponsor src="/img/logos/idrive_e2.svg" width="300" height="200" title="Visit our sponsor IDrive e2" link="https://www.idrive.com/e2/?refer=rclone">}}
{{< sponsor src="/img/logos/filescom-enterprise-grade-workflows.png" width="300" height="200" title="Start Your Free Trial Today" link="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=banner">}}
{{< sponsor src="/img/logos/filescom-enterprise-grade-workflows.png" width="300" height="200" title="Start Your Free Trial Today" link="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone">}}
{{< sponsor src="/img/logos/sia.svg" width="200" height="200" title="Visit our sponsor sia" link="https://sia.tech">}}
{{< sponsor src="/img/logos/route4me.svg" width="400" height="200" title="Visit our sponsor Route4Me" link="https://route4me.com/">}}
{{< sponsor src="/img/logos/rcloneview.svg" width="300" height="200" title="Visit our sponsor RcloneView" link="https://rcloneview.com/">}}


@@ -24,7 +24,7 @@
Gold Sponsor
</div>
<div class="card-body">
<a href="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=banner" target="_blank" rel="noopener" title="Start Your Free Trial Today"><img style="max-width: 100%; height: auto;" src="/img/logos/filescom-enterprise-grade-workflows.png"></a><br />
<a href="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone" target="_blank" rel="noopener" title="Start Your Free Trial Today"><img style="max-width: 100%; height: auto;" src="/img/logos/filescom-enterprise-grade-workflows.png"></a><br />
</div>
</div>


@@ -66,8 +66,8 @@
<a class="dropdown-item" href="/koofr/#digi-storage"><i class="fa fa-cloud fa-fw"></i> Digi Storage</a>
<a class="dropdown-item" href="/dropbox/"><i class="fab fa-dropbox fa-fw"></i> Dropbox</a>
<a class="dropdown-item" href="/filefabric/"><i class="fa fa-cloud fa-fw"></i> Enterprise File Fabric</a>
<a class="dropdown-item" href="/filelu/"><i class="fa fa-brands fa-files-pinwheel fa-fw"></i> Files.com</a>
<a class="dropdown-item" href="/filescom/"><i class="fa fa-cloud fa-fw"></i> FileLu Cloud Storage</a>
<a class="dropdown-item" href="/filelu/"><i class="fa fa-folder"></i> FileLu Cloud Storage</a>
<a class="dropdown-item" href="/filescom/"><i class="fa fa-brands fa-files-pinwheel fa-fw"></i> Files.com</a>
<a class="dropdown-item" href="/ftp/"><i class="fa fa-file fa-fw"></i> FTP</a>
<a class="dropdown-item" href="/gofile/"><i class="fa fa-folder fa-fw"></i> Gofile</a>
<a class="dropdown-item" href="/googlecloudstorage/"><i class="fab fa-google fa-fw"></i> Google Cloud Storage</a>


@@ -1 +1 @@
v1.70.2
v1.70.3


@@ -555,6 +555,11 @@ var ConfigOptionsInfo = Options{{
Default: []string{},
Help: "Transform paths during the copy process.",
Groups: "Copy",
}, {
Name: "http_proxy",
Default: "",
Help: "HTTP proxy URL.",
Groups: "Networking",
}}
// ConfigInfo is filesystem config options
@@ -667,6 +672,7 @@ type ConfigInfo struct {
MetadataMapper SpaceSepList `config:"metadata_mapper"`
MaxConnections int `config:"max_connections"`
NameTransform []string `config:"name_transform"`
HTTPProxy string `config:"http_proxy"`
}
func init() {


@@ -6,10 +6,12 @@ import (
"context"
"crypto/tls"
"crypto/x509"
"fmt"
"net"
"net/http"
"net/http/cookiejar"
"net/http/httputil"
"net/url"
"os"
"sync"
"time"
@@ -55,7 +57,18 @@ func NewTransportCustom(ctx context.Context, customize func(*http.Transport)) ht
// This also means we get new stuff when it gets added to go
t := new(http.Transport)
structs.SetDefaults(t, http.DefaultTransport.(*http.Transport))
t.Proxy = http.ProxyFromEnvironment
if ci.HTTPProxy != "" {
proxyURL, err := url.Parse(ci.HTTPProxy)
if err != nil {
t.Proxy = func(*http.Request) (*url.URL, error) {
return nil, fmt.Errorf("failed to set --http-proxy from %q: %w", ci.HTTPProxy, err)
}
} else {
t.Proxy = http.ProxyURL(proxyURL)
}
} else {
t.Proxy = http.ProxyFromEnvironment
}
t.MaxIdleConnsPerHost = 2 * (ci.Checkers + ci.Transfers + 1)
t.MaxIdleConns = 2 * t.MaxIdleConnsPerHost
t.TLSHandshakeTimeout = ci.ConnectTimeout
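Read standalone, the new branch reduces to a small helper (a sketch extracted from the diff, not rclone's actual API surface): an explicit `--http-proxy` value takes precedence, a malformed value fails every request with a clear error, and otherwise the environment variables apply as before.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// proxyFunc returns the proxy selector for an http.Transport.
func proxyFunc(httpProxy string) func(*http.Request) (*url.URL, error) {
	if httpProxy != "" {
		proxyURL, err := url.Parse(httpProxy)
		if err != nil {
			// Defer the parse error so every request reports it.
			return func(*http.Request) (*url.URL, error) {
				return nil, fmt.Errorf("failed to set --http-proxy from %q: %w", httpProxy, err)
			}
		}
		return http.ProxyURL(proxyURL)
	}
	return http.ProxyFromEnvironment
}

func main() {
	u, err := proxyFunc("http://proxy.local:3128")(nil)
	fmt.Println(u, err) // prints: http://proxy.local:3128 <nil>
}
```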


@@ -20,7 +20,7 @@ const (
var (
errInvalidCharacters = errors.New("config name contains invalid characters - may only contain numbers, letters, `_`, `-`, `.`, `+`, `@` and space, while not start with `-` or space, and not end with space")
errCantBeEmpty = errors.New("can't use empty string as a path")
errBadConfigParam = errors.New("config parameters may only contain `0-9`, `A-Z`, `a-z` and `_`")
errBadConfigParam = errors.New("config parameters may only contain `0-9`, `A-Z`, `a-z`, `_` and `.`")
errEmptyConfigParam = errors.New("config parameters can't be empty")
errConfigNameEmpty = errors.New("config name can't be empty")
errConfigName = errors.New("config name needs a trailing `:`")
@@ -79,7 +79,8 @@ func isConfigParam(c rune) bool {
return ((c >= 'a' && c <= 'z') ||
(c >= 'A' && c <= 'Z') ||
(c >= '0' && c <= '9') ||
c == '_')
c == '_' ||
c == '.')
}
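The effect of admitting `.` can be sketched with a standalone copy of the predicate (the `valid` helper is hypothetical, added only for illustration):

```go
package main

import "fmt"

// isConfigParam reports whether c may appear in a connection string
// parameter name; '.' is now valid, enabling keys such as
// "override.http_proxy".
func isConfigParam(c rune) bool {
	return (c >= 'a' && c <= 'z') ||
		(c >= 'A' && c <= 'Z') ||
		(c >= '0' && c <= '9') ||
		c == '_' ||
		c == '.'
}

// valid reports whether every rune of s passes isConfigParam.
func valid(s string) bool {
	for _, c := range s {
		if !isConfigParam(c) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(valid("override.http_proxy"), valid("bad-key")) // prints: true false
}
```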
// Parsed is returned from Parse with the results of the connection string decomposition


@@ -16,7 +16,6 @@ import (
"github.com/rclone/rclone/fs/list"
"github.com/rclone/rclone/fs/walk"
"github.com/rclone/rclone/lib/transform"
"golang.org/x/sync/errgroup"
"golang.org/x/text/unicode/norm"
)
@@ -291,6 +290,7 @@ func (m *March) matchListings(srcChan, dstChan <-chan fs.DirEntry, srcOnly, dstO
srcPrev, dstPrev fs.DirEntry
srcPrevName, dstPrevName string
src, dst fs.DirEntry
srcHasMore, dstHasMore = true, true
srcName, dstName string
)
srcDone := func() {
@@ -311,14 +311,14 @@ func (m *March) matchListings(srcChan, dstChan <-chan fs.DirEntry, srcOnly, dstO
}
// Reload src and dst if needed - we set them to nil if used
if src == nil {
src = <-srcChan
src, srcHasMore = <-srcChan
srcName = m.srcKey(src)
}
if dst == nil {
dst = <-dstChan
dst, dstHasMore = <-dstChan
dstName = m.dstKey(dst)
}
if src == nil && dst == nil {
if !srcHasMore && !dstHasMore {
break
}
if src != nil && srcPrev != nil {
@@ -419,38 +419,65 @@ func (m *March) processJob(job listDirJob) ([]listDirJob, error) {
// If NoTraverse is set, then try to find a matching object
// for each item in the srcList to head dst object
if m.NoTraverse && !m.NoCheckDest {
startedDst = true
workers := ci.Checkers
originalSrcChan := srcChan
srcChan = make(chan fs.DirEntry, 100)
ls, err := list.NewSorter(m.Ctx, m.Fdst, list.SortToChan(dstChan), m.dstKey)
if err != nil {
return nil, err
type matchTask struct {
src fs.DirEntry // src object to find in destination
dstMatch chan<- fs.DirEntry // channel to receive matching dst object or nil
}
matchTasks := make(chan matchTask, workers)
dstMatches := make(chan (<-chan fs.DirEntry), workers)
// Create the tasks from the originalSrcChan. These are put into matchTasks for
// processing and dstMatches so they can be retrieved in order.
go func() {
for src := range originalSrcChan {
srcChan <- src
dstMatch := make(chan fs.DirEntry, 1)
matchTasks <- matchTask{
src: src,
dstMatch: dstMatch,
}
dstMatches <- dstMatch
}
close(matchTasks)
}()
// Get the tasks from the queue and find a matching object.
var workerWg sync.WaitGroup
for range workers {
workerWg.Add(1)
go func() {
defer workerWg.Done()
for t := range matchTasks {
leaf := path.Base(t.src.Remote())
dst, err := m.Fdst.NewObject(m.Ctx, path.Join(job.dstRemote, leaf))
if err != nil {
dst = nil
}
t.dstMatch <- dst
}
}()
}
startedDst = true
// Close dstResults when all the workers have finished
go func() {
workerWg.Wait()
close(dstMatches)
}()
// Read the matches in order and send them to dstChan if found.
wg.Add(1)
go func() {
defer wg.Done()
defer ls.CleanUp()
g, gCtx := errgroup.WithContext(m.Ctx)
g.SetLimit(ci.Checkers)
for src := range originalSrcChan {
srcChan <- src
if srcObj, ok := src.(fs.Object); ok {
g.Go(func() error {
leaf := path.Base(srcObj.Remote())
dstObj, err := m.Fdst.NewObject(gCtx, path.Join(job.dstRemote, leaf))
if err == nil {
_ = ls.Add(fs.DirEntries{dstObj}) // ignore errors
}
return nil // ignore errors
})
}
}
dstListErr = g.Wait()
sendErr := ls.Send()
if dstListErr == nil {
dstListErr = sendErr
for dstMatch := range dstMatches {
dst := <-dstMatch
// Note that dst may be nil here
// We send these on so we don't deadlock the reader
dstChan <- dst
}
close(srcChan)
close(dstChan)


@@ -7,12 +7,15 @@ import (
"crypto/md5"
"encoding/base64"
"fmt"
"maps"
"os"
"path/filepath"
"slices"
"strings"
"sync"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
)
@@ -65,6 +68,10 @@ func NewFs(ctx context.Context, path string) (Fs, error) {
overriddenConfig[suffix] = extraConfig
overriddenConfigMu.Unlock()
}
ctx, err = addConfigToContext(ctx, configName, config)
if err != nil {
return nil, err
}
f, err := fsInfo.NewFs(ctx, configName, fsPath, config)
if f != nil && (err == nil || err == ErrorIsFile) {
addReverse(f, fsInfo)
@@ -72,6 +79,54 @@ func NewFs(ctx context.Context, path string) (Fs, error) {
return f, err
}
// Add "global" config or "override" to ctx and the global config if required.
//
// This looks through keys prefixed with "global." or "override." in
// config and sets ctx and optionally the global context if "global.".
func addConfigToContext(ctx context.Context, configName string, config configmap.Getter) (newCtx context.Context, err error) {
overrideConfig := make(configmap.Simple)
globalConfig := make(configmap.Simple)
for i := range ConfigOptionsInfo {
opt := &ConfigOptionsInfo[i]
globalName := "global." + opt.Name
value, isSet := config.Get(globalName)
if isSet {
// Set both override and global if global
overrideConfig[opt.Name] = value
globalConfig[opt.Name] = value
}
overrideName := "override." + opt.Name
value, isSet = config.Get(overrideName)
if isSet {
overrideConfig[opt.Name] = value
}
}
if len(overrideConfig) == 0 && len(globalConfig) == 0 {
return ctx, nil
}
newCtx, ci := AddConfig(ctx)
overrideKeys := slices.Collect(maps.Keys(overrideConfig))
slices.Sort(overrideKeys)
globalKeys := slices.Collect(maps.Keys(globalConfig))
slices.Sort(globalKeys)
// Set the config in the newCtx
err = configstruct.Set(overrideConfig, ci)
if err != nil {
return ctx, fmt.Errorf("failed to set override config variables %q: %w", overrideKeys, err)
}
Debugf(configName, "Set overridden config %q for backend startup", overrideKeys)
// Set the global context only
if len(globalConfig) != 0 {
globalCI := GetConfig(context.Background())
err = configstruct.Set(globalConfig, globalCI)
if err != nil {
return ctx, fmt.Errorf("failed to set global config variables %q: %w", globalKeys, err)
}
Debugf(configName, "Set global config %q at backend startup", globalKeys)
}
return newCtx, nil
}
// ConfigFs makes the config for calling NewFs with.
//
// It parses the path which is of the form remote:path

fs/newfs_internal_test.go Normal file

@@ -0,0 +1,55 @@
package fs
import (
"context"
"testing"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// When no override/global keys exist, ctx must be returned unchanged.
func TestAddConfigToContext_NoChanges(t *testing.T) {
ctx := context.Background()
newCtx, err := addConfigToContext(ctx, "unit-test", configmap.Simple{})
require.NoError(t, err)
assert.Equal(t, newCtx, ctx)
}
// A single override.key must create a new ctx, but leave the
// background ctx untouched.
func TestAddConfigToContext_OverrideOnly(t *testing.T) {
override := configmap.Simple{
"override.user_agent": "potato",
}
ctx := context.Background()
globalCI := GetConfig(ctx)
original := globalCI.UserAgent
newCtx, err := addConfigToContext(ctx, "unit-test", override)
require.NoError(t, err)
assert.NotEqual(t, newCtx, ctx)
assert.Equal(t, original, globalCI.UserAgent)
ci := GetConfig(newCtx)
assert.Equal(t, "potato", ci.UserAgent)
}
// A single global.key must create a new ctx and update the
// background/global config.
func TestAddConfigToContext_GlobalOnly(t *testing.T) {
global := configmap.Simple{
"global.user_agent": "potato2",
}
ctx := context.Background()
globalCI := GetConfig(ctx)
original := globalCI.UserAgent
defer func() {
globalCI.UserAgent = original
}()
newCtx, err := addConfigToContext(ctx, "unit-test", global)
require.NoError(t, err)
assert.NotEqual(t, newCtx, ctx)
assert.Equal(t, "potato2", globalCI.UserAgent)
ci := GetConfig(newCtx)
assert.Equal(t, "potato2", ci.UserAgent)
}


@@ -42,4 +42,21 @@ func TestNewFs(t *testing.T) {
assert.Equal(t, ":mockfs{S_NHG}:/tmp", fs.ConfigString(f3))
assert.Equal(t, ":mockfs,potato='true':/tmp", fs.ConfigStringFull(f3))
// Check that the overrides work
globalCI := fs.GetConfig(ctx)
original := globalCI.UserAgent
defer func() {
globalCI.UserAgent = original
}()
f4, err := fs.NewFs(ctx, ":mockfs,global.user_agent='julian':/tmp")
require.NoError(t, err)
assert.Equal(t, ":mockfs", f4.Name())
assert.Equal(t, "/tmp", f4.Root())
assert.Equal(t, ":mockfs:/tmp", fs.ConfigString(f4))
assert.Equal(t, ":mockfs:/tmp", fs.ConfigStringFull(f4))
assert.Equal(t, "julian", globalCI.UserAgent)
}


@@ -249,7 +249,7 @@ func (c *checkMarch) reportResults(ctx context.Context, err error) error {
fs.Logf(c.opt.Fsrc, "%d %s missing", c.srcFilesMissing.Load(), entity)
}
fs.Logf(c.opt.Fdst, "%d differences found", accounting.Stats(ctx).GetErrors())
fs.Logf(c.opt.Fdst, "%d differences found", c.differences.Load())
if errs := accounting.Stats(ctx).GetErrors(); errs > 0 {
fs.Logf(c.opt.Fdst, "%d errors while checking", errs)
}


@@ -216,6 +216,35 @@ func TestCopyNoTraverse(t *testing.T) {
r.CheckRemoteItems(t, file1)
}
func TestCopyNoTraverseDeadlock(t *testing.T) {
r := fstest.NewRun(t)
if !r.Fremote.Features().IsLocal {
t.Skip("Only runs on local")
}
const nFiles = 200
t1 := fstest.Time("2001-02-03T04:05:06.499999999Z")
// Create lots of source files.
items := make([]fstest.Item, nFiles)
for i := range items {
name := fmt.Sprintf("file%d.txt", i)
items[i] = r.WriteFile(name, fmt.Sprintf("content%d", i), t1)
}
r.CheckLocalItems(t, items...)
// Set --no-traverse
ctx, ci := fs.AddConfig(context.Background())
ci.NoTraverse = true
// Initial copy to establish destination.
require.NoError(t, CopyDir(ctx, r.Fremote, r.Flocal, false))
r.CheckRemoteItems(t, items...)
// Second copy which shouldn't deadlock
require.NoError(t, CopyDir(ctx, r.Flocal, r.Fremote, false))
r.CheckRemoteItems(t, items...)
}
// Now with --check-first
func TestCopyCheckFirst(t *testing.T) {
ctx := context.Background()


@@ -1,4 +1,4 @@
package fs
// VersionTag of rclone
var VersionTag = "v1.70.2"
var VersionTag = "v1.70.3"

rclone.1 generated

@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 2.9.2.1
.\"
.TH "rclone" "1" "Jun 27, 2025" "User Manual" ""
.TH "rclone" "1" "Jul 09, 2025" "User Manual" ""
.hy
.SH NAME
.PP
@@ -252,10 +252,10 @@ Enterprise File Fabric
.IP \[bu] 2
Fastmail Files
.IP \[bu] 2
Files.com
.IP \[bu] 2
FileLu Cloud Storage
.IP \[bu] 2
Files.com
.IP \[bu] 2
FlashBlade
.IP \[bu] 2
FTP
@@ -20151,9 +20151,18 @@ would use
This option is only supported on Windows platforms.
.SS --use-json-log
.PP
This switches the log format to JSON for rclone.
The fields of JSON log are \f[C]level\f[R], \f[C]msg\f[R],
\f[C]source\f[R], \f[C]time\f[R].
This switches the log format to JSON.
The log messages are then streamed as individual JSON objects, with
fields: \f[C]level\f[R], \f[C]msg\f[R], \f[C]source\f[R], and
\f[C]time\f[R].
The resulting format is what is sometimes referred to as
newline-delimited
JSON (https://en.wikipedia.org/wiki/JSON_streaming#Newline-delimited_JSON)
(NDJSON), or JSON Lines (JSONL).
This is well suited for processing by traditional line-oriented tools
and shell pipelines, but a complete log file is not strictly valid JSON
and needs a parser that can handle it.
.PP
The JSON logs will be printed on a single line, but are shown expanded
here for clarity.
.IP
@@ -30216,7 +30225,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.70.2\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.70.3\[dq])
\f[R]
.fi
.SS Performance
@@ -78239,6 +78248,53 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: return an error based on option value
.SH Changelog
.SS v1.70.3 - 2025-07-09
.PP
See commits (https://github.com/rclone/rclone/compare/v1.70.2...v1.70.3)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
check: Fix difference report (was reporting error counts) (albertony)
.IP \[bu] 2
march: Fix deadlock when using \f[C]--no-traverse\f[R] (Nick Craig-Wood)
.IP \[bu] 2
doc fixes (albertony, Nick Craig-Wood)
.RE
.IP \[bu] 2
Azure Blob
.RS 2
.IP \[bu] 2
Fix server side copy error \[dq]requires exactly one scope\[dq] (Nick
Craig-Wood)
.RE
.IP \[bu] 2
B2
.RS 2
.IP \[bu] 2
Fix finding objects when using \f[C]--b2-version-at\f[R] (Davide
Bizzarri)
.RE
.IP \[bu] 2
Linkbox
.RS 2
.IP \[bu] 2
Fix upload error \[dq]user upload file not exist\[dq] (Nick Craig-Wood)
.RE
.IP \[bu] 2
Pikpak
.RS 2
.IP \[bu] 2
Improve error handling for missing links and unrecoverable 500s
(wiserain)
.RE
.IP \[bu] 2
WebDAV
.RS 2
.IP \[bu] 2
Fix setting modtime to that of local object instead of remote
(WeidiDeng)
.RE
.SS v1.70.2 - 2025-06-27
.PP
See commits (https://github.com/rclone/rclone/compare/v1.70.1...v1.70.2)