Before this change, String() quoted every part of the config map even when it
wasn't necessary.
The new Human() method drops the unnecessary quoting and adds a special case
for "true" values.
Before this change bisync adjusted the global MaxCompletedTransfers
variable, which caused races.
This adds a SetMaxCompletedTransfers method and uses it in bisync.
The MaxCompletedTransfers global becomes the default. It can still be
changed externally if rclone is used as a library, and the commit
history indicates that MaxCompletedTransfers was added for exactly
this purpose, so we try not to break it here.
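
A sketch of the shape of the change, with stand-in names (in rclone the
setter hangs off the stats object):

```go
package accounting

import "sync"

// MaxCompletedTransfers is the global default, still settable directly by
// library users.
var MaxCompletedTransfers = 100

// StatsInfo is a cut-down stand-in for rclone's stats object.
type StatsInfo struct {
	mu                    sync.Mutex
	maxCompletedTransfers int
}

// SetMaxCompletedTransfers overrides the default for this stats object only,
// so callers such as bisync can tune it without racing on the global.
func (s *StatsInfo) SetMaxCompletedTransfers(n int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.maxCompletedTransfers = n
}
```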
Before this change bisync was adjusting MaxCompletedTransfers in order
to clear the done transfers from the stats.
This wasn't working (because it was only clearing one transfer) and
was part of a race adjusting MaxCompletedTransfers.
This fixes the problem by introducing a new method RemoveDoneTransfers
to clear the done transfers explicitly and calling it in bisync.
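
A sketch of the new method with the same stand-in types as above (the field
names are guesses):

```go
package accounting

import "sync"

// Transfer is a stand-in for rclone's transfer record.
type Transfer struct{ completed bool }

// StatsInfo is a cut-down stand-in for rclone's stats object.
type StatsInfo struct {
	mu               sync.Mutex
	startedTransfers []*Transfer
}

// RemoveDoneTransfers drops every completed transfer from the stats in one
// go, rather than shrinking MaxCompletedTransfers to force them out.
func (s *StatsInfo) RemoveDoneTransfers() {
	s.mu.Lock()
	defer s.mu.Unlock()
	kept := s.startedTransfers[:0] // in-place filter
	for _, tr := range s.startedTransfers {
		if !tr.completed {
			kept = append(kept, tr)
		}
	}
	s.startedTransfers = kept
}
```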
Before this change CaptureOutput could trip the race detector when
used concurrently, in particular if goroutines that use the logging
outlast the return from `fun()`.
This fixes the problem with a mutex.
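
One way the mutex can be applied (a sketch, not rclone's exact code) is to
wrap the capture buffer in a locked writer, so a goroutine that logs after
`fun()` has returned cannot race the final read:

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"sync"
)

// lockedBuffer serialises writes from late goroutines against the final read.
type lockedBuffer struct {
	mu  sync.Mutex
	buf bytes.Buffer
}

func (b *lockedBuffer) Write(p []byte) (int, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	return b.buf.Write(p)
}

func (b *lockedBuffer) Bytes() []byte {
	b.mu.Lock()
	defer b.mu.Unlock()
	return append([]byte(nil), b.buf.Bytes()...) // snapshot copy
}

// CaptureOutput redirects the standard logger into the locked buffer while
// fun runs and returns a snapshot of what was written.
func CaptureOutput(fun func()) []byte {
	buf := new(lockedBuffer)
	old := log.Writer()
	log.SetOutput(buf)
	defer log.SetOutput(old)
	fun()
	return buf.Bytes()
}

func main() {
	out := CaptureOutput(func() { log.Print("hello") })
	fmt.Printf("%s", out)
}
```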
If the pacer was used recursively and --max-connections was in use
then it could deadlock if all the connections were in use at the time
of the recursive call (which was likely).
This affected the azureblob backend because when it receives an
InvalidBlockOrBlob error it attempts to clear the condition before
retrying. This in turn involves recursively calling the pacer.
This fixes the problem by skipping the --max-connections check if the
pacer is called recursively.
The recursion detection is done by stack inspection, which isn't ideal,
but the alternative would be to add ctx to all >1,000 pacer calls. The
benchmark shows that stack inspection takes about 55 ns per stack level,
so it is relatively cheap.
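
A hedged sketch of the stack-inspection technique; the entry-point name
passed in is hypothetical:

```go
package pacer

import (
	"runtime"
	"strings"
)

// calledRecursively reports whether the named pacer entry point already
// appears higher up the goroutine's stack. After the skip, the first frame
// is the immediate caller itself, so recursion means seeing it more than
// once.
func calledRecursively(entryPoint string) bool {
	pc := make([]uintptr, 64)
	n := runtime.Callers(2, pc) // skip runtime.Callers and this function
	frames := runtime.CallersFrames(pc[:n])
	seen := 0
	for {
		frame, more := frames.Next()
		if strings.HasSuffix(frame.Function, entryPoint) {
			seen++
		}
		if !more {
			break
		}
	}
	return seen > 1
}
```

The ~55 ns per stack level quoted above is the cost of walking these frames.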
Before this change, rclone would unnecessarily retry downloads when
the `Link.Expire` field was unreliable but the download URL contained
a valid expire query parameter. This primarily affects cases where
media links are unavailable or when `no_media_link` is enabled.
The `Link.Valid()` method now primarily checks the URL's expire query
parameter (as Unix timestamp) and falls back to the Expire field
only when URL parsing fails. This eliminates the `error no link`
retry loops while maintaining backward compatibility.
Signed-off-by: Youfu Zhang <zhangyoufu@gmail.com>
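
A sketch of the described check with stand-in types; the field names and
fallback behaviour are assumptions based on the description above:

```go
package backend // illustrative; Link here is a stand-in type

import (
	"net/url"
	"strconv"
	"time"
)

type Link struct {
	URL    string
	Expire time.Time
}

// Valid trusts the expire query parameter in the URL (a Unix timestamp) and
// only falls back to the Expire field when the URL can't be parsed.
func (l *Link) Valid() bool {
	if u, err := url.Parse(l.URL); err == nil {
		if s := u.Query().Get("expire"); s != "" {
			if sec, err := strconv.ParseInt(s, 10, 64); err == nil {
				return time.Now().Before(time.Unix(sec, 0))
			}
		}
	}
	return time.Now().Before(l.Expire) // fallback: unreliable on some links
}
```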
Before this change the minimum chunk size defaulted to 96M, which
allowed a maximum file size of just below 1TB to be uploaded, due to
the 10,000 part rule for b2.
Now the calculated chunk size is used, so the chunk size can be up to
5GB, giving a maximum file size of 50TB.
Fixes #8460
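
The arithmetic behind those limits, as a quick sketch (b2 allows at most
10,000 parts per file):

```go
package main

import "fmt"

func main() {
	const (
		parts int64 = 10_000 // b2's per-file part limit
		mb    int64 = 1000 * 1000
		gb          = 1000 * mb
		tb          = 1000 * gb
	)
	fmt.Printf("96M chunks: %d GB max\n", 96*mb*parts/gb) // 960 GB, just below 1TB
	fmt.Printf("5GB chunks: %d TB max\n", 5*gb*parts/tb)  // 50 TB
}
```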
Before this change, it was possible to have a deadlock when using
--fast-list for a sync if both the source and destination supported
ListR.
This fixes the problem by shortening the locking window.
This turned out to be a problem in the tests. The tests used to do
1. allocate
2. increment
3. free
4. decrement
But if one goroutine had just completed 2 and another had just
completed 3, this could cause the test to register too many
allocations.
This was fixed by doing the test in this order instead:
1. allocate
2. increment
3. decrement
4. free
All 4 operations are atomic, so with this ordering the counter can never
exceed the number of outstanding allocations.
Fixes #8813
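
A sketch of the corrected ordering with a hypothetical pool and counter; the
point is that the decrement now happens before the free:

```go
package main

import (
	"sync"
	"sync/atomic"
)

var inUse atomic.Int64 // what the test asserts on

func exercise(pool *sync.Pool) {
	buf := pool.Get() // 1. allocate
	inUse.Add(1)      // 2. increment
	_ = buf           // ... use the buffer ...
	inUse.Add(-1)     // 3. decrement
	pool.Put(buf)     // 4. free
}

func main() {
	pool := &sync.Pool{New: func() any { return make([]byte, 4096) }}
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			exercise(pool)
		}()
	}
	wg.Wait()
}
```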
Before this change, TestIntegration/FsName could fail with "slice bounds out of
range [:-1]" when run with -remotes local.
It also caused issues with
'^TestGitAnnexFstestBackendCases$/^(TransferStorePathWithInteriorWhitespace|TransferStoreRelative)$'.
This change fixes the issue by accepting either "" or "local" to indicate the
local remote.
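
A minimal sketch of the accepted spellings (the helper name is hypothetical):

```go
// isLocalRemote reports whether a remote name refers to the local
// filesystem; both spellings are accepted.
func isLocalRemote(name string) bool {
	return name == "" || name == "local"
}
```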
Before this change, TestMetadata could fail due to a difference between the
user's local time zone and UTC causing the string representation of the date to
be off by one day. This change fixes the issue by comparing both in the Local
time zone.
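
A sketch of the comparison fix with illustrative names, normalising both
timestamps to the same zone before formatting:

```go
package fstests_test // illustrative package and test names

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

func TestMetadataDate(t *testing.T) {
	want := time.Date(2024, 6, 1, 23, 30, 0, 0, time.UTC) // stored in UTC
	got := want.In(time.Local)                            // as returned by the backend
	// Formatting one side in UTC and the other in local time can shift the
	// date by a day; normalising both to Local avoids that.
	assert.Equal(t,
		want.Local().Format("2006-01-02"),
		got.Local().Format("2006-01-02"))
}
```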
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).
However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.
An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295
This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."
A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash
It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
Before this change, rclone could crash during modifyListing if a rename's
srcNewName is known but not found in the srcList
(srcNewName != "" && new == nil).
This scenario should not happen, but if it does, we should print an error
instead of crashing.
On #8458 there is a report of this possibly happening on v1.68.2. It is unknown
what the underlying issue was, and whether it still exists in the latest
version, but if it does, the user will now see an error and debug info instead
of a crash.
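
A sketch of the guard, with names following the description above and
stand-in types:

```go
package bisync

import "log"

// Entry is a stand-in for a listing entry.
type Entry struct{ Remote string }

// applyRename sketches the guard: if the rename's new name is known but the
// entry is missing from srcList, report an error with debug info and carry
// on instead of dereferencing nil. The parameter names mirror the commit
// text, including "new".
func applyRename(srcNewName string, new *Entry, srcList []*Entry) {
	if srcNewName != "" && new == nil {
		log.Printf("internal error: rename target %q not found in srcList (%d entries)",
			srcNewName, len(srcList))
		return
	}
	// ... proceed with modifying the listing as before ...
}
```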
In this commit:
c63f1865f3 operations: copy: generate stable partial suffix
We made the partial suffix for non-inplace copies stable. It was a
hash based on the file fingerprint.
However, given a directory of files which share the same fingerprint,
the partial suffixes collide. On some backends (eg the local backend)
the fingerprint is just the size and modification time, so files with
different contents can collide.
The effect of collisions was hash failures on copy when using
--transfers > 1. These copies invariably retried successfully, which
probably explains why this bug hasn't been reported.
This fixes the problem by adding the file name to the hash.
It also makes sure the hash is always represented as 8 hex characters
for consistency.
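
A sketch of the fixed suffix computation; the concrete hash (crc32 here) and
the suffix layout are illustrative:

```go
package operations

import (
	"fmt"
	"hash/crc32"
)

// partialSuffix hashes the fingerprint together with the file name, so
// identical fingerprints in one directory no longer collide, and always
// formats the result as exactly 8 hex characters.
func partialSuffix(fingerprint, remote string) string {
	sum := crc32.ChecksumIEEE([]byte(fingerprint + "|" + remote))
	return fmt.Sprintf(".%08x.partial", sum)
}
```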