This reduces the number of goroutines, which can get out of hand when
using large --transfers and --multi-thread-streams, from potentially
--multi-thread-streams * --transfers goroutines to --max-memory /
--multi-thread-chunk-size.
It serializes the memory allocator in each transfer, which should be
good for performance and reduce lock contention.
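A minimal sketch of the idea, assuming a semaphore with one token per
chunk of memory; chunkPool and its Get/Put methods are illustrative
names, not rclone's actual buffer pool:

    package sketch

    // chunkPool caps the number of chunk buffers in flight at
    // maxMemory / chunkSize, so at most that many writer goroutines can
    // hold a buffer at once.
    type chunkPool struct {
        chunkSize int64
        sem       chan struct{} // one token per in-memory chunk
    }

    func newChunkPool(maxMemory, chunkSize int64) *chunkPool {
        return &chunkPool{
            chunkSize: chunkSize,
            sem:       make(chan struct{}, maxMemory/chunkSize),
        }
    }

    // Get blocks until a chunk's worth of memory budget is free.
    func (p *chunkPool) Get() []byte {
        p.sem <- struct{}{}
        return make([]byte, p.chunkSize)
    }

    // Put returns the chunk's memory budget to the pool.
    func (p *chunkPool) Put(buf []byte) {
        <-p.sem
    }

Because Get blocks, no more than maxMemory/chunkSize chunk writers can
make progress at once, regardless of --transfers * --multi-thread-streams.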
These stats weren't being updated in the global stats read by rc
core/stats:
- transferQueue
- deletesSize
- serverSideCopies
- serverSideCopyBytes
- serverSideMoves
- serverSideMoveBytes
Before this change, server side copies would show at 0% until they were
done, then jump to 100%.
With support from the backend, server side copies can now be accounted
in real time. This only works for backends which have been modified and
which themselves get feedback about how the copy is going.
If the transfer fails, the bytes accounted will be reversed.
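As a hedged sketch of the pattern (Accounting, AddBytes and the report
callback are invented names, not rclone's accounting API): the backend
reports total bytes copied as it learns about them, and the accounted
bytes are reversed if the copy fails.

    package sketch

    import "sync/atomic"

    // Accounting stands in for the global transfer stats.
    type Accounting struct {
        bytes atomic.Int64
    }

    func (a *Accounting) AddBytes(n int64) { a.bytes.Add(n) }

    // copyWithProgress runs a server side copy that calls report() with
    // the total bytes copied so far; on failure the bytes accounted so
    // far are subtracted again.
    func copyWithProgress(acc *Accounting, run func(report func(done int64)) error) error {
        var accounted int64
        err := run(func(done int64) {
            acc.AddBytes(done - accounted)
            accounted = done
        })
        if err != nil {
            acc.AddBytes(-accounted) // reverse on failure
        }
        return err
    }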
Before this change we read sleepTime before acquiring the pacer token
and used that possibly stale value to schedule the token return. When
many goroutines enter while sleepTime is high (e.g., 10s), each
goroutine caches this 10s value. Even if successful calls rapidly
decay the pacer state to 0, the queued goroutines still schedule 10s
token returns, so the queue drains at 1 req/10s for the entire herd.
This can create multi-minute delays even after the pacer's sleep time
has dropped to 0.
After this change we refresh the sleep time after getting the token.
This problem was introduced by the desire to skip reading the pacer
token entirely when sleepTime is 0 in high performance backends (eg
s3, azure blob).
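A much-simplified sketch of the change (this pacer struct is
illustrative, not lib/pacer's real implementation): the sleep time is
re-read after the token has been acquired, and the token is skipped
entirely when the sleep time is 0.

    package sketch

    import (
        "sync"
        "time"
    )

    type pacer struct {
        mu        sync.Mutex
        sleepTime time.Duration
        token     chan struct{} // capacity 1, starts full
    }

    func (p *pacer) currentSleep() time.Duration {
        p.mu.Lock()
        defer p.mu.Unlock()
        return p.sleepTime
    }

    // beginCall paces one call.
    func (p *pacer) beginCall() {
        if p.currentSleep() == 0 {
            return // short circuit: no pacing needed, don't touch the token
        }
        <-p.token
        sleep := p.currentSleep() // refresh after getting the token, not before
        time.AfterFunc(sleep, func() {
            p.token <- struct{}{} // return the token on schedule
        })
    }

With the refresh, a herd of goroutines that queued while sleepTime was
10s re-reads the (possibly decayed) value once each actually holds the
token, so the queue drains at the current rate rather than the stale one.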
It was possible, in the presence of --max-connections and recursive
calls to the pacer, to deadlock it, leaving all connections waiting on
either a max connection token or a pacer token.
This fixes the problem by making sure we return the pacer token on
schedule if we take it.
This also short circuits the pacer token if sleepTime is 0.
If the pacer was used recursively and --max-connections was in use
then it could deadlock if all the connections were in use at the time
of the recursive call (which is likely).
This affected the azureblob backend: when it receives an
InvalidBlockOrBlob error it attempts to clear the condition before
retrying, which in turn involves calling the pacer recursively.
This fixes the problem by skipping the --max-connections check if the
pacer is called recursively.
The recursion detection is done by stack inspection, which isn't ideal,
but the alternative would be to add ctx to all >1,000 pacer calls. A
benchmark shows that stack inspection takes about 55ns per stack level,
so it is relatively cheap.
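The detection could look roughly like this (a sketch using the standard
runtime package; the helper name and the 64-frame cap are illustrative):

    package sketch

    import (
        "runtime"
        "strings"
    )

    // isRecursive reports whether the function whose name ends in
    // fnSuffix already appears more than once in the current call stack.
    func isRecursive(fnSuffix string) bool {
        pcs := make([]uintptr, 64)
        n := runtime.Callers(1, pcs)
        frames := runtime.CallersFrames(pcs[:n])
        seen := 0
        for {
            frame, more := frames.Next()
            if strings.HasSuffix(frame.Function, fnSuffix) {
                seen++
                if seen > 1 {
                    return true
                }
            }
            if !more {
                return false
            }
        }
    }

At roughly 55ns per stack level, even a deep stack costs only a few
microseconds per check.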
As of v1.71, bisync is officially out of beta.
Some history:
- bisync was born in 2018 as https://github.com/cjnaz/rclonesync-V2
by @cjnaz, written in Python.
- In 2021, @ivandeex ported it to Go with @cjnaz's support.
https://github.com/rclone/rclone/pull/5164
- It was introduced as an "experimental" feature in v1.58.
6210e22ab5
- In 2023, bisync needed a new maintainer, and @nielash volunteered.
https://forum.rclone.org/t/bisync-bugs-and-feature-requests/37636
- Later in 2023, bisync received a major overhaul and was relabeled "beta"
(from "experimental"). https://github.com/rclone/rclone/pull/7410
- In 2024, integration tests were introduced for bisync (which previously had
only unit tests). https://github.com/rclone/rclone/pull/7693
- As of August 2025, bisync is stable and integration tests are passing on all
of the "flagship" backends.
Development doesn't stop here, of course. But bisync has come a long way since
its "experimental" days, and the "beta" tag is no longer needed.
This commit introduces a new validation step to ensure data integrity
during file uploads.
- The API's returned file name (new.File.Name) is now verified
against the requested file name (leaf) immediately after
the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error,
and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes
might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded
without client-side awareness (sketched below).
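A hedged sketch of the check described above; apiFile, leaf and the
helper name are illustrative, not the backend's real types:

    package sketch

    import "fmt"

    // apiFile stands in for the file object returned by the upload ticket.
    type apiFile struct {
        Name string
    }

    // checkReturnedName errors out if the API silently renamed the file,
    // e.g. by appending a numbered suffix. The caller's deferred cleanup
    // is then expected to delete the partially created file.
    func checkReturnedName(newFile *apiFile, leaf string) error {
        if newFile.Name != leaf {
            return fmt.Errorf("upload: server created %q but %q was requested", newFile.Name, leaf)
        }
        return nil
    }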
Before this change, server side copy of files with & in the name gave
the error:
Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
header has bad character
This fix switches from url.PathEscape, which doesn't escape &, to
url.QueryEscape, which does.
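For example, a standalone snippet using the standard library to show
the difference:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        name := "a&b c.txt"
        fmt.Println(url.PathEscape(name))  // "a&b%20c.txt" - the & is left alone
        fmt.Println(url.QueryEscape(name)) // "a%26b+c.txt" - the & is escaped
    }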
Fixes #8754
This reverts commit 64ed9b175f.
This fails the integration tests with
    s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
    s3_internal_test.go:439:
        Error Trace:    backend/s3/s3_internal_test.go:439
        Error:          Should be true
        Test:           TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
        Messages:       Need to set UseAlreadyExists quirk
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.
However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.
This change creates separate connections for each goroutine, which
allows true parallelism by giving each goroutine its own SMB connection.
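Roughly the shape of the change, with hypothetical pool and connection
interfaces standing in for the real SMB types:

    package sketch

    import "io"

    // conn and pool stand in for an SMB connection and the global
    // connection pool.
    type conn interface {
        OpenWriterAt(path string) (io.WriterAt, error)
    }

    type pool interface {
        Get() (conn, error)
        Put(conn)
    }

    // writeChunk gives each chunk-writing goroutine its own connection,
    // so concurrent writers no longer contend on a single SMB connection.
    func writeChunk(p pool, path string, off int64, data []byte) error {
        c, err := p.Get()
        if err != nil {
            return err
        }
        defer p.Put(c)

        w, err := c.OpenWriterAt(path)
        if err != nil {
            return err
        }
        _, err = w.WriteAt(data, off)
        return err
    }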
Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
Before this change, bisync used some global variables, which could cause errors
if running multiple concurrent bisync runs through the rc. (Running normally
from the command line was not affected.)
This change deglobalizes those variables so that multiple bisync runs can be
safely run at once, from the same rclone instance.
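In outline (field names are invented; bisync's real per-run state
differs), the formerly package-level variables move into a struct that
each run owns:

    package sketch

    // Before: shared and racy across concurrent rc calls.
    //     var resync bool
    //     var workDir string

    // After: one runState per bisync invocation.
    type runState struct {
        resync  bool
        workDir string
        // ...other formerly-global state, one copy per run
    }

    func newRunState(resync bool, workDir string) *runState {
        return &runState{resync: resync, workDir: workDir}
    }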
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.
When copying from a non S3 compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:
aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied
This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.
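A sketch of the cleanup; the helper name is invented, and the
assumption that the value may be a comma-separated list such as
"aws-chunked,video/mp4" is mine rather than confirmed by the source:

    package sketch

    import "strings"

    // stripAwsChunked removes "aws-chunked" from a Content-Type value,
    // leaving any real media type behind: "aws-chunked" becomes "" and
    // "aws-chunked,video/mp4" becomes "video/mp4".
    func stripAwsChunked(ct string) string {
        var kept []string
        for _, part := range strings.Split(ct, ",") {
            if p := strings.TrimSpace(part); p != "" && p != "aws-chunked" {
                kept = append(kept, p)
            }
        }
        return strings.Join(kept, ",")
    }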
Fixes #8724
Before this change we were reading input from stdin using the terminal
in its default line mode, which has a limit of 4095 characters.
The typical culprit was onedrive tokens (which are very long), giving
the error:
Couldn't decode response: invalid character 'e' looking for beginning of value
This change swaps over to using the github.com/peterh/liner readline
library, which does not have that limitation and also enables more
sensible cursor editing.
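For example, reading a token with liner (a minimal standalone program;
the prompt text is arbitrary):

    package main

    import (
        "fmt"
        "log"

        "github.com/peterh/liner"
    )

    func main() {
        l := liner.NewLiner()
        defer l.Close()

        // liner reads the line in raw mode, so it is not limited by the
        // terminal's 4095 character canonical-mode buffer, and it
        // supports cursor editing.
        token, err := l.Prompt("Enter token> ")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("read %d characters\n", len(token))
    }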
Fixes #8688 #8323 #5835
Before this change we used the current context to start the average
loop. This meant that if the context came from the rc, the average loop
would be cancelled at the end of the rc request, leading to the speed
not being measured.
This change uses the background context for the accounting loop so it
doesn't get cancelled when its parent context is cancelled.
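In outline (a sketch, not the accounting package's real code): the loop
runs on context.Background() plus its own cancel, instead of inheriting
the rc request's context.

    package sketch

    import (
        "context"
        "time"
    )

    // startAverageLoop starts the speed-averaging loop on the background
    // context so it survives the end of the rc request that triggered
    // it. The returned cancel func is how the owner stops it explicitly.
    func startAverageLoop(measure func()) context.CancelFunc {
        ctx, cancel := context.WithCancel(context.Background()) // not the request ctx
        go func() {
            ticker := time.NewTicker(time.Second)
            defer ticker.Stop()
            for {
                select {
                case <-ctx.Done():
                    return
                case <-ticker.C:
                    measure()
                }
            }
        }()
        return cancel
    }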