Before this change the minimum chunk size defaulted to 96M, which, due to
b2's 10,000 part limit, allowed a maximum file size of just below 1TB to be
uploaded.
Now the calculated chunk size is used, so the chunk size can grow to 5GB,
giving a maximum file size of 50TB.
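For illustration, a minimal sketch of the kind of calculation involved,
assuming the 10,000 part limit and the 5GB per-part maximum (names and exact
rounding are illustrative, not the backend's actual code):

    import "errors"

    const (
        maxParts     = 10_000
        maxChunkSize = int64(5) * 1024 * 1024 * 1024 // 5GB per part
    )

    // chunkSizeFor grows the configured chunk size until the file fits in
    // at most 10,000 parts, refusing files that would need parts over 5GB.
    func chunkSizeFor(size, configuredChunkSize int64) (int64, error) {
        chunkSize := configuredChunkSize
        if size > chunkSize*maxParts {
            chunkSize = (size + maxParts - 1) / maxParts // ceiling division
        }
        if chunkSize > maxChunkSize {
            return 0, errors.New("file too large: would need a chunk size above 5GB")
        }
        return chunkSize, nil
    }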
Fixes #8460
Before this change, Rmdir (and other commands that rely on Rmdir) would fail
with "Access is denied" on Windows, if the directory had
FILE_ATTRIBUTE_READONLY. This could happen if, for example, an empty folder had
a custom icon added via Windows Explorer's interface (Properties => Customize =>
Change Icon...).
However, Microsoft docs indicate that "This attribute is not honored on
directories."
https://learn.microsoft.com/en-us/windows/win32/fileio/file-attribute-constants#file_attribute_readonly
Accordingly, this created an odd situation where such directories were removable
(by their owner) via File Explorer and the rd command, but not via rclone.
An upstream issue has been open since 2018, but has not yet resulted in a fix.
https://github.com/golang/go/issues/26295
This change gets around the issue by doing os.Chmod on the dir and then retrying
os.Remove. If the dir is not empty, this will still fail with "The directory is
not empty."
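Roughly, the workaround looks like this (a simplified sketch, not the exact
rclone code):

    import "os"

    // removeDir retries os.Remove after clearing the read-only attribute,
    // which on Windows corresponds to the missing write permission bit.
    func removeDir(dir string) error {
        err := os.Remove(dir)
        if err == nil {
            return nil
        }
        if chmodErr := os.Chmod(dir, 0o777); chmodErr != nil {
            return err // return the original error
        }
        return os.Remove(dir) // still fails if the directory is not empty
    }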
A bisync user confirmed that it fixed their issue in
https://forum.rclone.org/t/bisync-leaving-empty-directories-on-unc-path-1-or-local-filesystem-path-2-on-directory-renames/52456/4?u=nielash
It is likely also a fix for #8019, although @ncw is correct that Purge would be
a more efficient solution in that particular scenario.
In this commit we broke server side copy for files with spaces
4c5764204d internetarchive: fix server side copy files with &
This fixes the problem by using rest.URLPathEscapeAll which escapes
everything possible.
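The earlier fix broke spaces because of how the standard library escapes
them; for example:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        fmt.Println(url.QueryEscape("my file.txt")) // "my+file.txt" - "+" is wrong in a URL path
        fmt.Println(url.PathEscape("my file.txt"))  // "my%20file.txt"
        // rest.URLPathEscapeAll (rclone's lib/rest) instead percent-encodes
        // every character it can, so both spaces and "&" come out escaped,
        // per the commit message above.
    }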
Fixes #8754
This commit introduces a new validation step to ensure data integrity
during file uploads.
- The API's returned file name (new.File.Name) is now verified
against the requested file name (leaf) immediately after
the initial upload ticket is created.
- If a mismatch is detected, the upload process is aborted with an error,
and the defer cleanup logic is triggered to delete any partially created file.
- This addresses an unexpected API behavior where numbered suffixes
might be appended to filenames even without conflicts.
- This change prevents corrupted or misnamed files from being uploaded
without client-side awareness.
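A minimal sketch of the check (new.File.Name and leaf are the names used
above; the surrounding upload code and error handling are assumed):

    import "fmt"

    // verifyUploadName aborts the upload if the API silently renamed the
    // file, e.g. by appending a numbered suffix; the deferred cleanup then
    // deletes the partially created file.
    func verifyUploadName(requested, returned string) error {
        if returned != requested {
            return fmt.Errorf("upload: remote created %q but %q was requested", returned, requested)
        }
        return nil
    }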
Before this change, server side copy of files with & gave the error:
Invalid Argument</Message><Resource>x-(amz|archive)-copy-source
header has bad character
This fix switches from url.PathEscape, which doesn't escape &, to
url.QueryEscape, which escapes everything.
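The difference between the two escapers is easy to see (standard library
behaviour):

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        fmt.Println(url.PathEscape("a&b"))  // "a&b" - "&" passes through and breaks the copy-source header
        fmt.Println(url.QueryEscape("a&b")) // "a%26b"
    }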
Fixes #8754
This reverts commit 64ed9b175f.
This fails the integration tests with
s3_internal_test.go:434: Creating a bucket we already have created returned code: No Error
s3_internal_test.go:439:
Error Trace: backend/s3/s3_internal_test.go:439
Error: Should be true
Test: TestIntegration/FsMkdir/FsPutFiles/Internal/Versions/Mkdir
Messages: Need to set UseAlreadyExists quirk
In the current design, OpenWriterAt provides the interface for random-access
writes, and openChunkWriterFromOpenWriterAt wraps this interface to enable
parallel chunk uploads using multiple goroutines. A global connection pool is
already in place to manage SMB connections across files.
However, currently only one connection is used per file, which makes multiple
goroutines compete for the connection during multithreaded writes.
This change creates separate connections for each goroutine, which allows true
parallelism by giving each goroutine its own SMB connection.
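The pattern is roughly the following (an illustrative sketch with a
hypothetical pool interface, not the backend's actual types):

    import (
        "io"

        "golang.org/x/sync/errgroup"
    )

    // connPool hands out one connection per caller; Get returns the
    // connection and a release function (hypothetical interface).
    type connPool interface {
        Get() (io.WriterAt, func(), error)
    }

    // writeChunks uploads chunks in parallel, each goroutine writing over
    // its own connection instead of contending for a shared one.
    func writeChunks(pool connPool, chunks [][]byte, chunkSize int64) error {
        g := new(errgroup.Group)
        for i, chunk := range chunks {
            i, chunk := i, chunk
            g.Go(func() error {
                w, release, err := pool.Get()
                if err != nil {
                    return err
                }
                defer release()
                _, err = w.WriteAt(chunk, int64(i)*chunkSize)
                return err
            })
        }
        return g.Wait()
    }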
Signed-off-by: sudipto baral <sudiptobaral.me@gmail.com>
`Content-Type: aws-chunked` is used on S3 PUT requests to signal SigV4
streaming uploads: the body is sent in AWS-formatted chunks, each
chunk framed and HMAC-signed.
When copying from a non-AWS S3-compatible object store (like Digital
Ocean) the objects can have `Content-Type: aws-chunked` (which you
won't see on AWS S3). Attempting to copy these objects to S3 with
`--metadata` produces this error:
aws-chunked encoding is not supported when x-amz-content-sha256 UNSIGNED-PAYLOAD is supplied
This patch makes sure `aws-chunked` is removed from the `Content-Type`
metadata both on the way in and the way out.
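The filtering amounts to dropping the `aws-chunked` token from the
Content-Type value (illustrative sketch, not the backend's exact code):

    import "strings"

    // stripAwsChunked removes the "aws-chunked" token from a Content-Type
    // value such as "aws-chunked" or a comma-separated list containing it.
    func stripAwsChunked(contentType string) string {
        var kept []string
        for _, part := range strings.Split(contentType, ",") {
            if p := strings.TrimSpace(part); p != "" && p != "aws-chunked" {
                kept = append(kept, p)
            }
        }
        return strings.Join(kept, ",")
    }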
Fixes #8724
Before this change multipart uploads using OpenChunkWriter would
account for twice the space used.
This fixes the problem by adjusting the accounting delay.
Before this change the azureblob backend could deadlock when using
--max-connections. This is because when it receives InvalidBlockOrBlob
error it attempts to clear the condition before retrying. This in turn
involved recursively calling the pacer. At this point the pacer can
easily have no connections left which causes a deadlock as all the
other pacer connections are waiting for the InvalidBlockOrBlob to be
resolved.
This fixes the problem by using a temporary pacer when resolving the
InvalidBlockOrBlob errors.
Before this change, setting an object's modtime with o.SetModTime() (without
updating the file's content) would inadvertently erase its md5 hash.
The documentation notes: "If this property isn't specified on the request, the
property is cleared for the file. Subsequent calls to Get File Properties won't
return this property, unless it's explicitly set on the file again."
https://learn.microsoft.com/en-us/rest/api/storageservices/set-file-properties#common-request-headers
This change fixes the issue by setting ContentMD5 (and ContentType), to the
extent we have it, during SetModTime.
Discovered on bisync integration tests such as TestBisyncRemoteRemote/resolve
Before this fix it was possible for an about call in various backends
to exceed an int64 and wrap.
This patch causes it to clip to the max int64 value instead.
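The clipping is essentially overflow-safe addition, along these lines
(sketch; not the exact helper used):

    import "math"

    // addClip adds two non-negative byte counts, clipping to MaxInt64
    // instead of wrapping on overflow.
    func addClip(a, b int64) int64 {
        if a > math.MaxInt64-b {
            return math.MaxInt64
        }
        return a + b
    }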
Before this change rclone about was failing with
cannot unmarshal number 1.0e+18 into Go struct field User.space_amount of type int64
This is because Box increased Enterprise accounts' user.space_amount from 30PB
to 1e+18 (888.178PB), returning it as a floating point number, not an integer.
This fix reads it as a float64 and clips it to the maximum value of an
int64 if necessary.
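A sketch of the conversion, assuming the value has been decoded into a
float64:

    import "math"

    // toInt64Clipped converts a float64 such as 1e+18 to an int64,
    // clipping values an int64 cannot hold to MaxInt64.
    func toInt64Clipped(f float64) int64 {
        if f >= math.MaxInt64 {
            return math.MaxInt64
        }
        return int64(f)
    }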
This change enhances the SMB backend in Rclone to automatically refresh
Kerberos credentials when the associated ccache file is updated.
Previously, credentials were only loaded once per path and cached
indefinitely, which caused issues when service tickets expired or the
cache was renewed on the server.
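The refresh boils down to watching the ccache file's modification time,
roughly like this (illustrative sketch; the real backend's field and function
names differ):

    import (
        "os"
        "sync"
        "time"
    )

    // ccacheWatcher reloads Kerberos credentials whenever the ccache file
    // on disk is newer than the copy we last loaded.
    type ccacheWatcher struct {
        mu      sync.Mutex
        path    string
        modTime time.Time
        reload  func(path string) error // hypothetical credential loader
    }

    func (w *ccacheWatcher) refreshIfChanged() error {
        w.mu.Lock()
        defer w.mu.Unlock()
        fi, err := os.Stat(w.path)
        if err != nil {
            return err
        }
        if fi.ModTime().Equal(w.modTime) {
            return nil // ccache unchanged, keep cached credentials
        }
        if err := w.reload(w.path); err != nil {
            return err
        }
        w.modTime = fi.ModTime()
        return nil
    }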
When uploading or moving data within an s3-compatible bucket, the
`SSECustomer*` headers should always be forwarded: on
`CreateMultipartUpload`, `UploadPart`, `UploadCopyPart` and
`CompleteMultipartUpload`. But currently rclone doesn't forward those
headers to `CompleteMultipartUpload`.
This is a requirement if you want to enforce `SSE-C` at the bucket level
via a bucket policy. Cf: `This parameter is required only when the
object was created using a checksum algorithm or if your bucket policy
requires the use of SSE-C.` in
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
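In terms of the AWS SDK for Go v2 this means also filling in the SSE-C fields
on the CompleteMultipartUpload input (sketch; field names are from recent
aws-sdk-go-v2 versions, and the values come from rclone's existing SSE-C
config):

    import (
        "github.com/aws/aws-sdk-go-v2/aws"
        "github.com/aws/aws-sdk-go-v2/service/s3"
        s3types "github.com/aws/aws-sdk-go-v2/service/s3/types"
    )

    // completeInput forwards the same SSE-C values already sent on
    // CreateMultipartUpload and UploadPart.
    func completeInput(bucket, key, uploadID, algo, keyB64, keyMD5 string,
        parts *s3types.CompletedMultipartUpload) *s3.CompleteMultipartUploadInput {
        return &s3.CompleteMultipartUploadInput{
            Bucket:               aws.String(bucket),
            Key:                  aws.String(key),
            UploadId:             aws.String(uploadID),
            MultipartUpload:      parts,
            SSECustomerAlgorithm: aws.String(algo),
            SSECustomerKey:       aws.String(keyB64),
            SSECustomerKeyMD5:    aws.String(keyMD5),
        }
    }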
This commit addresses a potential memory leak in the S3 backend where
strings extracted from large API responses were keeping the entire
response in memory. The issue occurs because Go strings share underlying
memory with their source, preventing garbage collection of large XML
responses even when only small substrings are needed.
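The usual fix in Go is to copy the small substring so it stops aliasing the
large buffer, e.g. with strings.Clone (illustrative):

    import "strings"

    // detach returns a copy of s that no longer shares its backing array
    // with the (potentially huge) response it was sliced from, so the
    // response can be garbage collected.
    func detach(s string) string {
        return strings.Clone(s)
    }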
Signed-off-by: liubingrun <liubr1@chinatelecom.cn>
The Copy method was downloading the file and uploading it again rather
than server side copying it.
It looks from the docs that the upload process can read a URL so this
might be possible, but the removed code is incorrect.
All user visible Durations should be fs.Duration rather than time.Duration.
The suffix is then optional and defaults to s. Additional suffixes d, w, M and
y are supported, in addition to ms, s, m and h, which are the only ones
supported by time.Duration. Absolute times can also be specified, and will be
interpreted as the duration relative to now.
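For example, all of the following are accepted wherever an fs.Duration is
expected (shown with --min-age; the flag choice and the exact absolute-time
layouts accepted are per the rclone docs):

    --min-age 90          # bare number, defaults to seconds
    --min-age 1.5h        # time.Duration style suffixes still work
    --min-age 2d          # two days
    --min-age 1w          # one week
    --min-age 1M          # one month
    --min-age 1y          # one year
    --min-age 2006-01-02  # absolute time, treated as the duration from then until now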
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error
ManagedIdentityCredential.GetToken() requires exactly one scope
when doing server side copies.
This was introduced in:
3a5ddfcd3c azureblob: implement multipart server side copy
This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.
Fixes#8662
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.
This adds EncodeDot to the encoding which encodes "." and ".." names.
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.
This is problematic for several reasons:
1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets uploaded before the redirect is received
This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.
It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.
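A rough sketch of the approach with net/http (the real backend goes through
rclone's rest client):

    import (
        "io"
        "net/http"
    )

    // resolvePutURL issues a bodyless HEAD first, so any 302 is followed
    // without uploading the body, and returns the post-redirect URL.
    func resolvePutURL(client *http.Client, uploadURL string) (string, error) {
        req, err := http.NewRequest(http.MethodHead, uploadURL, nil)
        if err != nil {
            return "", err
        }
        resp, err := client.Do(req) // the client follows redirects by default
        if err != nil {
            return "", err
        }
        resp.Body.Close()
        return resp.Request.URL.String(), nil // URL after any redirects
    }

    // putTo then uploads the body directly to the resolved URL.
    func putTo(client *http.Client, finalURL string, body io.Reader) (*http.Response, error) {
        req, err := http.NewRequest(http.MethodPut, finalURL, body)
        if err != nil {
            return nil, err
        }
        return client.Do(req)
    }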
See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
This commit improves error handling in two specific scenarios:
* Missing Download Links: A 5-second delay is introduced when a download
link is missing, as low-level retries aren't enough. Empirically, it
takes about 30s-1m for the link to become available. This resolves
failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
ObjectUpdate, vfs: TestFileReadAtNonZeroLength
* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
record for gcid." These errors are non-recoverable, so retrying is futile.
This commit introduces a significant rewrite of PikPak's upload, specifically
targeting direct handling of file uploads rather than relying on the generic
S3 manager. The primary motivation is to address critical upload failures
reported in #8629.
* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads.
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
validation).
* Adjusted default `upload_concurrency` from 5 to 4.
In this commit the source of the modtime was accidentally changed to the wrong object:
0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support
This reverts that change and fixes the integration tests.