Before this fix it was possible for the values returned by an about call in
various backends to exceed an int64 and wrap.
This patch clips them to the maximum int64 value instead.
Before this change rclone about was failing with
cannot unmarshal number 1.0e+18 into Go struct field User.space_amount of type int64
This is because Box increased Enterprise accounts' user.space_amount from 30PB to
1e+18 (888.178PB) and now returns it as a floating point number, not an integer.
This fix reads it as a float64 and clips it to the maximum value of an
int64 if necessary.
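A minimal sketch of the clipping, assuming the quota arrives as a float64 (the helper name is illustrative, not the backend's actual code):

```go
package main

import (
	"fmt"
	"math"
)

// clipToInt64 converts a float64 quota value to an int64, clipping to the
// maximum int64 instead of overflowing and wrapping.
func clipToInt64(f float64) int64 {
	if f >= math.MaxInt64 {
		return math.MaxInt64
	}
	return int64(f)
}

func main() {
	fmt.Println(clipToInt64(1.0e+18)) // fits: 1000000000000000000
	fmt.Println(clipToInt64(1.0e+19)) // clipped to 9223372036854775807
}
```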
This change enhances the SMB backend in Rclone to automatically refresh
Kerberos credentials when the associated ccache file is updated.
Previously, credentials were only loaded once per path and cached
indefinitely, which caused issues when service tickets expired or the
cache was renewed on the server.
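A rough sketch of the refresh idea, assuming the credentials are cached together with the ccache file's modification time (all names here are illustrative, not the SMB backend's actual code):

```go
package main

import (
	"os"
	"sync"
	"time"
)

// cachedCreds holds credentials loaded from a Kerberos ccache file along
// with the file's modification time at load, so staleness can be detected.
type cachedCreds struct {
	mu      sync.Mutex
	path    string
	modTime time.Time
	creds   []byte // stand-in for the parsed credential cache
}

// get reloads the ccache whenever the file on disk is newer than the copy
// loaded last time, instead of caching it indefinitely per path.
func (c *cachedCreds) get() ([]byte, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	fi, err := os.Stat(c.path)
	if err != nil {
		return nil, err
	}
	if c.creds == nil || fi.ModTime().After(c.modTime) {
		data, err := os.ReadFile(c.path) // re-read the renewed ccache
		if err != nil {
			return nil, err
		}
		c.creds = data
		c.modTime = fi.ModTime()
	}
	return c.creds, nil
}

func main() {
	c := &cachedCreds{path: "/tmp/krb5cc_1000"} // illustrative ccache path
	_, _ = c.get()
}
```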
When uploading or moving data within an S3-compatible bucket, the
`SSECustomer*` headers should always be forwarded: on
`CreateMultipartUpload`, `UploadPart`, `UploadPartCopy` and
`CompleteMultipartUpload`. However, rclone currently doesn't forward those
headers to `CompleteMultipartUpload`.
This is a requirement if you want to enforce `SSE-C` at the bucket level
via a bucket policy. Cf: `This parameter is required only when the
object was created using a checksum algorithm or if your bucket policy
requires the use of SSE-C.` in
https://docs.aws.amazon.com/AmazonS3/latest/API/API_CompleteMultipartUpload.html
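A hedged sketch with aws-sdk-go-v2 showing the SSE-C fields also being set on CompleteMultipartUploadInput (bucket, key and key material are placeholders; this is not rclone's actual code):

```go
package main

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// completeWithSSEC forwards the same SSE-C headers used on
// CreateMultipartUpload and UploadPart when completing the upload, so a
// bucket policy that enforces SSE-C does not reject the final request.
func completeWithSSEC(ctx context.Context, client *s3.Client, bucket, key, uploadID string,
	parts []types.CompletedPart, alg, ssecKey, ssecKeyMD5 string) error {
	_, err := client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
		Bucket:          aws.String(bucket),
		Key:             aws.String(key),
		UploadId:        aws.String(uploadID),
		MultipartUpload: &types.CompletedMultipartUpload{Parts: parts},
		// SSE-C fields, mirroring what was sent on the earlier calls.
		SSECustomerAlgorithm: aws.String(alg),
		SSECustomerKey:       aws.String(ssecKey),
		SSECustomerKeyMD5:    aws.String(ssecKeyMD5),
	})
	return err
}

func main() {} // the helper above would be called from the multipart upload path
```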
This commit addresses a potential memory leak in the S3 backend where
strings extracted from large API responses were keeping the entire
response in memory. The issue occurs because Go strings share underlying
memory with their source, preventing garbage collection of large XML
responses even when only small substrings are needed.
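The usual Go remedy is to copy the small substring so the large backing array can be collected; a minimal illustration (not the backend's exact code):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Imagine this is a multi-megabyte XML response from the S3 API.
	response := strings.Repeat("x", 10*1024*1024) + "<ETag>abc123</ETag>"

	// Slicing keeps the whole 10MB response alive, because the substring
	// shares the response's backing array.
	leaky := response[len(response)-13 : len(response)-7]

	// strings.Clone copies just the bytes we need, so the big response can
	// be garbage collected once it goes out of scope.
	safe := strings.Clone(leaky)

	fmt.Println(safe) // "abc123"
}
```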
Signed-off-by: liubingrun <liubr1@chinatelecom.cn>
The Copy method was downloading the file and uploading it again rather
than server-side copying it.
From the docs it looks like the upload process can read a URL, so a
server-side copy might be possible, but the removed code was incorrect.
All user visible Durations should be fs.Duration rather than time.Duration. The suffix is then optional and defaults to s. The suffixes d, w, M and y are supported, in addition to the ms, s, m and h suffixes supported by time.Duration. Absolute times can also be specified and will be interpreted as a duration relative to now.
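A short hedged illustration, assuming fs.ParseDuration from rclone's fs package (the same parser used for duration flags); the sample values are arbitrary:

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/fs"
)

func main() {
	// With fs.Duration the suffix is optional and defaults to seconds, and
	// d/w/M/y are accepted on top of time.Duration's ms/s/m/h.
	for _, s := range []string{"90", "90s", "1.5m", "100ms", "3d", "2w", "1M", "1y"} {
		d, err := fs.ParseDuration(s)
		if err != nil {
			fmt.Println(s, "->", err)
			continue
		}
		fmt.Println(s, "->", d)
	}
	// Absolute times are also accepted and come back as the duration
	// between that time and now.
	if d, err := fs.ParseDuration("2022-03-26T17:48:19Z"); err == nil {
		fmt.Println("absolute ->", d)
	}
}
```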
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error
ManagedIdentityCredential.GetToken() requires exactly one scope
when doing server side copies.
This was introduced in:
3a5ddfcd3c azureblob: implement multipart server side copy
This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.
Fixes #8662
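A rough sketch of generating a short-lived, read-only user delegation SAS for the source blob with the azblob SDK (account, container, blob and lifetime are placeholders, and the exact calls are an assumption about the SDK rather than rclone's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
)

func main() {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		panic(err)
	}
	client, err := service.NewClient("https://myaccount.blob.core.windows.net/", cred, nil)
	if err != nil {
		panic(err)
	}

	// Ask the service for a user delegation key valid for one hour.
	start := time.Now().UTC()
	expiry := start.Add(time.Hour)
	startStr := start.Format(sas.TimeFormat)
	expiryStr := expiry.Format(sas.TimeFormat)
	udc, err := client.GetUserDelegationCredential(context.Background(), service.KeyInfo{
		Start:  &startStr,
		Expiry: &expiryStr,
	}, nil)
	if err != nil {
		panic(err)
	}

	// Sign a read-only SAS for the source blob with that key.
	qp, err := sas.BlobSignatureValues{
		Protocol:      sas.ProtocolHTTPS,
		StartTime:     start,
		ExpiryTime:    expiry,
		Permissions:   (&sas.BlobPermissions{Read: true}).String(),
		ContainerName: "src-container",
		BlobName:      "src-blob",
	}.SignWithUserDelegation(udc)
	if err != nil {
		panic(err)
	}

	srcURL := fmt.Sprintf("https://myaccount.blob.core.windows.net/src-container/src-blob?%s", qp.Encode())
	fmt.Println(srcURL) // this URL can then be read as the copy source
}
```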
The seafile backend used to be able to cope with files called "." and
".." but at some point became unable to do so, causing integration
test failures.
This adds EncodeDot to the encoding, which encodes "." and ".." names.
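A tiny hedged illustration using rclone's lib/encoder (the flag combination is arbitrary): with EncodeDot set, "." and ".." are mapped to fullwidth equivalents and decoded back on the way out.

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/lib/encoder"
)

func main() {
	// EncodeDot makes the literal names "." and ".." representable by
	// replacing them with fullwidth dots, reversed again by Decode.
	enc := encoder.EncodeInvalidUtf8 | encoder.EncodeDot
	for _, name := range []string{".", "..", "not.a.problem"} {
		encoded := enc.Encode(name)
		fmt.Printf("%q -> %q -> %q\n", name, encoded, enc.Decode(encoded))
	}
}
```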
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.
This is problematic for several reasons:
1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets uploaded before the redirect is received
This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.
It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.
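A minimal sketch of the approach with net/http (URLs and body are placeholders, not Linkbox's actual endpoints): do a HEAD with redirects disabled, read the Location, then PUT the body to wherever the server actually wants it.

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// resolvePutURL does a bodyless HEAD against url and, if the server answers
// with a redirect, returns the Location it points at; otherwise url itself.
func resolvePutURL(client *http.Client, url string) (string, error) {
	req, err := http.NewRequest(http.MethodHead, url, nil)
	if err != nil {
		return "", err
	}
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 && resp.StatusCode < 400 {
		if loc := resp.Header.Get("Location"); loc != "" {
			return loc, nil
		}
	}
	return url, nil
}

func main() {
	// Stop the client from following redirects so we can see the Location.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	target, err := resolvePutURL(client, "https://example.com/upload") // placeholder URL
	if err != nil {
		panic(err)
	}
	// Now the (possibly large) body is only ever sent to the right place.
	req, err := http.NewRequest(http.MethodPut, target, strings.NewReader("file contents"))
	if err != nil {
		panic(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```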
See: https://forum.rclone.org/t/linkbox-upload-error/51795Fixes: #8606
This commit improves error handling in two specific scenarios:
* Missing Download Links: A 5-second delay is introduced when a download
link is missing, as low-level retries aren't enough. Empirically, it
takes about 30s-1m for the link to become available. This resolves
failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
ObjectUpdate, vfs: TestFileReadAtNonZeroLength
* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
record for gcid." These errors are non-recoverable, so retrying is futile
(see the sketch below).
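A hedged sketch of what the shouldRetry check could look like (the function shape is illustrative, not PikPak's actual code):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// shouldRetry reports whether a failed PikPak call is worth retrying.
// 500s from idx.shub.mypikpak.com saying "no record for gcid" never
// recover, so retrying them only wastes time.
func shouldRetry(resp *http.Response, err error) bool {
	if resp != nil && resp.StatusCode == http.StatusInternalServerError &&
		strings.Contains(resp.Request.URL.Host, "idx.shub.mypikpak.com") &&
		err != nil && strings.Contains(err.Error(), "no record for gcid") {
		return false
	}
	// Fall back to the usual retry rules for everything else (placeholder).
	return err != nil
}

func main() {
	fmt.Println(shouldRetry(nil, nil)) // false: nothing to retry
}
```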
This commit introduces a significant rewrite of PikPak's upload, specifically
targeting direct handling of file uploads rather than relying on the generic
S3 manager. The primary motivation is to address critical upload failures
reported in #8629.
* Added new `multipart.go` file for multipart uploads using AWS S3 SDK.
* Removed dependency on AWS S3 manager; replaced with custom handling.
* Updated PikPak test package with new multipart upload tests,
including configurable chunk size and upload cutoff.
* Added new configuration option `upload_cutoff` to control chunked uploads
(see the sketch after this list).
* Defined constraints for `chunk_size` and `upload_cutoff` (min/max values,
validation).
* Adjusted default `upload_concurrency` from 5 to 4.
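A rough sketch of how an upload_cutoff style option typically gates the two paths; the names and limits are illustrative, not PikPak's actual values:

```go
package main

import "fmt"

// Illustrative limits standing in for the chunk_size/upload_cutoff
// constraints mentioned above; the backend's real bounds may differ.
const (
	minChunkSize = 5 * 1024 * 1024
	maxChunkSize = 5 * 1024 * 1024 * 1024
)

// useMultipart decides whether a file goes through the single-request path
// or the chunked multipart path, based on an upload_cutoff style option.
func useMultipart(size, uploadCutoff int64) bool {
	return size >= uploadCutoff
}

// checkChunkSize mirrors the kind of min/max validation mentioned above.
func checkChunkSize(cs int64) error {
	if cs < minChunkSize || cs > maxChunkSize {
		return fmt.Errorf("chunk size %d must be between %d and %d", cs, minChunkSize, maxChunkSize)
	}
	return nil
}

func main() {
	cutoff := int64(200 * 1024 * 1024) // e.g. a 200M upload_cutoff
	fmt.Println(useMultipart(50*1024*1024, cutoff))   // false: single upload
	fmt.Println(useMultipart(1024*1024*1024, cutoff)) // true: multipart
	fmt.Println(checkChunkSize(1024))                 // error: too small
}
```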
In this commit the source of the modtime was accidentally changed to the wrong object:
0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support
This reverts that change and fixes the integration tests.
In
b1d774c2e3 combine: implement ListP interface
we introduced the ListP interface to the combine backend. This was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.
Due to a change in Go which was enabled by the `go 1.22` line in `go.mod`,
rclone has stopped skipping junction points ("My Documents" in
particular) if `--skip-links` is set on Windows.
This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.
This fix now skips os.ModeIrregular objects if --skip-links is set on
Windows only.
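A minimal reproduction of the check using only the standard library (the --skip-links plumbing is simplified):

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

// shouldSkip reports whether an entry should be skipped under --skip-links.
// Since the go 1.22 change, Windows junction points come back from os.Lstat
// with os.ModeIrregular rather than os.ModeSymlink, so both must be checked.
func shouldSkip(fi os.FileInfo, skipLinks bool) bool {
	if !skipLinks {
		return false
	}
	if fi.Mode()&os.ModeSymlink != 0 {
		return true
	}
	// Junction points are only reported as irregular, and only on Windows.
	return runtime.GOOS == "windows" && fi.Mode()&os.ModeIrregular != 0
}

func main() {
	fi, err := os.Lstat(`C:\Users\Public\Documents`) // placeholder path
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(shouldSkip(fi, true))
}
```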
Fixes #8561
See: https://github.com/golang/go/issues/73827
The API we use for OpenWriterAt seems to have been disabled at pcloud
PUT /file_open?flags=XXX&folderid=XXX&name=XXX HTTP/1.1
gives
{
    "result": 2003,
    "error": "Access denied. You do not have permissions to perform this operation."
}
So disable OpenWriterAt and hence multipart uploads for the moment.
Before this change, chunker could double-transform a file under certain
conditions when --name-transform was in use. This change fixes the issue by
ensuring that --name-transform is disabled during internal file moves.
Before this change, rclone would crash if no metadata was updated.
This could happen if --onedrive-metadata-permissions was set to read
but metadata to write was supplied.
Fixes #8586