Fixes an issue introduced in pull request #8978.
An oversight meant that unrestricted API keys
never called b2_list_buckets,
meaning the root remote could not be listed.
The call is now made when there are no allowed buckets, which indicates
an unrestricted API key.
Fixes #9007
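As an illustration of the logic (the names below are placeholders, not the backend's actual code), the fallback amounts to:

```go
package sketch

import "context"

// Bucket and the listBuckets callback stand in for the backend's real
// types and its b2_list_buckets wrapper.
type Bucket struct{ Name string }

// bucketsForKey is an illustrative sketch: if the key carries bucket
// restrictions, use them directly; otherwise (an unrestricted key) call
// b2_list_buckets so the root remote can be listed.
func bucketsForKey(ctx context.Context, allowed []Bucket,
	listBuckets func(context.Context) ([]Bucket, error)) ([]Bucket, error) {
	if len(allowed) > 0 {
		// restricted key: the allowed buckets are already known
		return allowed, nil
	}
	// unrestricted key: ask the API for the full bucket list
	return listBuckets(ctx)
}
```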
When specifying a custom endpoint with a subpath, there is a limitation
in the Google Cloud Storage integration whereby the subpath is ignored
during upload operations. For example, with the custom endpoint
"example.org/custom/endpoint", the /custom/endpoint part is not
reflected on upload.
As this is most likely an issue with the underlying API client, there is
no way to fix this in rclone. Extending the documentation at least makes
rclone users aware of this limitation.
Related forum thread: https://forum.rclone.org/t/googlecloudstorage-custom-endpoint-subpath-removed-for-upload/53059
This change adds first-class metadata support to the Azure Blob backend,
including headers, user metadata, tags, and modtime overrides, and wires
it through uploads and server-side copies.
There is a behavior change in that rclone will now set the "mtime"
custom metadata when doing server-side copies to Azure and the
`--metadata` flag is given.
- Map standard headers: cache-control, content-disposition, content-encoding,
content-language, content-type to corresponding x-ms-blob-* HTTP headers.
- Map user metadata: any non-reserved keys (excluding x-ms-*) are sent as
blob user metadata. Keys are normalized to lowercase for consistency.
- Support tags: parse `x-ms-tags` as a comma-separated list of key=value
pairs and apply them on uploads and copies.
- Support mtime override: accept `mtime` in metadata (RFC3339/RFC3339Nano)
to override the stored modtime persisted in user metadata.
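For illustration only, a rough sketch of the mapping described above (the helper and its split into three maps are assumptions, not the backend's actual code):

```go
package sketch

import (
	"strings"
	"time"
)

// mapMetadata is a hypothetical sketch of how incoming rclone metadata
// could be split into blob HTTP headers, user metadata and tags.
func mapMetadata(meta map[string]string) (headers, userMeta, tags map[string]string) {
	headers = map[string]string{}
	userMeta = map[string]string{}
	tags = map[string]string{}
	for k, v := range meta {
		key := strings.ToLower(k) // keys are normalized to lowercase
		switch key {
		case "cache-control", "content-disposition", "content-encoding",
			"content-language", "content-type":
			headers[key] = v // becomes the matching x-ms-blob-* header
		case "x-ms-tags":
			// comma-separated key=value pairs applied as blob tags
			for _, pair := range strings.Split(v, ",") {
				if kv := strings.SplitN(strings.TrimSpace(pair), "=", 2); len(kv) == 2 {
					tags[kv[0]] = kv[1]
				}
			}
		case "mtime":
			// RFC3339/RFC3339Nano value that overrides the modtime
			// persisted in user metadata
			if _, err := time.Parse(time.RFC3339Nano, v); err == nil {
				userMeta[key] = v
			}
		default:
			if !strings.HasPrefix(key, "x-ms-") {
				userMeta[key] = v // non-reserved keys become blob user metadata
			}
		}
	}
	return headers, userMeta, tags
}
```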
Backblaze has updated its b2_authorize_account API endpoint: newly
created application keys are now "multi-bucket" keys, capable of being
limited to multiple buckets. These keys can only be used with the v4
endpoint, not v1, which returns an HTTP 400.
This commit switches authorization to the v4 endpoint, allowing such
keys to work with any of the allowed buckets.
With multi-bucket keys, missing restricted buckets can be non-fatal, and
listing the root with multi-bucket API keys is now supported.
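As an illustration of what a v4 authorization call looks like at the HTTP level (a minimal sketch, assuming the standard api.backblazeb2.com base URL; the response is decoded into a generic map rather than modelled):

```go
package sketch

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// authorizeV4 is a rough sketch: it calls the v4 b2_authorize_account
// endpoint with HTTP basic auth (key ID and application key) and decodes
// the JSON response into a generic map. Real code would model the
// response struct and handle the allowed-buckets list explicitly.
func authorizeV4(keyID, applicationKey string) (map[string]any, error) {
	req, err := http.NewRequest("GET",
		"https://api.backblazeb2.com/b2api/v4/b2_authorize_account", nil)
	if err != nil {
		return nil, err
	}
	req.SetBasicAuth(keyID, applicationKey)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("authorize failed: %s", resp.Status)
	}
	var out map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out, nil
}
```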
#8947 implemented support for the If-Match and If-None-Match headers for S3 PUT
Object requests; however, this support did not extend to multi-part copy and
upload requests. These headers are implemented via inclusion in the
CompleteMultipartUpload request.
This also updates the auto-generated code, which was needed for
multipart copy.
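A minimal sketch, assuming a recent aws-sdk-go-v2 in which CompleteMultipartUploadInput exposes an IfNoneMatch field (this is not rclone's actual code):

```go
package sketch

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
	"github.com/aws/aws-sdk-go-v2/service/s3/types"
)

// completeIfAbsent completes a multipart upload only if the object does
// not already exist, by sending If-None-Match: * with the
// CompleteMultipartUpload request.
func completeIfAbsent(ctx context.Context, client *s3.Client,
	bucket, key, uploadID string, parts []types.CompletedPart) error {
	_, err := client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
		Bucket:   aws.String(bucket),
		Key:      aws.String(key),
		UploadId: aws.String(uploadID),
		MultipartUpload: &types.CompletedMultipartUpload{
			Parts: parts,
		},
		// assumed SDK field; maps to the If-None-Match header
		IfNoneMatch: aws.String("*"),
	})
	return err
}
```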
We accidentally added a non-`camelCase` parameter to the rc
(`config_password`). This fixes it (to `configPassword`) but accepts
the old name too, as it has been in a release.
Especially when using rclone via the rc, it is helpful to configure the
box backend using the contents of the config file instead of having to
upload the file to the server that is running rclone.
Before Go 1.23, x509.ParseCertificate accepted certificates with
negative serial numbers. Rejecting these certificates caused a small
number of users to see this error.
From Go 1.23, debug flags can be added to go.mod, so this change adds a
debug flag to ensure negative serial numbers are still accepted, since a
negative serial number is a spec violation, not a security issue.
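A minimal sketch of the go.mod entry, assuming the relevant GODEBUG setting is x509negativeserial:

```
// in go.mod: re-enable the pre-Go 1.23 behaviour for negative serial numbers
godebug x509negativeserial=1
```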
See: https://forum.rclone.org/t/ssl-validation-broken-between-v1-69-1-latest-version/
The If-Match and If-None-Match headers were being dropped rather than
implemented in the PutObject request to S3. These headers make requests
conditional, which allows AWS S3 bucket policies to prevent object
overwriting.
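A minimal sketch, assuming a recent aws-sdk-go-v2 in which PutObjectInput exposes an IfNoneMatch field (illustrative only, not the actual change):

```go
package sketch

import (
	"context"
	"io"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// putIfAbsent uploads an object only if the key does not already exist,
// by sending If-None-Match: * with the PutObject request.
func putIfAbsent(ctx context.Context, client *s3.Client,
	bucket, key string, body io.Reader) error {
	_, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
		Body:   body,
		// assumed SDK field; maps to the If-None-Match header
		IfNoneMatch: aws.String("*"),
	})
	return err
}
```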
The bisync tests have been failing as Dropbox is failing to move
just-created objects. This seems to be caused by an eventual consistency
problem, so this change attempts to fix it by retrying the specific
error.
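A sketch of the general retry pattern (the error text and helper name are illustrative, not the exact ones used by the backend):

```go
package sketch

import "strings"

// shouldRetryMove is an illustrative helper: if a move fails with the
// specific eventual-consistency error, report it as retryable so the
// call is attempted again.
func shouldRetryMove(err error) (bool, error) {
	// hypothetical error text standing in for the real Dropbox error
	if err != nil && strings.Contains(err.Error(), "from_lookup/not_found") {
		return true, err
	}
	return false, err
}
```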
Before this change, if any code called fs.Fatal(f) then it would stop
rclone as designed. However, this is not appropriate when using the RC
API - we want the error returned to the user.
This change turns the fs.Fatal(f) call into a panic, which is caught by
the RC API handler and returned to the user as a 500 error.
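A minimal sketch of the general pattern (not the actual RC handler): catch a panic raised while serving a request and turn it into a 500 response instead of crashing the process.

```go
package sketch

import (
	"fmt"
	"net/http"
)

// withRecover wraps an HTTP handler and converts any panic raised while
// serving the request into a 500 error returned to the caller.
func withRecover(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if p := recover(); p != nil {
				http.Error(w, fmt.Sprintf("internal error: %v", p),
					http.StatusInternalServerError)
			}
		}()
		next(w, r)
	}
}
```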
The uloz.to backend was failing to download files, instead returning
an HTML page with a "Slow download" message. This was caused by
recent changes in the uloz.to API.
This commit fixes the issue by making the following changes to the
download process:
1. The `hash` received from the download link API is now appended as a
query parameter to the download URL.
2. The download is now performed using the authenticated `rest` client
to ensure premium access is recognized.
3. The `DeviceID` is now generated dynamically for each download request
to avoid potential rate-limiting of a static ID.
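For illustration only (the parameter name, helper names, and ID format below are assumptions, not the backend's real identifiers), steps 1 and 3 could look like this:

```go
package sketch

import (
	"crypto/rand"
	"encoding/hex"
	"net/url"
)

// addHashParam appends the hash returned by the download-link API as a
// query parameter on the download URL (step 1 above).
func addHashParam(downloadURL, hash string) (string, error) {
	u, err := url.Parse(downloadURL)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set("hash", hash) // parameter name is illustrative
	u.RawQuery = q.Encode()
	return u.String(), nil
}

// newDeviceID generates a fresh random device ID for each request
// (step 3 above), avoiding rate limits on a single static ID.
func newDeviceID() (string, error) {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return hex.EncodeToString(b), nil
}
```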