Add gzip compression for directory listings and text assets served over HTTP.
This reduces the rclone repository file listing from 40 kB to 8 kB and reduces
the rclone MANUAL.txt from 2.7 MB to 700 kB.
This makes listings and assets served across the network load faster.
The compression level of 5 should be a good balance between size and speed.
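As a sketch of the approach, here is a generic net/http gzip middleware at level 5 (names are illustrative, not necessarily the code as merged):
```
import (
	"compress/gzip"
	"io"
	"net/http"
	"strings"
)

// gzipResponseWriter routes the body through the gzip writer
// instead of directly to the client.
type gzipResponseWriter struct {
	http.ResponseWriter
	gz io.Writer
}

func (w gzipResponseWriter) Write(b []byte) (int, error) {
	return w.gz.Write(b)
}

// gzipMiddleware compresses responses for clients that advertise
// gzip support, falling back to uncompressed output otherwise.
func gzipMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r)
			return
		}
		gz, err := gzip.NewWriterLevel(w, 5) // level 5: the size/speed balance above
		if err != nil {
			next.ServeHTTP(w, r)
			return
		}
		defer gz.Close()
		w.Header().Set("Content-Encoding", "gzip")
		w.Header().Del("Content-Length") // length changes after compression
		next.ServeHTTP(gzipResponseWriter{ResponseWriter: w, gz: gz}, r)
	})
}
```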
The WebDAV implementation already permits redirects on PROPFIND for
listing paths in the `listAll` method but does not permit this for
metadata in `readMetaDataForPath`. This results in a confusing
experience for endpoints that make heavy use of redirects:
```
rclone lsl endpoint:
```
functions correctly and lists `hello_world.txt` in its output, but
```
rclone lsl endpoint:hello_world.txt
```
fails with an HTTP 307.
The git history for this setting indicates it was done to avoid an
issue where redirects cause a verb change to GET in the Go HTTP
client; this does not appear to be a problem with HTTP 307, which
preserves the request method.
To fix this, a new `CheckRedirect` function is added to the `rest`
library which forces the client to keep the same verb across
redirects; this is enabled for the PROPFIND case.
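A minimal sketch of the idea, with an illustrative constructor name (the actual `rest` helper may differ):
```
import (
	"errors"
	"net/http"
)

// newRedirectPreservingClient returns a client that keeps the
// original HTTP verb (e.g. PROPFIND) across redirects.
func newRedirectPreservingClient() *http.Client {
	return &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			if len(via) >= 10 {
				return errors.New("stopped after 10 redirects")
			}
			// Go rewrites 301/302/303 redirects to GET; restore the
			// original verb. (Requests with bodies also need GetBody
			// set so the body can be replayed.)
			req.Method = via[0].Method
			return nil
		},
	}
}
```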
Fixes #5063 by documenting that S3 object keys containing
consecutive forward slashes (//) are not supported by rclone.
The issue occurs because rclone normalizes paths like "a//b" to "a/b",
causing "object not found" errors when trying to access the original
object. This documentation addition explicitly warns users about this
limitation and provides workarounds.
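For illustration, the normalization behaves like Go's path.Clean (a sketch; rclone's actual cleaning code may differ):
```
package main

import (
	"fmt"
	"path"
)

func main() {
	// Consecutive slashes are collapsed, so the original key with
	// "//" can no longer be addressed after normalization.
	fmt.Println(path.Clean("a//b")) // prints "a/b"
}
```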
Changes:
- Added new subsection "Important note about double slashes (//)"
under "Restricted filename characters" in S3 documentation
- Explains the normalization behavior and its consequences
- Provides clear examples and workarounds
AI Model/Tool Attribution:
- Implemented using opencode AI assistant
- Issue analysis and documentation update performed by AI tools
Resolves: #5063
When specifying --drime-workspace-id, a file larger than the threshold
at which uploads are chunked would ignore the specified ID and be put
into the default workspace instead.
This completes the fix described in commit 2360e65 by passing the
workspace ID to the Drime API call that closes the chunk writer.
Add AU East 1, EU South 1, JP Central 1, UK East 1, and US Central 1
regions and endpoints for Fastly Object Storage.
Also sort the entries alphabetically.
Browsers make a request to /favicon.ico when visiting pages generated
by the HTTP server.
Previously, if remotes did not have a /favicon.ico then the server
responded with a 404, causing browsers to show a default icon.
This adds a tiny embedded PNG rclone favicon as a fallback, helping
users identify the rclone browser tab.
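A minimal sketch of the fallback (the embed path and handler name are illustrative):
```
import (
	_ "embed"
	"net/http"
)

//go:embed favicon.png
var faviconPNG []byte

// serveFavicon returns the embedded rclone icon when the remote has
// no /favicon.ico of its own.
func serveFavicon(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "image/png")
	w.Header().Set("Cache-Control", "public, max-age=86400")
	_, _ = w.Write(faviconPNG)
}
```
This would be wired up only as a fallback, so a remote that does contain its own /favicon.ico is still served as before.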
Add support for S3 Object Lock with the following new options:
- --s3-object-lock-mode: set retention mode (GOVERNANCE/COMPLIANCE/copy)
- --s3-object-lock-retain-until-date: set retention date (RFC3339/duration/copy)
- --s3-object-lock-legal-hold-status: set legal hold (ON/OFF/copy)
- --s3-bypass-governance-retention: bypass GOVERNANCE lock on delete
- --s3-bucket-object-lock-enabled: enable Object Lock on bucket creation
- --s3-object-lock-set-after-upload: apply lock via separate API calls
The special value "copy" preserves the source object's setting when used
with --metadata flag, enabling scenarios like cloning objects from
COMPLIANCE to GOVERNANCE mode while preserving the original retention date.
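For example, a hypothetical invocation (remote and object names assumed) that clones a COMPLIANCE-locked object into GOVERNANCE mode while keeping its retention date:
```
rclone copyto --metadata \
    --s3-object-lock-mode GOVERNANCE \
    --s3-object-lock-retain-until-date copy \
    s3:bucket/compliance-object s3:bucket/governance-copy
```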
Includes integration tests that create a temporary Object Lock bucket covering:
- Retention Mode and Date
- Legal Hold
- Apply settings after upload
- Override protections using bypass-governance flag
The tests are gracefully skipped on providers that do not support Object Lock.
Fixes #4683
Closes #7894 #7893 #8866
Use URLPathEscapeAll instead of URLPathEscape for path encoding.
URLPathEscape relies on Go's url.URL.String() which only minimally
escapes paths - reserved sub-delimiter characters like semicolons and
equals signs pass through unescaped. Per RFC 3986 section 3.3, these
characters must be percent-encoded when used as literal values in
path segments.
Some WebDAV servers (notably dCache/Jetty) interpret unescaped
semicolons as path parameter delimiters, which truncates filenames
at the semicolon position. URLPathEscapeAll encodes everything
except [A-Za-z0-9/], which is safe for all servers.
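Conceptually the encoding works like this sketch (a stand-in, not the actual URLPathEscapeAll implementation):
```
import (
	"fmt"
	"strings"
)

// escapeAll percent-encodes every byte except [A-Za-z0-9/].
func escapeAll(p string) string {
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/',
			'A' <= c && c <= 'Z',
			'a' <= c && c <= 'z',
			'0' <= c && c <= '9':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, "%%%02X", c)
		}
	}
	return b.String()
}
```
With this, `file;v=1` becomes `file%3Bv%3D1`, which Jetty-based servers no longer truncate at the semicolon.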
Fixes #9082
Tar files created from the current directory (e.g. tar -czf archive.tar.gz .)
produce entries prefixed with "./". When extracting, rclone's character
encoding replaces the "." with a full-width dot (U+FF0E), creating a
spurious directory instead of merging into the destination root.
Strip the leading "./" from NameInArchive before processing. Only "./"
is stripped specifically to avoid enabling path traversal attacks via
"../".
Fixes #9168
Before this change, when doing a sync with `--no-traverse` and
`--files-from`, we could call `NewObject` up to `--checkers` *
`--checkers` times simultaneously.
With `--checkers 128` this can exceed the 10,000 thread limit and
fail when run on a local to local transfer, because `NewObject` calls
`lstat`, a syscall which needs an OS thread of its own.
This patch uses a weighted semaphore to limit the number of
simultaneous calls to `NewObject` to `--checkers` instead, which
won't blow the 10,000 thread limit and is a far more sensible use of
OS resources.
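A sketch of the pattern using golang.org/x/sync/semaphore (function and variable names are illustrative):
```
import (
	"context"

	"golang.org/x/sync/semaphore"
)

// Allow at most --checkers NewObject calls at once; checkers holds
// the value of the --checkers flag.
var newObjectSem = semaphore.NewWeighted(int64(checkers))

func newObjectLimited(ctx context.Context, f fs.Fs, remote string) (fs.Object, error) {
	if err := newObjectSem.Acquire(ctx, 1); err != nil {
		return nil, err
	}
	defer newObjectSem.Release(1)
	// Each concurrent call may hold an OS thread in lstat; the
	// semaphore caps that at --checkers rather than checkers².
	return f.NewObject(ctx, remote)
}
```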
Fixes #9073
These stats weren't being updated in the global stats read by rc
core/stats:
- transferQueue
- deletesSize
- serverSideCopies
- serverSideCopyBytes
- serverSideMoves
- serverSideMoveBytes
Before this change we read sleepTime before acquiring the pacer token
and used that possibly stale value to schedule the token return. When
many goroutines enter while sleepTime is high (e.g. 10s), each
goroutine caches this 10s value. Even if successful calls rapidly
decay the pacer state to 0, the queued goroutines still schedule 10s
token returns, so the queue drains at 1 req/10s for the entire herd.
This can create multi-minute delays even after the pacer has dropped
to 0.
After this change we refresh the sleep time after getting the token.
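Roughly, with assumed field names rather than the actual pacer internals:
```
token := <-p.pacer // acquire the token first

p.mu.Lock()
sleepTime := p.state.sleepTime // re-read *after* acquiring, not before
p.mu.Unlock()

// Return the token once the now-current sleep time has elapsed.
go func() {
	time.Sleep(sleepTime)
	p.pacer <- token
}()
```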
This problem was introduced by the desire to skip reading the pacer
token entirely when sleepTime is 0 in high-performance backends
(e.g. s3, azure blob).
It was possible, in the presence of --max-connections and recursive
calls to the pacer, to deadlock it, leaving all connections waiting
on either a max-connections token or a pacer token.
This fixes the problem by making sure we return the pacer token on
schedule if we take it.
It also short-circuits the pacer token when sleepTime is 0.