mirror of https://github.com/rclone/rclone.git synced 2026-01-06 10:33:34 +00:00

Compare commits


26 Commits

Author SHA1 Message Date
Nick Craig-Wood
fdce6dd466 cmount: make work under OpenBSD - fixes #1727 2025-12-09 15:04:24 +00:00
Nick Craig-Wood
5ef9551b02 vfs: make mount tests run on OpenBSD 2025-12-09 14:59:31 +00:00
Jonas Tingeborn
233fef5c4d configfile: add piped config support - fixes #9012 2025-12-08 18:42:17 +00:00
Tingsong Xu
b9586c3e03 fs/log: fix PID not included in JSON log output
When using `--log-format pid,json`, the PID was not being added to the JSON log output. This fix adds PID support to JSON logging.
2025-12-08 18:41:58 +00:00
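A minimal standalone Go sketch (not rclone's logging code) of the behavior this fix restores: when PID logging is requested alongside JSON output, the process ID should appear as a field in every JSON record.

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Build a JSON logger and attach the process ID to every record,
	// mirroring what `--log-format pid,json` is expected to produce.
	logger := slog.New(slog.NewJSONHandler(os.Stderr, nil)).
		With(slog.Int("pid", os.Getpid()))
	logger.Info("starting transfer", "remote", "example:")
	// e.g. {"time":"...","level":"INFO","msg":"starting transfer","pid":12345,"remote":"example:"}
}
```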
Nick Craig-Wood
0dc0ab1330 build: adjust lint rules to exclude new errors from linter update 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
a6bbdb35a0 proxy: fix error handling in tests spotted by the linter 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
b33cb77b6c Add Johannes Rothe to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
d51322bb5f Add Leo to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
e718ab6091 Add Vladislav Tropnikov to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
0a9e6e130f Add Cliff Frey to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
3358b9049c Add vicerace to contributors 2025-12-08 14:45:06 +00:00
DianaNites
847734d421 b2: Fix listing root buckets with unrestricted API key
Follow-up fix to pull request #8978

An oversight meant that unrestricted API keys never called
b2_list_buckets, so the root remote could not be listed.

The call is now made when there are no allowed buckets, which
indicates an unrestricted API key.

Fixes #9007
2025-12-04 15:55:17 +00:00
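An illustrative Go sketch (a hypothetical helper, not rclone's b2 backend code) of the decision described above: a bucket-restricted key already knows its buckets from the authorize response, while an unrestricted key has no allowed buckets and must call b2_list_buckets to list the root.

```go
package sketch

// bucketsForRoot illustrates the fix: only fall back to the b2_list_buckets
// call when the key carries no bucket restriction, i.e. it is an
// unrestricted API key.
func bucketsForRoot(allowed []string, listBuckets func() ([]string, error)) ([]string, error) {
	if len(allowed) > 0 {
		// Restricted key: the buckets are already named in the auth response.
		return allowed, nil
	}
	// Unrestricted key: enumerate buckets via b2_list_buckets.
	return listBuckets()
}
```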
Johannes Rothe
f7b255d4ec googlecloudstorage: improve endpoint parameter docs
When specifying a custom endpoint with a subpath, the Google Cloud
Storage integration has a limitation: the subpath is ignored during
upload operations. For example, with the custom endpoint
"example.org/custom/endpoint", uploads do not use the /custom/endpoint
part.

As this is most likely an issue with the underlying API client, there is
no way to fix this in rclone. Extending the documentation at least makes
rclone users aware of this limitation.

Related forum thread: https://forum.rclone.org/t/googlecloudstorage-custom-endpoint-subpath-removed-for-upload/53059
2025-12-01 19:04:02 +00:00
Leo
24c752ed9e serve webdav: implement download-directory-as-zip
Signed-off-by: Leo <i@hardrain980.com>
2025-12-01 15:42:16 +00:00
Vladislav Tropnikov
a99d155fd4 s3: add the ability to specify an IAM role for cross-account interaction 2025-11-29 13:53:00 +00:00
Cliff Frey
f72b32b470 azureblob: add metadata and tags support across upload and copy paths
This change adds first-class metadata support to the Azure Blob backend,
including headers, user metadata, tags, and modtime overrides, and wires
it through uploads and server-side copies.

There is a behavior change: rclone will now set the "mtime" custom
metadata when doing server-side copies to Azure and the `--metadata`
flag is given.

- Map standard headers: cache-control, content-disposition, content-encoding,
  content-language, content-type to corresponding x-ms-blob-* HTTP headers.
- Map user metadata: any non-reserved keys (excluding x-ms-*) are sent as
  blob user metadata. Keys are normalized to lowercase for consistency.
- Support tags: parse `x-ms-tags` as a comma-separated list of key=value
  pairs and apply them on uploads and copies (see the parsing sketch after
  this entry).
- Support mtime override: accept `mtime` in metadata (RFC3339/RFC3339Nano)
  to override the stored modtime persisted in user metadata.
2025-11-27 16:58:07 +00:00
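A minimal standalone sketch of the `x-ms-tags` format described above: comma-separated key=value pairs with surrounding whitespace trimmed. `splitTags` is a hypothetical name used only for illustration; the backend's real parser, `parseXMsTags`, appears in the azureblob diff later in this comparison.

```go
package main

import (
	"fmt"
	"strings"
)

// splitTags parses comma-separated key=value pairs, trimming whitespace and
// rejecting pairs without an "=" or with an empty key.
func splitTags(s string) (map[string]string, error) {
	out := map[string]string{}
	for _, pair := range strings.Split(s, ",") {
		pair = strings.TrimSpace(pair)
		if pair == "" {
			continue
		}
		k, v, ok := strings.Cut(pair, "=")
		if !ok || strings.TrimSpace(k) == "" {
			return nil, fmt.Errorf("invalid tag %q", pair)
		}
		out[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return out, nil
}

func main() {
	tags, err := splitTags("env=dev, team=sync")
	fmt.Println(tags, err) // map[env:dev team:sync] <nil>
}
```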
vicerace
9be7f99bf8 refactor: use strings.Cut to simplify code
Signed-off-by: vicerace <vicerace@sohu.com>
2025-11-27 14:42:11 +00:00
Nick Craig-Wood
6858bf242e docs: note where a provider has an S3 compatible alternative 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
e8c6867e4c Add Shade as sponsor 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
50fbd6b049 Add Duncan Smart to contributors 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
0783cab952 Add Diana to contributors 2025-11-26 12:22:48 +00:00
Duncan Smart
886ac7af1d docs: Clarify OAuth scopes for readonly Google Drive access 2025-11-24 15:58:53 +00:00
Diana
3c40238f02 b2: support authentication with new bucket restricted application keys
Backblaze has updated its b2_authorize_account API endpoint: newly created
application keys are now "multi-bucket" keys, capable of being limited to
multiple buckets. These keys can only be used with the v4 endpoint, not v1,
which returns an HTTP 400.

This commit switches authorization to the v4 endpoint, allowing such keys to
work with any of the allowed buckets.

With multi-bucket keys, missing restricted buckets can be non-fatal.

Also supports listing the root with multi-bucket API keys.
2025-11-24 15:46:41 +00:00
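A self-contained Go sketch of the v4 response shape this commit relies on. The struct below is a trimmed-down mirror of the StorageAPI / AuthorizeAccountResponse types added in the b2 api diff at the end of this comparison; the JSON payload is invented for illustration and is not captured from the Backblaze API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// authorizeResponse mirrors, in trimmed form, the JSON tags of the new
// AuthorizeAccountResponse/StorageAPI types from the b2 api diff below.
type authorizeResponse struct {
	AccountID          string `json:"accountId"`
	AuthorizationToken string `json:"authorizationToken"`
	APIs               struct {
		Storage struct {
			APIURL  string `json:"apiUrl"`
			Allowed struct {
				Buckets []struct {
					ID   string `json:"id"`
					Name string `json:"name"`
				} `json:"buckets"`
			} `json:"allowed"`
		} `json:"storageApi"`
	} `json:"apiInfo"`
}

func main() {
	// Invented example payload: a key restricted to one bucket.
	data := []byte(`{
		"accountId": "123",
		"authorizationToken": "tok",
		"apiInfo": {"storageApi": {
			"apiUrl": "https://api.example.invalid",
			"allowed": {"buckets": [{"id": "b1", "name": "photos"}]}
		}}
	}`)
	var r authorizeResponse
	if err := json.Unmarshal(data, &r); err != nil {
		fmt.Println("decode:", err)
		return
	}
	// An empty Buckets slice here would indicate an unrestricted key.
	fmt.Println(r.APIs.Storage.Allowed.Buckets[0].Name) // photos
}
```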
Nick Craig-Wood
46ca0dd7fe docs: update sponsor logos 2025-11-24 14:58:33 +00:00
Nick Craig-Wood
2e968e7ce0 docs: fix lint error in changelog 2025-11-21 18:23:16 +00:00
Nick Craig-Wood
1886c552db Start v1.73.0-DEV development 2025-11-21 18:23:07 +00:00
40 changed files with 1251 additions and 646 deletions

MANUAL.html generated

@@ -233,7 +233,7 @@
<header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Dec 10, 2025</p>
<p class="date">Nov 21, 2025</p>
</header>
<h1 id="name">NAME</h1>
<p>rclone - manage files on cloud storage</p>
@@ -4531,9 +4531,9 @@ SquareBracket</code></pre>
<pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,command=echo&quot;
// Output: stories/The Quick Brown Fox!.txt</code></pre>
<pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{YYYYMMDD}&quot;
// Output: stories/The Quick Brown Fox!-20251210</code></pre>
// Output: stories/The Quick Brown Fox!-20251121</code></pre>
<pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{macfriendlytime}&quot;
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM</code></pre>
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM</code></pre>
<pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,regex=[\\.\\w]/ab&quot;
// Output: ababababababab/ababab ababababab ababababab ababab!abababab</code></pre>
<p>The regex command generally accepts Perl-style regular expressions,
@@ -22567,7 +22567,7 @@ split into groups.</p>
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.72.1&quot;)</code></pre>
--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.72.0&quot;)</code></pre>
<h2 id="performance">Performance</h2>
<p>Flags helpful for increasing performance.</p>
<pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -23024,7 +23024,7 @@ split into groups.</p>
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don&#39;t attempt to check the bucket exists or create it
@@ -25234,29 +25234,7 @@ investigation:</p>
<li><a
href="https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt"><code>TestBisyncRemoteRemote/normalization</code></a></li>
</ul></li>
<li><code>TestGoFile</code> (<code>gofile</code>)
<ul>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/all_changed</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/backupdir</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/basic</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/changes</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/check_access</code></a></li>
<li><a href="https://pub.rclone.org/integration-tests/current/">78
more</a></li>
</ul></li>
<li><code>TestPcloud</code> (<code>pcloud</code>)
<ul>
<li><a
href="https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt"><code>TestBisyncRemoteRemote/check_access</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt"><code>TestBisyncRemoteRemote/check_access_filters</code></a></li>
</ul></li>
<li>Updated: 2025-12-10-010012
<li>Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs ---></li>
</ul>
<p>The following backends either have not been tested recently or have
@@ -28396,30 +28374,15 @@ centers for low latency.</li>
<li>St. Petersburg</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"ru-3"
<li>"gis-1"
<ul>
<li>St. Petersburg</li>
<li>Provider: Selectel</li>
<li>Moscow</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"ru-7"
<ul>
<li>Moscow</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"gis-1"
<ul>
<li>Moscow</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"kz-1"
<ul>
<li>Kazakhstan</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"uz-2"
<ul>
<li>Uzbekistan</li>
<li>Provider: Selectel</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"uz-2"
<ul>
@@ -29727,37 +29690,17 @@ AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost
</ul></li>
<li>"s3.ru-1.storage.selcloud.ru"
<ul>
<li>St. Petersburg</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.ru-3.storage.selcloud.ru"
<ul>
<li>St. Petersburg</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.ru-7.storage.selcloud.ru"
<ul>
<li>Moscow</li>
<li>Saint Petersburg</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"s3.gis-1.storage.selcloud.ru"
<ul>
<li>Moscow</li>
<li>Provider: Selectel,Servercore</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"s3.kz-1.storage.selcloud.ru"
<li>"s3.ru-7.storage.selcloud.ru"
<ul>
<li>Kazakhstan</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.uz-2.storage.selcloud.ru"
<ul>
<li>Uzbekistan</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.ru-1.storage.selcloud.ru"
<ul>
<li>Saint Petersburg</li>
<li>Moscow</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"s3.uz-2.srvstorage.uz"
@@ -41553,9 +41496,6 @@ storage options, and sharing capabilities. With support for high storage
limits and seamless integration with rclone, FileLu makes managing files
in the cloud easy. Its cross-platform file backup services let you
upload and back up files from any internet-connected device.</p>
<p><strong>Note</strong> FileLu now has a fully featured S3 backend <a
href="/s3#filelu-s5">FileLu S5</a>, an industry standard S3 compatible
object store.</p>
<h2 id="configuration-16">Configuration</h2>
<p>Here is an example of how to make a remote called
<code>filelu</code>. First, run:</p>
@@ -43478,36 +43418,14 @@ decompressed.</p>
<li>Default: false</li>
</ul>
<h4 id="gcs-endpoint">--gcs-endpoint</h4>
<p>Custom endpoint for the storage API. Leave blank to use the provider
default.</p>
<p>When using a custom endpoint that includes a subpath (e.g.
example.org/custom/endpoint), the subpath will be ignored during upload
operations due to a limitation in the underlying Google API Go client
library. Download and listing operations will work correctly with the
full endpoint path. If you require subpath support for uploads, avoid
using subpaths in your custom endpoint configuration.</p>
<p>Endpoint for the service.</p>
<p>Leave blank normally.</p>
<p>Properties:</p>
<ul>
<li>Config: endpoint</li>
<li>Env Var: RCLONE_GCS_ENDPOINT</li>
<li>Type: string</li>
<li>Required: false</li>
<li>Examples:
<ul>
<li>"storage.example.org"
<ul>
<li>Specify a custom endpoint</li>
</ul></li>
<li>"storage.example.org:4443"
<ul>
<li>Specifying a custom endpoint with port</li>
</ul></li>
<li>"storage.example.org:4443/gcs/api"
<ul>
<li>Specifying a subpath, see the note, uploads won't use the custom
path!</li>
</ul></li>
</ul></li>
</ul>
<h4 id="gcs-encoding">--gcs-encoding</h4>
<p>The encoding for the backend.</p>
@@ -43752,7 +43670,7 @@ account. It is a ~21 character numerical string.</li>
<code>https://www.googleapis.com/auth/drive</code> to grant read/write
access to Google Drive specifically. You can also use
<code>https://www.googleapis.com/auth/drive.readonly</code> for read
only access with <code>--drive-scope=drive.readonly</code>.</li>
only access.</li>
<li>Click "Authorise"</li>
</ul>
<h5 id="configure-rclone-assuming-a-new-install">3. Configure rclone,
@@ -62675,32 +62593,6 @@ the output.</p>
<!-- autogenerated options stop -->
<!-- markdownlint-disable line-length -->
<h1 id="changelog-1">Changelog</h1>
<h2 id="v1.72.1---2025-12-10">v1.72.1 - 2025-12-10</h2>
<p><a
href="https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1">See
commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>build: update to go1.25.5 to fix <a
href="https://pkg.go.dev/vuln/GO-2025-4155">CVE-2025-61729</a></li>
<li>doc fixes (Duncan Smart, Nick Craig-Wood)</li>
<li>configfile: Fix piped config support (Jonas Tingeborn)</li>
<li>log
<ul>
<li>Fix PID not included in JSON log output (Tingsong Xu)</li>
<li>Fix backtrace not going to the --log-file (Nick Craig-Wood)</li>
</ul></li>
</ul></li>
<li>Google Cloud Storage
<ul>
<li>Improve endpoint parameter docs (Johannes Rothe)</li>
</ul></li>
<li>S3
<ul>
<li>Add missing regions for Selectel provider (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.72.0---2025-11-21">v1.72.0 - 2025-11-21</h2>
<p><a
href="https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0">See

MANUAL.md generated

@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Dec 10, 2025
% Nov 21, 2025
# NAME
@@ -5369,12 +5369,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
```
```console
@@ -24802,7 +24802,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
```
@@ -25319,7 +25319,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -27514,17 +27514,7 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- `TestGoFile` (`gofile`)
- [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [78 more](https://pub.rclone.org/integration-tests/current/)
- `TestPcloud` (`pcloud`)
- [`TestBisyncRemoteRemote/check_access`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- [`TestBisyncRemoteRemote/check_access_filters`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- Updated: 2025-12-10-010012
- Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
The following backends either have not been tested recently or have known issues
@@ -30353,21 +30343,12 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- "ru-3"
- St. Petersburg
- Provider: Selectel
- "ru-7"
- Moscow
- Provider: Selectel,Servercore
- "gis-1"
- Moscow
- Provider: Selectel,Servercore
- "kz-1"
- Kazakhstan
- Provider: Selectel
- "uz-2"
- Uzbekistan
- Provider: Selectel
- Provider: Servercore
- "ru-7"
- Moscow
- Provider: Servercore
- "uz-2"
- Tashkent, Uzbekistan
- Provider: Servercore
@@ -31159,25 +31140,13 @@ Properties:
- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Saint Petersburg
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- Provider: Selectel,Servercore
- "s3.kz-1.storage.selcloud.ru"
- Kazakhstan
- Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
- Uzbekistan
- Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
- Saint Petersburg
- Provider: Servercore
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan
@@ -44004,9 +43973,6 @@ managing files in the cloud easy. Its cross-platform file backup
services let you upload and back up files from any internet-connected
device.
**Note** FileLu now has a fully featured S3 backend [FileLu S5](/s3#filelu-s5),
an industry standard S3 compatible object store.
## Configuration
Here is an example of how to make a remote called `filelu`. First, run:
@@ -46105,14 +46071,9 @@ Properties:
#### --gcs-endpoint
Custom endpoint for the storage API. Leave blank to use the provider default.
Endpoint for the service.
When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
the subpath will be ignored during upload operations due to a limitation in the
underlying Google API Go client library.
Download and listing operations will work correctly with the full endpoint path.
If you require subpath support for uploads, avoid using subpaths in your custom
endpoint configuration.
Leave blank normally.
Properties:
@@ -46120,13 +46081,6 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
- "storage.example.org"
- Specify a custom endpoint
- "storage.example.org:4443"
- Specifying a custom endpoint with port
- "storage.example.org:4443/gcs/api"
- Specifying a subpath, see the note, uploads won't use the custom path!
#### --gcs-encoding
@@ -46425,7 +46379,7 @@ account key" button.
`https://www.googleapis.com/auth/drive`
to grant read/write access to Google Drive specifically.
You can also use `https://www.googleapis.com/auth/drive.readonly` for read
only access with `--drive-scope=drive.readonly`.
only access.
- Click "Authorise"
##### 3. Configure rclone, assuming a new install
@@ -66913,22 +66867,6 @@ Options:
# Changelog
## v1.72.1 - 2025-12-10
[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
- Bug Fixes
- build: update to go1.25.5 to fix [CVE-2025-61729](https://pkg.go.dev/vuln/GO-2025-4155)
- doc fixes (Duncan Smart, Nick Craig-Wood)
- configfile: Fix piped config support (Jonas Tingeborn)
- log
- Fix PID not included in JSON log output (Tingsong Xu)
- Fix backtrace not going to the --log-file (Nick Craig-Wood)
- Google Cloud Storage
- Improve endpoint parameter docs (Johannes Rothe)
- S3
- Add missing regions for Selectel provider (Nick Craig-Wood)
## v1.72.0 - 2025-11-21
[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)
@@ -66949,7 +66887,7 @@ Options:
- [rclone test speed](https://rclone.org/commands/rclone_test_speed/): Add command to test a specified remotes speed (dougal)
- New Features
- backends: many backends have had a paged listing (`ListP`) interface added
- this enables progress when listing large directories and reduced memory usage
- this enables progress when listing large directories and reduced memory usage
- build
- Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 (dependabot[bot])
- Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko)

MANUAL.txt generated

@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Dec 10, 2025
Nov 21, 2025
NAME
@@ -4588,10 +4588,10 @@ Examples:
// Output: stories/The Quick Brown Fox!.txt
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab
@@ -23110,7 +23110,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
Performance
@@ -23597,7 +23597,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -25734,17 +25734,7 @@ The following backends have known issues that need more investigation:
- TestDropbox (dropbox)
- TestBisyncRemoteRemote/normalization
- TestGoFile (gofile)
- TestBisyncRemoteLocal/all_changed
- TestBisyncRemoteLocal/backupdir
- TestBisyncRemoteLocal/basic
- TestBisyncRemoteLocal/changes
- TestBisyncRemoteLocal/check_access
- 78 more
- TestPcloud (pcloud)
- TestBisyncRemoteRemote/check_access
- TestBisyncRemoteRemote/check_access_filters
- Updated: 2025-12-10-010012
- Updated: 2025-11-21-010037
The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:
@@ -28527,21 +28517,12 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- "ru-3"
- St. Petersburg
- Provider: Selectel
- "ru-7"
- Moscow
- Provider: Selectel,Servercore
- "gis-1"
- Moscow
- Provider: Selectel,Servercore
- "kz-1"
- Kazakhstan
- Provider: Selectel
- "uz-2"
- Uzbekistan
- Provider: Selectel
- Provider: Servercore
- "ru-7"
- Moscow
- Provider: Servercore
- "uz-2"
- Tashkent, Uzbekistan
- Provider: Servercore
@@ -29334,25 +29315,13 @@ Properties:
- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Saint Petersburg
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- Provider: Selectel,Servercore
- "s3.kz-1.storage.selcloud.ru"
- Kazakhstan
- Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
- Uzbekistan
- Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
- Saint Petersburg
- Provider: Servercore
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan
@@ -41751,9 +41720,6 @@ integration with rclone, FileLu makes managing files in the cloud easy.
Its cross-platform file backup services let you upload and back up files
from any internet-connected device.
Note FileLu now has a fully featured S3 backend FileLu S5, an industry
standard S3 compatible object store.
Configuration
Here is an example of how to make a remote called filelu. First, run:
@@ -43730,15 +43696,9 @@ Properties:
--gcs-endpoint
Custom endpoint for the storage API. Leave blank to use the provider
default.
Endpoint for the service.
When using a custom endpoint that includes a subpath (e.g.
example.org/custom/endpoint), the subpath will be ignored during upload
operations due to a limitation in the underlying Google API Go client
library. Download and listing operations will work correctly with the
full endpoint path. If you require subpath support for uploads, avoid
using subpaths in your custom endpoint configuration.
Leave blank normally.
Properties:
@@ -43746,14 +43706,6 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
- "storage.example.org"
- Specify a custom endpoint
- "storage.example.org:4443"
- Specifying a custom endpoint with port
- "storage.example.org:4443/gcs/api"
- Specifying a subpath, see the note, uploads won't use the
custom path!
--gcs-encoding
@@ -44037,8 +43989,7 @@ key" button.
- In the next field, "OAuth Scopes", enter
https://www.googleapis.com/auth/drive to grant read/write access to
Google Drive specifically. You can also use
https://www.googleapis.com/auth/drive.readonly for read only access
with --drive-scope=drive.readonly.
https://www.googleapis.com/auth/drive.readonly for read only access.
- Click "Authorise"
3. Configure rclone, assuming a new install
@@ -64059,22 +64010,6 @@ Options:
Changelog
v1.72.1 - 2025-12-10
See commits
- Bug Fixes
- build: update to go1.25.5 to fix CVE-2025-61729
- doc fixes (Duncan Smart, Nick Craig-Wood)
- configfile: Fix piped config support (Jonas Tingeborn)
- log
- Fix PID not included in JSON log output (Tingsong Xu)
- Fix backtrace not going to the --log-file (Nick Craig-Wood)
- Google Cloud Storage
- Improve endpoint parameter docs (Johannes Rothe)
- S3
- Add missing regions for Selectel provider (Nick Craig-Wood)
v1.72.0 - 2025-11-21
See commits


@@ -1 +1 @@
v1.72.1
v1.73.0


@@ -86,12 +86,56 @@ var (
metadataMu sync.Mutex
)
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"cache-control": {
Help: "Cache-Control header",
Type: "string",
Example: "no-cache",
},
"content-disposition": {
Help: "Content-Disposition header",
Type: "string",
Example: "inline",
},
"content-encoding": {
Help: "Content-Encoding header",
Type: "string",
Example: "gzip",
},
"content-language": {
Help: "Content-Language header",
Type: "string",
Example: "en-US",
},
"content-type": {
Help: "Content-Type header",
Type: "string",
Example: "text/plain",
},
"tier": {
Help: "Tier of the object",
Type: "string",
Example: "Hot",
ReadOnly: true,
},
"mtime": {
Help: "Time of last modification, read from rclone metadata",
Type: "RFC 3339",
Example: "2006-01-02T15:04:05.999999999Z07:00",
},
}
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "azureblob",
Description: "Microsoft Azure Blob Storage",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `User metadata is stored as x-ms-meta- keys. Azure metadata keys are case insensitive and are always returned in lower case.`,
},
Options: []fs.Option{{
Name: "account",
Help: `Azure Storage Account Name.
@@ -810,6 +854,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.features = (&fs.Features{
ReadMimeType: true,
WriteMimeType: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
BucketBased: true,
BucketBasedRootOK: true,
SetTier: true,
@@ -1157,6 +1204,289 @@ func (o *Object) updateMetadataWithModTime(modTime time.Time) {
o.meta[modTimeKey] = modTime.Format(timeFormatOut)
}
// parseXMsTags parses the value of the x-ms-tags header into a map.
// It expects comma-separated key=value pairs. Whitespace around keys and
// values is trimmed. Empty pairs and empty keys are rejected.
func parseXMsTags(s string) (map[string]string, error) {
if strings.TrimSpace(s) == "" {
return map[string]string{}, nil
}
out := make(map[string]string)
parts := strings.Split(s, ",")
for _, p := range parts {
p = strings.TrimSpace(p)
if p == "" {
continue
}
kv := strings.SplitN(p, "=", 2)
if len(kv) != 2 {
return nil, fmt.Errorf("invalid tag %q", p)
}
k := strings.TrimSpace(kv[0])
v := strings.TrimSpace(kv[1])
if k == "" {
return nil, fmt.Errorf("invalid tag key in %q", p)
}
out[k] = v
}
return out, nil
}
// mapMetadataToAzure maps a generic metadata map to Azure HTTP headers,
// user metadata, tags and optional modTime override.
// Reserved x-ms-* keys (except x-ms-tags) are ignored for user metadata.
//
// Pass a logger to surface non-fatal parsing issues (e.g. bad mtime).
func mapMetadataToAzure(meta map[string]string, logf func(string, ...any)) (headers blob.HTTPHeaders, userMeta map[string]*string, tags map[string]string, modTime *time.Time, err error) {
if meta == nil {
return headers, nil, nil, nil, nil
}
tmp := make(map[string]string)
for k, v := range meta {
lowerKey := strings.ToLower(k)
switch lowerKey {
case "cache-control":
headers.BlobCacheControl = pString(v)
case "content-disposition":
headers.BlobContentDisposition = pString(v)
case "content-encoding":
headers.BlobContentEncoding = pString(v)
case "content-language":
headers.BlobContentLanguage = pString(v)
case "content-type":
headers.BlobContentType = pString(v)
case "x-ms-tags":
parsed, perr := parseXMsTags(v)
if perr != nil {
return headers, nil, nil, nil, perr
}
// allocate only if there are tags
if len(parsed) > 0 {
tags = parsed
}
case "mtime":
// Accept multiple layouts for tolerance
var parsed time.Time
var pErr error
for _, layout := range []string{time.RFC3339Nano, time.RFC3339, timeFormatOut} {
parsed, pErr = time.Parse(layout, v)
if pErr == nil {
modTime = &parsed
break
}
}
// Log and ignore if unparseable
if modTime == nil && logf != nil {
logf("metadata: couldn't parse mtime %q: %v", v, pErr)
}
case "tier":
// ignore - handled elsewhere
default:
// Filter out other reserved headers so they don't end up as user metadata
if strings.HasPrefix(lowerKey, "x-ms-") {
continue
}
tmp[lowerKey] = v
}
}
userMeta = toAzureMetaPtr(tmp)
return headers, userMeta, tags, modTime, nil
}
// toAzureMetaPtr converts a map[string]string to map[string]*string as used by Azure SDK
func toAzureMetaPtr(in map[string]string) map[string]*string {
if len(in) == 0 {
return nil
}
out := make(map[string]*string, len(in))
for k, v := range in {
vv := v
out[k] = &vv
}
return out
}
// assembleCopyParams prepares headers, metadata and tags for copy operations.
//
// It starts from the source properties, optionally overlays mapped metadata
// from rclone's metadata options, ensures mtime presence when mapping is
// enabled, and returns whether mapping was actually requested (hadMapping).
//
// If includeBaseMeta is true, user metadata starts from the source's metadata
// and mapped values are overlaid on top, matching multipart copy commit
// behavior. If false, only mapped user metadata is included (no source
// baseline), matching the previous singlepart StartCopyFromURL semantics.
func assembleCopyParams(ctx context.Context, f *Fs, src fs.Object, srcProps *blob.GetPropertiesResponse, includeBaseMeta bool) (headers blob.HTTPHeaders, meta map[string]*string, tags map[string]string, hadMapping bool, err error) {
// Start from source properties
headers = blob.HTTPHeaders{
BlobCacheControl: srcProps.CacheControl,
BlobContentDisposition: srcProps.ContentDisposition,
BlobContentEncoding: srcProps.ContentEncoding,
BlobContentLanguage: srcProps.ContentLanguage,
BlobContentMD5: srcProps.ContentMD5,
BlobContentType: srcProps.ContentType,
}
// Optionally deep copy user metadata pointers from source. Normalise keys to
// lower-case to avoid duplicate x-ms-meta headers when we later inject/overlay
// metadata (Azure treats keys case-insensitively but Go's http.Header will
// join duplicate keys into a comma separated list, which breaks shared-key
// signing).
if includeBaseMeta && len(srcProps.Metadata) > 0 {
meta = make(map[string]*string, len(srcProps.Metadata))
for k, v := range srcProps.Metadata {
if v != nil {
vv := *v
meta[strings.ToLower(k)] = &vv
}
}
}
// Only consider mapping if metadata pipeline is enabled
if fs.GetConfig(ctx).Metadata {
mapped, mapErr := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if mapErr != nil {
return headers, meta, nil, false, fmt.Errorf("failed to map metadata: %w", mapErr)
}
if mapped != nil {
// Map rclone metadata to Azure shapes
mappedHeaders, userMeta, mappedTags, mappedModTime, herr := mapMetadataToAzure(mapped, func(format string, args ...any) { fs.Debugf(f, format, args...) })
if herr != nil {
return headers, meta, nil, false, fmt.Errorf("metadata mapping: %w", herr)
}
hadMapping = true
// Overlay headers (only non-nil)
if mappedHeaders.BlobCacheControl != nil {
headers.BlobCacheControl = mappedHeaders.BlobCacheControl
}
if mappedHeaders.BlobContentDisposition != nil {
headers.BlobContentDisposition = mappedHeaders.BlobContentDisposition
}
if mappedHeaders.BlobContentEncoding != nil {
headers.BlobContentEncoding = mappedHeaders.BlobContentEncoding
}
if mappedHeaders.BlobContentLanguage != nil {
headers.BlobContentLanguage = mappedHeaders.BlobContentLanguage
}
if mappedHeaders.BlobContentType != nil {
headers.BlobContentType = mappedHeaders.BlobContentType
}
// Overlay user metadata
if len(userMeta) > 0 {
if meta == nil {
meta = make(map[string]*string, len(userMeta))
}
for k, v := range userMeta {
meta[k] = v
}
}
// Apply tags if any
if len(mappedTags) > 0 {
tags = mappedTags
}
// Ensure mtime present using mapped or source time
if _, ok := meta[modTimeKey]; !ok {
when := src.ModTime(ctx)
if mappedModTime != nil {
when = *mappedModTime
}
val := when.Format(time.RFC3339Nano)
if meta == nil {
meta = make(map[string]*string, 1)
}
meta[modTimeKey] = &val
}
// Ensure content-type fallback to source if not set by mapper
if headers.BlobContentType == nil {
headers.BlobContentType = srcProps.ContentType
}
} else {
// Mapping enabled but not provided: ensure mtime present based on source ModTime
if _, ok := meta[modTimeKey]; !ok {
when := src.ModTime(ctx)
val := when.Format(time.RFC3339Nano)
if meta == nil {
meta = make(map[string]*string, 1)
}
meta[modTimeKey] = &val
}
}
}
return headers, meta, tags, hadMapping, nil
}
// applyMappedMetadata applies mapped metadata and headers to the object state for uploads.
//
// It reads `--metadata`, `--metadata-set`, and `--metadata-mapper` outputs via fs.GetMetadataOptions
// and updates o.meta, o.tags and ui.httpHeaders accordingly.
func (o *Object) applyMappedMetadata(ctx context.Context, src fs.ObjectInfo, ui *uploadInfo, options []fs.OpenOption) (modTime time.Time, err error) {
// Start from the source modtime; may be overridden by metadata
modTime = src.ModTime(ctx)
// Fetch mapped metadata if --metadata is enabled
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return modTime, err
}
if meta == nil {
// No metadata processing requested
return modTime, nil
}
// Map metadata using common helper
headers, userMeta, tags, mappedModTime, err := mapMetadataToAzure(meta, func(format string, args ...any) { fs.Debugf(o, format, args...) })
if err != nil {
return modTime, err
}
// Merge headers into ui
if headers.BlobCacheControl != nil {
ui.httpHeaders.BlobCacheControl = headers.BlobCacheControl
}
if headers.BlobContentDisposition != nil {
ui.httpHeaders.BlobContentDisposition = headers.BlobContentDisposition
}
if headers.BlobContentEncoding != nil {
ui.httpHeaders.BlobContentEncoding = headers.BlobContentEncoding
}
if headers.BlobContentLanguage != nil {
ui.httpHeaders.BlobContentLanguage = headers.BlobContentLanguage
}
if headers.BlobContentType != nil {
ui.httpHeaders.BlobContentType = headers.BlobContentType
}
// Apply user metadata to o.meta with a single critical section
if len(userMeta) > 0 {
metadataMu.Lock()
if o.meta == nil {
o.meta = make(map[string]string, len(userMeta))
}
for k, v := range userMeta {
if v != nil {
o.meta[k] = *v
}
}
metadataMu.Unlock()
}
// Apply tags
if len(tags) > 0 {
if o.tags == nil {
o.tags = make(map[string]string, len(tags))
}
for k, v := range tags {
o.tags[k] = v
}
}
if mappedModTime != nil {
modTime = *mappedModTime
}
return modTime, nil
}
// Returns whether file is a directory marker or not
func isDirectoryMarker(size int64, metadata map[string]*string, remote string) bool {
// Directory markers are 0 length
@@ -1951,18 +2281,19 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
return nil, err
}
// Convert metadata from source object
// Prepare metadata/headers/tags for destination
// For multipart commit, include base metadata from source then overlay mapped
commitHeaders, commitMeta, commitTags, _, err := assembleCopyParams(ctx, f, src, srcProperties, true)
if err != nil {
return nil, fmt.Errorf("multipart copy: %w", err)
}
// Convert metadata from source or mapper
options := blockblob.CommitBlockListOptions{
Metadata: srcProperties.Metadata,
Tier: parseTier(f.opt.AccessTier),
HTTPHeaders: &blob.HTTPHeaders{
BlobCacheControl: srcProperties.CacheControl,
BlobContentDisposition: srcProperties.ContentDisposition,
BlobContentEncoding: srcProperties.ContentEncoding,
BlobContentLanguage: srcProperties.ContentLanguage,
BlobContentMD5: srcProperties.ContentMD5,
BlobContentType: srcProperties.ContentType,
},
Metadata: commitMeta,
Tags: commitTags,
Tier: parseTier(f.opt.AccessTier),
HTTPHeaders: &commitHeaders,
}
// Finalise the upload session
@@ -1993,10 +2324,36 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
return nil, fmt.Errorf("single part copy: source auth: %w", err)
}
// Start the copy
// Prepare mapped metadata/tags/headers if requested
options := blob.StartCopyFromURLOptions{
Tier: parseTier(f.opt.AccessTier),
}
var postHeaders *blob.HTTPHeaders
// Read source properties and assemble params; this also handles the case when mapping is disabled
srcProps, err := src.readMetaDataAlways(ctx)
if err != nil {
return nil, fmt.Errorf("single part copy: read source properties: %w", err)
}
// For singlepart copy, do not include base metadata from source in StartCopyFromURL
headers, meta, tags, hadMapping, aerr := assembleCopyParams(ctx, f, src, srcProps, false)
if aerr != nil {
return nil, fmt.Errorf("single part copy: %w", aerr)
}
// Apply tags and post-copy headers only when mapping requested changes
if len(tags) > 0 {
options.BlobTags = make(map[string]string, len(tags))
for k, v := range tags {
options.BlobTags[k] = v
}
}
if hadMapping {
// Only set metadata explicitly when mapping was requested; otherwise
// let the service copy source metadata (including mtime) automatically.
if len(meta) > 0 {
options.Metadata = meta
}
postHeaders = &headers
}
var startCopy blob.StartCopyFromURLResponse
err = f.pacer.Call(func() (bool, error) {
startCopy, err = dstBlobSVC.StartCopyFromURL(ctx, srcURL, &options)
@@ -2026,6 +2383,16 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
pollTime = min(2*pollTime, time.Second)
}
// If mapper requested header changes, set them post-copy
if postHeaders != nil {
blb := f.getBlobSVC(dstContainer, dstPath)
_, setErr := blb.SetHTTPHeaders(ctx, *postHeaders, nil)
if setErr != nil {
return nil, fmt.Errorf("single part copy: failed to set headers: %w", setErr)
}
}
// Metadata (when requested) is set via StartCopyFromURL options.Metadata
return f.NewObject(ctx, remote)
}
@@ -2157,6 +2524,35 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
return metadata
}
// Metadata returns metadata for an object
//
// It returns a combined view of system and user metadata.
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
// Ensure metadata is loaded
if err := o.readMetaData(ctx); err != nil {
return nil, err
}
m := fs.Metadata{}
// System metadata we expose
if !o.modTime.IsZero() {
m["mtime"] = o.modTime.Format(time.RFC3339Nano)
}
if o.accessTier != "" {
m["tier"] = string(o.accessTier)
}
// Merge user metadata (already lower-cased keys)
metadataMu.Lock()
for k, v := range o.meta {
m[k] = v
}
metadataMu.Unlock()
return m, nil
}
// decodeMetaDataFromPropertiesResponse sets the metadata from the data passed in
//
// Sets
@@ -2995,17 +3391,19 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
// containerPath = containerPath[:len(containerPath)-1]
// }
// Update Mod time
o.updateMetadataWithModTime(src.ModTime(ctx))
if err != nil {
return ui, err
}
// Create the HTTP headers for the upload
// Start with default content-type based on source
ui.httpHeaders = blob.HTTPHeaders{
BlobContentType: pString(fs.MimeType(ctx, src)),
}
// Apply mapped metadata/headers/tags if requested
modTime, err := o.applyMappedMetadata(ctx, src, &ui, options)
if err != nil {
return ui, err
}
// Ensure mtime is set in metadata based on possibly overridden modTime
o.updateMetadataWithModTime(modTime)
// Compute the Content-MD5 of the file. As we stream all uploads it
// will be set in PutBlockList API call using the 'x-ms-blob-content-md5' header
if !o.fs.opt.DisableCheckSum {


@@ -5,11 +5,16 @@ package azureblob
import (
"context"
"encoding/base64"
"fmt"
"net/http"
"strings"
"testing"
"time"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
@@ -148,4 +153,417 @@ func (f *Fs) testWriteUncommittedBlocks(t *testing.T) {
func (f *Fs) InternalTest(t *testing.T) {
t.Run("Features", f.testFeatures)
t.Run("WriteUncommittedBlocks", f.testWriteUncommittedBlocks)
t.Run("Metadata", f.testMetadataPaths)
}
// helper to read blob properties for an object
func getProps(ctx context.Context, t *testing.T, o fs.Object) *blob.GetPropertiesResponse {
ao := o.(*Object)
props, err := ao.readMetaDataAlways(ctx)
require.NoError(t, err)
return props
}
// helper to assert select headers and user metadata
func assertHeadersAndMetadata(t *testing.T, props *blob.GetPropertiesResponse, want map[string]string, wantUserMeta map[string]string) {
// Headers
get := func(p *string) string {
if p == nil {
return ""
}
return *p
}
if v, ok := want["content-type"]; ok {
assert.Equal(t, v, get(props.ContentType), "content-type")
}
if v, ok := want["cache-control"]; ok {
assert.Equal(t, v, get(props.CacheControl), "cache-control")
}
if v, ok := want["content-disposition"]; ok {
assert.Equal(t, v, get(props.ContentDisposition), "content-disposition")
}
if v, ok := want["content-encoding"]; ok {
assert.Equal(t, v, get(props.ContentEncoding), "content-encoding")
}
if v, ok := want["content-language"]; ok {
assert.Equal(t, v, get(props.ContentLanguage), "content-language")
}
// User metadata (case-insensitive keys from service)
norm := make(map[string]*string, len(props.Metadata))
for kk, vv := range props.Metadata {
norm[strings.ToLower(kk)] = vv
}
for k, v := range wantUserMeta {
pv, ok := norm[strings.ToLower(k)]
if assert.True(t, ok, fmt.Sprintf("missing user metadata key %q", k)) {
if pv == nil {
assert.Equal(t, v, "", k)
} else {
assert.Equal(t, v, *pv, k)
}
} else {
// Log available keys for diagnostics
keys := make([]string, 0, len(props.Metadata))
for kk := range props.Metadata {
keys = append(keys, kk)
}
t.Logf("available user metadata keys: %v", keys)
}
}
}
// helper to read blob tags for an object
func getTagsMap(ctx context.Context, t *testing.T, o fs.Object) map[string]string {
ao := o.(*Object)
blb := ao.getBlobSVC()
resp, err := blb.GetTags(ctx, nil)
require.NoError(t, err)
out := make(map[string]string)
for _, tag := range resp.BlobTagSet {
if tag.Key != nil {
k := *tag.Key
v := ""
if tag.Value != nil {
v = *tag.Value
}
out[k] = v
}
}
return out
}
// Test metadata across different write paths
func (f *Fs) testMetadataPaths(t *testing.T) {
ctx := context.Background()
if testing.Short() {
t.Skip("skipping in short mode")
}
// Common expected metadata and headers
baseMeta := fs.Metadata{
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
// Note: Don't set content-encoding here to avoid download decoding differences
// We will set a custom user metadata key
"potato": "royal",
// and modtime
"mtime": fstest.Time("2009-05-06T04:05:06.499999999Z").Format(time.RFC3339Nano),
}
// Singlepart upload
t.Run("PutSinglepart", func(t *testing.T) {
// size less than chunk size
contents := random.String(int(f.opt.ChunkSize / 2))
item := fstest.NewItem("meta-single.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
// override content-type via metadata mapping
meta := fs.Metadata{}
meta.Merge(baseMeta)
meta["content-type"] = "text/plain"
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", meta)
defer func() { _ = obj.Remove(ctx) }()
props := getProps(ctx, t, obj)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
}, map[string]string{
"potato": "royal",
})
_ = http.StatusOK // keep import for parity but don't inspect RawResponse
})
// Multipart upload
t.Run("PutMultipart", func(t *testing.T) {
// size greater than chunk size to force multipart
contents := random.String(int(f.opt.ChunkSize + 1024))
item := fstest.NewItem("meta-multipart.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{}
meta.Merge(baseMeta)
meta["content-type"] = "application/json"
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", meta)
defer func() { _ = obj.Remove(ctx) }()
props := getProps(ctx, t, obj)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/json",
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
}, map[string]string{
"potato": "royal",
})
// Tags: Singlepart upload
t.Run("PutSinglepartTags", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize / 2))
item := fstest.NewItem("tags-single.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{
"x-ms-tags": "env=dev,team=sync",
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/plain", meta)
defer func() { _ = obj.Remove(ctx) }()
tags := getTagsMap(ctx, t, obj)
assert.Equal(t, "dev", tags["env"])
assert.Equal(t, "sync", tags["team"])
})
// Tags: Multipart upload
t.Run("PutMultipartTags", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize + 2048))
item := fstest.NewItem("tags-multipart.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{
"x-ms-tags": "project=alpha,release=2025-08",
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "application/octet-stream", meta)
defer func() { _ = obj.Remove(ctx) }()
tags := getTagsMap(ctx, t, obj)
assert.Equal(t, "alpha", tags["project"])
assert.Equal(t, "2025-08", tags["release"])
})
})
// Singlepart copy with metadata-set mapping; omit content-type to exercise fallback
t.Run("CopySinglepart", func(t *testing.T) {
// create small source
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("meta-copy-single-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"cache-control": "private, max-age=60",
"content-disposition": "attachment; filename=foo.txt",
"content-language": "fr",
// no content-type: should fallback to source
"potato": "maris",
}
// do copy
dstName := "meta-copy-single-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
// content-type should fallback to source (text/plain)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
"cache-control": "private, max-age=60",
"content-disposition": "attachment; filename=foo.txt",
"content-language": "fr",
}, map[string]string{
"potato": "maris",
})
// mtime should be populated on copy when --metadata is used
// and should equal the source ModTime (RFC3339Nano)
// Read user metadata (case-insensitive)
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
// parse and compare times ignoring formatting differences
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// CopySinglepart with only --metadata (no MetadataSet) must inject mtime and preserve src content-type
t.Run("CopySinglepart_MetadataOnly", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("meta-copy-single-only-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
dstName := "meta-copy-single-only-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
}, map[string]string{})
// Assert mtime injected
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// Multipart copy with metadata-set mapping; omit content-type to exercise fallback
t.Run("CopyMultipart", func(t *testing.T) {
// create large source to force multipart
contents := random.String(int(f.opt.CopyCutoff + 1024))
srcItem := fstest.NewItem("meta-copy-multi-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"cache-control": "max-age=0, no-cache",
// omit content-type to trigger fallback
"content-language": "de",
"potato": "desiree",
}
dstName := "meta-copy-multi-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
// content-type should fallback to source (application/octet-stream)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/octet-stream",
"cache-control": "max-age=0, no-cache",
"content-language": "de",
}, map[string]string{
"potato": "desiree",
})
// mtime should be populated on copy when --metadata is used
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// CopyMultipart with only --metadata must inject mtime and preserve src content-type
t.Run("CopyMultipart_MetadataOnly", func(t *testing.T) {
contents := random.String(int(f.opt.CopyCutoff + 2048))
srcItem := fstest.NewItem("meta-copy-multi-only-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
dstName := "meta-copy-multi-only-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/octet-stream",
}, map[string]string{})
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// Tags: Singlepart copy
t.Run("CopySinglepartTags", func(t *testing.T) {
// create small source
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("tags-copy-single-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet including tags
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"x-ms-tags": "copy=single,mode=test",
}
dstName := "tags-copy-single-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
tags := getTagsMap(ctx2, t, dst)
assert.Equal(t, "single", tags["copy"])
assert.Equal(t, "test", tags["mode"])
})
// Tags: Multipart copy
t.Run("CopyMultipartTags", func(t *testing.T) {
// create large source to force multipart
contents := random.String(int(f.opt.CopyCutoff + 4096))
srcItem := fstest.NewItem("tags-copy-multi-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"x-ms-tags": "copy=multi,mode=test",
}
dstName := "tags-copy-multi-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
tags := getTagsMap(ctx2, t, dst)
assert.Equal(t, "multi", tags["copy"])
assert.Equal(t, "test", tags["mode"])
})
// Negative: invalid x-ms-tags must error
t.Run("InvalidXMsTags", func(t *testing.T) {
contents := random.String(32)
item := fstest.NewItem("tags-invalid.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
// construct ObjectInfo with invalid x-ms-tags
buf := strings.NewReader(contents)
// Build obj info with metadata
meta := fs.Metadata{
"x-ms-tags": "badpair-without-equals",
}
// force metadata on
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
obji := object.NewStaticObjectInfo(item.Path, item.ModTime, int64(len(contents)), true, nil, nil)
obji = obji.WithMetadata(meta).WithMimeType("text/plain")
_, err := f.Put(ctx2, buf, obji)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid tag")
})
}


@@ -133,23 +133,32 @@ type File struct {
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
// StorageAPI is as returned from the b2_authorize_account call
type StorageAPI struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
Buckets []struct { // When present, access is restricted to one or more buckets.
ID string `json:"id"` // ID of bucket
Name string `json:"name"` // When present, name of bucket - may be empty
} `json:"buckets"`
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has for every bucket.
NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIs struct { // Supported APIs for this account / key. These are API-dependent JSON objects.
Storage StorageAPI `json:"storageApi"`
} `json:"apiInfo"`
}
// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
AccountID string `json:"accountId"` // The identifier for the account.


@@ -607,17 +607,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to authorize account: %w", err)
}
// If this is a key limited to a single bucket, it must exist already
if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := f.opt.Enc.ToStandardName(f.info.Allowed.BucketName)
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
// If this is a key limited to one or more buckets, one of them must exist
// and be ours.
if f.rootBucket != "" && len(f.info.APIs.Storage.Allowed.Buckets) != 0 {
buckets := f.info.APIs.Storage.Allowed.Buckets
var rootFound = false
var rootID string
for _, b := range buckets {
allowedBucket := f.opt.Enc.ToStandardName(b.Name)
if allowedBucket == "" {
fs.Debugf(f, "bucket %q that application key is restricted to no longer exists", b.ID)
continue
}
if allowedBucket == f.rootBucket {
rootFound = true
rootID = b.ID
}
}
if allowedBucket != f.rootBucket {
return nil, fmt.Errorf("you must use bucket %q with this application key", allowedBucket)
if !rootFound {
return nil, fmt.Errorf("you must use bucket(s) %q with this application key", buckets)
}
f.cache.MarkOK(f.rootBucket)
f.setBucketID(f.rootBucket, f.info.Allowed.BucketID)
f.setBucketID(f.rootBucket, rootID)
}
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the (bucket,directory) is actually an existing file
@@ -643,7 +655,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
defer f.authMu.Unlock()
opts := rest.Opts{
Method: "GET",
Path: "/b2api/v1/b2_authorize_account",
Path: "/b2api/v4/b2_authorize_account",
RootURL: f.opt.Endpoint,
UserName: f.opt.Account,
Password: f.opt.Key,
@@ -656,13 +668,13 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
if err != nil {
return fmt.Errorf("failed to authenticate: %w", err)
}
f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
f.srv.SetRoot(f.info.APIs.Storage.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
return nil
}
// hasPermission returns if the current AuthorizationToken has the selected permission
func (f *Fs) hasPermission(permission string) bool {
return slices.Contains(f.info.Allowed.Capabilities, permission)
return slices.Contains(f.info.APIs.Storage.Allowed.Capabilities, permission)
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
@@ -1067,44 +1079,83 @@ type listBucketFn func(*api.Bucket) error
// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(ctx context.Context, bucketName string, fn listBucketFn) error {
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: f.info.Allowed.BucketID,
}
if bucketName != "" && account.BucketID == "" {
account.BucketName = f.opt.Enc.FromStandardName(bucketName)
responses := make([]api.ListBucketsResponse, len(f.info.APIs.Storage.Allowed.Buckets))[:0]
call := func(id string) error {
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: id,
}
if bucketName != "" && account.BucketID == "" {
account.BucketName = f.opt.Enc.FromStandardName(bucketName)
}
var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
Path: "/b2_list_buckets",
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
}
responses = append(responses, response)
return nil
}
var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
Path: "/b2_list_buckets",
for i := range f.info.APIs.Storage.Allowed.Buckets {
b := &f.info.APIs.Storage.Allowed.Buckets[i]
// Empty names indicate a bucket that no longer exists; this is non-fatal
// for multi-bucket API keys.
if b.Name == "" {
continue
}
// When requesting a specific bucket skip over non-matching names
if bucketName != "" && b.Name != bucketName {
continue
}
err := call(b.ID)
if err != nil {
return err
}
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
if len(f.info.APIs.Storage.Allowed.Buckets) == 0 {
err := call("")
if err != nil {
return err
}
}
f.bucketIDMutex.Lock()
f.bucketTypeMutex.Lock()
f._bucketID = make(map[string]string, 1)
f._bucketType = make(map[string]string, 1)
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
for ri := range responses {
response := &responses[ri]
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
}
}
f.bucketTypeMutex.Unlock()
f.bucketIDMutex.Unlock()
for i := range response.Buckets {
bucket := &response.Buckets[i]
err = fn(bucket)
if err != nil {
return err
for ri := range responses {
response := &responses[ri]
for i := range response.Buckets {
bucket := &response.Buckets[i]
err := fn(bucket)
if err != nil {
return err
}
}
}
return nil
@@ -1606,7 +1657,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
bucket, bucketPath := f.split(remote)
var RootURL string
if f.opt.DownloadURL == "" {
RootURL = f.info.DownloadURL
RootURL = f.info.APIs.Storage.DownloadURL
} else {
RootURL = f.opt.DownloadURL
}
@@ -1957,7 +2008,7 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
// Use downloadUrl from backblaze if downloadUrl is not set
// otherwise use the custom downloadUrl
if o.fs.opt.DownloadURL == "" {
opts.RootURL = o.fs.info.DownloadURL
opts.RootURL = o.fs.info.APIs.Storage.DownloadURL
} else {
opts.RootURL = o.fs.opt.DownloadURL
}


@@ -403,14 +403,14 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
if ciphertext == "" {
return "", nil
}
pos := strings.Index(ciphertext, ".")
if pos == -1 {
before, after, ok := strings.Cut(ciphertext, ".")
if !ok {
return "", ErrorNotAnEncryptedFile
} // No .
num := ciphertext[:pos]
num := before
if num == "!" {
// No rotation; probably original was not valid unicode
return ciphertext[pos+1:], nil
return after, nil
}
dir, err := strconv.Atoi(num)
if err != nil {
@@ -425,7 +425,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
var result bytes.Buffer
inQuote := false
for _, runeValue := range ciphertext[pos+1:] {
for _, runeValue := range after {
switch {
case inQuote:
_, _ = result.WriteRune(runeValue)


@@ -2,17 +2,7 @@ name: Selectel
description: Selectel Object Storage
region:
ru-1: St. Petersburg
ru-3: St. Petersburg
ru-7: Moscow
gis-1: Moscow
kz-1: Kazakhstan
uz-2: Uzbekistan
endpoint:
s3.ru-1.storage.selcloud.ru: St. Petersburg
s3.ru-3.storage.selcloud.ru: St. Petersburg
s3.ru-7.storage.selcloud.ru: Moscow
s3.gis-1.storage.selcloud.ru: Moscow
s3.kz-1.storage.selcloud.ru: Kazakhstan
s3.uz-2.storage.selcloud.ru: Uzbekistan
s3.ru-1.storage.selcloud.ru: Saint Petersburg
quirks:
list_url_encode: false


@@ -30,9 +30,11 @@ import (
v4signer "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/credentials/stscreds"
"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/aws-sdk-go-v2/service/sts"
"github.com/aws/smithy-go"
"github.com/aws/smithy-go/logging"
"github.com/aws/smithy-go/middleware"
@@ -325,6 +327,30 @@ If empty it will default to the environment variable "AWS_PROFILE" or
Help: "An AWS session token.",
Advanced: true,
Sensitive: true,
}, {
Name: "role_arn",
Help: `ARN of the IAM role to assume.
Leave blank if not using assume role.`,
Advanced: true,
}, {
Name: "role_session_name",
Help: `Session name for assumed role.
If empty, a session name will be generated automatically.`,
Advanced: true,
}, {
Name: "role_session_duration",
Help: `Session duration for assumed role.
If empty, the default session duration will be used.`,
Advanced: true,
}, {
Name: "role_external_id",
Help: `External ID for assumed role.
Leave blank if not using an external ID.`,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: `Concurrency for multipart uploads and copies.
@@ -927,6 +953,10 @@ type Options struct {
SharedCredentialsFile string `config:"shared_credentials_file"`
Profile string `config:"profile"`
SessionToken string `config:"session_token"`
RoleARN string `config:"role_arn"`
RoleSessionName string `config:"role_session_name"`
RoleSessionDuration fs.Duration `config:"role_session_duration"`
RoleExternalID string `config:"role_external_id"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
@@ -1290,6 +1320,34 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Cli
opt.Region = "us-east-1"
}
// Handle assume role if RoleARN is specified
if opt.RoleARN != "" {
fs.Debugf(nil, "Using assume role with ARN: %s", opt.RoleARN)
// Set region for the config before creating STS client
awsConfig.Region = opt.Region
// Create STS client using the base credentials
stsClient := sts.NewFromConfig(awsConfig)
// Configure AssumeRole options
assumeRoleOptions := func(aro *stscreds.AssumeRoleOptions) {
// Set session name if provided, otherwise use a default
if opt.RoleSessionName != "" {
aro.RoleSessionName = opt.RoleSessionName
}
if opt.RoleSessionDuration != 0 {
aro.Duration = time.Duration(opt.RoleSessionDuration)
}
if opt.RoleExternalID != "" {
aro.ExternalID = &opt.RoleExternalID
}
}
// Create AssumeRole credentials provider
awsConfig.Credentials = stscreds.NewAssumeRoleProvider(stsClient, opt.RoleARN, assumeRoleOptions)
}
provider = loadProvider(opt.Provider)
if provider == nil {
fs.Logf("s3", "s3 provider %q not known - please set correctly", opt.Provider)


@@ -389,8 +389,8 @@ func parseHash(str string) (string, string, error) {
if str == "-" {
return "", "", nil
}
if pos := strings.Index(str, ":"); pos > 0 {
name, val := str[:pos], str[pos+1:]
if before, after, ok := strings.Cut(str, ":"); ok {
name, val := before, after
if name != "" && val != "" {
return name, val, nil
}


@@ -1,4 +1,4 @@
//go:build cmount && ((linux && cgo) || (darwin && cgo) || (freebsd && cgo) || windows)
//go:build cmount && ((linux && cgo) || (darwin && cgo) || (freebsd && cgo) || (openbsd && cgo) || windows)
package cmount
@@ -6,6 +6,7 @@ import (
"io"
"os"
"path"
"runtime"
"strings"
"sync"
"sync/atomic"
@@ -210,6 +211,12 @@ func (fsys *FS) Readdir(dirPath string,
// We can't seek in directories and FUSE should know that so
// return an error if ofst is ever set.
if ofst > 0 {
// However openbsd doesn't seem to know this - perhaps a bug in its
// FUSE implementation or a bug in cgofuse?
// See: https://github.com/billziss-gh/cgofuse/issues/49
if runtime.GOOS == "openbsd" {
return 0
}
return -fuse.ESPIPE
}


@@ -1,4 +1,4 @@
//go:build cmount && ((linux && cgo) || (darwin && cgo) || (freebsd && cgo) || windows)
//go:build cmount && ((linux && cgo) || (darwin && cgo) || (freebsd && cgo) || (openbsd && cgo) || windows)
// Package cmount implements a FUSE mounting system for rclone remotes.
//
@@ -8,9 +8,9 @@ package cmount
import (
"errors"
"fmt"
"strings"
"os"
"runtime"
"strings"
"time"
"github.com/rclone/rclone/cmd/mountlib"
@@ -59,12 +59,14 @@ func mountOptions(VFS *vfs.VFS, device string, mountpoint string, opt *mountlib.
} else {
options = append(options, "-o", "fsname="+device)
options = append(options, "-o", "subtype=rclone")
options = append(options, "-o", fmt.Sprintf("max_readahead=%d", opt.MaxReadAhead))
// This causes FUSE to supply O_TRUNC with the Open
// call which is more efficient for cmount. However
// it does not work with cgofuse on Windows with
// WinFSP so cmount must work with or without it.
options = append(options, "-o", "atomic_o_trunc")
if runtime.GOOS != "openbsd" {
options = append(options, "-o", fmt.Sprintf("max_readahead=%d", opt.MaxReadAhead))
// This causes FUSE to supply O_TRUNC with the Open
// call which is more efficient for cmount. However
// it does not work with cgofuse on Windows with
// WinFSP so cmount must work with or without it.
options = append(options, "-o", "atomic_o_trunc")
}
if opt.DaemonTimeout != 0 {
options = append(options, "-o", fmt.Sprintf("daemon_timeout=%d", int(time.Duration(opt.DaemonTimeout).Seconds())))
}


@@ -1,4 +1,4 @@
//go:build cmount && ((linux && cgo) || (darwin && cgo) || (freebsd && cgo) || windows) && (!race || !windows)
//go:build cmount && ((linux && cgo) || (darwin && cgo) || (freebsd && cgo) || (openbsd && cgo) || windows) && (!race || !windows)
// Package cmount implements a FUSE mounting system for rclone remotes.
//


@@ -1,4 +1,4 @@
//go:build !((linux && cgo && cmount) || (darwin && cgo && cmount) || (freebsd && cgo && cmount) || (windows && cmount))
//go:build !((linux && cgo && cmount) || (darwin && cgo && cmount) || (freebsd && cgo && cmount) || (openbsd && cgo && cmount) || (windows && cmount))
// Package cmount implements a FUSE mounting system for rclone remotes.
//


@@ -58,10 +58,10 @@ type conn struct {
// interoperate with the rclone sftp backend
func (c *conn) execCommand(ctx context.Context, out io.Writer, command string) (err error) {
binary, args := command, ""
space := strings.Index(command, " ")
if space >= 0 {
binary = command[:space]
args = strings.TrimLeft(command[space+1:], " ")
before, after, ok := strings.Cut(command, " ")
if ok {
binary = before
args = strings.TrimLeft(after, " ")
}
args = shellUnEscape(args)
fs.Debugf(c.what, "exec command: binary = %q, args = %q", binary, args)


@@ -45,6 +45,10 @@ var OptionsInfo = fs.Options{{
Name: "disable_dir_list",
Default: false,
Help: "Disable HTML directory list on GET request for a directory",
}, {
Name: "disable_zip",
Default: false,
Help: "Disable zip download of directories",
}}.
Add(libhttp.ConfigInfo).
Add(libhttp.AuthConfigInfo).
@@ -57,6 +61,7 @@ type Options struct {
Template libhttp.TemplateConfig
EtagHash string `config:"etag_hash"`
DisableDirList bool `config:"disable_dir_list"`
DisableZip bool `config:"disable_zip"`
}
// Opt is options set by command line flags
@@ -408,6 +413,24 @@ func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote str
return
}
dir := node.(*vfs.Dir)
if r.URL.Query().Get("download") == "zip" && !w.opt.DisableZip {
fs.Infof(dirRemote, "%s: Zipping directory", r.RemoteAddr)
zipName := path.Base(dirRemote)
if dirRemote == "" {
zipName = "root"
}
rw.Header().Set("Content-Disposition", "attachment; filename=\""+zipName+".zip\"")
rw.Header().Set("Content-Type", "application/zip")
rw.Header().Set("Last-Modified", time.Now().UTC().Format(http.TimeFormat))
err := vfs.CreateZip(ctx, dir, rw)
if err != nil {
serve.Error(ctx, dirRemote, rw, "Failed to create zip", err)
return
}
return
}
dirEntries, err := dir.ReadDirAll()
if err != nil {
@@ -417,6 +440,7 @@ func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote str
// Make the entries for display
directory := serve.NewDirectory(dirRemote, w.server.HTMLTemplate())
directory.DisableZip = w.opt.DisableZip
for _, node := range dirEntries {
if vfscommon.Opt.NoModTime {
directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), time.Time{})
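
As a quick illustration (not part of the diff above), the new zip download can be exercised with a plain HTTP client. This is a sketch that assumes `rclone serve webdav remote:` is listening on the default `127.0.0.1:8080` and that `photos/2024` is a directory on the remote:

```console
# Ask the server to stream the directory as a zip; -OJ saves it using the
# filename from the Content-Disposition header (photos/2024 -> "2024.zip").
curl -OJ "http://127.0.0.1:8080/photos/2024/?download=zip"
```

Setting the `disable_zip` option shown above makes the handler skip the zip branch and fall through to the normal directory listing instead.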


@@ -1048,3 +1048,10 @@ put them back in again. -->
- jijamik <30904953+jijamik@users.noreply.github.com>
- Dominik Sander <git@dsander.de>
- Nikolay Kiryanov <nikolay@kiryanov.ru>
- Diana <5275194+DianaNites@users.noreply.github.com>
- Duncan Smart <duncan.smart@gmail.com>
- vicerace <vicerace@sohu.com>
- Cliff Frey <cliff@openai.com>
- Vladislav Tropnikov <vtr.name@gmail.com>
- Leo <i@hardrain980.com>
- Johannes Rothe <mail@johannes-rothe.de>


@@ -103,6 +103,26 @@ MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
hashes, e.g. the local disk.
### Metadata and tags
Rclone can map arbitrary metadata to Azure Blob headers, user metadata, and tags
when `--metadata` is enabled (or when using `--metadata-set` / `--metadata-mapper`).
- Headers: Set these keys in metadata to map to the corresponding blob headers:
- `cache-control`, `content-disposition`, `content-encoding`, `content-language`, `content-type`.
- User metadata: Any other non-reserved keys are written as user metadata
(keys are normalized to lowercase). Keys starting with `x-ms-` are reserved and
are not stored as user metadata.
- Tags: Provide `x-ms-tags` as a comma-separated list of `key=value` pairs, e.g.
`x-ms-tags=env=dev,team=sync`. These are applied as blob tags on upload and on
server-side copies. Whitespace around keys/values is ignored.
- Modtime override: Provide `mtime` in RFC3339/RFC3339Nano format to override the
stored modtime persisted in user metadata. If `mtime` cannot be parsed, rclone
logs a debug message and ignores the override.
Notes:
- Rclone ignores reserved `x-ms-*` keys (except `x-ms-tags`) for user metadata.
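As an editor's illustration of the mapping described above (the remote name `azblob:` and the container/file names are placeholders, not taken from the rclone docs), an upload that sets a header, tags and a user metadata key might look like:

```console
rclone copyto report.csv azblob:mycontainer/report.csv \
  --metadata \
  --metadata-set "cache-control=max-age=3600" \
  --metadata-set "x-ms-tags=env=dev,team=sync" \
  --metadata-set "owner=alice"
```

Here `cache-control` maps to the corresponding blob header, `x-ms-tags` becomes blob tags, and `owner` is stored as (lowercased) user metadata.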
### Performance
When uploading large files, increasing the value of


@@ -283,7 +283,7 @@ It is useful to know how many requests are sent to the server in different scena
All copy commands send the following 4 requests:
```text
/b2api/v1/b2_authorize_account
/b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names


@@ -1049,17 +1049,7 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- `TestGoFile` (`gofile`)
- [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [78 more](https://pub.rclone.org/integration-tests/current/)
- `TestPcloud` (`pcloud`)
- [`TestBisyncRemoteRemote/check_access`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- [`TestBisyncRemoteRemote/check_access_filters`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- Updated: 2025-12-10-010012
- Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
The following backends either have not been tested recently or have known issues


@@ -6,22 +6,6 @@ description: "Rclone Changelog"
# Changelog
## v1.72.1 - 2025-12-10
[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
- Bug Fixes
- build: update to go1.25.5 to fix [CVE-2025-61729](https://pkg.go.dev/vuln/GO-2025-4155)
- doc fixes (Duncan Smart, Nick Craig-Wood)
- configfile: Fix piped config support (Jonas Tingeborn)
- log
- Fix PID not included in JSON log output (Tingsong Xu)
- Fix backtrace not going to the --log-file (Nick Craig-Wood)
- Google Cloud Storage
- Improve endpoint parameter docs (Johannes Rothe)
- S3
- Add missing regions for Selectel provider (Nick Craig-Wood)
## v1.72.0 - 2025-11-21
[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)


@@ -369,7 +369,7 @@ rclone [flags]
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -1023,7 +1023,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect


@@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
```
```console


@@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
```
@@ -638,7 +638,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it


@@ -785,14 +785,9 @@ Properties:
#### --gcs-endpoint
Custom endpoint for the storage API. Leave blank to use the provider default.
Endpoint for the service.
When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
the subpath will be ignored during upload operations due to a limitation in the
underlying Google API Go client library.
Download and listing operations will work correctly with the full endpoint path.
If you require subpath support for uploads, avoid using subpaths in your custom
endpoint configuration.
Leave blank normally.
Properties:
@@ -800,13 +795,6 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
- "storage.example.org"
- Specify a custom endpoint
- "storage.example.org:4443"
- Specifying a custom endpoint with port
- "storage.example.org:4443/gcs/api"
- Specifying a subpath, see the note, uploads won't use the custom path!
#### --gcs-encoding


@@ -745,6 +745,68 @@ If none of these option actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see the
[anonymous access](#anonymous-access) section for more info).
#### Assume Role (Cross-Account Access)
If you need to access S3 resources in a different AWS account, you can use IAM role assumption.
This is useful when you hold credentials in one account but need to access resources in another.
To use assume role, configure the following parameters:
- `role_arn` - The ARN (Amazon Resource Name) of the IAM role to assume in the target account.
Format: `arn:aws:iam::ACCOUNT-ID:role/ROLE-NAME`
- `role_session_name` (optional) - A name for the assumed role session. If not specified,
rclone will generate one automatically.
- `role_session_duration` (optional) - Duration for which the assumed role credentials are valid.
If not specified, AWS default duration will be used (typically 1 hour).
- `role_external_id` (optional) - An external ID required by the role's trust policy for additional security.
This is typically used when the role is accessed by a third party.
The assume role feature works with both direct credentials (`env_auth = false`) and environment-based
authentication (`env_auth = true`). Rclone will first authenticate using the base credentials, then
use those credentials to assume the specified role.
Example configuration for cross-account access:
```
[s3-cross-account]
type = s3
provider = AWS
env_auth = true
region = us-east-1
role_arn = arn:aws:iam::123456789012:role/CrossAccountS3Role
role_session_name = rclone-session
role_external_id = unique-role-external-id-12345
```
In this example:
- Base credentials are obtained from the environment (IAM role, credentials file, or environment variables)
- These credentials are then used to assume the role `CrossAccountS3Role` in account `123456789012`
- An external ID is provided for additional security as required by the role's trust policy
The target role's trust policy in the destination account must allow the source account or user to assume it.
Example trust policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:root"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalID": "unique-role-external-id-12345"
}
}
}
]
}
```
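Once the trust policy is in place, the remote behaves like any other S3 remote. A minimal check, assuming the configuration above and a bucket named `example-target-bucket` (a placeholder) in the target account:

```console
# List the buckets visible to the assumed role
rclone lsd s3-cross-account:

# Copy local data into the target account's bucket
rclone copy ./backups s3-cross-account:example-target-bucket/backups
```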
### S3 Permissions
When using the `sync` subcommand of `rclone` the following minimum
@@ -1383,21 +1445,12 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- "ru-3"
- St. Petersburg
- Provider: Selectel
- "ru-7"
- Moscow
- Provider: Selectel,Servercore
- "gis-1"
- Moscow
- Provider: Selectel,Servercore
- "kz-1"
- Kazakhstan
- Provider: Selectel
- "uz-2"
- Uzbekistan
- Provider: Selectel
- Provider: Servercore
- "ru-7"
- Moscow
- Provider: Servercore
- "uz-2"
- Tashkent, Uzbekistan
- Provider: Servercore
@@ -2189,25 +2242,13 @@ Properties:
- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Saint Petersburg
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- Provider: Selectel,Servercore
- "s3.kz-1.storage.selcloud.ru"
- Kazakhstan
- Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
- Uzbekistan
- Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
- Saint Petersburg
- Provider: Servercore
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan


@@ -1 +1 @@
v1.72.1
v1.73.0


@@ -29,16 +29,16 @@ func (bp *BwPair) String() string {
// Set the bandwidth from a string which is either
// SizeSuffix or SizeSuffix:SizeSuffix (for tx:rx bandwidth)
func (bp *BwPair) Set(s string) (err error) {
colon := strings.Index(s, ":")
before, after, ok := strings.Cut(s, ":")
stx, srx := s, ""
if colon >= 0 {
stx, srx = s[:colon], s[colon+1:]
if ok {
stx, srx = before, after
}
err = bp.Tx.Set(stx)
if err != nil {
return err
}
if colon < 0 {
if !ok {
bp.Rx = bp.Tx
} else {
err = bp.Rx.Set(srx)


@@ -372,7 +372,7 @@ func (p *pipedInput) Read(b []byte) (int, error) {
return p.Reader.Read(b)
}
func (*pipedInput) Seek(int64, int) (int64, error) {
func (_ *pipedInput) Seek(_ int64, _ int) (int64, error) {
return 0, fmt.Errorf("Seek not supported")
}


@@ -209,7 +209,7 @@ func InitLogging() {
// Log file output
if Opt.File != "" {
var w io.Writer
if Opt.MaxSize < 0 {
if Opt.MaxSize == 0 {
// No log rotation - just open the file as normal
// We'll capture tracebacks like this too.
f, err := os.OpenFile(Opt.File, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0640)


@@ -1,4 +1,4 @@
package fs
// VersionTag of rclone
var VersionTag = "v1.72.1"
var VersionTag = "v1.73.0"


@@ -368,7 +368,7 @@ func Run(t *testing.T, opt *Opt) {
}
file1Contents string
file1MimeType = "text/csv"
file1Metadata = fs.Metadata{"rclone-test": "potato"}
file1Metadata = fs.Metadata{"rclonetest": "potato"}
file2 = fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:10.123123123Z"),
Path: `hello? sausage/êé/Hello, 世界/ " ' @ < > & ? + ≠/z.txt`,

go.mod

@@ -25,6 +25,7 @@ require (
github.com/aws/aws-sdk-go-v2/credentials v1.18.21
github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.4
github.com/aws/aws-sdk-go-v2/service/s3 v1.90.0
github.com/aws/aws-sdk-go-v2/service/sts v1.39.1
github.com/aws/smithy-go v1.23.2
github.com/buengese/sgzip v0.1.1
github.com/cloudinary/cloudinary-go/v2 v2.13.0
@@ -133,7 +134,6 @@ require (
github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.13 // indirect
github.com/aws/aws-sdk-go-v2/service/sso v1.30.1 // indirect
github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.5 // indirect
github.com/aws/aws-sdk-go-v2/service/sts v1.39.1 // indirect
github.com/bahlo/generic-list-go v0.2.0 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bodgit/plumbing v1.3.0 // indirect


@@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1253PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0508PM
```
```console

rclone.1 (generated)

@@ -15,7 +15,7 @@
. ftr VB CB
. ftr VBI CBI
.\}
.TH "rclone" "1" "Dec 10, 2025" "User Manual" ""
.TH "rclone" "1" "Nov 21, 2025" "User Manual" ""
.hy
.SH NAME
.PP
@@ -6260,14 +6260,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq]
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
\f[R]
.fi
.IP
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq]
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
\f[R]
.fi
.IP
@@ -31741,7 +31741,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.72.1\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.72.0\[dq])
\f[R]
.fi
.SS Performance
@@ -32258,7 +32258,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it
@@ -34968,31 +34968,7 @@ The following backends have known issues that need more investigation:
\f[V]TestBisyncRemoteRemote/normalization\f[R] (https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
.RE
.IP \[bu] 2
\f[V]TestGoFile\f[R] (\f[V]gofile\f[R])
.RS 2
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/all_changed\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/backupdir\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/basic\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/changes\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/check_access\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
78 more (https://pub.rclone.org/integration-tests/current/)
.RE
.IP \[bu] 2
\f[V]TestPcloud\f[R] (\f[V]pcloud\f[R])
.RS 2
.IP \[bu] 2
\f[V]TestBisyncRemoteRemote/check_access\f[R] (https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteRemote/check_access_filters\f[R] (https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
.RE
.IP \[bu] 2
Updated: 2025-12-10-010012
Updated: 2025-11-21-010037
.PP
The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:
@@ -39287,13 +39263,12 @@ Petersburg
Provider: Selectel,Servercore
.RE
.IP \[bu] 2
\[dq]ru-3\[dq]
\[dq]gis-1\[dq]
.RS 2
.IP \[bu] 2
St.
Petersburg
Moscow
.IP \[bu] 2
Provider: Selectel
Provider: Servercore
.RE
.IP \[bu] 2
\[dq]ru-7\[dq]
@@ -39301,31 +39276,7 @@ Provider: Selectel
.IP \[bu] 2
Moscow
.IP \[bu] 2
Provider: Selectel,Servercore
.RE
.IP \[bu] 2
\[dq]gis-1\[dq]
.RS 2
.IP \[bu] 2
Moscow
.IP \[bu] 2
Provider: Selectel,Servercore
.RE
.IP \[bu] 2
\[dq]kz-1\[dq]
.RS 2
.IP \[bu] 2
Kazakhstan
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]uz-2\[dq]
.RS 2
.IP \[bu] 2
Uzbekistan
.IP \[bu] 2
Provider: Selectel
Provider: Servercore
.RE
.IP \[bu] 2
\[dq]uz-2\[dq]
@@ -41420,25 +41371,7 @@ Provider: SeaweedFS
\[dq]s3.ru-1.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
St.
Petersburg
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.ru-3.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
St.
Petersburg
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.ru-7.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Moscow
Saint Petersburg
.IP \[bu] 2
Provider: Selectel,Servercore
.RE
@@ -41448,29 +41381,13 @@ Provider: Selectel,Servercore
.IP \[bu] 2
Moscow
.IP \[bu] 2
Provider: Selectel,Servercore
Provider: Servercore
.RE
.IP \[bu] 2
\[dq]s3.kz-1.storage.selcloud.ru\[dq]
\[dq]s3.ru-7.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Kazakhstan
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.uz-2.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Uzbekistan
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.ru-1.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Saint Petersburg
Moscow
.IP \[bu] 2
Provider: Servercore
.RE
@@ -57528,9 +57445,6 @@ With support for high storage limits and seamless integration with
rclone, FileLu makes managing files in the cloud easy.
Its cross-platform file backup services let you upload and back up files
from any internet-connected device.
.PP
\f[B]Note\f[R] FileLu now has a fully featured S3 backend FileLu S5, an
industry standard S3 compatible object store.
.SS Configuration
.PP
Here is an example of how to make a remote called \f[V]filelu\f[R].
@@ -60302,17 +60216,9 @@ Type: bool
Default: false
.SS --gcs-endpoint
.PP
Custom endpoint for the storage API.
Leave blank to use the provider default.
Endpoint for the service.
.PP
When using a custom endpoint that includes a subpath (e.g.
example.org/custom/endpoint), the subpath will be ignored during upload
operations due to a limitation in the underlying Google API Go client
library.
Download and listing operations will work correctly with the full
endpoint path.
If you require subpath support for uploads, avoid using subpaths in your
custom endpoint configuration.
Leave blank normally.
.PP
Properties:
.IP \[bu] 2
@@ -60323,29 +60229,6 @@ Env Var: RCLONE_GCS_ENDPOINT
Type: string
.IP \[bu] 2
Required: false
.IP \[bu] 2
Examples:
.RS 2
.IP \[bu] 2
\[dq]storage.example.org\[dq]
.RS 2
.IP \[bu] 2
Specify a custom endpoint
.RE
.IP \[bu] 2
\[dq]storage.example.org:4443\[dq]
.RS 2
.IP \[bu] 2
Specifying a custom endpoint with port
.RE
.IP \[bu] 2
\[dq]storage.example.org:4443/gcs/api\[dq]
.RS 2
.IP \[bu] 2
Specifying a subpath, see the note, uploads won\[aq]t use the custom
path!
.RE
.RE
.SS --gcs-encoding
.PP
The encoding for the backend.
@@ -60674,7 +60557,7 @@ In the next field, \[dq]OAuth Scopes\[dq], enter
access to Google Drive specifically.
You can also use
\f[V]https://www.googleapis.com/auth/drive.readonly\f[R] for read only
access with \f[V]--drive-scope=drive.readonly\f[R].
access.
.IP \[bu] 2
Click \[dq]Authorise\[dq]
.SS 3. Configure rclone, assuming a new install
@@ -87232,40 +87115,6 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: Return an error based on option value.
.SH Changelog
.SS v1.72.1 - 2025-12-10
.PP
See commits (https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
build: update to go1.25.5 to fix
CVE-2025-61729 (https://pkg.go.dev/vuln/GO-2025-4155)
.IP \[bu] 2
doc fixes (Duncan Smart, Nick Craig-Wood)
.IP \[bu] 2
configfile: Fix piped config support (Jonas Tingeborn)
.IP \[bu] 2
log
.RS 2
.IP \[bu] 2
Fix PID not included in JSON log output (Tingsong Xu)
.IP \[bu] 2
Fix backtrace not going to the --log-file (Nick Craig-Wood)
.RE
.RE
.IP \[bu] 2
Google Cloud Storage
.RS 2
.IP \[bu] 2
Improve endpoint parameter docs (Johannes Rothe)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Add missing regions for Selectel provider (Nick Craig-Wood)
.RE
.SS v1.72.0 - 2025-11-21
.PP
See commits (https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)


@@ -5,6 +5,7 @@ import (
"runtime"
"testing"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfscommon"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
@@ -110,6 +111,9 @@ func TestWriteFileDup(t *testing.T) {
var dupFd uintptr
dupFd, err = writeTestDup(fh.Fd())
if err == vfs.ENOSYS {
t.Skip("dup not supported on this platform")
}
require.NoError(t, err)
dupFile := os.NewFile(dupFd, fh.Name())


@@ -1,4 +1,4 @@
//go:build !linux && !darwin && !freebsd && !windows
//go:build !linux && !darwin && !freebsd && !openbsd && !windows
package vfstest


@@ -1,4 +1,4 @@
//go:build linux || darwin || freebsd
//go:build linux || darwin || freebsd || openbsd
package vfstest