mirror of https://github.com/rclone/rclone.git
synced 2026-01-05 01:53:14 +00:00

Compare commits: 9 commits, v1.72-stab ... 6858bf242e
| Author | SHA1 | Date |
|---|---|---|
|  | 6858bf242e |  |
|  | e8c6867e4c |  |
|  | 50fbd6b049 |  |
|  | 0783cab952 |  |
|  | 886ac7af1d |  |
|  | 3c40238f02 |  |
|  | 46ca0dd7fe |  |
|  | 2e968e7ce0 |  |
|  | 1886c552db |  |

(Author and Date cells were not captured in this mirror of the compare view.)
@@ -17,14 +17,6 @@ linters:
    #- prealloc # TODO
    - revive
    - unconvert
  exclusions:
    rules:
      - linters:
          - revive
        text: 'var-naming: avoid meaningless package names'
      - linters:
          - revive
        text: 'var-naming: avoid package names that conflict with Go standard library package names'
  # Configure checks. Mostly using defaults but with some commented exceptions.
  settings:
    govet:
@@ -144,7 +136,6 @@ linters:
      - name: var-naming
        disabled: false

formatters:
  enable:
    - goimports
MANUAL.html (generated, 142 changes)
@@ -233,7 +233,7 @@
<header id="title-block-header">
<h1 class="title">rclone(1) User Manual</h1>
<p class="author">Nick Craig-Wood</p>
<p class="date">Dec 10, 2025</p>
<p class="date">Nov 21, 2025</p>
</header>
<h1 id="name">NAME</h1>
<p>rclone - manage files on cloud storage</p>
@@ -4531,9 +4531,9 @@ SquareBracket</code></pre>
<pre class="console"><code>rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=echo"
// Output: stories/The Quick Brown Fox!.txt</code></pre>
<pre class="console"><code>rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210</code></pre>
// Output: stories/The Quick Brown Fox!-20251121</code></pre>
<pre class="console"><code>rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM</code></pre>
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM</code></pre>
<pre class="console"><code>rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
// Output: ababababababab/ababab ababababab ababababab ababab!abababab</code></pre>
<p>The regex command generally accepts Perl-style regular expressions,
@@ -22567,7 +22567,7 @@ split into groups.</p>
      --tpslimit float                       Limit HTTP transactions per second to this
      --tpslimit-burst int                   Max burst of transactions for --tpslimit (default 1)
      --use-cookies                          Enable session cookiejar
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.72.1")</code></pre>
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.72.0")</code></pre>
<h2 id="performance">Performance</h2>
<p>Flags helpful for increasing performance.</p>
<pre><code>      --buffer-size SizeSuffix   In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -23024,7 +23024,7 @@ split into groups.</p>
      --gcs-description string                 Description of the remote
      --gcs-directory-markers                  Upload an empty object with a trailing slash when a new directory is created
      --gcs-encoding Encoding                  The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
      --gcs-endpoint string                    Custom endpoint for the storage API. Leave blank to use the provider default
      --gcs-endpoint string                    Endpoint for the service
      --gcs-env-auth                           Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
      --gcs-location string                    Location for the newly created buckets
      --gcs-no-check-bucket                    If set, don't attempt to check the bucket exists or create it
@@ -25234,29 +25234,7 @@ investigation:</p>
<li><a
href="https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt"><code>TestBisyncRemoteRemote/normalization</code></a></li>
</ul></li>
<li><code>TestGoFile</code> (<code>gofile</code>)
<ul>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/all_changed</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/backupdir</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/basic</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/changes</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/check_access</code></a></li>
<li><a href="https://pub.rclone.org/integration-tests/current/">78
more</a></li>
</ul></li>
<li><code>TestPcloud</code> (<code>pcloud</code>)
<ul>
<li><a
href="https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt"><code>TestBisyncRemoteRemote/check_access</code></a></li>
<li><a
href="https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt"><code>TestBisyncRemoteRemote/check_access_filters</code></a></li>
</ul></li>
<li>Updated: 2025-12-10-010012
<li>Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs ---></li>
</ul>
<p>The following backends either have not been tested recently or have
@@ -28396,30 +28374,15 @@ centers for low latency.</li>
<li>St. Petersburg</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"ru-3"
<li>"gis-1"
<ul>
<li>St. Petersburg</li>
<li>Provider: Selectel</li>
<li>Moscow</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"ru-7"
<ul>
<li>Moscow</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"gis-1"
<ul>
<li>Moscow</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"kz-1"
<ul>
<li>Kazakhstan</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"uz-2"
<ul>
<li>Uzbekistan</li>
<li>Provider: Selectel</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"uz-2"
<ul>
@@ -29727,37 +29690,17 @@ AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost
</ul></li>
<li>"s3.ru-1.storage.selcloud.ru"
<ul>
<li>St. Petersburg</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.ru-3.storage.selcloud.ru"
<ul>
<li>St. Petersburg</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.ru-7.storage.selcloud.ru"
<ul>
<li>Moscow</li>
<li>Saint Petersburg</li>
<li>Provider: Selectel,Servercore</li>
</ul></li>
<li>"s3.gis-1.storage.selcloud.ru"
<ul>
<li>Moscow</li>
<li>Provider: Selectel,Servercore</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"s3.kz-1.storage.selcloud.ru"
<li>"s3.ru-7.storage.selcloud.ru"
<ul>
<li>Kazakhstan</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.uz-2.storage.selcloud.ru"
<ul>
<li>Uzbekistan</li>
<li>Provider: Selectel</li>
</ul></li>
<li>"s3.ru-1.storage.selcloud.ru"
<ul>
<li>Saint Petersburg</li>
<li>Moscow</li>
<li>Provider: Servercore</li>
</ul></li>
<li>"s3.uz-2.srvstorage.uz"
@@ -41553,9 +41496,6 @@ storage options, and sharing capabilities. With support for high storage
limits and seamless integration with rclone, FileLu makes managing files
in the cloud easy. Its cross-platform file backup services let you
upload and back up files from any internet-connected device.</p>
<p><strong>Note</strong> FileLu now has a fully featured S3 backend <a
href="/s3#filelu-s5">FileLu S5</a>, an industry standard S3 compatible
object store.</p>
<h2 id="configuration-16">Configuration</h2>
<p>Here is an example of how to make a remote called
<code>filelu</code>. First, run:</p>
@@ -43478,36 +43418,14 @@ decompressed.</p>
<li>Default: false</li>
</ul>
<h4 id="gcs-endpoint">--gcs-endpoint</h4>
<p>Custom endpoint for the storage API. Leave blank to use the provider
default.</p>
<p>When using a custom endpoint that includes a subpath (e.g.
example.org/custom/endpoint), the subpath will be ignored during upload
operations due to a limitation in the underlying Google API Go client
library. Download and listing operations will work correctly with the
full endpoint path. If you require subpath support for uploads, avoid
using subpaths in your custom endpoint configuration.</p>
<p>Endpoint for the service.</p>
<p>Leave blank normally.</p>
<p>Properties:</p>
<ul>
<li>Config: endpoint</li>
<li>Env Var: RCLONE_GCS_ENDPOINT</li>
<li>Type: string</li>
<li>Required: false</li>
<li>Examples:
<ul>
<li>"storage.example.org"
<ul>
<li>Specify a custom endpoint</li>
</ul></li>
<li>"storage.example.org:4443"
<ul>
<li>Specifying a custom endpoint with port</li>
</ul></li>
<li>"storage.example.org:4443/gcs/api"
<ul>
<li>Specifying a subpath, see the note, uploads won't use the custom
path!</li>
</ul></li>
</ul></li>
</ul>
<h4 id="gcs-encoding">--gcs-encoding</h4>
<p>The encoding for the backend.</p>
@@ -43752,7 +43670,7 @@ account. It is a ~21 character numerical string.</li>
<code>https://www.googleapis.com/auth/drive</code> to grant read/write
access to Google Drive specifically. You can also use
<code>https://www.googleapis.com/auth/drive.readonly</code> for read
only access with <code>--drive-scope=drive.readonly</code>.</li>
only access.</li>
<li>Click "Authorise"</li>
</ul>
<h5 id="configure-rclone-assuming-a-new-install">3. Configure rclone,
@@ -62675,32 +62593,6 @@ the output.</p>
<!-- autogenerated options stop -->
<!-- markdownlint-disable line-length -->
<h1 id="changelog-1">Changelog</h1>
<h2 id="v1.72.1---2025-12-10">v1.72.1 - 2025-12-10</h2>
<p><a
href="https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1">See
commits</a></p>
<ul>
<li>Bug Fixes
<ul>
<li>build: update to go1.25.5 to fix <a
href="https://pkg.go.dev/vuln/GO-2025-4155">CVE-2025-61729</a></li>
<li>doc fixes (Duncan Smart, Nick Craig-Wood)</li>
<li>configfile: Fix piped config support (Jonas Tingeborn)</li>
<li>log
<ul>
<li>Fix PID not included in JSON log output (Tingsong Xu)</li>
<li>Fix backtrace not going to the --log-file (Nick Craig-Wood)</li>
</ul></li>
</ul></li>
<li>Google Cloud Storage
<ul>
<li>Improve endpoint parameter docs (Johannes Rothe)</li>
</ul></li>
<li>S3
<ul>
<li>Add missing regions for Selectel provider (Nick Craig-Wood)</li>
</ul></li>
</ul>
<h2 id="v1.72.0---2025-11-21">v1.72.0 - 2025-11-21</h2>
<p><a
href="https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0">See
MANUAL.md (generated, 98 changes)
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
% Dec 10, 2025
% Nov 21, 2025

# NAME

@@ -5369,12 +5369,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
```

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
```

```console
@@ -24802,7 +24802,7 @@ Flags for general networking and HTTP stuff.
      --tpslimit float                       Limit HTTP transactions per second to this
      --tpslimit-burst int                   Max burst of transactions for --tpslimit (default 1)
      --use-cookies                          Enable session cookiejar
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.72.1")
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.72.0")
```

@@ -25319,7 +25319,7 @@ Backend-only flags (these can be set in the config file also).
      --gcs-description string                 Description of the remote
      --gcs-directory-markers                  Upload an empty object with a trailing slash when a new directory is created
      --gcs-encoding Encoding                  The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
      --gcs-endpoint string                    Custom endpoint for the storage API. Leave blank to use the provider default
      --gcs-endpoint string                    Endpoint for the service
      --gcs-env-auth                           Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
      --gcs-location string                    Location for the newly created buckets
      --gcs-no-check-bucket                    If set, don't attempt to check the bucket exists or create it
@@ -27514,17 +27514,7 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`)
    - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- `TestGoFile` (`gofile`)
    - [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
    - [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
    - [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
    - [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
    - [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
    - [78 more](https://pub.rclone.org/integration-tests/current/)
- `TestPcloud` (`pcloud`)
    - [`TestBisyncRemoteRemote/check_access`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
    - [`TestBisyncRemoteRemote/check_access_filters`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- Updated: 2025-12-10-010012
- Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->

The following backends either have not been tested recently or have known issues
@@ -30353,21 +30343,12 @@ Properties:
- "ru-1"
    - St. Petersburg
    - Provider: Selectel,Servercore
- "ru-3"
    - St. Petersburg
    - Provider: Selectel
- "ru-7"
    - Moscow
    - Provider: Selectel,Servercore
- "gis-1"
    - Moscow
    - Provider: Selectel,Servercore
- "kz-1"
    - Kazakhstan
    - Provider: Selectel
- "uz-2"
    - Uzbekistan
    - Provider: Selectel
    - Provider: Servercore
- "ru-7"
    - Moscow
    - Provider: Servercore
- "uz-2"
    - Tashkent, Uzbekistan
    - Provider: Servercore
@@ -31159,25 +31140,13 @@ Properties:
    - SeaweedFS S3 localhost
    - Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
    - St. Petersburg
    - Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
    - St. Petersburg
    - Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
    - Moscow
    - Saint Petersburg
    - Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
    - Moscow
    - Provider: Selectel,Servercore
- "s3.kz-1.storage.selcloud.ru"
    - Kazakhstan
    - Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
    - Uzbekistan
    - Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
    - Saint Petersburg
    - Provider: Servercore
- "s3.ru-7.storage.selcloud.ru"
    - Moscow
    - Provider: Servercore
- "s3.uz-2.srvstorage.uz"
    - Tashkent, Uzbekistan
@@ -44004,9 +43973,6 @@ managing files in the cloud easy. Its cross-platform file backup
services let you upload and back up files from any internet-connected
device.

**Note** FileLu now has a fully featured S3 backend [FileLu S5](/s3#filelu-s5),
an industry standard S3 compatible object store.

## Configuration

Here is an example of how to make a remote called `filelu`. First, run:
@@ -46105,14 +46071,9 @@ Properties:

#### --gcs-endpoint

Custom endpoint for the storage API. Leave blank to use the provider default.
Endpoint for the service.

When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
the subpath will be ignored during upload operations due to a limitation in the
underlying Google API Go client library.
Download and listing operations will work correctly with the full endpoint path.
If you require subpath support for uploads, avoid using subpaths in your custom
endpoint configuration.
Leave blank normally.

Properties:

@@ -46120,13 +46081,6 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
    - "storage.example.org"
        - Specify a custom endpoint
    - "storage.example.org:4443"
        - Specifying a custom endpoint with port
    - "storage.example.org:4443/gcs/api"
        - Specifying a subpath, see the note, uploads won't use the custom path!

#### --gcs-encoding

@@ -46425,7 +46379,7 @@ account key" button.
`https://www.googleapis.com/auth/drive`
to grant read/write access to Google Drive specifically.
You can also use `https://www.googleapis.com/auth/drive.readonly` for read
only access with `--drive-scope=drive.readonly`.
only access.
- Click "Authorise"

##### 3. Configure rclone, assuming a new install
@@ -66913,22 +66867,6 @@ Options:

# Changelog

## v1.72.1 - 2025-12-10

[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)

- Bug Fixes
    - build: update to go1.25.5 to fix [CVE-2025-61729](https://pkg.go.dev/vuln/GO-2025-4155)
    - doc fixes (Duncan Smart, Nick Craig-Wood)
    - configfile: Fix piped config support (Jonas Tingeborn)
    - log
        - Fix PID not included in JSON log output (Tingsong Xu)
        - Fix backtrace not going to the --log-file (Nick Craig-Wood)
- Google Cloud Storage
    - Improve endpoint parameter docs (Johannes Rothe)
- S3
    - Add missing regions for Selectel provider (Nick Craig-Wood)

## v1.72.0 - 2025-11-21

[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)

@@ -66949,7 +66887,7 @@ Options:
- [rclone test speed](https://rclone.org/commands/rclone_test_speed/): Add command to test a specified remotes speed (dougal)
- New Features
    - backends: many backends have has a paged listing (`ListP`) interface added
        - this enables progress when listing large directories and reduced memory usage
        - this enables progress when listing large directories and reduced memory usage
    - build
        - Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 (dependabot[bot])
        - Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko)
MANUAL.txt (generated, 99 changes)
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
Dec 10, 2025
Nov 21, 2025

NAME

@@ -4588,10 +4588,10 @@ Examples:
    // Output: stories/The Quick Brown Fox!.txt

    rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
    // Output: stories/The Quick Brown Fox!-20251210
    // Output: stories/The Quick Brown Fox!-20251121

    rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
    // Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
    // Output: stories/The Quick Brown Fox!-2025-11-21 0505PM

    rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
    // Output: ababababababab/ababab ababababab ababababab ababab!abababab
@@ -23110,7 +23110,7 @@ Flags for general networking and HTTP stuff.
      --tpslimit float                       Limit HTTP transactions per second to this
      --tpslimit-burst int                   Max burst of transactions for --tpslimit (default 1)
      --use-cookies                          Enable session cookiejar
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.72.1")
      --user-agent string                    Set the user-agent to a specified string (default "rclone/v1.72.0")

Performance

@@ -23597,7 +23597,7 @@ Backend-only flags (these can be set in the config file also).
      --gcs-description string                 Description of the remote
      --gcs-directory-markers                  Upload an empty object with a trailing slash when a new directory is created
      --gcs-encoding Encoding                  The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
      --gcs-endpoint string                    Custom endpoint for the storage API. Leave blank to use the provider default
      --gcs-endpoint string                    Endpoint for the service
      --gcs-env-auth                           Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
      --gcs-location string                    Location for the newly created buckets
      --gcs-no-check-bucket                    If set, don't attempt to check the bucket exists or create it
@@ -25734,17 +25734,7 @@ The following backends have known issues that need more investigation:

- TestDropbox (dropbox)
    - TestBisyncRemoteRemote/normalization
- TestGoFile (gofile)
    - TestBisyncRemoteLocal/all_changed
    - TestBisyncRemoteLocal/backupdir
    - TestBisyncRemoteLocal/basic
    - TestBisyncRemoteLocal/changes
    - TestBisyncRemoteLocal/check_access
    - 78 more
- TestPcloud (pcloud)
    - TestBisyncRemoteRemote/check_access
    - TestBisyncRemoteRemote/check_access_filters
- Updated: 2025-12-10-010012
- Updated: 2025-11-21-010037

The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:
@@ -28527,21 +28517,12 @@ Properties:
- "ru-1"
    - St. Petersburg
    - Provider: Selectel,Servercore
- "ru-3"
    - St. Petersburg
    - Provider: Selectel
- "ru-7"
    - Moscow
    - Provider: Selectel,Servercore
- "gis-1"
    - Moscow
    - Provider: Selectel,Servercore
- "kz-1"
    - Kazakhstan
    - Provider: Selectel
- "uz-2"
    - Uzbekistan
    - Provider: Selectel
    - Provider: Servercore
- "ru-7"
    - Moscow
    - Provider: Servercore
- "uz-2"
    - Tashkent, Uzbekistan
    - Provider: Servercore
@@ -29334,25 +29315,13 @@ Properties:
    - SeaweedFS S3 localhost
    - Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
    - St. Petersburg
    - Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
    - St. Petersburg
    - Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
    - Moscow
    - Saint Petersburg
    - Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
    - Moscow
    - Provider: Selectel,Servercore
- "s3.kz-1.storage.selcloud.ru"
    - Kazakhstan
    - Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
    - Uzbekistan
    - Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
    - Saint Petersburg
    - Provider: Servercore
- "s3.ru-7.storage.selcloud.ru"
    - Moscow
    - Provider: Servercore
- "s3.uz-2.srvstorage.uz"
    - Tashkent, Uzbekistan
@@ -41751,9 +41720,6 @@ integration with rclone, FileLu makes managing files in the cloud easy.
Its cross-platform file backup services let you upload and back up files
from any internet-connected device.

Note FileLu now has a fully featured S3 backend FileLu S5, an industry
standard S3 compatible object store.

Configuration

Here is an example of how to make a remote called filelu. First, run:
@@ -43730,15 +43696,9 @@ Properties:

--gcs-endpoint

Custom endpoint for the storage API. Leave blank to use the provider
default.
Endpoint for the service.

When using a custom endpoint that includes a subpath (e.g.
example.org/custom/endpoint), the subpath will be ignored during upload
operations due to a limitation in the underlying Google API Go client
library. Download and listing operations will work correctly with the
full endpoint path. If you require subpath support for uploads, avoid
using subpaths in your custom endpoint configuration.
Leave blank normally.

Properties:

@@ -43746,14 +43706,6 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
    - "storage.example.org"
        - Specify a custom endpoint
    - "storage.example.org:4443"
        - Specifying a custom endpoint with port
    - "storage.example.org:4443/gcs/api"
        - Specifying a subpath, see the note, uploads won't use the
          custom path!

--gcs-encoding

@@ -44037,8 +43989,7 @@ key" button.
- In the next field, "OAuth Scopes", enter
  https://www.googleapis.com/auth/drive to grant read/write access to
  Google Drive specifically. You can also use
  https://www.googleapis.com/auth/drive.readonly for read only access
  with --drive-scope=drive.readonly.
  https://www.googleapis.com/auth/drive.readonly for read only access.
- Click "Authorise"

3. Configure rclone, assuming a new install
@@ -64059,22 +64010,6 @@ Options:

Changelog

v1.72.1 - 2025-12-10

See commits

- Bug Fixes
    - build: update to go1.25.5 to fix CVE-2025-61729
    - doc fixes (Duncan Smart, Nick Craig-Wood)
    - configfile: Fix piped config support (Jonas Tingeborn)
    - log
        - Fix PID not included in JSON log output (Tingsong Xu)
        - Fix backtrace not going to the --log-file (Nick Craig-Wood)
- Google Cloud Storage
    - Improve endpoint parameter docs (Johannes Rothe)
- S3
    - Add missing regions for Selectel provider (Nick Craig-Wood)

v1.72.0 - 2025-11-21

See commits
@@ -133,23 +133,32 @@ type File struct {
	Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}

// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
// StorageAPI is as returned from the b2_authorize_account call
type StorageAPI struct {
	AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
	AccountID string `json:"accountId"` // The identifier for the account.
	Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
		BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
		BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
		Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
		Buckets []struct { // When present, access is restricted to one or more buckets.
			ID string `json:"id"` // ID of bucket
			Name string `json:"name"` // When present, name of bucket - may be empty
		} `json:"buckets"`
		Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has for every bucket.
		NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
	} `json:"allowed"`
	APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
	AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
	DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
	MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
	RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}

// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
	AccountID string `json:"accountId"` // The identifier for the account.
	AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
	APIs struct { // Supported APIs for this account / key. These are API-dependent JSON objects.
		Storage StorageAPI `json:"storageApi"`
	} `json:"apiInfo"`
}

// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
	AccountID string `json:"accountId"` // The identifier for the account.
backend/b2/b2.go (120 changes)
@@ -607,17 +607,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if err != nil {
return nil, fmt.Errorf("failed to authorize account: %w", err)
}
// If this is a key limited to a single bucket, it must exist already
if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
allowedBucket := f.opt.Enc.ToStandardName(f.info.Allowed.BucketName)
if allowedBucket == "" {
return nil, errors.New("bucket that application key is restricted to no longer exists")
// If this is a key limited to one or more buckets, one of them must exist
// and be ours.
if f.rootBucket != "" && len(f.info.APIs.Storage.Allowed.Buckets) != 0 {
buckets := f.info.APIs.Storage.Allowed.Buckets
var rootFound = false
var rootID string
for _, b := range buckets {
allowedBucket := f.opt.Enc.ToStandardName(b.Name)
if allowedBucket == "" {
fs.Debugf(f, "bucket %q that application key is restricted to no longer exists", b.ID)
continue
}

if allowedBucket == f.rootBucket {
rootFound = true
rootID = b.ID
}
}
if allowedBucket != f.rootBucket {
return nil, fmt.Errorf("you must use bucket %q with this application key", allowedBucket)
if !rootFound {
return nil, fmt.Errorf("you must use bucket(s) %q with this application key", buckets)
}
f.cache.MarkOK(f.rootBucket)
f.setBucketID(f.rootBucket, f.info.Allowed.BucketID)
f.setBucketID(f.rootBucket, rootID)
}
if f.rootBucket != "" && f.rootDirectory != "" {
// Check to see if the (bucket,directory) is actually an existing file

@@ -643,7 +655,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
defer f.authMu.Unlock()
opts := rest.Opts{
Method: "GET",
Path: "/b2api/v1/b2_authorize_account",
Path: "/b2api/v4/b2_authorize_account",
RootURL: f.opt.Endpoint,
UserName: f.opt.Account,
Password: f.opt.Key,

@@ -656,13 +668,13 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
if err != nil {
return fmt.Errorf("failed to authenticate: %w", err)
}
f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
f.srv.SetRoot(f.info.APIs.Storage.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
return nil
}

// hasPermission returns if the current AuthorizationToken has the selected permission
func (f *Fs) hasPermission(permission string) bool {
return slices.Contains(f.info.Allowed.Capabilities, permission)
return slices.Contains(f.info.APIs.Storage.Allowed.Capabilities, permission)
}

// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken

@@ -1067,44 +1079,68 @@ type listBucketFn func(*api.Bucket) error

// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(ctx context.Context, bucketName string, fn listBucketFn) error {
var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: f.info.Allowed.BucketID,
}
if bucketName != "" && account.BucketID == "" {
account.BucketName = f.opt.Enc.FromStandardName(bucketName)
responses := make([]api.ListBucketsResponse, len(f.info.APIs.Storage.Allowed.Buckets))[:0]

for i := range f.info.APIs.Storage.Allowed.Buckets {
b := &f.info.APIs.Storage.Allowed.Buckets[i]
// Empty names indicate a bucket that no longer exists, this is non-fatal
// for multi-bucket API keys.
if b.Name == "" {
continue
}
// When requesting a specific bucket skip over non-matching names
if bucketName != "" && b.Name != bucketName {
continue
}

var account = api.ListBucketsRequest{
AccountID: f.info.AccountID,
BucketID: b.ID,
}
if bucketName != "" && account.BucketID == "" {
account.BucketName = f.opt.Enc.FromStandardName(bucketName)
}

var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
Path: "/b2_list_buckets",
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
}
responses = append(responses, response)
}

var response api.ListBucketsResponse
opts := rest.Opts{
Method: "POST",
Path: "/b2_list_buckets",
}
err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return err
}
f.bucketIDMutex.Lock()
f.bucketTypeMutex.Lock()
f._bucketID = make(map[string]string, 1)
f._bucketType = make(map[string]string, 1)
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type

for ri := range responses {
response := &responses[ri]
for i := range response.Buckets {
bucket := &response.Buckets[i]
bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
f.cache.MarkOK(bucket.Name)
f._bucketID[bucket.Name] = bucket.ID
f._bucketType[bucket.Name] = bucket.Type
}
}
f.bucketTypeMutex.Unlock()
f.bucketIDMutex.Unlock()
for i := range response.Buckets {
bucket := &response.Buckets[i]
err = fn(bucket)
if err != nil {
return err
for ri := range responses {
response := &responses[ri]
for i := range response.Buckets {
bucket := &response.Buckets[i]
err := fn(bucket)
if err != nil {
return err
}
}
}
return nil

@@ -1606,7 +1642,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
bucket, bucketPath := f.split(remote)
var RootURL string
if f.opt.DownloadURL == "" {
RootURL = f.info.DownloadURL
RootURL = f.info.APIs.Storage.DownloadURL
} else {
RootURL = f.opt.DownloadURL
}

@@ -1957,7 +1993,7 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
// Use downloadUrl from backblaze if downloadUrl is not set
// otherwise use the custom downloadUrl
if o.fs.opt.DownloadURL == "" {
opts.RootURL = o.fs.info.DownloadURL
opts.RootURL = o.fs.info.APIs.Storage.DownloadURL
} else {
opts.RootURL = o.fs.opt.DownloadURL
}
@@ -346,26 +346,9 @@ can't check the size and hash but the file contents will be decompressed.
Advanced: true,
Default: false,
}, {
Name: "endpoint",
Help: `Custom endpoint for the storage API. Leave blank to use the provider default.

When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
the subpath will be ignored during upload operations due to a limitation in the
underlying Google API Go client library.
Download and listing operations will work correctly with the full endpoint path.
If you require subpath support for uploads, avoid using subpaths in your custom
endpoint configuration.`,
Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "storage.example.org",
Help: "Specify a custom endpoint",
}, {
Value: "storage.example.org:4443",
Help: "Specifying a custom endpoint with port",
}, {
Value: "storage.example.org:4443/gcs/api",
Help: "Specifying a subpath, see the note, uploads won't use the custom path!",
}},
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
@@ -2,17 +2,7 @@ name: Selectel
description: Selectel Object Storage
region:
ru-1: St. Petersburg
ru-3: St. Petersburg
ru-7: Moscow
gis-1: Moscow
kz-1: Kazakhstan
uz-2: Uzbekistan
endpoint:
s3.ru-1.storage.selcloud.ru: St. Petersburg
s3.ru-3.storage.selcloud.ru: St. Petersburg
s3.ru-7.storage.selcloud.ru: Moscow
s3.gis-1.storage.selcloud.ru: Moscow
s3.kz-1.storage.selcloud.ru: Kazakhstan
s3.uz-2.storage.selcloud.ru: Uzbekistan
s3.ru-1.storage.selcloud.ru: Saint Petersburg
quirks:
list_url_encode: false
@@ -153,7 +153,7 @@ func TestRun(t *testing.T) {
fs.Fatal(nil, "error generating test private key "+privateKeyErr.Error())
}
publicKey, publicKeyError := ssh.NewPublicKey(&privateKey.PublicKey)
if publicKeyError != nil {
if privateKeyErr != nil {
fs.Fatal(nil, "error generating test public key "+publicKeyError.Error())
}
@@ -1048,3 +1048,5 @@ put them back in again. -->
- jijamik <30904953+jijamik@users.noreply.github.com>
- Dominik Sander <git@dsander.de>
- Nikolay Kiryanov <nikolay@kiryanov.ru>
- Diana <5275194+DianaNites@users.noreply.github.com>
- Duncan Smart <duncan.smart@gmail.com>
@@ -283,7 +283,7 @@ It is useful to know how many requests are sent to the server in different scena
All copy commands send the following 4 requests:

```text
/b2api/v1/b2_authorize_account
/b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names
@@ -1049,17 +1049,7 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`)
  - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- `TestGoFile` (`gofile`)
  - [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
  - [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
  - [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
  - [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
  - [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
  - [78 more](https://pub.rclone.org/integration-tests/current/)
- `TestPcloud` (`pcloud`)
  - [`TestBisyncRemoteRemote/check_access`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
  - [`TestBisyncRemoteRemote/check_access_filters`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- Updated: 2025-12-10-010012
- Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->

The following backends either have not been tested recently or have known issues
@@ -6,22 +6,6 @@ description: "Rclone Changelog"

# Changelog

## v1.72.1 - 2025-12-10

[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)

- Bug Fixes
  - build: update to go1.25.5 to fix [CVE-2025-61729](https://pkg.go.dev/vuln/GO-2025-4155)
  - doc fixes (Duncan Smart, Nick Craig-Wood)
  - configfile: Fix piped config support (Jonas Tingeborn)
  - log
    - Fix PID not included in JSON log output (Tingsong Xu)
    - Fix backtrace not going to the --log-file (Nick Craig-Wood)
- Google Cloud Storage
  - Improve endpoint parameter docs (Johannes Rothe)
- S3
  - Add missing regions for Selectel provider (Nick Craig-Wood)

## v1.72.0 - 2025-11-21

[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)
@@ -369,7 +369,7 @@ rclone [flags]
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it

@@ -1023,7 +1023,7 @@ rclone [flags]
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect
@@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
```

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
```

```console
@@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
```
@@ -638,7 +638,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -785,14 +785,9 @@ Properties:

#### --gcs-endpoint

Custom endpoint for the storage API. Leave blank to use the provider default.
Endpoint for the service.

When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
the subpath will be ignored during upload operations due to a limitation in the
underlying Google API Go client library.
Download and listing operations will work correctly with the full endpoint path.
If you require subpath support for uploads, avoid using subpaths in your custom
endpoint configuration.
Leave blank normally.

Properties:

@@ -800,13 +795,6 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
  - "storage.example.org"
    - Specify a custom endpoint
  - "storage.example.org:4443"
    - Specifying a custom endpoint with port
  - "storage.example.org:4443/gcs/api"
    - Specifying a subpath, see the note, uploads won't use the custom path!

#### --gcs-encoding
@@ -1383,21 +1383,12 @@ Properties:
- "ru-1"
  - St. Petersburg
  - Provider: Selectel,Servercore
- "ru-3"
  - St. Petersburg
  - Provider: Selectel
- "ru-7"
  - Moscow
  - Provider: Selectel,Servercore
- "gis-1"
  - Moscow
  - Provider: Selectel,Servercore
- "kz-1"
  - Kazakhstan
  - Provider: Selectel
- "uz-2"
  - Uzbekistan
  - Provider: Selectel
  - Provider: Servercore
- "ru-7"
  - Moscow
  - Provider: Servercore
- "uz-2"
  - Tashkent, Uzbekistan
  - Provider: Servercore

@@ -2189,25 +2180,13 @@ Properties:
- SeaweedFS S3 localhost
  - Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
  - St. Petersburg
  - Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
  - St. Petersburg
  - Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
  - Moscow
  - Saint Petersburg
  - Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
  - Moscow
  - Provider: Selectel,Servercore
- "s3.kz-1.storage.selcloud.ru"
  - Kazakhstan
  - Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
  - Uzbekistan
  - Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
  - Saint Petersburg
  - Provider: Servercore
- "s3.ru-7.storage.selcloud.ru"
  - Moscow
  - Provider: Servercore
- "s3.uz-2.srvstorage.uz"
  - Tashkent, Uzbekistan
@@ -1 +1 @@
v1.72.1
v1.73.0
@@ -2,7 +2,6 @@ package configfile

import (
"fmt"
"io"
"os"
"path/filepath"
"runtime"

@@ -363,39 +362,3 @@ func TestConfigFileSaveSymlinkAbsolute(t *testing.T) {
testSymlink(t, link, target, resolvedTarget)
})
}

type pipedInput struct {
io.Reader
}

func (p *pipedInput) Read(b []byte) (int, error) {
return p.Reader.Read(b)
}

func (*pipedInput) Seek(int64, int) (int64, error) {
return 0, fmt.Errorf("Seek not supported")
}

func TestPipedConfig(t *testing.T) {
t.Run("DoesNotSupportSeeking", func(t *testing.T) {
r := &pipedInput{strings.NewReader("")}
_, err := r.Seek(0, io.SeekStart)
require.Error(t, err)
})

t.Run("IsSupported", func(t *testing.T) {
r := &pipedInput{strings.NewReader(configData)}
_, err := config.Decrypt(r)
require.NoError(t, err)
})

t.Run("PlainTextConfigIsNotConsumedByCryptCheck", func(t *testing.T) {
in := &pipedInput{strings.NewReader(configData)}

r, _ := config.Decrypt(in)
got, err := io.ReadAll(r)
require.NoError(t, err)

assert.Equal(t, configData, string(got))
})
}
@@ -77,9 +77,8 @@ func Decrypt(b io.ReadSeeker) (io.Reader, error) {
if strings.HasPrefix(l, "RCLONE_ENCRYPT_V") {
return nil, errors.New("unsupported configuration encryption - update rclone for support")
}
// Restore non-seekable plain-text stream to its original state
if _, err := b.Seek(0, io.SeekStart); err != nil {
return io.MultiReader(strings.NewReader(l+"\n"), r), nil
return nil, err
}
return b, nil
}
@@ -209,7 +209,7 @@ func InitLogging() {
// Log file output
if Opt.File != "" {
var w io.Writer
if Opt.MaxSize < 0 {
if Opt.MaxSize == 0 {
// No log rotation - just open the file as normal
// We'll capture tracebacks like this too.
f, err := os.OpenFile(Opt.File, os.O_WRONLY|os.O_CREATE|os.O_APPEND, 0640)
@@ -310,10 +310,6 @@ func (h *OutputHandler) jsonLog(ctx context.Context, buf *bytes.Buffer, r slog.R
r.AddAttrs(
slog.String("source", getCaller(2)),
)
// Add PID if requested
if h.format&logFormatPid != 0 {
r.AddAttrs(slog.Int("pid", os.Getpid()))
}
h.mu.Lock()
err = h.jsonHandler.Handle(ctx, r)
if err == nil {
@@ -198,17 +198,6 @@ func TestAddOutputUseJSONLog(t *testing.T) {
assert.Equal(t, "2020/01/02 03:04:05 INFO : world\n", extraText)
}

// Test JSON log includes PID when logFormatPid is set.
func TestJSONLogWithPid(t *testing.T) {
buf := &bytes.Buffer{}
h := NewOutputHandler(buf, nil, logFormatJSON|logFormatPid)

r := slog.NewRecord(t0, slog.LevelInfo, "hello", 0)
require.NoError(t, h.Handle(context.Background(), r))
output := buf.String()
assert.Contains(t, output, fmt.Sprintf(`"pid":%d`, os.Getpid()))
}

// Test WithAttrs and WithGroup return new handlers with same settings.
func TestWithAttrsAndGroup(t *testing.T) {
buf := &bytes.Buffer{}
@@ -1,4 +1,4 @@
package fs

// VersionTag of rclone
var VersionTag = "v1.72.1"
var VersionTag = "v1.73.0"
@@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
```

```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
// Output: stories/The Quick Brown Fox!-2025-12-10 1253PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0508PM
```

```console
rclone.1 (generated)
@@ -15,7 +15,7 @@
. ftr VB CB
. ftr VBI CBI
.\}
.TH "rclone" "1" "Dec 10, 2025" "User Manual" ""
.TH "rclone" "1" "Nov 21, 2025" "User Manual" ""
.hy
.SH NAME
.PP

@@ -6260,14 +6260,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq]
// Output: stories/The Quick Brown Fox!-20251210
// Output: stories/The Quick Brown Fox!-20251121
\f[R]
.fi
.IP
.nf
\f[C]
rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq]
// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
\f[R]
.fi
.IP

@@ -31741,7 +31741,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.72.1\[dq])
--user-agent string Set the user-agent to a specified string (default \[dq]rclone/v1.72.0\[dq])
\f[R]
.fi
.SS Performance

@@ -32258,7 +32258,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
--gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don\[aq]t attempt to check the bucket exists or create it

@@ -34968,31 +34968,7 @@ The following backends have known issues that need more investigation:
\f[V]TestBisyncRemoteRemote/normalization\f[R] (https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
.RE
.IP \[bu] 2
\f[V]TestGoFile\f[R] (\f[V]gofile\f[R])
.RS 2
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/all_changed\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/backupdir\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/basic\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/changes\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteLocal/check_access\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
.IP \[bu] 2
78 more (https://pub.rclone.org/integration-tests/current/)
.RE
.IP \[bu] 2
\f[V]TestPcloud\f[R] (\f[V]pcloud\f[R])
.RS 2
.IP \[bu] 2
\f[V]TestBisyncRemoteRemote/check_access\f[R] (https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
.IP \[bu] 2
\f[V]TestBisyncRemoteRemote/check_access_filters\f[R] (https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
.RE
.IP \[bu] 2
Updated: 2025-12-10-010012
Updated: 2025-11-21-010037
.PP
The following backends either have not been tested recently or have
known issues that are deemed unfixable for the time being:

@@ -39287,13 +39263,12 @@ Petersburg
Provider: Selectel,Servercore
.RE
.IP \[bu] 2
\[dq]ru-3\[dq]
\[dq]gis-1\[dq]
.RS 2
.IP \[bu] 2
St.
Petersburg
Moscow
.IP \[bu] 2
Provider: Selectel
Provider: Servercore
.RE
.IP \[bu] 2
\[dq]ru-7\[dq]

@@ -39301,31 +39276,7 @@ Provider: Selectel
.IP \[bu] 2
Moscow
.IP \[bu] 2
Provider: Selectel,Servercore
.RE
.IP \[bu] 2
\[dq]gis-1\[dq]
.RS 2
.IP \[bu] 2
Moscow
.IP \[bu] 2
Provider: Selectel,Servercore
.RE
.IP \[bu] 2
\[dq]kz-1\[dq]
.RS 2
.IP \[bu] 2
Kazakhstan
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]uz-2\[dq]
.RS 2
.IP \[bu] 2
Uzbekistan
.IP \[bu] 2
Provider: Selectel
Provider: Servercore
.RE
.IP \[bu] 2
\[dq]uz-2\[dq]

@@ -41420,25 +41371,7 @@ Provider: SeaweedFS
\[dq]s3.ru-1.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
St.
Petersburg
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.ru-3.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
St.
Petersburg
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.ru-7.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Moscow
Saint Petersburg
.IP \[bu] 2
Provider: Selectel,Servercore
.RE

@@ -41448,29 +41381,13 @@ Provider: Selectel,Servercore
.IP \[bu] 2
Moscow
.IP \[bu] 2
Provider: Selectel,Servercore
Provider: Servercore
.RE
.IP \[bu] 2
\[dq]s3.kz-1.storage.selcloud.ru\[dq]
\[dq]s3.ru-7.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Kazakhstan
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.uz-2.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Uzbekistan
.IP \[bu] 2
Provider: Selectel
.RE
.IP \[bu] 2
\[dq]s3.ru-1.storage.selcloud.ru\[dq]
.RS 2
.IP \[bu] 2
Saint Petersburg
Moscow
.IP \[bu] 2
Provider: Servercore
.RE

@@ -57528,9 +57445,6 @@ With support for high storage limits and seamless integration with
rclone, FileLu makes managing files in the cloud easy.
Its cross-platform file backup services let you upload and back up files
from any internet-connected device.
.PP
\f[B]Note\f[R] FileLu now has a fully featured S3 backend FileLu S5, an
industry standard S3 compatible object store.
.SS Configuration
.PP
Here is an example of how to make a remote called \f[V]filelu\f[R].

@@ -60302,17 +60216,9 @@ Type: bool
Default: false
.SS --gcs-endpoint
.PP
Custom endpoint for the storage API.
Leave blank to use the provider default.
Endpoint for the service.
.PP
When using a custom endpoint that includes a subpath (e.g.
example.org/custom/endpoint), the subpath will be ignored during upload
operations due to a limitation in the underlying Google API Go client
library.
Download and listing operations will work correctly with the full
endpoint path.
If you require subpath support for uploads, avoid using subpaths in your
custom endpoint configuration.
Leave blank normally.
.PP
Properties:
.IP \[bu] 2

@@ -60323,29 +60229,6 @@ Env Var: RCLONE_GCS_ENDPOINT
Type: string
.IP \[bu] 2
Required: false
.IP \[bu] 2
Examples:
.RS 2
.IP \[bu] 2
\[dq]storage.example.org\[dq]
.RS 2
.IP \[bu] 2
Specify a custom endpoint
.RE
.IP \[bu] 2
\[dq]storage.example.org:4443\[dq]
.RS 2
.IP \[bu] 2
Specifying a custom endpoint with port
.RE
.IP \[bu] 2
\[dq]storage.example.org:4443/gcs/api\[dq]
.RS 2
.IP \[bu] 2
Specifying a subpath, see the note, uploads won\[aq]t use the custom
path!
.RE
.RE
.SS --gcs-encoding
.PP
The encoding for the backend.

@@ -60674,7 +60557,7 @@ In the next field, \[dq]OAuth Scopes\[dq], enter
access to Google Drive specifically.
You can also use
\f[V]https://www.googleapis.com/auth/drive.readonly\f[R] for read only
access with \f[V]--drive-scope=drive.readonly\f[R].
access.
.IP \[bu] 2
Click \[dq]Authorise\[dq]
.SS 3. Configure rclone, assuming a new install

@@ -87232,40 +87115,6 @@ Options:
.IP \[bu] 2
\[dq]error\[dq]: Return an error based on option value.
.SH Changelog
.SS v1.72.1 - 2025-12-10
.PP
See commits (https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
.IP \[bu] 2
Bug Fixes
.RS 2
.IP \[bu] 2
build: update to go1.25.5 to fix
CVE-2025-61729 (https://pkg.go.dev/vuln/GO-2025-4155)
.IP \[bu] 2
doc fixes (Duncan Smart, Nick Craig-Wood)
.IP \[bu] 2
configfile: Fix piped config support (Jonas Tingeborn)
.IP \[bu] 2
log
.RS 2
.IP \[bu] 2
Fix PID not included in JSON log output (Tingsong Xu)
.IP \[bu] 2
Fix backtrace not going to the --log-file (Nick Craig-Wood)
.RE
.RE
.IP \[bu] 2
Google Cloud Storage
.RS 2
.IP \[bu] 2
Improve endpoint parameter docs (Johannes Rothe)
.RE
.IP \[bu] 2
S3
.RS 2
.IP \[bu] 2
Add missing regions for Selectel provider (Nick Craig-Wood)
.RE
.SS v1.72.0 - 2025-11-21
.PP
See commits (https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)