mirror of https://github.com/rclone/rclone.git synced 2026-01-21 11:53:17 +00:00

Compare commits


69 Commits

Author SHA1 Message Date
Nick Craig-Wood
3a9c7ceeb1 uptobox: remove backend as service is no longer available
Uptobox was shut down in September 2023 and does not appear to be
returning.
2026-01-14 15:31:46 +00:00
Nick Craig-Wood
5502c0f8ae rc: add operations/hashsumfile to sum a single file only 2026-01-14 12:29:48 +00:00
Nick Craig-Wood
d707ae7cf4 docs: update sponsor link 2026-01-14 12:29:48 +00:00
Enduriel
9bef7f0dbf filen: add Filen backend - Fixes #6728 2026-01-13 12:50:27 +00:00
Nick Craig-Wood
933bbf3ac8 sftp: fix proxy initialisation
This was being done in NewFs instead of NewFsWithConnection as it
should have been, which meant calls to NewFsWithConnection were not
initialising the proxy correctly.
2026-01-13 12:40:17 +00:00
Nick Craig-Wood
ecc5972d6f fstest: skip Copy mutation test with --sftp-copy-is-hardlink 2026-01-13 12:40:17 +00:00
Nick Craig-Wood
07805796ab fstest: Make Copy mutation test work properly
Before this change it could miss a mutation if the Modtime was cached
2026-01-13 12:40:17 +00:00
Nick Craig-Wood
189e6dbf6a Add Qingwei Li to contributors 2026-01-13 12:40:12 +00:00
Nick Craig-Wood
d47e289165 Add Nicolas Dessart to contributors 2026-01-13 12:40:12 +00:00
dougal
e51a0599a0 log: fix systemd adding extra newline - fixes #9086
This was broken in v1.71.0 by a typo.
2026-01-09 16:30:01 +00:00
Qingwei Li
530a901de3 oracleobjectstorage, sftp: eliminate unnecessary heap allocation
Move variable declarations to eliminate heap allocations, which may
make rclone slightly faster and reduce memory usage.

Fixes #9078
2026-01-09 16:10:02 +00:00
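Whether moving a declaration helps depends on Go's escape analysis, which is static rather than path-sensitive: a variable whose address might escape is heap allocated even on paths where it never does. A hypothetical illustration of the pattern (not the actual rclone change; escape reports can be checked with `go build -gcflags=-m`):

```go
package main

import "fmt"

// escapes: v's address is taken on one branch, so escape analysis heap
// allocates v unconditionally, even when report is false.
func escapes(report bool, sink *[]*int) int {
	v := 42
	if report {
		*sink = append(*sink, &v)
	}
	return v
}

// staysOnStack: only the copy declared inside the branch escapes, so the
// common report=false path performs no heap allocation at all.
func staysOnStack(report bool, sink *[]*int) int {
	v := 42
	if report {
		escaped := v // declaration moved to its point of use
		*sink = append(*sink, &escaped)
	}
	return v
}

func main() {
	var sink []*int
	fmt.Println(escapes(false, &sink), staysOnStack(false, &sink))
}
```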
Nicolas Dessart
a64a8aad0e sftp,ftp: add http proxy authentication support
This change supports the `http://user:pass@host:port` syntax for the
http_proxy setting.
2026-01-08 16:31:11 +00:00
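Go's standard library already understands credentials embedded in a proxy URL; a minimal sketch (not the rclone code) of extracting the pieces from the `http://user:pass@host:port` form:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// Hypothetical proxy URL in the user:pass@host:port form.
	u, err := url.Parse("http://alice:s3cret@proxy.example.com:3128")
	if err != nil {
		panic(err)
	}
	user := u.User.Username()
	pass, ok := u.User.Password() // ok reports whether a password was present
	fmt.Println(u.Scheme, u.Host, user, pass, ok)
	// Output: http proxy.example.com:3128 alice s3cret true
}
```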
dougal
6529d2cd8f Add Drime backend
Co-Authored-By: Nick Craig-Wood <nick@craig-wood.com>
2026-01-08 12:05:37 +00:00
Nick Craig-Wood
d9895fef9d lib/rest: add opts.MultipartContentType to explicitly set Content-Type of attachments
Before this, the standard library set it to application/octet-stream for some reason
2026-01-08 12:05:37 +00:00
dougal
8c7b7ac891 dircache: allow empty string as root parent id
This was causing an internal error with the drime backend, which has the
root parent id as an empty string. This shouldn't affect anything else.
2026-01-08 12:05:37 +00:00
Nick Craig-Wood
f814498561 docs: update sponsors 2026-01-08 12:05:30 +00:00
vupn0712
5f4e4b1a20 s3: add provider Bizfly Cloud Simple Storage
Co-authored-by: sys6101 <csvmen@gmail.com>
2026-01-06 14:56:49 +00:00
Nick Craig-Wood
28c187b9b4 docs: update sponsor logos 2025-12-31 17:04:11 +00:00
Nick Craig-Wood
e07afc4645 Add sys6101 to contributors 2025-12-31 17:04:11 +00:00
Nick Craig-Wood
08932ab92a Add darkdragon-001 to contributors 2025-12-31 17:04:11 +00:00
Nick Craig-Wood
356ee57edb Add vupn0712 to contributors 2025-12-31 17:04:11 +00:00
yuval-cloudinary
7c1660214d docs: add cloudinary to readme 2025-12-22 22:39:53 +01:00
darkdragon-001
51b197c86f docs: fix headers hierarchy in mount docs 2025-12-21 12:23:59 +01:00
vupn0712
029ffd2761 s3: fix Copy ignoring storage class
Co-authored-by: sys6101 <csvmen@gmail.com>
2025-12-20 14:42:00 +00:00
Nick Craig-Wood
f81cd7d279 serve s3: make errors in --s3-auth-key fatal - fixes #9044
Previously, if auth keys were provided without a comma then rclone
would only log an INFO message, which could mean it went on to serve
without any auth.

The parsing of environment variables was changed in v1.70.0 to make
them work properly with multiple inputs. The input is now treated like
a mini CSV file, which works well except when the input itself contains
commas. This meant `user,auth` without quotes was treated as two
separate values, `user` and `auth`. The correct syntax is
`"user,auth"`. This updates the documentation accordingly.
2025-12-18 10:17:41 +00:00
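The mini-CSV behaviour is easy to reproduce with Go's encoding/csv; a sketch (illustrative, not rclone's option parser) of why the quotes matter:

```go
package main

import (
	"encoding/csv"
	"fmt"
	"strings"
)

// parseOnce reads a single CSV record from s.
func parseOnce(s string) []string {
	rec, err := csv.NewReader(strings.NewReader(s)).Read()
	if err != nil {
		panic(err)
	}
	return rec
}

func main() {
	fmt.Printf("%q\n", parseOnce(`user,auth`))   // ["user" "auth"] - two values
	fmt.Printf("%q\n", parseOnce(`"user,auth"`)) // ["user,auth"]  - one value
}
```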
Nick Craig-Wood
1a0a4628d7 Add masrlinu to contributors 2025-12-18 10:17:41 +00:00
masrlinu
c10a4d465c pcloud: add support for real-time updates in mount
Co-authored-by: masrlinu <5259918+masrlinu@users.noreply.github.com>
2025-12-17 15:13:25 +00:00
Nick Craig-Wood
3a6e07a613 memory: add --memory-discard flag for speed testing - fixes #9037 2025-12-17 10:21:12 +00:00
Nick Craig-Wood
c36f99d343 Add vyv03354 to contributors 2025-12-17 10:21:12 +00:00
jhasse-shade
3e21a7261b shade: Fix VFS test issues 2025-12-16 17:21:22 +00:00
vyv03354
fd439fab62 docs: mention use of ListR feature in ls docs 2025-12-15 09:11:00 +01:00
dependabot[bot]
976aa6b416 build: bump actions/download-artifact from 6 to 7
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 6 to 7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-version: '7'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-13 11:01:27 +01:00
dependabot[bot]
b3a0383ca3 build: bump actions/upload-artifact from 5 to 6
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 5 to 6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-13 11:00:59 +01:00
dependabot[bot]
c13f129339 build: bump actions/cache from 4 to 5
Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-12-12 14:52:57 +01:00
vyv03354
748d8c8957 docs: reflects the fact that pCloud supports ListR 2025-12-11 20:32:53 +01:00
jbagwell-akamai
4d379efcbb S3: Linode: updated endpoints to use ISO 3166-1 alpha-2 standard
Use the ISO 3166-1 alpha-2 standard for countries, with the region short name in parentheses instead of separated by another comma.
2025-12-11 17:20:34 +00:00
dougal
e5e6a4b5ae sync: fix error propagation in tests (#9025)
This commit fixes the sync transform test IO errors by resetting the
error flag, which stops subsequent tests from failing.
2025-12-10 15:43:22 +00:00
Nick Craig-Wood
df18e8c55b Changelog updates from Version v1.72.1 2025-12-10 15:31:48 +00:00
Nick Craig-Wood
f4e17d8b0b s3: add more regions for Selectel 2025-12-10 15:31:48 +00:00
Nick Craig-Wood
e5c69511bc Add jhasse-shade to contributors 2025-12-10 15:31:48 +00:00
jhasse-shade
175d4bc553 Add Shade backend 2025-12-09 17:08:57 +00:00
Nick Craig-Wood
4851f1796c log: fix backtrace not going to the --log-file #9014
Before the log re-organisation in:

8d353039a6 log: add log rotation to --log-file

rclone would write any backtraces to the --log-file which was very
convenient for users.

This got accidentally disabled by a typo, which meant backtraces
started going to stderr even if --log-file was supplied.

This fixes the problem.
2025-12-09 16:35:07 +00:00
Nick Craig-Wood
4ff8899b2c build: fix lint warning after linter upgrade 2025-12-09 16:15:17 +00:00
Nick Craig-Wood
8f29a0b0a1 Add Jonas Tingeborn to contributors 2025-12-09 16:15:17 +00:00
Nick Craig-Wood
8b0e76e53b Add Tingsong Xu to contributors 2025-12-09 16:15:17 +00:00
Jonas Tingeborn
233fef5c4d configfile: add piped config support - fixes #9012 2025-12-08 18:42:17 +00:00
Tingsong Xu
b9586c3e03 fs/log: fix PID not included in JSON log output
When using `--log-format pid,json`, the PID was not being added to the JSON log output. This fix adds PID support to JSON logging.
2025-12-08 18:41:58 +00:00
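As an illustration of the intended behaviour (not rclone's logging code), attaching the process ID to every JSON log record with the standard library looks like this:

```go
package main

import (
	"log/slog"
	"os"
)

func main() {
	// Every record carries a pid field, mirroring --log-format pid,json.
	logger := slog.New(slog.NewJSONHandler(os.Stderr, nil)).With("pid", os.Getpid())
	logger.Info("starting sync")
	// {"time":"...","level":"INFO","msg":"starting sync","pid":12345}
}
```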
Nick Craig-Wood
0dc0ab1330 build: adjust lint rules to exclude new errors from linter update 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
a6bbdb35a0 proxy: fix error handling in tests spotted by the linter 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
b33cb77b6c Add Johannes Rothe to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
d51322bb5f Add Leo to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
e718ab6091 Add Vladislav Tropnikov to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
0a9e6e130f Add Cliff Frey to contributors 2025-12-08 14:45:06 +00:00
Nick Craig-Wood
3358b9049c Add vicerace to contributors 2025-12-08 14:45:06 +00:00
DianaNites
847734d421 b2: Fix listing root buckets with unrestricted API key
Fixes previous pull request #8978

An oversight meant that unrestricted API keys
never called b2_list_buckets,
so the root remote could not be listed.

The call is now made when there are no allowed buckets,
which indicates an unrestricted API key.

Fixes #9007
2025-12-04 15:55:17 +00:00
Johannes Rothe
f7b255d4ec googlecloudstorage: improve endpoint parameter docs
When specifying a custom endpoint with a subpath, there is a limitation
in the Google Cloud Storage integration: the subpath is ignored during
upload operations. For example, with the custom endpoint
"example.org/custom/endpoint", the /custom/endpoint part is not used on
upload.

As this is most likely an issue with the underlying API client, there is
no way to fix it in rclone. By extending the documentation, rclone users
are at least made aware of this limitation.

Related forum thread: https://forum.rclone.org/t/googlecloudstorage-custom-endpoint-subpath-removed-for-upload/53059
2025-12-01 19:04:02 +00:00
Leo
24c752ed9e serve webdav: implement download-directory-as-zip
Signed-off-by: Leo <i@hardrain980.com>
2025-12-01 15:42:16 +00:00
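The general technique behind download-directory-as-zip is to stream archive entries straight into the HTTP response, so nothing is buffered on disk. A minimal sketch with the standard library (rclone's implementation serves from its VFS rather than the local filesystem; this only shows the shape of the idea):

```go
package main

import (
	"archive/zip"
	"io"
	"io/fs"
	"net/http"
	"os"
	"path/filepath"
)

// zipDirHandler streams every regular file under dir as a zip archive,
// writing entries directly to the response writer.
func zipDirHandler(dir string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/zip")
		w.Header().Set("Content-Disposition", `attachment; filename="dir.zip"`)
		zw := zip.NewWriter(w)
		defer zw.Close()
		err := filepath.WalkDir(dir, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, err := filepath.Rel(dir, path)
			if err != nil {
				return err
			}
			entry, err := zw.Create(filepath.ToSlash(rel))
			if err != nil {
				return err
			}
			src, err := os.Open(path)
			if err != nil {
				return err
			}
			defer src.Close()
			_, err = io.Copy(entry, src)
			return err
		})
		if err != nil {
			// Too late for an HTTP error status once streaming has begun;
			// real code would log and abort the connection here.
			return
		}
	}
}

func main() {
	if err := http.ListenAndServe(":8080", zipDirHandler(".")); err != nil {
		panic(err)
	}
}
```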
Vladislav Tropnikov
a99d155fd4 s3: The ability to specify an IAM role for cross-account interaction 2025-11-29 13:53:00 +00:00
Cliff Frey
f72b32b470 azureblob: add metadata and tags support across upload and copy paths
This change adds first-class metadata support to the Azure Blob backend,
including headers, user metadata, tags, and modtime overrides, and wires
it through uploads and server-side copies.

There is a behavior change in that rclone will now set the "mtime"
custom metadata when doing server side copies to azure and the
`--metadata` argument is given.

- Map standard headers: cache-control, content-disposition, content-encoding,
  content-language, content-type to corresponding x-ms-blob-* HTTP headers.
- Map user metadata: any non-reserved keys (excluding x-ms-*) are sent as
  blob user metadata. Keys are normalized to lowercase for consistency.
- Support tags: parse `x-ms-tags` as a comma-separated list of key=value
  pairs and apply them on uploads and copies.
- Support mtime override: accept `mtime` in metadata (RFC3339/RFC3339Nano)
  to override the stored modtime persisted in user metadata.
2025-11-27 16:58:07 +00:00
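The `x-ms-tags` format described above is plain comma-separated key=value pairs, handled by the parseXMsTags helper in the azureblob diff further down. A self-contained sketch of the same parsing (illustrative; the real helper appears in the diff below):

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags mirrors the behaviour of the parseXMsTags helper added in the
// diff below: comma-separated key=value pairs with whitespace trimmed.
func parseTags(s string) (map[string]string, error) {
	out := make(map[string]string)
	for _, p := range strings.Split(s, ",") {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		k, v, ok := strings.Cut(p, "=")
		if !ok || strings.TrimSpace(k) == "" {
			return nil, fmt.Errorf("invalid tag %q", p)
		}
		out[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	return out, nil
}

func main() {
	tags, err := parseTags("project=rclone, env = prod")
	fmt.Println(tags, err) // map[env:prod project:rclone] <nil>
}
```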
vicerace
9be7f99bf8 refactor: use strings.Cut to simplify code
Signed-off-by: vicerace <vicerace@sohu.com>
2025-11-27 14:42:11 +00:00
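For context, strings.Cut (Go 1.18+) replaces the common strings.SplitN(s, sep, 2) pattern and avoids the intermediate slice; for example:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Cut splits around the first separator; found reports whether sep occurred.
	key, value, found := strings.Cut("tier=Hot", "=")
	fmt.Println(key, value, found) // tier Hot true

	// Equivalent pre-1.18 pattern that this kind of refactor replaces:
	parts := strings.SplitN("tier=Hot", "=", 2)
	fmt.Println(parts[0], parts[1]) // tier Hot
}
```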
Nick Craig-Wood
6858bf242e docs: note where a provider has an S3 compatible alternative 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
e8c6867e4c Add Shade as sponsor 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
50fbd6b049 Add Duncan Smart to contributors 2025-11-26 12:22:48 +00:00
Nick Craig-Wood
0783cab952 Add Diana to contributors 2025-11-26 12:22:48 +00:00
Duncan Smart
886ac7af1d docs: Clarify OAuth scopes for readonly Google Drive access 2025-11-24 15:58:53 +00:00
Diana
3c40238f02 b2: support authentication with new bucket restricted application keys
Backblaze has updated its b2_authorize_account API endpoint: newly created
application keys are now "multi-bucket" keys, capable of being limited to
multiple buckets. These keys can only be used with the v4 endpoint, not v1,
which returns an HTTP 400.

This commit switches authorization to the v4 endpoint, allowing such keys
to work with any of the allowed buckets.

With multi-bucket keys, missing restricted buckets can be non-fatal.

This also supports listing the root with multi-bucket API keys.
2025-11-24 15:46:41 +00:00
Nick Craig-Wood
46ca0dd7fe docs: update sponsor logos 2025-11-24 14:58:33 +00:00
Nick Craig-Wood
2e968e7ce0 docs: fix lint error in changelog 2025-11-21 18:23:16 +00:00
Nick Craig-Wood
1886c552db Start v1.73.0-DEV development 2025-11-21 18:23:07 +00:00
86 changed files with 6980 additions and 2209 deletions

View File

@@ -229,7 +229,7 @@ jobs:
           cache: false
       - name: Cache
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           path: |
             ~/go/pkg/mod

View File

@@ -129,7 +129,7 @@ jobs:
       - name: Load Go Build Cache for Docker
         id: go-cache
-        uses: actions/cache@v4
+        uses: actions/cache@v5
         with:
           key: ${{ runner.os }}-${{ steps.imageos.outputs.result }}-go-${{ env.CACHE_NAME }}-${{ env.PLATFORM }}-${{ hashFiles('**/go.mod') }}-${{ hashFiles('**/go.sum') }}
           restore-keys: |
@@ -183,7 +183,7 @@ jobs:
           touch "/tmp/digests/${digest#sha256:}"
       - name: Upload Image Digest
-        uses: actions/upload-artifact@v5
+        uses: actions/upload-artifact@v6
         with:
           name: digests-${{ env.PLATFORM }}
           path: /tmp/digests/*
@@ -198,7 +198,7 @@ jobs:
     steps:
       - name: Download Image Digests
-        uses: actions/download-artifact@v6
+        uses: actions/download-artifact@v7
         with:
           path: /tmp/digests
           pattern: digests-*

MANUAL.html generated
View File

@@ -233,7 +233,7 @@
 <header id="title-block-header">
 <h1 class="title">rclone(1) User Manual</h1>
 <p class="author">Nick Craig-Wood</p>
-<p class="date">Dec 10, 2025</p>
+<p class="date">Nov 21, 2025</p>
 </header>
 <h1 id="name">NAME</h1>
 <p>rclone - manage files on cloud storage</p>
@@ -4531,9 +4531,9 @@ SquareBracket</code></pre>
 <pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,command=echo&quot;
 // Output: stories/The Quick Brown Fox!.txt</code></pre>
 <pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{YYYYMMDD}&quot;
-// Output: stories/The Quick Brown Fox!-20251210</code></pre>
+// Output: stories/The Quick Brown Fox!-20251121</code></pre>
 <pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!&quot; --name-transform &quot;date=-{macfriendlytime}&quot;
-// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM</code></pre>
+// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM</code></pre>
 <pre class="console"><code>rclone convmv &quot;stories/The Quick Brown Fox!.txt&quot; --name-transform &quot;all,regex=[\\.\\w]/ab&quot;
 // Output: ababababababab/ababab ababababab ababababab ababab!abababab</code></pre>
 <p>The regex command generally accepts Perl-style regular expressions,
@@ -22567,7 +22567,7 @@ split into groups.</p>
 --tpslimit float Limit HTTP transactions per second to this
 --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
 --use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.72.1&quot;)</code></pre>
+--user-agent string Set the user-agent to a specified string (default &quot;rclone/v1.72.0&quot;)</code></pre>
 <h2 id="performance">Performance</h2>
 <p>Flags helpful for increasing performance.</p>
 <pre><code> --buffer-size SizeSuffix In memory buffer size when reading files for each --transfer (default 16Mi)
@@ -23024,7 +23024,7 @@ split into groups.</p>
 --gcs-description string Description of the remote
 --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
 --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
---gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
+--gcs-endpoint string Endpoint for the service
 --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --gcs-location string Location for the newly created buckets
 --gcs-no-check-bucket If set, don&#39;t attempt to check the bucket exists or create it
@@ -25234,29 +25234,7 @@ investigation:</p>
 <li><a
 href="https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt"><code>TestBisyncRemoteRemote/normalization</code></a></li>
 </ul></li>
-<li><code>TestGoFile</code> (<code>gofile</code>)
-<ul>
-<li><a
-href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/all_changed</code></a></li>
-<li><a
-href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/backupdir</code></a></li>
-<li><a
-href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/basic</code></a></li>
-<li><a
-href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/changes</code></a></li>
-<li><a
-href="https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt"><code>TestBisyncRemoteLocal/check_access</code></a></li>
-<li><a href="https://pub.rclone.org/integration-tests/current/">78
-more</a></li>
-</ul></li>
-<li><code>TestPcloud</code> (<code>pcloud</code>)
-<ul>
-<li><a
-href="https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt"><code>TestBisyncRemoteRemote/check_access</code></a></li>
-<li><a
-href="https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt"><code>TestBisyncRemoteRemote/check_access_filters</code></a></li>
-</ul></li>
-<li>Updated: 2025-12-10-010012
+<li>Updated: 2025-11-21-010037
 <!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs ---></li>
 </ul>
 <p>The following backends either have not been tested recently or have
@@ -28396,30 +28374,15 @@ centers for low latency.</li>
 <li>St. Petersburg</li>
 <li>Provider: Selectel,Servercore</li>
 </ul></li>
-<li>"ru-3"
+<li>"gis-1"
 <ul>
-<li>St. Petersburg</li>
-<li>Provider: Selectel</li>
+<li>Moscow</li>
+<li>Provider: Servercore</li>
 </ul></li>
 <li>"ru-7"
 <ul>
 <li>Moscow</li>
-<li>Provider: Selectel,Servercore</li>
-</ul></li>
-<li>"gis-1"
-<ul>
-<li>Moscow</li>
-<li>Provider: Selectel,Servercore</li>
-</ul></li>
-<li>"kz-1"
-<ul>
-<li>Kazakhstan</li>
-<li>Provider: Selectel</li>
-</ul></li>
-<li>"uz-2"
-<ul>
-<li>Uzbekistan</li>
-<li>Provider: Selectel</li>
+<li>Provider: Servercore</li>
 </ul></li>
 <li>"uz-2"
 <ul>
@@ -29727,37 +29690,17 @@ AWS,Alibaba,ArvanCloud,Ceph,ChinaMobile,Cloudflare,Cubbit,DigitalOcean,Dreamhost
 </ul></li>
 <li>"s3.ru-1.storage.selcloud.ru"
 <ul>
-<li>St. Petersburg</li>
-<li>Provider: Selectel</li>
-</ul></li>
-<li>"s3.ru-3.storage.selcloud.ru"
-<ul>
-<li>St. Petersburg</li>
-<li>Provider: Selectel</li>
-</ul></li>
-<li>"s3.ru-7.storage.selcloud.ru"
-<ul>
-<li>Moscow</li>
+<li>Saint Petersburg</li>
 <li>Provider: Selectel,Servercore</li>
 </ul></li>
 <li>"s3.gis-1.storage.selcloud.ru"
 <ul>
 <li>Moscow</li>
-<li>Provider: Selectel,Servercore</li>
+<li>Provider: Servercore</li>
 </ul></li>
-<li>"s3.kz-1.storage.selcloud.ru"
+<li>"s3.ru-7.storage.selcloud.ru"
 <ul>
-<li>Kazakhstan</li>
-<li>Provider: Selectel</li>
-</ul></li>
-<li>"s3.uz-2.storage.selcloud.ru"
-<ul>
-<li>Uzbekistan</li>
-<li>Provider: Selectel</li>
-</ul></li>
-<li>"s3.ru-1.storage.selcloud.ru"
-<ul>
-<li>Saint Petersburg</li>
+<li>Moscow</li>
 <li>Provider: Servercore</li>
 </ul></li>
 <li>"s3.uz-2.srvstorage.uz"
@@ -41553,9 +41496,6 @@ storage options, and sharing capabilities. With support for high storage
 limits and seamless integration with rclone, FileLu makes managing files
 in the cloud easy. Its cross-platform file backup services let you
 upload and back up files from any internet-connected device.</p>
-<p><strong>Note</strong> FileLu now has a fully featured S3 backend <a
-href="/s3#filelu-s5">FileLu S5</a>, an industry standard S3 compatible
-object store.</p>
 <h2 id="configuration-16">Configuration</h2>
 <p>Here is an example of how to make a remote called
 <code>filelu</code>. First, run:</p>
@@ -43478,36 +43418,14 @@ decompressed.</p>
 <li>Default: false</li>
 </ul>
 <h4 id="gcs-endpoint">--gcs-endpoint</h4>
-<p>Custom endpoint for the storage API. Leave blank to use the provider
-default.</p>
-<p>When using a custom endpoint that includes a subpath (e.g.
-example.org/custom/endpoint), the subpath will be ignored during upload
-operations due to a limitation in the underlying Google API Go client
-library. Download and listing operations will work correctly with the
-full endpoint path. If you require subpath support for uploads, avoid
-using subpaths in your custom endpoint configuration.</p>
+<p>Endpoint for the service.</p>
+<p>Leave blank normally.</p>
 <p>Properties:</p>
 <ul>
 <li>Config: endpoint</li>
 <li>Env Var: RCLONE_GCS_ENDPOINT</li>
 <li>Type: string</li>
 <li>Required: false</li>
-<li>Examples:
-<ul>
-<li>"storage.example.org"
-<ul>
-<li>Specify a custom endpoint</li>
-</ul></li>
-<li>"storage.example.org:4443"
-<ul>
-<li>Specifying a custom endpoint with port</li>
-</ul></li>
-<li>"storage.example.org:4443/gcs/api"
-<ul>
-<li>Specifying a subpath, see the note, uploads won't use the custom
-path!</li>
-</ul></li>
-</ul></li>
 </ul>
 <h4 id="gcs-encoding">--gcs-encoding</h4>
 <p>The encoding for the backend.</p>
@@ -43752,7 +43670,7 @@ account. It is a ~21 character numerical string.</li>
 <code>https://www.googleapis.com/auth/drive</code> to grant read/write
 access to Google Drive specifically. You can also use
 <code>https://www.googleapis.com/auth/drive.readonly</code> for read
-only access with <code>--drive-scope=drive.readonly</code>.</li>
+only access.</li>
 <li>Click "Authorise"</li>
 </ul>
 <h5 id="configure-rclone-assuming-a-new-install">3. Configure rclone,
@@ -62675,32 +62593,6 @@ the output.</p>
 <!-- autogenerated options stop -->
 <!-- markdownlint-disable line-length -->
 <h1 id="changelog-1">Changelog</h1>
-<h2 id="v1.72.1---2025-12-10">v1.72.1 - 2025-12-10</h2>
-<p><a
-href="https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1">See
-commits</a></p>
-<ul>
-<li>Bug Fixes
-<ul>
-<li>build: update to go1.25.5 to fix <a
-href="https://pkg.go.dev/vuln/GO-2025-4155">CVE-2025-61729</a></li>
-<li>doc fixes (Duncan Smart, Nick Craig-Wood)</li>
-<li>configfile: Fix piped config support (Jonas Tingeborn)</li>
-<li>log
-<ul>
-<li>Fix PID not included in JSON log output (Tingsong Xu)</li>
-<li>Fix backtrace not going to the --log-file (Nick Craig-Wood)</li>
-</ul></li>
-</ul></li>
-<li>Google Cloud Storage
-<ul>
-<li>Improve endpoint parameter docs (Johannes Rothe)</li>
-</ul></li>
-<li>S3
-<ul>
-<li>Add missing regions for Selectel provider (Nick Craig-Wood)</li>
-</ul></li>
-</ul>
 <h2 id="v1.72.0---2025-11-21">v1.72.0 - 2025-11-21</h2>
 <p><a
 href="https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0">See

MANUAL.md generated
View File

@@ -1,6 +1,6 @@
 % rclone(1) User Manual
 % Nick Craig-Wood
-% Dec 10, 2025
+% Nov 21, 2025
 
 # NAME
@@ -5369,12 +5369,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
 ```console
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20251210
+// Output: stories/The Quick Brown Fox!-20251121
 ```
 ```console
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
+// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
 ```
 ```console
@@ -24802,7 +24802,7 @@ Flags for general networking and HTTP stuff.
 --tpslimit float Limit HTTP transactions per second to this
 --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
 --use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
 ```
@@ -25319,7 +25319,7 @@ Backend-only flags (these can be set in the config file also).
 --gcs-description string Description of the remote
 --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
 --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
---gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
+--gcs-endpoint string Endpoint for the service
 --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --gcs-location string Location for the newly created buckets
 --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -27514,17 +27514,7 @@ The following backends have known issues that need more investigation:
 <!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
 - `TestDropbox` (`dropbox`)
   - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
-- `TestGoFile` (`gofile`)
-  - [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-  - [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-  - [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-  - [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-  - [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-  - [78 more](https://pub.rclone.org/integration-tests/current/)
-- `TestPcloud` (`pcloud`)
-  - [`TestBisyncRemoteRemote/check_access`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
-  - [`TestBisyncRemoteRemote/check_access_filters`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
-- Updated: 2025-12-10-010012
+- Updated: 2025-11-21-010037
 <!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
 
 The following backends either have not been tested recently or have known issues
@@ -30353,21 +30343,12 @@ Properties:
 - "ru-1"
   - St. Petersburg
   - Provider: Selectel,Servercore
-- "ru-3"
-  - St. Petersburg
-  - Provider: Selectel
-- "ru-7"
-  - Moscow
-  - Provider: Selectel,Servercore
 - "gis-1"
   - Moscow
-  - Provider: Selectel,Servercore
-- "kz-1"
-  - Kazakhstan
-  - Provider: Selectel
-- "uz-2"
-  - Uzbekistan
-  - Provider: Selectel
+  - Provider: Servercore
+- "ru-7"
+  - Moscow
+  - Provider: Servercore
 - "uz-2"
   - Tashkent, Uzbekistan
   - Provider: Servercore
@@ -31159,25 +31140,13 @@ Properties:
   - SeaweedFS S3 localhost
   - Provider: SeaweedFS
 - "s3.ru-1.storage.selcloud.ru"
-  - St. Petersburg
-  - Provider: Selectel
-- "s3.ru-3.storage.selcloud.ru"
-  - St. Petersburg
-  - Provider: Selectel
-- "s3.ru-7.storage.selcloud.ru"
-  - Moscow
+  - Saint Petersburg
   - Provider: Selectel,Servercore
 - "s3.gis-1.storage.selcloud.ru"
   - Moscow
-  - Provider: Selectel,Servercore
-- "s3.kz-1.storage.selcloud.ru"
-  - Kazakhstan
-  - Provider: Selectel
-- "s3.uz-2.storage.selcloud.ru"
-  - Uzbekistan
-  - Provider: Selectel
-- "s3.ru-1.storage.selcloud.ru"
-  - Saint Petersburg
+  - Provider: Servercore
+- "s3.ru-7.storage.selcloud.ru"
+  - Moscow
   - Provider: Servercore
 - "s3.uz-2.srvstorage.uz"
   - Tashkent, Uzbekistan
@@ -44004,9 +43973,6 @@ managing files in the cloud easy. Its cross-platform file backup
 services let you upload and back up files from any internet-connected
 device.
 
-**Note** FileLu now has a fully featured S3 backend [FileLu S5](/s3#filelu-s5),
-an industry standard S3 compatible object store.
-
 ## Configuration
 
 Here is an example of how to make a remote called `filelu`. First, run:
@@ -46105,14 +46071,9 @@ Properties:
 #### --gcs-endpoint
 
-Custom endpoint for the storage API. Leave blank to use the provider default.
-
-When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint),
-the subpath will be ignored during upload operations due to a limitation in the
-underlying Google API Go client library.
-Download and listing operations will work correctly with the full endpoint path.
-If you require subpath support for uploads, avoid using subpaths in your custom
-endpoint configuration.
+Endpoint for the service.
+
+Leave blank normally.
 
 Properties:
@@ -46120,13 +46081,6 @@ Properties:
 - Env Var: RCLONE_GCS_ENDPOINT
 - Type: string
 - Required: false
-- Examples:
-    - "storage.example.org"
-        - Specify a custom endpoint
-    - "storage.example.org:4443"
-        - Specifying a custom endpoint with port
-    - "storage.example.org:4443/gcs/api"
-        - Specifying a subpath, see the note, uploads won't use the custom path!
 
 #### --gcs-encoding
@@ -46425,7 +46379,7 @@ account key" button.
 `https://www.googleapis.com/auth/drive`
 to grant read/write access to Google Drive specifically.
 You can also use `https://www.googleapis.com/auth/drive.readonly` for read
-only access with `--drive-scope=drive.readonly`.
+only access.
 - Click "Authorise"
 
 ##### 3. Configure rclone, assuming a new install
@@ -66913,22 +66867,6 @@ Options:
 # Changelog
 
-## v1.72.1 - 2025-12-10
-
-[See commits](https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
-
-- Bug Fixes
-    - build: update to go1.25.5 to fix [CVE-2025-61729](https://pkg.go.dev/vuln/GO-2025-4155)
-    - doc fixes (Duncan Smart, Nick Craig-Wood)
-    - configfile: Fix piped config support (Jonas Tingeborn)
-    - log
-        - Fix PID not included in JSON log output (Tingsong Xu)
-        - Fix backtrace not going to the --log-file (Nick Craig-Wood)
-- Google Cloud Storage
-    - Improve endpoint parameter docs (Johannes Rothe)
-- S3
-    - Add missing regions for Selectel provider (Nick Craig-Wood)
-
 ## v1.72.0 - 2025-11-21
 
 [See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)

MANUAL.txt generated
View File

@@ -1,6 +1,6 @@
 rclone(1) User Manual
 Nick Craig-Wood
-Dec 10, 2025
+Nov 21, 2025
 
 NAME
@@ -4588,10 +4588,10 @@ Examples:
 // Output: stories/The Quick Brown Fox!.txt
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20251210
+// Output: stories/The Quick Brown Fox!-20251121
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
+// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
 rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,regex=[\\.\\w]/ab"
 // Output: ababababababab/ababab ababababab ababababab ababab!abababab
@@ -23110,7 +23110,7 @@ Flags for general networking and HTTP stuff.
 --tpslimit float Limit HTTP transactions per second to this
 --tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
 --use-cookies Enable session cookiejar
---user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
+--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
 
 Performance
@@ -23597,7 +23597,7 @@ Backend-only flags (these can be set in the config file also).
 --gcs-description string Description of the remote
 --gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
 --gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
---gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
+--gcs-endpoint string Endpoint for the service
 --gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
 --gcs-location string Location for the newly created buckets
 --gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -25734,17 +25734,7 @@ The following backends have known issues that need more investigation:
 - TestDropbox (dropbox)
   - TestBisyncRemoteRemote/normalization
-- TestGoFile (gofile)
-  - TestBisyncRemoteLocal/all_changed
-  - TestBisyncRemoteLocal/backupdir
-  - TestBisyncRemoteLocal/basic
-  - TestBisyncRemoteLocal/changes
-  - TestBisyncRemoteLocal/check_access
-  - 78 more
-- TestPcloud (pcloud)
-  - TestBisyncRemoteRemote/check_access
-  - TestBisyncRemoteRemote/check_access_filters
-- Updated: 2025-12-10-010012
+- Updated: 2025-11-21-010037
 
 The following backends either have not been tested recently or have
 known issues that are deemed unfixable for the time being:
@@ -28527,21 +28517,12 @@ Properties:
 - "ru-1"
   - St. Petersburg
   - Provider: Selectel,Servercore
-- "ru-3"
-  - St. Petersburg
-  - Provider: Selectel
-- "ru-7"
-  - Moscow
-  - Provider: Selectel,Servercore
 - "gis-1"
   - Moscow
-  - Provider: Selectel,Servercore
-- "kz-1"
-  - Kazakhstan
-  - Provider: Selectel
-- "uz-2"
-  - Uzbekistan
-  - Provider: Selectel
+  - Provider: Servercore
+- "ru-7"
+  - Moscow
+  - Provider: Servercore
 - "uz-2"
   - Tashkent, Uzbekistan
   - Provider: Servercore
@@ -29334,25 +29315,13 @@ Properties:
   - SeaweedFS S3 localhost
   - Provider: SeaweedFS
 - "s3.ru-1.storage.selcloud.ru"
-  - St. Petersburg
-  - Provider: Selectel
-- "s3.ru-3.storage.selcloud.ru"
-  - St. Petersburg
-  - Provider: Selectel
-- "s3.ru-7.storage.selcloud.ru"
-  - Moscow
+  - Saint Petersburg
   - Provider: Selectel,Servercore
 - "s3.gis-1.storage.selcloud.ru"
   - Moscow
-  - Provider: Selectel,Servercore
-- "s3.kz-1.storage.selcloud.ru"
-  - Kazakhstan
-  - Provider: Selectel
-- "s3.uz-2.storage.selcloud.ru"
-  - Uzbekistan
-  - Provider: Selectel
-- "s3.ru-1.storage.selcloud.ru"
-  - Saint Petersburg
+  - Provider: Servercore
+- "s3.ru-7.storage.selcloud.ru"
+  - Moscow
   - Provider: Servercore
 - "s3.uz-2.srvstorage.uz"
   - Tashkent, Uzbekistan
@@ -41751,9 +41720,6 @@ integration with rclone, FileLu makes managing files in the cloud easy.
 Its cross-platform file backup services let you upload and back up files
 from any internet-connected device.
 
-Note FileLu now has a fully featured S3 backend FileLu S5, an industry
-standard S3 compatible object store.
-
 Configuration
 
 Here is an example of how to make a remote called filelu. First, run:
@@ -43730,15 +43696,9 @@ Properties:
 --gcs-endpoint
 
-Custom endpoint for the storage API. Leave blank to use the provider
-default.
-
-When using a custom endpoint that includes a subpath (e.g.
-example.org/custom/endpoint), the subpath will be ignored during upload
-operations due to a limitation in the underlying Google API Go client
-library. Download and listing operations will work correctly with the
-full endpoint path. If you require subpath support for uploads, avoid
-using subpaths in your custom endpoint configuration.
+Endpoint for the service.
+
+Leave blank normally.
 
 Properties:
@@ -43746,14 +43706,6 @@ Properties:
 - Env Var: RCLONE_GCS_ENDPOINT
 - Type: string
 - Required: false
-- Examples:
-    - "storage.example.org"
-        - Specify a custom endpoint
-    - "storage.example.org:4443"
-        - Specifying a custom endpoint with port
-    - "storage.example.org:4443/gcs/api"
-        - Specifying a subpath, see the note, uploads won't use the
-          custom path!
 
 --gcs-encoding
@@ -44037,8 +43989,7 @@ key" button.
 - In the next field, "OAuth Scopes", enter
   https://www.googleapis.com/auth/drive to grant read/write access to
   Google Drive specifically. You can also use
-  https://www.googleapis.com/auth/drive.readonly for read only access
-  with --drive-scope=drive.readonly.
+  https://www.googleapis.com/auth/drive.readonly for read only access.
 - Click "Authorise"
 
 3. Configure rclone, assuming a new install
@@ -64059,22 +64010,6 @@ Options:
 Changelog
 
-v1.72.1 - 2025-12-10
-
-See commits
-
-- Bug Fixes
-    - build: update to go1.25.5 to fix CVE-2025-61729
-    - doc fixes (Duncan Smart, Nick Craig-Wood)
-    - configfile: Fix piped config support (Jonas Tingeborn)
-    - log
-        - Fix PID not included in JSON log output (Tingsong Xu)
-        - Fix backtrace not going to the --log-file (Nick Craig-Wood)
-- Google Cloud Storage
-    - Improve endpoint parameter docs (Johannes Rothe)
-- S3
-    - Add missing regions for Selectel provider (Nick Craig-Wood)
-
 v1.72.0 - 2025-11-21
 
 See commits

View File

@@ -28,21 +28,25 @@ directories to and from different cloud storage providers.
 - Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
 - Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
 - ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
+- Bizfly Cloud Simple Storage [:page_facing_up:](https://rclone.org/s3/#bizflycloud)
 - Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
 - Box [:page_facing_up:](https://rclone.org/box/)
 - Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
 - China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
-- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
 - Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
+- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
+- Cloudinary [:page_facing_up:](https://rclone.org/cloudinary/)
 - Cubbit DS3 [:page_facing_up:](https://rclone.org/s3/#Cubbit)
 - DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
 - Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
 - Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
+- Drime [:page_facing_up:](https://rclone.org/s3/#drime)
 - Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
 - Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
 - Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
 - Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
 - FileLu [:page_facing_up:](https://rclone.org/filelu/)
+- Filen [:page_facing_up:](https://rclone.org/filen/)
 - Files.com [:page_facing_up:](https://rclone.org/filescom/)
 - FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
 - FTP [:page_facing_up:](https://rclone.org/ftp/)
@@ -109,6 +113,7 @@ directories to and from different cloud storage providers.
 - Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
 - Servercore Object Storage [:page_facing_up:](https://rclone.org/s3/#servercore)
 - SFTP [:page_facing_up:](https://rclone.org/sftp/)
+- Shade [:page_facing_up:](https://rclone.org/shade/)
 - SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
 - Spectra Logic [:page_facing_up:](https://rclone.org/s3/#spectralogic)
 - StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)

View File

@@ -1 +1 @@
-v1.72.1
+v1.73.0

View File

@@ -16,11 +16,13 @@ import (
 	_ "github.com/rclone/rclone/backend/compress"
 	_ "github.com/rclone/rclone/backend/crypt"
 	_ "github.com/rclone/rclone/backend/doi"
+	_ "github.com/rclone/rclone/backend/drime"
 	_ "github.com/rclone/rclone/backend/drive"
 	_ "github.com/rclone/rclone/backend/dropbox"
 	_ "github.com/rclone/rclone/backend/fichier"
 	_ "github.com/rclone/rclone/backend/filefabric"
 	_ "github.com/rclone/rclone/backend/filelu"
+	_ "github.com/rclone/rclone/backend/filen"
 	_ "github.com/rclone/rclone/backend/filescom"
 	_ "github.com/rclone/rclone/backend/ftp"
 	_ "github.com/rclone/rclone/backend/gofile"
@@ -55,6 +57,7 @@ import (
 	_ "github.com/rclone/rclone/backend/s3"
 	_ "github.com/rclone/rclone/backend/seafile"
 	_ "github.com/rclone/rclone/backend/sftp"
+	_ "github.com/rclone/rclone/backend/shade"
 	_ "github.com/rclone/rclone/backend/sharefile"
 	_ "github.com/rclone/rclone/backend/sia"
 	_ "github.com/rclone/rclone/backend/smb"
@@ -63,7 +66,6 @@ import (
 	_ "github.com/rclone/rclone/backend/swift"
 	_ "github.com/rclone/rclone/backend/ulozto"
 	_ "github.com/rclone/rclone/backend/union"
-	_ "github.com/rclone/rclone/backend/uptobox"
 	_ "github.com/rclone/rclone/backend/webdav"
 	_ "github.com/rclone/rclone/backend/yandex"
 	_ "github.com/rclone/rclone/backend/zoho"

View File

@@ -86,12 +86,56 @@ var (
 	metadataMu sync.Mutex
 )
 
+// system metadata keys which this backend owns
+var systemMetadataInfo = map[string]fs.MetadataHelp{
+	"cache-control": {
+		Help:    "Cache-Control header",
+		Type:    "string",
+		Example: "no-cache",
+	},
+	"content-disposition": {
+		Help:    "Content-Disposition header",
+		Type:    "string",
+		Example: "inline",
+	},
+	"content-encoding": {
+		Help:    "Content-Encoding header",
+		Type:    "string",
+		Example: "gzip",
+	},
+	"content-language": {
+		Help:    "Content-Language header",
+		Type:    "string",
+		Example: "en-US",
+	},
+	"content-type": {
+		Help:    "Content-Type header",
+		Type:    "string",
+		Example: "text/plain",
+	},
+	"tier": {
+		Help:     "Tier of the object",
+		Type:     "string",
+		Example:  "Hot",
+		ReadOnly: true,
+	},
+	"mtime": {
+		Help:    "Time of last modification, read from rclone metadata",
+		Type:    "RFC 3339",
+		Example: "2006-01-02T15:04:05.999999999Z07:00",
+	},
+}
+
 // Register with Fs
 func init() {
 	fs.Register(&fs.RegInfo{
 		Name:        "azureblob",
 		Description: "Microsoft Azure Blob Storage",
 		NewFs:       NewFs,
+		MetadataInfo: &fs.MetadataInfo{
+			System: systemMetadataInfo,
+			Help:   `User metadata is stored as x-ms-meta- keys. Azure metadata keys are case insensitive and are always returned in lower case.`,
+		},
 		Options: []fs.Option{{
 			Name: "account",
 			Help: `Azure Storage Account Name.
@@ -810,6 +854,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
 	f.features = (&fs.Features{
 		ReadMimeType:  true,
 		WriteMimeType: true,
+		ReadMetadata:  true,
+		WriteMetadata: true,
+		UserMetadata:  true,
 		BucketBased:       true,
 		BucketBasedRootOK: true,
 		SetTier:           true,
@@ -1157,6 +1204,289 @@ func (o *Object) updateMetadataWithModTime(modTime time.Time) {
o.meta[modTimeKey] = modTime.Format(timeFormatOut) o.meta[modTimeKey] = modTime.Format(timeFormatOut)
} }
// parseXMsTags parses the value of the x-ms-tags header into a map.
// It expects comma-separated key=value pairs. Whitespace around keys and
// values is trimmed. Empty pairs and empty keys are rejected.
func parseXMsTags(s string) (map[string]string, error) {
if strings.TrimSpace(s) == "" {
return map[string]string{}, nil
}
out := make(map[string]string)
parts := strings.Split(s, ",")
for _, p := range parts {
p = strings.TrimSpace(p)
if p == "" {
continue
}
kv := strings.SplitN(p, "=", 2)
if len(kv) != 2 {
return nil, fmt.Errorf("invalid tag %q", p)
}
k := strings.TrimSpace(kv[0])
v := strings.TrimSpace(kv[1])
if k == "" {
return nil, fmt.Errorf("invalid tag key in %q", p)
}
out[k] = v
}
return out, nil
}
// mapMetadataToAzure maps a generic metadata map to Azure HTTP headers,
// user metadata, tags and optional modTime override.
// Reserved x-ms-* keys (except x-ms-tags) are ignored for user metadata.
//
// Pass a logger to surface non-fatal parsing issues (e.g. bad mtime).
func mapMetadataToAzure(meta map[string]string, logf func(string, ...any)) (headers blob.HTTPHeaders, userMeta map[string]*string, tags map[string]string, modTime *time.Time, err error) {
if meta == nil {
return headers, nil, nil, nil, nil
}
tmp := make(map[string]string)
for k, v := range meta {
lowerKey := strings.ToLower(k)
switch lowerKey {
case "cache-control":
headers.BlobCacheControl = pString(v)
case "content-disposition":
headers.BlobContentDisposition = pString(v)
case "content-encoding":
headers.BlobContentEncoding = pString(v)
case "content-language":
headers.BlobContentLanguage = pString(v)
case "content-type":
headers.BlobContentType = pString(v)
case "x-ms-tags":
parsed, perr := parseXMsTags(v)
if perr != nil {
return headers, nil, nil, nil, perr
}
// allocate only if there are tags
if len(parsed) > 0 {
tags = parsed
}
case "mtime":
// Accept multiple layouts for tolerance
var parsed time.Time
var pErr error
for _, layout := range []string{time.RFC3339Nano, time.RFC3339, timeFormatOut} {
parsed, pErr = time.Parse(layout, v)
if pErr == nil {
modTime = &parsed
break
}
}
// Log and ignore if unparseable
if modTime == nil && logf != nil {
logf("metadata: couldn't parse mtime %q: %v", v, pErr)
}
case "tier":
// ignore - handled elsewhere
default:
// Filter out other reserved headers so they don't end up as user metadata
if strings.HasPrefix(lowerKey, "x-ms-") {
continue
}
tmp[lowerKey] = v
}
}
userMeta = toAzureMetaPtr(tmp)
return headers, userMeta, tags, modTime, nil
}
// toAzureMetaPtr converts a map[string]string to map[string]*string as used by Azure SDK
func toAzureMetaPtr(in map[string]string) map[string]*string {
if len(in) == 0 {
return nil
}
out := make(map[string]*string, len(in))
for k, v := range in {
vv := v
out[k] = &vv
}
return out
}
// assembleCopyParams prepares headers, metadata and tags for copy operations.
//
// It starts from the source properties, optionally overlays mapped metadata
// from rclone's metadata options, ensures mtime presence when mapping is
// enabled, and returns whether mapping was actually requested (hadMapping).
// assembleCopyParams prepares headers, metadata and tags for copy operations.
//
// If includeBaseMeta is true, start user metadata from the source's metadata
// and overlay mapped values. This matches multipart copy commit behavior.
// If false, only include mapped user metadata (no source baseline) which
// matches previous singlepart StartCopyFromURL semantics.
func assembleCopyParams(ctx context.Context, f *Fs, src fs.Object, srcProps *blob.GetPropertiesResponse, includeBaseMeta bool) (headers blob.HTTPHeaders, meta map[string]*string, tags map[string]string, hadMapping bool, err error) {
// Start from source properties
headers = blob.HTTPHeaders{
BlobCacheControl: srcProps.CacheControl,
BlobContentDisposition: srcProps.ContentDisposition,
BlobContentEncoding: srcProps.ContentEncoding,
BlobContentLanguage: srcProps.ContentLanguage,
BlobContentMD5: srcProps.ContentMD5,
BlobContentType: srcProps.ContentType,
}
// Optionally deep copy user metadata pointers from source. Normalise keys to
// lower-case to avoid duplicate x-ms-meta headers when we later inject/overlay
// metadata (Azure treats keys case-insensitively but Go's http.Header will
// join duplicate keys into a comma separated list, which breaks shared-key
// signing).
if includeBaseMeta && len(srcProps.Metadata) > 0 {
meta = make(map[string]*string, len(srcProps.Metadata))
for k, v := range srcProps.Metadata {
if v != nil {
vv := *v
meta[strings.ToLower(k)] = &vv
}
}
}
// Only consider mapping if metadata pipeline is enabled
if fs.GetConfig(ctx).Metadata {
mapped, mapErr := fs.GetMetadataOptions(ctx, f, src, fs.MetadataAsOpenOptions(ctx))
if mapErr != nil {
return headers, meta, nil, false, fmt.Errorf("failed to map metadata: %w", mapErr)
}
if mapped != nil {
// Map rclone metadata to Azure shapes
mappedHeaders, userMeta, mappedTags, mappedModTime, herr := mapMetadataToAzure(mapped, func(format string, args ...any) { fs.Debugf(f, format, args...) })
if herr != nil {
return headers, meta, nil, false, fmt.Errorf("metadata mapping: %w", herr)
}
hadMapping = true
// Overlay headers (only non-nil)
if mappedHeaders.BlobCacheControl != nil {
headers.BlobCacheControl = mappedHeaders.BlobCacheControl
}
if mappedHeaders.BlobContentDisposition != nil {
headers.BlobContentDisposition = mappedHeaders.BlobContentDisposition
}
if mappedHeaders.BlobContentEncoding != nil {
headers.BlobContentEncoding = mappedHeaders.BlobContentEncoding
}
if mappedHeaders.BlobContentLanguage != nil {
headers.BlobContentLanguage = mappedHeaders.BlobContentLanguage
}
if mappedHeaders.BlobContentType != nil {
headers.BlobContentType = mappedHeaders.BlobContentType
}
// Overlay user metadata
if len(userMeta) > 0 {
if meta == nil {
meta = make(map[string]*string, len(userMeta))
}
for k, v := range userMeta {
meta[k] = v
}
}
// Apply tags if any
if len(mappedTags) > 0 {
tags = mappedTags
}
// Ensure mtime present using mapped or source time
if _, ok := meta[modTimeKey]; !ok {
when := src.ModTime(ctx)
if mappedModTime != nil {
when = *mappedModTime
}
val := when.Format(time.RFC3339Nano)
if meta == nil {
meta = make(map[string]*string, 1)
}
meta[modTimeKey] = &val
}
// Ensure content-type fallback to source if not set by mapper
if headers.BlobContentType == nil {
headers.BlobContentType = srcProps.ContentType
}
} else {
// Mapping enabled but not provided: ensure mtime present based on source ModTime
if _, ok := meta[modTimeKey]; !ok {
when := src.ModTime(ctx)
val := when.Format(time.RFC3339Nano)
if meta == nil {
meta = make(map[string]*string, 1)
}
meta[modTimeKey] = &val
}
}
}
return headers, meta, tags, hadMapping, nil
}
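The overlay step above deliberately copies only the header fields the mapper actually set, so unset fields keep their source-derived values. A generic sketch of that non-nil-pointer merge pattern (the helper name is illustrative, not from the diff):

// Sketch: overlay only the header fields that were explicitly mapped.
func overlayHeaders(dst, src *blob.HTTPHeaders) {
	if src.BlobCacheControl != nil {
		dst.BlobCacheControl = src.BlobCacheControl
	}
	if src.BlobContentType != nil {
		dst.BlobContentType = src.BlobContentType
	}
	// ... repeat for the remaining Blob* fields as in assembleCopyParams
}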
// applyMappedMetadata applies mapped metadata and headers to the object state for uploads.
//
// It reads `--metadata`, `--metadata-set`, and `--metadata-mapper` outputs via fs.GetMetadataOptions
// and updates o.meta, o.tags and ui.httpHeaders accordingly.
func (o *Object) applyMappedMetadata(ctx context.Context, src fs.ObjectInfo, ui *uploadInfo, options []fs.OpenOption) (modTime time.Time, err error) {
// Start from the source modtime; may be overridden by metadata
modTime = src.ModTime(ctx)
// Fetch mapped metadata if --metadata is enabled
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
return modTime, err
}
if meta == nil {
// No metadata processing requested
return modTime, nil
}
// Map metadata using common helper
headers, userMeta, tags, mappedModTime, err := mapMetadataToAzure(meta, func(format string, args ...any) { fs.Debugf(o, format, args...) })
if err != nil {
return modTime, err
}
// Merge headers into ui
if headers.BlobCacheControl != nil {
ui.httpHeaders.BlobCacheControl = headers.BlobCacheControl
}
if headers.BlobContentDisposition != nil {
ui.httpHeaders.BlobContentDisposition = headers.BlobContentDisposition
}
if headers.BlobContentEncoding != nil {
ui.httpHeaders.BlobContentEncoding = headers.BlobContentEncoding
}
if headers.BlobContentLanguage != nil {
ui.httpHeaders.BlobContentLanguage = headers.BlobContentLanguage
}
if headers.BlobContentType != nil {
ui.httpHeaders.BlobContentType = headers.BlobContentType
}
// Apply user metadata to o.meta with a single critical section
if len(userMeta) > 0 {
metadataMu.Lock()
if o.meta == nil {
o.meta = make(map[string]string, len(userMeta))
}
for k, v := range userMeta {
if v != nil {
o.meta[k] = *v
}
}
metadataMu.Unlock()
}
// Apply tags
if len(tags) > 0 {
if o.tags == nil {
o.tags = make(map[string]string, len(tags))
}
for k, v := range tags {
o.tags[k] = v
}
}
if mappedModTime != nil {
modTime = *mappedModTime
}
return modTime, nil
}
// Returns whether file is a directory marker or not
func isDirectoryMarker(size int64, metadata map[string]*string, remote string) bool {
	// Directory markers are 0 length
@@ -1951,18 +2281,19 @@ func (f *Fs) copyMultipart(ctx context.Context, remote, dstContainer, dstPath st
		return nil, err
	}
-	// Convert metadata from source object
+	// Prepare metadata/headers/tags for destination
+	// For multipart commit, include base metadata from source then overlay mapped
+	commitHeaders, commitMeta, commitTags, _, err := assembleCopyParams(ctx, f, src, srcProperties, true)
+	if err != nil {
+		return nil, fmt.Errorf("multipart copy: %w", err)
+	}
+	// Convert metadata from source or mapper
	options := blockblob.CommitBlockListOptions{
-		Metadata: srcProperties.Metadata,
+		Metadata: commitMeta,
+		Tags:     commitTags,
		Tier:     parseTier(f.opt.AccessTier),
-		HTTPHeaders: &blob.HTTPHeaders{
-			BlobCacheControl:       srcProperties.CacheControl,
-			BlobContentDisposition: srcProperties.ContentDisposition,
-			BlobContentEncoding:    srcProperties.ContentEncoding,
-			BlobContentLanguage:    srcProperties.ContentLanguage,
-			BlobContentMD5:         srcProperties.ContentMD5,
-			BlobContentType:        srcProperties.ContentType,
-		},
+		HTTPHeaders: &commitHeaders,
	}
	// Finalise the upload session
@@ -1993,10 +2324,36 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
return nil, fmt.Errorf("single part copy: source auth: %w", err) return nil, fmt.Errorf("single part copy: source auth: %w", err)
} }
// Start the copy // Prepare mapped metadata/tags/headers if requested
options := blob.StartCopyFromURLOptions{ options := blob.StartCopyFromURLOptions{
Tier: parseTier(f.opt.AccessTier), Tier: parseTier(f.opt.AccessTier),
} }
var postHeaders *blob.HTTPHeaders
// Read source properties and assemble params; this also handles the case when mapping is disabled
srcProps, err := src.readMetaDataAlways(ctx)
if err != nil {
return nil, fmt.Errorf("single part copy: read source properties: %w", err)
}
// For singlepart copy, do not include base metadata from source in StartCopyFromURL
headers, meta, tags, hadMapping, aerr := assembleCopyParams(ctx, f, src, srcProps, false)
if aerr != nil {
return nil, fmt.Errorf("single part copy: %w", aerr)
}
// Apply tags and post-copy headers only when mapping requested changes
if len(tags) > 0 {
options.BlobTags = make(map[string]string, len(tags))
for k, v := range tags {
options.BlobTags[k] = v
}
}
if hadMapping {
// Only set metadata explicitly when mapping was requested; otherwise
// let the service copy source metadata (including mtime) automatically.
if len(meta) > 0 {
options.Metadata = meta
}
postHeaders = &headers
}
var startCopy blob.StartCopyFromURLResponse var startCopy blob.StartCopyFromURLResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
startCopy, err = dstBlobSVC.StartCopyFromURL(ctx, srcURL, &options) startCopy, err = dstBlobSVC.StartCopyFromURL(ctx, srcURL, &options)
@@ -2026,6 +2383,16 @@ func (f *Fs) copySinglepart(ctx context.Context, remote, dstContainer, dstPath s
pollTime = min(2*pollTime, time.Second) pollTime = min(2*pollTime, time.Second)
} }
// If mapper requested header changes, set them post-copy
if postHeaders != nil {
blb := f.getBlobSVC(dstContainer, dstPath)
_, setErr := blb.SetHTTPHeaders(ctx, *postHeaders, nil)
if setErr != nil {
return nil, fmt.Errorf("single part copy: failed to set headers: %w", setErr)
}
}
// Metadata (when requested) is set via StartCopyFromURL options.Metadata
return f.NewObject(ctx, remote) return f.NewObject(ctx, remote)
} }
@@ -2157,6 +2524,35 @@ func (o *Object) getMetadata() (metadata map[string]*string) {
	return metadata
}
// Metadata returns metadata for an object
//
// It returns a combined view of system and user metadata.
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
// Ensure metadata is loaded
if err := o.readMetaData(ctx); err != nil {
return nil, err
}
m := fs.Metadata{}
// System metadata we expose
if !o.modTime.IsZero() {
m["mtime"] = o.modTime.Format(time.RFC3339Nano)
}
if o.accessTier != "" {
m["tier"] = string(o.accessTier)
}
// Merge user metadata (already lower-cased keys)
metadataMu.Lock()
for k, v := range o.meta {
m[k] = v
}
metadataMu.Unlock()
return m, nil
}
// decodeMetaDataFromPropertiesResponse sets the metadata from the data passed in
//
// Sets
@@ -2995,17 +3391,19 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
	// 	containerPath = containerPath[:len(containerPath)-1]
	// }
-	// Update Mod time
-	o.updateMetadataWithModTime(src.ModTime(ctx))
-	if err != nil {
-		return ui, err
-	}
-	// Create the HTTP headers for the upload
+	// Start with default content-type based on source
	ui.httpHeaders = blob.HTTPHeaders{
		BlobContentType: pString(fs.MimeType(ctx, src)),
	}
+	// Apply mapped metadata/headers/tags if requested
+	modTime, err := o.applyMappedMetadata(ctx, src, &ui, options)
+	if err != nil {
+		return ui, err
+	}
+	// Ensure mtime is set in metadata based on possibly overridden modTime
+	o.updateMetadataWithModTime(modTime)
	// Compute the Content-MD5 of the file. As we stream all uploads it
	// will be set in PutBlockList API call using the 'x-ms-blob-content-md5' header
	if !o.fs.opt.DisableCheckSum {


@@ -5,11 +5,16 @@ package azureblob
import (
	"context"
	"encoding/base64"
+	"fmt"
+	"net/http"
	"strings"
	"testing"
+	"time"
+	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blob"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/blockblob"
	"github.com/rclone/rclone/fs"
+	"github.com/rclone/rclone/fs/object"
	"github.com/rclone/rclone/fstest"
	"github.com/rclone/rclone/fstest/fstests"
	"github.com/rclone/rclone/lib/random"
@@ -148,4 +153,417 @@ func (f *Fs) testWriteUncommittedBlocks(t *testing.T) {
func (f *Fs) InternalTest(t *testing.T) {
	t.Run("Features", f.testFeatures)
	t.Run("WriteUncommittedBlocks", f.testWriteUncommittedBlocks)
t.Run("Metadata", f.testMetadataPaths)
}
// helper to read blob properties for an object
func getProps(ctx context.Context, t *testing.T, o fs.Object) *blob.GetPropertiesResponse {
ao := o.(*Object)
props, err := ao.readMetaDataAlways(ctx)
require.NoError(t, err)
return props
}
// helper to assert select headers and user metadata
func assertHeadersAndMetadata(t *testing.T, props *blob.GetPropertiesResponse, want map[string]string, wantUserMeta map[string]string) {
// Headers
get := func(p *string) string {
if p == nil {
return ""
}
return *p
}
if v, ok := want["content-type"]; ok {
assert.Equal(t, v, get(props.ContentType), "content-type")
}
if v, ok := want["cache-control"]; ok {
assert.Equal(t, v, get(props.CacheControl), "cache-control")
}
if v, ok := want["content-disposition"]; ok {
assert.Equal(t, v, get(props.ContentDisposition), "content-disposition")
}
if v, ok := want["content-encoding"]; ok {
assert.Equal(t, v, get(props.ContentEncoding), "content-encoding")
}
if v, ok := want["content-language"]; ok {
assert.Equal(t, v, get(props.ContentLanguage), "content-language")
}
// User metadata (case-insensitive keys from service)
norm := make(map[string]*string, len(props.Metadata))
for kk, vv := range props.Metadata {
norm[strings.ToLower(kk)] = vv
}
for k, v := range wantUserMeta {
pv, ok := norm[strings.ToLower(k)]
if assert.True(t, ok, fmt.Sprintf("missing user metadata key %q", k)) {
if pv == nil {
assert.Equal(t, v, "", k)
} else {
assert.Equal(t, v, *pv, k)
}
} else {
// Log available keys for diagnostics
keys := make([]string, 0, len(props.Metadata))
for kk := range props.Metadata {
keys = append(keys, kk)
}
t.Logf("available user metadata keys: %v", keys)
}
}
}
// helper to read blob tags for an object
func getTagsMap(ctx context.Context, t *testing.T, o fs.Object) map[string]string {
ao := o.(*Object)
blb := ao.getBlobSVC()
resp, err := blb.GetTags(ctx, nil)
require.NoError(t, err)
out := make(map[string]string)
for _, tag := range resp.BlobTagSet {
if tag.Key != nil {
k := *tag.Key
v := ""
if tag.Value != nil {
v = *tag.Value
}
out[k] = v
}
}
return out
}
// Test metadata across different write paths
func (f *Fs) testMetadataPaths(t *testing.T) {
ctx := context.Background()
if testing.Short() {
t.Skip("skipping in short mode")
}
// Common expected metadata and headers
baseMeta := fs.Metadata{
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
// Note: Don't set content-encoding here to avoid download decoding differences
// We will set a custom user metadata key
"potato": "royal",
// and modtime
"mtime": fstest.Time("2009-05-06T04:05:06.499999999Z").Format(time.RFC3339Nano),
}
// Singlepart upload
t.Run("PutSinglepart", func(t *testing.T) {
// size less than chunk size
contents := random.String(int(f.opt.ChunkSize / 2))
item := fstest.NewItem("meta-single.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
// override content-type via metadata mapping
meta := fs.Metadata{}
meta.Merge(baseMeta)
meta["content-type"] = "text/plain"
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", meta)
defer func() { _ = obj.Remove(ctx) }()
props := getProps(ctx, t, obj)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
}, map[string]string{
"potato": "royal",
})
_ = http.StatusOK // keep import for parity but don't inspect RawResponse
})
// Multipart upload
t.Run("PutMultipart", func(t *testing.T) {
// size greater than chunk size to force multipart
contents := random.String(int(f.opt.ChunkSize + 1024))
item := fstest.NewItem("meta-multipart.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{}
meta.Merge(baseMeta)
meta["content-type"] = "application/json"
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/html", meta)
defer func() { _ = obj.Remove(ctx) }()
props := getProps(ctx, t, obj)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/json",
"cache-control": "no-cache",
"content-disposition": "inline",
"content-language": "en-US",
}, map[string]string{
"potato": "royal",
})
// Tags: Singlepart upload
t.Run("PutSinglepartTags", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize / 2))
item := fstest.NewItem("tags-single.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{
"x-ms-tags": "env=dev,team=sync",
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "text/plain", meta)
defer func() { _ = obj.Remove(ctx) }()
tags := getTagsMap(ctx, t, obj)
assert.Equal(t, "dev", tags["env"])
assert.Equal(t, "sync", tags["team"])
})
// Tags: Multipart upload
t.Run("PutMultipartTags", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize + 2048))
item := fstest.NewItem("tags-multipart.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
meta := fs.Metadata{
"x-ms-tags": "project=alpha,release=2025-08",
}
obj := fstests.PutTestContentsMetadata(ctx, t, f, &item, true, contents, true, "application/octet-stream", meta)
defer func() { _ = obj.Remove(ctx) }()
tags := getTagsMap(ctx, t, obj)
assert.Equal(t, "alpha", tags["project"])
assert.Equal(t, "2025-08", tags["release"])
})
})
// Singlepart copy with metadata-set mapping; omit content-type to exercise fallback
t.Run("CopySinglepart", func(t *testing.T) {
// create small source
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("meta-copy-single-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"cache-control": "private, max-age=60",
"content-disposition": "attachment; filename=foo.txt",
"content-language": "fr",
// no content-type: should fallback to source
"potato": "maris",
}
// do copy
dstName := "meta-copy-single-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
// content-type should fallback to source (text/plain)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
"cache-control": "private, max-age=60",
"content-disposition": "attachment; filename=foo.txt",
"content-language": "fr",
}, map[string]string{
"potato": "maris",
})
// mtime should be populated on copy when --metadata is used
// and should equal the source ModTime (RFC3339Nano)
// Read user metadata (case-insensitive)
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
// parse and compare times ignoring formatting differences
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// CopySinglepart with only --metadata (no MetadataSet) must inject mtime and preserve src content-type
t.Run("CopySinglepart_MetadataOnly", func(t *testing.T) {
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("meta-copy-single-only-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
dstName := "meta-copy-single-only-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "text/plain",
}, map[string]string{})
// Assert mtime injected
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// Multipart copy with metadata-set mapping; omit content-type to exercise fallback
t.Run("CopyMultipart", func(t *testing.T) {
// create large source to force multipart
contents := random.String(int(f.opt.CopyCutoff + 1024))
srcItem := fstest.NewItem("meta-copy-multi-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"cache-control": "max-age=0, no-cache",
// omit content-type to trigger fallback
"content-language": "de",
"potato": "desiree",
}
dstName := "meta-copy-multi-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
// content-type should fallback to source (application/octet-stream)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/octet-stream",
"cache-control": "max-age=0, no-cache",
"content-language": "de",
}, map[string]string{
"potato": "desiree",
})
// mtime should be populated on copy when --metadata is used
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// CopyMultipart with only --metadata must inject mtime and preserve src content-type
t.Run("CopyMultipart_MetadataOnly", func(t *testing.T) {
contents := random.String(int(f.opt.CopyCutoff + 2048))
srcItem := fstest.NewItem("meta-copy-multi-only-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
dstName := "meta-copy-multi-only-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
props := getProps(ctx2, t, dst)
assertHeadersAndMetadata(t, props, map[string]string{
"content-type": "application/octet-stream",
}, map[string]string{})
m := props.Metadata
var gotMtime string
for k, v := range m {
if strings.EqualFold(k, "mtime") && v != nil {
gotMtime = *v
break
}
}
if assert.NotEmpty(t, gotMtime, "mtime not set on destination metadata") {
parsed, err := time.Parse(time.RFC3339Nano, gotMtime)
require.NoError(t, err)
assert.True(t, srcObj.ModTime(ctx2).Equal(parsed), "dst mtime should equal src ModTime")
}
})
// Tags: Singlepart copy
t.Run("CopySinglepartTags", func(t *testing.T) {
// create small source
contents := random.String(int(f.opt.ChunkSize / 2))
srcItem := fstest.NewItem("tags-copy-single-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "text/plain", nil)
defer func() { _ = srcObj.Remove(ctx) }()
// set mapping via MetadataSet including tags
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"x-ms-tags": "copy=single,mode=test",
}
dstName := "tags-copy-single-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
tags := getTagsMap(ctx2, t, dst)
assert.Equal(t, "single", tags["copy"])
assert.Equal(t, "test", tags["mode"])
})
// Tags: Multipart copy
t.Run("CopyMultipartTags", func(t *testing.T) {
// create large source to force multipart
contents := random.String(int(f.opt.CopyCutoff + 4096))
srcItem := fstest.NewItem("tags-copy-multi-src.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
srcObj := fstests.PutTestContentsMetadata(ctx, t, f, &srcItem, true, contents, true, "application/octet-stream", nil)
defer func() { _ = srcObj.Remove(ctx) }()
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
ci.MetadataSet = fs.Metadata{
"x-ms-tags": "copy=multi,mode=test",
}
dstName := "tags-copy-multi-dst.txt"
dst, err := f.Copy(ctx2, srcObj, dstName)
require.NoError(t, err)
defer func() { _ = dst.Remove(ctx2) }()
tags := getTagsMap(ctx2, t, dst)
assert.Equal(t, "multi", tags["copy"])
assert.Equal(t, "test", tags["mode"])
})
// Negative: invalid x-ms-tags must error
t.Run("InvalidXMsTags", func(t *testing.T) {
contents := random.String(32)
item := fstest.NewItem("tags-invalid.txt", contents, fstest.Time("2001-05-06T04:05:06.499999999Z"))
// construct ObjectInfo with invalid x-ms-tags
buf := strings.NewReader(contents)
// Build obj info with metadata
meta := fs.Metadata{
"x-ms-tags": "badpair-without-equals",
}
// force metadata on
ctx2, ci := fs.AddConfig(ctx)
ci.Metadata = true
obji := object.NewStaticObjectInfo(item.Path, item.ModTime, int64(len(contents)), true, nil, nil)
obji = obji.WithMetadata(meta).WithMimeType("text/plain")
_, err := f.Put(ctx2, buf, obji)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid tag")
})
}


@@ -133,23 +133,32 @@ type File struct {
	Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
}
-// AuthorizeAccountResponse is as returned from the b2_authorize_account call
-type AuthorizeAccountResponse struct {
+// StorageAPI is as returned from the b2_authorize_account call
+type StorageAPI struct {
	AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
-	AccountID string `json:"accountId"` // The identifier for the account.
	Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
-		BucketID string `json:"bucketId"` // When present, access is restricted to one bucket.
-		BucketName string `json:"bucketName"` // When present, name of bucket - may be empty
-		Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has.
+		Buckets []struct { // When present, access is restricted to one or more buckets.
+			ID string `json:"id"` // ID of bucket
+			Name string `json:"name"` // When present, name of bucket - may be empty
+		} `json:"buckets"`
+		Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has for every bucket.
		NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
	} `json:"allowed"`
	APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
-	AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
	DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
	MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
	RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
}
+// AuthorizeAccountResponse is as returned from the b2_authorize_account call
+type AuthorizeAccountResponse struct {
+	AccountID string `json:"accountId"` // The identifier for the account.
+	AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
+	APIs struct { // Supported APIs for this account / key. These are API-dependent JSON objects.
+		Storage StorageAPI `json:"storageApi"`
+	} `json:"apiInfo"`
+}
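A hedged sketch of decoding the new v4 response shape (fabricated payload and helper name; assumes encoding/json is imported in this package):

// Sketch: the nested apiInfo/storageApi layout decodes like this.
func exampleDecodeAuthorize() (AuthorizeAccountResponse, error) {
	const body = `{"accountId":"abc123","authorizationToken":"tok",
		"apiInfo":{"storageApi":{"apiUrl":"https://api000.example.com",
		"downloadUrl":"https://f000.example.com",
		"allowed":{"buckets":[{"id":"b1","name":"my-bucket"}],"capabilities":["readFiles"]}}}}`
	var resp AuthorizeAccountResponse
	err := json.Unmarshal([]byte(body), &resp)
	// resp.APIs.Storage.Allowed.Buckets[0].Name == "my-bucket"
	return resp, err
}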
// ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct {
	AccountID string `json:"accountId"` // The identifier for the account.


@@ -607,17 +607,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
	if err != nil {
		return nil, fmt.Errorf("failed to authorize account: %w", err)
	}
-	// If this is a key limited to a single bucket, it must exist already
-	if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
-		allowedBucket := f.opt.Enc.ToStandardName(f.info.Allowed.BucketName)
-		if allowedBucket == "" {
-			return nil, errors.New("bucket that application key is restricted to no longer exists")
-		}
-		if allowedBucket != f.rootBucket {
-			return nil, fmt.Errorf("you must use bucket %q with this application key", allowedBucket)
+	// If this is a key limited to one or more buckets, one of them must exist
+	// and be ours.
+	if f.rootBucket != "" && len(f.info.APIs.Storage.Allowed.Buckets) != 0 {
+		buckets := f.info.APIs.Storage.Allowed.Buckets
+		var rootFound = false
+		var rootID string
+		for _, b := range buckets {
+			allowedBucket := f.opt.Enc.ToStandardName(b.Name)
+			if allowedBucket == "" {
+				fs.Debugf(f, "bucket %q that application key is restricted to no longer exists", b.ID)
+				continue
+			}
+			if allowedBucket == f.rootBucket {
+				rootFound = true
+				rootID = b.ID
+			}
+		}
+		if !rootFound {
+			return nil, fmt.Errorf("you must use bucket(s) %q with this application key", buckets)
		}
		f.cache.MarkOK(f.rootBucket)
-		f.setBucketID(f.rootBucket, f.info.Allowed.BucketID)
+		f.setBucketID(f.rootBucket, rootID)
	}
	if f.rootBucket != "" && f.rootDirectory != "" {
		// Check to see if the (bucket,directory) is actually an existing file
@@ -643,7 +655,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
	defer f.authMu.Unlock()
	opts := rest.Opts{
		Method:   "GET",
-		Path:     "/b2api/v1/b2_authorize_account",
+		Path:     "/b2api/v4/b2_authorize_account",
		RootURL:  f.opt.Endpoint,
		UserName: f.opt.Account,
		Password: f.opt.Key,
@@ -656,13 +668,13 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
	if err != nil {
		return fmt.Errorf("failed to authenticate: %w", err)
	}
-	f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
+	f.srv.SetRoot(f.info.APIs.Storage.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
	return nil
}
// hasPermission returns if the current AuthorizationToken has the selected permission
func (f *Fs) hasPermission(permission string) bool {
-	return slices.Contains(f.info.Allowed.Capabilities, permission)
+	return slices.Contains(f.info.APIs.Storage.Allowed.Capabilities, permission)
}
// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
@@ -1067,9 +1079,12 @@ type listBucketFn func(*api.Bucket) error
// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(ctx context.Context, bucketName string, fn listBucketFn) error {
+	responses := make([]api.ListBucketsResponse, len(f.info.APIs.Storage.Allowed.Buckets))[:0]
+	call := func(id string) error {
	var account = api.ListBucketsRequest{
		AccountID: f.info.AccountID,
-		BucketID:  f.info.Allowed.BucketID,
+		BucketID:  id,
	}
	if bucketName != "" && account.BucketID == "" {
		account.BucketName = f.opt.Enc.FromStandardName(bucketName)
@@ -1087,10 +1102,42 @@ func (f *Fs) listBucketsToFn(ctx context.Context, bucketName string, fn listBuck
		if err != nil {
			return err
		}
+		responses = append(responses, response)
+		return nil
+	}
+	for i := range f.info.APIs.Storage.Allowed.Buckets {
+		b := &f.info.APIs.Storage.Allowed.Buckets[i]
+		// Empty names indicate a bucket that no longer exists, this is non-fatal
+		// for multi-bucket API keys.
+		if b.Name == "" {
+			continue
+		}
+		// When requesting a specific bucket skip over non-matching names
+		if bucketName != "" && b.Name != bucketName {
+			continue
+		}
+		err := call(b.ID)
+		if err != nil {
+			return err
+		}
+	}
+	if len(f.info.APIs.Storage.Allowed.Buckets) == 0 {
+		err := call("")
+		if err != nil {
+			return err
+		}
+	}
	f.bucketIDMutex.Lock()
	f.bucketTypeMutex.Lock()
	f._bucketID = make(map[string]string, 1)
	f._bucketType = make(map[string]string, 1)
+	for ri := range responses {
+		response := &responses[ri]
	for i := range response.Buckets {
		bucket := &response.Buckets[i]
		bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
@@ -1098,15 +1145,19 @@ func (f *Fs) listBucketsToFn(ctx context.Context, bucketName string, fn listBuck
		f._bucketID[bucket.Name] = bucket.ID
		f._bucketType[bucket.Name] = bucket.Type
	}
+	}
	f.bucketTypeMutex.Unlock()
	f.bucketIDMutex.Unlock()
+	for ri := range responses {
+		response := &responses[ri]
	for i := range response.Buckets {
		bucket := &response.Buckets[i]
-		err = fn(bucket)
+		err := fn(bucket)
		if err != nil {
			return err
		}
	}
+	}
	return nil
}
@@ -1606,7 +1657,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
	bucket, bucketPath := f.split(remote)
	var RootURL string
	if f.opt.DownloadURL == "" {
-		RootURL = f.info.DownloadURL
+		RootURL = f.info.APIs.Storage.DownloadURL
	} else {
		RootURL = f.opt.DownloadURL
	}
@@ -1957,7 +2008,7 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
	// Use downloadUrl from backblaze if downloadUrl is not set
	// otherwise use the custom downloadUrl
	if o.fs.opt.DownloadURL == "" {
-		opts.RootURL = o.fs.info.DownloadURL
+		opts.RootURL = o.fs.info.APIs.Storage.DownloadURL
	} else {
		opts.RootURL = o.fs.opt.DownloadURL
	}


@@ -403,14 +403,14 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
	if ciphertext == "" {
		return "", nil
	}
-	pos := strings.Index(ciphertext, ".")
-	if pos == -1 {
+	before, after, ok := strings.Cut(ciphertext, ".")
+	if !ok {
		return "", ErrorNotAnEncryptedFile
	} // No .
-	num := ciphertext[:pos]
+	num := before
	if num == "!" {
		// No rotation; probably original was not valid unicode
-		return ciphertext[pos+1:], nil
+		return after, nil
	}
	dir, err := strconv.Atoi(num)
	if err != nil {
@@ -425,7 +425,7 @@ func (c *Cipher) deobfuscateSegment(ciphertext string) (string, error) {
	var result bytes.Buffer
	inQuote := false
-	for _, runeValue := range ciphertext[pos+1:] {
+	for _, runeValue := range after {
		switch {
		case inQuote:
			_, _ = result.WriteRune(runeValue)
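For context, strings.Cut (Go 1.18+) replaces the index-and-slice idiom with a single call. A tiny illustration (not from the diff):

// Sketch: strings.Cut splits around the first separator.
before, after, ok := strings.Cut("12.potato", ".")
// before == "12", after == "potato", ok == true
_, _, ok = strings.Cut("noseparator", ".")
// ok == false, the case strings.Index used to signal with -1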

backend/drime/api/types.go

@@ -0,0 +1,237 @@
// Package api has type definitions for drime
//
// Converted from the API docs with help from https://mholt.github.io/json-to-go/
package api
import (
"encoding/json"
"fmt"
"time"
)
// Types of things in Item
const (
ItemTypeFolder = "folder"
)
// User information
type User struct {
Email string `json:"email"`
ID json.Number `json:"id"`
Avatar string `json:"avatar"`
ModelType string `json:"model_type"`
OwnsEntry bool `json:"owns_entry"`
EntryPermissions []any `json:"entry_permissions"`
DisplayName string `json:"display_name"`
}
// Permissions for a file
type Permissions struct {
FilesUpdate bool `json:"files.update"`
FilesCreate bool `json:"files.create"`
FilesDownload bool `json:"files.download"`
FilesDelete bool `json:"files.delete"`
}
// Item describes a folder or a file as returned by /drive/file-entries
type Item struct {
ID json.Number `json:"id"`
Name string `json:"name"`
Description any `json:"description"`
FileName string `json:"file_name"`
Mime string `json:"mime"`
Color any `json:"color"`
Backup bool `json:"backup"`
Tracked int `json:"tracked"`
FileSize int64 `json:"file_size"`
UserID json.Number `json:"user_id"`
ParentID json.Number `json:"parent_id"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
DeletedAt any `json:"deleted_at"`
IsDeleted int `json:"is_deleted"`
Path string `json:"path"`
DiskPrefix any `json:"disk_prefix"`
Type string `json:"type"`
Extension any `json:"extension"`
FileHash any `json:"file_hash"`
Public bool `json:"public"`
Thumbnail bool `json:"thumbnail"`
MuxStatus any `json:"mux_status"`
ThumbnailURL any `json:"thumbnail_url"`
WorkspaceID int `json:"workspace_id"`
IsEncrypted int `json:"is_encrypted"`
Iv any `json:"iv"`
VaultID any `json:"vault_id"`
OwnerID int `json:"owner_id"`
Hash string `json:"hash"`
URL string `json:"url"`
Users []User `json:"users"`
Tags []any `json:"tags"`
Permissions Permissions `json:"permissions"`
}
// Listing response
type Listing struct {
CurrentPage int `json:"current_page"`
Data []Item `json:"data"`
From int `json:"from"`
LastPage int `json:"last_page"`
NextPage int `json:"next_page"`
PerPage int `json:"per_page"`
PrevPage int `json:"prev_page"`
To int `json:"to"`
Total int `json:"total"`
}
// UploadResponse for a file
type UploadResponse struct {
Status string `json:"status"`
FileEntry Item `json:"fileEntry"`
}
// CreateFolderRequest for a folder
type CreateFolderRequest struct {
Name string `json:"name"`
ParentID json.Number `json:"parentId,omitempty"`
}
// CreateFolderResponse for a folder
type CreateFolderResponse struct {
Status string `json:"status"`
Folder Item `json:"folder"`
}
// Error is returned from drime when things go wrong
type Error struct {
Message string `json:"message"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("Error %q", e.Message)
return out
}
// Check Error satisfies the error interface
var _ error = (*Error)(nil)
// DeleteRequest is the input to DELETE /file-entries
type DeleteRequest struct {
EntryIDs []string `json:"entryIds"`
DeleteForever bool `json:"deleteForever"`
}
// DeleteResponse is the input to DELETE /file-entries
type DeleteResponse struct {
Status string `json:"status"`
Message string `json:"message"`
Errors map[string]string `json:"errors"`
}
// UpdateItemRequest describes the updates to be done to an item for PUT /file-entries/{id}/
type UpdateItemRequest struct {
Name string `json:"name,omitempty"`
Description string `json:"description,omitempty"`
}
// UpdateItemResponse is returned by PUT /file-entries/{id}/
type UpdateItemResponse struct {
Status string `json:"status"`
FileEntry Item `json:"fileEntry"`
}
// MoveRequest is the input to /file-entries/move
type MoveRequest struct {
EntryIDs []string `json:"entryIds"`
DestinationID string `json:"destinationId"`
}
// MoveResponse is returned by POST /file-entries/move
type MoveResponse struct {
Status string `json:"status"`
Entries []Item `json:"entries"`
}
// CopyRequest is the input to /file-entries/duplicate
type CopyRequest struct {
EntryIDs []string `json:"entryIds"`
DestinationID string `json:"destinationId"`
}
// CopyResponse is returned by POST /file-entries/duplicate
type CopyResponse struct {
Status string `json:"status"`
Entries []Item `json:"entries"`
}
// MultiPartCreateRequest is the input of POST /s3/multipart/create
type MultiPartCreateRequest struct {
Filename string `json:"filename"`
Mime string `json:"mime"`
Size int64 `json:"size"`
Extension string `json:"extension"`
ParentID json.Number `json:"parent_id"`
RelativePath string `json:"relativePath"`
}
// MultiPartCreateResponse is returned by POST /s3/multipart/create
type MultiPartCreateResponse struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
}
// CompletedPart is the type for a completed part when making a multipart upload.
type CompletedPart struct {
ETag string `json:"ETag"`
PartNumber int32 `json:"PartNumber"`
}
// MultiPartGetURLsRequest is the input of POST /s3/multipart/batch-sign-part-urls
type MultiPartGetURLsRequest struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
PartNumbers []int `json:"partNumbers"`
}
// MultiPartGetURLsResponse is the result of POST /s3/multipart/batch-sign-part-urls
type MultiPartGetURLsResponse struct {
URLs []struct {
URL string `json:"url"`
PartNumber int32 `json:"partNumber"`
} `json:"urls"`
}
// MultiPartCompleteRequest is the input to POST /s3/multipart/complete
type MultiPartCompleteRequest struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
Parts []CompletedPart `json:"parts"`
}
// MultiPartCompleteResponse is the result of POST /s3/multipart/complete
type MultiPartCompleteResponse struct {
Location string `json:"location"`
}
// MultiPartEntriesRequest is the input to POST /s3/entries
type MultiPartEntriesRequest struct {
ClientMime string `json:"clientMime"`
ClientName string `json:"clientName"`
Filename string `json:"filename"`
Size int64 `json:"size"`
ClientExtension string `json:"clientExtension"`
ParentID json.Number `json:"parent_id"`
RelativePath string `json:"relativePath"`
}
// MultiPartEntriesResponse is the result of POST /s3/entries
type MultiPartEntriesResponse struct {
FileEntry Item `json:"fileEntry"`
}
// MultiPartAbort is the input of POST /s3/multipart/abort
type MultiPartAbort struct {
UploadID string `json:"uploadId"`
Key string `json:"key"`
}
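Taken together these types describe the S3-style multipart flow: create an upload, sign URLs for each part, then complete (or abort). A hedged sketch of the request bodies a client would serialise, with fabricated values (the helper name is invented; encoding/json is already imported in this file):

// Sketch: request bodies for a two-part upload (values invented).
func exampleMultipartBodies() (createBody, completeBody []byte, err error) {
	createBody, err = json.Marshal(MultiPartCreateRequest{
		Filename: "big.bin",
		Mime:     "application/octet-stream",
		Size:     10 << 20,
	})
	if err != nil {
		return nil, nil, err
	}
	completeBody, err = json.Marshal(MultiPartCompleteRequest{
		UploadID: "upload-1",
		Key:      "key-1",
		Parts: []CompletedPart{
			{ETag: `"etag1"`, PartNumber: 1},
			{ETag: `"etag2"`, PartNumber: 2},
		},
	})
	return createBody, completeBody, err
}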

backend/drime/drime.go
File diff suppressed because it is too large

@@ -0,0 +1,33 @@
// Drime filesystem interface
package drime
import (
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestDrime:",
NilObject: (*Object)(nil),
ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: minChunkSize,
},
})
}
func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs)
}
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
)

backend/filen/filen.go
File diff suppressed because it is too large

@@ -0,0 +1,14 @@
package filen
import (
"testing"
"github.com/rclone/rclone/fstest/fstests"
)
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFilen:",
NilObject: (*Object)(nil),
})
}


@@ -204,6 +204,12 @@ Example:
			Help: `URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
+
+Supports the format http://user:pass@host:port, http://host:port, http://host.
+
+Example:
+
+    http://myUser:myPass@proxyhostname.example.com:8000
`,
			Advanced: true,
		}, {
@@ -892,7 +898,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
	resultchan := make(chan []*ftp.Entry, 1)
	errchan := make(chan error, 1)
-	go func() {
+	go func(c *ftp.ServerConn) {
		result, err := c.List(f.dirFromStandardPath(path.Join(f.root, dir)))
		f.putFtpConnection(&c, err)
		if err != nil {
@@ -900,7 +906,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
			return
		}
		resultchan <- result
-	}()
+	}(c)
	// Wait for List for up to Timeout seconds
	timer := time.NewTimer(f.ci.TimeoutOrInfinite())
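The fix passes the connection into the goroutine as a parameter instead of capturing it by closure, which pins the value the goroutine sees. A generic sketch of the difference (not from the diff; the helper name is invented):

// Sketch: a goroutine parameter is evaluated when the goroutine starts.
func demoCapture() {
	x := 1
	go func(v int) {
		fmt.Println(v) // always prints 1
	}(x)
	x = 2 // a closure capturing x directly could observe this write instead
}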


@@ -72,7 +72,7 @@ func (ik *ImageKit) Upload(ctx context.Context, file io.Reader, param UploadPara
	response := &UploadResult{}
-	formReader, contentType, _, err := rest.MultipartUpload(ctx, file, formParams, "file", param.FileName)
+	formReader, contentType, _, err := rest.MultipartUpload(ctx, file, formParams, "file", param.FileName, "application/octet-stream")
	if err != nil {
		return nil, nil, fmt.Errorf("failed to make multipart upload: %w", err)


@@ -6,6 +6,7 @@ import (
"context" "context"
"crypto/md5" "crypto/md5"
"encoding/hex" "encoding/hex"
"errors"
"fmt" "fmt"
"io" "io"
"path" "path"
@@ -25,6 +26,7 @@ var (
	hashType = hash.MD5
	// the object storage is persistent
	buckets = newBucketsInfo()
+	errWriteOnly = errors.New("can't read when using --memory-discard")
)
// Register with Fs
@@ -33,12 +35,32 @@ func init() {
Name: "memory", Name: "memory",
Description: "In memory object storage system.", Description: "In memory object storage system.",
NewFs: NewFs, NewFs: NewFs,
Options: []fs.Option{}, Options: []fs.Option{{
Name: "discard",
Default: false,
Advanced: true,
Help: `If set all writes will be discarded and reads will return an error
If set then when files are uploaded the contents not be saved. The
files will appear to have been uploaded but will give an error on
read. Files will have their MD5 sum calculated on upload which takes
very little CPU time and allows the transfers to be checked.
This can be useful for testing performance.
Probably most easily used by using the connection string syntax:
:memory,discard:bucket
`,
}},
}) })
} }
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct{} type Options struct {
Discard bool `config:"discard"`
}
// Fs represents a remote memory server // Fs represents a remote memory server
type Fs struct { type Fs struct {
@@ -164,6 +186,7 @@ type objectData struct {
	hash     string
	mimeType string
	data     []byte
+	size     int64
}
// Object describes a memory object
@@ -558,7 +581,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
	if t != hashType {
		return "", hash.ErrUnsupported
	}
-	if o.od.hash == "" {
+	if o.od.hash == "" && !o.fs.opt.Discard {
		sum := md5.Sum(o.od.data)
		o.od.hash = hex.EncodeToString(sum[:])
	}
@@ -567,7 +590,7 @@ func (o *Object) Hash(ctx context.Context, t hash.Type) (string, error) {
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
-	return int64(len(o.od.data))
+	return o.od.size
}
// ModTime returns the modification time of the object
@@ -593,6 +616,9 @@ func (o *Object) Storable() bool {
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
+	if o.fs.opt.Discard {
+		return nil, errWriteOnly
+	}
	var offset, limit int64 = 0, -1
	for _, option := range options {
		switch x := option.(type) {
@@ -624,13 +650,24 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
	bucket, bucketPath := o.split()
-	data, err := io.ReadAll(in)
+	var data []byte
+	var size int64
+	var hash string
+	if o.fs.opt.Discard {
+		h := md5.New()
+		size, err = io.Copy(h, in)
+		hash = hex.EncodeToString(h.Sum(nil))
+	} else {
+		data, err = io.ReadAll(in)
+		size = int64(len(data))
+	}
	if err != nil {
		return fmt.Errorf("failed to update memory object: %w", err)
	}
	o.od = &objectData{
		data:     data,
-		hash:     "",
+		size:     size,
+		hash:     hash,
		modTime:  src.ModTime(ctx),
		mimeType: fs.MimeType(ctx, src),
	}
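With discard set, Update streams the body straight into an MD5 hasher instead of buffering it, which is what keeps memory flat. A standalone sketch of that pattern (helper name invented; md5, hex and io are already imported here):

// Sketch: hash and measure a stream without retaining its contents.
func hashAndDiscard(in io.Reader) (sum string, size int64, err error) {
	h := md5.New()
	size, err = io.Copy(h, in) // consumes the stream, keeps nothing
	return hex.EncodeToString(h.Sum(nil)), size, err
}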


@@ -60,9 +60,6 @@ type StateChangeConf struct {
func (conf *StateChangeConf) WaitForStateContext(ctx context.Context, entityType string) (any, error) {
	// fs.Debugf(entityType, "Waiting for state to become: %s", conf.Target)
-	notfoundTick := 0
-	targetOccurrence := 0
	// Set a default for times to check for not found
	if conf.NotFoundChecks == 0 {
		conf.NotFoundChecks = 20
@@ -84,9 +81,11 @@ func (conf *StateChangeConf) WaitForStateContext(ctx context.Context, entityType
	// cancellation channel for the refresh loop
	cancelCh := make(chan struct{})
-	result := Result{}
	go func() {
+		notfoundTick := 0
+		targetOccurrence := 0
+		result := Result{}
		defer close(resCh)
		select {


@@ -222,3 +222,11 @@ type UserInfo struct {
} `json:"steps"` } `json:"steps"`
} `json:"journey"` } `json:"journey"`
} }
// DiffResult is the response from /diff
type DiffResult struct {
Result int `json:"result"`
DiffID int64 `json:"diffid"`
Entries []map[string]any `json:"entries"`
Error string `json:"error"`
}
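A hedged sketch of what a /diff payload decodes into (fabricated body and helper name; assumes encoding/json is imported here):

// Sketch: a minimal /diff response decoded into DiffResult.
func exampleDecodeDiff() (DiffResult, error) {
	const body = `{"result":0,"diffid":42,"entries":[
		{"metadata":{"name":"file.txt","parentfolderid":0,"isfolder":false}}]}`
	var res DiffResult
	err := json.Unmarshal([]byte(body), &res)
	return res, err
}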


@@ -171,6 +171,7 @@ type Fs struct {
	dirCache     *dircache.DirCache // Map of directory path to directory id
	pacer        *fs.Pacer          // pacer for API calls
	tokenRenewer *oauthutil.Renew   // renew the token on expiry
+	lastDiffID   int64              // change tracking state for diff long-polling
}
// Object describes a pcloud object
@@ -1033,6 +1034,137 @@ func (f *Fs) Shutdown(ctx context.Context) error {
	return nil
}
// ChangeNotify implements fs.Features.ChangeNotify
func (f *Fs) ChangeNotify(ctx context.Context, notify func(string, fs.EntryType), ch <-chan time.Duration) {
// Start long-poll loop in background
go f.changeNotifyLoop(ctx, notify, ch)
}
// changeNotifyLoop contains the blocking long-poll logic.
func (f *Fs) changeNotifyLoop(ctx context.Context, notify func(string, fs.EntryType), ch <-chan time.Duration) {
// Standard polling interval
interval := 30 * time.Second
// Start with diffID = 0 to get the current state
var diffID int64
// Helper to process changes from the diff API
handleChanges := func(entries []map[string]any) {
notifiedPaths := make(map[string]bool)
for _, entry := range entries {
meta, ok := entry["metadata"].(map[string]any)
if !ok {
continue
}
// Robust extraction of ParentFolderID
var pid int64
if val, ok := meta["parentfolderid"]; ok {
switch v := val.(type) {
case float64:
pid = int64(v)
case int64:
pid = v
case int:
pid = int64(v)
}
}
// Resolve the path using dirCache.GetInv
// pCloud uses "d" prefix for directory IDs in cache, but API returns numbers
dirID := fmt.Sprintf("d%d", pid)
parentPath, ok := f.dirCache.GetInv(dirID)
if !ok {
// Parent not in cache, so we can ignore this change as it is outside
// of what the mount has seen or cares about.
continue
}
name, _ := meta["name"].(string)
fullPath := path.Join(parentPath, name)
// Determine EntryType (File or Directory)
entryType := fs.EntryObject
if isFolder, ok := meta["isfolder"].(bool); ok && isFolder {
entryType = fs.EntryDirectory
}
// Deduplicate notifications for this batch
if !notifiedPaths[fullPath] {
fs.Debugf(f, "ChangeNotify: detected change in %q (type: %v)", fullPath, entryType)
notify(fullPath, entryType)
notifiedPaths[fullPath] = true
}
}
}
for {
// Check context and channel
select {
case <-ctx.Done():
return
case newInterval, ok := <-ch:
if !ok {
return
}
interval = newInterval
default:
}
// Setup /diff Request
opts := rest.Opts{
Method: "GET",
Path: "/diff",
Parameters: url.Values{},
}
if diffID != 0 {
opts.Parameters.Set("diffid", strconv.FormatInt(diffID, 10))
opts.Parameters.Set("block", "1")
} else {
opts.Parameters.Set("last", "0")
}
// Perform Long-Poll
// Timeout set to 90s (server usually blocks for 60s max)
reqCtx, cancel := context.WithTimeout(ctx, 90*time.Second)
var result api.DiffResult
_, err := f.srv.CallJSON(reqCtx, &opts, nil, &result)
cancel()
if err != nil {
if errors.Is(err, context.Canceled) {
return
}
// Ignore timeout errors as they are normal for long-polling
if !errors.Is(err, context.DeadlineExceeded) {
fs.Infof(f, "ChangeNotify: polling error: %v. Waiting %v.", err, interval)
time.Sleep(interval)
}
continue
}
// If result is not 0, reset DiffID to resync
if result.Result != 0 {
diffID = 0
time.Sleep(2 * time.Second)
continue
}
if result.DiffID != 0 {
diffID = result.DiffID
f.lastDiffID = diffID
}
if len(result.Entries) > 0 {
handleChanges(result.Entries)
}
}
}
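For orientation, a sketch of how a caller might drive this notifier through the fs.ChangeNotifier contract (illustrative only; the watch helper is invented):

// Sketch: start change notifications and set the polling interval.
func watch(ctx context.Context, f *Fs) {
	ch := make(chan time.Duration)
	f.ChangeNotify(ctx, func(path string, entryType fs.EntryType) {
		fs.Infof(nil, "changed: %s (%v)", path, entryType)
	}, ch)
	ch <- 30 * time.Second // picked up by the loop's non-blocking select
}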
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
	// EU region supports SHA1 and SHA256 (but rclone doesn't
@@ -1327,7 +1459,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// opts.Body=0), so upload it as a multipart form POST with
// Content-Length set.
if size == 0 {
- formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, opts.Parameters, "content", leaf)
+ formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, opts.Parameters, "content", leaf, opts.ContentType)
if err != nil {
return fmt.Errorf("failed to make multipart upload for 0 length file: %w", err)
}
@@ -1401,6 +1533,7 @@ var (
_ fs.ListPer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Object = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
)
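With `fs.ChangeNotifier` implemented, polling-aware commands can now pick up remote pCloud changes. A minimal illustration (mount point and interval are placeholders; `--poll-interval` is the generic rclone flag that drives ChangeNotify):

```console
rclone mount --poll-interval 30s pcloud: /mnt/pcloud
```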


@@ -1384,7 +1384,7 @@ func (f *Fs) uploadByForm(ctx context.Context, in io.Reader, name string, size i
for i := range iVal.NumField() {
params.Set(iTyp.Field(i).Tag.Get("json"), iVal.Field(i).String())
}
- formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, params, "file", name)
+ formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, params, "file", name, "application/octet-stream")
if err != nil {
return fmt.Errorf("failed to make multipart upload: %w", err)
}


@@ -0,0 +1,15 @@
name: BizflyCloud
description: Bizfly Cloud Simple Storage
region:
hn: Ha Noi
hcm: Ho Chi Minh
endpoint:
hn.ss.bfcplatform.vn: Hanoi endpoint
hcm.ss.bfcplatform.vn: Ho Chi Minh endpoint
acl: {}
bucket_acl: true
quirks:
force_path_style: true
list_url_encode: false
use_multipart_etag: false
use_already_exists: false
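A minimal sketch of creating a remote against the new provider definition above (placeholder credentials; `region` and `endpoint` values taken from the lists above):

```console
rclone config create bizfly s3 \
    provider=BizflyCloud region=hn endpoint=hn.ss.bfcplatform.vn \
    access_key_id=XXX secret_access_key=YYY
```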


@@ -1,26 +1,26 @@
name: Linode
description: Linode Object Storage
endpoint:
- nl-ams-1.linodeobjects.com: Amsterdam (Netherlands), nl-ams-1
- us-southeast-1.linodeobjects.com: Atlanta, GA (USA), us-southeast-1
- in-maa-1.linodeobjects.com: Chennai (India), in-maa-1
- us-ord-1.linodeobjects.com: Chicago, IL (USA), us-ord-1
- eu-central-1.linodeobjects.com: Frankfurt (Germany), eu-central-1
- id-cgk-1.linodeobjects.com: Jakarta (Indonesia), id-cgk-1
- gb-lon-1.linodeobjects.com: London 2 (Great Britain), gb-lon-1
- us-lax-1.linodeobjects.com: Los Angeles, CA (USA), us-lax-1
- es-mad-1.linodeobjects.com: Madrid (Spain), es-mad-1
- au-mel-1.linodeobjects.com: Melbourne (Australia), au-mel-1
- us-mia-1.linodeobjects.com: Miami, FL (USA), us-mia-1
- it-mil-1.linodeobjects.com: Milan (Italy), it-mil-1
- us-east-1.linodeobjects.com: Newark, NJ (USA), us-east-1
- jp-osa-1.linodeobjects.com: Osaka (Japan), jp-osa-1
- fr-par-1.linodeobjects.com: Paris (France), fr-par-1
- br-gru-1.linodeobjects.com: São Paulo (Brazil), br-gru-1
- us-sea-1.linodeobjects.com: Seattle, WA (USA), us-sea-1
- ap-south-1.linodeobjects.com: Singapore, ap-south-1
- sg-sin-1.linodeobjects.com: Singapore 2, sg-sin-1
- se-sto-1.linodeobjects.com: Stockholm (Sweden), se-sto-1
- us-iad-1.linodeobjects.com: Washington, DC, (USA), us-iad-1
+ nl-ams-1.linodeobjects.com: Amsterdam, NL (nl-ams-1)
+ us-southeast-1.linodeobjects.com: Atlanta, GA, US (us-southeast-1)
+ in-maa-1.linodeobjects.com: Chennai, IN (in-maa-1)
+ us-ord-1.linodeobjects.com: Chicago, IL, US (us-ord-1)
+ eu-central-1.linodeobjects.com: Frankfurt, DE (eu-central-1)
+ id-cgk-1.linodeobjects.com: Jakarta, ID (id-cgk-1)
+ gb-lon-1.linodeobjects.com: London 2, UK (gb-lon-1)
+ us-lax-1.linodeobjects.com: Los Angeles, CA, US (us-lax-1)
+ es-mad-1.linodeobjects.com: Madrid, ES (es-mad-1)
+ us-mia-1.linodeobjects.com: Miami, FL, US (us-mia-1)
+ it-mil-1.linodeobjects.com: Milan, IT (it-mil-1)
+ us-east-1.linodeobjects.com: Newark, NJ, US (us-east-1)
+ jp-osa-1.linodeobjects.com: Osaka, JP (jp-osa-1)
+ fr-par-1.linodeobjects.com: Paris, FR (fr-par-1)
+ br-gru-1.linodeobjects.com: Sao Paulo, BR (br-gru-1)
+ us-sea-1.linodeobjects.com: Seattle, WA, US (us-sea-1)
+ ap-south-1.linodeobjects.com: Singapore, SG (ap-south-1)
+ sg-sin-1.linodeobjects.com: Singapore 2, SG (sg-sin-1)
+ se-sto-1.linodeobjects.com: Stockholm, SE (se-sto-1)
+ jp-tyo-1.linodeobjects.com: Tokyo 3, JP (jp-tyo-1)
+ us-iad-10.linodeobjects.com: Washington, DC, US (us-iad-10)
acl: {}
bucket_acl: true


@@ -30,9 +30,11 @@ import (
v4signer "github.com/aws/aws-sdk-go-v2/aws/signer/v4"
awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/credentials/stscreds"
"github.com/aws/aws-sdk-go-v2/feature/s3/manager" "github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3" "github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types" "github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/aws/aws-sdk-go-v2/service/sts"
"github.com/aws/smithy-go" "github.com/aws/smithy-go"
"github.com/aws/smithy-go/logging" "github.com/aws/smithy-go/logging"
"github.com/aws/smithy-go/middleware" "github.com/aws/smithy-go/middleware"
@@ -325,6 +327,30 @@ If empty it will default to the environment variable "AWS_PROFILE" or
Help: "An AWS session token.", Help: "An AWS session token.",
Advanced: true, Advanced: true,
Sensitive: true, Sensitive: true,
}, {
Name: "role_arn",
Help: `ARN of the IAM role to assume.
Leave blank if not using assume role.`,
Advanced: true,
}, {
Name: "role_session_name",
Help: `Session name for assumed role.
If empty, a session name will be generated automatically.`,
Advanced: true,
}, {
Name: "role_session_duration",
Help: `Session duration for assumed role.
If empty, the default session duration will be used.`,
Advanced: true,
}, {
Name: "role_external_id",
Help: `External ID for assumed role.
Leave blank if not using an external ID.`,
Advanced: true,
}, {
Name: "upload_concurrency",
Help: `Concurrency for multipart uploads and copies.
@@ -927,6 +953,10 @@ type Options struct {
SharedCredentialsFile string `config:"shared_credentials_file"`
Profile string `config:"profile"`
SessionToken string `config:"session_token"`
RoleARN string `config:"role_arn"`
RoleSessionName string `config:"role_session_name"`
RoleSessionDuration fs.Duration `config:"role_session_duration"`
RoleExternalID string `config:"role_external_id"`
UploadConcurrency int `config:"upload_concurrency"`
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
@@ -1290,6 +1320,34 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Cli
opt.Region = "us-east-1"
}
// Handle assume role if RoleARN is specified
if opt.RoleARN != "" {
fs.Debugf(nil, "Using assume role with ARN: %s", opt.RoleARN)
// Set region for the config before creating STS client
awsConfig.Region = opt.Region
// Create STS client using the base credentials
stsClient := sts.NewFromConfig(awsConfig)
// Configure AssumeRole options
assumeRoleOptions := func(aro *stscreds.AssumeRoleOptions) {
// Set session name if provided, otherwise use a default
if opt.RoleSessionName != "" {
aro.RoleSessionName = opt.RoleSessionName
}
if opt.RoleSessionDuration != 0 {
aro.Duration = time.Duration(opt.RoleSessionDuration)
}
if opt.RoleExternalID != "" {
aro.ExternalID = &opt.RoleExternalID
}
}
// Create AssumeRole credentials provider
awsConfig.Credentials = stscreds.NewAssumeRoleProvider(stsClient, opt.RoleARN, assumeRoleOptions)
}
provider = loadProvider(opt.Provider)
if provider == nil {
fs.Logf("s3", "s3 provider %q not known - please set correctly", opt.Provider)
@@ -2870,7 +2928,9 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
req := s3.CopyObjectInput{
MetadataDirective: types.MetadataDirectiveCopy,
}
if srcObj.storageClass != nil {
req.StorageClass = types.StorageClass(*srcObj.storageClass)
}
// Build upload options including headers and metadata
ci := fs.GetConfig(ctx)
uploadOptions := fs.MetadataAsOpenOptions(ctx)
@@ -4443,7 +4503,12 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
ACL: types.ObjectCannedACL(o.fs.opt.ACL),
Key: &bucketPath,
}
if tierObj, ok := src.(fs.GetTierer); ok {
tier := tierObj.GetTier()
if tier != "" {
ui.req.StorageClass = types.StorageClass(strings.ToUpper(tier))
}
}
// Fetch metadata if --metadata is in use
meta, err := fs.GetMetadataOptions(ctx, o.fs, src, options)
if err != nil {
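A hedged sketch of the new assume-role options in use (the ARN and remote name are placeholders; the option names match the `config:` tags above):

```console
rclone config create s3role s3 \
    provider=AWS env_auth=true \
    role_arn=arn:aws:iam::123456789012:role/rclone-role \
    role_session_name=rclone role_session_duration=1h
```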


@@ -688,7 +688,7 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, uploadLink, filePath stri
"need_idx_progress": {"true"}, "need_idx_progress": {"true"},
"replace": {"1"}, "replace": {"1"},
} }
formReader, contentType, _, err := rest.MultipartUpload(ctx, in, parameters, "file", f.opt.Enc.FromStandardName(filename)) formReader, contentType, _, err := rest.MultipartUpload(ctx, in, parameters, "file", f.opt.Enc.FromStandardName(filename), "application/octet-stream")
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to make multipart upload: %w", err) return nil, fmt.Errorf("failed to make multipart upload: %w", err)
} }


@@ -519,6 +519,12 @@ Example:
Help: `URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Supports the format http://user:pass@host:port, http://host:port, http://host.
Example:
http://myUser:myPass@proxyhostname.example.com:8000
`,
Advanced: true,
}, {
@@ -919,15 +925,8 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
opt.Port = "22" opt.Port = "22"
} }
- // get proxy URL if set
- if opt.HTTPProxy != "" {
- proxyURL, err := url.Parse(opt.HTTPProxy)
- if err != nil {
- return nil, fmt.Errorf("failed to parse HTTP Proxy URL: %w", err)
- }
- f.proxyURL = proxyURL
- }
+ // Set up sshConfig here from opt
+ // **NB** everything else should be setup in NewFsWithConnection
sshConfig := &ssh.ClientConfig{
User: opt.User,
Auth: []ssh.AuthMethod{},
@@ -1175,11 +1174,21 @@ func NewFsWithConnection(ctx context.Context, f *Fs, name string, root string, m
f.mkdirLock = newStringLock()
f.pacer = fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
f.savedpswd = ""
// set the pool drainer timer going
if f.opt.IdleTimeout > 0 {
f.drain = time.AfterFunc(time.Duration(f.opt.IdleTimeout), func() { _ = f.drainPool(ctx) })
}
// get proxy URL if set
if opt.HTTPProxy != "" {
proxyURL, err := url.Parse(opt.HTTPProxy)
if err != nil {
return nil, fmt.Errorf("failed to parse HTTP Proxy URL: %w", err)
}
f.proxyURL = proxyURL
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
SlowHash: true,


@@ -0,0 +1,27 @@
// Package api has type definitions for shade
package api
// ListDirResponse -------------------------------------------------
// Format from shade api
type ListDirResponse struct {
Type string `json:"type"` // "file" or "tree"
Path string `json:"path"` // Full path including root
Ino int `json:"ino"` // inode number
Mtime int64 `json:"mtime"` // Modified time in milliseconds
Ctime int64 `json:"ctime"` // Created time in milliseconds
Size int64 `json:"size"` // Size in bytes
Hash string `json:"hash"` // MD5 hash
Draft bool `json:"draft"` // Whether this is a draft file
}
// PartURL is the URL (and any extra headers) used to upload or download a single part
type PartURL struct {
URL string `json:"url"`
Headers map[string]string `json:"headers,omitempty"`
}
// CompletedPart is a completed part sent when finishing a multipart upload.
type CompletedPart struct {
ETag string
PartNumber int32
}

backend/shade/shade.go Normal file
File diff suppressed because it is too large

@@ -0,0 +1,21 @@
package shade_test
import (
"testing"
"github.com/rclone/rclone/backend/shade"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
name := "TestShade"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*shade.Object)(nil),
SkipInvalidUTF8: true,
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "eventually_consistent_delay", Value: "7"},
},
})
}

backend/shade/upload.go Normal file

@@ -0,0 +1,336 @@
// Multipart upload for shade
package shade
import (
"bytes"
"context"
"fmt"
"io"
"net/http"
"net/url"
"path"
"sort"
"sync"
"github.com/rclone/rclone/backend/shade/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/lib/multipart"
"github.com/rclone/rclone/lib/rest"
)
var warnStreamUpload sync.Once
type shadeChunkWriter struct {
initToken string
chunkSize int64
size int64
f *Fs
o *Object
completedParts []api.CompletedPart
completedPartsMu sync.Mutex
}
// uploadMultipart handles multipart upload for larger files
func (o *Object) uploadMultipart(ctx context.Context, src fs.ObjectInfo, in io.Reader, options ...fs.OpenOption) error {
chunkWriter, err := multipart.UploadMultipart(ctx, src, in, multipart.UploadMultipartOptions{
Open: o.fs,
OpenOptions: options,
})
if err != nil {
return err
}
var shadeWriter = chunkWriter.(*shadeChunkWriter)
o.size = shadeWriter.size
return nil
}
// OpenChunkWriter returns the chunk size and a ChunkWriter
//
// Pass in the remote and the src object
// You can also use options to hint at the desired chunk size
func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectInfo, options ...fs.OpenOption) (info fs.ChunkWriterInfo, writer fs.ChunkWriter, err error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: remote,
}
uploadParts := f.opt.MaxUploadParts
if uploadParts < 1 {
uploadParts = 1
} else if uploadParts > maxUploadParts {
uploadParts = maxUploadParts
}
size := src.Size()
fs.FixRangeOption(options, size)
// calculate size of parts
chunkSize := f.opt.ChunkSize
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 64 MB). With a maximum number of parts (10,000) this will be a file of
// 640 GB.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
chunkSize, fs.SizeSuffix(int64(chunkSize)*int64(uploadParts)))
})
} else {
chunkSize = chunksize.Calculator(src, size, uploadParts, chunkSize)
}
token, err := o.fs.refreshJWTToken(ctx)
if err != nil {
return info, nil, fmt.Errorf("failed to get token: %w", err)
}
err = f.ensureParentDirectories(ctx, remote)
if err != nil {
return info, nil, fmt.Errorf("failed to ensure parent directories: %w", err)
}
fullPath := remote
if f.root != "" {
fullPath = path.Join(f.root, remote)
}
// Initiate multipart upload
type initRequest struct {
Path string `json:"path"`
PartSize int64 `json:"partSize"`
}
reqBody := initRequest{
Path: fullPath,
PartSize: int64(chunkSize),
}
var initResp struct {
Token string `json:"token"`
}
opts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/multipart", o.fs.drive),
RootURL: o.fs.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
Options: options,
}
err = o.fs.pacer.Call(func() (bool, error) {
res, err := o.fs.srv.CallJSON(ctx, &opts, reqBody, &initResp)
if err != nil {
return res != nil && res.StatusCode == http.StatusTooManyRequests, err
}
return false, nil
})
if err != nil {
return info, nil, fmt.Errorf("failed to initiate multipart upload: %w", err)
}
chunkWriter := &shadeChunkWriter{
initToken: initResp.Token,
chunkSize: int64(chunkSize),
size: size,
f: f,
o: o,
}
info = fs.ChunkWriterInfo{
ChunkSize: int64(chunkSize),
Concurrency: f.opt.Concurrency,
LeavePartsOnError: false,
}
return info, chunkWriter, err
}
// WriteChunk will write chunk number with reader bytes, where chunk number >= 0
func (s *shadeChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader io.ReadSeeker) (bytesWritten int64, err error) {
token, err := s.f.refreshJWTToken(ctx)
if err != nil {
return 0, err
}
// Read chunk
var chunk bytes.Buffer
n, err := io.Copy(&chunk, reader)
if err != nil {
return 0, fmt.Errorf("failed to read chunk: %w", err)
}
if n == 0 {
return 0, nil
}
// Get presigned URL for this part
var partURL api.PartURL
partOpts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/multipart/part/%d?token=%s", s.f.drive, chunkNumber+1, url.QueryEscape(s.initToken)),
RootURL: s.f.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
}
err = s.f.pacer.Call(func() (bool, error) {
res, err := s.f.srv.CallJSON(ctx, &partOpts, nil, &partURL)
if err != nil {
return res != nil && res.StatusCode == http.StatusTooManyRequests, err
}
return false, nil
})
if err != nil {
return 0, fmt.Errorf("failed to get part URL: %w", err)
}
opts := rest.Opts{
Method: "PUT",
RootURL: partURL.URL,
Body: &chunk,
ContentType: "",
ContentLength: &n,
}
// Add headers
var uploadRes *http.Response
if len(partURL.Headers) > 0 {
opts.ExtraHeaders = make(map[string]string)
for k, v := range partURL.Headers {
opts.ExtraHeaders[k] = v
}
}
err = s.f.pacer.Call(func() (bool, error) {
uploadRes, err = s.f.srv.Call(ctx, &opts)
if err != nil {
return uploadRes != nil && uploadRes.StatusCode == http.StatusTooManyRequests, err
}
return false, nil
})
if err != nil {
return 0, fmt.Errorf("failed to upload part %d: %w", chunk, err)
}
if uploadRes.StatusCode != http.StatusOK && uploadRes.StatusCode != http.StatusCreated {
body, _ := io.ReadAll(uploadRes.Body)
fs.CheckClose(uploadRes.Body, &err)
return 0, fmt.Errorf("part upload failed with status %d: %s", uploadRes.StatusCode, string(body))
}
// Get ETag from response
etag := uploadRes.Header.Get("ETag")
fs.CheckClose(uploadRes.Body, &err)
s.completedPartsMu.Lock()
defer s.completedPartsMu.Unlock()
s.completedParts = append(s.completedParts, api.CompletedPart{
PartNumber: int32(chunkNumber + 1),
ETag: etag,
})
return n, nil
}
// Close complete chunked writer finalising the file.
func (s *shadeChunkWriter) Close(ctx context.Context) error {
// Complete multipart upload
sort.Slice(s.completedParts, func(i, j int) bool {
return s.completedParts[i].PartNumber < s.completedParts[j].PartNumber
})
type completeRequest struct {
Parts []api.CompletedPart `json:"parts"`
}
var completeBody completeRequest
if s.completedParts == nil {
completeBody = completeRequest{Parts: []api.CompletedPart{}}
} else {
completeBody = completeRequest{Parts: s.completedParts}
}
token, err := s.f.refreshJWTToken(ctx)
if err != nil {
return err
}
completeOpts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/multipart/complete?token=%s", s.f.drive, url.QueryEscape(s.initToken)),
RootURL: s.f.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
}
err = s.f.pacer.Call(func() (bool, error) {
// The complete call returns no JSON body we need, so pass nil as the decode target
res, err := s.f.srv.CallJSON(ctx, &completeOpts, completeBody, nil)
if res == nil {
return false, err
}
defer fs.CheckClose(res.Body, &err)
if res.StatusCode == http.StatusTooManyRequests {
return true, err // Retry on 429
}
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusCreated {
body, _ := io.ReadAll(res.Body)
return false, fmt.Errorf("complete multipart failed with status %d: %s", res.StatusCode, string(body))
}
return false, err
})
if err != nil {
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
return nil
}
// Abort chunk write
//
// You can and should call Abort without calling Close.
func (s *shadeChunkWriter) Abort(ctx context.Context) error {
token, err := s.f.refreshJWTToken(ctx)
if err != nil {
return err
}
opts := rest.Opts{
Method: "POST",
Path: fmt.Sprintf("/%s/upload/abort/multipart?token=%s", s.f.drive, url.QueryEscape(s.initToken)),
RootURL: s.f.endpoint,
ExtraHeaders: map[string]string{
"Authorization": "Bearer " + token,
},
}
err = s.f.pacer.Call(func() (bool, error) {
res, err := s.f.srv.Call(ctx, &opts)
if err != nil {
fs.Debugf(s.f, "Failed to abort multipart upload: %v", err)
return false, nil // Don't retry abort
}
if res.StatusCode != http.StatusOK && res.StatusCode != http.StatusCreated {
fs.Debugf(s.f, "Abort returned status %d", res.StatusCode)
}
return false, nil
})
if err != nil {
return fmt.Errorf("failed to abort multipart upload: %w", err)
}
return nil
}


@@ -1,171 +0,0 @@
// Package api provides types used by the Uptobox API.
package api
import "fmt"
// Error contains the error code and message returned by the API
type Error struct {
Success bool `json:"success,omitempty"`
StatusCode int `json:"statusCode,omitempty"`
Message string `json:"message,omitempty"`
Data string `json:"data,omitempty"`
}
// Error returns a string for the error and satisfies the error interface
func (e Error) Error() string {
out := fmt.Sprintf("api error %d", e.StatusCode)
if e.Message != "" {
out += ": " + e.Message
}
if e.Data != "" {
out += ": " + e.Data
}
return out
}
// FolderEntry represents a Uptobox subfolder when listing folder contents
type FolderEntry struct {
FolderID uint64 `json:"fld_id"`
Description string `json:"fld_descr"`
Password string `json:"fld_password"`
FullPath string `json:"fullPath"`
Path string `json:"fld_name"`
Name string `json:"name"`
Hash string `json:"hash"`
}
// FolderInfo represents the current folder when listing folder contents
type FolderInfo struct {
FolderID uint64 `json:"fld_id"`
Hash string `json:"hash"`
FileCount uint64 `json:"fileCount"`
TotalFileSize int64 `json:"totalFileSize"`
}
// FileInfo represents a file when listing folder contents
type FileInfo struct {
Name string `json:"file_name"`
Description string `json:"file_descr"`
Created string `json:"file_created"`
Size int64 `json:"file_size"`
Downloads uint64 `json:"file_downloads"`
Code string `json:"file_code"`
Password string `json:"file_password"`
Public int `json:"file_public"`
LastDownload string `json:"file_last_download"`
ID uint64 `json:"id"`
}
// ReadMetadataResponse is the response when listing folder contents
type ReadMetadataResponse struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
CurrentFolder FolderInfo `json:"currentFolder"`
Folders []FolderEntry `json:"folders"`
Files []FileInfo `json:"files"`
PageCount int `json:"pageCount"`
TotalFileCount int `json:"totalFileCount"`
TotalFileSize int64 `json:"totalFileSize"`
} `json:"data"`
}
// UploadInfo is the response when initiating an upload
type UploadInfo struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
UploadLink string `json:"uploadLink"`
MaxUpload string `json:"maxUpload"`
} `json:"data"`
}
// UploadResponse is the response to a successful upload
type UploadResponse struct {
Files []struct {
Name string `json:"name"`
Size int64 `json:"size"`
URL string `json:"url"`
DeleteURL string `json:"deleteUrl"`
} `json:"files"`
}
// UpdateResponse is a generic response to various action on files (rename/copy/move)
type UpdateResponse struct {
Message string `json:"message"`
StatusCode int `json:"statusCode"`
}
// Download is the response when requesting a download link
type Download struct {
StatusCode int `json:"statusCode"`
Message string `json:"message"`
Data struct {
DownloadLink string `json:"dlLink"`
} `json:"data"`
}
// MetadataRequestOptions represents all the options when listing folder contents
type MetadataRequestOptions struct {
Limit uint64
Offset uint64
SearchField string
Search string
}
// CreateFolderRequest is used for creating a folder
type CreateFolderRequest struct {
Token string `json:"token"`
Path string `json:"path"`
Name string `json:"name"`
}
// DeleteFolderRequest is used for deleting a folder
type DeleteFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
}
// CopyMoveFileRequest is used for moving/copying a file
type CopyMoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// MoveFolderRequest is used for moving a folder
type MoveFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
DestinationFolderID uint64 `json:"destination_fld_id"`
Action string `json:"action"`
}
// RenameFolderRequest is used for renaming a folder
type RenameFolderRequest struct {
Token string `json:"token"`
FolderID uint64 `json:"fld_id"`
NewName string `json:"new_name"`
}
// UpdateFileInformation is used for renaming a file
type UpdateFileInformation struct {
Token string `json:"token"`
FileCode string `json:"file_code"`
NewName string `json:"new_name,omitempty"`
Description string `json:"description,omitempty"`
Password string `json:"password,omitempty"`
Public string `json:"public,omitempty"`
}
// RemoveFileRequest is used for deleting a file
type RemoveFileRequest struct {
Token string `json:"token"`
FileCodes string `json:"file_codes"`
}
// Token represents the authentication token
type Token struct {
Token string `json:"token"`
}

File diff suppressed because it is too large


@@ -1,21 +0,0 @@
// Test Uptobox filesystem interface
package uptobox_test
import (
"testing"
"github.com/rclone/rclone/backend/uptobox"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
*fstest.RemoteName = "TestUptobox:"
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*uptobox.Object)(nil),
})
}


@@ -817,7 +817,7 @@ func (f *Fs) upload(ctx context.Context, name string, parent string, size int64,
params.Set("filename", url.QueryEscape(name)) params.Set("filename", url.QueryEscape(name))
params.Set("parent_id", parent) params.Set("parent_id", parent)
params.Set("override-name-exist", strconv.FormatBool(true)) params.Set("override-name-exist", strconv.FormatBool(true))
formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, nil, "content", name) formReader, contentType, overhead, err := rest.MultipartUpload(ctx, in, nil, "content", name, "application/octet-stream")
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to make multipart upload: %w", err) return nil, fmt.Errorf("failed to make multipart upload: %w", err)
} }


@@ -43,9 +43,11 @@ docs = [
"compress.md", "compress.md",
"combine.md", "combine.md",
"doi.md", "doi.md",
"drime.md"
"dropbox.md", "dropbox.md",
"filefabric.md", "filefabric.md",
"filelu.md", "filelu.md",
"filen.md",
"filescom.md", "filescom.md",
"ftp.md", "ftp.md",
"gofile.md", "gofile.md",
@@ -84,11 +86,11 @@ docs = [
"protondrive.md", "protondrive.md",
"seafile.md", "seafile.md",
"sftp.md", "sftp.md",
"shade.md",
"smb.md", "smb.md",
"storj.md", "storj.md",
"sugarsync.md", "sugarsync.md",
"ulozto.md", "ulozto.md",
"uptobox.md",
"union.md", "union.md",
"webdav.md", "webdav.md",
"yandex.md", "yandex.md",


@@ -389,8 +389,8 @@ func parseHash(str string) (string, string, error) {
if str == "-" { if str == "-" {
return "", "", nil return "", "", nil
} }
if pos := strings.Index(str, ":"); pos > 0 { if before, after, ok := strings.Cut(str, ":"); ok {
name, val := str[:pos], str[pos+1:] name, val := before, after
if name != "" && val != "" { if name != "" && val != "" {
return name, val, nil return name, val, nil
} }
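For reference, a minimal sketch of what the `strings.Cut` form above evaluates to:

```go
name, val, ok := strings.Cut("sha1:0123abcd", ":")
// name == "sha1", val == "0123abcd", ok == true
```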


@@ -26,6 +26,10 @@ Note that |ls| and |lsl| recurse by default - use |--max-depth 1| to stop the re
The other list commands |lsd|,|lsf|,|lsjson| do not recurse by default -
use |-R| to make them recurse.
List commands prefer a recursive method that uses more memory but fewer
transactions by default. Use |--disable ListR| to suppress the behavior.
See [|--fast-list|](/docs/#fast-list) for more details.
Listing a nonexistent directory will produce an error except for
remotes which can't have empty directories (e.g. s3, swift, or gcs -
the bucket-based remotes).`, "|", "`")


@@ -13,6 +13,26 @@ docs](https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html)).
`--auth-key` is not provided then `serve s3` will allow anonymous
access.
Like all rclone flags `--auth-key` can be set via environment
variables, in this case `RCLONE_AUTH_KEY`. Since this flag can be
repeated, the input to `RCLONE_AUTH_KEY` is CSV encoded. Because the
`accessKey,secretKey` pair contains a comma, it needs to be
quoted.
```console
export RCLONE_AUTH_KEY='"user,pass"'
rclone serve s3 ...
```
Or to supply multiple identities:
```console
export RCLONE_AUTH_KEY='"user1,pass1","user2,pass2"'
rclone serve s3 ...
```
Setting this variable without quotes will produce an error.
Please note that some clients may require HTTPS endpoints. See [the
SSL docs](#tls-ssl) for more information.


@@ -70,6 +70,11 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options, vfsOpt *vfscommon.Opt
w.s3Secret = getAuthSecret(opt.AuthKey)
}
authList, err := authlistResolver(opt.AuthKey)
if err != nil {
return nil, fmt.Errorf("parsing auth list failed: %q", err)
}
var newLogger logger
w.faker = gofakes3.New(
newBackend(w),
@@ -77,7 +82,7 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options, vfsOpt *vfscommon.Opt
gofakes3.WithLogger(newLogger),
gofakes3.WithRequestID(rand.Uint64()),
gofakes3.WithoutVersioning(),
- gofakes3.WithV4Auth(authlistResolver(opt.AuthKey)),
+ gofakes3.WithV4Auth(authList),
gofakes3.WithIntegrityCheck(true), // Check Content-MD5 if supplied
)
@@ -92,7 +97,7 @@ func newServer(ctx context.Context, f fs.Fs, opt *Options, vfsOpt *vfscommon.Opt
w._vfs = vfs.New(f, vfsOpt)
if len(opt.AuthKey) > 0 {
- w.faker.AddAuthKeys(authlistResolver(opt.AuthKey))
+ w.faker.AddAuthKeys(authList)
}
}


@@ -3,6 +3,7 @@ package s3
import (
"context"
"encoding/hex"
"errors"
"io" "io"
"os" "os"
"path" "path"
@@ -125,15 +126,14 @@ func rmdirRecursive(p string, VFS *vfs.VFS) {
}
}
- func authlistResolver(list []string) map[string]string {
+ func authlistResolver(list []string) (map[string]string, error) {
authList := make(map[string]string)
for _, v := range list {
parts := strings.Split(v, ",")
if len(parts) != 2 {
- fs.Infof(nil, "Ignored: invalid auth pair %s", v)
- continue
+ return nil, errors.New("invalid auth pair: expecting a single comma")
}
authList[parts[0]] = parts[1]
}
- return authList
+ return authList, nil
}


@@ -58,10 +58,10 @@ type conn struct {
// interoperate with the rclone sftp backend
func (c *conn) execCommand(ctx context.Context, out io.Writer, command string) (err error) {
binary, args := command, ""
- space := strings.Index(command, " ")
- if space >= 0 {
- binary = command[:space]
- args = strings.TrimLeft(command[space+1:], " ")
+ before, after, ok := strings.Cut(command, " ")
+ if ok {
+ binary = before
+ args = strings.TrimLeft(after, " ")
}
args = shellUnEscape(args)
fs.Debugf(c.what, "exec command: binary = %q, args = %q", binary, args)
@@ -291,7 +291,7 @@ func (c *conn) handleChannel(newChannel ssh.NewChannel) {
}
}
fs.Debugf(c.what, " - accepted: %v\n", ok)
- err = req.Reply(ok, reply)
+ err := req.Reply(ok, reply)
if err != nil {
fs.Errorf(c.what, "Failed to Reply to request: %v", err)
return


@@ -45,6 +45,10 @@ var OptionsInfo = fs.Options{{
Name: "disable_dir_list", Name: "disable_dir_list",
Default: false, Default: false,
Help: "Disable HTML directory list on GET request for a directory", Help: "Disable HTML directory list on GET request for a directory",
}, {
Name: "disable_zip",
Default: false,
Help: "Disable zip download of directories",
}}.
Add(libhttp.ConfigInfo).
Add(libhttp.AuthConfigInfo).
@@ -57,6 +61,7 @@ type Options struct {
Template libhttp.TemplateConfig
EtagHash string `config:"etag_hash"`
DisableDirList bool `config:"disable_dir_list"`
DisableZip bool `config:"disable_zip"`
}
// Opt is options set by command line flags
@@ -408,6 +413,24 @@ func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote str
return
}
dir := node.(*vfs.Dir)
if r.URL.Query().Get("download") == "zip" && !w.opt.DisableZip {
fs.Infof(dirRemote, "%s: Zipping directory", r.RemoteAddr)
zipName := path.Base(dirRemote)
if dirRemote == "" {
zipName = "root"
}
rw.Header().Set("Content-Disposition", "attachment; filename=\""+zipName+".zip\"")
rw.Header().Set("Content-Type", "application/zip")
rw.Header().Set("Last-Modified", time.Now().UTC().Format(http.TimeFormat))
err := vfs.CreateZip(ctx, dir, rw)
if err != nil {
serve.Error(ctx, dirRemote, rw, "Failed to create zip", err)
return
}
return
}
dirEntries, err := dir.ReadDirAll()
if err != nil {
@@ -417,6 +440,7 @@ func (w *WebDAV) serveDir(rw http.ResponseWriter, r *http.Request, dirRemote str
// Make the entries for display
directory := serve.NewDirectory(dirRemote, w.server.HTMLTemplate())
directory.DisableZip = w.opt.DisableZip
for _, node := range dirEntries {
if vfscommon.Opt.NoModTime {
directory.AddHTMLEntry(node.Path(), node.IsDir(), node.Size(), time.Time{})
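Based on the handler above, a served directory can be fetched as a zip archive via the `download=zip` query parameter (host, port and path are placeholders):

```console
curl -OJ "http://localhost:8080/some/dir/?download=zip"
```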


@@ -116,6 +116,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Akamai Netstorage" home="https://www.akamai.com/us/en/products/media-delivery/netstorage.jsp" config="/netstorage/" >}} {{< provider name="Akamai Netstorage" home="https://www.akamai.com/us/en/products/media-delivery/netstorage.jsp" config="/netstorage/" >}}
{{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}} {{< provider name="Alibaba Cloud (Aliyun) Object Storage System (OSS)" home="https://www.alibabacloud.com/product/oss/" config="/s3/#alibaba-oss" >}}
{{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}} {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
{{< provider name="Bizfly Cloud Simple Storage" home="https://bizflycloud.vn/" config="/s3/#bizflycloud" >}}
{{< provider name="Backblaze B2" home="https://www.backblaze.com/cloud-storage" config="/b2/" >}} {{< provider name="Backblaze B2" home="https://www.backblaze.com/cloud-storage" config="/b2/" >}}
{{< provider name="Box" home="https://www.box.com/" config="/box/" >}} {{< provider name="Box" home="https://www.box.com/" config="/box/" >}}
{{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}} {{< provider name="Ceph" home="http://ceph.com/" config="/s3/#ceph" >}}
@@ -128,12 +129,14 @@ WebDAV or S3, that work out of the box.)
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}} {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Digi Storage" home="https://storage.rcs-rds.ro/" config="/koofr/#digi-storage" >}} {{< provider name="Digi Storage" home="https://storage.rcs-rds.ro/" config="/koofr/#digi-storage" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}} {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
{{< provider name="Drime" home="https://www.drime.cloud/" config="/drime/" >}}
{{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}} {{< provider name="Dropbox" home="https://www.dropbox.com/" config="/dropbox/" >}}
{{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}} {{< provider name="Enterprise File Fabric" home="https://storagemadeeasy.com/about/" config="/filefabric/" >}}
{{< provider name="Exaba" home="https://exaba.com/" config="/s3/#exaba" >}} {{< provider name="Exaba" home="https://exaba.com/" config="/s3/#exaba" >}}
{{< provider name="Fastmail Files" home="https://www.fastmail.com/" config="/webdav/#fastmail-files" >}} {{< provider name="Fastmail Files" home="https://www.fastmail.com/" config="/webdav/#fastmail-files" >}}
{{< provider name="FileLu Cloud Storage" home="https://filelu.com/" config="/filelu/" >}} {{< provider name="FileLu Cloud Storage" home="https://filelu.com/" config="/filelu/" >}}
{{< provider name="FileLu S5 (S3-Compatible Object Storage)" home="https://s5lu.com/" config="/s3/#filelu-s5" >}} {{< provider name="FileLu S5 (S3-Compatible Object Storage)" home="https://s5lu.com/" config="/s3/#filelu-s5" >}}
{{< provider name="Filen" home="https://www.filen.io/" config="/filen/" >}}
{{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}} {{< provider name="Files.com" home="https://www.files.com/" config="/filescom/" >}}
{{< provider name="FlashBlade" home="https://www.purestorage.com/products/unstructured-data-storage.html" config="/s3/#pure-storage-flashblade" >}} {{< provider name="FlashBlade" home="https://www.purestorage.com/products/unstructured-data-storage.html" config="/s3/#pure-storage-flashblade" >}}
{{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}} {{< provider name="FTP" home="https://en.wikipedia.org/wiki/File_Transfer_Protocol" config="/ftp/" >}}
@@ -202,6 +205,7 @@ WebDAV or S3, that work out of the box.)
{{< provider name="Selectel" home="https://selectel.ru/services/cloud/storage/" config="/s3/#selectel" >}} {{< provider name="Selectel" home="https://selectel.ru/services/cloud/storage/" config="/s3/#selectel" >}}
{{< provider name="Servercore Object Storage" home="https://servercore.com/services/object-storage/" config="/s3/#servercore" >}} {{< provider name="Servercore Object Storage" home="https://servercore.com/services/object-storage/" config="/s3/#servercore" >}}
{{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol" config="/sftp/" >}} {{< provider name="SFTP" home="https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol" config="/sftp/" >}}
{{< provider name="Shade" home="https://shade.inc" config="/shade/" >}}
{{< provider name="Sia" home="https://sia.tech/" config="/sia/" >}} {{< provider name="Sia" home="https://sia.tech/" config="/sia/" >}}
{{< provider name="SMB / CIFS" home="https://en.wikipedia.org/wiki/Server_Message_Block" config="/smb/" >}} {{< provider name="SMB / CIFS" home="https://en.wikipedia.org/wiki/Server_Message_Block" config="/smb/" >}}
{{< provider name="Spectra Logic" home="https://spectralogic.com/blackpearl-nearline-object-gateway/" config="/s3/#spectralogic" >}} {{< provider name="Spectra Logic" home="https://spectralogic.com/blackpearl-nearline-object-gateway/" config="/s3/#spectralogic" >}}
@@ -211,7 +215,6 @@ WebDAV or S3, that work out of the box.)
{{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}} {{< provider name="SugarSync" home="https://sugarsync.com/" config="/sugarsync/" >}}
{{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}} {{< provider name="Tencent Cloud Object Storage (COS)" home="https://intl.cloud.tencent.com/product/cos" config="/s3/#tencent-cos" >}}
{{< provider name="Uloz.to" home="https://uloz.to" config="/ulozto/" >}} {{< provider name="Uloz.to" home="https://uloz.to" config="/ulozto/" >}}
{{< provider name="Uptobox" home="https://uptobox.com" config="/uptobox/" >}}
{{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}} {{< provider name="Wasabi" home="https://wasabi.com/" config="/s3/#wasabi" >}}
{{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}} {{< provider name="WebDAV" home="https://en.wikipedia.org/wiki/WebDAV" config="/webdav/" >}}
{{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}} {{< provider name="Yandex Disk" home="https://disk.yandex.com/" config="/yandex/" >}}


@@ -1048,3 +1048,20 @@ put them back in again. -->
- jijamik <30904953+jijamik@users.noreply.github.com>
- Dominik Sander <git@dsander.de>
- Nikolay Kiryanov <nikolay@kiryanov.ru>
- Diana <5275194+DianaNites@users.noreply.github.com>
- Duncan Smart <duncan.smart@gmail.com>
- vicerace <vicerace@sohu.com>
- Cliff Frey <cliff@openai.com>
- Vladislav Tropnikov <vtr.name@gmail.com>
- Leo <i@hardrain980.com>
- Johannes Rothe <mail@johannes-rothe.de>
- Tingsong Xu <tingsong.xu@rightcapital.com>
- Jonas Tingeborn <134889+jojje@users.noreply.github.com>
- jhasse-shade <jacob@shade.inc>
- vyv03354 <VYV03354@nifty.ne.jp>
- masrlinu <masrlinu@users.noreply.github.com> <5259918+masrlinu@users.noreply.github.com>
- vupn0712 <126212736+vupn0712@users.noreply.github.com>
- darkdragon-001 <darkdragon-001@users.noreply.github.com>
- sys6101 <csvmen@gmail.com>
- Nicolas Dessart <nds@outsight.tech>
- Qingwei Li <332664203@qq.com>


@@ -103,6 +103,26 @@ MD5 hashes are stored with blobs. However blobs that were uploaded in
chunks only have an MD5 if the source remote was capable of MD5
hashes, e.g. the local disk.
### Metadata and tags
Rclone can map arbitrary metadata to Azure Blob headers, user metadata, and tags
when `--metadata` is enabled (or when using `--metadata-set` / `--metadata-mapper`).
- Headers: Set these keys in metadata to map to the corresponding blob headers:
- `cache-control`, `content-disposition`, `content-encoding`, `content-language`, `content-type`.
- User metadata: Any other non-reserved keys are written as user metadata
(keys are normalized to lowercase). Keys starting with `x-ms-` are reserved and
are not stored as user metadata.
- Tags: Provide `x-ms-tags` as a comma-separated list of `key=value` pairs, e.g.
`x-ms-tags=env=dev,team=sync`. These are applied as blob tags on upload and on
server-side copies. Whitespace around keys/values is ignored.
- Modtime override: Provide `mtime` in RFC3339/RFC3339Nano format to override the
stored modtime persisted in user metadata. If `mtime` cannot be parsed, rclone
logs a debug message and ignores the override.
Notes:
- Rclone ignores reserved `x-ms-*` keys (except `x-ms-tags`) for user metadata.
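For example, to upload a file with blob tags and a mapped header (illustrative file and container names; `--metadata-set` is the standard rclone flag):

```console
rclone copyto --metadata \
    --metadata-set "x-ms-tags=env=dev,team=sync" \
    --metadata-set "content-type=text/plain" \
    file.txt remote:container/file.txt
```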
### Performance
When uploading large files, increasing the value of


@@ -283,7 +283,7 @@ It is useful to know how many requests are sent to the server in different scena
All copy commands send the following 4 requests:
```text
- /b2api/v1/b2_authorize_account
+ /b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names


@@ -1049,17 +1049,7 @@ The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs ---> <!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestDropbox` (`dropbox`) - `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt) - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- - `TestGoFile` (`gofile`)
- - [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- - [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- - [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- - [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- - [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- - [78 more](https://pub.rclone.org/integration-tests/current/)
- - `TestPcloud` (`pcloud`)
- - [`TestBisyncRemoteRemote/check_access`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- - [`TestBisyncRemoteRemote/check_access_filters`](https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
- - Updated: 2025-12-10-010012
+ - Updated: 2025-11-21-010037
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs ---> <!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
The following backends either have not been tested recently or have known issues The following backends either have not been tested recently or have known issues


@@ -369,7 +369,7 @@ rclone [flags]
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
- --gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default
+ --gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -1015,15 +1015,11 @@ rclone [flags]
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
-u, --update Skip files that are newer on the destination
- --uptobox-access-token string Your access token
- --uptobox-description string Description of the remote
- --uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
- --uptobox-private Set to make uploaded files private
--use-cookies Enable session cookiejar
--use-json-log Use json log format
--use-mmap Use mmap allocator (see docs)
--use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
-v, --verbose count Print lots more stuff (repeat for more)
-V, --version Print the version number
--webdav-auth-redirect Preserve authentication on redirect


@@ -231,12 +231,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
- // Output: stories/The Quick Brown Fox!-20251210
+ // Output: stories/The Quick Brown Fox!-20251121
```
```console
rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
- // Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
+ // Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
```
```console


@@ -336,7 +336,7 @@ full new copy of the file.
When mounting with `--read-only`, attempts to write to files will fail *silently*
as opposed to with a clear warning as in macFUSE.
- # Mounting on Linux
+ ## Mounting on Linux
On newer versions of Ubuntu, you may encounter the following error when running
`rclone mount`:


@@ -43,9 +43,11 @@ See the following for detailed instructions for
- [Crypt](/crypt/) - to encrypt other remotes
- [DigitalOcean Spaces](/s3/#digitalocean-spaces)
- [Digi Storage](/koofr/#digi-storage)
- [Drime](/drime/)
- [Dropbox](/dropbox/)
- [Enterprise File Fabric](/filefabric/)
- [FileLu Cloud Storage](/filelu/)
- [Filen](/filen/)
- [Files.com](/filescom/)
- [FTP](/ftp/)
- [Gofile](/gofile/)
@@ -82,13 +84,13 @@ See the following for detailed instructions for
- [rsync.net](/sftp/#rsync-net)
- [Seafile](/seafile/)
- [SFTP](/sftp/)
- [Shade](/shade/)
- [Sia](/sia/)
- [SMB](/smb/)
- [Storj](/storj/)
- [SugarSync](/sugarsync/)
- [Union](/union/)
- [Uloz.to](/ulozto/)
- - [Uptobox](/uptobox/)
- [WebDAV](/webdav/)
- [Yandex Disk](/yandex/)
- [Zoho WorkDrive](/zoho/)

docs/content/drime.md Normal file

@@ -0,0 +1,244 @@
---
title: "Drime"
description: "Rclone docs for Drime"
versionIntroduced: "v1.73"
---
# {{< icon "fa fa-cloud" >}} Drime
[Drime](https://drime.cloud/) is a cloud storage and transfer service focused
on fast, resilient file delivery. It offers both free and paid tiers with
emphasis on high-speed uploads and link sharing.
To set up Drime you need to log in, navigate to Settings > Developer, and create a
token to use as an API access key. Give it a sensible name and copy the token
for use in the config.
## Configuration
Here is a run through of `rclone config` to make a remote called `remote`.
Firstly run:
```console
rclone config
```
Then follow through the interactive setup:
```text
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
Enter name for new remote.
name> remote
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
XX / Drime
\ (drime)
Storage> drime
Option access_token.
API Access token
You can get this from the web control panel.
Enter a value. Press Enter to leave empty.
access_token> YOUR_API_ACCESS_TOKEN
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: drime
- access_token: YOUR_API_ACCESS_TOKEN
Keep this "remote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
Once configured you can then use `rclone` like this (replace `remote` with the
name you gave your remote):
List directories and files in the top level of your Drime
```console
rclone lsf remote:
```
To copy a local directory to a Drime directory called backup
```console
rclone copy /home/source remote:backup
```
### Modification times and hashes
Drime does not support modification times or hashes.
This means that by default syncs will only use the size of the file to determine
if it needs updating.
You can use the `--update` flag, which will use the time the object was uploaded.
For many operations this is sufficient to determine if it has changed. However,
files created with timestamps in the past will be missed by the sync when using
`--update`.
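For example, a sync that falls back to upload time:

```console
rclone sync --update /home/source remote:backup
```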
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| \ | 0x5C | ＼ |
File names can also not start or end with the following characters. These only
get replaced if they are the first or last character in the name:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| SP | 0x20 | ␠ |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in JSON strings.
### Root folder ID
You can set the `root_folder_id` for rclone. This is the directory
(identified by its `Folder ID`) that rclone considers to be the root
of your Drime drive.
Normally you will leave this blank and rclone will determine the
correct root to use itself and fill in the value in the config file.
However you can set this to restrict rclone to a specific folder
hierarchy.
In order to do this you will have to find the `Folder ID` of the
directory you wish rclone to display.
You can do this with rclone:
```console
$ rclone lsf -Fip --dirs-only remote:
d6341f53-ee65-4f29-9f59-d11e8070b2a0;Files/
f4f5c9b8-6ece-478b-b03e-4538edfe5a1c;Photos/
d50e356c-29ca-4b27-a3a7-494d91026e04;Videos/
```
The ID to use is the part before the `;` so you could set
```text
root_folder_id = d6341f53-ee65-4f29-9f59-d11e8070b2a0
```
To restrict rclone to the `Files` directory.
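If you prefer not to edit the config file directly, the same restriction can be
applied on the command line using the flag documented below (a sketch; the ID is
illustrative):

```console
rclone lsf --drime-root-folder-id d6341f53-ee65-4f29-9f59-d11e8070b2a0 remote:
```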
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/drime/drime.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options
Here are the Standard options specific to drime (Drime).
#### --drime-access-token
API Access token
You can get this from the web control panel.
Properties:
- Config: access_token
- Env Var: RCLONE_DRIME_ACCESS_TOKEN
- Type: string
- Required: false
### Advanced options
Here are the Advanced options specific to drime (Drime).
#### --drime-root-folder-id
ID of the root folder
Leave this blank normally, rclone will fill it in automatically.
If you want rclone to be restricted to a particular folder you can
fill it in - see the docs for more info.
Properties:
- Config: root_folder_id
- Env Var: RCLONE_DRIME_ROOT_FOLDER_ID
- Type: string
- Required: false
#### --drime-workspace-id
Account ID
Leave this blank normally, rclone will fill it in automatically.
Properties:
- Config: workspace_id
- Env Var: RCLONE_DRIME_WORKSPACE_ID
- Type: string
- Required: false
#### --drime-list-chunk
Number of items to list in each call
Properties:
- Config: list_chunk
- Env Var: RCLONE_DRIME_LIST_CHUNK
- Type: int
- Default: 1000
#### --drime-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_DRIME_ENCODING
- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,LeftSpace,RightSpace,InvalidUtf8,Dot
#### --drime-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_DRIME_DESCRIPTION
- Type: string
- Required: false
<!-- autogenerated options stop -->
## Limitations
Drime only supports filenames up to 255 bytes in length, with filenames
encoded in UTF-8.

docs/content/filen.md Normal file

@@ -0,0 +1,244 @@
---
title: "Filen"
description: "Rclone docs for Filen"
versionIntroduced: "v1.73"
---
# {{< icon "fa fa-solid fa-f" >}} Filen
## Configuration
The initial setup for Filen requires an API key for your account; currently this
can only be obtained using the [Filen CLI](https://github.com/FilenCloudDienste/filen-cli).
This means you must first download the CLI, log in, and then run the `export-api-key` command.
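The CLI session might look something like this (a sketch; the exact subcommands
and prompts may differ between CLI versions):

```
filen login
filen export-api-key
```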
Here is an example of how to make a remote called `FilenRemote`. First run:
    rclone config
This will guide you through an interactive setup process:
```
No remotes found, make a new one?
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> FilenRemote
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[snip]
XX / Filen
\ "filen"
[snip]
Storage> filen
Option Email.
The email of your Filen account
Enter a value.
Email> youremail@provider.com
Option Password.
The password of your Filen account
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Option API Key.
An API Key for your Filen account
Get this using the Filen CLI export-api-key command
You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
Choose an alternative below.
y) Yes, type in my own password
g) Generate random password
y/g> y
Enter the password:
password:
Confirm the password:
password:
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: filen
- Email: youremail@provider.com
- Password: *** ENCRYPTED ***
- API Key: *** ENCRYPTED ***
Keep this "FilenRemote" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
### Modification times and hashes
Modification times are fully supported for files; for directories, only the creation time matters.
Filen supports SHA512 hashes.
### Restricted filename characters
Invalid UTF-8 bytes will be [replaced](/overview/#invalid-utf8).
### API Key
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/filen/filen.go then run make backenddocs" >}}
### Standard options
Here are the Standard options specific to filen (Filen).
#### --filen-email
Email of your Filen account
Properties:
- Config: email
- Env Var: RCLONE_FILEN_EMAIL
- Type: string
- Required: true
#### --filen-password
Password of your Filen account
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: password
- Env Var: RCLONE_FILEN_PASSWORD
- Type: string
- Required: true
#### --filen-api-key
API Key for your Filen account
Get this using the Filen CLI export-api-key command
You can download the Filen CLI from https://github.com/FilenCloudDienste/filen-cli
**NB** Input to this must be obscured - see [rclone obscure](/commands/rclone_obscure/).
Properties:
- Config: api_key
- Env Var: RCLONE_FILEN_API_KEY
- Type: string
- Required: true
### Advanced options
Here are the Advanced options specific to filen (Filen).
#### --filen-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_FILEN_ENCODING
- Type: Encoding
- Default: Slash,Del,Ctl,InvalidUtf8,Dot
#### --filen-upload-concurrency
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded
concurrently for multipart uploads.
Note that chunks are stored in memory and there may be up to
"--transfers" * "--filen-upload-concurrency" chunks stored at once
in memory.
If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.
Properties:
- Config: upload_concurrency
- Env Var: RCLONE_FILEN_UPLOAD_CONCURRENCY
- Type: int
- Default: 16
#### --filen-master-keys
Master Keys (internal use only)
Properties:
- Config: master_keys
- Env Var: RCLONE_FILEN_MASTER_KEYS
- Type: string
- Required: false
#### --filen-private-key
Private RSA Key (internal use only)
Properties:
- Config: private_key
- Env Var: RCLONE_FILEN_PRIVATE_KEY
- Type: string
- Required: false
#### --filen-public-key
Public RSA Key (internal use only)
Properties:
- Config: public_key
- Env Var: RCLONE_FILEN_PUBLIC_KEY
- Type: string
- Required: false
#### --filen-auth-version
Authentication Version (internal use only)
Properties:
- Config: auth_version
- Env Var: RCLONE_FILEN_AUTH_VERSION
- Type: string
- Required: false
#### --filen-base-folder-uuid
UUID of Account Root Directory (internal use only)
Properties:
- Config: base_folder_uuid
- Env Var: RCLONE_FILEN_BASE_FOLDER_UUID
- Type: string
- Required: false
#### --filen-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_FILEN_DESCRIPTION
- Type: string
- Required: false
{{< rem autogenerated options stop >}}
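As a worked example of the memory note under `--filen-upload-concurrency` above:
with 2 transfers and a concurrency of 32, up to 64 chunks may be held in memory
at once (a sketch; the figures and paths are illustrative):

```
rclone copy --transfers 2 --filen-upload-concurrency 32 /data/backups FilenRemote:backups
```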


@@ -121,7 +121,7 @@ Flags for general networking and HTTP stuff.
--tpslimit float Limit HTTP transactions per second to this
--tpslimit-burst int Max burst of transactions for --tpslimit (default 1)
--use-cookies Enable session cookiejar
--user-agent string Set the user-agent to a specified string (default "rclone/v1.72.1") --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
```
@@ -638,7 +638,7 @@ Backend-only flags (these can be set in the config file also).
--gcs-description string Description of the remote
--gcs-directory-markers Upload an empty object with a trailing slash when a new directory is created
--gcs-encoding Encoding The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
--gcs-endpoint string Custom endpoint for the storage API. Leave blank to use the provider default --gcs-endpoint string Endpoint for the service
--gcs-env-auth Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
--gcs-location string Location for the newly created buckets
--gcs-no-check-bucket If set, don't attempt to check the bucket exists or create it
@@ -1138,10 +1138,6 @@ Backend-only flags (these can be set in the config file also).
--union-min-free-space SizeSuffix Minimum viable free space for lfs/eplfs policies (default 1Gi)
--union-search-policy string Policy to choose upstream on SEARCH category (default "ff")
--union-upstreams string List of space separated upstreams
--uptobox-access-token string Your access token
--uptobox-description string Description of the remote
--uptobox-encoding Encoding The encoding for the backend (default Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot)
--uptobox-private Set to make uploaded files private
--webdav-auth-redirect Preserve authentication on redirect
--webdav-bearer-token string Bearer token instead of user/pass (e.g. a Macaroon)
--webdav-bearer-token-command string Command to run to get a bearer token


@@ -498,6 +498,12 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Supports the format http://user:pass@host:port, http://host:port, http://host.
Example:
http://myUser:myPass@proxyhostname.example.com:8000
Properties:


@@ -785,14 +785,9 @@ Properties:
#### --gcs-endpoint
Custom endpoint for the storage API. Leave blank to use the provider default. Endpoint for the service.
When using a custom endpoint that includes a subpath (e.g. example.org/custom/endpoint), Leave blank normally.
the subpath will be ignored during upload operations due to a limitation in the
underlying Google API Go client library.
Download and listing operations will work correctly with the full endpoint path.
If you require subpath support for uploads, avoid using subpaths in your custom
endpoint configuration.
Properties:
@@ -800,13 +795,6 @@ Properties:
- Env Var: RCLONE_GCS_ENDPOINT
- Type: string
- Required: false
- Examples:
- "storage.example.org"
- Specify a custom endpoint
- "storage.example.org:4443"
- Specifying a custom endpoint with port
- "storage.example.org:4443/gcs/api"
- Specifying a subpath, see the note, uploads won't use the custom path!
#### --gcs-encoding


@@ -23,9 +23,11 @@ Here is an overview of the major features of each cloud storage system.
| Box | SHA1 | R/W | Yes | No | - | - |
| Citrix ShareFile | MD5 | R/W | Yes | No | - | - |
| Cloudinary | MD5 | R | No | Yes | - | - |
| Drime | - | - | No | No | R/W | - |
| Dropbox | DBHASH ¹ | R | Yes | No | - | - |
| Enterprise File Fabric | - | R/W | Yes | No | R/W | - |
| FileLu Cloud Storage | MD5 | R/W | No | Yes | R | - |
| Filen | SHA512 | R/W | Yes | No | R/W | - |
| Files.com | MD5, CRC32 | DR/W | Yes | No | R | - |
| FTP | - | R/W ¹⁰ | No | No | - | - |
| Gofile | MD5 | DR/W | No | Yes | R | - |
@@ -59,12 +61,12 @@ Here is an overview of the major features of each cloud storage system.
| Quatrix by Maytech | - | R/W | No | No | - | - |
| Seafile | - | - | No | No | - | - |
| SFTP | MD5, SHA1 ² | DR/W | Depends | No | - | - |
| Shade | - | - | Yes | No | - | - |
| Sia | - | - | No | No | - | - |
| SMB | - | R/W | Yes | No | - | - |
| SugarSync | - | - | No | No | - | - |
| Storj | - | R | No | No | - | - |
| Uloz.to | MD5, SHA256 ¹³ | - | No | Yes | - | - |
| Uptobox | - | - | No | Yes | - | - |
| WebDAV | MD5, SHA1 ³ | R ⁴ | Depends | No | - | - |
| Yandex Disk | MD5 | R/W | No | No | R | - |
| Zoho WorkDrive | - | - | No | No | - | - |
@@ -514,9 +516,11 @@ upon backend-specific capabilities.
| Backblaze B2 | No | Yes | No | No | Yes | Yes | Yes | Yes | Yes | No | No |
| Box | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
| Citrix ShareFile | Yes | Yes | Yes | Yes | No | No | No | No | No | No | Yes |
| Drime | Yes | Yes | Yes | Yes | No | No | Yes | Yes | No | No | Yes |
| Dropbox | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
| Cloudinary | No | No | No | No | No | No | Yes | No | No | No | No |
| Enterprise File Fabric | Yes | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes |
| Filen | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No | Yes | Yes |
| Files.com | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
| FTP | No | No | Yes | Yes | No | No | Yes | No | No | No | Yes |
| Gofile | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
@@ -540,7 +544,7 @@ upon backend-specific capabilities.
| OpenDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |
| OpenStack Swift | Yes ¹ | Yes | No | No | No | Yes | Yes | No | No | Yes | No |
| Oracle Object Storage | No | Yes | No | No | Yes | Yes | Yes | Yes | No | No | No |
| pCloud | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes | | pCloud | Yes | Yes | Yes | Yes | Yes | Yes | No | No | Yes | Yes | Yes |
| PikPak | Yes | Yes | Yes | Yes | Yes | No | No | No | Yes | Yes | Yes |
| Pixeldrain | Yes | No | Yes | Yes | No | No | Yes | No | Yes | Yes | Yes |
| premiumize.me | Yes | No | Yes | Yes | No | No | No | No | Yes | Yes | Yes |
@@ -555,7 +559,6 @@ upon backend-specific capabilities.
| SugarSync | Yes | Yes | Yes | Yes | No | No | Yes | No | Yes | No | Yes |
| Storj | Yes ² | Yes | Yes | No | No | Yes | Yes | No | Yes | No | No |
| Uloz.to | No | No | Yes | Yes | No | No | No | No | No | No | Yes |
| Uptobox | No | Yes | Yes | Yes | No | No | No | No | No | No | No |
| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ³ | No | No | Yes | Yes |
| Yandex Disk | Yes | Yes | Yes | Yes | Yes | No | Yes | No | Yes | Yes | Yes |
| Zoho WorkDrive | Yes | Yes | Yes | Yes | No | No | No | No | No | Yes | Yes |


@@ -173,6 +173,31 @@ So if the folder you want rclone to use is "My Music/", then use the returned
id from the ```rclone lsf``` command (ex. `dxxxxxxxx2`) as the `root_folder_id` variable
value in the config file.
### Change notifications and mounts
The pCloud backend supports real-time updates for rclone mounts via change
notifications. rclone uses pCloud's diff long-polling API to detect changes and
will automatically refresh directory listings in the mounted filesystem when
changes occur.
Notes and behavior:
- Works automatically when using `rclone mount` and requires no additional
configuration.
- Notifications are directory-scoped: when rclone detects a change, it refreshes
the affected directory so new/removed/renamed files become visible promptly.
- Updates are near real-time. The backend uses a long poll with short fallback
polling intervals, so you should see changes appear quickly without manual
refreshes.
If you want to debug or verify notifications, you can use the helper command:
```bash
rclone test changenotify remote:
```
This will log incoming change notifications for the given remote.
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/pcloud/pcloud.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options


@@ -18,6 +18,7 @@ The S3 backend can be used with a number of different providers:
{{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}} {{< provider name="China Mobile Ecloud Elastic Object Storage (EOS)" home="https://ecloud.10086.cn/home/product-introduction/eos/" config="/s3/#china-mobile-ecloud-eos" >}}
{{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}} {{< provider name="Cloudflare R2" home="https://blog.cloudflare.com/r2-open-beta/" config="/s3/#cloudflare-r2" >}}
{{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}} {{< provider name="Arvan Cloud Object Storage (AOS)" home="https://www.arvancloud.com/en/products/cloud-storage" config="/s3/#arvan-cloud" >}}
{{< provider name="Bizfly Cloud Simple Storage" home="https://bizflycloud.vn/" config="/s3/#bizflycloud" >}}
{{< provider name="Cubbit DS3" home="https://cubbit.io/ds3-cloud" config="/s3/#Cubbit" >}} {{< provider name="Cubbit DS3" home="https://cubbit.io/ds3-cloud" config="/s3/#Cubbit" >}}
{{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}} {{< provider name="DigitalOcean Spaces" home="https://www.digitalocean.com/products/object-storage/" config="/s3/#digitalocean-spaces" >}}
{{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}} {{< provider name="Dreamhost" home="https://www.dreamhost.com/cloud/storage/" config="/s3/#dreamhost" >}}
@@ -745,6 +746,68 @@ If none of these options actually end up providing `rclone` with AWS
credentials then S3 interaction will be non-authenticated (see the
[anonymous access](#anonymous-access) section for more info).
#### Assume Role (Cross-Account Access)
If you need to access S3 resources in a different AWS account, you can use IAM role assumption.
This is useful for cross-account access scenarios where you have credentials in one account
but need to access resources in another account.
To use assume role, configure the following parameters:
- `role_arn` - The ARN (Amazon Resource Name) of the IAM role to assume in the target account.
Format: `arn:aws:iam::ACCOUNT-ID:role/ROLE-NAME`
- `role_session_name` (optional) - A name for the assumed role session. If not specified,
rclone will generate one automatically.
- `role_session_duration` (optional) - Duration for which the assumed role credentials are valid.
If not specified, AWS default duration will be used (typically 1 hour).
- `role_external_id` (optional) - An external ID required by the role's trust policy for additional security.
This is typically used when the role is accessed by a third party.
The assume role feature works with both direct credentials (`env_auth = false`) and environment-based
authentication (`env_auth = true`). Rclone will first authenticate using the base credentials, then
use those credentials to assume the specified role.
Example configuration for cross-account access:
```ini
[s3-cross-account]
type = s3
provider = AWS
env_auth = true
region = us-east-1
role_arn = arn:aws:iam::123456789012:role/CrossAccountS3Role
role_session_name = rclone-session
role_external_id = unique-role-external-id-12345
```
In this example:
- Base credentials are obtained from the environment (IAM role, credentials file, or environment variables)
- These credentials are then used to assume the role `CrossAccountS3Role` in account `123456789012`
- An external ID is provided for additional security as required by the role's trust policy
The target role's trust policy in the destination account must allow the source account or user to assume it.
Example trust policy:
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::SOURCE-ACCOUNT-ID:root"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalID": "unique-role-external-id-12345"
}
}
}
]
}
```
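Once the remote is configured, a quick way to confirm that the role assumption
works is to list the buckets visible in the target account (a sketch):

```console
rclone lsd s3-cross-account:
```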
### S3 Permissions
When using the `sync` subcommand of `rclone` the following minimum
@@ -1383,21 +1446,12 @@ Properties:
- "ru-1"
- St. Petersburg
- Provider: Selectel,Servercore
- "ru-3"
- St. Petersburg
- Provider: Selectel
- "ru-7"
- Moscow
- Provider: Selectel,Servercore
- "gis-1" - "gis-1"
- Moscow - Moscow
- Provider: Selectel,Servercore - Provider: Servercore
- "kz-1" - "ru-7"
- Kazakhstan - Moscow
- Provider: Selectel - Provider: Servercore
- "uz-2"
- Uzbekistan
- Provider: Selectel
- "uz-2" - "uz-2"
- Tashkent, Uzbekistan - Tashkent, Uzbekistan
- Provider: Servercore - Provider: Servercore
@@ -2189,25 +2243,13 @@ Properties:
- SeaweedFS S3 localhost
- Provider: SeaweedFS
- "s3.ru-1.storage.selcloud.ru"
- St. Petersburg - Saint Petersburg
- Provider: Selectel
- "s3.ru-3.storage.selcloud.ru"
- St. Petersburg
- Provider: Selectel
- "s3.ru-7.storage.selcloud.ru"
- Moscow
- Provider: Selectel,Servercore
- "s3.gis-1.storage.selcloud.ru"
- Moscow
- Provider: Selectel,Servercore - Provider: Servercore
- "s3.kz-1.storage.selcloud.ru" - "s3.ru-7.storage.selcloud.ru"
- Kazakhstan - Moscow
- Provider: Selectel
- "s3.uz-2.storage.selcloud.ru"
- Uzbekistan
- Provider: Selectel
- "s3.ru-1.storage.selcloud.ru"
- Saint Petersburg
- Provider: Servercore
- "s3.uz-2.srvstorage.uz"
- Tashkent, Uzbekistan
@@ -4495,6 +4537,36 @@ server_side_encryption =
storage_class =
```
### BizflyCloud {#bizflycloud}
[Bizfly Cloud Simple Storage](https://bizflycloud.vn/simple-storage) is an
S3-compatible service with regions in Hanoi (HN) and Ho Chi Minh City (HCM).
Use the endpoint for your region:
- HN: `hn.ss.bfcplatform.vn`
- HCM: `hcm.ss.bfcplatform.vn`
A minimal configuration looks like this.
```ini
[bizfly]
type = s3
provider = BizflyCloud
env_auth = false
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
region = HN
endpoint = hn.ss.bfcplatform.vn
location_constraint =
acl =
server_side_encryption =
storage_class =
```
Switch `region` and `endpoint` to `HCM` and `hcm.ss.bfcplatform.vn` for Ho Chi
Minh City.
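You can then verify the remote works by creating a bucket and listing it (a
sketch; the bucket name is illustrative):

```console
rclone mkdir bizfly:rclone-test
rclone lsd bizfly:
```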
### Ceph
[Ceph](https://ceph.com/) is an open-source, unified, distributed


@@ -1186,6 +1186,12 @@ URL for HTTP CONNECT proxy
Set this to a URL for an HTTP proxy which supports the HTTP CONNECT verb.
Supports the format http://user:pass@host:port, http://host:port, http://host.
Example:
http://myUser:myPass@proxyhostname.example.com:8000
Properties:

docs/content/shade.md Normal file

@@ -0,0 +1,218 @@
# {{< icon "fa fa-moon" >}} Shade
This is a backend for the [Shade](https://shade.inc/) platform.
## About Shade
[Shade](https://shade.inc/) is an AI-powered cloud NAS that makes your cloud files behave like a local drive, optimized for media and creative workflows. It provides fast, secure access with natural-language search, easy sharing, and scalable cloud storage.
## Accounts & Pricing
To use this backend, you need to [create a free account](https://app.shade.inc/) on Shade. Free accounts include 20GB of storage.
## Usage
Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
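For example, once a remote named `Shade` is configured (see below), you could
list a nested folder (a sketch; the path is illustrative):

```sh
rclone ls Shade:projects/renders
```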
## Configuration
Here is an example of making a Shade configuration.
First, [create a free account](https://app.shade.inc/) and choose a plan.
You will need to log in and get the `API Key` from the settings section of your account and the `Drive ID` from the settings of the drive you created.
Now run
`rclone config`
Follow this interactive process:
```sh
$ rclone config
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
Enter name for new remote.
name> Shade
Option Storage.
Type of storage to configure.
Choose a number from below, or type in your own value.
[OTHER OPTIONS]
xx / Shade FS
\ (shade)
[OTHER OPTIONS]
Storage> xx
Option drive_id.
The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
Enter a value.
drive_id> [YOUR_ID]
Option api_key.
An API key for your account.
Enter a value.
api_key> [YOUR_API_KEY]
Edit advanced config?
y) Yes
n) No (default)
y/n> n
Configuration complete.
Options:
- type: shade
- drive_id: [YOUR_ID]
- api_key: [YOUR_API_KEY]
Keep this "Shade" remote?
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
```
### Modification times and hashes
Shade does not support hashes or setting modification times.
### Transfers
Shade uses multipart uploads by default. This means that files will be chunked and sent up to Shade concurrently. To configure how many simultaneous uploads you want to use, update the 'concurrency' option in the advanced config section. Note that this uses more memory and initiates more HTTP requests.
### Deleting files
Please note that when deleting files in Shade via rclone, the file is deleted instantly rather than sent to the trash. This means it will not be recoverable.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go then run make backenddocs" >}}
### Standard options
Here are the Standard options specific to shade (Shade FS).
#### --shade-drive-id
The ID of your drive, see this in the drive settings. Individual rclone configs must be made per drive.
Properties:
- Config: drive_id
- Env Var: RCLONE_SHADE_DRIVE_ID
- Type: string
- Required: true
#### --shade-api-key
An API key for your account. You can find this under Settings > API Keys
Properties:
- Config: api_key
- Env Var: RCLONE_SHADE_API_KEY
- Type: string
- Required: true
### Advanced options
Here are the Advanced options specific to shade (Shade FS).
#### --shade-endpoint
Endpoint for the service.
Leave blank normally.
Properties:
- Config: endpoint
- Env Var: RCLONE_SHADE_ENDPOINT
- Type: string
- Required: false
#### --shade-chunk-size
Chunk size to use for uploading.
Any files larger than this will be uploaded in chunks of this size.
Note that this is stored in memory per transfer, so increasing it will
increase memory usage.
Minimum is 5MB, maximum is 5GB.
Properties:
- Config: chunk_size
- Env Var: RCLONE_SHADE_CHUNK_SIZE
- Type: SizeSuffix
- Default: 64Mi
#### --shade-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_SHADE_ENCODING
- Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --shade-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_SHADE_DESCRIPTION
- Type: string
- Required: false
{{< rem autogenerated options stop >}}
## Limitations
Note that Shade is case insensitive so you can't have a file called
"Hello.doc" and one called "hello.doc".
Shade only supports filenames up to 255 characters in length.
`rclone about` is not supported by the Shade backend. Backends without
this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union
remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/).
## Backend commands
Here are the commands specific to the shade backend.
Run them with
    rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
See the [backend](/commands/rclone_backend/) command for more
info on how to pass options and arguments.
These can be run on a running backend using the rc command
[backend/command](/rc/#backend-command).

View File

@@ -13,7 +13,7 @@ Thank you to our sponsors:
<!-- markdownlint-capture -->
<!-- markdownlint-disable line-length no-bare-urls -->
{{< sponsor src="/img/logos/rabata/txt_1_300x114.png" width="300" height="200" title="Visit our sponsor Rabata.io" link="https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general">}} {{< sponsor src="/img/logos/rabata.svg" width="300" height="200" title="Visit our sponsor Rabata.io" link="https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general">}}
{{< sponsor src="/img/logos/idrive_e2.svg" width="300" height="200" title="Visit our sponsor IDrive e2" link="https://www.idrive.com/e2/?refer=rclone">}}
{{< sponsor src="/img/logos/filescom-enterprise-grade-workflows.png" width="300" height="200" title="Start Your Free Trial Today" link="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone">}}
{{< sponsor src="/img/logos/mega-s4.svg" width="300" height="200" title="MEGA S4: New S3 compatible object storage. High scale. Low cost. Free egress." link="https://mega.io/objectstorage?utm_source=rclone&utm_medium=referral&utm_campaign=rclone-mega-s4&mct=rclonepromo">}}


@@ -1,179 +0,0 @@
---
title: "Uptobox"
description: "Rclone docs for Uptobox"
versionIntroduced: "v1.56"
---
# {{< icon "fa fa-archive" >}} Uptobox
This is a Backend for Uptobox file storage service. Uptobox is closer to a
one-click hoster than a traditional cloud storage provider and therefore not
suitable for long term storage.
Paths are specified as `remote:path`
Paths may be as deep as required, e.g. `remote:directory/subdirectory`.
## Configuration
To configure an Uptobox backend you'll need your personal api token. You'll find
it in your [account settings](https://uptobox.com/my_account).
Here is an example of how to make a remote called `remote` with the default setup.
First run:
```console
rclone config
```
This will guide you through an interactive setup process:
```text
Current remotes:
Name Type
==== ====
TestUptobox uptobox
e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n
name> uptobox
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
[...]
37 / Uptobox
\ "uptobox"
[...]
Storage> uptobox
** See help for uptobox backend at: https://rclone.org/uptobox/ **
Your API Key, get it from https://uptobox.com/my_account
Enter a string value. Press Enter for the default ("").
api_key> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n
Remote config
--------------------
[uptobox]
type = uptobox
api_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d>
```
Once configured you can then use `rclone` like this (replace `remote` with the
name you gave your remote):
List directories in top level of your Uptobox
```console
rclone lsd remote:
```
List all the files in your Uptobox
```console
rclone ls remote:
```
To copy a local directory to an Uptobox directory called backup
```console
rclone copy /home/source remote:backup
```
### Modification times and hashes
Uptobox supports neither modified times nor checksums. All timestamps
will read as that set by `--default-time`.
### Restricted filename characters
In addition to the [default restricted characters set](/overview/#restricted-characters)
the following characters are also replaced:
| Character | Value | Replacement |
| --------- |:-----:|:-----------:|
| " | 0x22 | |
| ` | 0x41 | |
Invalid UTF-8 bytes will also be [replaced](/overview/#invalid-utf8),
as they can't be used in XML strings.
<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/uptobox/uptobox.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options
Here are the Standard options specific to uptobox (Uptobox).
#### --uptobox-access-token
Your access token.
Get it from https://uptobox.com/my_account.
Properties:
- Config: access_token
- Env Var: RCLONE_UPTOBOX_ACCESS_TOKEN
- Type: string
- Required: false
### Advanced options
Here are the Advanced options specific to uptobox (Uptobox).
#### --uptobox-private
Set to make uploaded files private
Properties:
- Config: private
- Env Var: RCLONE_UPTOBOX_PRIVATE
- Type: bool
- Default: false
#### --uptobox-encoding
The encoding for the backend.
See the [encoding section in the overview](/overview/#encoding) for more info.
Properties:
- Config: encoding
- Env Var: RCLONE_UPTOBOX_ENCODING
- Type: Encoding
- Default: Slash,LtGt,DoubleQuote,BackQuote,Del,Ctl,LeftSpace,InvalidUtf8,Dot
#### --uptobox-description
Description of the remote.
Properties:
- Config: description
- Env Var: RCLONE_UPTOBOX_DESCRIPTION
- Type: string
- Required: false
<!-- autogenerated options stop -->
## Limitations
Uptobox will delete inactive files that have not been accessed in 60 days.
`rclone about` is not supported by this backend. An overview of used space can
however be seen in the Uptobox web interface.


@@ -10,40 +10,21 @@
{{end}}
<div class="card">
<div class="card-header"> <div class="card-header">Platinum Sponsor</div>
Platinum Sponsor
</div>
<div class="card-body">
<a id="platinum" href="https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general" target="_blank" rel="noopener" title="Visit rclone's sponsor Rabata.io"><img style="width: 100%; height: auto;" src="/img/logos/rabata/txt_1_website.png"></a><br /> <a href="https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general" target="_blank" rel="noopener" title="Visit rclone's sponsor Rabata.io"><img src="/img/logos/rabata.svg"></a><br />
<script>
const imgs = [
{ href: "https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general", img: "/img/logos/rabata/txt_1_website.png" },
{ href: "https://rabata.io/?utm_source=banner&utm_medium=rclone&utm_content=general", img: "/img/logos/rabata/txt_2_website.png" },
{ href: "https://rabata.io/grant-application?utm_source=banner&utm_medium=rclone&utm_content=grant1", img: "/img/logos/rabata/100k_website.png" },
];
const img = imgs[Math.floor(Math.random() * imgs.length)];
document.addEventListener("DOMContentLoaded", () => {
const a = document.getElementById("platinum");
a.href = img.href;
a.querySelector("img").src = img.img;
});
</script>
</div>
</div>
<div class="card">
<div class="card-header"> <div class="card-header">Gold Sponsor</div>
Gold Sponsor
</div>
<div class="card-body">
<a href="https://www.idrive.com/e2/?refer=rclone" target="_blank" rel="noopener" title="Visit rclone's sponsor IDrive e2"><img src="/img/logos/idrive_e2.svg" viewBox="0 0 60 55"></a><br />
</div>
</div>
<div class="card">
<div class="card-header"> <div class="card-header">Gold Sponsor</div>
Gold Sponsor
</div>
<div class="card-body">
<a href="https://files.com/?utm_source=rclone&utm_medium=referral&utm_campaign=banner&utm_term=rclone" target="_blank" rel="noopener" title="Start Your Free Trial Today"><img style="max-width: 100%; height: auto;" src="/img/logos/filescom-enterprise-grade-workflows.png"></a><br />
</div>
@@ -51,25 +32,19 @@
{{if .IsHome}}
<div class="card">
<div class="card-header"> <div class="card-header">Silver Sponsor</div>
Silver Sponsor
</div>
<div class="card-body">
<a href="https://rcloneview.com/?utm_source=rclone" target="_blank" rel="noopener" title="Visit rclone's sponsor RcloneView"><img src="/img/logos/rcloneview-banner.svg"></a><br /> <a href="https://rcloneview.com/?utm_source=rclone" target="_blank" rel="noopener" title="Visit rclone's sponsor RcloneView"><img src="/img/logos/rcloneview.svg"></a><br />
</div>
</div>
<div class="card">
<div class="card-header"> <div class="card-header">Silver Sponsor</div>
Silver Sponsor
</div>
<div class="card-body">
<a href="https://github.com/rclone-ui/rclone-ui" target="_blank" rel="noopener" title="Visit rclone's sponsor rclone UI"><img src="/img/logos/rcloneui.svg"></a><br /> <a href="https://rcloneui.com" target="_blank" rel="noopener" title="Visit rclone's sponsor rclone UI"><img src="/img/logos/rcloneui.svg"></a><br />
</div>
</div>
<div class="card">
<div class="card-header"> <div class="card-header">Silver Sponsor</div>
Silver Sponsor
</div>
<div class="card-body">
<a href="https://shade.inc/" target="_blank" rel="noopener" title="Visit rclone's sponsor Shade"><img style="max-width: 100%; height: auto;" src="/img/logos/shade.svg"></a><br />
</div>


@@ -66,10 +66,12 @@
<a class="dropdown-item" href="/sharefile/"><i class="fas fa-share-square fa-fw"></i> Citrix ShareFile</a> <a class="dropdown-item" href="/sharefile/"><i class="fas fa-share-square fa-fw"></i> Citrix ShareFile</a>
<a class="dropdown-item" href="/crypt/"><i class="fa fa-lock fa-fw"></i> Crypt (encrypts the others)</a> <a class="dropdown-item" href="/crypt/"><i class="fa fa-lock fa-fw"></i> Crypt (encrypts the others)</a>
<a class="dropdown-item" href="/koofr/#digi-storage"><i class="fa fa-cloud fa-fw"></i> Digi Storage</a> <a class="dropdown-item" href="/koofr/#digi-storage"><i class="fa fa-cloud fa-fw"></i> Digi Storage</a>
<a class="dropdown-item" href="/drime/"><i class="fab fa-cloud fa-fw"></i> Drime</a>
<a class="dropdown-item" href="/dropbox/"><i class="fab fa-dropbox fa-fw"></i> Dropbox</a> <a class="dropdown-item" href="/dropbox/"><i class="fab fa-dropbox fa-fw"></i> Dropbox</a>
<a class="dropdown-item" href="/filefabric/"><i class="fa fa-cloud fa-fw"></i> Enterprise File Fabric</a> <a class="dropdown-item" href="/filefabric/"><i class="fa fa-cloud fa-fw"></i> Enterprise File Fabric</a>
<a class="dropdown-item" href="/filelu/"><i class="fa fa-folder fa-fw"></i> FileLu Cloud Storage</a> <a class="dropdown-item" href="/filelu/"><i class="fa fa-folder fa-fw"></i> FileLu Cloud Storage</a>
<a class="dropdown-item" href="/s3/#filelu-s5"><i class="fa fa-folder fa-fw"></i> FileLu S5 (S3-Compatible)</a> <a class="dropdown-item" href="/s3/#filelu-s5"><i class="fa fa-folder fa-fw"></i> FileLu S5 (S3-Compatible)</a>
<a class="dropdown-item" href="/filen/"><i class="fa fa-solid fa-f"></i> Filen</a>
<a class="dropdown-item" href="/filescom/"><i class="fa fa-brands fa-files-pinwheel fa-fw"></i> Files.com</a> <a class="dropdown-item" href="/filescom/"><i class="fa fa-brands fa-files-pinwheel fa-fw"></i> Files.com</a>
<a class="dropdown-item" href="/ftp/"><i class="fa fa-file fa-fw"></i> FTP</a> <a class="dropdown-item" href="/ftp/"><i class="fa fa-file fa-fw"></i> FTP</a>
<a class="dropdown-item" href="/gofile/"><i class="fa fa-folder fa-fw"></i> Gofile</a> <a class="dropdown-item" href="/gofile/"><i class="fa fa-folder fa-fw"></i> Gofile</a>
@@ -107,11 +109,11 @@
<a class="dropdown-item" href="/seafile/"><i class="fa fa-server fa-fw"></i> Seafile</a> <a class="dropdown-item" href="/seafile/"><i class="fa fa-server fa-fw"></i> Seafile</a>
<a class="dropdown-item" href="/sftp/"><i class="fa fa-server fa-fw"></i> SFTP</a> <a class="dropdown-item" href="/sftp/"><i class="fa fa-server fa-fw"></i> SFTP</a>
<a class="dropdown-item" href="/sia/"><i class="fa fa-globe fa-fw"></i> Sia</a> <a class="dropdown-item" href="/sia/"><i class="fa fa-globe fa-fw"></i> Sia</a>
<a class="dropdown-item" href="/shade/"><i class="fa fa-moon fa-fw"></i> Shade</a>
<a class="dropdown-item" href="/smb/"><i class="fa fa-server fa-fw"></i> SMB / CIFS</a> <a class="dropdown-item" href="/smb/"><i class="fa fa-server fa-fw"></i> SMB / CIFS</a>
<a class="dropdown-item" href="/storj/"><i class="fas fa-dove fa-fw"></i> Storj</a> <a class="dropdown-item" href="/storj/"><i class="fas fa-dove fa-fw"></i> Storj</a>
<a class="dropdown-item" href="/sugarsync/"><i class="fas fa-dove fa-fw"></i> SugarSync</a> <a class="dropdown-item" href="/sugarsync/"><i class="fas fa-dove fa-fw"></i> SugarSync</a>
<a class="dropdown-item" href="/ulozto/"><i class="fas fa-angle-double-down fa-fw"></i> Uloz.to</a> <a class="dropdown-item" href="/ulozto/"><i class="fas fa-angle-double-down fa-fw"></i> Uloz.to</a>
<a class="dropdown-item" href="/uptobox/"><i class="fa fa-archive fa-fw"></i> Uptobox</a>
<a class="dropdown-item" href="/union/"><i class="fa fa-link fa-fw"></i> Union (merge backends)</a> <a class="dropdown-item" href="/union/"><i class="fa fa-link fa-fw"></i> Union (merge backends)</a>
<a class="dropdown-item" href="/webdav/"><i class="fa fa-server fa-fw"></i> WebDAV</a> <a class="dropdown-item" href="/webdav/"><i class="fa fa-server fa-fw"></i> WebDAV</a>
<a class="dropdown-item" href="/yandex/"><i class="fa fa-space-shuttle fa-fw"></i> Yandex Disk</a> <a class="dropdown-item" href="/yandex/"><i class="fa fa-space-shuttle fa-fw"></i> Yandex Disk</a>


@@ -1 +1 @@
v1.72.1 v1.73.0


@@ -29,16 +29,16 @@ func (bp *BwPair) String() string {
// Set the bandwidth from a string which is either
// SizeSuffix or SizeSuffix:SizeSuffix (for tx:rx bandwidth)
func (bp *BwPair) Set(s string) (err error) {
colon := strings.Index(s, ":") before, after, ok := strings.Cut(s, ":")
stx, srx := s, ""
if colon >= 0 { if ok {
stx, srx = s[:colon], s[colon+1:] stx, srx = before, after
}
err = bp.Tx.Set(stx)
if err != nil {
return err
}
if colon < 0 { if !ok {
bp.Rx = bp.Tx
} else {
err = bp.Rx.Set(srx)

View File

@@ -16,7 +16,7 @@ func startSystemdLog(handler *OutputHandler) bool {
handler.clearFormatFlags(logFormatDate | logFormatTime | logFormatMicroseconds | logFormatUTC | logFormatLongFile | logFormatShortFile | logFormatPid)
handler.setFormatFlags(logFormatNoLevel)
handler.SetOutput(func(level slog.Level, text string) {
_ = journal.Print(slogLevelToSystemdPriority(level), "%-6s: %s\n", level, text) _ = journal.Print(slogLevelToSystemdPriority(level), "%-6s: %s", level, text)
})
return true
}


@@ -921,6 +921,18 @@ See the [hashsum](/commands/rclone_hashsum/) command for more information on the
})
}
// Parse download, base64 and hashType parameters
func parseHashParameters(in rc.Params) (download bool, base64 bool, ht hash.Type, err error) {
download, _ = in.GetBool("download")
base64, _ = in.GetBool("base64")
hashType, err := in.GetString("hashType")
if err != nil {
return
}
err = ht.Set(hashType)
return
}
// Hashsum a directory
func rcHashsum(ctx context.Context, in rc.Params) (out rc.Params, err error) {
ctx, f, err := rc.GetFsNamedFileOK(ctx, in, "fs")
@@ -928,16 +940,9 @@ func rcHashsum(ctx context.Context, in rc.Params) (out rc.Params, err error) {
return nil, err
}
download, _ := in.GetBool("download") download, base64, ht, err := parseHashParameters(in)
base64, _ := in.GetBool("base64")
hashType, err := in.GetString("hashType")
if err != nil {
return nil, fmt.Errorf("%s\n%w", hash.HelpString(0), err) return out, err
}
var ht hash.Type
err = ht.Set(hashType)
if err != nil {
return nil, fmt.Errorf("%s\n%w", hash.HelpString(0), err)
}
hashes := []string{}
@@ -948,3 +953,64 @@ func rcHashsum(ctx context.Context, in rc.Params) (out rc.Params, err error) {
}
return out, err
}
func init() {
rc.Add(rc.Call{
Path: "operations/hashsumfile",
AuthRequired: true,
Fn: rcHashsumFile,
Title: "Produces a hash for a single file.",
Help: `Produces a hash for a single file using the hash named.
This takes the following parameters:
- fs - a remote name string e.g. "drive:"
- remote - a path within that remote e.g. "file.txt"
- hashType - type of hash to be used
- download - check by downloading rather than with hash (boolean)
- base64 - output the hashes in base64 rather than hex (boolean)
If you supply the download flag, it will download the data from the
remote and create the hash on the fly. This can be useful for remotes
that don't support the given hash or if you really want to read all
the data.
Returns:
- hash - hash for the file
- hashType - type of hash used
Example:
$ rclone rc --loopback operations/hashsumfile fs=/ remote=/bin/bash hashType=MD5 download=true base64=true
{
"hashType": "md5",
"hash": "MDMw-fG2YXs7Uz5Nz-H68A=="
}
See the [hashsum](/commands/rclone_hashsum/) command for more information on the above.
`,
})
}
// Hashsum a file
func rcHashsumFile(ctx context.Context, in rc.Params) (out rc.Params, err error) {
f, remote, err := rc.GetFsAndRemote(ctx, in)
if err != nil {
return nil, err
}
download, base64, ht, err := parseHashParameters(in)
if err != nil {
return out, err
}
o, err := f.NewObject(ctx, remote)
if err != nil {
return nil, err
}
sum, err := HashSum(ctx, ht, base64, download, o)
out = rc.Params{
"hashType": ht.String(),
"hash": sum,
}
return out, err
}
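The new endpoint can also be exercised over HTTP against a running remote
control server (a sketch; `--rc-no-auth` is for local testing only):

```console
rclone rcd --rc-no-auth &
curl -s -X POST http://localhost:5572/operations/hashsumfile \
  -H 'Content-Type: application/json' \
  -d '{"fs":"remote:","remote":"file.txt","hashType":"md5"}'
```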


@@ -561,7 +561,7 @@ func TestUploadFile(t *testing.T) {
assert.NoError(t, currentFile.Close())
}()
formReader, contentType, _, err := rest.MultipartUpload(ctx, currentFile, url.Values{}, "file", testFileName) formReader, contentType, _, err := rest.MultipartUpload(ctx, currentFile, url.Values{}, "file", testFileName, "application/octet-stream")
require.NoError(t, err)
httpReq := httptest.NewRequest("POST", "/", formReader)
@@ -587,7 +587,7 @@ func TestUploadFile(t *testing.T) {
assert.NoError(t, currentFile2.Close())
}()
formReader, contentType, _, err = rest.MultipartUpload(ctx, currentFile2, url.Values{}, "file", testFileName) formReader, contentType, _, err = rest.MultipartUpload(ctx, currentFile2, url.Values{}, "file", testFileName, "application/octet-stream")
require.NoError(t, err)
httpReq = httptest.NewRequest("POST", "/", formReader)
@@ -840,7 +840,7 @@ func TestRcHashsum(t *testing.T) {
}
// operations/hashsum: hashsum a single file
func TestRcHashsumFile(t *testing.T) { func TestRcHashsumSingleFile(t *testing.T) {
ctx := context.Background()
r, call := rcNewRun(t, "operations/hashsum")
r.Mkdir(ctx, r.Fremote)
@@ -866,3 +866,27 @@ func TestRcHashsumFile(t *testing.T) {
assert.Equal(t, "md5", out["hashType"]) assert.Equal(t, "md5", out["hashType"])
assert.Equal(t, []string{"0ef726ce9b1a7692357ff70dd321d595 hashsum-file1"}, out["hashsum"]) assert.Equal(t, []string{"0ef726ce9b1a7692357ff70dd321d595 hashsum-file1"}, out["hashsum"])
} }
// operations/hashsumfile: hashsum a single file
func TestRcHashsumFile(t *testing.T) {
ctx := context.Background()
r, call := rcNewRun(t, "operations/hashsumfile")
r.Mkdir(ctx, r.Fremote)
file1Contents := "file1 contents"
file1 := r.WriteBoth(ctx, "hashsumfile-file1", file1Contents, t1)
r.CheckLocalItems(t, file1)
r.CheckRemoteItems(t, file1)
in := rc.Params{
"fs": r.FremoteName,
"remote": file1.Path,
"hashType": "MD5",
"download": true,
}
out, err := call.Fn(ctx, in)
require.NoError(t, err)
assert.Equal(t, "md5", out["hashType"])
assert.Equal(t, "0ef726ce9b1a7692357ff70dd321d595", out["hash"])
}


@@ -1301,6 +1301,7 @@ func TestSyncAfterRemovingAFileAndAddingAFileSubDirWithErrors(t *testing.T) {
err := Sync(ctx, r.Fremote, r.Flocal, false)
assert.Equal(t, fs.ErrorNotDeleting, err)
testLoggerVsLsf(ctx, r.Fremote, r.Flocal, operations.GetLoggerOpt(ctx).JSON, t)
accounting.GlobalStats().ResetCounters()
r.CheckLocalListing(
t,


@@ -13,6 +13,7 @@ import (
_ "github.com/rclone/rclone/backend/all" _ "github.com/rclone/rclone/backend/all"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/filter" "github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/fs/walk"
@@ -507,6 +508,7 @@ func TestError(t *testing.T) {
err = Sync(ctx, r.Fremote, r.Flocal, true)
// testLoggerVsLsf(ctx, r.Fremote, r.Flocal, operations.GetLoggerOpt(ctx).JSON, t)
assert.Error(t, err)
accounting.GlobalStats().ResetCounters()
r.CheckLocalListing(t, []fstest.Item{file1}, []string{"toe", "toe/toe"})
r.CheckRemoteListing(t, []fstest.Item{file1}, []string{"toe", "toe/toe"})


@@ -1,4 +1,4 @@
package fs
// VersionTag of rclone
var VersionTag = "v1.72.1" var VersionTag = "v1.73.0"


@@ -368,7 +368,7 @@ func Run(t *testing.T, opt *Opt) {
}
file1Contents string
file1MimeType = "text/csv"
file1Metadata = fs.Metadata{"rclone-test": "potato"} file1Metadata = fs.Metadata{"rclonetest": "potato"}
file2 = fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:10.123123123Z"),
Path: `hello? sausage/êé/Hello, 世界/ " ' @ < > & ? + ≠/z.txt`,
@@ -1273,11 +1273,15 @@ func Run(t *testing.T, opt *Opt) {
assert.Equal(t, file2Copy.Path, dst.Remote())
// check that mutating dst does not mutate src
if !strings.Contains(fs.ConfigStringFull(f), "copy_is_hardlink") {
err = dst.SetModTime(ctx, fstest.Time("2004-03-03T04:05:06.499999999Z"))
if err != fs.ErrorCantSetModTimeWithoutDelete && err != fs.ErrorCantSetModTime {
assert.NoError(t, err)
// Re-read the source and check its modtime
src = fstest.NewObject(ctx, t, f, src.Remote())
assert.False(t, src.ModTime(ctx).Equal(dst.ModTime(ctx)), "mutating dst should not mutate src -- is it Copying by pointer?")
}
}
// Delete copy
err = dst.Remove(ctx)

View File

@@ -164,6 +164,9 @@ backends:
- backend: "gofile" - backend: "gofile"
remote: "TestGoFile:" remote: "TestGoFile:"
fastlist: true fastlist: true
- backend: "filen"
remote: "TestFilen:"
fastlist: false
- backend: "filescom" - backend: "filescom"
remote: "TestFilesCom:" remote: "TestFilesCom:"
fastlist: false fastlist: false
@@ -624,11 +627,6 @@ backends:
- TestSyncUTFNorm - TestSyncUTFNorm
ignoretests: ignoretests:
- cmd/gitannex - cmd/gitannex
# - backend: "uptobox"
# remote: "TestUptobox:"
# fastlist: false
# ignore:
# - TestRWFileHandleWriteNoWrite
- backend: "oracleobjectstorage" - backend: "oracleobjectstorage"
remote: "TestOracleObjectStorage:" remote: "TestOracleObjectStorage:"
fastlist: true fastlist: true
@@ -662,6 +660,10 @@ backends:
ignoretests: ignoretests:
- cmd/bisync - cmd/bisync
- cmd/gitannex - cmd/gitannex
- backend: "shade"
remote: "TestShade:"
fastlist: false
- backend: "archive" - backend: "archive"
remote: "TestArchive:" remote: "TestArchive:"
fastlist: false fastlist: false
@@ -673,3 +675,9 @@ backends:
# with the parent backend having a different precision. # with the parent backend having a different precision.
- TestServerSideCopyOverSelf - TestServerSideCopyOverSelf
- TestServerSideMoveOverSelf - TestServerSideMoveOverSelf
- backend: "drime"
remote: "TestDrime:"
ignoretests:
# The TestBisyncRemoteLocal/check_access_filters tests fail due to duplicated objects
- cmd/bisync
fastlist: false

4
go.mod
View File

@@ -11,6 +11,7 @@ require (
 	github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
 	github.com/Azure/azure-sdk-for-go/sdk/storage/azfile v1.5.3
 	github.com/Azure/go-ntlmssp v0.0.2-0.20251110135918-10b7b7e7cd26
+	github.com/FilenCloudDienste/filen-sdk-go v0.0.34
 	github.com/Files-com/files-sdk-go/v3 v3.2.264
 	github.com/Max-Sum/base32768 v0.0.0-20230304063302-18e6ce5945fd
 	github.com/a1ex3/zstd-seekable-format-go/pkg v0.10.0
@@ -25,6 +26,7 @@ require (
 	github.com/aws/aws-sdk-go-v2/credentials v1.18.21
 	github.com/aws/aws-sdk-go-v2/feature/s3/manager v1.20.4
 	github.com/aws/aws-sdk-go-v2/service/s3 v1.90.0
+	github.com/aws/aws-sdk-go-v2/service/sts v1.39.1
 	github.com/aws/smithy-go v1.23.2
 	github.com/buengese/sgzip v0.1.1
 	github.com/cloudinary/cloudinary-go/v2 v2.13.0
@@ -133,7 +135,6 @@ require (
 	github.com/aws/aws-sdk-go-v2/service/internal/s3shared v1.19.13 // indirect
 	github.com/aws/aws-sdk-go-v2/service/sso v1.30.1 // indirect
 	github.com/aws/aws-sdk-go-v2/service/ssooidc v1.35.5 // indirect
-	github.com/aws/aws-sdk-go-v2/service/sts v1.39.1 // indirect
 	github.com/bahlo/generic-list-go v0.2.0 // indirect
 	github.com/beorn7/perks v1.0.1 // indirect
 	github.com/bodgit/plumbing v1.3.0 // indirect
@@ -155,6 +156,7 @@ require (
 	github.com/cronokirby/saferith v0.33.0 // indirect
 	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
 	github.com/dsnet/compress v0.0.2-0.20230904184137-39efe44ab707 // indirect
+	github.com/dromara/dongle v1.0.1 // indirect
 	github.com/dustin/go-humanize v1.0.1 // indirect
 	github.com/ebitengine/purego v0.9.1 // indirect
 	github.com/emersion/go-message v0.18.2 // indirect

10
go.sum
View File

@@ -61,6 +61,8 @@ github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 h1:XRzhVemXdgv
 github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0/go.mod h1:HKpQxkWaGLJ+D/5H8QRpyQXA1eKjxkFlOMwck5+33Jk=
 github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
 github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/FilenCloudDienste/filen-sdk-go v0.0.34 h1:Fd/wagh/Qn35p3PkCUYubmaELATQlYGC9pxpJ9TkHUE=
+github.com/FilenCloudDienste/filen-sdk-go v0.0.34/go.mod h1:XkI1Iq30/tU8vk4Zd1cKr2cCTiFqBEf0ZfG4+KKUBrY=
 github.com/Files-com/files-sdk-go/v3 v3.2.264 h1:lMHTplAYI9FtmCo/QOcpRxmPA5REVAct1r2riQmDQKw=
 github.com/Files-com/files-sdk-go/v3 v3.2.264/go.mod h1:wGqkOzRu/ClJibvDgcfuJNAqI2nLhe8g91tPlDKRCdE=
 github.com/IBM/go-sdk-core/v5 v5.21.0 h1:DUnYhvC4SoC8T84rx5omnhY3+xcQg/Whyoa3mDPIMkk=
@@ -232,6 +234,8 @@ github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
 github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
 github.com/dop251/scsu v0.0.0-20220106150536-84ac88021d00 h1:xJBhC00smQpSZw3Kr0ErMUBXhUSjYoLRm2szxdbRBL0=
 github.com/dop251/scsu v0.0.0-20220106150536-84ac88021d00/go.mod h1:nNICngOdmNImBb/vuL+dSc0aIg3ryNATpjxThNoPw4g=
+github.com/dromara/dongle v1.0.1 h1:si/7UP/EXxnFVZok1cNos70GiMGxInAYMilHQFP5dJs=
+github.com/dromara/dongle v1.0.1/go.mod h1:ebFhTaDgxaDIKppycENTWlBsxz8mWCPWOLnsEgDpMv4=
 github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5 h1:FT+t0UEDykcor4y3dMVKXIiWJETBpRgERYTGlmMd7HU=
 github.com/dropbox/dropbox-sdk-go-unofficial/v6 v6.0.5/go.mod h1:rSS3kM9XMzSQ6pw91Qgd6yB5jdt70N4OdtrAf74As5M=
 github.com/dsnet/compress v0.0.2-0.20230904184137-39efe44ab707 h1:2tV76y6Q9BB+NEBasnqvs7e49aEBFI8ejC89PSnWH+4=
@@ -249,6 +253,7 @@ github.com/emersion/go-message v0.18.2 h1:rl55SQdjd9oJcIoQNhubD2Acs1E6IzlZISRTK7
 github.com/emersion/go-message v0.18.2/go.mod h1:XpJyL70LwRvq2a8rVbHXikPgKj8+aI0kGdHlg16ibYA=
 github.com/emersion/go-vcard v0.0.0-20241024213814-c9703dde27ff h1:4N8wnS3f1hNHSmFD5zgFkWCyA4L1kCDkImPAtK7D6tg=
 github.com/emersion/go-vcard v0.0.0-20241024213814-c9703dde27ff/go.mod h1:HMJKR5wlh/ziNp+sHEDV2ltblO4JD2+IdDOWtGcQBTM=
+github.com/emmansun/gmsm v0.15.5/go.mod h1:2m4jygryohSWkaSduFErgCwQKab5BNjURoFrn2DNwyU=
 github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
 github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
 github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
@@ -748,6 +753,7 @@ golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPh
 golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.4.0/go.mod h1:3quD/ATkf6oY+rnes5c3ExXTbLc8mueNue5/DoinL80=
 golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58=
 golang.org/x/crypto v0.7.0/go.mod h1:pYwdfH91IfpZVANVyUOhSIPZaFoJGxTFbZhFTx+dXZU=
 golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
@@ -828,6 +834,7 @@ golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwY
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
 golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
 golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
 golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
@@ -904,6 +911,7 @@ golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -916,6 +924,7 @@ golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA=
 golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
 golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
 golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
@@ -932,6 +941,7 @@ golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
 golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
 golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
 golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=

View File

@@ -361,9 +361,6 @@ func (dc *DirCache) RootParentID(ctx context.Context, create bool) (ID string, e
 	} else if dc.rootID == dc.trueRootID {
 		return "", errors.New("is root directory")
 	}
-	if dc.rootParentID == "" {
-		return "", errors.New("internal error: didn't find rootParentID")
-	}
 	return dc.rootParentID, nil
 }

View File

@@ -3,6 +3,7 @@ package proxy
import ( import (
"bufio" "bufio"
"crypto/tls" "crypto/tls"
"encoding/base64"
"fmt" "fmt"
"net" "net"
"net/http" "net/http"
@@ -55,7 +56,13 @@ func HTTPConnectDial(network, addr string, proxyURL *url.URL, proxyDialer proxy.
} }
// send CONNECT // send CONNECT
user := proxyURL.User
if user != nil {
credential := base64.StdEncoding.EncodeToString([]byte(user.String()))
_, err = fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\nProxy-Authorization: Basic %s\r\n\r\n", addr, addr, credential)
} else {
_, err = fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", addr, addr) _, err = fmt.Fprintf(conn, "CONNECT %s HTTP/1.1\r\nHost: %s\r\n\r\n", addr, addr)
}
if err != nil { if err != nil {
_ = conn.Close() _ = conn.Close()
return nil, fmt.Errorf("HTTP CONNECT proxy failed to send CONNECT: %q", err) return nil, fmt.Errorf("HTTP CONNECT proxy failed to send CONNECT: %q", err)
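For reference, the Basic credential built above is just the base64 encoding of the `user:pass` pair taken from the proxy URL, per RFC 7617. A minimal standalone sketch of the same construction (the host names and credentials are hypothetical, not part of the diff):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"net/url"
)

func main() {
	// Parse an http_proxy style URL carrying credentials (hypothetical values).
	u, err := url.Parse("http://alice:secret@proxy.example.com:3128")
	if err != nil {
		panic(err)
	}
	// RFC 7617 Basic scheme: base64("user:pass"), as in HTTPConnectDial above.
	credential := base64.StdEncoding.EncodeToString([]byte(u.User.String()))
	// The CONNECT request sent through the proxy to reach the origin.
	fmt.Printf("CONNECT %s HTTP/1.1\r\nHost: %s\r\nProxy-Authorization: Basic %s\r\n\r\n",
		"origin.example.com:443", "origin.example.com:443", credential)
}
```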

View File

@@ -14,7 +14,9 @@ import (
"maps" "maps"
"mime/multipart" "mime/multipart"
"net/http" "net/http"
"net/textproto"
"net/url" "net/url"
"strings"
"sync" "sync"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
@@ -145,6 +147,7 @@ type Opts struct {
MultipartMetadataName string // ..this is used for the name of the metadata form part if set MultipartMetadataName string // ..this is used for the name of the metadata form part if set
MultipartContentName string // ..name of the parameter which is the attached file MultipartContentName string // ..name of the parameter which is the attached file
MultipartFileName string // ..name of the file for the attached file MultipartFileName string // ..name of the file for the attached file
MultipartContentType string // ..content type of the attached file
Parameters url.Values // any parameters for the final URL Parameters url.Values // any parameters for the final URL
TransferEncoding []string // transfer encoding, set to "identity" to disable chunked encoding TransferEncoding []string // transfer encoding, set to "identity" to disable chunked encoding
Trailer *http.Header // set the request trailer Trailer *http.Header // set the request trailer
@@ -371,6 +374,32 @@ func (api *Client) Call(ctx context.Context, opts *Opts) (resp *http.Response, e
return resp, nil return resp, nil
} }
var quoteEscaper = strings.NewReplacer("\\", "\\\\", `"`, "\\\"")
func escapeQuotes(s string) string {
return quoteEscaper.Replace(s)
}
// multipartFileContentDisposition returns the value of a Content-Disposition header
// with the provided field name and file name.
func multipartFileContentDisposition(fieldname, filename string) string {
return fmt.Sprintf(`form-data; name="%s"; filename="%s"`,
escapeQuotes(fieldname), escapeQuotes(filename))
}
// CreateFormFile is a convenience wrapper around [Writer.CreatePart]. It creates
// a new form-data header with the provided field name and file name.
func CreateFormFile(w *multipart.Writer, fieldname, filename, contentType string) (io.Writer, error) {
h := make(textproto.MIMEHeader)
// FIXME when go1.24 is no longer supported, change to
// multipart.FileContentDisposition and remove definition above
h.Set("Content-Disposition", multipartFileContentDisposition(fieldname, filename))
if contentType != "" {
h.Set("Content-Type", contentType)
}
return w.CreatePart(h)
}
// MultipartUpload creates an io.Reader which produces an encoded a // MultipartUpload creates an io.Reader which produces an encoded a
// multipart form upload from the params passed in and the passed in // multipart form upload from the params passed in and the passed in
// //
@@ -382,10 +411,10 @@ func (api *Client) Call(ctx context.Context, opts *Opts) (resp *http.Response, e
// the int64 returned is the overhead in addition to the file contents, in case Content-Length is required // the int64 returned is the overhead in addition to the file contents, in case Content-Length is required
// //
// NB This doesn't allow setting the content type of the attachment // NB This doesn't allow setting the content type of the attachment
func MultipartUpload(ctx context.Context, in io.Reader, params url.Values, contentName, fileName string) (io.ReadCloser, string, int64, error) { func MultipartUpload(ctx context.Context, in io.Reader, params url.Values, contentName, fileName string, contentType string) (io.ReadCloser, string, int64, error) {
bodyReader, bodyWriter := io.Pipe() bodyReader, bodyWriter := io.Pipe()
writer := multipart.NewWriter(bodyWriter) writer := multipart.NewWriter(bodyWriter)
contentType := writer.FormDataContentType() formContentType := writer.FormDataContentType()
// Create a Multipart Writer as base for calculating the Content-Length // Create a Multipart Writer as base for calculating the Content-Length
buf := &bytes.Buffer{} buf := &bytes.Buffer{}
@@ -404,7 +433,7 @@ func MultipartUpload(ctx context.Context, in io.Reader, params url.Values, conte
} }
} }
if in != nil { if in != nil {
_, err = dummyMultipartWriter.CreateFormFile(contentName, fileName) _, err = CreateFormFile(dummyMultipartWriter, contentName, fileName, contentType)
if err != nil { if err != nil {
return nil, "", 0, err return nil, "", 0, err
} }
@@ -445,7 +474,7 @@ func MultipartUpload(ctx context.Context, in io.Reader, params url.Values, conte
} }
if in != nil { if in != nil {
part, err := writer.CreateFormFile(contentName, fileName) part, err := CreateFormFile(writer, contentName, fileName, contentType)
if err != nil { if err != nil {
_ = bodyWriter.CloseWithError(fmt.Errorf("failed to create form file: %w", err)) _ = bodyWriter.CloseWithError(fmt.Errorf("failed to create form file: %w", err))
return return
@@ -467,7 +496,7 @@ func MultipartUpload(ctx context.Context, in io.Reader, params url.Values, conte
_ = bodyWriter.Close() _ = bodyWriter.Close()
}() }()
return bodyReader, contentType, multipartLength, nil return bodyReader, formContentType, multipartLength, nil
} }
// CallJSON runs Call and decodes the body as a JSON object into response (if not nil) // CallJSON runs Call and decodes the body as a JSON object into response (if not nil)
@@ -539,7 +568,7 @@ func (api *Client) callCodec(ctx context.Context, opts *Opts, request any, respo
opts = opts.Copy() opts = opts.Copy()
var overhead int64 var overhead int64
opts.Body, opts.ContentType, overhead, err = MultipartUpload(ctx, opts.Body, params, opts.MultipartContentName, opts.MultipartFileName) opts.Body, opts.ContentType, overhead, err = MultipartUpload(ctx, opts.Body, params, opts.MultipartContentName, opts.MultipartFileName, opts.MultipartContentType)
if err != nil { if err != nil {
return nil, err return nil, err
} }
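Taken together, these hunks let a caller set an explicit Content-Type on the attached file part instead of the application/octet-stream default. A minimal usage sketch under stated assumptions (the endpoint, path, field names and file are hypothetical, not from the diff; it assumes setting MultipartContentName is enough to route CallJSON through the multipart path, as the last hunk suggests):

```go
package main

import (
	"context"
	"os"

	"github.com/rclone/rclone/fs/fshttp"
	"github.com/rclone/rclone/lib/rest"
)

func main() {
	ctx := context.Background()
	in, err := os.Open("photo.jpg")
	if err != nil {
		panic(err)
	}
	defer in.Close()

	// A rest.Client pointed at a hypothetical API endpoint.
	client := rest.NewClient(fshttp.NewClient(ctx)).SetRoot("https://api.example.com")
	opts := rest.Opts{
		Method:               "POST",
		Path:                 "/upload", // hypothetical path
		Body:                 in,
		MultipartContentName: "file",
		MultipartFileName:    "photo.jpg",
		// New in this change: without this the file part would be
		// sent as application/octet-stream.
		MultipartContentType: "image/jpeg",
	}
	var result map[string]any
	if _, err := client.CallJSON(ctx, &opts, nil, &result); err != nil {
		panic(err)
	}
}
```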

View File

@@ -218,12 +218,12 @@ rclone convmv "stories/The Quick Brown Fox!.txt" --name-transform "all,command=e
 ```console
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{YYYYMMDD}"
-// Output: stories/The Quick Brown Fox!-20251210
+// Output: stories/The Quick Brown Fox!-20251121
 ```

 ```console
 rclone convmv "stories/The Quick Brown Fox!" --name-transform "date=-{macfriendlytime}"
-// Output: stories/The Quick Brown Fox!-2025-12-10 1253PM
+// Output: stories/The Quick Brown Fox!-2025-11-21 0508PM
 ```

 ```console

185
rclone.1 generated
View File

@@ -15,7 +15,7 @@
 . ftr VB CB
 . ftr VBI CBI
 .\}
-.TH "rclone" "1" "Dec 10, 2025" "User Manual" ""
+.TH "rclone" "1" "Nov 21, 2025" "User Manual" ""
 .hy
 .SH NAME
 .PP
@@ -6260,14 +6260,14 @@ rclone convmv \[dq]stories/The Quick Brown Fox!.txt\[dq] --name-transform \[dq]a
 .nf
 \f[C]
 rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{YYYYMMDD}\[dq]
-// Output: stories/The Quick Brown Fox!-20251210
+// Output: stories/The Quick Brown Fox!-20251121
 \f[R]
 .fi
 .IP
 .nf
 \f[C]
 rclone convmv \[dq]stories/The Quick Brown Fox!\[dq] --name-transform \[dq]date=-{macfriendlytime}\[dq]
-// Output: stories/The Quick Brown Fox!-2025-12-10 1247PM
+// Output: stories/The Quick Brown Fox!-2025-11-21 0505PM
 \f[R]
 .fi
 .IP
@@ -31741,7 +31741,7 @@ Flags for general networking and HTTP stuff.
       --tpslimit float          Limit HTTP transactions per second to this
       --tpslimit-burst int      Max burst of transactions for --tpslimit (default 1)
       --use-cookies             Enable session cookiejar
-      --user-agent string       Set the user-agent to a specified string (default \[dq]rclone/v1.72.1\[dq])
+      --user-agent string       Set the user-agent to a specified string (default \[dq]rclone/v1.72.0\[dq])
 \f[R]
 .fi
 .SS Performance
@@ -32258,7 +32258,7 @@ Backend-only flags (these can be set in the config file also).
       --gcs-description string   Description of the remote
       --gcs-directory-markers    Upload an empty object with a trailing slash when a new directory is created
       --gcs-encoding Encoding    The encoding for the backend (default Slash,CrLf,InvalidUtf8,Dot)
-      --gcs-endpoint string      Custom endpoint for the storage API. Leave blank to use the provider default
+      --gcs-endpoint string      Endpoint for the service
       --gcs-env-auth             Get GCP IAM credentials from runtime (environment variables or instance meta data if no env vars)
       --gcs-location string      Location for the newly created buckets
       --gcs-no-check-bucket      If set, don\[aq]t attempt to check the bucket exists or create it
@@ -34968,31 +34968,7 @@ The following backends have known issues that need more investigation:
 \f[V]TestBisyncRemoteRemote/normalization\f[R] (https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
 .RE
 .IP \[bu] 2
-\f[V]TestGoFile\f[R] (\f[V]gofile\f[R])
-.RS 2
-.IP \[bu] 2
-\f[V]TestBisyncRemoteLocal/all_changed\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-.IP \[bu] 2
-\f[V]TestBisyncRemoteLocal/backupdir\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-.IP \[bu] 2
-\f[V]TestBisyncRemoteLocal/basic\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-.IP \[bu] 2
-\f[V]TestBisyncRemoteLocal/changes\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-.IP \[bu] 2
-\f[V]TestBisyncRemoteLocal/check_access\f[R] (https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
-.IP \[bu] 2
-78 more (https://pub.rclone.org/integration-tests/current/)
-.RE
-.IP \[bu] 2
-\f[V]TestPcloud\f[R] (\f[V]pcloud\f[R])
-.RS 2
-.IP \[bu] 2
-\f[V]TestBisyncRemoteRemote/check_access\f[R] (https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
-.IP \[bu] 2
-\f[V]TestBisyncRemoteRemote/check_access_filters\f[R] (https://pub.rclone.org/integration-tests/current/pcloud-cmd.bisync-TestPcloud-1.txt)
-.RE
-.IP \[bu] 2
-Updated: 2025-12-10-010012
+Updated: 2025-11-21-010037
 .PP
 The following backends either have not been tested recently or have
 known issues that are deemed unfixable for the time being:
@@ -39287,13 +39263,12 @@ Petersburg
 Provider: Selectel,Servercore
 .RE
 .IP \[bu] 2
-\[dq]ru-3\[dq]
+\[dq]gis-1\[dq]
 .RS 2
 .IP \[bu] 2
-St.
-Petersburg
+Moscow
 .IP \[bu] 2
-Provider: Selectel
+Provider: Servercore
 .RE
 .IP \[bu] 2
 \[dq]ru-7\[dq]
@@ -39301,31 +39276,7 @@ Provider: Selectel
 .IP \[bu] 2
 Moscow
 .IP \[bu] 2
-Provider: Selectel,Servercore
-.RE
-.IP \[bu] 2
-\[dq]gis-1\[dq]
-.RS 2
-.IP \[bu] 2
-Moscow
-.IP \[bu] 2
-Provider: Selectel,Servercore
-.RE
-.IP \[bu] 2
-\[dq]kz-1\[dq]
-.RS 2
-.IP \[bu] 2
-Kazakhstan
-.IP \[bu] 2
-Provider: Selectel
-.RE
-.IP \[bu] 2
-\[dq]uz-2\[dq]
-.RS 2
-.IP \[bu] 2
-Uzbekistan
-.IP \[bu] 2
-Provider: Selectel
+Provider: Servercore
 .RE
 .IP \[bu] 2
 \[dq]uz-2\[dq]
@@ -41420,25 +41371,7 @@ Provider: SeaweedFS
 \[dq]s3.ru-1.storage.selcloud.ru\[dq]
 .RS 2
 .IP \[bu] 2
-St.
-Petersburg
-.IP \[bu] 2
-Provider: Selectel
-.RE
-.IP \[bu] 2
-\[dq]s3.ru-3.storage.selcloud.ru\[dq]
-.RS 2
-.IP \[bu] 2
-St.
-Petersburg
-.IP \[bu] 2
-Provider: Selectel
-.RE
-.IP \[bu] 2
-\[dq]s3.ru-7.storage.selcloud.ru\[dq]
-.RS 2
-.IP \[bu] 2
-Moscow
+Saint Petersburg
 .IP \[bu] 2
 Provider: Selectel,Servercore
 .RE
@@ -41448,29 +41381,13 @@ Provider: Selectel,Servercore
 .IP \[bu] 2
 Moscow
 .IP \[bu] 2
-Provider: Selectel,Servercore
+Provider: Servercore
 .RE
 .IP \[bu] 2
-\[dq]s3.kz-1.storage.selcloud.ru\[dq]
+\[dq]s3.ru-7.storage.selcloud.ru\[dq]
 .RS 2
 .IP \[bu] 2
-Kazakhstan
-.IP \[bu] 2
-Provider: Selectel
-.RE
-.IP \[bu] 2
-\[dq]s3.uz-2.storage.selcloud.ru\[dq]
-.RS 2
-.IP \[bu] 2
-Uzbekistan
-.IP \[bu] 2
-Provider: Selectel
-.RE
-.IP \[bu] 2
-\[dq]s3.ru-1.storage.selcloud.ru\[dq]
-.RS 2
-.IP \[bu] 2
-Saint Petersburg
+Moscow
 .IP \[bu] 2
 Provider: Servercore
 .RE
@@ -57528,9 +57445,6 @@ With support for high storage limits and seamless integration with
rclone, FileLu makes managing files in the cloud easy. rclone, FileLu makes managing files in the cloud easy.
Its cross-platform file backup services let you upload and back up files Its cross-platform file backup services let you upload and back up files
from any internet-connected device. from any internet-connected device.
.PP
\f[B]Note\f[R] FileLu now has a fully featured S3 backend FileLu S5, an
industry standard S3 compatible object store.
.SS Configuration .SS Configuration
.PP .PP
Here is an example of how to make a remote called \f[V]filelu\f[R]. Here is an example of how to make a remote called \f[V]filelu\f[R].
@@ -60302,17 +60216,9 @@ Type: bool
 Default: false
 .SS --gcs-endpoint
 .PP
-Custom endpoint for the storage API.
-Leave blank to use the provider default.
+Endpoint for the service.
 .PP
-When using a custom endpoint that includes a subpath (e.g.
-example.org/custom/endpoint), the subpath will be ignored during upload
-operations due to a limitation in the underlying Google API Go client
-library.
-Download and listing operations will work correctly with the full
-endpoint path.
-If you require subpath support for uploads, avoid using subpaths in your
-custom endpoint configuration.
+Leave blank normally.
 .PP
 Properties:
 .IP \[bu] 2
@@ -60323,29 +60229,6 @@ Env Var: RCLONE_GCS_ENDPOINT
 Type: string
 .IP \[bu] 2
 Required: false
-.IP \[bu] 2
-Examples:
-.RS 2
-.IP \[bu] 2
-\[dq]storage.example.org\[dq]
-.RS 2
-.IP \[bu] 2
-Specify a custom endpoint
-.RE
-.IP \[bu] 2
-\[dq]storage.example.org:4443\[dq]
-.RS 2
-.IP \[bu] 2
-Specifying a custom endpoint with port
-.RE
-.IP \[bu] 2
-\[dq]storage.example.org:4443/gcs/api\[dq]
-.RS 2
-.IP \[bu] 2
-Specifying a subpath, see the note, uploads won\[aq]t use the custom
-path!
-.RE
-.RE
 .SS --gcs-encoding
 .PP
 The encoding for the backend.
@@ -60674,7 +60557,7 @@ In the next field, \[dq]OAuth Scopes\[dq], enter
 access to Google Drive specifically.
 You can also use
 \f[V]https://www.googleapis.com/auth/drive.readonly\f[R] for read only
-access with \f[V]--drive-scope=drive.readonly\f[R].
+access.
 .IP \[bu] 2
 Click \[dq]Authorise\[dq]
 .SS 3. Configure rclone, assuming a new install
@@ -87232,40 +87115,6 @@ Options:
 .IP \[bu] 2
 \[dq]error\[dq]: Return an error based on option value.
 .SH Changelog
-.SS v1.72.1 - 2025-12-10
-.PP
-See commits (https://github.com/rclone/rclone/compare/v1.72.0...v1.72.1)
-.IP \[bu] 2
-Bug Fixes
-.RS 2
-.IP \[bu] 2
-build: update to go1.25.5 to fix
-CVE-2025-61729 (https://pkg.go.dev/vuln/GO-2025-4155)
-.IP \[bu] 2
-doc fixes (Duncan Smart, Nick Craig-Wood)
-.IP \[bu] 2
-configfile: Fix piped config support (Jonas Tingeborn)
-.IP \[bu] 2
-log
-.RS 2
-.IP \[bu] 2
-Fix PID not included in JSON log output (Tingsong Xu)
-.IP \[bu] 2
-Fix backtrace not going to the --log-file (Nick Craig-Wood)
-.RE
-.RE
-.IP \[bu] 2
-Google Cloud Storage
-.RS 2
-.IP \[bu] 2
-Improve endpoint parameter docs (Johannes Rothe)
-.RE
-.IP \[bu] 2
-S3
-.RS 2
-.IP \[bu] 2
-Add missing regions for Selectel provider (Nick Craig-Wood)
-.RE
 .SS v1.72.0 - 2025-11-21
 .PP
 See commits (https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)