mirror of https://github.com/rclone/rclone.git synced 2025-12-06 00:03:32 +00:00

Compare commits


93 Commits

Author SHA1 Message Date
Duncan Smart
886ac7af1d docs: Clarify OAuth scopes for readonly Google Drive access 2025-11-24 15:58:53 +00:00
Diana
3c40238f02 b2: support authentication with new bucket restricted application keys
Backblaze has updated its b2_authorize_account API endpoint: newly created
application keys are now "multi-bucket" keys, capable of being limited to
multiple buckets. These keys can only be used with the v4 endpoint, not v1,
which returns an HTTP 400.

This commit switches authorization to the v4 endpoint, allowing such keys to
work with any of the allowed buckets.

With multi-bucket keys, missing restricted buckets can be non-fatal.

Also supports listing the root with multi-bucket API keys.
2025-11-24 15:46:41 +00:00
Nick Craig-Wood
46ca0dd7fe docs: update sponsor logos 2025-11-24 14:58:33 +00:00
Nick Craig-Wood
2e968e7ce0 docs: fix lint error in changelog 2025-11-21 18:23:16 +00:00
Nick Craig-Wood
1886c552db Start v1.73.0-DEV development 2025-11-21 18:23:07 +00:00
Nick Craig-Wood
38ab3dd5b1 Version v1.72.0 2025-11-21 17:10:17 +00:00
Nick Craig-Wood
1d02e1219a rc: fix formatting in job/batch 2025-11-21 17:06:18 +00:00
Nick Craig-Wood
035d3f344c test speed: fix formatting of help 2025-11-21 17:02:45 +00:00
Nick Craig-Wood
7d45aee70f docs: update sponsor logos 2025-11-21 12:48:29 +00:00
dependabot[bot]
f30789180d build: bump actions/checkout from 5 to 6
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-20 23:16:24 +00:00
Sean Turner
7cb05a84e9 s3: add multi-part-upload support for If-Match and If-None-Match
#8947 implemented support for the If-Match and If-None-Match headers for S3 PUT
Object requests; however, this support did not extend to multi-part copy and
upload requests. These headers are implemented via inclusion in the
CompleteMultipartUpload request.

This also updates the autogenerated code, which was needed for multipart copy.
2025-11-20 17:31:15 +00:00
Nick Craig-Wood
6d4c625bfb rc: config/unlock: rename parameter to configPassword accept old as well
We accidentally added a non-`camelCase` parameter to the rc
(`config_password`). This fixes it (to `configPassword`) but accepts
the old name too, as it has been in a release.
2025-11-20 16:46:01 +00:00
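As a hedged illustration using the parameter names from the commit message, unlocking an encrypted config over the rc looks like:

```console
# New camelCase parameter name
rclone rc config/unlock configPassword=MySecretPassword
# The old snake_case name is still accepted for backwards compatibility
rclone rc config/unlock config_password=MySecretPassword
```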
Nick Craig-Wood
4eccc40168 rc: correct names of parameters in job/list output
These were accidentally committed as snake_case whereas we use
camelCase elsewhere.

This corrects the issue before the first release in v1.72.0
2025-11-20 16:46:01 +00:00
Nick Craig-Wood
e451f9c999 Add Nikolay Kiryanov to contributors 2025-11-20 16:46:01 +00:00
Nikolay Kiryanov
321488441e rc: add executeId to job statuses - fixes #8972 2025-11-20 13:15:22 +00:00
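As a hedged illustration (field values invented, other output fields elided), the new field should show up in job/status output:

```console
$ rclone rc job/status jobid=1
{
	"executeId": "20f7f5bc-fd21-4b72-9a93-3f7b45e9d2a1",
	"finished": true,
	"id": 1,
	"success": true,
	...
}
```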
dependabot[bot]
bd99e05ff0 build: bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-20 13:09:29 +00:00
hunshcn
6440052fbd s3: fix single file copying behavior with low permission - Fixes #8975 2025-11-18 17:01:07 +00:00
Nick Craig-Wood
4afb59bc93 docs: onedrive: note how to back up any user's data 2025-11-18 16:21:06 +00:00
Nick Craig-Wood
0343670375 Add Dominik Sander to contributors 2025-11-18 16:21:06 +00:00
Nick Craig-Wood
5b2b372ba9 Add jijamik to contributors 2025-11-18 16:21:06 +00:00
Dominik Sander
08c35ae741 box: allow to configure with config file contents
Especially when using rclone via rc it is helpful to configure the box
backend using the contents of the config file instead of having to
upload the file to the server that is running rclone.
2025-11-18 16:09:06 +00:00
Oleg Kunitsyn
ecea0cd6f9 http: add basic metadata and provide it via serve
Co-authored-by: dougal <147946567+roucc@users.noreply.github.com>
2025-11-17 16:52:30 +00:00
jijamik
80e6389a50 ftp: fix transfers from servers that return 250 ok messages 2025-11-14 21:01:25 +00:00
dougal
a3ccf4d8a0 b2: allow individual old versions to be deleted with --b2-versions - fixes #1626 2025-11-14 17:04:45 +00:00
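A hedged usage sketch; the version-suffixed filename below is invented, but rclone exposes old B2 versions with this kind of timestamp suffix when --b2-versions is in use:

```console
# Show all versions, then remove one specific old version
rclone ls --b2-versions b2:bucket/dir
rclone deletefile --b2-versions b2:bucket/dir/report-v2025-10-01-120000-000.csv
```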
Nick Craig-Wood
31df39d356 build: fix tls: failed to verify certificate: x509: negative serial number
Before Go 1.23, x509.ParseCertificate accepted certificates with
negative serial numbers. Rejecting these certificates caused a small
number of users to see this error.

From Go 1.23 debug flags can be added to go.mod so this change adds a
debug flag to ensure negative serial numbers are still allowed since
this is a spec violation, not a security issue.

See: https://forum.rclone.org/t/ssl-validation-broken-between-v1-69-1-latest-version/
2025-11-14 12:51:17 +00:00
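From Go 1.23 a module can pin GODEBUG settings in go.mod; a minimal sketch of the kind of directive the commit describes (the surrounding module and go lines are illustrative):

```text
module github.com/rclone/rclone

go 1.24.4

// Keep accepting certificates with negative serial numbers (a spec
// violation, not a security issue) so existing user certificates keep working.
godebug x509negativeserial=1
```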
Nick Craig-Wood
03d3811f7f Add Sean Turner to contributors 2025-11-14 12:51:17 +00:00
Sean Turner
83b83f7768 s3: add support for --upload-header If-Match and If-None-Match
The If-Match and If-None-Match headers were being dropped rather
than implemented in the Put Object request to S3. These headers
make requests conditional, which allows AWS S3 Bucket Policies to
prevent object overwriting.
2025-11-13 13:50:47 +00:00
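A hedged command-line example using rclone's --header-upload flag (whether the condition is enforced depends on the bucket's policy and provider support):

```console
# Only create the object if it does not already exist
rclone copyto ./report.csv s3:mybucket/reports/report.csv \
  --header-upload "If-None-Match: *"
```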
n4n5
71138082ea fix: comment typos 2025-11-13 13:47:40 +00:00
Nick Craig-Wood
cf94824426 dropbox: fix error moving just created objects - fixes #8881
The bisync tests have been failing as Dropbox is failing to move
just-created objects. This seems to be caused by an eventual consistency
problem, so this attempts to fix it by retrying the specific error.
2025-11-12 15:54:01 +00:00
hunshcn
16971ab6b9 s3: add --s3-use-data-integrity-protections to fix BadDigest error in Alibaba, Tencent
Since aws/aws-sdk-go-v2#2960, aws-sdk-go-v2 changed its default integrity
behavior. This breaks some S3 providers (e.g. Tencent, Alibaba).

https://github.com/aws/aws-sdk-go-v2/discussions/2960

This introduces the `use_data_integrity_protections` option to disable it.

It defaults to false, and is set to true for AWS.

Fixes #8432
Fixes #8483
2025-11-12 15:15:13 +00:00
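A minimal config sketch based on the option name in the commit message; the provider value is only an example and other required settings (credentials, endpoint) are omitted:

```ini
[mycos]
type = s3
provider = TencentCOS
# Keep the SDK's new integrity protections off for providers that reject them.
# Per the commit above this is already the default for non-AWS providers.
use_data_integrity_protections = false
```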
Nick Craig-Wood
9f75af38e3 rc: make sure fatal errors don't crash rclone - fixes #8955
Before this change, if any code called fs.Fatal(f) then it would stop
rclone as designed. However this is not appropriate when using the RC
API - we want the error returned to the user.

This change turns the fs.Fatal(f) call into a panic which is caught by
the RC API handler and returned to the user as a 500 error.
2025-11-12 12:22:04 +00:00
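A minimal sketch of the pattern described, not rclone's actual handler: the fatal path panics instead of exiting, and the API handler recovers the panic and returns a 500 to the caller.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// handleRC wraps an rc-style job so that a panic raised in place of a fatal
// exit becomes a 500 response rather than terminating the whole process.
func handleRC(run func() error) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		defer func() {
			if p := recover(); p != nil {
				http.Error(w, fmt.Sprintf("internal error: %v", p), http.StatusInternalServerError)
			}
		}()
		if err := run(); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
		}
	}
}

func main() {
	http.Handle("/rc/fatal", handleRC(func() error {
		panic("fatal error raised by backend code") // stands in for a fatal exit
	}))
	http.Handle("/rc/ok", handleRC(func() error { return errors.New("ordinary error") }))
	_ = http.ListenAndServe("127.0.0.1:5572", nil)
}
```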
Nick Craig-Wood
b5e4d39b05 pacer: factor call stack searching into its own package 2025-11-12 12:22:04 +00:00
Nick Craig-Wood
4d19afdbbf rc: add osVersion, osKernel and osArch to core/version
This makes it return the same info as `rclone version`
2025-11-12 11:16:48 +00:00
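Illustrative only - the field values below are invented and the full output is elided - but it shows the shape of the extra fields:

```console
$ rclone rc core/version
{
	"arch": "amd64",
	"os": "linux",
	"osArch": "amd64",
	"osKernel": "6.8.0 (x86_64)",
	"osVersion": "ubuntu 24.04 (64 bit)",
	"version": "v1.72.0",
	...
}
```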
Nick Craig-Wood
2ebfedce85 build: update all dependencies 2025-11-12 10:36:30 +00:00
dependabot[bot]
1a4b85b6e7 build(deps): bump golangci/golangci-lint-action from 8 to 9
Bumps [golangci/golangci-lint-action](https://github.com/golangci/golangci-lint-action) from 8 to 9.
- [Release notes](https://github.com/golangci/golangci-lint-action/releases)
- [Commits](https://github.com/golangci/golangci-lint-action/compare/v8...v9)

---
updated-dependencies:
- dependency-name: golangci/golangci-lint-action
  dependency-version: '9'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-11 17:10:10 +01:00
Nick Craig-Wood
5052b80298 webdav: fix out of memory with sharepoint-ntlm when uploading large file
Fixes #7469
Fixes #8959
See: https://forum.rclone.org/t/huge-memory-usage-10gb-when-upload-a-single-large-file-16gb-in-webdav/43312/
2025-11-10 16:57:18 +00:00
Nick Craig-Wood
fada870ff0 testserver: fix owncloud test server startup 2025-11-10 16:57:18 +00:00
Nick Craig-Wood
38f456c527 Add aliaj1 to contributors 2025-11-10 16:57:18 +00:00
aliaj1
e6d82ac6ee ulozto: Fix downloads returning HTML error page
The uloz.to backend was failing to download files, instead returning
an HTML page with a "Slow download" message. This was caused by
recent changes in the uloz.to API.

This commit fixes the issue by making the following changes to the
download process:

1.  The `hash` received from the download link API is now appended as a
    query parameter to the download URL.
2.  The download is now performed using the authenticated `rest` client
    to ensure premium access is recognized.
3.  The `DeviceID` is now generated dynamically for each download request
    to avoid potential rate-limiting of a static ID.
2025-11-10 15:56:06 +00:00
Nick Craig-Wood
4c74ded85a docs: adjust spectra logic example endpoint name 2025-11-10 13:47:33 +00:00
kapitainsky
43848f5c42 docs: update version introduced to v1.70 in doi docs
Fixes #8948
2025-11-08 21:33:38 +00:00
Nick Craig-Wood
fb895f69a1 testserver: fix HDFS server after run.bash adjustments 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
b204090325 testserver: remind developers about allocating a port 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
1821d86911 testserver: make run.bash variables less likely to collide with scripts 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
7ce67347fb testserver: fix seafile servers messing up _connect string 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
0228bbff39 testserver: make sure TestWebdavInfiniteScale uses an assigned port 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
6890bd7738 testserver: make sure we don't overwrite the NAME variable set
This fixes some oddities when stopping and starting servers
2025-11-05 17:56:28 +00:00
Nick Craig-Wood
bc5d1dfaf3 Add n4n5 to contributors 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
c33aeb705f Add Alex to contributors 2025-11-05 17:56:28 +00:00
Nick Craig-Wood
12cf8e71df Add Copilot to contributors 2025-11-05 17:56:28 +00:00
albertony
ec5ddb68a8 docs: update contributing docs regarding backend documentation 2025-11-05 14:06:09 +01:00
n4n5
8335596207 rc: add jobs stats 2025-11-05 12:36:39 +00:00
albertony
4f56ab2341 docs: fix alignment of some of the icons in the storage system dropdown 2025-11-04 23:00:46 +01:00
albertony
8b5b7ecfd9 docs: run markdownlint on _index.md 2025-11-04 23:00:46 +01:00
albertony
2aa2cfc70e docs: fix markdownlint issues and other styling improvements in backend command docs 2025-11-04 23:00:46 +01:00
albertony
7265b2331f docs: fix markdownlint issue md046/code-block-style in backend command docs 2025-11-04 23:00:46 +01:00
albertony
0dd56ff2a3 docs: fix missing punctuation in backend commands short description 2025-11-04 23:00:46 +01:00
albertony
2443cb284e docs: fix markdownlint issues in backend command generated output 2025-11-04 23:00:46 +01:00
albertony
0f3aa17fb6 build: improve backend docs autogenerated marker line
Replace custom rem hugo shortcode template with HTML comment. HTML comments are now
allowed in Hugo without enabling unsafe HTML parsing.

Improve the text in the comment: remove unnecessary quoting, and avoid the
impression that make backenddocs has to be run and its results committed,
since we have a lint check that will report an error - we want to prevent
manual changes in autogenerated sections.

Disable the markdownlint rule line-length on the autogenerated marker line.

Make the autogenerated marker detection a bit more robust.

See #8942 for more details.
2025-11-04 21:56:01 +01:00
Alex
8f74e7d331 backend/compress: add zstd compression
Added support for reading and writing zstd-compressed archives in seekable format
using "github.com/klauspost/compress/zstd" and
"github.com/SaveTheRbtz/zstd-seekable-format-go/pkg".

Bumped Go version from 1.24.0 to 1.24.4 due to requirements of
"github.com/SaveTheRbtz/zstd-seekable-format-go/pkg".
2025-11-04 14:50:56 +00:00
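A hedged config sketch; the option and value names here are assumptions based on the compress backend's existing mode setting and the commit above, so check `rclone config` for the exact names:

```ini
[zstremote]
type = compress
remote = s3:mybucket/archive
# gzip remains the default; "zstd" is the new seekable format added above
mode = zstd
```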
Copilot
ee92673e1b sftp: fix zombie SSH processes with --sftp-ssh - Fixes #8929
Before this fix using --sftp-ssh with the sftp backend could leave
zombie processes.

This patch fixes the problem that sshClientExternal.session was never
assigned, so Wait() always returned nil without waiting for the SSH
process to exit. This caused zombie processes because the process was
never reaped.

It also ensures that Wait() is only called once on each process.

I gave this issue to Copilot to fix as an experiment. It went off in
the wrong direction to start with and fixed something which wasn't the
problem but still needed fixing. With a bit of a nudge it fixed the
correct problem too.

Co-authored-by: Nick Craig-Wood <nick@craig-wood.com>
2025-11-04 12:09:47 +00:00
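An illustrative sketch, not rclone's code, of the two points in the commit: keep a handle on the spawned ssh process and make sure Wait is called exactly once so the child is always reaped.

```go
package main

import (
	"os/exec"
	"sync"
)

// externalSSH wraps an external ssh process used as a transport.
type externalSSH struct {
	cmd      *exec.Cmd
	waitOnce sync.Once
	waitErr  error
}

func startExternalSSH(args ...string) (*externalSSH, error) {
	s := &externalSSH{cmd: exec.Command("ssh", args...)}
	if err := s.cmd.Start(); err != nil {
		return nil, err
	}
	return s, nil
}

// Wait reaps the ssh process; calling it more than once is safe and always
// returns the result of the single real Wait.
func (s *externalSSH) Wait() error {
	s.waitOnce.Do(func() { s.waitErr = s.cmd.Wait() })
	return s.waitErr
}

func main() {
	s, err := startExternalSSH("-V")
	if err == nil {
		_ = s.Wait()
		_ = s.Wait() // second call is a no-op, no zombie left behind
	}
}
```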
Nick Craig-Wood
55655efabf testserver: fix tests failing due to stopped servers
Before this fix there were various issues with the test server
framework, most noticeably servers stopping when they shouldn't
causing timeouts. This was caused by the reference counting in the Go
code not being engineered to work in multiple processes so it was not
working at all properly.

This fix moves the reference counting logic to the start scripts and
in turn removes that logic from the Go code. This means that the
reference counting is now global and works correctly over multiple
processes.
2025-11-04 11:45:15 +00:00
dougal
700e6e11fd docs: add new integration tester site link 2025-11-03 17:15:53 +00:00
Nick Craig-Wood
edb47076b5 docs: update the method for running integration tests 2025-11-03 16:52:33 +00:00
Nick Craig-Wood
e5fd97b8d2 bisync: fix failing tests
In this commit

d240d044c3 check: improved reporting of differences in sizes and contents

We adjusted the sense of operations.CheckIdenticalDownload to return
true if files are identical, as implied by the name, but we forgot
to invert the logic in the bisync DownloadCheckFn, which caused lots of
tests to fail.
2025-11-03 16:52:33 +00:00
Nick Craig-Wood
bc57a31859 Add SublimePeace to contributors 2025-11-03 16:52:33 +00:00
dougal
4adb48fbbc b2: fix "expected a FileSseMode but found: ''"
94deb6bd6f b2: Add Server-Side encryption support

As of the commit above, without setting SSE, rclone would send invalid
SSE requests with empty strings. This is because omitempty only works with
struct pointers, not structs.
2025-11-03 16:42:40 +00:00
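A small self-contained demonstration of the encoding/json behaviour the fix relies on: omitempty drops a nil struct pointer but never an empty struct value.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type sse struct {
	Mode string `json:"mode"`
}

type byValue struct {
	SSE sse `json:"serverSideEncryption,omitempty"`
}

type byPointer struct {
	SSE *sse `json:"serverSideEncryption,omitempty"`
}

func main() {
	v, _ := json.Marshal(byValue{})
	p, _ := json.Marshal(byPointer{})
	fmt.Println(string(v)) // {"serverSideEncryption":{"mode":""}} - invalid empty SSE sent
	fmt.Println(string(p)) // {} - field omitted as intended
}
```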
SublimePeace
c41d0f7d3a docs: s3: clarify multipart uploads memory usage
Clarified phrasing to avoid confusion. Fixed a typo.

Fixes #8525
2025-11-03 16:35:33 +00:00
Nick Craig-Wood
d34ba258b0 test_all: fix detection of running servers
Before this change stopping servers was unreliable, especially the
non-docker-based ones. This caused timeouts and connection errors in the
tests.
2025-11-03 14:44:39 +00:00
Nick Craig-Wood
05d54a95b8 accounting: add AccountReadN for use in cluster 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
f16b39165b fs: add NonDefaultRC for discovering options in use
This enables us to send rc messages with the config in use.
2025-11-03 14:44:39 +00:00
Nick Craig-Wood
86edb26fd5 fs: move tests into correct files 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
203e1bdbf9 rc: add NewJobFromBytes for reading jobs from non HTTP transactions 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
a522c056fe rc: add job/batch for sending batches of rc commands to run concurrently 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
31adc7d89f Add Ted Robertson to contributors 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
c559ab7c58 Add Joseph Brownlee to contributors 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
80610ef774 Add fries1234 to contributors 2025-11-03 14:44:39 +00:00
Nick Craig-Wood
a6c943a1ad Add Fawzib Rojas to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
53e0dbb5cb Add Riaz Arbi to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
3a0000526b Add Lukas Krejci to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
1fa6941e26 Add Adam Dinwoodie to contributors 2025-11-03 14:43:56 +00:00
Nick Craig-Wood
9bb7ad31e6 Add dulanting to contributors 2025-11-03 14:43:56 +00:00
Ted Robertson
da8c6847ad docs: add AppArmor restrictions to rclone mount 2025-11-01 19:28:14 +00:00
albertony
d240d044c3 check: improved reporting of differences in sizes and contents
fixes rclone check --download not showing differing files
2025-11-01 19:23:01 +00:00
iTrooz
1056ace80f mega: implement 2FA login 2025-11-01 19:03:49 +00:00
albertony
a06c1c0cb7 docs: change to light code block style to better match overall theme 2025-11-01 18:55:11 +01:00
albertony
7672c3d586 docs: fix various markdownlint issues 2025-11-01 18:54:19 +01:00
albertony
f361cdf1cb build: restrict the markdown languages to use for code blocks 2025-11-01 15:52:41 +01:00
albertony
26d3c71bab docs: fix various markdownlint issues 2025-11-01 15:33:38 +01:00
albertony
c76396f03c docs: fix markdownlint issue md013/line-length 2025-11-01 15:33:38 +01:00
albertony
059ad47336 docs: change syntax highlighting for command examples from sh to console 2025-11-01 15:33:38 +01:00
Joseph Brownlee
becc068d36 docs: Clarify remote naming convention
Co-authored-by: dougal <147946567+roucc@users.noreply.github.com>
Co-authored-by: dougal <dougal.craigwood@gmail.com>
2025-10-31 15:42:38 +00:00
fries1234
94deb6bd6f b2: Add Server-Side encryption support
This commit adds SSE-C (Server-Side Encryption - Customer) support to
the B2 native backend. The server uses a customer-provided AES-256 key
to encrypt the files when you upload them to the bucket, and then it
discards your key from the server's RAM after you're done uploading.

The option names and descriptions are based on the S3 backend
implementation, as the way S3 and B2 do SSE-C is pretty similar.

Fixes #6585
2025-10-31 15:33:31 +00:00
316 changed files with 50918 additions and 20489 deletions

View File

@@ -95,7 +95,7 @@ jobs:
  steps:
  - name: Checkout
- uses: actions/checkout@v5
+ uses: actions/checkout@v6
  with:
  fetch-depth: 0
@@ -216,7 +216,7 @@ jobs:
  echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
  - name: Checkout
- uses: actions/checkout@v5
+ uses: actions/checkout@v6
  with:
  fetch-depth: 0
@@ -239,13 +239,13 @@ jobs:
  restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
  - name: Code quality test (Linux)
- uses: golangci/golangci-lint-action@v8
+ uses: golangci/golangci-lint-action@v9
  with:
  version: latest
  skip-cache: true
  - name: Code quality test (Windows)
- uses: golangci/golangci-lint-action@v8
+ uses: golangci/golangci-lint-action@v9
  env:
  GOOS: "windows"
  with:
@@ -253,7 +253,7 @@ jobs:
  skip-cache: true
  - name: Code quality test (macOS)
- uses: golangci/golangci-lint-action@v8
+ uses: golangci/golangci-lint-action@v9
  env:
  GOOS: "darwin"
  with:
@@ -261,7 +261,7 @@ jobs:
  skip-cache: true
  - name: Code quality test (FreeBSD)
- uses: golangci/golangci-lint-action@v8
+ uses: golangci/golangci-lint-action@v9
  env:
  GOOS: "freebsd"
  with:
@@ -269,7 +269,7 @@ jobs:
  skip-cache: true
  - name: Code quality test (OpenBSD)
- uses: golangci/golangci-lint-action@v8
+ uses: golangci/golangci-lint-action@v9
  env:
  GOOS: "openbsd"
  with:
@@ -291,7 +291,9 @@ jobs:
  README.md
  RELEASE.md
  CODE_OF_CONDUCT.md
- docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
+ librclone\README.md
+ backend\s3\README.md
+ docs/content/{_index,authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
  - name: Scan edits of autogenerated files
  run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
@@ -305,7 +307,7 @@ jobs:
  steps:
  - name: Checkout
- uses: actions/checkout@v5
+ uses: actions/checkout@v6
  with:
  fetch-depth: 0

View File

@@ -52,7 +52,7 @@ jobs:
  df -h .
  - name: Checkout Repository
- uses: actions/checkout@v5
+ uses: actions/checkout@v6
  with:
  fetch-depth: 0

View File

@@ -30,7 +30,7 @@ jobs:
  sudo rm -rf /usr/share/dotnet || true
  df -h .
  - name: Checkout master
- uses: actions/checkout@v5
+ uses: actions/checkout@v6
  with:
  fetch-depth: 0
  - name: Build and publish docker plugin

View File

@@ -41,3 +41,32 @@ single-title: # MD025
  # Markdown files we must use whatever works in the final HTML generated docs.
  # Suppress Markdownlint warning: Link fragments should be valid.
  link-fragments: false # MD051
+ # Restrict the languages and language identifiers to use for code blocks.
+ # We only want those supported by both Hugo and GitHub. These are documented
+ # here:
+ # https://gohugo.io/content-management/syntax-highlighting/#languages
+ # https://docs.github.com//get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks#syntax-highlighting
+ # In addition, we only want to allow identifiers (aliases) that correspond to
+ # the same language in Hugo and GitHub, and preferrably also VSCode and other
+ # commonly used tools, to avoid confusion. An example of this is that "shell"
+ # by some are considered an identifier for shell scripts, i.e. an alias for
+ # "sh", while others consider it an identifier for shell sessions, i.e. an
+ # alias for "console". Although Hugo and GitHub in this case are consistent and
+ # have choosen the former, using "sh" instead, and not allowing use of "shell",
+ # avoids the confusion entirely.
+ fenced-code-language: # MD040
+   allowed_languages:
+     - text
+     - console
+     - sh
+     - bat
+     - ini
+     - json
+     - yaml
+     - go
+     - python
+     - c++
+     - c#
+     - java
+     - powershell

View File

@@ -38,7 +38,7 @@ and [email](https://docs.github.com/en/github/setting-up-and-managing-your-githu
Next open your terminal, change directory to your preferred folder and initialise Next open your terminal, change directory to your preferred folder and initialise
your local rclone project: your local rclone project:
```sh ```console
git clone https://github.com/rclone/rclone.git git clone https://github.com/rclone/rclone.git
cd rclone cd rclone
git remote rename origin upstream git remote rename origin upstream
@@ -53,13 +53,13 @@ executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation: Now [install Go](https://golang.org/doc/install) and verify your installation:
```sh ```console
go version go version
``` ```
Great, you can now compile and execute your own version of rclone: Great, you can now compile and execute your own version of rclone:
```sh ```console
go build go build
./rclone version ./rclone version
``` ```
@@ -68,7 +68,7 @@ go build
more accurate version number in the executable as well as enable you to specify more accurate version number in the executable as well as enable you to specify
more build options.) Finally make a branch to add your new feature more build options.) Finally make a branch to add your new feature
```sh ```console
git checkout -b my-new-feature git checkout -b my-new-feature
``` ```
@@ -80,7 +80,7 @@ and a quick view on the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the When ready - test the affected functionality and run the unit tests for the
code you changed code you changed
```sh ```console
cd folder/with/changed/files cd folder/with/changed/files
go test -v go test -v
``` ```
@@ -99,7 +99,7 @@ Make sure you
When you are done with that push your changes to GitHub: When you are done with that push your changes to GitHub:
```sh ```console
git push -u origin my-new-feature git push -u origin my-new-feature
``` ```
@@ -119,7 +119,7 @@ or [squash your commits](#squashing-your-commits).
Follow the guideline for [commit messages](#commit-messages) and then: Follow the guideline for [commit messages](#commit-messages) and then:
```sh ```console
git checkout my-new-feature # To switch to your branch git checkout my-new-feature # To switch to your branch
git status # To see the new and changed files git status # To see the new and changed files
git add FILENAME # To select FILENAME for the commit git add FILENAME # To select FILENAME for the commit
@@ -130,7 +130,7 @@ git log # To verify the commit. Use q to quit the log
You can modify the message or changes in the latest commit using: You can modify the message or changes in the latest commit using:
```sh ```console
git commit --amend git commit --amend
``` ```
@@ -145,7 +145,7 @@ pushed to GitHub.
Your previously pushed commits are replaced by: Your previously pushed commits are replaced by:
```sh ```console
git push --force origin my-new-feature git push --force origin my-new-feature
``` ```
@@ -154,7 +154,7 @@ git push --force origin my-new-feature
To base your changes on the latest version of the To base your changes on the latest version of the
[rclone master](https://github.com/rclone/rclone/tree/master) (upstream): [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
```sh ```console
git checkout master git checkout master
git fetch upstream git fetch upstream
git merge --ff-only git merge --ff-only
@@ -170,7 +170,7 @@ If you rebase commits that have been pushed to GitHub, then you will have to
To combine your commits into one commit: To combine your commits into one commit:
```sh ```console
git log # To count the commits to squash, e.g. the last 2 git log # To count the commits to squash, e.g. the last 2
git reset --soft HEAD~2 # To undo the 2 latest commits git reset --soft HEAD~2 # To undo the 2 latest commits
git status # To check everything is as expected git status # To check everything is as expected
@@ -178,13 +178,13 @@ git status # To check everything is as expected
If everything is fine, then make the new combined commit: If everything is fine, then make the new combined commit:
```sh ```console
git commit # To commit the undone commits as one git commit # To commit the undone commits as one
``` ```
otherwise, you may roll back using: otherwise, you may roll back using:
```sh ```console
git reflog # To check that HEAD{1} is your previous state git reflog # To check that HEAD{1} is your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state git reset --soft 'HEAD@{1}' # To roll back to your previous state
``` ```
@@ -219,13 +219,13 @@ to check an error return).
rclone's tests are run from the go testing framework, so at the top rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests. level you can run this to run all the tests.
```sh ```console
go test -v ./... go test -v ./...
``` ```
You can also use `make`, if supported by your platform You can also use `make`, if supported by your platform
```sh ```console
make quicktest make quicktest
``` ```
@@ -246,7 +246,7 @@ need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined. are skipped if `TestDrive:` isn't defined.
```sh ```console
cd backend/drive cd backend/drive
go test -v go test -v
``` ```
@@ -255,7 +255,7 @@ You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system, operations. Normally these get run against the local file system,
but they can be run against any of the remotes. but they can be run against any of the remotes.
```sh ```console
cd fs/sync cd fs/sync
go test -v -remote TestDrive: go test -v -remote TestDrive:
go test -v -remote TestDrive: -fast-list go test -v -remote TestDrive: -fast-list
@@ -268,9 +268,8 @@ If you want to use the integration test framework to run these tests
altogether with an HTML report and test retries then from the altogether with an HTML report and test retries then from the
project root: project root:
```sh ```console
go install github.com/rclone/rclone/fstest/test_all go run ./fstest/test_all -backends drive
test_all -backends drive
``` ```
### Full integration testing ### Full integration testing
@@ -278,19 +277,19 @@ test_all -backends drive
If you want to run all the integration tests against all the remotes, If you want to run all the integration tests against all the remotes,
then change into the project root and run then change into the project root and run
```sh ```console
make check make check
make test make test
``` ```
The commands may require some extra go packages which you can install with The commands may require some extra go packages which you can install with
```sh ```console
make build_dep make build_dep
``` ```
The full integration tests are run daily on the integration test server. You can The full integration tests are run daily on the integration test server. You can
find the results at <https://pub.rclone.org/integration-tests/> find the results at <https://integration.rclone.org>
## Code Organisation ## Code Organisation
@@ -349,11 +348,13 @@ If you are adding a new feature then please update the documentation.
The documentation sources are generally in Markdown format, in conformance The documentation sources are generally in Markdown format, in conformance
with the CommonMark specification and compatible with GitHub Flavored with the CommonMark specification and compatible with GitHub Flavored
Markdown (GFM). The markdown format is checked as part of the lint operation Markdown (GFM). The markdown format and style is checked as part of the lint
that runs automatically on pull requests, to enforce standards and consistency. operation that runs automatically on pull requests, to enforce standards and
This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint) consistency. This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint)
tool, which can also be integrated into editors so you can perform the same tool by David Anson, which can also be integrated into editors so you can
checks while writing. perform the same checks while writing. It generally follows Ciro Santilli's
[Markdown Style Guide](https://cirosantilli.com/markdown-style-guide), which
is good source if you want to know more.
HTML pages, served as website <rclone.org>, are generated from the Markdown, HTML pages, served as website <rclone.org>, are generated from the Markdown,
using [Hugo](https://gohugo.io). Note that when generating the HTML pages, using [Hugo](https://gohugo.io). Note that when generating the HTML pages,
@@ -382,7 +383,7 @@ If you add a new general flag (not for a backend), then document it in
alphabetical order. alphabetical order.
If you add a new backend option/flag, then it should be documented in If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field. the source file in the `Help:` field:
- Start with the most important information about the option, - Start with the most important information about the option,
as a single sentence on a single line. as a single sentence on a single line.
@@ -404,6 +405,30 @@ the source file in the `Help:` field.
as an unordered list, therefore a single line break is enough to as an unordered list, therefore a single line break is enough to
create a new list item. Also, for enumeration texts like name of create a new list item. Also, for enumeration texts like name of
countries, it looks better without an ending period/full stop character. countries, it looks better without an ending period/full stop character.
- You can run `make backenddocs` to verify the resulting Markdown.
- This will update the autogenerated sections of the backend docs Markdown
files under `docs/content`.
- It requires you to have [Python](https://www.python.org) installed.
- The `backenddocs` make target runs the Python script `bin/make_backend_docs.py`,
and you can also run this directly, optionally with the name of a backend
as argument to only update the docs for a specific backend.
- **Do not** commit the updated Markdown files. This operation is run as part of
the release process. Since any manual changes in the autogenerated sections
of the Markdown files will then be lost, we have a pull request check that
reports error for any changes within the autogenerated sections. Should you
have done manual changes outside of the autogenerated sections they must be
committed, of course.
- You can run `make serve` to verify the resulting website.
- This will build the website and serve it locally, so you can open it in
your web browser and verify that the end result looks OK. Check specifically
any added links, also in light of the note above regarding different algorithms
for generated header anchors.
- It requires you to have the [Hugo](https://gohugo.io) tool available.
- The `serve` make target depends on the `website` target, which runs the
`hugo` command from the `docs` directory to build the website, and then
it serves the website locally with an embedded web server using a command
`hugo server --logLevel info -w --disableFastRender --ignoreCache`, so you
can run similar Hugo commands directly as well.
When writing documentation for an entirely new backend, When writing documentation for an entirely new backend,
see [backend documentation](#backend-documentation). see [backend documentation](#backend-documentation).
@@ -420,6 +445,11 @@ for small changes in the docs which makes it very easy. Just remember the
caveat when linking to header anchors, noted above, which means that GitHub's caveat when linking to header anchors, noted above, which means that GitHub's
Markdown preview may not be an entirely reliable verification of the results. Markdown preview may not be an entirely reliable verification of the results.
After your changes have been merged, you can verify them on
[tip.rclone.org](https://tip.rclone.org). This site is updated daily with the
current state of the master branch at 07:00 UTC. The changes will be on the main
[rclone.org](https://rclone.org) site once they have been included in a release.
## Making a release ## Making a release
There are separate instructions for making a release in the RELEASE.md There are separate instructions for making a release in the RELEASE.md
@@ -478,7 +508,7 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`. `go.mod` and `go.sum`.
```sh ```console
go get github.com/ncw/new_dependency go get github.com/ncw/new_dependency
``` ```
@@ -492,7 +522,7 @@ and `go.sum` in the same commit as your other changes.
If you need to update a dependency then run If you need to update a dependency then run
```sh ```console
go get golang.org/x/crypto go get golang.org/x/crypto
``` ```
@@ -581,8 +611,7 @@ remote or an fs.
- Add your backend to `fstest/test_all/config.yaml` - Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from - Once you've done that then you can use the integration test framework from
the project root: the project root:
- go install ./... - `go run ./fstest/test_all -backends remote`
- test_all -backends remote
Or if you want to run the integration tests manually: Or if you want to run the integration tests manually:

MANUAL.html (generated, 15185 changed lines)
File diff suppressed because it is too large

MANUAL.md (generated, 17750 changed lines)
File diff suppressed because it is too large

MANUAL.txt (generated, 7896 changed lines)
File diff suppressed because it is too large

View File

@@ -21,6 +21,7 @@ This file describes how to make the various kinds of releases
- make doc - make doc
- git status - to check for new man pages - git add them - git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0" - git commit -a -v -m "Version v1.XX.0"
- make check
- make retag - make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails - git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin - git push --follow-tags origin
@@ -60,7 +61,7 @@ If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date. often using a version of Go that is a few versions out of date.
```sh ```console
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades) go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.22 -compat=1.22 go mod tidy -go=1.22 -compat=1.22
@@ -70,7 +71,7 @@ If the `go mod tidy` fails use the output from it to remove the
package which can't be upgraded from `/tmp/potential-upgrades` when package which can't be upgraded from `/tmp/potential-upgrades` when
done done
```sh ```console
git co go.mod go.sum git co go.mod go.sum
``` ```
@@ -102,7 +103,7 @@ The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be However this tool can show which major versions might need to be
upgraded: upgraded:
```sh ```console
go run github.com/icholy/gomajor@latest list -major go run github.com/icholy/gomajor@latest list -major
``` ```
@@ -112,7 +113,7 @@ Expect API breakage when updating major versions.
At some point after the release run At some point after the release run
```sh ```console
bin/tidy-beta v1.55 bin/tidy-beta v1.55
``` ```
@@ -159,7 +160,7 @@ which is a private repo containing artwork from sponsors.
Create an update website branch based off the last release Create an update website branch based off the last release
```sh ```console
git co -b update-website git co -b update-website
``` ```
@@ -167,19 +168,19 @@ If the branch already exists, double check there are no commits that need saving
Now reset the branch to the last release Now reset the branch to the last release
```sh ```console
git reset --hard v1.64.0 git reset --hard v1.64.0
``` ```
Create the changes, check them in, test with `make serve` then Create the changes, check them in, test with `make serve` then
```sh ```console
make upload_test_website make upload_test_website
``` ```
Check out <https://test.rclone.org> and when happy Check out <https://test.rclone.org> and when happy
```sh ```console
make upload_website make upload_website
``` ```
@@ -189,14 +190,14 @@ Cherry pick any changes back to master and the stable branch if it is active.
To do a basic build of rclone's docker image to debug builds locally: To do a basic build of rclone's docker image to debug builds locally:
```sh ```console
docker buildx build --load -t rclone/rclone:testing --progress=plain . docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version docker run --rm rclone/rclone:testing version
``` ```
To test the multipatform build To test the multipatform build
```sh ```console
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 . docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
``` ```
@@ -204,6 +205,6 @@ To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all. Note that you can't only build one architecture - you need to build them all.
```sh ```console
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push . docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
``` ```

View File

@@ -1 +1 @@
- v1.72.0
+ v1.73.0

View File

@@ -48,6 +48,14 @@ type LifecycleRule struct {
FileNamePrefix string `json:"fileNamePrefix"` FileNamePrefix string `json:"fileNamePrefix"`
} }
// ServerSideEncryption is a configuration object for B2 Server-Side Encryption
type ServerSideEncryption struct {
Mode string `json:"mode"`
Algorithm string `json:"algorithm"` // Encryption algorithm to use
CustomerKey string `json:"customerKey"` // User provided Base64 encoded key that is used by the server to encrypt files
CustomerKeyMd5 string `json:"customerKeyMd5"` // An MD5 hash of the decoded key
}
// Timestamp is a UTC time when this file was uploaded. It is a base // Timestamp is a UTC time when this file was uploaded. It is a base
// 10 number of milliseconds since midnight, January 1, 1970 UTC. This // 10 number of milliseconds since midnight, January 1, 1970 UTC. This
// fits in a 64 bit integer such as the type "long" in the programming // fits in a 64 bit integer such as the type "long" in the programming
@@ -125,23 +133,32 @@ type File struct {
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file. Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
} }
// AuthorizeAccountResponse is as returned from the b2_authorize_account call // StorageAPI is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct { type StorageAPI struct {
AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file. AbsoluteMinimumPartSize int `json:"absoluteMinimumPartSize"` // The smallest possible size of a part of a large file.
AccountID string `json:"accountId"` // The identifier for the account.
Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it. Allowed struct { // An object (see below) containing the capabilities of this auth token, and any restrictions on using it.
BucketID string `json:"bucketId"` // When present, access is restricted to one bucket. Buckets []struct { // When present, access is restricted to one or more buckets.
BucketName string `json:"bucketName"` // When present, name of bucket - may be empty ID string `json:"id"` // ID of bucket
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has. Name string `json:"name"` // When present, name of bucket - may be empty
} `json:"buckets"`
Capabilities []string `json:"capabilities"` // A list of strings, each one naming a capability the key has for every bucket.
NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix NamePrefix any `json:"namePrefix"` // When present, access is restricted to files whose names start with the prefix
} `json:"allowed"` } `json:"allowed"`
APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files. APIURL string `json:"apiUrl"` // The base URL to use for all API calls except for uploading and downloading files.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files. DownloadURL string `json:"downloadUrl"` // The base URL to use for downloading files.
MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead. MinimumPartSize int `json:"minimumPartSize"` // DEPRECATED: This field will always have the same value as recommendedPartSize. Use recommendedPartSize instead.
RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance. RecommendedPartSize int `json:"recommendedPartSize"` // The recommended size for each part of a large file. We recommend using this part size for optimal upload performance.
} }
// AuthorizeAccountResponse is as returned from the b2_authorize_account call
type AuthorizeAccountResponse struct {
AccountID string `json:"accountId"` // The identifier for the account.
AuthorizationToken string `json:"authorizationToken"` // An authorization token to use with all calls, other than b2_authorize_account, that need an Authorization header.
APIs struct { // Supported APIs for this account / key. These are API-dependent JSON objects.
Storage StorageAPI `json:"storageApi"`
} `json:"apiInfo"`
}
// ListBucketsRequest is parameters for b2_list_buckets call // ListBucketsRequest is parameters for b2_list_buckets call
type ListBucketsRequest struct { type ListBucketsRequest struct {
AccountID string `json:"accountId"` // The identifier for the account. AccountID string `json:"accountId"` // The identifier for the account.
@@ -261,21 +278,22 @@ type GetFileInfoRequest struct {
// //
// Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" } // Example: { "src_last_modified_millis" : "1452802803026", "large_file_sha1" : "a3195dc1e7b46a2ff5da4b3c179175b75671e80d", "color": "blue" }
type StartLargeFileRequest struct { type StartLargeFileRequest struct {
BucketID string `json:"bucketId"` //The ID of the bucket that the file will go in. BucketID string `json:"bucketId"` // The ID of the bucket that the file will go in.
Name string `json:"fileName"` // The name of the file. See Files for requirements on file names. Name string `json:"fileName"` // The name of the file. See Files for requirements on file names.
ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream. ContentType string `json:"contentType"` // The MIME type of the content of the file, which will be returned in the Content-Type header when downloading the file. Use the Content-Type b2/x-auto to automatically set the stored Content-Type post upload. In the case where a file extension is absent or the lookup fails, the Content-Type is set to application/octet-stream.
Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info. Info map[string]string `json:"fileInfo"` // A JSON object holding the name/value pairs for the custom file info.
ServerSideEncryption *ServerSideEncryption `json:"serverSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption
} }
// StartLargeFileResponse is the response to StartLargeFileRequest // StartLargeFileResponse is the response to StartLargeFileRequest
type StartLargeFileResponse struct { type StartLargeFileResponse struct {
ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version. ID string `json:"fileId"` // The unique identifier for this version of this file. Used with b2_get_file_info, b2_download_file_by_id, and b2_delete_file_version.
Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name. Name string `json:"fileName"` // The name of this file, which can be used with b2_download_file_by_name.
AccountID string `json:"accountId"` // The identifier for the account. AccountID string `json:"accountId"` // The identifier for the account.
BucketID string `json:"bucketId"` // The unique ID of the bucket. BucketID string `json:"bucketId"` // The unique ID of the bucket.
ContentType string `json:"contentType"` // The MIME type of the file. ContentType string `json:"contentType"` // The MIME type of the file.
Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file. Info map[string]string `json:"fileInfo"` // The custom information that was uploaded with the file. This is a JSON object, holding the name/value pairs that were uploaded with the file.
UploadTimestamp Timestamp `json:"uploadTimestamp"` // This is a UTC time when this file was uploaded. UploadTimestamp Timestamp `json:"uploadTimestamp,omitempty"` // This is a UTC time when this file was uploaded.
} }
// GetUploadPartURLRequest is passed to b2_get_upload_part_url // GetUploadPartURLRequest is passed to b2_get_upload_part_url
@@ -325,21 +343,25 @@ type CancelLargeFileResponse struct {
// CopyFileRequest is as passed to b2_copy_file // CopyFileRequest is as passed to b2_copy_file
type CopyFileRequest struct { type CopyFileRequest struct {
SourceID string `json:"sourceFileId"` // The ID of the source file being copied. SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
Name string `json:"fileName"` // The name of the new file being created. Name string `json:"fileName"` // The name of the new file being created.
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied. Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
MetadataDirective string `json:"metadataDirective,omitempty"` // The strategy for how to populate metadata for the new file: COPY or REPLACE MetadataDirective string `json:"metadataDirective,omitempty"` // The strategy for how to populate metadata for the new file: COPY or REPLACE
ContentType string `json:"contentType,omitempty"` // The MIME type of the content of the file (REPLACE only) ContentType string `json:"contentType,omitempty"` // The MIME type of the content of the file (REPLACE only)
Info map[string]string `json:"fileInfo,omitempty"` // This field stores the metadata that will be stored with the file. (REPLACE only) Info map[string]string `json:"fileInfo,omitempty"` // This field stores the metadata that will be stored with the file. (REPLACE only)
DestBucketID string `json:"destinationBucketId,omitempty"` // The destination ID of the bucket if set, if not the source bucket will be used DestBucketID string `json:"destinationBucketId,omitempty"` // The destination ID of the bucket if set, if not the source bucket will be used
SourceServerSideEncryption *ServerSideEncryption `json:"sourceServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the source file
DestinationServerSideEncryption *ServerSideEncryption `json:"destinationServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the destination file
} }
// CopyPartRequest is the request for b2_copy_part - the response is UploadPartResponse // CopyPartRequest is the request for b2_copy_part - the response is UploadPartResponse
type CopyPartRequest struct { type CopyPartRequest struct {
SourceID string `json:"sourceFileId"` // The ID of the source file being copied. SourceID string `json:"sourceFileId"` // The ID of the source file being copied.
LargeFileID string `json:"largeFileId"` // The ID of the large file the part will belong to, as returned by b2_start_large_file. LargeFileID string `json:"largeFileId"` // The ID of the large file the part will belong to, as returned by b2_start_large_file.
PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1) PartNumber int64 `json:"partNumber"` // Which part this is (starting from 1)
Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied. Range string `json:"range,omitempty"` // The range of bytes to copy. If not provided, the whole source file will be copied.
SourceServerSideEncryption *ServerSideEncryption `json:"sourceServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the source file
DestinationServerSideEncryption *ServerSideEncryption `json:"destinationServerSideEncryption,omitempty"` // A JSON object holding values related to Server-Side Encryption for the destination file
} }
// UpdateBucketRequest describes a request to modify a B2 bucket // UpdateBucketRequest describes a request to modify a B2 bucket

View File

@@ -8,7 +8,9 @@ import (
"bufio" "bufio"
"bytes" "bytes"
"context" "context"
"crypto/md5"
"crypto/sha1" "crypto/sha1"
"encoding/base64"
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
@@ -53,6 +55,9 @@ const (
nameHeader = "X-Bz-File-Name" nameHeader = "X-Bz-File-Name"
timestampHeader = "X-Bz-Upload-Timestamp" timestampHeader = "X-Bz-Upload-Timestamp"
retryAfterHeader = "Retry-After" retryAfterHeader = "Retry-After"
sseAlgorithmHeader = "X-Bz-Server-Side-Encryption-Customer-Algorithm"
sseKeyHeader = "X-Bz-Server-Side-Encryption-Customer-Key"
sseMd5Header = "X-Bz-Server-Side-Encryption-Customer-Key-Md5"
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
maxSleep = 5 * time.Minute maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential decayConstant = 1 // bigger for slower decay, exponential
@@ -67,7 +72,7 @@ const (
// Globals // Globals
var ( var (
errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode") errNotWithVersions = errors.New("can't modify files in --b2-versions mode")
errNotWithVersionAt = errors.New("can't modify or delete files in --b2-version-at mode") errNotWithVersionAt = errors.New("can't modify or delete files in --b2-version-at mode")
) )
@@ -252,6 +257,51 @@ See: [rclone backend lifecycle](#lifecycle) for setting lifecycles after bucket
Default: (encoder.Display | Default: (encoder.Display |
encoder.EncodeBackSlash | encoder.EncodeBackSlash |
encoder.EncodeInvalidUtf8), encoder.EncodeInvalidUtf8),
}, {
Name: "sse_customer_algorithm",
Help: "If using SSE-C, the server-side encryption algorithm used when storing this object in B2.",
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}, {
Value: "AES256",
Help: "Advanced Encryption Standard (256 bits key length)",
}},
}, {
Name: "sse_customer_key",
Help: `To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data
Alternatively you can provide --sse-customer-key-base64.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
Sensitive: true,
}, {
Name: "sse_customer_key_base64",
Help: `To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data
Alternatively you can provide --sse-customer-key.`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
Sensitive: true,
}, {
Name: "sse_customer_key_md5",
Help: `If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
`,
Advanced: true,
Examples: []fs.OptionExample{{
Value: "",
Help: "None",
}},
Sensitive: true,
}}, }},
}) })
} }
@@ -274,6 +324,10 @@ type Options struct {
	DownloadAuthorizationDuration fs.Duration          `config:"download_auth_duration"`
	Lifecycle                     int                  `config:"lifecycle"`
	Enc                           encoder.MultiEncoder `config:"encoding"`
SSECustomerAlgorithm string `config:"sse_customer_algorithm"`
SSECustomerKey string `config:"sse_customer_key"`
SSECustomerKeyBase64 string `config:"sse_customer_key_base64"`
SSECustomerKeyMD5 string `config:"sse_customer_key_md5"`
}

// Fs represents a remote b2 server
@@ -504,6 +558,24 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
	if opt.Endpoint == "" {
		opt.Endpoint = defaultEndpoint
	}
if opt.SSECustomerKey != "" && opt.SSECustomerKeyBase64 != "" {
return nil, errors.New("b2: can't use both sse_customer_key and sse_customer_key_base64 at the same time")
} else if opt.SSECustomerKeyBase64 != "" {
// Decode the Base64-encoded key and store it in the SSECustomerKey field
decoded, err := base64.StdEncoding.DecodeString(opt.SSECustomerKeyBase64)
if err != nil {
return nil, fmt.Errorf("b2: Could not decode sse_customer_key_base64: %w", err)
}
opt.SSECustomerKey = string(decoded)
} else {
// Encode the raw key as Base64
opt.SSECustomerKeyBase64 = base64.StdEncoding.EncodeToString([]byte(opt.SSECustomerKey))
}
if opt.SSECustomerKey != "" && opt.SSECustomerKeyMD5 == "" {
// Calculate CustomerKeyMd5 if not supplied
md5sumBinary := md5.Sum([]byte(opt.SSECustomerKey))
opt.SSECustomerKeyMD5 = base64.StdEncoding.EncodeToString(md5sumBinary[:])
}
	ci := fs.GetConfig(ctx)
	f := &Fs{
		name: name,
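
For illustration, the key handling above always leaves the backend with both a Base64 form of the raw key and a Base64-encoded MD5 of the raw key, which is what it later sends in the SSE-C headers. A minimal standalone sketch of that normalization, assuming a raw UTF-8 key (the helper name is illustrative, not part of the backend):

```go
package main

import (
	"crypto/md5"
	"encoding/base64"
	"fmt"
)

// normalizeSSEKey mirrors the NewFs logic above: given a raw key it
// returns the Base64-encoded key and the Base64-encoded MD5 of the raw
// key.
func normalizeSSEKey(rawKey string) (keyBase64, keyMD5 string) {
	keyBase64 = base64.StdEncoding.EncodeToString([]byte(rawKey))
	sum := md5.Sum([]byte(rawKey))
	keyMD5 = base64.StdEncoding.EncodeToString(sum[:])
	return keyBase64, keyMD5
}

func main() {
	k, m := normalizeSSEKey("0123456789abcdef0123456789abcdef")
	fmt.Println(k, m)
}
```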
@@ -535,17 +607,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
	if err != nil {
		return nil, fmt.Errorf("failed to authorize account: %w", err)
	}
-	// If this is a key limited to a single bucket, it must exist already
-	if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
-		allowedBucket := f.opt.Enc.ToStandardName(f.info.Allowed.BucketName)
-		if allowedBucket == "" {
-			return nil, errors.New("bucket that application key is restricted to no longer exists")
-		}
-		if allowedBucket != f.rootBucket {
-			return nil, fmt.Errorf("you must use bucket %q with this application key", allowedBucket)
-		}
-		f.cache.MarkOK(f.rootBucket)
-		f.setBucketID(f.rootBucket, f.info.Allowed.BucketID)
-	}
+	// If this is a key limited to one or more buckets, one of them must exist
+	// and be ours.
+	if f.rootBucket != "" && len(f.info.APIs.Storage.Allowed.Buckets) != 0 {
+		buckets := f.info.APIs.Storage.Allowed.Buckets
+		var rootFound = false
+		var rootID string
+		for _, b := range buckets {
+			allowedBucket := f.opt.Enc.ToStandardName(b.Name)
+			if allowedBucket == "" {
+				fs.Debugf(f, "bucket %q that application key is restricted to no longer exists", b.ID)
+				continue
+			}
+			if allowedBucket == f.rootBucket {
+				rootFound = true
+				rootID = b.ID
+			}
+		}
+		if !rootFound {
+			return nil, fmt.Errorf("you must use bucket(s) %q with this application key", buckets)
+		}
+		f.cache.MarkOK(f.rootBucket)
+		f.setBucketID(f.rootBucket, rootID)
+	}
	if f.rootBucket != "" && f.rootDirectory != "" {
		// Check to see if the (bucket,directory) is actually an existing file
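
For illustration, a minimal, self-contained sketch of the root-bucket check above, written against a hypothetical slice of (ID, Name) pairs; the type, helper name and error text are assumptions, not the backend's own:

```go
package main

import "fmt"

// bucket is a stand-in for the (ID, Name) pairs the v4
// b2_authorize_account response lists for a restricted key.
type bucket struct {
	ID   string
	Name string
}

// findRootBucket returns the ID of the allowed bucket matching root,
// or an error if the key does not grant access to it. Buckets with an
// empty name no longer exist and are skipped (non-fatal for
// multi-bucket keys).
func findRootBucket(root string, allowed []bucket) (string, error) {
	for _, b := range allowed {
		if b.Name == "" {
			continue
		}
		if b.Name == root {
			return b.ID, nil
		}
	}
	return "", fmt.Errorf("you must use one of the buckets %v with this application key", allowed)
}

func main() {
	id, err := findRootBucket("photos", []bucket{{ID: "b1", Name: "photos"}, {ID: "b2", Name: ""}})
	fmt.Println(id, err)
}
```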
@@ -571,7 +655,7 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
	defer f.authMu.Unlock()
	opts := rest.Opts{
		Method:   "GET",
-		Path:     "/b2api/v1/b2_authorize_account",
+		Path:     "/b2api/v4/b2_authorize_account",
		RootURL:  f.opt.Endpoint,
		UserName: f.opt.Account,
		Password: f.opt.Key,
@@ -584,13 +668,13 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
	if err != nil {
		return fmt.Errorf("failed to authenticate: %w", err)
	}
-	f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
+	f.srv.SetRoot(f.info.APIs.Storage.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
	return nil
}

// hasPermission returns if the current AuthorizationToken has the selected permission
func (f *Fs) hasPermission(permission string) bool {
-	return slices.Contains(f.info.Allowed.Capabilities, permission)
+	return slices.Contains(f.info.APIs.Storage.Allowed.Capabilities, permission)
}

// getUploadURL returns the upload info with the UploadURL and the AuthorizationToken
@@ -995,44 +1079,68 @@ type listBucketFn func(*api.Bucket) error
// listBucketsToFn lists the buckets to the function supplied
func (f *Fs) listBucketsToFn(ctx context.Context, bucketName string, fn listBucketFn) error {
-	var account = api.ListBucketsRequest{
-		AccountID: f.info.AccountID,
-		BucketID:  f.info.Allowed.BucketID,
-	}
-	if bucketName != "" && account.BucketID == "" {
-		account.BucketName = f.opt.Enc.FromStandardName(bucketName)
-	}
-	var response api.ListBucketsResponse
-	opts := rest.Opts{
-		Method: "POST",
-		Path:   "/b2_list_buckets",
-	}
-	err := f.pacer.Call(func() (bool, error) {
-		resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
-		return f.shouldRetry(ctx, resp, err)
-	})
-	if err != nil {
-		return err
-	}
+	responses := make([]api.ListBucketsResponse, len(f.info.APIs.Storage.Allowed.Buckets))[:0]
+
+	for i := range f.info.APIs.Storage.Allowed.Buckets {
+		b := &f.info.APIs.Storage.Allowed.Buckets[i]
+		// Empty names indicate a bucket that no longer exists, this is non-fatal
+		// for multi-bucket API keys.
+		if b.Name == "" {
+			continue
+		}
+		// When requesting a specific bucket skip over non-matching names
+		if bucketName != "" && b.Name != bucketName {
+			continue
+		}
+		var account = api.ListBucketsRequest{
+			AccountID: f.info.AccountID,
+			BucketID:  b.ID,
+		}
+		if bucketName != "" && account.BucketID == "" {
+			account.BucketName = f.opt.Enc.FromStandardName(bucketName)
+		}
+		var response api.ListBucketsResponse
+		opts := rest.Opts{
+			Method: "POST",
+			Path:   "/b2_list_buckets",
+		}
+		err := f.pacer.Call(func() (bool, error) {
+			resp, err := f.srv.CallJSON(ctx, &opts, &account, &response)
+			return f.shouldRetry(ctx, resp, err)
+		})
+		if err != nil {
+			return err
+		}
+		responses = append(responses, response)
+	}
	f.bucketIDMutex.Lock()
	f.bucketTypeMutex.Lock()
	f._bucketID = make(map[string]string, 1)
	f._bucketType = make(map[string]string, 1)
-	for i := range response.Buckets {
-		bucket := &response.Buckets[i]
-		bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
-		f.cache.MarkOK(bucket.Name)
-		f._bucketID[bucket.Name] = bucket.ID
-		f._bucketType[bucket.Name] = bucket.Type
+	for ri := range responses {
+		response := &responses[ri]
+		for i := range response.Buckets {
+			bucket := &response.Buckets[i]
+			bucket.Name = f.opt.Enc.ToStandardName(bucket.Name)
+			f.cache.MarkOK(bucket.Name)
+			f._bucketID[bucket.Name] = bucket.ID
+			f._bucketType[bucket.Name] = bucket.Type
+		}
	}
	f.bucketTypeMutex.Unlock()
	f.bucketIDMutex.Unlock()
-	for i := range response.Buckets {
-		bucket := &response.Buckets[i]
-		err = fn(bucket)
-		if err != nil {
-			return err
+	for ri := range responses {
+		response := &responses[ri]
+		for i := range response.Buckets {
+			bucket := &response.Buckets[i]
+			err := fn(bucket)
+			if err != nil {
+				return err
+			}
		}
	}
	return nil
@@ -1435,6 +1543,16 @@ func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *
		Name:         f.opt.Enc.FromStandardPath(dstPath),
		DestBucketID: destBucketID,
	}
if f.opt.SSECustomerKey != "" && f.opt.SSECustomerKeyMD5 != "" {
serverSideEncryptionConfig := api.ServerSideEncryption{
Mode: "SSE-C",
Algorithm: f.opt.SSECustomerAlgorithm,
CustomerKey: f.opt.SSECustomerKeyBase64,
CustomerKeyMd5: f.opt.SSECustomerKeyMD5,
}
request.SourceServerSideEncryption = &serverSideEncryptionConfig
request.DestinationServerSideEncryption = &serverSideEncryptionConfig
}
	if newInfo == nil {
		request.MetadataDirective = "COPY"
	} else {
@@ -1524,7 +1642,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
	bucket, bucketPath := f.split(remote)
	var RootURL string
	if f.opt.DownloadURL == "" {
-		RootURL = f.info.DownloadURL
+		RootURL = f.info.APIs.Storage.DownloadURL
	} else {
		RootURL = f.opt.DownloadURL
	}
@@ -1866,15 +1984,16 @@ var _ io.ReadCloser = &openFile{}
func (o *Object) getOrHead(ctx context.Context, method string, options []fs.OpenOption) (resp *http.Response, info *api.File, err error) {
	opts := rest.Opts{
		Method:       method,
		Options:      options,
		NoResponse:   method == "HEAD",
		ExtraHeaders: map[string]string{},
	}
	// Use downloadUrl from backblaze if downloadUrl is not set
	// otherwise use the custom downloadUrl
	if o.fs.opt.DownloadURL == "" {
-		opts.RootURL = o.fs.info.DownloadURL
+		opts.RootURL = o.fs.info.APIs.Storage.DownloadURL
	} else {
		opts.RootURL = o.fs.opt.DownloadURL
	}
@@ -1886,6 +2005,11 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
		bucket, bucketPath := o.split()
		opts.Path += "/file/" + urlEncode(o.fs.opt.Enc.FromStandardName(bucket)) + "/" + urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath))
	}
if o.fs.opt.SSECustomerKey != "" && o.fs.opt.SSECustomerKeyMD5 != "" {
opts.ExtraHeaders[sseAlgorithmHeader] = o.fs.opt.SSECustomerAlgorithm
opts.ExtraHeaders[sseKeyHeader] = o.fs.opt.SSECustomerKeyBase64
opts.ExtraHeaders[sseMd5Header] = o.fs.opt.SSECustomerKeyMD5
}
	err = o.fs.pacer.Call(func() (bool, error) {
		resp, err = o.fs.srv.Call(ctx, &opts)
		return o.fs.shouldRetry(ctx, resp, err)
@@ -2150,6 +2274,11 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
		},
		ContentLength: &size,
	}
if o.fs.opt.SSECustomerKey != "" && o.fs.opt.SSECustomerKeyMD5 != "" {
opts.ExtraHeaders[sseAlgorithmHeader] = o.fs.opt.SSECustomerAlgorithm
opts.ExtraHeaders[sseKeyHeader] = o.fs.opt.SSECustomerKeyBase64
opts.ExtraHeaders[sseMd5Header] = o.fs.opt.SSECustomerKeyMD5
}
	var response api.FileInfo
	// Don't retry, return a retry error instead
	err = o.fs.pacer.CallNoRetry(func() (bool, error) {
@@ -2241,7 +2370,10 @@ func (f *Fs) OpenChunkWriter(ctx context.Context, remote string, src fs.ObjectIn
func (o *Object) Remove(ctx context.Context) error {
	bucket, bucketPath := o.split()
	if o.fs.opt.Versions {
-		return errNotWithVersions
+		t, path := api.RemoveVersion(bucketPath)
+		if !t.IsZero() {
+			return o.fs.deleteByID(ctx, o.id, path)
+		}
	}
	if o.fs.opt.VersionAt.IsSet() {
		return errNotWithVersionAt
@@ -2264,32 +2396,36 @@ func (o *Object) ID() string {
var lifecycleHelp = fs.CommandHelp{
	Name:  "lifecycle",
	Short: "Read or set the lifecycle for a bucket.",
	Long: `This command can be used to read or set the lifecycle for a bucket.

To show the current lifecycle rules:

` + "```console" + `
rclone backend lifecycle b2:bucket
` + "```" + `

This will dump something like this showing the lifecycle rules.

` + "```json" + `
[
    {
        "daysFromHidingToDeleting": 1,
        "daysFromUploadingToHiding": null,
        "daysFromStartingToCancelingUnfinishedLargeFiles": null,
        "fileNamePrefix": ""
    }
]
` + "```" + `

If there are no lifecycle rules (the default) then it will just return ` + "`[]`" + `.

To reset the current lifecycle rules:

` + "```console" + `
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
` + "```" + `

This will run and then print the new lifecycle rules as above.
@@ -2301,14 +2437,17 @@ the daysFromHidingToDeleting to 1 day. You can enable hard_delete in
the config also which will mean deletions won't cause versions but
overwrites will still cause versions to be made.

` + "```console" + `
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
` + "```" + `

See: <https://www.backblaze.com/docs/cloud-storage-lifecycle-rules>`,
	Opts: map[string]string{
		"daysFromHidingToDeleting": `After a file has been hidden for this many days
it is deleted. 0 is off.`,
		"daysFromUploadingToHiding": `This many days after uploading a file is hidden.`,
		"daysFromStartingToCancelingUnfinishedLargeFiles": `Cancels any unfinished
large file versions after this many days.`,
	},
}
@@ -2391,13 +2530,14 @@ max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.

` + "```console" + `
rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
` + "```" + `

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.`,
	Opts: map[string]string{
		"max-age": "Max age of upload to delete.",
	},
}
@@ -2420,8 +2560,9 @@ var cleanupHiddenHelp = fs.CommandHelp{
Note that you can use --interactive/-i or --dry-run with this command to see what
it would do.

` + "```console" + `
rclone backend cleanup-hidden b2:bucket/path/to/dir
` + "```",
}

func (f *Fs) cleanupHiddenCommand(ctx context.Context, name string, arg []string, opt map[string]string) (out any, err error) {


@@ -144,6 +144,14 @@ func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs
		request.ContentType = newInfo.ContentType
		request.Info = newInfo.Info
	}
if o.fs.opt.SSECustomerKey != "" && o.fs.opt.SSECustomerKeyMD5 != "" {
request.ServerSideEncryption = &api.ServerSideEncryption{
Mode: "SSE-C",
Algorithm: o.fs.opt.SSECustomerAlgorithm,
CustomerKey: o.fs.opt.SSECustomerKeyBase64,
CustomerKeyMd5: o.fs.opt.SSECustomerKeyMD5,
}
}
	opts := rest.Opts{
		Method: "POST",
		Path:   "/b2_start_large_file",
@@ -295,6 +303,12 @@ func (up *largeUpload) WriteChunk(ctx context.Context, chunkNumber int, reader i
		ContentLength: &sizeWithHash,
	}
if up.o.fs.opt.SSECustomerKey != "" && up.o.fs.opt.SSECustomerKeyMD5 != "" {
opts.ExtraHeaders[sseAlgorithmHeader] = up.o.fs.opt.SSECustomerAlgorithm
opts.ExtraHeaders[sseKeyHeader] = up.o.fs.opt.SSECustomerKeyBase64
opts.ExtraHeaders[sseMd5Header] = up.o.fs.opt.SSECustomerKeyMD5
}
	var response api.UploadPartResponse
	resp, err := up.f.srv.CallJSON(ctx, &opts, nil, &response)
@@ -334,6 +348,17 @@ func (up *largeUpload) copyChunk(ctx context.Context, part int, partSize int64)
		PartNumber: int64(part + 1),
		Range:      fmt.Sprintf("bytes=%d-%d", offset, offset+partSize-1),
	}
if up.o.fs.opt.SSECustomerKey != "" && up.o.fs.opt.SSECustomerKeyMD5 != "" {
serverSideEncryptionConfig := api.ServerSideEncryption{
Mode: "SSE-C",
Algorithm: up.o.fs.opt.SSECustomerAlgorithm,
CustomerKey: up.o.fs.opt.SSECustomerKeyBase64,
CustomerKeyMd5: up.o.fs.opt.SSECustomerKeyMD5,
}
request.SourceServerSideEncryption = &serverSideEncryptionConfig
request.DestinationServerSideEncryption = &serverSideEncryptionConfig
}
	var response api.UploadPartResponse
	resp, err := up.f.srv.CallJSON(ctx, &opts, &request, &response)
	retry, err := up.f.shouldRetry(ctx, resp, err)


@@ -87,13 +87,11 @@ func init() {
Description: "Box", Description: "Box",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) { Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) {
jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type")
boxAccessToken, boxAccessTokenOk := m.Get("access_token") boxAccessToken, boxAccessTokenOk := m.Get("access_token")
var err error var err error
// If using box config.json, use JWT auth // If using box config.json, use JWT auth
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" { if usesJWTAuth(m) {
err = refreshJWTToken(ctx, jsonFile, boxSubType, name, m) err = refreshJWTToken(ctx, name, m)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure token with jwt authentication: %w", err) return nil, fmt.Errorf("failed to configure token with jwt authentication: %w", err)
} }
@@ -114,6 +112,11 @@ func init() {
		}, {
			Name: "box_config_file",
			Help: "Box App config.json location\n\nLeave blank normally." + env.ShellExpandHelp,
}, {
Name: "config_credentials",
Help: "Box App config.json contents.\n\nLeave blank normally.",
Hide: fs.OptionHideBoth,
Sensitive: true,
		}, {
			Name: "access_token",
			Help: "Box App Primary Access Token\n\nLeave blank normally.",
@@ -184,9 +187,17 @@ See: https://developer.box.com/guides/authentication/jwt/as-user/
	})
}

-func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, name string, m configmap.Mapper) error {
-	jsonFile = env.ShellExpand(jsonFile)
-	boxConfig, err := getBoxConfig(jsonFile)
+func usesJWTAuth(m configmap.Mapper) bool {
+	jsonFile, okFile := m.Get("box_config_file")
+	jsonFileCredentials, okCredentials := m.Get("config_credentials")
+	boxSubType, boxSubTypeOk := m.Get("box_sub_type")
+	return (okFile || okCredentials) && boxSubTypeOk && (jsonFile != "" || jsonFileCredentials != "") && boxSubType != ""
+}
+
+func refreshJWTToken(ctx context.Context, name string, m configmap.Mapper) error {
+	boxSubType, _ := m.Get("box_sub_type")
+	boxConfig, err := getBoxConfig(m)
	if err != nil {
		return fmt.Errorf("get box config: %w", err)
	}
@@ -205,12 +216,19 @@ func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, na
	return err
}
-func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
-	file, err := os.ReadFile(configFile)
-	if err != nil {
-		return nil, fmt.Errorf("box: failed to read Box config: %w", err)
-	}
-	err = json.Unmarshal(file, &boxConfig)
+func getBoxConfig(m configmap.Mapper) (boxConfig *api.ConfigJSON, err error) {
+	configFileCredentials, _ := m.Get("config_credentials")
+	configFileBytes := []byte(configFileCredentials)
+	if configFileCredentials == "" {
+		configFile, _ := m.Get("box_config_file")
+		configFileBytes, err = os.ReadFile(configFile)
+		if err != nil {
+			return nil, fmt.Errorf("box: failed to read Box config: %w", err)
+		}
+	}
+	err = json.Unmarshal(configFileBytes, &boxConfig)
	if err != nil {
		return nil, fmt.Errorf("box: failed to parse Box config: %w", err)
	}
@@ -485,15 +503,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
		f.srv.SetHeader("as-user", f.opt.Impersonate)
	}
-	jsonFile, ok := m.Get("box_config_file")
-	boxSubType, boxSubTypeOk := m.Get("box_sub_type")
	if ts != nil {
		// If using box config.json and JWT, renewing should just refresh the token and
		// should do so whether there are uploads pending or not.
-		if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
+		if usesJWTAuth(m) {
			f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
-				err := refreshJWTToken(ctx, jsonFile, boxSubType, name, m)
+				err := refreshJWTToken(ctx, name, m)
				return err
			})
			f.tokenRenewer.Start()


@@ -2,10 +2,8 @@
package compress

import (
-	"bufio"
	"bytes"
	"context"
-	"crypto/md5"
	"encoding/base64"
	"encoding/binary"
	"encoding/hex"
@@ -46,6 +44,7 @@ const (
	minCompressionRatio = 1.1
	gzFileExt           = ".gz"
	zstdFileExt         = ".zst"
	metaFileExt         = ".json"
	uncompressedFileExt = ".bin"
)
@@ -54,6 +53,7 @@ const (
const (
	Uncompressed = 0
	Gzip         = 2
	Zstd         = 4
)

var nameRegexp = regexp.MustCompile(`^(.+?)\.([A-Za-z0-9-_]{11})$`)
@@ -66,6 +66,10 @@ func init() {
Value: "gzip", Value: "gzip",
Help: "Standard gzip compression with fastest parameters.", Help: "Standard gzip compression with fastest parameters.",
}, },
{
Value: "zstd",
Help: "Zstandard compression — fast modern algorithm offering adjustable speed-to-compression tradeoffs.",
},
}

// Register our remote
@@ -87,17 +91,23 @@ func init() {
			Examples: compressionModeOptions,
		}, {
			Name: "level",
-			Help: `GZIP compression level (-2 to 9).
-
-Generally -1 (default, equivalent to 5) is recommended.
-Levels 1 to 9 increase compression at the cost of speed. Going past 6
-generally offers very little return.
-
-Level -2 uses Huffman encoding only. Only use if you know what you
-are doing.
-Level 0 turns off compression.`,
-			Default:  sgzip.DefaultCompression,
-			Advanced: true,
+			Help: `GZIP (levels -2 to 9):
+
+- -2 — Huffman encoding only. Only use if you know what you're doing.
+- -1 (default) — recommended; equivalent to level 5.
+- 0 — turns off compression.
+- 1-9 — increase compression at the cost of speed. Going past 6 generally offers very little return.
+
+ZSTD (levels 0 to 4):
+
+- 0 — turns off compression entirely.
+- 1 — fastest compression with the lowest ratio.
+- 2 (default) — good balance of speed and compression.
+- 3 — better compression, but uses about 2-3x more CPU than the default.
+- 4 — best possible compression ratio (highest CPU cost).
+
+Notes:
+
+- Choose GZIP for wide compatibility; ZSTD for better speed/ratio tradeoffs.
+- Negative gzip levels: -2 = Huffman-only, -1 = default (≈ level 5).`,
+			Required: true,
		}, {
Name: "ram_cache_limit", Name: "ram_cache_limit",
Help: `Some remotes don't allow the upload of files with unknown size. Help: `Some remotes don't allow the upload of files with unknown size.
@@ -112,6 +122,47 @@ this limit will be cached on disk.`,
	})
}
// compressionModeHandler defines the interface for handling different compression modes
type compressionModeHandler interface {
// processFileNameGetFileExtension returns the file extension for the given compression mode
processFileNameGetFileExtension(compressionMode int) string
// newObjectGetOriginalSize returns the original file size from the metadata
newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error)
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
isCompressible(r io.Reader, compressionMode int) (bool, error)
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error)
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error)
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error)
// This function generates a metadata object for sgzip.GzipMetadata or SzstdMetadata.
// Warning: This function panics if cmeta is not of the expected type.
newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata
}
// Options defines the configuration for this backend
type Options struct {
	Remote string `config:"remote"`
@@ -125,12 +176,13 @@ type Options struct {
// Fs represents a wrapped fs.Fs
type Fs struct {
	fs.Fs
	wrapper     fs.Fs
	name        string
	root        string
	opt         Options
	mode        int                    // compression mode id
	features    *fs.Features           // optional features
	modeHandler compressionModeHandler // compression mode handler
}

// NewFs constructs an Fs from the path, container:path
@@ -167,13 +219,28 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
return nil, fmt.Errorf("failed to make remote %s:%q to wrap: %w", wName, remotePath, err) return nil, fmt.Errorf("failed to make remote %s:%q to wrap: %w", wName, remotePath, err)
} }
compressionMode := compressionModeFromName(opt.CompressionMode)
var modeHandler compressionModeHandler
switch compressionMode {
case Gzip:
modeHandler = &gzipModeHandler{}
case Zstd:
modeHandler = &zstdModeHandler{}
case Uncompressed:
modeHandler = &uncompressedModeHandler{}
default:
modeHandler = &unknownModeHandler{}
}
	// Create the wrapping fs
	f := &Fs{
		Fs:   wrappedFs,
		name: name,
		root: rpath,
		opt:  *opt,
-		mode: compressionModeFromName(opt.CompressionMode),
+		mode:        compressionMode,
+		modeHandler: modeHandler,
	}
	// Correct root if definitely pointing to a file
	if err == fs.ErrorIsFile {
@@ -215,10 +282,13 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
	return f, err
}

// compressionModeFromName converts a compression mode name to its int representation.
func compressionModeFromName(name string) int {
	switch name {
	case "gzip":
		return Gzip
	case "zstd":
		return Zstd
	default:
		return Uncompressed
	}
@@ -242,7 +312,7 @@ func base64ToInt64(str string) (int64, error) {
// Processes a file name for a compressed file. Returns the original file name, the extension, and the size of the original file.
// Returns -2 for the original size if the file is uncompressed.
-func processFileName(compressedFileName string) (origFileName string, extension string, origSize int64, err error) {
+func processFileName(compressedFileName string, modeHandler compressionModeHandler) (origFileName string, extension string, origSize int64, err error) {
	// Separate the filename and size from the extension
	extensionPos := strings.LastIndex(compressedFileName, ".")
	if extensionPos == -1 {
@@ -261,7 +331,8 @@ func processFileName(compressedFileName string) (origFileName string, extension
	if err != nil {
		return "", "", 0, errors.New("could not decode size")
	}
-	return match[1], gzFileExt, size, nil
+	ext := modeHandler.processFileNameGetFileExtension(compressionModeFromName(compressedFileName[extensionPos+1:]))
+	return match[1], ext, size, nil
}
// Generates the file name for a metadata file // Generates the file name for a metadata file
@@ -286,11 +357,15 @@ func unwrapMetadataFile(filename string) (string, bool) {
// makeDataName generates the file name for a data file with specified compression mode
func makeDataName(remote string, size int64, mode int) (newRemote string) {
-	if mode != Uncompressed {
-		newRemote = remote + "." + int64ToBase64(size) + gzFileExt
-	} else {
-		newRemote = remote + uncompressedFileExt
-	}
+	switch mode {
+	case Gzip:
+		newRemote = remote + "." + int64ToBase64(size) + gzFileExt
+	case Zstd:
+		newRemote = remote + "." + int64ToBase64(size) + zstdFileExt
+	default:
+		newRemote = remote + uncompressedFileExt
+	}
	return newRemote
}
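
For illustration, the data-file names this produces look like the following; the `<size>` component stands for the 11-character value produced by int64ToBase64 for the original file size (shown symbolically, not computed here):

```go
// Illustrative only - <size> is the base64-encoded original size that
// nameRegexp later matches as an 11-character group.
//
//	makeDataName("file.txt", n, Gzip)         // -> "file.txt.<size>.gz"
//	makeDataName("file.txt", n, Zstd)         // -> "file.txt.<size>.zst"
//	makeDataName("file.txt", n, Uncompressed) // -> "file.txt.bin"
```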
@@ -304,7 +379,7 @@ func (f *Fs) dataName(remote string, size int64, compressed bool) (name string)
// addData parses an object and adds it to the DirEntries
func (f *Fs) addData(entries *fs.DirEntries, o fs.Object) {
-	origFileName, _, size, err := processFileName(o.Remote())
+	origFileName, _, size, err := processFileName(o.Remote(), f.modeHandler)
	if err != nil {
		fs.Errorf(o, "Error on parsing file name: %v", err)
		return
@@ -427,8 +502,12 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
	if err != nil {
		return nil, fmt.Errorf("error decoding metadata: %w", err)
	}
	size, err := f.modeHandler.newObjectGetOriginalSize(meta)
	if err != nil {
		return nil, fmt.Errorf("error reading metadata: %w", err)
	}
	// Create our Object
-	o, err := f.Fs.NewObject(ctx, makeDataName(remote, meta.CompressionMetadata.Size, meta.Mode))
+	o, err := f.Fs.NewObject(ctx, makeDataName(remote, size, meta.Mode))
	if err != nil {
		return nil, err
	}
@@ -437,7 +516,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// checkCompressAndType checks if an object is compressible and determines it's mime type
// returns a multireader with the bytes that were read to determine mime type
-func checkCompressAndType(in io.Reader) (newReader io.Reader, compressible bool, mimeType string, err error) {
+func checkCompressAndType(in io.Reader, compressionMode int, modeHandler compressionModeHandler) (newReader io.Reader, compressible bool, mimeType string, err error) {
	in, wrap := accounting.UnWrap(in)
	buf := make([]byte, heuristicBytes)
	n, err := in.Read(buf)
@@ -446,7 +525,7 @@ func checkCompressAndType(in io.Reader) (newReader io.Reader, compressible bool,
		return nil, false, "", err
	}
	mime := mimetype.Detect(buf)
-	compressible, err = isCompressible(bytes.NewReader(buf))
+	compressible, err = modeHandler.isCompressible(bytes.NewReader(buf), compressionMode)
	if err != nil {
		return nil, false, "", err
	}
@@ -454,26 +533,6 @@ func checkCompressAndType(in io.Reader) (newReader io.Reader, compressible bool,
	return wrap(in), compressible, mime.String(), nil
}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func isCompressible(r io.Reader) (bool, error) {
var b bytes.Buffer
w, err := sgzip.NewWriterLevel(&b, sgzip.DefaultCompression)
if err != nil {
return false, err
}
n, err := io.Copy(w, r)
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
// verifyObjectHash verifies the Objects hash
func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.MultiHasher, ht hash.Type) error {
	srcHash := hasher.Sums()[ht]
@@ -494,9 +553,9 @@ func (f *Fs) verifyObjectHash(ctx context.Context, o fs.Object, hasher *hash.Mul
type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error)

-type compressionResult struct {
-	err  error
-	meta sgzip.GzipMetadata
-}
+type compressionResult[T sgzip.GzipMetadata | SzstdMetadata] struct {
+	err  error
+	meta T
+}

// replicating some of operations.Rcat functionality because we want to support remotes without streaming
@@ -537,106 +596,18 @@ func (f *Fs) rcat(ctx context.Context, dstFileName string, in io.ReadCloser, mod
return nil, fmt.Errorf("failed to write temporary local file: %w", err) return nil, fmt.Errorf("failed to write temporary local file: %w", err)
} }
if _, err = tempFile.Seek(0, 0); err != nil { if _, err = tempFile.Seek(0, 0); err != nil {
return nil, err return nil, fmt.Errorf("failed to seek temporary local file: %w", err)
} }
finfo, err := tempFile.Stat() finfo, err := tempFile.Stat()
if err != nil { if err != nil {
return nil, err return nil, fmt.Errorf("failed to stat temporary local file: %w", err)
} }
return f.Fs.Put(ctx, tempFile, object.NewStaticObjectInfo(dstFileName, modTime, finfo.Size(), false, nil, f.Fs)) return f.Fs.Put(ctx, tempFile, object.NewStaticObjectInfo(dstFileName, modTime, finfo.Size(), false, nil, f.Fs))
} }
// Put a compressed version of a file. Returns a wrappable object and metadata.
func (f *Fs) putCompress(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, mimeType string) (fs.Object, *ObjectMetadata, error) {
+	return f.modeHandler.putCompress(ctx, f, in, src, options, mimeType)
-	// Unwrap reader accounting
in, wrap := accounting.UnWrap(in)
// Add the metadata hasher
metaHasher := md5.New()
in = io.TeeReader(in, metaHasher)
// Compress the file
pipeReader, pipeWriter := io.Pipe()
results := make(chan compressionResult)
go func() {
gz, err := sgzip.NewWriterLevel(pipeWriter, f.opt.CompressionLevel)
if err != nil {
results <- compressionResult{err: err, meta: sgzip.GzipMetadata{}}
return
}
_, err = io.Copy(gz, in)
gzErr := gz.Close()
if gzErr != nil {
fs.Errorf(nil, "Failed to close compress: %v", gzErr)
if err == nil {
err = gzErr
}
}
closeErr := pipeWriter.Close()
if closeErr != nil {
fs.Errorf(nil, "Failed to close pipe: %v", closeErr)
if err == nil {
err = closeErr
}
}
results <- compressionResult{err: err, meta: gz.MetaData()}
}()
wrappedIn := wrap(bufio.NewReaderSize(pipeReader, bufferSize)) // Probably no longer needed as sgzip has it's own buffering
// Find a hash the destination supports to compute a hash of
// the compressed data.
ht := f.Fs.Hashes().GetOne()
var hasher *hash.MultiHasher
var err error
if ht != hash.None {
// unwrap the accounting again
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
if err != nil {
return nil, nil, err
}
// add the hasher and re-wrap the accounting
wrappedIn = io.TeeReader(wrappedIn, hasher)
wrappedIn = wrap(wrappedIn)
}
// Transfer the data
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx), options)
//o, err := operations.Rcat(ctx, f.Fs, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx))
if err != nil {
if o != nil {
removeErr := o.Remove(ctx)
if removeErr != nil {
fs.Errorf(o, "Failed to remove partially transferred object: %v", err)
}
}
return nil, nil, err
}
// Check whether we got an error during compression
result := <-results
err = result.err
if err != nil {
if o != nil {
removeErr := o.Remove(ctx)
if removeErr != nil {
fs.Errorf(o, "Failed to remove partially compressed object: %v", err)
}
}
return nil, nil, err
}
// Generate metadata
meta := newMetadata(result.meta.Size, f.mode, result.meta, hex.EncodeToString(metaHasher.Sum(nil)), mimeType)
// Check the hashes of the compressed data if we were comparing them
if ht != hash.None && hasher != nil {
err = f.verifyObjectHash(ctx, o, hasher, ht)
if err != nil {
return nil, nil, err
}
}
return o, meta, nil
}
// Put an uncompressed version of a file. Returns a wrappable object and metadata.
@@ -680,7 +651,8 @@ func (f *Fs) putUncompress(ctx context.Context, in io.Reader, src fs.ObjectInfo,
	if err != nil {
		return nil, nil, err
	}
-	return o, newMetadata(o.Size(), Uncompressed, sgzip.GzipMetadata{}, hex.EncodeToString(sum), mimeType), nil
+	return f.modeHandler.putUncompressGetNewMetadata(o, Uncompressed, hex.EncodeToString(sum), mimeType, sum)
}

// This function will write a metadata struct to a metadata Object for an src. Returns a wrappable metadata object.
@@ -751,7 +723,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
	o, err := f.NewObject(ctx, src.Remote())
	if err == fs.ErrorObjectNotFound {
		// Get our file compressibility
-		in, compressible, mimeType, err := checkCompressAndType(in)
+		in, compressible, mimeType, err := checkCompressAndType(in, f.mode, f.modeHandler)
		if err != nil {
			return nil, err
		}
@@ -771,7 +743,7 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
	}
	found := err == nil
-	in, compressible, mimeType, err := checkCompressAndType(in)
+	in, compressible, mimeType, err := checkCompressAndType(in, f.mode, f.modeHandler)
	if err != nil {
		return nil, err
	}
@@ -1090,11 +1062,12 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, duration fs.Duration
// ObjectMetadata describes the metadata for an Object.
type ObjectMetadata struct {
	Mode     int    // Compression mode of the file.
	Size     int64  // Size of the object.
	MD5      string // MD5 hash of the file.
	MimeType string // Mime type of the file
-	CompressionMetadata sgzip.GzipMetadata
+	CompressionMetadataGzip *sgzip.GzipMetadata // Metadata for Gzip compression
+	CompressionMetadataZstd *SzstdMetadata      // Metadata for Zstd compression
}

// Object with external metadata
@@ -1107,17 +1080,6 @@ type Object struct {
	meta *ObjectMetadata // Metadata struct for this object (nil if not loaded)
}
// This function generates a metadata object
func newMetadata(size int64, mode int, cmeta sgzip.GzipMetadata, md5 string, mimeType string) *ObjectMetadata {
meta := new(ObjectMetadata)
meta.Size = size
meta.Mode = mode
meta.CompressionMetadata = cmeta
meta.MD5 = md5
meta.MimeType = mimeType
return meta
}
// This function will read the metadata from a metadata object.
func readMetadata(ctx context.Context, mo fs.Object) (meta *ObjectMetadata, err error) {
	// Open our meradata object
@@ -1165,7 +1127,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
		return o.mo, o.mo.Update(ctx, in, src, options...)
	}
-	in, compressible, mimeType, err := checkCompressAndType(in)
+	in, compressible, mimeType, err := checkCompressAndType(in, o.meta.Mode, o.f.modeHandler)
	if err != nil {
		return err
	}
@@ -1278,7 +1240,7 @@ func (o *Object) String() string {
// Remote returns the remote path
func (o *Object) Remote() string {
-	origFileName, _, _, err := processFileName(o.Object.Remote())
+	origFileName, _, _, err := processFileName(o.Object.Remote(), o.f.modeHandler)
	if err != nil {
		fs.Errorf(o.f, "Could not get remote path for: %s", o.Object.Remote())
		return o.Object.Remote()
@@ -1381,7 +1343,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
		return o.Object.Open(ctx, options...)
	}
	// Get offset and limit from OpenOptions, pass the rest to the underlying remote
-	var openOptions = []fs.OpenOption{&fs.SeekOption{Offset: 0}}
	var offset, limit int64 = 0, -1
	for _, option := range options {
		switch x := option.(type) {
@@ -1389,31 +1350,12 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
			offset = x.Offset
		case *fs.RangeOption:
			offset, limit = x.Decode(o.Size())
-		default:
-			openOptions = append(openOptions, option)
		}
	}
	// Get a chunkedreader for the wrapped object
	chunkedReader := chunkedreader.New(ctx, o.Object, initialChunkSize, maxChunkSize, chunkStreams)
-	// Get file handle
-	var file io.Reader
+	var retCloser io.Closer = chunkedReader
+	return o.f.modeHandler.openGetReadCloser(ctx, o, offset, limit, chunkedReader, retCloser, options...)
if offset != 0 {
file, err = sgzip.NewReaderAt(chunkedReader, &o.meta.CompressionMetadata, offset)
} else {
file, err = sgzip.NewReader(chunkedReader)
}
if err != nil {
return nil, err
}
var fileReader io.Reader
if limit != -1 {
fileReader = io.LimitReader(file, limit)
} else {
fileReader = file
}
// Return a ReadCloser
return ReadCloserWrapper{Reader: fileReader, Closer: chunkedReader}, nil
}

// ObjectInfo describes a wrapped fs.ObjectInfo for being the source


@@ -48,7 +48,27 @@ func TestRemoteGzip(t *testing.T) {
	opt.ExtraConfig = []fstests.ExtraConfigItem{
		{Name: name, Key: "type", Value: "compress"},
		{Name: name, Key: "remote", Value: tempdir},
-		{Name: name, Key: "compression_mode", Value: "gzip"},
+		{Name: name, Key: "mode", Value: "gzip"},
{Name: name, Key: "level", Value: "-1"},
}
opt.QuickTestOK = true
fstests.Run(t, &opt)
}
// TestRemoteZstd tests ZSTD compression
func TestRemoteZstd(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-compress-test-zstd")
name := "TestCompressZstd"
opt := defaultOpt
opt.RemoteName = name + ":"
opt.ExtraConfig = []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "compress"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "mode", Value: "zstd"},
{Name: name, Key: "level", Value: "2"},
	}
	opt.QuickTestOK = true
	fstests.Run(t, &opt)


@@ -0,0 +1,207 @@
package compress
import (
"bufio"
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"errors"
"io"
"github.com/buengese/sgzip"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunkedreader"
"github.com/rclone/rclone/fs/hash"
)
// gzipModeHandler implements compressionModeHandler for gzip
type gzipModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (g *gzipModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
var b bytes.Buffer
var n int64
w, err := sgzip.NewWriterLevel(&b, sgzip.DefaultCompression)
if err != nil {
return false, err
}
n, err = io.Copy(w, r)
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
// newObjectGetOriginalSize returns the original file size from the metadata
func (g *gzipModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
if meta.CompressionMetadataGzip == nil {
return 0, errors.New("missing gzip metadata")
}
return meta.CompressionMetadataGzip.Size, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (g *gzipModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
var file io.Reader
if offset != 0 {
file, err = sgzip.NewReaderAt(cr, o.meta.CompressionMetadataGzip, offset)
} else {
file, err = sgzip.NewReader(cr)
}
if err != nil {
return nil, err
}
var fileReader io.Reader
if limit != -1 {
fileReader = io.LimitReader(file, limit)
} else {
fileReader = file
}
// Return a ReadCloser
return ReadCloserWrapper{Reader: fileReader, Closer: closer}, nil
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (g *gzipModeHandler) processFileNameGetFileExtension(compressionMode int) string {
if compressionMode == Gzip {
return gzFileExt
}
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (g *gzipModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
// Unwrap reader accounting
in, wrap := accounting.UnWrap(in)
// Add the metadata hasher
metaHasher := md5.New()
in = io.TeeReader(in, metaHasher)
// Compress the file
pipeReader, pipeWriter := io.Pipe()
resultsGzip := make(chan compressionResult[sgzip.GzipMetadata])
go func() {
gz, err := sgzip.NewWriterLevel(pipeWriter, f.opt.CompressionLevel)
if err != nil {
resultsGzip <- compressionResult[sgzip.GzipMetadata]{err: err, meta: sgzip.GzipMetadata{}}
close(resultsGzip)
return
}
_, err = io.Copy(gz, in)
gzErr := gz.Close()
if gzErr != nil && err == nil {
err = gzErr
}
closeErr := pipeWriter.Close()
if closeErr != nil && err == nil {
err = closeErr
}
resultsGzip <- compressionResult[sgzip.GzipMetadata]{err: err, meta: gz.MetaData()}
close(resultsGzip)
}()
wrappedIn := wrap(bufio.NewReaderSize(pipeReader, bufferSize)) // Probably no longer needed as sgzip has it's own buffering
// Find a hash the destination supports to compute a hash of
// the compressed data.
ht := f.Fs.Hashes().GetOne()
var hasher *hash.MultiHasher
var err error
if ht != hash.None {
// unwrap the accounting again
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
if err != nil {
return nil, nil, err
}
// add the hasher and re-wrap the accounting
wrappedIn = io.TeeReader(wrappedIn, hasher)
wrappedIn = wrap(wrappedIn)
}
// Transfer the data
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx), options)
if err != nil {
if o != nil {
if removeErr := o.Remove(ctx); removeErr != nil {
fs.Errorf(o, "Failed to remove partially transferred object: %v", removeErr)
}
}
return nil, nil, err
}
// Check whether we got an error during compression
result := <-resultsGzip
if result.err != nil {
if o != nil {
if removeErr := o.Remove(ctx); removeErr != nil {
fs.Errorf(o, "Failed to remove partially compressed object: %v", removeErr)
}
}
return nil, nil, result.err
}
// Generate metadata
meta := g.newMetadata(result.meta.Size, f.mode, result.meta, hex.EncodeToString(metaHasher.Sum(nil)), mimeType)
// Check the hashes of the compressed data if we were comparing them
if ht != hash.None && hasher != nil {
err = f.verifyObjectHash(ctx, o, hasher, ht)
if err != nil {
return nil, nil, err
}
}
return o, meta, nil
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (g *gzipModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return o, g.newMetadata(o.Size(), mode, sgzip.GzipMetadata{}, hex.EncodeToString(sum), mimeType), nil
}
// This function generates a metadata object for sgzip.GzipMetadata or SzstdMetadata.
// Warning: This function panics if cmeta is not of the expected type.
func (g *gzipModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
meta, ok := cmeta.(sgzip.GzipMetadata)
if !ok {
panic("invalid cmeta type: expected sgzip.GzipMetadata")
}
objMeta := new(ObjectMetadata)
objMeta.Size = size
objMeta.Mode = mode
objMeta.CompressionMetadataGzip = &meta
objMeta.CompressionMetadataZstd = nil
objMeta.MD5 = md5
objMeta.MimeType = mimeType
return objMeta
}


@@ -0,0 +1,327 @@
package compress
import (
"context"
"errors"
"io"
"runtime"
"sync"
szstd "github.com/a1ex3/zstd-seekable-format-go/pkg"
"github.com/klauspost/compress/zstd"
)
const szstdChunkSize int = 1 << 20 // 1 MiB chunk size
// SzstdMetadata holds metadata for szstd compressed files.
type SzstdMetadata struct {
BlockSize int // BlockSize is the size of the blocks in the zstd file
Size int64 // Size is the uncompressed size of the file
BlockData []uint32 // BlockData is the block data for the zstd file, used for seeking
}
// SzstdWriter is a writer that compresses data in szstd format.
type SzstdWriter struct {
enc *zstd.Encoder
w szstd.ConcurrentWriter
metadata SzstdMetadata
mu sync.Mutex
}
// NewWriterSzstd creates a new szstd writer with the specified options.
// It initializes the szstd writer with a zstd encoder and returns a pointer to the SzstdWriter.
// The writer can be used to write data in chunks, and it will automatically handle block sizes and metadata.
func NewWriterSzstd(w io.Writer, opts ...zstd.EOption) (*SzstdWriter, error) {
encoder, err := zstd.NewWriter(nil, opts...)
if err != nil {
return nil, err
}
sw, err := szstd.NewWriter(w, encoder)
if err != nil {
if err := encoder.Close(); err != nil {
return nil, err
}
return nil, err
}
return &SzstdWriter{
enc: encoder,
w: sw,
metadata: SzstdMetadata{
BlockSize: szstdChunkSize,
Size: 0,
},
}, nil
}
// Write writes data to the szstd writer in chunks of szstdChunkSize.
// It handles the block size and metadata updates automatically.
func (w *SzstdWriter) Write(p []byte) (int, error) {
if len(p) == 0 {
return 0, nil
}
if w.metadata.BlockData == nil {
numBlocks := (len(p) + w.metadata.BlockSize - 1) / w.metadata.BlockSize
w.metadata.BlockData = make([]uint32, 1, numBlocks+1)
w.metadata.BlockData[0] = 0
}
start := 0
total := len(p)
var writerFunc szstd.FrameSource = func() ([]byte, error) {
if start >= total {
return nil, nil
}
end := min(start+w.metadata.BlockSize, total)
chunk := p[start:end]
size := end - start
w.mu.Lock()
w.metadata.Size += int64(size)
w.mu.Unlock()
start = end
return chunk, nil
}
// write sizes of compressed blocks in the callback
err := w.w.WriteMany(context.Background(), writerFunc,
szstd.WithWriteCallback(func(size uint32) {
w.mu.Lock()
lastOffset := w.metadata.BlockData[len(w.metadata.BlockData)-1]
w.metadata.BlockData = append(w.metadata.BlockData, lastOffset+size)
w.mu.Unlock()
}),
)
if err != nil {
return 0, err
}
return total, nil
}
// Close closes the SzstdWriter and its underlying encoder.
func (w *SzstdWriter) Close() error {
if err := w.w.Close(); err != nil {
return err
}
if err := w.enc.Close(); err != nil {
return err
}
return nil
}
// GetMetadata returns the metadata of the szstd writer.
func (w *SzstdWriter) GetMetadata() SzstdMetadata {
return w.metadata
}
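
For illustration, a minimal sketch of using the writer above to compress an in-memory payload and read back the seek metadata; it assumes only the APIs in this file plus the klauspost zstd encoder options, and the function name is illustrative, not part of the backend:

```go
package compress

import (
	"bytes"
	"fmt"

	"github.com/klauspost/compress/zstd"
)

// exampleSzstdWriter is an illustrative sketch: it compresses an
// in-memory payload and prints the seek metadata that SzstdWriter
// records for later random access.
func exampleSzstdWriter() error {
	var buf bytes.Buffer
	w, err := NewWriterSzstd(&buf, zstd.WithEncoderLevel(zstd.SpeedDefault))
	if err != nil {
		return err
	}
	payload := bytes.Repeat([]byte("some compressible data "), 100000) // ~2.2 MiB
	if _, err := w.Write(payload); err != nil {
		return err
	}
	if err := w.Close(); err != nil {
		return err
	}
	meta := w.GetMetadata()
	// With the 1 MiB block size a ~2.2 MiB write is split into 3 blocks,
	// so BlockData holds 4 cumulative compressed offsets starting at 0.
	fmt.Println(meta.Size, meta.BlockSize, len(meta.BlockData))
	return nil
}
```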
// SzstdReaderAt is a reader that allows random access in szstd compressed data.
type SzstdReaderAt struct {
r szstd.Reader
decoder *zstd.Decoder
metadata *SzstdMetadata
pos int64
mu sync.Mutex
}
// NewReaderAtSzstd creates a new SzstdReaderAt at the specified io.ReadSeeker.
func NewReaderAtSzstd(rs io.ReadSeeker, meta *SzstdMetadata, offset int64, opts ...zstd.DOption) (*SzstdReaderAt, error) {
decoder, err := zstd.NewReader(nil, opts...)
if err != nil {
return nil, err
}
r, err := szstd.NewReader(rs, decoder)
if err != nil {
decoder.Close()
return nil, err
}
sr := &SzstdReaderAt{
r: r,
decoder: decoder,
metadata: meta,
pos: 0,
}
// Set initial position to the provided offset
if _, err := sr.Seek(offset, io.SeekStart); err != nil {
if err := sr.Close(); err != nil {
return nil, err
}
return nil, err
}
return sr, nil
}
// Seek sets the offset for the next Read.
func (s *SzstdReaderAt) Seek(offset int64, whence int) (int64, error) {
s.mu.Lock()
defer s.mu.Unlock()
pos, err := s.r.Seek(offset, whence)
if err == nil {
s.pos = pos
}
return pos, err
}
func (s *SzstdReaderAt) Read(p []byte) (int, error) {
s.mu.Lock()
defer s.mu.Unlock()
n, err := s.r.Read(p)
if err == nil {
s.pos += int64(n)
}
return n, err
}
// ReadAt reads data at the specified offset.
func (s *SzstdReaderAt) ReadAt(p []byte, off int64) (int, error) {
if off < 0 {
return 0, errors.New("invalid offset")
}
if off >= s.metadata.Size {
return 0, io.EOF
}
endOff := min(off+int64(len(p)), s.metadata.Size)
// Find all blocks covered by the range
type blockInfo struct {
index int // Block index
offsetInBlock int64 // Offset within the block for starting reading
bytesToRead int64 // How many bytes to read from this block
}
var blocks []blockInfo
uncompressedOffset := int64(0)
currentOff := off
for i := 0; i < len(s.metadata.BlockData)-1; i++ {
blockUncompressedEnd := min(uncompressedOffset+int64(s.metadata.BlockSize), s.metadata.Size)
if currentOff < blockUncompressedEnd && endOff > uncompressedOffset {
offsetInBlock := max(0, currentOff-uncompressedOffset)
bytesToRead := min(blockUncompressedEnd-uncompressedOffset-offsetInBlock, endOff-currentOff)
blocks = append(blocks, blockInfo{
index: i,
offsetInBlock: offsetInBlock,
bytesToRead: bytesToRead,
})
currentOff += bytesToRead
if currentOff >= endOff {
break
}
}
uncompressedOffset = blockUncompressedEnd
}
if len(blocks) == 0 {
return 0, io.EOF
}
// Parallel block decoding
type decodeResult struct {
index int
data []byte
err error
}
resultCh := make(chan decodeResult, len(blocks))
var wg sync.WaitGroup
sem := make(chan struct{}, runtime.NumCPU())
for _, block := range blocks {
wg.Add(1)
go func(block blockInfo) {
defer wg.Done()
sem <- struct{}{}
defer func() { <-sem }()
startOffset := int64(s.metadata.BlockData[block.index])
endOffset := int64(s.metadata.BlockData[block.index+1])
compressedSize := endOffset - startOffset
compressed := make([]byte, compressedSize)
_, err := s.r.ReadAt(compressed, startOffset)
if err != nil && err != io.EOF {
resultCh <- decodeResult{index: block.index, err: err}
return
}
decoded, err := s.decoder.DecodeAll(compressed, nil)
if err != nil {
resultCh <- decodeResult{index: block.index, err: err}
return
}
resultCh <- decodeResult{index: block.index, data: decoded, err: nil}
}(block)
}
go func() {
wg.Wait()
close(resultCh)
}()
// Collect results in block index order
totalRead := 0
results := make(map[int]decodeResult)
expected := len(blocks)
minIndex := blocks[0].index
for res := range resultCh {
results[res.index] = res
for {
if result, ok := results[minIndex]; ok {
if result.err != nil {
return 0, result.err
}
// find the corresponding blockInfo
var blk blockInfo
for _, b := range blocks {
if b.index == result.index {
blk = b
break
}
}
start := blk.offsetInBlock
end := start + blk.bytesToRead
copy(p[totalRead:totalRead+int(blk.bytesToRead)], result.data[start:end])
totalRead += int(blk.bytesToRead)
minIndex++
if minIndex-blocks[0].index >= len(blocks) {
break
}
} else {
break
}
}
if len(results) == expected && minIndex-blocks[0].index >= len(blocks) {
break
}
}
return totalRead, nil
}
// Close closes the SzstdReaderAt and underlying decoder.
func (s *SzstdReaderAt) Close() error {
if err := s.r.Close(); err != nil {
return err
}
s.decoder.Close()
return nil
}
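A package-internal usage sketch tying the two types together (not part of the commit; it assumes "bytes" and "io" are imported): compress a buffer with SzstdWriter, keep the metadata it builds up, then serve a random-access read through SzstdReaderAt.

// exampleSzstdRoundTrip is illustrative only: write data through SzstdWriter,
// capture its metadata, then read a small range back via SzstdReaderAt.
func exampleSzstdRoundTrip(data []byte) ([]byte, error) {
	var compressed bytes.Buffer
	w, err := NewWriterSzstd(&compressed)
	if err != nil {
		return nil, err
	}
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil {
		return nil, err
	}
	meta := w.GetMetadata() // BlockSize, Size and BlockData for seeking
	// Random access: read up to 16 bytes starting at offset 42 without
	// decompressing the whole stream.
	r, err := NewReaderAtSzstd(bytes.NewReader(compressed.Bytes()), &meta, 0)
	if err != nil {
		return nil, err
	}
	defer func() { _ = r.Close() }()
	buf := make([]byte, 16)
	n, err := r.ReadAt(buf, 42)
	if err != nil && err != io.EOF {
		return nil, err
	}
	return buf[:n], nil
}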

View File

@@ -0,0 +1,65 @@
package compress
import (
"context"
"fmt"
"io"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/chunkedreader"
)
// uncompressedModeHandler implements compressionModeHandler for uncompressed files
type uncompressedModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (u *uncompressedModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
return false, nil
}
// newObjectGetOriginalSize returns the original file size from the metadata
func (u *uncompressedModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
return 0, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (u *uncompressedModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
return o.Object.Open(ctx, options...)
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (u *uncompressedModeHandler) processFileNameGetFileExtension(compressionMode int) string {
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (u *uncompressedModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unsupported compression mode %d", f.mode)
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (u *uncompressedModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unsupported compression mode %d", Uncompressed)
}
// newMetadata is a no-op for uncompressed mode: there is no compression
// metadata to wrap, so it always returns nil.
func (u *uncompressedModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
return nil
}

View File

@@ -0,0 +1,65 @@
package compress
import (
"context"
"fmt"
"io"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/chunkedreader"
)
// unknownModeHandler implements compressionModeHandler for unknown compression types
type unknownModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (unk *unknownModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
return false, fmt.Errorf("unknown compression mode %d", compressionMode)
}
// newObjectGetOriginalSize returns the original file size from the metadata
func (unk *unknownModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
return 0, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (unk *unknownModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
return nil, fmt.Errorf("unknown compression mode %d", o.meta.Mode)
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (unk *unknownModeHandler) processFileNameGetFileExtension(compressionMode int) string {
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (unk *unknownModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unknown compression mode %d", f.mode)
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (unk *unknownModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return nil, nil, fmt.Errorf("unknown compression mode")
}
// newMetadata is a no-op for an unknown compression mode and always returns nil.
func (unk *unknownModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
return nil
}
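The four handler types shown in these files are presumably selected from the backend's compression mode. The mapping below is a hypothetical sketch of that selection (the Gzip constant and the call site are assumptions), not the commit's actual wiring.

// modeHandlerFor is a hypothetical helper mapping a compression mode to one of
// the handlers defined above; the real backend may wire this up differently.
func modeHandlerFor(mode int) compressionModeHandler {
	switch mode {
	case Gzip:
		return &gzipModeHandler{}
	case Zstd:
		return &zstdModeHandler{}
	case Uncompressed:
		return &uncompressedModeHandler{}
	default:
		return &unknownModeHandler{}
	}
}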

View File

@@ -0,0 +1,192 @@
package compress
import (
"bufio"
"bytes"
"context"
"crypto/md5"
"encoding/hex"
"errors"
"io"
"github.com/klauspost/compress/zstd"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunkedreader"
"github.com/rclone/rclone/fs/hash"
)
// zstdModeHandler implements compressionModeHandler for zstd
type zstdModeHandler struct{}
// isCompressible checks the compression ratio of the provided data and returns true if the ratio exceeds
// the configured threshold
func (z *zstdModeHandler) isCompressible(r io.Reader, compressionMode int) (bool, error) {
var b bytes.Buffer
var n int64
w, err := NewWriterSzstd(&b, zstd.WithEncoderLevel(zstd.SpeedDefault))
if err != nil {
return false, err
}
n, err = io.Copy(w, r)
if err != nil {
return false, err
}
err = w.Close()
if err != nil {
return false, err
}
ratio := float64(n) / float64(b.Len())
return ratio > minCompressionRatio, nil
}
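A quick worked example of the ratio test above, with illustrative numbers only (the minCompressionRatio threshold itself is defined elsewhere in the backend):

// Illustration only: 1 MiB of input that compresses to 512 KiB has ratio 2.0,
// so it passes any threshold below 2 and the file is stored compressed.
func exampleRatio() float64 {
	uncompressed := float64(1 << 20)
	compressed := float64(512 << 10)
	return uncompressed / compressed // 2.0
}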
// newObjectGetOriginalSize returns the original file size from the metadata
func (z *zstdModeHandler) newObjectGetOriginalSize(meta *ObjectMetadata) (int64, error) {
if meta.CompressionMetadataZstd == nil {
return 0, errors.New("missing zstd metadata")
}
return meta.CompressionMetadataZstd.Size, nil
}
// openGetReadCloser opens a compressed object and returns a ReadCloser in the Open method
func (z *zstdModeHandler) openGetReadCloser(
ctx context.Context,
o *Object,
offset int64,
limit int64,
cr chunkedreader.ChunkedReader,
closer io.Closer,
options ...fs.OpenOption,
) (rc io.ReadCloser, err error) {
var file io.Reader
if offset != 0 {
file, err = NewReaderAtSzstd(cr, o.meta.CompressionMetadataZstd, offset)
} else {
file, err = zstd.NewReader(cr)
}
if err != nil {
return nil, err
}
var fileReader io.Reader
if limit != -1 {
fileReader = io.LimitReader(file, limit)
} else {
fileReader = file
}
// Return a ReadCloser
return ReadCloserWrapper{Reader: fileReader, Closer: closer}, nil
}
// processFileNameGetFileExtension returns the file extension for the given compression mode
func (z *zstdModeHandler) processFileNameGetFileExtension(compressionMode int) string {
if compressionMode == Zstd {
return zstdFileExt
}
return ""
}
// putCompress compresses the input data and uploads it to the remote, returning the new object and its metadata
func (z *zstdModeHandler) putCompress(
ctx context.Context,
f *Fs,
in io.Reader,
src fs.ObjectInfo,
options []fs.OpenOption,
mimeType string,
) (fs.Object, *ObjectMetadata, error) {
// Unwrap reader accounting
in, wrap := accounting.UnWrap(in)
// Add the metadata hasher
metaHasher := md5.New()
in = io.TeeReader(in, metaHasher)
// Compress the file
pipeReader, pipeWriter := io.Pipe()
resultsZstd := make(chan compressionResult[SzstdMetadata])
go func() {
writer, err := NewWriterSzstd(pipeWriter, zstd.WithEncoderLevel(zstd.EncoderLevel(f.opt.CompressionLevel)))
if err != nil {
resultsZstd <- compressionResult[SzstdMetadata]{err: err}
close(resultsZstd)
return
}
_, err = io.Copy(writer, in)
if wErr := writer.Close(); wErr != nil && err == nil {
err = wErr
}
if cErr := pipeWriter.Close(); cErr != nil && err == nil {
err = cErr
}
resultsZstd <- compressionResult[SzstdMetadata]{err: err, meta: writer.GetMetadata()}
close(resultsZstd)
}()
wrappedIn := wrap(bufio.NewReaderSize(pipeReader, bufferSize))
ht := f.Fs.Hashes().GetOne()
var hasher *hash.MultiHasher
var err error
if ht != hash.None {
wrappedIn, wrap = accounting.UnWrap(wrappedIn)
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
if err != nil {
return nil, nil, err
}
wrappedIn = io.TeeReader(wrappedIn, hasher)
wrappedIn = wrap(wrappedIn)
}
o, err := f.rcat(ctx, makeDataName(src.Remote(), src.Size(), f.mode), io.NopCloser(wrappedIn), src.ModTime(ctx), options)
if err != nil {
return nil, nil, err
}
result := <-resultsZstd
if result.err != nil {
if o != nil {
_ = o.Remove(ctx)
}
return nil, nil, result.err
}
// Build the object metadata using the uncompressed size recorded by the writer
meta := z.newMetadata(result.meta.Size, f.mode, result.meta, hex.EncodeToString(metaHasher.Sum(nil)), mimeType)
if ht != hash.None && hasher != nil {
err = f.verifyObjectHash(ctx, o, hasher, ht)
if err != nil {
return nil, nil, err
}
}
return o, meta, nil
}
// putUncompressGetNewMetadata returns metadata in the putUncompress method for a specific compression algorithm
func (z *zstdModeHandler) putUncompressGetNewMetadata(o fs.Object, mode int, md5 string, mimeType string, sum []byte) (fs.Object, *ObjectMetadata, error) {
return o, z.newMetadata(o.Size(), mode, SzstdMetadata{}, hex.EncodeToString(sum), mimeType), nil
}
// This function generates a metadata object for sgzip.GzipMetadata or SzstdMetadata.
// Warning: This function panics if cmeta is not of the expected type.
func (z *zstdModeHandler) newMetadata(size int64, mode int, cmeta any, md5 string, mimeType string) *ObjectMetadata {
meta, ok := cmeta.(SzstdMetadata)
if !ok {
panic("invalid cmeta type: expected SzstdMetadata")
}
objMeta := new(ObjectMetadata)
objMeta.Size = size
objMeta.Mode = mode
objMeta.CompressionMetadataGzip = nil
objMeta.CompressionMetadataZstd = &meta
objMeta.MD5 = md5
objMeta.MimeType = mimeType
return objMeta
}
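The putCompress implementation above hinges on compressing into one end of an io.Pipe while the upload side drains the other. Below is a standalone sketch of that pattern using only the klauspost zstd encoder; rcat, hashing and the accounting wrappers from the real code are deliberately left out.

// Standalone sketch: one goroutine compresses into an io.Pipe while the caller
// streams the compressed bytes to the destination writer.
package main

import (
	"io"
	"os"

	"github.com/klauspost/compress/zstd"
)

func compressTo(dst io.Writer, src io.Reader) error {
	pr, pw := io.Pipe()
	errCh := make(chan error, 1)
	go func() {
		enc, err := zstd.NewWriter(pw)
		if err != nil {
			errCh <- err
			_ = pw.CloseWithError(err)
			return
		}
		_, err = io.Copy(enc, src)
		if cerr := enc.Close(); err == nil {
			err = cerr
		}
		_ = pw.CloseWithError(err)
		errCh <- err
	}()
	// The "upload" side just drains the pipe; in the backend this is rcat.
	if _, err := io.Copy(dst, pr); err != nil {
		return err
	}
	return <-errCh
}

func main() {
	_ = compressTo(os.Stdout, os.Stdin)
}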

View File

@@ -923,28 +923,30 @@ func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryT
var commandHelp = []fs.CommandHelp{
{
Name: "encode",
- Short: "Encode the given filename(s)",
Short: "Encode the given filename(s).",
Long: `This encodes the filenames given as arguments returning a list of
strings of the encoded results.
- Usage Example:
Usage examples:
` + "```console" + `
rclone backend encode crypt: file1 [file2...]
rclone rc backend/command command=encode fs=crypt: file1 [file2...]
- `,
` + "```",
},
{
Name: "decode",
- Short: "Decode the given filename(s)",
Short: "Decode the given filename(s).",
Long: `This decodes the filenames given as arguments returning a list of
strings of the decoded results. It will return an error if any of the
inputs are invalid.
- Usage Example:
Usage examples:
` + "```console" + `
rclone backend decode crypt: encryptedfile1 [encryptedfile2...]
rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile2...]
- `,
` + "```",
},
}

View File

@@ -563,21 +563,26 @@ var commandHelp = []fs.CommandHelp{{
Short: "Show metadata about the DOI.", Short: "Show metadata about the DOI.",
Long: `This command returns a JSON object with some information about the DOI. Long: `This command returns a JSON object with some information about the DOI.
rclone backend medatadata doi: Usage example:
It returns a JSON object representing metadata about the DOI. ` + "```console" + `
`, rclone backend metadata doi:
` + "```" + `
It returns a JSON object representing metadata about the DOI.`,
}, { }, {
Name: "set", Name: "set",
Short: "Set command for updating the config parameters.", Short: "Set command for updating the config parameters.",
Long: `This set command can be used to update the config parameters Long: `This set command can be used to update the config parameters
for a running doi backend. for a running doi backend.
Usage Examples: Usage examples:
rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] ` + "```console" + `
rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2] rclone backend set doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI rclone rc backend/command command=set fs=doi: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=doi: -o doi=NEW_DOI
` + "```" + `
The option keys are named as they are in the config file. The option keys are named as they are in the config file.
@@ -585,8 +590,7 @@ This rebuilds the connection to the doi backend when it is called with
the new parameters. Only new parameters need be passed as the values the new parameters. Only new parameters need be passed as the values
will default to those currently in use. will default to those currently in use.
It doesn't return anything. It doesn't return anything.`,
`,
}} }}
// Command the backend to run a named command // Command the backend to run a named command

View File

@@ -3664,41 +3664,47 @@ func (f *Fs) rescue(ctx context.Context, dirID string, delete bool) (err error)
var commandHelp = []fs.CommandHelp{{
Name: "get",
- Short: "Get command for fetching the drive config parameters",
Short: "Get command for fetching the drive config parameters.",
- Long: `This is a get command which will be used to fetch the various drive config parameters
Long: `This is a get command which will be used to fetch the various drive config
parameters.
- Usage Examples:
Usage examples:
` + "```console" + `
rclone backend get drive: [-o service_account_file] [-o chunk_size]
rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]
- `,
` + "```",
Opts: map[string]string{
- "chunk_size": "show the current upload chunk size",
- "service_account_file": "show the current service account file",
"chunk_size": "Show the current upload chunk size.",
"service_account_file": "Show the current service account file.",
},
}, {
Name: "set",
- Short: "Set command for updating the drive config parameters",
Short: "Set command for updating the drive config parameters.",
- Long: `This is a set command which will be used to update the various drive config parameters
Long: `This is a set command which will be used to update the various drive config
parameters.
- Usage Examples:
Usage examples:
` + "```console" + `
rclone backend set drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
- `,
` + "```",
Opts: map[string]string{
- "chunk_size": "update the current upload chunk size",
- "service_account_file": "update the current service account file",
"chunk_size": "Update the current upload chunk size.",
"service_account_file": "Update the current service account file.",
},
}, {
Name: "shortcut",
- Short: "Create shortcuts from files or directories",
Short: "Create shortcuts from files or directories.",
Long: `This command creates shortcuts from files or directories.
- Usage:
Usage examples:
` + "```console" + `
rclone backend shortcut drive: source_item destination_shortcut
rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut
` + "```" + `
In the first example this creates a shortcut from the "source_item"
which can be a file or a directory to the "destination_shortcut". The
@@ -3708,90 +3714,100 @@ from "drive:"
In the second example this creates a shortcut from the "source_item"
relative to "drive:" to the "destination_shortcut" relative to
"drive2:". This may fail with a permission error if the user
- authenticated with "drive2:" can't read files from "drive:".
- `,
authenticated with "drive2:" can't read files from "drive:".`,
Opts: map[string]string{
- "target": "optional target remote for the shortcut destination",
"target": "Optional target remote for the shortcut destination.",
},
}, {
Name: "drives",
- Short: "List the Shared Drives available to this account",
Short: "List the Shared Drives available to this account.",
Long: `This command lists the Shared Drives (Team Drives) available to this
account.
- Usage:
Usage example:
` + "```console" + `
rclone backend [-o config] drives drive:
` + "```" + `
- This will return a JSON list of objects like this
This will return a JSON list of objects like this:
` + "```json" + `
[
{
"id": "0ABCDEF-01234567890",
"kind": "drive#teamDrive",
"name": "My Drive"
},
{
"id": "0ABCDEFabcdefghijkl",
"kind": "drive#teamDrive",
"name": "Test Drive"
}
]
` + "```" + `
With the -o config parameter it will output the list in a format
suitable for adding to a config file to make aliases for all the
drives found and a combined drive.
` + "```ini" + `
[My Drive]
type = alias
remote = drive,team_drive=0ABCDEF-01234567890,root_folder_id=:
[Test Drive]
type = alias
remote = drive,team_drive=0ABCDEFabcdefghijkl,root_folder_id=:
[AllDrives]
type = combine
upstreams = "My Drive=My Drive:" "Test Drive=Test Drive:"
` + "```" + `
Adding this to the rclone config file will cause those team drives to
be accessible with the aliases shown. Any illegal characters will be
substituted with "_" and duplicate names will have numbers suffixed.
It will also add a remote called AllDrives which shows all the shared
- drives combined into one directory tree.
- `,
drives combined into one directory tree.`,
}, {
Name: "untrash",
- Short: "Untrash files and directories",
Short: "Untrash files and directories.",
Long: `This command untrashes all the files and directories in the directory
passed in recursively.
- Usage:
Usage example:
` + "```console" + `
rclone backend untrash drive:directory
rclone backend --interactive untrash drive:directory subdir
` + "```" + `
This takes an optional directory to trash which make this easier to
use via the API.
- Use the --interactive/-i or --dry-run flag to see what would be restored before restoring it.
Use the --interactive/-i or --dry-run flag to see what would be restored before
restoring it.
Result:
` + "```json" + `
{
"Untrashed": 17,
"Errors": 0
}
- `,
` + "```",
}, {
Name: "copyid",
- Short: "Copy files by ID",
Short: "Copy files by ID.",
- Long: `This command copies files by ID
Long: `This command copies files by ID.
- Usage:
Usage examples:
` + "```console" + `
rclone backend copyid drive: ID path
rclone backend copyid drive: ID1 path1 ID2 path2
` + "```" + `
It copies the drive file with ID given to the path (an rclone path which
will be passed internally to rclone copyto). The ID and path pairs can be
@@ -3804,17 +3820,19 @@ component will be used as the file name.
If the destination is a drive backend then server-side copying will be
attempted if possible.
- Use the --interactive/-i or --dry-run flag to see what would be copied before copying.
- `,
Use the --interactive/-i or --dry-run flag to see what would be copied before
copying.`,
}, {
Name: "moveid",
- Short: "Move files by ID",
Short: "Move files by ID.",
- Long: `This command moves files by ID
Long: `This command moves files by ID.
- Usage:
Usage examples:
` + "```console" + `
rclone backend moveid drive: ID path
rclone backend moveid drive: ID1 path1 ID2 path2
` + "```" + `
It moves the drive file with ID given to the path (an rclone path which
will be passed internally to rclone moveto).
@@ -3826,58 +3844,65 @@ component will be used as the file name.
If the destination is a drive backend then server-side moving will be
attempted if possible.
- Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.
- `,
Use the --interactive/-i or --dry-run flag to see what would be moved beforehand.`,
}, {
Name: "exportformats",
- Short: "Dump the export formats for debug purposes",
Short: "Dump the export formats for debug purposes.",
}, {
Name: "importformats",
- Short: "Dump the import formats for debug purposes",
Short: "Dump the import formats for debug purposes.",
}, {
Name: "query",
- Short: "List files using Google Drive query language",
Short: "List files using Google Drive query language.",
- Long: `This command lists files based on a query
Long: `This command lists files based on a query.
- Usage:
Usage example:
` + "```console" + `
rclone backend query drive: query
` + "```" + `
The query syntax is documented at [Google Drive Search query terms and
operators](https://developers.google.com/drive/api/guides/ref-search-terms).
For example:
` + "```console" + `
rclone backend query drive: "'0ABc9DEFGHIJKLMNop0QRatUVW3X' in parents and name contains 'foo'"
` + "```" + `
If the query contains literal ' or \ characters, these need to be escaped with
\ characters. "'" becomes "\'" and "\" becomes "\\\", for example to match a
file named "foo ' \.txt":
` + "```console" + `
rclone backend query drive: "name = 'foo \' \\\.txt'"
` + "```" + `
The result is a JSON array of matches, for example:
` + "```json" + `
[
{
"createdTime": "2017-06-29T19:58:28.537Z",
"id": "0AxBe_CDEF4zkGHI4d0FjYko2QkD",
"md5Checksum": "68518d16be0c6fbfab918be61d658032",
"mimeType": "text/plain",
"modifiedTime": "2024-02-02T10:40:02.874Z",
"name": "foo ' \\.txt",
"parents": [
"0BxAe_BCDE4zkFGZpcWJGek0xbzC"
],
"resourceKey": "0-ABCDEFGHIXJQpIGqBJq3MC",
"sha1Checksum": "8f284fa768bfb4e45d076a579ab3905ab6bfa893",
"size": "311",
"webViewLink": "https://drive.google.com/file/d/0AxBe_CDEF4zkGHI4d0FjYko2QkD/view?usp=drivesdk\u0026resourcekey=0-ABCDEFGHIXJQpIGqBJq3MC"
}
- ]`,
]
` + "```console",
}, {
Name: "rescue",
- Short: "Rescue or delete any orphaned files",
Short: "Rescue or delete any orphaned files.",
Long: `This command rescues or deletes any orphaned files or directories.
Sometimes files can get orphaned in Google Drive. This means that they
@@ -3886,26 +3911,31 @@ are no longer in any folder in Google Drive.
This command finds those files and either rescues them to a directory
you specify or deletes them.
- Usage:
This can be used in 3 ways.
- First, list all orphaned files
First, list all orphaned files:
` + "```console" + `
rclone backend rescue drive:
` + "```" + `
- Second rescue all orphaned files to the directory indicated
Second rescue all orphaned files to the directory indicated:
` + "```console" + `
rclone backend rescue drive: "relative/path/to/rescue/directory"
` + "```" + `
- e.g. To rescue all orphans to a directory called "Orphans" in the top level
E.g. to rescue all orphans to a directory called "Orphans" in the top level:
` + "```console" + `
rclone backend rescue drive: Orphans
` + "```" + `
- Third delete all orphaned files to the trash
Third delete all orphaned files to the trash:
` + "```console" + `
rclone backend rescue drive: -o delete
- `,
` + "```",
}}
// Command the backend to run a named command

View File

@@ -1330,6 +1330,16 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
var result *files.RelocationResult
err = f.pacer.Call(func() (bool, error) {
result, err = f.srv.MoveV2(&arg)
switch e := err.(type) {
case files.MoveV2APIError:
// There seems to be a bit of eventual consistency here which causes this to
// fail on just created objects
// See: https://github.com/rclone/rclone/issues/8881
if e.EndpointError != nil && e.EndpointError.FromLookup != nil && e.EndpointError.FromLookup.Tag == files.LookupErrorNotFound {
fs.Debugf(srcObj, "Retrying move on %v error", err)
return true, err
}
}
return shouldRetry(ctx, err)
})
if err != nil {

View File

@@ -1292,7 +1292,7 @@ func (f *ftpReadCloser) Close() error {
// See: https://github.com/rclone/rclone/issues/3445#issuecomment-521654257
if errX := textprotoError(err); errX != nil {
switch errX.Code {
- case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable, ftp.StatusAboutToSend:
case ftp.StatusTransfertAborted, ftp.StatusFileUnavailable, ftp.StatusAboutToSend, ftp.StatusRequestedFileActionOK:
err = nil
}
}

View File

@@ -43,33 +43,42 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
var commandHelp = []fs.CommandHelp{{
Name: "drop",
- Short: "Drop cache",
Short: "Drop cache.",
Long: `Completely drop checksum cache.
- Usage Example:
Usage example:
` + "```console" + `
rclone backend drop hasher:
- `,
` + "```",
}, {
Name: "dump",
- Short: "Dump the database",
Short: "Dump the database.",
- Long: "Dump cache records covered by the current remote",
Long: "Dump cache records covered by the current remote.",
}, {
Name: "fulldump",
- Short: "Full dump of the database",
Short: "Full dump of the database.",
- Long: "Dump all cache records in the database",
Long: "Dump all cache records in the database.",
}, {
Name: "import",
- Short: "Import a SUM file",
Short: "Import a SUM file.",
Long: `Amend hash cache from a SUM file and bind checksums to files by size/time.
- Usage Example:
Usage example:
` + "```console" + `
rclone backend import hasher:subdir md5 /path/to/sum.md5
- `,
` + "```",
}, {
Name: "stickyimport",
- Short: "Perform fast import of a SUM file",
Short: "Perform fast import of a SUM file.",
Long: `Fill hash cache from a SUM file without verifying file fingerprints.
- Usage Example:
Usage example:
` + "```console" + `
rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5
- `,
` + "```",
}}
func (f *Fs) dbDump(ctx context.Context, full bool, root string) error {

View File

@@ -11,6 +11,7 @@ import (
"io" "io"
"mime" "mime"
"net/http" "net/http"
"net/textproto"
"net/url" "net/url"
"path" "path"
"strings" "strings"
@@ -37,6 +38,10 @@ func init() {
Description: "HTTP", Description: "HTTP",
NewFs: NewFs, NewFs: NewFs,
CommandHelp: commandHelp, CommandHelp: commandHelp,
MetadataInfo: &fs.MetadataInfo{
System: systemMetadataInfo,
Help: `HTTP metadata keys are case insensitive and are always returned in lower case.`,
},
Options: []fs.Option{{ Options: []fs.Option{{
Name: "url", Name: "url",
Help: "URL of HTTP host to connect to.\n\nE.g. \"https://example.com\", or \"https://user:pass@example.com\" to use a username and password.", Help: "URL of HTTP host to connect to.\n\nE.g. \"https://example.com\", or \"https://user:pass@example.com\" to use a username and password.",
@@ -98,6 +103,40 @@ sizes of any files, and some files that don't exist may be in the listing.`,
fs.Register(fsi) fs.Register(fsi)
} }
// system metadata keys which this backend owns
var systemMetadataInfo = map[string]fs.MetadataHelp{
"cache-control": {
Help: "Cache-Control header",
Type: "string",
Example: "no-cache",
},
"content-disposition": {
Help: "Content-Disposition header",
Type: "string",
Example: "inline",
},
"content-disposition-filename": {
Help: "Filename retrieved from Content-Disposition header",
Type: "string",
Example: "file.txt",
},
"content-encoding": {
Help: "Content-Encoding header",
Type: "string",
Example: "gzip",
},
"content-language": {
Help: "Content-Language header",
Type: "string",
Example: "en-US",
},
"content-type": {
Help: "Content-Type header",
Type: "string",
Example: "text/plain",
},
}
// Options defines the configuration for this backend
type Options struct {
Endpoint string `config:"url"`
@@ -126,6 +165,13 @@ type Object struct {
size int64
modTime time.Time
contentType string
// Metadata as pointers to strings as they often won't be present
contentDisposition *string // Content-Disposition: header
contentDispositionFilename *string // Filename retrieved from Content-Disposition: header
cacheControl *string // Cache-Control: header
contentEncoding *string // Content-Encoding: header
contentLanguage *string // Content-Language: header
}
// statusError returns an error if the res contained an error
@@ -277,6 +323,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
ci: ci,
}
f.features = (&fs.Features{
ReadMetadata: true,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
@@ -429,6 +476,29 @@ func parse(base *url.URL, in io.Reader) (names []string, err error) {
return names, nil
}
// parseFilename extracts the filename from a Content-Disposition header
func parseFilename(contentDisposition string) (string, error) {
// Normalize the contentDisposition to canonical MIME format
mediaType, params, err := mime.ParseMediaType(contentDisposition)
if err != nil {
return "", fmt.Errorf("failed to parse contentDisposition: %v", err)
}
// Check if the contentDisposition is an attachment
if strings.ToLower(mediaType) != "attachment" {
return "", fmt.Errorf("not an attachment: %s", mediaType)
}
// Extract the filename from the parameters
filename, ok := params["filename"]
if !ok {
return "", fmt.Errorf("filename not found in contentDisposition")
}
// Decode filename if it contains special encoding
return textproto.TrimString(filename), nil
}
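A small package-internal sketch (not part of the commit) of what the helper above is expected to return for a typical header value:

// Illustration only: a well-formed attachment header yields the bare filename.
func exampleParseFilename() {
	name, err := parseFilename(`attachment; filename="five.txt.gz"`)
	if err != nil {
		fmt.Println("no usable filename:", err)
		return
	}
	fmt.Println(name) // five.txt.gz
}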
// Adds the configured headers to the request if any
func addHeaders(req *http.Request, opt *Options) {
for i := 0; i < len(opt.Headers); i += 2 {
@@ -577,6 +647,9 @@ func (o *Object) String() string {
// Remote the name of the remote HTTP file, relative to the fs root
func (o *Object) Remote() string {
if o.contentDispositionFilename != nil {
return *o.contentDispositionFilename
}
return o.remote
}
@@ -634,6 +707,29 @@ func (o *Object) decodeMetadata(ctx context.Context, res *http.Response) error {
o.modTime = t
o.contentType = res.Header.Get("Content-Type")
o.size = rest.ParseSizeFromHeaders(res.Header)
contentDisposition := res.Header.Get("Content-Disposition")
if contentDisposition != "" {
o.contentDisposition = &contentDisposition
}
if o.contentDisposition != nil {
var filename string
filename, err = parseFilename(*o.contentDisposition)
if err == nil && filename != "" {
o.contentDispositionFilename = &filename
}
}
cacheControl := res.Header.Get("Cache-Control")
if cacheControl != "" {
o.cacheControl = &cacheControl
}
contentEncoding := res.Header.Get("Content-Encoding")
if contentEncoding != "" {
o.contentEncoding = &contentEncoding
}
contentLanguage := res.Header.Get("Content-Language")
if contentLanguage != "" {
o.contentLanguage = &contentLanguage
}
// If NoSlash is set then check ContentType to see if it is a directory
if o.fs.opt.NoSlash {
@@ -722,11 +818,13 @@ var commandHelp = []fs.CommandHelp{{
Long: `This set command can be used to update the config parameters
for a running http backend.
- Usage Examples:
Usage examples:
` + "```console" + `
rclone backend set remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=remote: -o url=https://example.com
` + "```" + `
The option keys are named as they are in the config file.
@@ -734,8 +832,7 @@ This rebuilds the connection to the http backend when it is called with
the new parameters. Only new parameters need be passed as the values
will default to those currently in use.
- It doesn't return anything.
- `,
It doesn't return anything.`,
}}
// Command the backend to run a named command
@@ -771,6 +868,30 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
}
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
metadata = make(fs.Metadata, 6)
if o.contentType != "" {
metadata["content-type"] = o.contentType
}
// Set system metadata
setMetadata := func(k string, v *string) {
if v == nil || *v == "" {
return
}
metadata[k] = *v
}
setMetadata("content-disposition", o.contentDisposition)
setMetadata("content-disposition-filename", o.contentDispositionFilename)
setMetadata("cache-control", o.cacheControl)
setMetadata("content-language", o.contentLanguage)
setMetadata("content-encoding", o.contentEncoding)
return metadata, nil
}
// Check the interfaces are satisfied
var (
@@ -778,4 +899,5 @@ var (
_ fs.Object = &Object{}
_ fs.MimeTyper = &Object{}
_ fs.Commander = &Fs{}
_ fs.Metadataer = &Object{}
)
) )

View File

@@ -60,6 +60,17 @@ func prepareServer(t *testing.T) configmap.Simple {
what := fmt.Sprintf("%s %s: Header ", r.Method, r.URL.Path)
assert.Equal(t, headers[1], r.Header.Get(headers[0]), what+headers[0])
assert.Equal(t, headers[3], r.Header.Get(headers[2]), what+headers[2])
// Set the content disposition header for the fifth file
// later we will check if it is set using the metadata method
if r.URL.Path == "/five.txt.gz" {
w.Header().Set("Content-Disposition", "attachment; filename=\"five.txt.gz\"")
w.Header().Set("Content-Type", "text/plain; charset=utf-8")
w.Header().Set("Cache-Control", "no-cache")
w.Header().Set("Content-Language", "en-US")
w.Header().Set("Content-Encoding", "gzip")
}
fileServer.ServeHTTP(w, r)
})
@@ -102,27 +113,33 @@ func testListRoot(t *testing.T, f fs.Fs, noSlash bool) {
sort.Sort(entries)
- require.Equal(t, 4, len(entries))
require.Equal(t, 5, len(entries))
e := entries[0]
- assert.Equal(t, "four", e.Remote())
assert.Equal(t, "five.txt.gz", e.Remote())
assert.Equal(t, int64(-1), e.Size())
- _, ok := e.(fs.Directory)
_, ok := e.(fs.Object)
assert.True(t, ok)
e = entries[1]
assert.Equal(t, "four", e.Remote())
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
e = entries[2]
assert.Equal(t, "one%.txt", e.Remote())
assert.Equal(t, int64(5+lineEndSize), e.Size())
_, ok = e.(*Object)
assert.True(t, ok)
- e = entries[2]
e = entries[3]
assert.Equal(t, "three", e.Remote())
assert.Equal(t, int64(-1), e.Size())
_, ok = e.(fs.Directory)
assert.True(t, ok)
- e = entries[3]
e = entries[4]
assert.Equal(t, "two.html", e.Remote())
if noSlash {
assert.Equal(t, int64(-1), e.Size())
@@ -218,6 +235,23 @@ func TestNewObjectWithLeadingSlash(t *testing.T) {
assert.Equal(t, fs.ErrorObjectNotFound, err)
}
func TestNewObjectWithMetadata(t *testing.T) {
f := prepare(t)
o, err := f.NewObject(context.Background(), "/five.txt.gz")
require.NoError(t, err)
assert.Equal(t, "five.txt.gz", o.Remote())
ho, ok := o.(*Object)
assert.True(t, ok)
metadata, err := ho.Metadata(context.Background())
require.NoError(t, err)
assert.Equal(t, "text/plain; charset=utf-8", metadata["content-type"])
assert.Equal(t, "attachment; filename=\"five.txt.gz\"", metadata["content-disposition"])
assert.Equal(t, "five.txt.gz", metadata["content-disposition-filename"])
assert.Equal(t, "no-cache", metadata["cache-control"])
assert.Equal(t, "en-US", metadata["content-language"])
assert.Equal(t, "gzip", metadata["content-encoding"])
}
func TestOpen(t *testing.T) {
m := prepareServer(t)

Binary file not shown.

View File

@@ -1070,12 +1070,11 @@ func (f *Fs) Hashes() hash.Set {
var commandHelp = []fs.CommandHelp{
{
Name: "noop",
- Short: "A null operation for testing backend commands",
Short: "A null operation for testing backend commands.",
- Long: `This is a test command which has some options
- you can try to change the output.`,
Long: `This is a test command which has some options you can try to change the output.`,
Opts: map[string]string{
- "echo": "echo the input arguments",
- "error": "return an error based on option value",
"echo": "Echo the input arguments.",
"error": "Return an error based on option value.",
},
},
}

View File

@@ -18,6 +18,7 @@ Improvements:
import (
"context"
"crypto/tls"
"encoding/base64"
"errors"
"fmt"
"io"
@@ -47,6 +48,9 @@ const (
maxSleep = 2 * time.Second
eventWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
sessionIDConfigKey = "session_id"
masterKeyConfigKey = "master_key"
)
var (
@@ -70,6 +74,24 @@ func init() {
Help: "Password.",
Required: true,
IsPassword: true,
}, {
Name: "2fa",
Help: `The 2FA code of your MEGA account if the account is set up with one`,
Required: false,
}, {
Name: sessionIDConfigKey,
Help: "Session (internal use only)",
Required: false,
Advanced: true,
Sensitive: true,
Hide: fs.OptionHideBoth,
}, {
Name: masterKeyConfigKey,
Help: "Master key (internal use only)",
Required: false,
Advanced: true,
Sensitive: true,
Hide: fs.OptionHideBoth,
}, {
Name: "debug",
Help: `Output more debug from Mega.
@@ -113,6 +135,9 @@ Enabling it will increase CPU usage and add network overhead.`,
type Options struct {
User string `config:"user"`
Pass string `config:"pass"`
TwoFA string `config:"2fa"`
SessionID string `config:"session_id"`
MasterKey string `config:"master_key"`
Debug bool `config:"debug"`
HardDelete bool `config:"hard_delete"`
UseHTTPS bool `config:"use_https"`
@@ -209,6 +234,19 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
}
ci := fs.GetConfig(ctx)
// Create Fs
root = parsePath(root)
f := &Fs{
name: name,
root: root,
opt: *opt,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
}
f.features = (&fs.Features{
DuplicateFiles: true,
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
// cache *mega.Mega on username so we can reuse and share
// them between remotes. They are expensive to make as they
// contain all the objects and sharing the objects makes the
@@ -248,25 +286,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
})
}
- err := srv.Login(opt.User, opt.Pass)
- if err != nil {
- return nil, fmt.Errorf("couldn't login: %w", err)
- }
if opt.SessionID == "" {
fs.Debugf(f, "Using username and password to initialize the Mega API")
err := srv.MultiFactorLogin(opt.User, opt.Pass, opt.TwoFA)
if err != nil {
return nil, fmt.Errorf("couldn't login: %w", err)
}
megaCache[opt.User] = srv
m.Set(sessionIDConfigKey, srv.GetSessionID())
encodedMasterKey := base64.StdEncoding.EncodeToString(srv.GetMasterKey())
m.Set(masterKeyConfigKey, encodedMasterKey)
} else {
fs.Debugf(f, "Using previously stored session ID and master key to initialize the Mega API")
decodedMasterKey, err := base64.StdEncoding.DecodeString(opt.MasterKey)
if err != nil {
return nil, fmt.Errorf("couldn't decode master key: %w", err)
}
err = srv.LoginWithKeys(opt.SessionID, decodedMasterKey)
if err != nil {
fs.Debugf(f, "login with previous auth keys failed: %v", err)
}
}
}
- megaCache[opt.User] = srv
}
f.srv = srv
- root = parsePath(root)
- f := &Fs{
- name: name,
- root: root,
- opt: *opt,
- srv: srv,
- pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
- }
- f.features = (&fs.Features{
- DuplicateFiles: true,
- CanHaveEmptyDirectories: true,
- }).Fill(ctx, f)
// Find the root node and check if it is a file or not
_, err = f.findRoot(ctx, false)
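A standalone sketch of the encoding round trip this change performs when persisting the master key in the config; the key bytes below are placeholders, and the assumption is only that go-mega's GetMasterKey returns raw bytes as shown in the diff.

// Illustration only: the master key is base64 encoded before being written to
// the config and decoded again before LoginWithKeys.
package main

import (
	"encoding/base64"
	"fmt"
)

func main() {
	masterKey := []byte{0x01, 0x02, 0x03, 0x04} // placeholder bytes, not a real key
	stored := base64.StdEncoding.EncodeToString(masterKey)
	decoded, err := base64.StdEncoding.DecodeString(stored)
	if err != nil {
		panic(err)
	}
	fmt.Println(stored, decoded) // AQIDBA== [1 2 3 4]
}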

View File

@@ -87,7 +87,7 @@ Please choose the 'y' option to set your own password then enter your secret.`,
var commandHelp = []fs.CommandHelp{{
Name: "du",
- Short: "Return disk usage information for a specified directory",
Short: "Return disk usage information for a specified directory.",
Long: `The usage information returned, includes the targeted directory as well as all
files stored in any sub-directories that may exist.`,
}, {
@@ -96,7 +96,12 @@ files stored in any sub-directories that may exist.`,
Long: `The desired path location (including applicable sub-directories) ending in
the object that will be the target of the symlink (for example, /links/mylink).
Include the file extension for the object, if applicable.
- ` + "`rclone backend symlink <src> <path>`",
Usage example:
` + "```console" + `
rclone backend symlink <src> <path>
` + "```",
},
}

View File

@@ -30,20 +30,25 @@ const (
var commandHelp = []fs.CommandHelp{{ var commandHelp = []fs.CommandHelp{{
Name: operationRename, Name: operationRename,
Short: "change the name of an object", Short: "change the name of an object.",
Long: `This command can be used to rename a object. Long: `This command can be used to rename a object.
Usage Examples: Usage example:
rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name ` + "```console" + `
`, rclone backend rename oos:bucket relative-object-path-under-bucket object-new-name
` + "```",
Opts: nil, Opts: nil,
}, { }, {
Name: operationListMultiPart, Name: operationListMultiPart,
Short: "List the unfinished multipart uploads", Short: "List the unfinished multipart uploads.",
Long: `This command lists the unfinished multipart uploads in JSON format. Long: `This command lists the unfinished multipart uploads in JSON format.
rclone backend list-multipart-uploads oos:bucket/path/to/object Usage example:
` + "```console" + `
rclone backend list-multipart-uploads oos:bucket/path/to/object
` + "```" + `
It returns a dictionary of buckets with values as lists of unfinished It returns a dictionary of buckets with values as lists of unfinished
multipart uploads. multipart uploads.
@@ -51,70 +56,82 @@ multipart uploads.
You can call it with no bucket in which case it lists all bucket, with You can call it with no bucket in which case it lists all bucket, with
a bucket or with a bucket and path. a bucket or with a bucket and path.
{ ` + "```json" + `
"test-bucket": [ {
{ "test-bucket": [
"namespace": "test-namespace", {
"bucket": "test-bucket", "namespace": "test-namespace",
"object": "600m.bin", "bucket": "test-bucket",
"uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8", "object": "600m.bin",
"timeCreated": "2022-07-29T06:21:16.595Z", "uploadId": "51dd8114-52a4-b2f2-c42f-5291f05eb3c8",
"storageTier": "Standard" "timeCreated": "2022-07-29T06:21:16.595Z",
} "storageTier": "Standard"
] }
`, ]
}`,
}, { }, {
Name: operationCleanup, Name: operationCleanup,
Short: "Remove unfinished multipart uploads.", Short: "Remove unfinished multipart uploads.",
Long: `This command removes unfinished multipart uploads of age greater than Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours. max-age which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what Note that you can use --interactive/-i or --dry-run with this command to see
it would do. what it would do.
rclone backend cleanup oos:bucket/path/to/object Usage examples:
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. ` + "```console" + `
`, rclone backend cleanup oos:bucket/path/to/object
rclone backend cleanup -o max-age=7w oos:bucket/path/to/object
` + "```" + `
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.`,
Opts: map[string]string{ Opts: map[string]string{
"max-age": "Max age of upload to delete", "max-age": "Max age of upload to delete.",
}, },
}, { }, {
Name: operationRestore, Name: operationRestore,
Short: "Restore objects from Archive to Standard storage", Short: "Restore objects from Archive to Standard storage.",
Long: `This command can be used to restore one or more objects from Archive to Standard storage. Long: `This command can be used to restore one or more objects from Archive to
Standard storage.
Usage Examples: Usage examples:
rclone backend restore oos:bucket/path/to/directory -o hours=HOURS ` + "```console" + `
rclone backend restore oos:bucket -o hours=HOURS rclone backend restore oos:bucket/path/to/directory -o hours=HOURS
rclone backend restore oos:bucket -o hours=HOURS
` + "```" + `
This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags This flag also obeys the filters. Test first with --interactive/-i or --dry-run flags
rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72 ` + "```console" + `
rclone --interactive backend restore --include "*.txt" oos:bucket/path -o hours=72
` + "```" + `
All the objects shown will be marked for restore, then All the objects shown will be marked for restore, then:
rclone backend restore --include "*.txt" oos:bucket/path -o hours=72 ` + "```console" + `
rclone backend restore --include "*.txt" oos:bucket/path -o hours=72
` + "```" + `
It returns a list of status dictionaries with Object Name and Status It returns a list of status dictionaries with Object Name and Status keys.
keys. The Status will be "RESTORED"" if it was successful or an error message The Status will be "RESTORED"" if it was successful or an error message if not.
if not.
[ ` + "```json" + `
{ [
"Object": "test.txt" {
"Status": "RESTORED", "Object": "test.txt"
}, "Status": "RESTORED",
{ },
"Object": "test/file4.txt" {
"Status": "RESTORED", "Object": "test/file4.txt"
} "Status": "RESTORED",
] }
`, ]
` + "```",
Opts: map[string]string{ Opts: map[string]string{
"hours": "The number of hours for which this object will be restored. Default is 24 hrs.", "hours": `The number of hours for which this object will be restored.
Default is 24 hrs.`,
}, },
}, },
} }
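The `-o` options above reach the backend as plain strings in the command's opt map. Purely as an illustration (this is not the oos backend's actual code, and `fs.ParseDuration` is assumed to accept rclone-style suffixes such as 7d and 7w), a handler might turn `max-age` into a duration like this:

```go
package main

import (
	"fmt"
	"time"

	"github.com/rclone/rclone/fs"
)

// maxAgeFromOpt is an illustrative helper: it turns the -o max-age=...
// string into a duration, defaulting to 24 hours when unset.
func maxAgeFromOpt(opt map[string]string) (time.Duration, error) {
	s := opt["max-age"]
	if s == "" {
		return 24 * time.Hour, nil
	}
	d, err := fs.ParseDuration(s) // rclone-style durations: 2h, 7d, 7w ...
	if err != nil {
		return 0, fmt.Errorf("bad max-age %q: %w", s, err)
	}
	return d, nil
}

func main() {
	d, err := maxAgeFromOpt(map[string]string{"max-age": "7w"})
	fmt.Println(d, err)
}
```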


@@ -75,7 +75,7 @@ func TestLinkValid(t *testing.T) {
				Expire: Time(time.Now().Add(time.Hour)),
			},
			expected: true,
			desc:     "should fallback to Expire field when URL expire parameter is unparsable",
		},
		{
			name: "invalid when both URL expire and Expire field are expired",


@@ -1678,39 +1678,43 @@ func (f *Fs) decompressDir(ctx context.Context, filename, id, password string, s
var commandHelp = []fs.CommandHelp{{
	Name:  "addurl",
	Short: "Add offline download task for url.",
	Long: `This command adds offline download task for url.

Usage example:

` + "```console" + `
rclone backend addurl pikpak:dirpath url
` + "```" + `

Downloads will be stored in 'dirpath'. If 'dirpath' is invalid,
download will fallback to default 'My Pack' folder.`,
}, {
	Name:  "decompress",
	Short: "Request decompress of a file/files in a folder.",
	Long: `This command requests decompress of file/files in a folder.

Usage examples:

` + "```console" + `
rclone backend decompress pikpak:dirpath {filename} -o password=password
rclone backend decompress pikpak:dirpath {filename} -o delete-src-file
` + "```" + `

An optional argument 'filename' can be specified for a file located in
'pikpak:dirpath'. You may want to pass '-o password=password' for
password-protected files. Also, pass '-o delete-src-file' to delete
source files after decompression has finished.

Result:

` + "```json" + `
{
  "Decompressed": 17,
  "SourceDeleted": 0,
  "Errors": 0
}
` + "```",
}}

// Command the backend to run a named command


@@ -1,4 +1,4 @@
# Adding a new s3 provider

It is quite easy to add a new S3 provider to rclone.

@@ -12,179 +12,202 @@ All tags can be found in `backend/s3/providers.go` Provider Struct.

Looking through a few of the yaml files as examples should make things
clear. `AWS.yaml` has the most config and is a good one to copy from.

## YAML

In `backend/s3/provider/YourProvider.yaml`

- name
- description
  - More like the full name, often "YourProvider + Object Storage"
- [Region]
  - Any regions your provider supports or the defaults (use `region: {}` for this)
  - Example from AWS.yaml:

```yaml
region:
  us-east-1: |-
    The default endpoint - a good choice if you are unsure.
    US Region, Northern Virginia, or Pacific Northwest.
    Leave location constraint empty.
```

  - The defaults (as seen in Rclone.yaml):

```yaml
region:
  "": |-
    Use this if unsure.
    Will use v4 signatures and an empty region.
  other-v2-signature: |-
    Use this only if v4 signatures don't work.
    E.g. pre Jewel/v10 CEPH.
```

- [Endpoint]
  - Any endpoints your provider supports
  - Example from Mega.yaml:

```yaml
endpoint:
  s3.eu-central-1.s4.mega.io: Mega S4 eu-central-1 (Amsterdam)
```

- [Location Constraint]
  - The Location Constraint of your remote, often the same as region.
  - Example from AWS.yaml:

```yaml
location_constraint:
  "": Empty for US Region, Northern Virginia, or Pacific Northwest
  us-east-2: US East (Ohio) Region
```

- [ACL]
  - Identical across *most* providers. Select the default with `acl: {}`
  - Example from AWS.yaml:

```yaml
acl:
  private: |-
    Owner gets FULL_CONTROL.
    No one else has access rights (default).
  public-read: |-
    Owner gets FULL_CONTROL.
    The AllUsers group gets READ access.
  public-read-write: |-
    Owner gets FULL_CONTROL.
    The AllUsers group gets READ and WRITE access.
    Granting this on a bucket is generally not recommended.
  authenticated-read: |-
    Owner gets FULL_CONTROL.
    The AuthenticatedUsers group gets READ access.
  bucket-owner-read: |-
    Object owner gets FULL_CONTROL.
    Bucket owner gets READ access.
    If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
  bucket-owner-full-control: |-
    Both the object owner and the bucket owner get FULL_CONTROL over the object.
    If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
```

- [Storage Class]
  - Identical across *most* providers.
  - Defaults from AWS.yaml:

```yaml
storage_class:
  "": Default
  STANDARD: Standard storage class
  REDUCED_REDUNDANCY: Reduced redundancy storage class
  STANDARD_IA: Standard Infrequent Access storage class
  ONEZONE_IA: One Zone Infrequent Access storage class
  GLACIER: Glacier Flexible Retrieval storage class
  DEEP_ARCHIVE: Glacier Deep Archive storage class
  INTELLIGENT_TIERING: Intelligent-Tiering storage class
  GLACIER_IR: Glacier Instant Retrieval storage class
```

- [Server Side Encryption]
  - Not common, identical across *most* providers.
  - Defaults from AWS.yaml:

```yaml
server_side_encryption:
  "": None
  AES256: AES256
  aws:kms: aws:kms
```

- [Advanced Options]
  - All advanced options are Boolean - if true the configurator asks about that
    value, if not it doesn't:

```go
BucketACL             bool `yaml:"bucket_acl,omitempty"`
DirectoryBucket       bool `yaml:"directory_bucket,omitempty"`
LeavePartsOnError     bool `yaml:"leave_parts_on_error,omitempty"`
RequesterPays         bool `yaml:"requester_pays,omitempty"`
SSECustomerAlgorithm  bool `yaml:"sse_customer_algorithm,omitempty"`
SSECustomerKey        bool `yaml:"sse_customer_key,omitempty"`
SSECustomerKeyBase64  bool `yaml:"sse_customer_key_base64,omitempty"`
SSECustomerKeyMd5     bool `yaml:"sse_customer_key_md5,omitempty"`
SSEKmsKeyID           bool `yaml:"sse_kms_key_id,omitempty"`
STSEndpoint           bool `yaml:"sts_endpoint,omitempty"`
UseAccelerateEndpoint bool `yaml:"use_accelerate_endpoint,omitempty"`
```

  - Example from AWS.yaml:

```yaml
bucket_acl: true
directory_bucket: true
leave_parts_on_error: true
requester_pays: true
sse_customer_algorithm: true
sse_customer_key: true
sse_customer_key_base64: true
sse_customer_key_md5: true
sse_kms_key_id: true
sts_endpoint: true
use_accelerate_endpoint: true
```

- Quirks
  - Quirks are discovered through documentation and running the tests as seen below.
  - Most quirks are *bool so as to have 3 values: `true`, `false` and `don't care`
    (there is a short Go sketch of this pattern just after this list).

```go
type Quirks struct {
	ListVersion           *int   `yaml:"list_version,omitempty"` // 1 or 2
	ForcePathStyle        *bool  `yaml:"force_path_style,omitempty"` // true = path-style
	ListURLEncode         *bool  `yaml:"list_url_encode,omitempty"`
	UseMultipartEtag      *bool  `yaml:"use_multipart_etag,omitempty"`
	UseAlreadyExists      *bool  `yaml:"use_already_exists,omitempty"`
	UseAcceptEncodingGzip *bool  `yaml:"use_accept_encoding_gzip,omitempty"`
	MightGzip             *bool  `yaml:"might_gzip,omitempty"`
	UseMultipartUploads   *bool  `yaml:"use_multipart_uploads,omitempty"`
	UseUnsignedPayload    *bool  `yaml:"use_unsigned_payload,omitempty"`
	UseXID                *bool  `yaml:"use_x_id,omitempty"`
	SignAcceptEncoding    *bool  `yaml:"sign_accept_encoding,omitempty"`
	CopyCutoff            *int64 `yaml:"copy_cutoff,omitempty"`
	MaxUploadParts        *int   `yaml:"max_upload_parts,omitempty"`
	MinChunkSize          *int64 `yaml:"min_chunk_size,omitempty"`
}
```

  - Example from AWS.yaml:

```yaml
quirks:
  might_gzip: false # Never auto gzips objects
  use_unsigned_payload: false # AWS has trailer support
```
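
To make the three-valued behaviour concrete, here is a minimal standalone Go sketch (not the real `setQuirks` helper) of how a `*bool` quirk falls back to rclone's default when a provider leaves it unset:

```go
package main

import "fmt"

// applyQuirk mirrors the idea behind the set() helper used in setQuirks
// (sketch only, not the real code): nil means "don't care", so the
// rclone default wins; otherwise the provider's explicit value is used.
func applyQuirk(def bool, quirk *bool) bool {
	if quirk == nil {
		return def
	}
	return *quirk
}

func main() {
	no := false
	fmt.Println(applyQuirk(true, nil)) // true  - provider didn't say
	fmt.Println(applyQuirk(true, &no)) // false - provider opted out
}
```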
Note that if you omit a section, eg `region`, then the user won't be
asked that question, and if you add an empty section, e.g. `region: {}`,
then the defaults from `Other.yaml` will be used.

## DOCS

- `docs/content/s3.md`
  - Add the provider at the top of the page.
  - Add a section about the provider linked from there.
  - Make sure this is in alphabetical order in the `Providers` section.
  - Add a transcript of a trial `rclone config` session
  - Edit the transcript to remove things which might change in subsequent versions
  - **Do not** alter or add to the autogenerated parts of `s3.md`
  - Rule of thumb: don't edit anything not mentioned above.
  - **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
  - This will make autogenerated changes!
- `README.md` - this is the home page in github
  - Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
  - Add the provider and a link to the section you wrote in `docs/contents/s3.md`
- Once you've written the docs, run `make serve` and check they look OK
  in the web browser and the links (internal and external) all work.

## TESTS

Once you've written the code, test `rclone config` works to your
satisfaction and looks correct, and check the integration tests work


@@ -137,3 +137,4 @@ use_accelerate_endpoint: true
quirks:
  might_gzip: false # Never auto gzips objects
  use_unsigned_payload: false # AWS has trailer support which means it adds checksums in the trailer without seeking
  use_data_integrity_protections: true


@@ -20,20 +20,21 @@ var NewYamlMap = orderedmap.New[string, string]
// Quirks defines all the S3 provider quirks
type Quirks struct {
	ListVersion                 *int   `yaml:"list_version,omitempty"` // 1 or 2
	ForcePathStyle              *bool  `yaml:"force_path_style,omitempty"` // true = path-style
	ListURLEncode               *bool  `yaml:"list_url_encode,omitempty"`
	UseMultipartEtag            *bool  `yaml:"use_multipart_etag,omitempty"`
	UseAlreadyExists            *bool  `yaml:"use_already_exists,omitempty"`
	UseAcceptEncodingGzip       *bool  `yaml:"use_accept_encoding_gzip,omitempty"`
	UseDataIntegrityProtections *bool  `yaml:"use_data_integrity_protections,omitempty"`
	MightGzip                   *bool  `yaml:"might_gzip,omitempty"`
	UseMultipartUploads         *bool  `yaml:"use_multipart_uploads,omitempty"`
	UseUnsignedPayload          *bool  `yaml:"use_unsigned_payload,omitempty"`
	UseXID                      *bool  `yaml:"use_x_id,omitempty"`
	SignAcceptEncoding          *bool  `yaml:"sign_accept_encoding,omitempty"`
	CopyCutoff                  *int64 `yaml:"copy_cutoff,omitempty"`
	MaxUploadParts              *int   `yaml:"max_upload_parts,omitempty"`
	MinChunkSize                *int64 `yaml:"min_chunk_size,omitempty"`
}

// Provider defines the configurable data in each provider.yaml


@@ -39,6 +39,9 @@ import (
smithyhttp "github.com/aws/smithy-go/transport/http" smithyhttp "github.com/aws/smithy-go/transport/http"
"github.com/ncw/swift/v2" "github.com/ncw/swift/v2"
"golang.org/x/net/http/httpguts"
"golang.org/x/sync/errgroup"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunksize" "github.com/rclone/rclone/fs/chunksize"
@@ -59,8 +62,6 @@ import (
"github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
"github.com/rclone/rclone/lib/version" "github.com/rclone/rclone/lib/version"
"golang.org/x/net/http/httpguts"
"golang.org/x/sync/errgroup"
) )
// Register with Fs // Register with Fs
@@ -574,6 +575,13 @@ circumstances or for testing.
`,
			Default:  false,
			Advanced: true,
		}, {
			Name: "use_data_integrity_protections",
			Help: `If true use AWS S3 data integrity protections.

See [AWS Docs on Data Integrity Protections](https://docs.aws.amazon.com/sdkref/latest/guide/feature-dataintegrity.html)`,
			Default:  fs.Tristate{},
			Advanced: true,
		}, {
			Name: "versions",
			Help: "Include old versions in directory listings.",
@@ -892,67 +900,68 @@ var systemMetadataInfo = map[string]fs.MetadataHelp{
// Options defines the configuration for this backend
type Options struct {
	Provider                    string               `config:"provider"`
	EnvAuth                     bool                 `config:"env_auth"`
	AccessKeyID                 string               `config:"access_key_id"`
	SecretAccessKey             string               `config:"secret_access_key"`
	Region                      string               `config:"region"`
	Endpoint                    string               `config:"endpoint"`
	STSEndpoint                 string               `config:"sts_endpoint"`
	UseDualStack                bool                 `config:"use_dual_stack"`
	LocationConstraint          string               `config:"location_constraint"`
	ACL                         string               `config:"acl"`
	BucketACL                   string               `config:"bucket_acl"`
	RequesterPays               bool                 `config:"requester_pays"`
	ServerSideEncryption        string               `config:"server_side_encryption"`
	SSEKMSKeyID                 string               `config:"sse_kms_key_id"`
	SSECustomerAlgorithm        string               `config:"sse_customer_algorithm"`
	SSECustomerKey              string               `config:"sse_customer_key"`
	SSECustomerKeyBase64        string               `config:"sse_customer_key_base64"`
	SSECustomerKeyMD5           string               `config:"sse_customer_key_md5"`
	StorageClass                string               `config:"storage_class"`
	UploadCutoff                fs.SizeSuffix        `config:"upload_cutoff"`
	CopyCutoff                  fs.SizeSuffix        `config:"copy_cutoff"`
	ChunkSize                   fs.SizeSuffix        `config:"chunk_size"`
	MaxUploadParts              int                  `config:"max_upload_parts"`
	DisableChecksum             bool                 `config:"disable_checksum"`
	SharedCredentialsFile       string               `config:"shared_credentials_file"`
	Profile                     string               `config:"profile"`
	SessionToken                string               `config:"session_token"`
	UploadConcurrency           int                  `config:"upload_concurrency"`
	ForcePathStyle              bool                 `config:"force_path_style"`
	V2Auth                      bool                 `config:"v2_auth"`
	UseAccelerateEndpoint       bool                 `config:"use_accelerate_endpoint"`
	UseARNRegion                bool                 `config:"use_arn_region"`
	LeavePartsOnError           bool                 `config:"leave_parts_on_error"`
	ListChunk                   int32                `config:"list_chunk"`
	ListVersion                 int                  `config:"list_version"`
	ListURLEncode               fs.Tristate          `config:"list_url_encode"`
	NoCheckBucket               bool                 `config:"no_check_bucket"`
	NoHead                      bool                 `config:"no_head"`
	NoHeadObject                bool                 `config:"no_head_object"`
	Enc                         encoder.MultiEncoder `config:"encoding"`
	DisableHTTP2                bool                 `config:"disable_http2"`
	DownloadURL                 string               `config:"download_url"`
	DirectoryMarkers            bool                 `config:"directory_markers"`
	UseMultipartEtag            fs.Tristate          `config:"use_multipart_etag"`
	UsePresignedRequest         bool                 `config:"use_presigned_request"`
	UseDataIntegrityProtections fs.Tristate          `config:"use_data_integrity_protections"`
	Versions                    bool                 `config:"versions"`
	VersionAt                   fs.Time              `config:"version_at"`
	VersionDeleted              bool                 `config:"version_deleted"`
	Decompress                  bool                 `config:"decompress"`
	MightGzip                   fs.Tristate          `config:"might_gzip"`
	UseAcceptEncodingGzip       fs.Tristate          `config:"use_accept_encoding_gzip"`
	NoSystemMetadata            bool                 `config:"no_system_metadata"`
	UseAlreadyExists            fs.Tristate          `config:"use_already_exists"`
	UseMultipartUploads         fs.Tristate          `config:"use_multipart_uploads"`
	UseUnsignedPayload          fs.Tristate          `config:"use_unsigned_payload"`
	SDKLogMode                  sdkLogMode           `config:"sdk_log_mode"`
	DirectoryBucket             bool                 `config:"directory_bucket"`
	IBMAPIKey                   string               `config:"ibm_api_key"`
	IBMInstanceID               string               `config:"ibm_resource_instance_id"`
	UseXID                      fs.Tristate          `config:"use_x_id"`
	SignAcceptEncoding          fs.Tristate          `config:"sign_accept_encoding"`
}

// Fs represents a remote s3 server
} else { } else {
s3Opt.EndpointOptions.UseDualStackEndpoint = aws.DualStackEndpointStateDisabled s3Opt.EndpointOptions.UseDualStackEndpoint = aws.DualStackEndpointStateDisabled
} }
if !opt.UseDataIntegrityProtections.Value {
s3Opt.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
s3Opt.ResponseChecksumValidation = aws.ResponseChecksumValidationWhenRequired
}
// FIXME not ported from SDK v1 - not sure what this does // FIXME not ported from SDK v1 - not sure what this does
// s3Opt.UsEast1RegionalEndpoint = endpoints.RegionalS3UsEast1Endpoint // s3Opt.UsEast1RegionalEndpoint = endpoints.RegionalS3UsEast1Endpoint
}) })
@@ -1497,6 +1510,7 @@ func setQuirks(opt *Options, provider *Provider) {
set(&opt.ListURLEncode, true, provider.Quirks.ListURLEncode) set(&opt.ListURLEncode, true, provider.Quirks.ListURLEncode)
set(&opt.UseMultipartEtag, true, provider.Quirks.UseMultipartEtag) set(&opt.UseMultipartEtag, true, provider.Quirks.UseMultipartEtag)
set(&opt.UseAcceptEncodingGzip, true, provider.Quirks.UseAcceptEncodingGzip) set(&opt.UseAcceptEncodingGzip, true, provider.Quirks.UseAcceptEncodingGzip)
set(&opt.UseDataIntegrityProtections, false, provider.Quirks.UseDataIntegrityProtections)
set(&opt.MightGzip, true, provider.Quirks.MightGzip) set(&opt.MightGzip, true, provider.Quirks.MightGzip)
set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists) set(&opt.UseAlreadyExists, true, provider.Quirks.UseAlreadyExists)
set(&opt.UseMultipartUploads, true, provider.Quirks.UseMultipartUploads) set(&opt.UseMultipartUploads, true, provider.Quirks.UseMultipartUploads)
@@ -1634,11 +1648,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
		newRoot, leaf := path.Split(oldRoot)
		f.setRoot(newRoot)
		_, err := f.NewObject(ctx, leaf)
		if errors.Is(err, fs.ErrorObjectNotFound) {
			// File doesn't exist or is a directory so return old f
			f.setRoot(oldRoot)
			return f, nil
		}
		if err != nil {
			return nil, err
		}
		// return an error with an fs which points to the parent
		return f, fs.ErrorIsFile
	}
@@ -2818,6 +2835,8 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
			SSECustomerKey:    req.SSECustomerKey,
			SSECustomerKeyMD5: req.SSECustomerKeyMD5,
			UploadId:          uid,
			IfMatch:           copyReq.IfMatch,
			IfNoneMatch:       copyReq.IfNoneMatch,
		})
		return f.shouldRetry(ctx, err)
	})
@@ -2852,13 +2871,20 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
		MetadataDirective: types.MetadataDirectiveCopy,
	}

	// Build upload options including headers and metadata
	ci := fs.GetConfig(ctx)
	uploadOptions := fs.MetadataAsOpenOptions(ctx)
	for _, option := range ci.UploadHeaders {
		uploadOptions = append(uploadOptions, option)
	}

	ui, err := srcObj.prepareUpload(ctx, src, uploadOptions, true)
	if err != nil {
		return nil, fmt.Errorf("failed to prepare upload: %w", err)
	}
	setFrom_s3CopyObjectInput_s3PutObjectInput(&req, ui.req)

	if ci.Metadata {
		req.MetadataDirective = types.MetadataDirectiveReplace
	}
@@ -2902,101 +2928,118 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var commandHelp = []fs.CommandHelp{{
	Name:  "restore",
	Short: "Restore objects from GLACIER or INTELLIGENT-TIERING archive tier.",
	Long: `This command can be used to restore one or more objects from GLACIER to normal
storage or from INTELLIGENT-TIERING Archive Access / Deep Archive Access tier
to the Frequent Access tier.

Usage examples:

` + "```console" + `
rclone backend restore s3:bucket/path/to/ --include /object -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket -o priority=PRIORITY -o lifetime=DAYS
rclone backend restore s3:bucket/path/to/directory -o priority=PRIORITY
` + "```" + `

This flag also obeys the filters. Test first with the --interactive/-i or --dry-run
flags.

` + "```console" + `
rclone --interactive backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
` + "```" + `

All the objects shown will be marked for restore, then:

` + "```console" + `
rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard -o lifetime=1
` + "```" + `

It returns a list of status dictionaries with Remote and Status
keys. The Status will be OK if it was successful or an error message
if not.

` + "```json" + `
[
  {
    "Status": "OK",
    "Remote": "test.txt"
  },
  {
    "Status": "OK",
    "Remote": "test/file4.txt"
  }
]
` + "```",
	Opts: map[string]string{
		"priority": "Priority of restore: Standard|Expedited|Bulk.",
		"lifetime": `Lifetime of the active copy in days, ignored for INTELLIGENT-TIERING
storage.`,
		"description": "The optional description for the job.",
	},
}, {
	Name:  "restore-status",
	Short: "Show the status for objects being restored from GLACIER or INTELLIGENT-TIERING.",
	Long: `This command can be used to show the status for objects being restored from
GLACIER to normal storage or from INTELLIGENT-TIERING Archive Access / Deep
Archive Access tier to the Frequent Access tier.

Usage examples:

` + "```console" + `
rclone backend restore-status s3:bucket/path/to/object
rclone backend restore-status s3:bucket/path/to/directory
rclone backend restore-status -o all s3:bucket/path/to/directory
` + "```" + `

This command does not obey the filters.

It returns a list of status dictionaries:

` + "```json" + `
[
  {
    "Remote": "file.txt",
    "VersionID": null,
    "RestoreStatus": {
      "IsRestoreInProgress": true,
      "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
    },
    "StorageClass": "GLACIER"
  },
  {
    "Remote": "test.pdf",
    "VersionID": null,
    "RestoreStatus": {
      "IsRestoreInProgress": false,
      "RestoreExpiryDate": "2023-09-06T12:29:19+01:00"
    },
    "StorageClass": "DEEP_ARCHIVE"
  },
  {
    "Remote": "test.gz",
    "VersionID": null,
    "RestoreStatus": {
      "IsRestoreInProgress": true,
      "RestoreExpiryDate": "null"
    },
    "StorageClass": "INTELLIGENT_TIERING"
  }
]
` + "```",
	Opts: map[string]string{
		"all": "If set then show all objects, not just ones with restore status.",
	},
}, {
	Name:  "list-multipart-uploads",
	Short: "List the unfinished multipart uploads.",
	Long: `This command lists the unfinished multipart uploads in JSON format.

Usage examples:

` + "```console" + `
rclone backend list-multipart s3:bucket/path/to/object
` + "```" + `

It returns a dictionary of buckets with values as lists of unfinished
multipart uploads.

@@ -3004,44 +3047,47 @@ multipart uploads.

You can call it with no bucket in which case it lists all buckets, with
a bucket or with a bucket and path.

` + "```json" + `
{
  "rclone": [
    {
      "Initiated": "2020-06-26T14:20:36Z",
      "Initiator": {
        "DisplayName": "XXX",
        "ID": "arn:aws:iam::XXX:user/XXX"
      },
      "Key": "KEY",
      "Owner": {
        "DisplayName": null,
        "ID": "XXX"
      },
      "StorageClass": "STANDARD",
      "UploadId": "XXX"
    }
  ],
  "rclone-1000files": [],
  "rclone-dst": []
}
` + "```",
}, {
	Name:  "cleanup",
	Short: "Remove unfinished multipart uploads.",
	Long: `This command removes unfinished multipart uploads of age greater than
max-age which defaults to 24 hours.

Note that you can use --interactive/-i or --dry-run with this command to see
what it would do.

Usage examples:

` + "```console" + `
rclone backend cleanup s3:bucket/path/to/object
rclone backend cleanup -o max-age=7w s3:bucket/path/to/object
` + "```" + `

Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.`,
	Opts: map[string]string{
		"max-age": "Max age of upload to delete.",
	},
}, {
	Name: "cleanup-hidden",

@@ -3049,11 +3095,14 @@ Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.

	Long: `This command removes any old hidden versions of files
on a versions enabled bucket.

Note that you can use --interactive/-i or --dry-run with this command to see
what it would do.

Usage example:

` + "```console" + `
rclone backend cleanup-hidden s3:bucket/path/to/dir
` + "```",
}, {
	Name:  "versioning",
	Short: "Set/get versioning support for a bucket.",

@@ -3061,24 +3110,29 @@ it would do.

passed and then returns the current versioning status for the bucket
supplied.

Usage examples:

` + "```console" + `
rclone backend versioning s3:bucket # read status only
rclone backend versioning s3:bucket Enabled
rclone backend versioning s3:bucket Suspended
` + "```" + `

It may return "Enabled", "Suspended" or "Unversioned". Note that once
versioning has been enabled the status can't be set back to "Unversioned".`,
}, {
	Name:  "set",
	Short: "Set command for updating the config parameters.",
	Long: `This set command can be used to update the config parameters
for a running s3 backend.

Usage examples:

` + "```console" + `
rclone backend set s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: [-o opt_name=opt_value] [-o opt_name2=opt_value2]
rclone rc backend/command command=set fs=s3: -o session_token=X -o access_key_id=X -o secret_access_key=X
` + "```" + `

The option keys are named as they are in the config file.

@@ -3086,8 +3140,7 @@ This rebuilds the connection to the s3 backend when it is called with

the new parameters. Only new parameters need be passed as the values
will default to those currently in use.

It doesn't return anything.`,
}}

// Command the backend to run a named command
@@ -4240,6 +4293,8 @@ func (w *s3ChunkWriter) Close(ctx context.Context) (err error) {
			SSECustomerKey:    w.multiPartUploadInput.SSECustomerKey,
			SSECustomerKeyMD5: w.multiPartUploadInput.SSECustomerKeyMD5,
			UploadId:          w.uploadID,
			IfMatch:           w.ui.req.IfMatch,
			IfNoneMatch:       w.ui.req.IfNoneMatch,
		})
		return w.f.shouldRetry(ctx, err)
	})
@@ -4511,6 +4566,10 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
			ui.req.ContentLanguage = aws.String(value)
		case "content-type":
			ui.req.ContentType = aws.String(value)
		case "if-match":
			ui.req.IfMatch = aws.String(value)
		case "if-none-match":
			ui.req.IfNoneMatch = aws.String(value)
		case "x-amz-tagging":
			ui.req.Tagging = aws.String(value)
		default:
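With the mapping above, a conditional header supplied as an upload header ends up on the PUT or CompleteMultipartUpload request. A hedged sketch of how such an option could be built programmatically, assuming rclone's `fs.HTTPOption` open option (on the command line the equivalent is `--header-upload`):

```go
package main

import (
	"fmt"

	"github.com/rclone/rclone/fs"
)

// conditionalUploadOptions builds open options carrying conditional
// headers for an upload; prepareUpload lower-cases the key and maps
// "if-none-match" onto the request's IfNoneMatch field.
func conditionalUploadOptions() []fs.OpenOption {
	return []fs.OpenOption{
		&fs.HTTPOption{Key: "If-None-Match", Value: "*"}, // only create, never overwrite
	}
}

func main() {
	for _, o := range conditionalUploadOptions() {
		k, v := o.Header()
		fmt.Println(k, v)
	}
}
```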


@@ -70,6 +70,7 @@ func setFrom_s3ListObjectsV2Output_s3ListObjectVersionsOutput(a *s3.ListObjectsV
// setFrom_typesObject_typesObjectVersion copies matching elements from a to b
func setFrom_typesObject_typesObjectVersion(a *types.Object, b *types.ObjectVersion) {
	a.ChecksumAlgorithm = b.ChecksumAlgorithm
	a.ChecksumType = b.ChecksumType
	a.ETag = b.ETag
	a.Key = b.Key
	a.LastModified = b.LastModified

@@ -82,6 +83,7 @@ func setFrom_typesObject_typesObjectVersion(a *types.Object, b *types.ObjectVers

func setFrom_s3CreateMultipartUploadInput_s3HeadObjectOutput(a *s3.CreateMultipartUploadInput, b *s3.HeadObjectOutput) {
	a.BucketKeyEnabled = b.BucketKeyEnabled
	a.CacheControl = b.CacheControl
	a.ChecksumType = b.ChecksumType
	a.ContentDisposition = b.ContentDisposition
	a.ContentEncoding = b.ContentEncoding
	a.ContentLanguage = b.ContentLanguage

@@ -160,12 +162,15 @@ func setFrom_s3HeadObjectOutput_s3GetObjectOutput(a *s3.HeadObjectOutput, b *s3.

	a.CacheControl = b.CacheControl
	a.ChecksumCRC32 = b.ChecksumCRC32
	a.ChecksumCRC32C = b.ChecksumCRC32C
	a.ChecksumCRC64NVME = b.ChecksumCRC64NVME
	a.ChecksumSHA1 = b.ChecksumSHA1
	a.ChecksumSHA256 = b.ChecksumSHA256
	a.ChecksumType = b.ChecksumType
	a.ContentDisposition = b.ContentDisposition
	a.ContentEncoding = b.ContentEncoding
	a.ContentLanguage = b.ContentLanguage
	a.ContentLength = b.ContentLength
	a.ContentRange = b.ContentRange
	a.ContentType = b.ContentType
	a.DeleteMarker = b.DeleteMarker
	a.ETag = b.ETag

@@ -187,6 +192,7 @@ func setFrom_s3HeadObjectOutput_s3GetObjectOutput(a *s3.HeadObjectOutput, b *s3.

	a.SSEKMSKeyId = b.SSEKMSKeyId
	a.ServerSideEncryption = b.ServerSideEncryption
	a.StorageClass = b.StorageClass
	a.TagCount = b.TagCount
	a.VersionId = b.VersionId
	a.WebsiteRedirectLocation = b.WebsiteRedirectLocation
	a.ResultMetadata = b.ResultMetadata

@@ -232,6 +238,7 @@ func setFrom_s3HeadObjectOutput_s3PutObjectInput(a *s3.HeadObjectOutput, b *s3.P

	a.CacheControl = b.CacheControl
	a.ChecksumCRC32 = b.ChecksumCRC32
	a.ChecksumCRC32C = b.ChecksumCRC32C
	a.ChecksumCRC64NVME = b.ChecksumCRC64NVME
	a.ChecksumSHA1 = b.ChecksumSHA1
	a.ChecksumSHA256 = b.ChecksumSHA256
	a.ContentDisposition = b.ContentDisposition

@@ -270,6 +277,8 @@ func setFrom_s3CopyObjectInput_s3PutObjectInput(a *s3.CopyObjectInput, b *s3.Put

	a.GrantRead = b.GrantRead
	a.GrantReadACP = b.GrantReadACP
	a.GrantWriteACP = b.GrantWriteACP
	a.IfMatch = b.IfMatch
	a.IfNoneMatch = b.IfNoneMatch
	a.Metadata = b.Metadata
	a.ObjectLockLegalHoldStatus = b.ObjectLockLegalHoldStatus
	a.ObjectLockMode = b.ObjectLockMode


@@ -10,6 +10,7 @@ import (
"os/exec" "os/exec"
"slices" "slices"
"strings" "strings"
"sync"
"time" "time"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
@@ -50,6 +51,9 @@ func (s *sshClientExternal) Close() error {
func (s *sshClientExternal) NewSession() (sshSession, error) {
	session := s.f.newSSHSessionExternal()
	if s.session == nil {
		// Store the first session so Wait() and Close() can use it
		s.session = session
	} else {
		fs.Debugf(s.f, "ssh external: creating additional session")
	}
	return session, nil
@@ -76,6 +80,8 @@ type sshSessionExternal struct {
	cancel      func()
	startCalled bool
	runningSFTP bool
	waitOnce    sync.Once // ensure Wait() is only called once
	waitErr     error     // result of the Wait() call
}

func (f *Fs) newSSHSessionExternal() *sshSessionExternal {
@@ -175,16 +181,17 @@ func (s *sshSessionExternal) exited() bool {
// Wait for the command to exit
func (s *sshSessionExternal) Wait() error {
	// Use sync.Once to ensure we only wait for the process once.
	// This is safe even if Wait() is called from multiple goroutines.
	s.waitOnce.Do(func() {
		s.waitErr = s.cmd.Wait()
		if s.waitErr == nil {
			fs.Debugf(s.f, "ssh external: command exited OK")
		} else {
			fs.Debugf(s.f, "ssh external: command exited with error: %v", s.waitErr)
		}
	})
	return s.waitErr
}
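The pattern above - caching the first `cmd.Wait` result behind a `sync.Once` - generalises to any wrapper whose Wait or Close may be called from several goroutines. A standalone sketch (not rclone code, just the idiom):

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

// waiter makes exec.Cmd.Wait idempotent: the process is reaped exactly
// once and every caller sees the same result.
type waiter struct {
	cmd  *exec.Cmd
	once sync.Once
	err  error
}

func (w *waiter) Wait() error {
	w.once.Do(func() { w.err = w.cmd.Wait() })
	return w.err
}

func main() {
	cmd := exec.Command("true")
	_ = cmd.Start()
	w := &waiter{cmd: cmd}
	fmt.Println(w.Wait(), w.Wait()) // both calls report the same outcome
}
```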
// Run runs cmd on the remote host. Typically, the remote


@@ -0,0 +1,84 @@
//go:build !plan9
package sftp
import (
"testing"
"time"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/assert"
)
// TestSSHExternalWaitMultipleCalls verifies that calling Wait() multiple times
// doesn't cause zombie processes
func TestSSHExternalWaitMultipleCalls(t *testing.T) {
// Create a minimal Fs object for testing
opt := &Options{
SSH: fs.SpaceSepList{"echo", "test"},
}
f := &Fs{
opt: *opt,
}
// Create a new SSH session
session := f.newSSHSessionExternal()
// Start a simple command that exits quickly
err := session.Start("exit 0")
assert.NoError(t, err)
// Give the command time to complete
time.Sleep(100 * time.Millisecond)
// Call Wait() multiple times - this should not cause issues
err1 := session.Wait()
err2 := session.Wait()
err3 := session.Wait()
// All calls should return the same result (no error in this case)
assert.NoError(t, err1)
assert.NoError(t, err2)
assert.NoError(t, err3)
// Verify the process has exited
assert.True(t, session.exited())
}
// TestSSHExternalCloseMultipleCalls verifies that calling Close() multiple times
// followed by Wait() calls doesn't cause zombie processes
func TestSSHExternalCloseMultipleCalls(t *testing.T) {
// Create a minimal Fs object for testing
opt := &Options{
SSH: fs.SpaceSepList{"sleep", "10"},
}
f := &Fs{
opt: *opt,
}
// Create a new SSH session
session := f.newSSHSessionExternal()
// Start a long-running command
err := session.Start("sleep 10")
if err != nil {
t.Skip("Cannot start sleep command:", err)
}
// Close should cancel and wait for the process
_ = session.Close()
// Additional Wait() calls should return the same error
err2 := session.Wait()
err3 := session.Wait()
// All should complete without panicking
// err1 could be nil or an error depending on how the process was killed
// err2 and err3 should be the same
assert.Equal(t, err2, err3, "Subsequent Wait() calls should return same result")
// Verify the process has exited
assert.True(t, session.exited())
}


@@ -801,8 +801,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
req := &api.GetDownloadLinkRequest{ req := &api.GetDownloadLinkRequest{
Slug: o.slug, Slug: o.slug,
UserLogin: o.fs.opt.Username, UserLogin: o.fs.opt.Username,
// Has to be set but doesn't seem to be used server side. DeviceID: fmt.Sprintf("%d", time.Now().UnixNano()),
DeviceID: "foobar",
} }
var resp *api.GetDownloadLinkResponse var resp *api.GetDownloadLinkResponse
@@ -815,16 +814,26 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.Read
		return nil, err
	}

	downloadURL := resp.Link
	if resp.Hash != "" {
		if strings.Contains(downloadURL, "?") {
			downloadURL += "&"
		} else {
			downloadURL += "?"
		}
		downloadURL += "hash=" + url.QueryEscape(resp.Hash)
	}

	opts = rest.Opts{
		Method:  "GET",
		RootURL: downloadURL,
		Options: options,
	}
	var httpResp *http.Response
	err = o.fs.pacer.Call(func() (bool, error) {
		httpResp, err = o.fs.rest.Call(ctx, &opts)
		return o.fs.shouldRetry(ctx, httpResp, err, true)
	})
	if err != nil {
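String concatenation with a manual "?" / "&" check works, but `net/url` can handle the same query building without the special case. A hedged alternative sketch of that design choice (the link/hash values stand in for the response fields above):

```go
package main

import (
	"fmt"
	"net/url"
)

// addHashParam appends a hash query parameter to a download link using
// net/url, avoiding the manual "?" / "&" check. On a parse error the
// original link is returned unchanged.
func addHashParam(link, hash string) string {
	if hash == "" {
		return link
	}
	u, err := url.Parse(link)
	if err != nil {
		return link
	}
	q := u.Query()
	q.Set("hash", hash)
	u.RawQuery = q.Encode()
	return u.String()
}

func main() {
	fmt.Println(addHashParam("https://example.com/dl?x=1", "abc"))
}
```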


@@ -136,7 +136,7 @@ func (u *Uploader) UploadChunk(ctx context.Context, cnt int, options ...fs.OpenO
	size, err := u.upload.stream.Read(data)
	if err != nil {
		fs.Errorf(u.fs, "Chunk %d: Error: Can not read from data stream: %v", cnt, err)
		return err
	}


@@ -961,7 +961,7 @@ func (o *Object) setMetaData(info *api.ResourceInfoResponse) (err error) {
	return nil
}

// readMetaData reads and sets the new metadata for a storage.Object
func (o *Object) readMetaData(ctx context.Context) (err error) {
	if o.hasMetaData {
		return nil


@@ -51,9 +51,9 @@ def find_regions(lines):
    regions = []
    start = None
    for i, line in enumerate(lines, 1):
        if line.lstrip().startswith("<!-- autogenerated options start "):
            start = i
        elif start is not None and line.rstrip().endswith(" autogenerated options stop -->"):
            regions.append((start, i))
            start = None
    return regions


@@ -9,10 +9,12 @@ import io
import subprocess
from pathlib import Path

begin = "<!-- "
end = " -->"
marker = "autogenerated options"
line_marker_start_prefix = begin + marker + " start "
line_marker_stop = begin + marker + " stop" + end
markdownlint_disable = begin + "markdownlint-disable-line line-length" + end

def find_backends():
    """Return a list of all backends"""

@@ -27,7 +29,7 @@ def output_backend_tool_docs(backend, out, cwd):

    """Output documentation for backend tool to out"""
    out.flush()
    subprocess.call(["./rclone", "--config=/notfound", "backend", "help", backend], stdout=out, stderr=subprocess.DEVNULL)

def alter_doc(backend):
    """Alter the documentation for backend"""
    rclone_bin_dir = Path(sys.path[0]).parent.absolute()

@@ -43,23 +45,23 @@ def alter_doc(backend):

    in_docs = False
    for line in in_file:
        if not in_docs:
            if line.lstrip().startswith(line_marker_start_prefix):
                in_docs = True
                line_marker_start = (line_marker_start_prefix + "- DO NOT EDIT - instead edit fs.RegInfo in backend/%s/%s.go and run make backenddocs to verify" + end) % (backend, backend)
                out_file.write(line_marker_start + " " + markdownlint_disable + "\n")
                output_docs(backend, out_file, rclone_bin_dir)
                output_backend_tool_docs(backend, out_file, rclone_bin_dir)
                out_file.write(line_marker_stop + "\n")
                altered = True
        if not in_docs:
            out_file.write(line)
        if in_docs:
            if line.strip() == line_marker_stop:
                in_docs = False

    os.rename(doc_file, doc_file + "~")
    os.rename(new_file, doc_file)
    if not altered:
        raise ValueError("Didn't find '%s' markers in %s" % (line_marker_start_prefix, doc_file))

def main(args):


@@ -152,7 +152,7 @@ def read_doc(doc):
     # Make [...](/links/) absolute
     contents = re.sub(r'\]\((\/.*?\/(#.*)?)\)', r"](https://rclone.org\1)", contents)
     # Add additional links on the front page
-    contents = re.sub(r'\{\{< rem MAINPAGELINK >\}\}', "- [Donate.](https://rclone.org/donate/)", contents)
+    contents = re.sub(r'<!-- MAINPAGELINK -->', "- [Donate.](https://rclone.org/donate/)", contents)
     # Interpret provider shortcode
     # {{< provider name="Amazon S3" home="https://aws.amazon.com/s3/" config="/s3/" >}}
     contents = re.sub(r'\{\{<\s*provider.*?name="(.*?)".*?>\}\}', r"- \1", contents)


@@ -15,8 +15,8 @@ fusermount -u -z /tmp/rclone/rc_mount > /dev/null 2>&1 || umount /tmp/rclone/rc_
 awk '
 BEGIN {p=1}
-/^\{\{< rem autogenerated start/ {print;system("cat /tmp/rclone/z.md");p=0}
-/^\{\{< rem autogenerated stop/ {p=1}
+/^<!-- autogenerated start/ {print;system("cat /tmp/rclone/z.md");p=0}
+/^<!-- autogenerated stop/ {p=1}
 p' docs/content/rc.md > /tmp/rclone/rc.md
 mv /tmp/rclone/rc.md docs/content/rc.md


@@ -1,6 +1,6 @@
 //go:build !plan9
-// Package list inplements 'rclone archive list'
+// Package list implements 'rclone archive list'
 package list
 import (


@@ -37,7 +37,7 @@ see the backend docs for definitions.
 You can discover what commands a backend implements by using
-` + "```sh" + `
+` + "```console" + `
 rclone backend help remote:
 rclone backend help <backendname>
 ` + "```" + `
@@ -46,19 +46,19 @@ You can also discover information about the backend using (see
 [operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
 for more info).
-` + "```sh" + `
+` + "```console" + `
 rclone backend features remote:
 ` + "```" + `
 Pass options to the backend command with -o. This should be key=value or key, e.g.:
-` + "```sh" + `
+` + "```console" + `
 rclone backend stats remote:path stats -o format=json -o long
 ` + "```" + `
 Pass arguments to the backend by placing them on the end of the line
-` + "```sh" + `
+` + "```console" + `
 rclone backend cleanup remote:path file1 file2 file3
 ` + "```" + `
@@ -156,9 +156,11 @@ func showHelp(fsInfo *fs.RegInfo) error {
 	fmt.Printf("## Backend commands\n\n")
 	fmt.Printf(`Here are the commands specific to the %s backend.
-Run them with
-    rclone backend COMMAND remote:
+Run them with:
+`+"```console"+`
+rclone backend COMMAND remote:
+`+"```"+`
 The help below will explain what arguments each command takes.
@@ -172,7 +174,7 @@ These can be run on a running backend using the rc command
 	for _, cmd := range cmds {
 		fmt.Printf("### %s\n\n", cmd.Name)
 		fmt.Printf("%s\n\n", cmd.Short)
-		fmt.Printf("    rclone backend %s remote: [options] [<arguments>+]\n\n", cmd.Name)
+		fmt.Printf("```console\nrclone backend %s remote: [options] [<arguments>+]\n```\n\n", cmd.Name)
 		if cmd.Long != "" {
 			fmt.Printf("%s\n\n", cmd.Long)
 		}


@@ -125,12 +125,12 @@ func (b *bisyncRun) ReverseCryptCheckFn(ctx context.Context, dst, src fs.Object)
 }
 // DownloadCheckFn is a slightly modified version of Check with --download
-func DownloadCheckFn(ctx context.Context, a, b fs.Object) (differ bool, noHash bool, err error) {
-	differ, err = operations.CheckIdenticalDownload(ctx, a, b)
+func DownloadCheckFn(ctx context.Context, dst, src fs.Object) (equal bool, noHash bool, err error) {
+	equal, err = operations.CheckIdenticalDownload(ctx, src, dst)
 	if err != nil {
 		return true, true, fmt.Errorf("failed to download: %w", err)
 	}
-	return differ, false, nil
+	return !equal, false, nil
 }
 // check potential conflicts (to avoid renaming if already identical)
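
The point of this hunk is the polarity flip: the download helper reports whether the two objects are identical, while the check callback is expected to report whether they differ, so the result has to be negated before returning. A standalone Go sketch of that pattern, for illustration only (stubbed comparison and hypothetical names, not rclone code):

```go
// Illustrative only: a "differ" callback wrapping an "equal" helper.
package main

import "fmt"

// identicalDownload stands in for a download-and-compare helper that
// returns true when the two payloads are byte-identical.
func identicalDownload(a, b []byte) bool {
	return string(a) == string(b)
}

// downloadCheck mirrors the corrected logic: differ == !equal.
func downloadCheck(a, b []byte) (differ bool) {
	equal := identicalDownload(a, b)
	return !equal
}

func main() {
	fmt.Println(downloadCheck([]byte("x"), []byte("x"))) // false: identical
	fmt.Println(downloadCheck([]byte("x"), []byte("y"))) // true: differ
}
```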


@@ -35,7 +35,7 @@ name. If the source is a directory then it acts exactly like the
 So
-` + "```sh" + `
+` + "```console" + `
 rclone copyto src dst
 ` + "```" + `


@@ -23,7 +23,7 @@ func init() {
 var commandDefinition = &cobra.Command{
 	Use: "cryptcheck remote:path cryptedremote:path",
 	Short: `Cryptcheck checks the integrity of an encrypted remote.`,
-	Long: `Checks a remote against a [crypted](/crypt/) remote. This is the equivalent
+	Long: `Checks a remote against an [encrypted](/crypt/) remote. This is the equivalent
 of running rclone [check](/commands/rclone_check/), but able to check the
 checksums of the encrypted remote.
@@ -37,14 +37,14 @@ checksum of the file it has just encrypted.
 Use it like this
-` + "```sh" + `
+` + "```console" + `
 rclone cryptcheck /path/to/files encryptedremote:path
 ` + "```" + `
 You can use it like this also, but that will involve downloading all
 the files in ` + "`remote:path`" + `.
-` + "```sh" + `
+` + "```console" + `
 rclone cryptcheck remote:path encryptedremote:path
 ` + "```" + `


@@ -34,7 +34,7 @@ If you supply the ` + "`--reverse`" + ` flag, it will return encrypted file name
 use it like this
-` + "```sh" + `
+` + "```console" + `
 rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
 rclone cryptdecode --reverse encryptedremote: filename1 filename2
 ` + "```" + `


@@ -68,7 +68,7 @@ Here is an example run.
 Before - with duplicates
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsl drive:dupes
 6048320 2016-03-05 16:23:16.798000000 one.txt
 6048320 2016-03-05 16:23:11.775000000 one.txt
@@ -81,7 +81,7 @@ $ rclone lsl drive:dupes
 Now the ` + "`dedupe`" + ` session
-` + "```sh" + `
+` + "```console" + `
 $ rclone dedupe drive:dupes
 2016/03/05 16:24:37 Google drive root 'dupes': Looking for duplicates using interactive mode.
 one.txt: Found 4 files with duplicate names
@@ -111,7 +111,7 @@ two-3.txt: renamed from: two.txt
 The result being
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsl drive:dupes
 6048320 2016-03-05 16:23:16.798000000 one.txt
 564374 2016-03-05 16:22:52.118000000 two-1.txt
@@ -135,13 +135,13 @@ or by using an extra parameter with the same value
 For example, to rename all the identically named photos in your Google Photos
 directory, do
-` + "```sh" + `
+` + "```console" + `
 rclone dedupe --dedupe-mode rename "drive:Google Photos"
 ` + "```" + `
 Or
-` + "```sh" + `
+` + "```console" + `
 rclone dedupe rename "drive:Google Photos"
 ` + "```",
 	Annotations: map[string]string{


@@ -20,13 +20,13 @@ var bashCommandDefinition = &cobra.Command{
 By default, when run without any arguments,
-` + "```sh" + `
+` + "```console" + `
 rclone completion bash
 ` + "```" + `
 the generated script will be written to
-` + "```sh" + `
+` + "```console" + `
 /etc/bash_completion.d/rclone
 ` + "```" + `
@@ -43,7 +43,7 @@ can logout and login again to use the autocompletion script.
 Alternatively, you can source the script directly
-` + "```sh" + `
+` + "```console" + `
 . /path/to/my_bash_completion_scripts/rclone
 ` + "```" + `


@@ -21,14 +21,14 @@ var fishCommandDefinition = &cobra.Command{
 This writes to /etc/fish/completions/rclone.fish by default so will
 probably need to be run with sudo or as root, e.g.
-` + "```sh" + `
+` + "```console" + `
 sudo rclone completion fish
 ` + "```" + `
 Logout and login again to use the autocompletion scripts, or source
 them directly
-` + "```sh" + `
+` + "```console" + `
 . /etc/fish/completions/rclone.fish
 ` + "```" + `


@@ -20,7 +20,7 @@ var powershellCommandDefinition = &cobra.Command{
 To load completions in your current shell session:
-` + "```sh" + `
+` + "```console" + `
 rclone completion powershell | Out-String | Invoke-Expression
 ` + "```" + `


@@ -21,14 +21,14 @@ var zshCommandDefinition = &cobra.Command{
 This writes to /usr/share/zsh/vendor-completions/_rclone by default so will
 probably need to be run with sudo or as root, e.g.
-` + "```sh" + `
+` + "```console" + `
 sudo rclone completion zsh
 ` + "```" + `
 Logout and login again to use the autocompletion scripts, or source
 them directly
-` + "```sh" + `
+` + "```console" + `
 autoload -U compinit && compinit
 ` + "```" + `


@@ -11,11 +11,14 @@ users.
 name. This symlink helps git-annex tell rclone it wants to run the "gitannex"
 subcommand.
-```sh
-# Create the helper symlink in "$HOME/bin".
+Create the helper symlink in "$HOME/bin":
+
+```console
 ln -s "$(realpath rclone)" "$HOME/bin/git-annex-remote-rclone-builtin"
+```
-# Verify the new symlink is on your PATH.
+
+Verify the new symlink is on your PATH:
+
+```console
 which git-annex-remote-rclone-builtin
 ```
@@ -27,11 +30,15 @@ users.
 Start by asking git-annex to describe the remote's available configuration
 parameters.
-```sh
-# If you skipped step 1:
-git annex initremote MyRemote type=rclone --whatelse
-# If you created a symlink in step 1:
+If you skipped step 1:
+
+```console
+git annex initremote MyRemote type=rclone --whatelse
+```
+
+If you created a symlink in step 1:
+
+```console
 git annex initremote MyRemote type=external externaltype=rclone-builtin --whatelse
 ```
@@ -47,7 +54,7 @@ users.
 be one configured in your rclone.conf file, which can be located with `rclone
 config file`.
-```sh
+```console
 git annex initremote MyRemote \
     type=external \
     externaltype=rclone-builtin \
@@ -61,7 +68,7 @@ users.
 remote**. This command is very new and has not been tested on many rclone
 backends. Caveat emptor!
-```sh
+```console
 git annex testremote MyRemote
 ```


@@ -103,13 +103,13 @@ as a relative path).
 Run without a hash to see the list of all supported hashes, e.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone hashsum
 ` + hash.HelpString(0) + "```" + `
 Then
-` + "```sh" + `
+` + "```console" + `
 rclone hashsum MD5 remote:path
 ` + "```" + `


@@ -343,12 +343,12 @@ func showBackend(name string) {
 			fmt.Printf("- Examples:\n")
 		}
 		for _, ex := range opt.Examples {
 			fmt.Printf(" - %s\n", quoteString(ex.Value))
 			for line := range strings.SplitSeq(ex.Help, "\n") {
 				fmt.Printf(" - %s\n", line)
 			}
 			if ex.Provider != "" {
 				fmt.Printf(" - Provider: %s\n", ex.Provider)
 			}
 		}
 	}


@@ -29,7 +29,7 @@ var commandDefinition = &cobra.Command{
 	Short: `Generate public link to file/folder.`,
 	Long: `Create, retrieve or remove a public link to the given file or folder.
-` + "```sh" + `
+` + "```console" + `
 rclone link remote:path/to/file
 rclone link remote:path/to/folder/
 rclone link --unlink remote:path/to/folder/


@@ -23,7 +23,7 @@ readable format with size and path. Recurses by default.
 E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone ls swift:bucket
 60295 bevajer5jef
 90613 canole


@@ -34,7 +34,7 @@ not), the modification time (if known, the current time if not), the
 number of objects in the directory (if known, -1 if not) and the name
 of the directory, E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsd swift:
 494000 2018-04-26 08:43:20 10000 10000files
 65 2018-04-26 08:43:20 1 1File
@@ -42,7 +42,7 @@ $ rclone lsd swift:
 Or
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsd drive:test
 -1 2016-10-17 17:41:53 -1 1000files
 -1 2017-01-03 14:40:54 -1 2500files


@@ -54,7 +54,7 @@ one per line. The directories will have a / suffix.
 E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsf swift:bucket
 bevajer5jef
 canole
@@ -85,7 +85,7 @@ So if you wanted the path, size and modification time, you would use
 E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsf --format "tsp" swift:bucket
 2016-06-25 18:55:41;60295;bevajer5jef
 2016-06-25 18:55:43;90613;canole
@@ -103,13 +103,13 @@ type.
 For example, to emulate the md5sum command you can use
-` + "```sh" + `
+` + "```console" + `
 rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
 ` + "```" + `
 E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsf -R --hash MD5 --format hp --separator " " --files-only swift:bucket
 7908e352297f0f530b84a756f188baa3 bevajer5jef
 cd65ac234e6fea5925974a51cdd865cc canole
@@ -126,7 +126,7 @@ putting it last is a good strategy.
 E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsf --separator "," --format "tshp" swift:bucket
 2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
 2016-06-25 18:55:43,90613,cd65ac234e6fea5925974a51cdd865cc,canole
@@ -140,7 +140,7 @@ if they contain,
 E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsf --csv --files-only --format ps remote:path
 test.log,22355
 test.sh,449
@@ -153,7 +153,7 @@ to pass to an rclone copy with the ` + "`--files-from-raw`" + ` flag.
 For example, to find all the files modified within one day and copy
 those only (without traversing the whole directory structure):
-` + "```sh" + `
+` + "```console" + `
 rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
 rclone copy --files-from-raw new_files /path/to/local remote:path
 ` + "```" + `
@@ -162,7 +162,7 @@ The default time format is ` + "`'2006-01-02 15:04:05'`" + `.
 [Other formats](https://pkg.go.dev/time#pkg-constants) can be specified with
 the ` + "`--time-format`" + ` flag. Examples:
-` + "```sh" + `
+` + "```console" + `
 rclone lsf remote:path --format pt --time-format 'Jan 2, 2006 at 3:04pm (MST)'
 rclone lsf remote:path --format pt --time-format '2006-01-02 15:04:05.000000000'
 rclone lsf remote:path --format pt --time-format '2006-01-02T15:04:05.999999999Z07:00'


@@ -23,7 +23,7 @@ readable format with modification time, size and path. Recurses by default.
 E.g.
-` + "```sh" + `
+` + "```console" + `
 $ rclone lsl swift:bucket
 60295 2016-06-25 18:55:41.062626927 bevajer5jef
 90613 2016-06-25 18:55:43.302607074 canole


@@ -16,7 +16,7 @@ mount, waits until success or timeout and exits with appropriate code
 On Linux/macOS/FreeBSD start the mount like this, where `/path/to/local/mount`
 is an **empty** **existing** directory:
-```sh
+```console
 rclone @ remote:path/to/files /path/to/local/mount
 ```
@@ -32,7 +32,7 @@ and is not supported when [mounting as a network drive](#mounting-modes-on-windo
 and the last example will mount as network share `\\cloud\remote` and map it to an
 automatically assigned drive:
-```sh
+```console
 rclone @ remote:path/to/files *
 rclone @ remote:path/to/files X:
 rclone @ remote:path/to/files C:\path\parent\mount
@@ -44,7 +44,7 @@ a SIGINT or SIGTERM signal, the mount should be automatically stopped.
 When running in background mode the user will have to stop the mount manually:
-```sh
+```console
 # Linux
 fusermount -u /path/to/local/mount
 #... or on some systems
@@ -65,7 +65,7 @@ at all, then 1 PiB is set as both the total and the free size.
 ### Installing on Windows
-To run rclone @ on Windows, you will need to
+To run `rclone @ on Windows`, you will need to
 download and install [WinFsp](http://www.secfs.net/winfsp/).
 [WinFsp](https://github.com/winfsp/winfsp) is an open-source
@@ -96,7 +96,7 @@ directory or drive. Using the special value `*` will tell rclone to
 automatically assign the next available drive letter, starting with Z: and moving
 backward. Examples:
-```sh
+```console
 rclone @ remote:path/to/files *
 rclone @ remote:path/to/files X:
 rclone @ remote:path/to/files C:\path\parent\mount
@@ -111,7 +111,7 @@ to your @ command. Mounting to a directory path is not supported in
 this mode, it is a limitation Windows imposes on junctions, so the remote must always
 be mounted to a drive letter.
-```sh
+```console
 rclone @ remote:path/to/files X: --network-mode
 ```
@@ -129,7 +129,7 @@ volume label for the mapped drive, shown in Windows Explorer etc, while the comp
 If you specify a full network share UNC path with `--volname`, this will implicitly
 set the `--network-mode` option, so the following two examples have same result:
-```sh
+```console
 rclone @ remote:path/to/files X: --network-mode
 rclone @ remote:path/to/files X: --volname \\server\share
 ```
@@ -140,7 +140,7 @@ mountpoint, and instead use the UNC path specified as the volume name, as if it
 specified with the `--volname` option. This will also implicitly set
 the `--network-mode` option. This means the following two examples have same result:
-```sh
+```console
 rclone @ remote:path/to/files \\cloud\remote
 rclone @ remote:path/to/files * --volname \\cloud\remote
 ```
@@ -296,7 +296,7 @@ from the website, rclone will locate the macFUSE libraries without any further i
 If however, macFUSE is installed using the [macports](https://www.macports.org/)
 package manager, the following addition steps are required.
-```sh
+```console
 sudo mkdir /usr/local/lib
 cd /usr/local/lib
 sudo ln -s /opt/local/lib/libfuse.2.dylib
@@ -324,6 +324,17 @@ full new copy of the file.
 When mounting with `--read-only`, attempts to write to files will fail *silently*
 as opposed to with a clear warning as in macFUSE.
+## Mounting on Linux
+On newer versions of Ubuntu, you may encounter the following error when running
+`rclone mount`:
+> NOTICE: mount helper error: fusermount3: mount failed: Permission denied
+> CRITICAL: Fatal error: failed to mount FUSE fs: fusermount: exit status 1
+This may be due to newer [Apparmor](https://wiki.ubuntu.com/AppArmor) restrictions,
+which can be disabled with `sudo aa-disable /usr/bin/fusermount3` (you may need to
+`sudo apt install apparmor-utils` beforehand).
 ### Limitations
 Without the use of `--vfs-cache-mode` this can only write files
@@ -424,7 +435,7 @@ rclone will detect it and translate command-line arguments appropriately.
 Now you can run classic mounts like this:
-```sh
+```console
 mount sftp1:subdir /mnt/data -t rclone -o vfs_cache_mode=writes,sftp_key_file=/path/to/pem
 ```
@@ -456,7 +467,7 @@ WantedBy=multi-user.target
 or add in `/etc/fstab` a line like
-```sh
+```console
 sftp1:subdir /mnt/data rclone rw,noauto,nofail,_netdev,x-systemd.automount,args2env,vfs_cache_mode=writes,config=/etc/rclone.conf,cache_dir=/var/cache/rclone 0 0
 ```


@@ -65,7 +65,7 @@ This takes the following parameters:
 Example:
-` + "```sh" + `
+` + "```console" + `
 rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint
 rclone rc mount/mount fs=mydrive: mountPoint=/home/<user>/mountPoint mountType=mount
 rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}' mountOpt='{"AllowOther": true}'
@@ -74,7 +74,7 @@ rclone rc mount/mount fs=TestDrive: mountPoint=/mnt/tmp vfsOpt='{"CacheMode": 2}
 The vfsOpt are as described in options/get and can be seen in the the
 "vfs" section when running and the mountOpt can be seen in the "mount" section:
-` + "```sh" + `
+` + "```console" + `
 rclone rc options/get
 ` + "```" + `
 `,


@@ -35,7 +35,7 @@ like the [move](/commands/rclone_move/) command.
 So
-` + "```sh" + `
+` + "```console" + `
 rclone moveto src dst
 ` + "```" + `


@@ -33,7 +33,7 @@ This command can also accept a password through STDIN instead of an
 argument by passing a hyphen as an argument. This will use the first
 line of STDIN as the password not including the trailing newline.
-` + "```sh" + `
+` + "```console" + `
 echo "secretpassword" | rclone obscure -
 ` + "```" + `


@@ -28,7 +28,7 @@ var commandDefinition = &cobra.Command{
 	Short: `Copies standard input to file on remote.`,
 	Long: `Reads from standard input (stdin) and copies it to a single remote file.
-` + "```sh" + `
+` + "```console" + `
 echo "hello world" | rclone rcat remote:path/to/file
 ffmpeg - | rclone rcat remote:path/to/file
 ` + "```" + `


@@ -9,7 +9,7 @@ Docker plugins can run as a managed plugin under control of the docker daemon
 or as an independent native service. For testing, you can just run it directly
 from the command line, for example:
-```sh
+```console
 sudo rclone serve docker --base-dir /tmp/rclone-volumes --socket-addr localhost:8787 -vv
 ```


@@ -119,7 +119,7 @@ following instructions.
 Now start the rclone restic server
-` + "```sh" + `
+` + "```console" + `
 rclone serve restic -v remote:backup
 ` + "```" + `
@@ -149,7 +149,7 @@ the URL for the REST server.
 For example:
-` + "```sh" + `
+` + "```console" + `
 $ export RESTIC_REPOSITORY=rest:http://localhost:8080/
 $ export RESTIC_PASSWORD=yourpassword
 $ restic init
@@ -173,7 +173,7 @@ Note that you can use the endpoint to host multiple repositories. Do
 this by adding a directory name or path after the URL. Note that
 these **must** end with /. Eg
-` + "```sh" + `
+` + "```console" + `
 $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user1repo/
 # backup user1 stuff
 $ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/


@@ -33,14 +33,14 @@ cause problems for S3 clients which rely on the Etag being the MD5.
 For a simple set up, to serve `remote:path` over s3, run the server
 like this:
-```sh
+```console
 rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY remote:path
 ```
 For example, to use a simple folder in the filesystem, run the server
 with a command like this:
-```sh
+```console
 rclone serve s3 --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY local:/path/to/folder
 ```


@@ -19,11 +19,19 @@ var Command = &cobra.Command{
 	Long: `Serve a remote over a given protocol. Requires the use of a
 subcommand to specify the protocol, e.g.
-` + "```sh" + `
+` + "```console" + `
 rclone serve http remote:
 ` + "```" + `
-Each subcommand has its own options which you can see in their help.`,
+When the "--metadata" flag is enabled, the following metadata fields will be provided as headers:
+- "content-disposition"
+- "cache-control"
+- "content-language"
+- "content-encoding"
+Note: The availability of these fields depends on whether the remote supports metadata.
+Each subcommand has its own options which you can see in their help.
+`,
 	Annotations: map[string]string{
 		"versionIntroduced": "v1.39",
 	},
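
The new help text above describes mapping object metadata onto HTTP response headers when `--metadata` is enabled. Purely as an illustration of that idea (this is not rclone's implementation), a minimal Go sketch that copies a whitelisted set of metadata keys onto response headers:

```go
// Illustrative only: expose selected metadata keys as HTTP headers.
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// headerKeys maps the metadata fields named in the help text to header names.
var headerKeys = map[string]string{
	"content-disposition": "Content-Disposition",
	"cache-control":       "Cache-Control",
	"content-language":    "Content-Language",
	"content-encoding":    "Content-Encoding",
}

func setMetadataHeaders(w http.ResponseWriter, metadata map[string]string) {
	for key, value := range metadata {
		if header, ok := headerKeys[key]; ok {
			w.Header().Set(header, value)
		}
	}
}

func main() {
	rec := httptest.NewRecorder()
	setMetadataHeaders(rec, map[string]string{
		"cache-control": "max-age=60",
		"mtime":         "2025-01-01", // not in the whitelist, so ignored
	})
	fmt.Println(rec.Header().Get("Cache-Control"))
}
```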


@@ -151,7 +151,7 @@ It can be configured with .socket and .service unit files as described in
 Socket activation can be tested ad-hoc with the ` + "`systemd-socket-activate`" + `command:
-` + "```sh" + `
+` + "```console" + `
 systemd-socket-activate -l 2222 -- rclone serve sftp :local:vfs/
 ` + "```" + `


@@ -157,13 +157,13 @@ Create a new DWORD BasicAuthLevel with value 2.
 You can serve the webdav on a unix socket like this:
-` + "```sh" + `
+` + "```console" + `
 rclone serve webdav --addr unix:///tmp/my.socket remote:path
 ` + "```" + `
 and connect to it like this using rclone and the webdav backend:
-` + "```sh" + `
+` + "```console" + `
 rclone --webdav-unix-socket /tmp/my.socket --webdav-url http://localhost lsf :webdav:
 ` + "```" + `


@@ -29,19 +29,19 @@ inaccessible.true
 You can use it to tier single object
-` + "```sh" + `
+` + "```console" + `
 rclone settier Cool remote:path/file
 ` + "```" + `
 Or use rclone filters to set tier on only specific files
-` + "```sh" + `
+` + "```console" + `
 rclone --include "*.txt" settier Hot remote:path/dir
 ` + "```" + `
 Or just provide remote directory and all files in directory will be tiered
-` + "```sh" + `
+` + "```console" + `
 rclone settier tier remote:path/dir
 ` + "```",
 	Annotations: map[string]string{


@@ -56,22 +56,22 @@ var speedCmd = &cobra.Command{
 	Short: `Run a speed test to the remote`,
 	Long: `Run a speed test to the remote.
 This command runs a series of uploads and downloads to the remote, measuring
 and printing the speed of each test using varying file sizes and numbers of
 files.
 Test time can be innaccurate with small file caps and large files. As it
 uses the results of an initial test to determine how many files to use in
 each subsequent test.
 It is recommended to use -q flag for a simpler output. e.g.:
-    rlone test speed remote: -q
+    rclone test speed remote: -q
 **NB** This command will create and delete files on the remote in a randomly
-named directory which should be tidied up after.
+named directory which will be automatically removed on a clean exit.
 You can use the --json flag to only print the results in JSON format.`,
 	Annotations: map[string]string{
 		"versionIntroduced": "v1.72",
 	},


@@ -18,7 +18,7 @@ var Command = &cobra.Command{
 Select which test command you want with the subcommand, eg
-` + "```sh" + `
+` + "```console" + `
 rclone test memory remote:
 ` + "```" + `


@@ -42,7 +42,7 @@ build tags and the type of executable (static or dynamic).
 For example:
-` + "```sh" + `
+` + "```console" + `
 $ rclone version
 rclone v1.55.0
 - os/version: ubuntu 18.04 (64 bit)
@@ -60,7 +60,7 @@ Note: before rclone version 1.55 the os/type and os/arch lines were merged,
 If you supply the --check flag, then it will do an online check to
 compare your version with the latest release and the latest beta.
-` + "```sh" + `
+` + "```console" + `
 $ rclone version --check
 yours: 1.42.0.6
 latest: 1.42 (released 2018-06-16)
@@ -69,7 +69,7 @@ beta: 1.42.0.5 (released 2018-06-17)
 Or
-` + "```sh" + `
+` + "```console" + `
 $ rclone version --check
 yours: 1.41
 latest: 1.42 (released 2018-06-16)


@@ -32,6 +32,9 @@
     "renderer": {
       "unsafe": false
     }
+    },
+    "highlight": {
+      "style": "monokailight"
     }
   }
 }


@@ -7,6 +7,7 @@ notoc: true
 # Rclone syncs your files to cloud storage
+<!-- markdownlint-disable-next-line line-length -->
 {{< img width="50%" src="/img/logo_on_light__horizontal_color.svg" alt="rclone logo" style="float:right; padding: 5px;" >}}
 - [About rclone](#about)
@@ -15,7 +16,7 @@ notoc: true
 - [What providers does rclone support?](#providers)
 - [Download](/downloads/)
 - [Install](/install/)
-{{< rem MAINPAGELINK >}}
+<!-- MAINPAGELINK -->
 ## About rclone {#about}
@@ -79,8 +80,10 @@ Rclone helps you:
 - Mirror cloud data to other cloud services or locally
 - Migrate data to the cloud, or between cloud storage vendors
 - Mount multiple, encrypted, cached or diverse cloud storage as a disk
-- Analyse and account for data held on cloud storage using [lsf](/commands/rclone_lsf/), [ljson](/commands/rclone_lsjson/), [size](/commands/rclone_size/), [ncdu](/commands/rclone_ncdu/)
-- [Union](/union/) file systems together to present multiple local and/or cloud file systems as one
+- Analyse and account for data held on cloud storage using [lsf](/commands/rclone_lsf/),
+  [ljson](/commands/rclone_lsjson/), [size](/commands/rclone_size/), [ncdu](/commands/rclone_ncdu/)
+- [Union](/union/) file systems together to present multiple local and/or cloud
+  file systems as one
 ## Features {#features}
@@ -93,7 +96,8 @@ Rclone helps you:
 - [Copy](/commands/rclone_copy/) new or changed files to cloud storage
 - [Sync](/commands/rclone_sync/) (one way) to make a directory identical
 - [Bisync](/bisync/) (two way) to keep two directories in sync bidirectionally
-- [Move](/commands/rclone_move/) files to cloud storage deleting the local after verification
+- [Move](/commands/rclone_move/) files to cloud storage deleting the local after
+  verification
 - [Check](/commands/rclone_check/) hashes and for missing/extra files
 - [Mount](/commands/rclone_mount/) your cloud storage as a network disk
 - [Serve](/commands/rclone_serve/) local or remote files over [HTTP](/commands/rclone_serve_http/)/[WebDav](/commands/rclone_serve_webdav/)/[FTP](/commands/rclone_serve_ftp/)/[SFTP](/commands/rclone_serve_sftp/)/[DLNA](/commands/rclone_serve_dlna/)
@@ -104,6 +108,9 @@ Rclone helps you:
 (There are many others, built on standard protocols such as
 WebDAV or S3, that work out of the box.)
+<!-- markdownlint-capture -->
+<!-- markdownlint-disable line-length no-bare-urls -->
 {{< provider_list >}}
 {{< provider name="1Fichier" home="https://1fichier.com/" config="/fichier/" start="true">}}
 {{< provider name="Akamai Netstorage" home="https://www.akamai.com/us/en/products/media-delivery/netstorage.jsp" config="/netstorage/" >}}
@@ -213,10 +220,15 @@ WebDAV or S3, that work out of the box.)
 {{< provider name="The local filesystem" home="/local/" config="/local/" end="true">}}
 {{< /provider_list >}}
+<!-- markdownlint-restore -->
 ## Virtual providers
 These backends adapt or modify other storage providers:
+<!-- markdownlint-capture -->
+<!-- markdownlint-disable line-length no-bare-urls -->
 {{< provider name="Alias: Rename existing remotes" home="/alias/" config="/alias/" >}}
 {{< provider name="Archive: Read archive files" home="/archive/" config="/archive/" >}}
 {{< provider name="Cache: Cache remotes (DEPRECATED)" home="/cache/" config="/cache/" >}}
@@ -227,6 +239,8 @@ These backends adapt or modify other storage providers:
 {{< provider name="Hasher: Hash files" home="/hasher/" config="/hasher/" >}}
 {{< provider name="Union: Join multiple remotes to work together" home="/union/" config="/union/" >}}
+<!-- markdownlint-restore -->
 ## Links
 - {{< icon "fa fa-home" >}} [Home page](https://rclone.org/)


@@ -34,7 +34,7 @@ can be used to only show the trashed files in `myDrive`.
 Here is an example of how to make an alias called `remote` for local folder.
 First run:
-```sh
+```console
 rclone config
 ```
@@ -83,27 +83,28 @@ q) Quit config
 e/n/d/r/c/s/q> q
 ```
-Once configured you can then use `rclone` like this,
+Once configured you can then use `rclone` like this (replace `remote` with the
+name you gave your remote):
 List directories in top level in `/mnt/storage/backup`
-```sh
+```console
 rclone lsd remote:
 ```
 List all the files in `/mnt/storage/backup`
-```sh
+```console
 rclone ls remote:
 ```
 Copy another local directory to the alias directory called source
-```sh
+```console
 rclone copy /home/source remote:source
 ```
-{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go then run make backenddocs" >}}
+<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/alias/alias.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
 ### Standard options
 Here are the Standard options specific to alias (Alias for an existing remote).
@@ -136,4 +137,4 @@ Properties:
 - Type: string
 - Required: false
-{{< rem autogenerated options stop >}}
+<!-- autogenerated options stop -->


@@ -236,7 +236,7 @@ It would be possible to add ISO support fairly easily as the library we use ([go
 It would be possible to add write support, but this would only be for creating new archives, not for updating existing archives.
-{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/archive/archive.go then run make backenddocs" >}}
+<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/archive/archive.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
 ### Standard options
 Here are the Standard options specific to archive (Read archives).
@@ -283,4 +283,4 @@ Any metadata supported by the underlying remote is read and written.
 See the [metadata](/docs/#metadata) docs for more info.
-{{< rem autogenerated options stop >}}
+<!-- autogenerated options stop -->


@@ -12,9 +12,9 @@ description: "Rclone Authors and Contributors"
 ## Contributors
-{{< rem `email addresses removed from here need to be added to
+<!-- email addresses removed from here need to be added to
 bin/.ignore-emails to make sure update-authors.py doesn't immediately
-put them back in again.` >}}
+put them back in again. -->
 - Alex Couper <amcouper@gmail.com>
 - Leonid Shalupov <leonid@shalupov.com> <shalupov@diverse.org.ru>
@@ -1031,3 +1031,20 @@ put them back in again.` >}}
 - divinity76 <hans@loltek.net>
 - Andrew Gunnerson <accounts+github@chiller3.com>
 - Lakshmi-Surekha <Lakshmi.Kovvuri@ibm.com>
+- dulanting <dulanting@outlook.jp>
+- Adam Dinwoodie <me-and@users.noreply.github.com>
+- Lukas Krejci <metlos@users.noreply.github.com>
+- Riaz Arbi <riazarbi@users.noreply.github.com>
+- Fawzib Rojas <fawzib.rojas@gmail.com>
+- fries1234 <fries1234@protonmail.com>
+- Joseph Brownlee <39440458+JellyJoe198@users.noreply.github.com>
+- Ted Robertson <10043369+tredondo@users.noreply.github.com>
+- SublimePeace <184005903+SublimePeace@users.noreply.github.com>
+- Copilot <198982749+Copilot@users.noreply.github.com>
+- Alex <64072843+A1ex3@users.noreply.github.com>
+- n4n5 <its.just.n4n5@gmail.com>
+- aliaj1 <ali19961@gmail.com>
+- Sean Turner <30396892+seanturner026@users.noreply.github.com>
+- jijamik <30904953+jijamik@users.noreply.github.com>
+- Dominik Sander <git@dsander.de>
+- Nikolay Kiryanov <nikolay@kiryanov.ru>


@@ -15,7 +15,7 @@ command.) You may put subdirectories in too, e.g.
 Here is an example of making a Microsoft Azure Blob Storage
 configuration. For a remote called `remote`. First run:
-```sh
+```console
 rclone config
 ```
@@ -57,26 +57,26 @@ y/e/d> y
 See all containers
-```sh
+```console
 rclone lsd remote:
 ```
 Make a new container
-```sh
+```console
 rclone mkdir remote:container
 ```
 List the contents of a container
-```sh
+```console
 rclone ls remote:container
 ```
 Sync `/home/local/directory` to the remote container, deleting any excess
 files in the container.
-```sh
+```console
 rclone sync --interactive /home/local/directory remote:container
 ```
@@ -212,25 +212,25 @@ Credentials created with the `az` tool can be picked up using `env_auth`.
 For example if you were to login with a service principal like this:
-```sh
+```console
 az login --service-principal -u XXX -p XXX --tenant XXX
 ```
 Then you could access rclone resources like this:
-```sh
+```console
 rclone lsf :azureblob,env_auth,account=ACCOUNT:CONTAINER
 ```
 Or
-```sh
+```console
 rclone lsf --azureblob-env-auth --azureblob-account=ACCOUNT :azureblob:CONTAINER
 ```
 Which is analogous to using the `az` tool:
-```sh
+```console
 az storage blob list --container-name CONTAINER --account-name ACCOUNT --auth-mode login
 ```
@@ -253,14 +253,14 @@ explorer in the Azure portal.
 If you use a container level SAS URL, rclone operations are permitted
 only on a particular container, e.g.
-```sh
+```console
 rclone ls azureblob:container
 ```
 You can also list the single container from the root. This will only
 show the container specified by the SAS URL.
-```sh
+```console
 $ rclone lsd azureblob:
 container/
 ```
@@ -268,7 +268,7 @@ container/
 Note that you can't see or access any other containers - this will
 fail
-```sh
+```console
 rclone ls azureblob:othercontainer
 ```
@@ -364,11 +364,11 @@ Don't set `env_auth` at the same time.
 If you want to access resources with public anonymous access then set
 `account` only. You can do this without making an rclone config:
-```sh
+```console
 rclone lsf :azureblob,account=ACCOUNT:CONTAINER
 ```
-{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go then run make backenddocs" >}}
+<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/azureblob/azureblob.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
 ### Standard options
 Here are the Standard options specific to azureblob (Microsoft Azure Blob Storage).
@@ -959,13 +959,13 @@ Properties:
 - Type: string
 - Required: false
 - Examples:
 - ""
 - The container and its blobs can be accessed only with an authorized request.
 - It's a default value.
 - "blob"
 - Blob data within this container can be read via anonymous request.
 - "container"
 - Allow full public read access for container and blob data.
 #### --azureblob-directory-markers
@@ -1022,12 +1022,12 @@ Properties:
 - Type: string
 - Required: false
 - Choices:
 - ""
 - By default, the delete operation fails if a blob has snapshots
 - "include"
 - Specify 'include' to remove the root blob and all its snapshots
 - "only"
 - Specify 'only' to remove only the snapshots but keep the root blob.
 #### --azureblob-description
@@ -1040,11 +1040,11 @@ Properties:
 - Type: string
 - Required: false
-{{< rem autogenerated options stop >}}
+<!-- autogenerated options stop -->
 ### Custom upload headers
 You can set custom upload headers with the `--header-upload` flag.
 - Cache-Control
 - Content-Disposition
@@ -1053,19 +1053,21 @@ You can set custom upload headers with the `--header-upload` flag.
 - Content-Type
 - X-MS-Tags
-Eg `--header-upload "Content-Type: text/potato"` or `--header-upload "X-MS-Tags: foo=bar"`
+Eg `--header-upload "Content-Type: text/potato"` or
+`--header-upload "X-MS-Tags: foo=bar"`.
 ## Limitations
 MD5 sums are only uploaded with chunked files if the source has an MD5
 sum. This will always be the case for a local to azure copy.
-`rclone about` is not supported by the Microsoft Azure Blob storage backend. Backends without
-this capability cannot determine free space for an rclone mount or
-use policy `mfs` (most free space) as a member of an rclone union
+`rclone about` is not supported by the Microsoft Azure Blob storage backend.
+Backends without this capability cannot determine free space for an rclone
+mount or use policy `mfs` (most free space) as a member of an rclone union
 remote.
-See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/)
+See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
+and [rclone about](https://rclone.org/commands/rclone_about/).
 ## Azure Storage Emulator Support


@@ -14,7 +14,7 @@ e.g. `remote:path/to/dir`.
 Here is an example of making a Microsoft Azure Files Storage
 configuration. For a remote called `remote`. First run:
-```sh
+```console
 rclone config
 ```
@@ -90,26 +90,26 @@ Once configured you can use rclone.
 See all files in the top level:
-```sh
+```console
 rclone lsf remote:
 ```
 Make a new directory in the root:
-```sh
+```console
 rclone mkdir remote:dir
 ```
 Recursively List the contents:
-```sh
+```console
 rclone ls remote:
 ```
 Sync `/home/local/directory` to the remote directory, deleting any
 excess files in the directory.
-```sh
+```console
 rclone sync --interactive /home/local/directory remote:dir
 ```
@@ -238,19 +238,19 @@ Credentials created with the `az` tool can be picked up using `env_auth`.
 For example if you were to login with a service principal like this:
-```sh
+```console
 az login --service-principal -u XXX -p XXX --tenant XXX
 ```
 Then you could access rclone resources like this:
-```sh
+```console
 rclone lsf :azurefiles,env_auth,account=ACCOUNT:
 ```
 Or
-```sh
+```console
 rclone lsf --azurefiles-env-auth --azurefiles-account=ACCOUNT :azurefiles:
 ```
@@ -348,7 +348,7 @@ Setting this can be useful if you wish to use the `az` CLI on a host with
 a System Managed Identity that you do not want to use.
 Don't set `env_auth` at the same time.
-{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/azurefiles/azurefiles.go then run make backenddocs" >}}
+<!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/azurefiles/azurefiles.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
 ### Standard options
 Here are the Standard options specific to azurefiles (Microsoft Azure Files).
@@ -793,7 +793,7 @@ Properties:
 - Type: string
 - Required: false
-{{< rem autogenerated options stop >}}
+<!-- autogenerated options stop -->
 ### Custom upload headers


@@ -15,7 +15,7 @@ command.) You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Here is an example of making a b2 configuration. First run Here is an example of making a b2 configuration. First run
```sh ```console
rclone config rclone config
``` ```
@@ -62,27 +62,26 @@ This remote is called `remote` and can now be used like this
See all buckets See all buckets
```sh ```console
rclone lsd remote: rclone lsd remote:
``` ```
Create a new bucket Create a new bucket
```sh ```console
rclone mkdir remote:bucket rclone mkdir remote:bucket
``` ```
List the contents of a bucket List the contents of a bucket
```sh ```console
rclone ls remote:bucket rclone ls remote:bucket
``` ```
Sync `/home/local/directory` to the remote bucket, deleting any Sync `/home/local/directory` to the remote bucket, deleting any
excess files in the bucket. excess files in the bucket.
```sh ```console
rclone sync --interactive /home/local/directory remote:bucket rclone sync --interactive /home/local/directory remote:bucket
``` ```
@@ -98,7 +97,7 @@ Follow Backblaze's docs to create an Application Key with the required
permission and add the `applicationKeyId` as the `account` and the permission and add the `applicationKeyId` as the `account` and the
`Application Key` itself as the `key`. `Application Key` itself as the `key`.
Note that you must put the _applicationKeyId_ as the `account` you Note that you must put the *applicationKeyId* as the `account`; you
can't use the master Account ID. If you try then B2 will return 401 can't use the master Account ID. If you try then B2 will return 401
errors. errors.
@@ -192,8 +191,8 @@ You may opt in to a "hard delete" of files with the `--b2-hard-delete`
flag which permanently removes files on deletion instead of hiding flag which permanently removes files on deletion instead of hiding
them. them.
Old versions of files, where available, are visible using the Old versions of files, where available, are visible using the
`--b2-versions` flag. `--b2-versions` flag. These can be deleted as required with `delete`.
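For example, a sketch of deleting one specific old version (using the
timestamped name format shown further below, with `cleanup-test` as a
placeholder bucket name):

```console
rclone --b2-versions delete b2:cleanup-test/one-v2016-07-04-141003-000.txt
```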
It is also possible to view a bucket as it was at a certain point in time, It is also possible to view a bucket as it was at a certain point in time,
using the `--b2-version-at` flag. This will show the file versions as they using the `--b2-version-at` flag. This will show the file versions as they
@@ -230,7 +229,7 @@ version followed by a `cleanup` of the old versions.
Show current version and all the versions with `--b2-versions` flag. Show current version and all the versions with `--b2-versions` flag.
```sh ```console
$ rclone -q ls b2:cleanup-test $ rclone -q ls b2:cleanup-test
9 one.txt 9 one.txt
@@ -243,7 +242,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
Retrieve an old version Retrieve an old version
```sh ```console
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp $ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt $ ls -l /tmp/one-v2016-07-04-141003-000.txt
@@ -252,7 +251,7 @@ $ ls -l /tmp/one-v2016-07-04-141003-000.txt
Clean up all the old versions and show that they've gone. Clean up all the old versions and show that they've gone.
```sh ```console
$ rclone -q cleanup b2:cleanup-test $ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test $ rclone -q ls b2:cleanup-test
@@ -268,7 +267,7 @@ When using `--b2-versions` flag rclone is relying on the file name
to work out whether the objects are versions or not. Versions' names to work out whether the objects are versions or not. Versions' names
are created by inserting timestamp between file name and its extension. are created by inserting timestamp between file name and its extension.
```sh ```console
9 file.txt 9 file.txt
8 file-v2023-07-17-161032-000.txt 8 file-v2023-07-17-161032-000.txt
16 file-v2023-06-15-141003-000.txt 16 file-v2023-06-15-141003-000.txt
@@ -284,7 +283,7 @@ It is useful to know how many requests are sent to the server in different scena
All copy commands send the following 4 requests: All copy commands send the following 4 requests:
```text ```text
/b2api/v1/b2_authorize_account /b2api/v4/b2_authorize_account
/b2api/v1/b2_create_bucket /b2api/v1/b2_create_bucket
/b2api/v1/b2_list_buckets /b2api/v1/b2_list_buckets
/b2api/v1/b2_list_file_names /b2api/v1/b2_list_file_names
@@ -322,14 +321,14 @@ rclone will show and act on older versions of files. For example
Listing without `--b2-versions` Listing without `--b2-versions`
```sh ```console
$ rclone -q ls b2:cleanup-test $ rclone -q ls b2:cleanup-test
9 one.txt 9 one.txt
``` ```
And with And with
```sh ```console
$ rclone -q --b2-versions ls b2:cleanup-test $ rclone -q --b2-versions ls b2:cleanup-test
9 one.txt 9 one.txt
8 one-v2016-07-04-141032-000.txt 8 one-v2016-07-04-141032-000.txt
@@ -349,7 +348,7 @@ permitted, so you can't upload files or delete them.
Rclone supports generating file share links for private B2 buckets. Rclone supports generating file share links for private B2 buckets.
They can either be for a file for example: They can either be for a file for example:
```sh ```console
./rclone link B2:bucket/path/to/file.txt ./rclone link B2:bucket/path/to/file.txt
https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
@@ -357,7 +356,7 @@ https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx
or if run on a directory you will get: or if run on a directory you will get:
```sh ```console
./rclone link B2:bucket/path ./rclone link B2:bucket/path
https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx
``` ```
@@ -372,7 +371,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
``` ```
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/b2/b2.go then run make backenddocs" >}} <!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/b2/b2.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options ### Standard options
Here are the Standard options specific to b2 (Backblaze B2). Here are the Standard options specific to b2 (Backblaze B2).
@@ -668,6 +667,71 @@ Properties:
- Type: Encoding - Type: Encoding
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot - Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
#### --b2-sse-customer-algorithm
If using SSE-C, the server-side encryption algorithm used when storing this object in B2.
Properties:
- Config: sse_customer_algorithm
- Env Var: RCLONE_B2_SSE_CUSTOMER_ALGORITHM
- Type: string
- Required: false
- Examples:
- ""
- None
- "AES256"
- Advanced Encryption Standard (256 bits key length)
#### --b2-sse-customer-key
To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data
Alternatively you can provide --sse-customer-key-base64.
Properties:
- Config: sse_customer_key
- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY
- Type: string
- Required: false
- Examples:
- ""
- None
#### --b2-sse-customer-key-base64
To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data
Alternatively you can provide --sse-customer-key.
Properties:
- Config: sse_customer_key_base64
- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_BASE64
- Type: string
- Required: false
- Examples:
- ""
- None
#### --b2-sse-customer-key-md5
If using SSE-C you may provide the secret encryption key MD5 checksum (optional).
If you leave it blank, this is calculated automatically from the sse_customer_key provided.
Properties:
- Config: sse_customer_key_md5
- Env Var: RCLONE_B2_SSE_CUSTOMER_KEY_MD5
- Type: string
- Required: false
- Examples:
- ""
- None
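As an illustration only (this is not part of the autogenerated text above, and
the key below is a placeholder rather than a real key), these options might be
combined like this:

```console
rclone copy secret.txt b2:bucket/path --b2-sse-customer-algorithm AES256 --b2-sse-customer-key-base64 "BASE64-ENCODED-256-BIT-KEY"
```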
#### --b2-description #### --b2-description
Description of the remote. Description of the remote.
@@ -683,9 +747,11 @@ Properties:
Here are the commands specific to the b2 backend. Here are the commands specific to the b2 backend.
Run them with Run them with:
rclone backend COMMAND remote: ```console
rclone backend COMMAND remote:
```
The help below will explain what arguments each command takes. The help below will explain what arguments each command takes.
@@ -697,35 +763,41 @@ These can be run on a running backend using the rc command
### lifecycle ### lifecycle
Read or set the lifecycle for a bucket Read or set the lifecycle for a bucket.
rclone backend lifecycle remote: [options] [<arguments>+] ```console
rclone backend lifecycle remote: [options] [<arguments>+]
```
This command can be used to read or set the lifecycle for a bucket. This command can be used to read or set the lifecycle for a bucket.
Usage Examples:
To show the current lifecycle rules: To show the current lifecycle rules:
rclone backend lifecycle b2:bucket ```console
rclone backend lifecycle b2:bucket
```
This will dump something like this showing the lifecycle rules. This will dump something like this showing the lifecycle rules.
[ ```json
{ [
"daysFromHidingToDeleting": 1, {
"daysFromUploadingToHiding": null, "daysFromHidingToDeleting": 1,
"daysFromStartingToCancelingUnfinishedLargeFiles": null, "daysFromUploadingToHiding": null,
"fileNamePrefix": "" "daysFromStartingToCancelingUnfinishedLargeFiles": null,
} "fileNamePrefix": ""
] }
]
```
If there are no lifecycle rules (the default) then it will just return []. If there are no lifecycle rules (the default) then it will just return `[]`.
To reset the current lifecycle rules: To reset the current lifecycle rules:
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30 ```console
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1 rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=30
rclone backend lifecycle b2:bucket -o daysFromUploadingToHiding=5 -o daysFromHidingToDeleting=1
```
This will run and then print the new lifecycle rules as above. This will run and then print the new lifecycle rules as above.
@@ -737,22 +809,27 @@ the daysFromHidingToDeleting to 1 day. You can enable hard_delete in
the config also which will mean deletions won't cause versions but the config also which will mean deletions won't cause versions but
overwrites will still cause versions to be made. overwrites will still cause versions to be made.
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1 ```console
rclone backend lifecycle b2:bucket -o daysFromHidingToDeleting=1
See: https://www.backblaze.com/docs/cloud-storage-lifecycle-rules ```
See: <https://www.backblaze.com/docs/cloud-storage-lifecycle-rules>
Options: Options:
- "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off. - "daysFromHidingToDeleting": After a file has been hidden for this many days it is deleted. 0 is off.
- "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days - "daysFromStartingToCancelingUnfinishedLargeFiles": Cancels any unfinished large file versions after this many days.
- "daysFromUploadingToHiding": This many days after uploading a file is hidden - "daysFromUploadingToHiding": This many days after uploading a file is hidden.
### cleanup ### cleanup
Remove unfinished large file uploads. Remove unfinished large file uploads.
rclone backend cleanup remote: [options] [<arguments>+] ```console
rclone backend cleanup remote: [options] [<arguments>+]
```
This command removes unfinished large file uploads of age greater than This command removes unfinished large file uploads of age greater than
max-age, which defaults to 24 hours. max-age, which defaults to 24 hours.
@@ -760,31 +837,35 @@ max-age, which defaults to 24 hours.
Note that you can use --interactive/-i or --dry-run with this command to see what Note that you can use --interactive/-i or --dry-run with this command to see what
it would do. it would do.
rclone backend cleanup b2:bucket/path/to/object ```console
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object rclone backend cleanup b2:bucket/path/to/object
rclone backend cleanup -o max-age=7w b2:bucket/path/to/object
```
Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc. Durations are parsed as per the rest of rclone, 2h, 7d, 7w etc.
Options: Options:
- "max-age": Max age of upload to delete - "max-age": Max age of upload to delete.
### cleanup-hidden ### cleanup-hidden
Remove old versions of files. Remove old versions of files.
rclone backend cleanup-hidden remote: [options] [<arguments>+] ```console
rclone backend cleanup-hidden remote: [options] [<arguments>+]
```
This command removes any old hidden versions of files. This command removes any old hidden versions of files.
Note that you can use --interactive/-i or --dry-run with this command to see what Note that you can use --interactive/-i or --dry-run with this command to see what
it would do. it would do.
rclone backend cleanup-hidden b2:bucket/path/to/dir ```console
rclone backend cleanup-hidden b2:bucket/path/to/dir
```
<!-- autogenerated options stop -->
{{< rem autogenerated options stop >}}
## Limitations ## Limitations
@@ -793,6 +874,5 @@ this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union use policy `mfs` (most free space) as a member of an rclone union
remote. remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).


@@ -31,7 +31,7 @@ section) before using, or data loss can result. Questions can be asked in the
For example, your first command might look like this: For example, your first command might look like this:
```sh ```console
rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run rclone bisync remote1:path1 remote2:path2 --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case --resync --dry-run
``` ```
@@ -40,7 +40,7 @@ After that, remove `--resync` as well.
Here is a typical run log (with timestamps removed for clarity): Here is a typical run log (with timestamps removed for clarity):
```sh ```console
rclone bisync /testdir/path1/ /testdir/path2/ --verbose rclone bisync /testdir/path1/ /testdir/path2/ --verbose
INFO : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/" INFO : Synching Path1 "/testdir/path1/" with Path2 "/testdir/path2/"
INFO : Path1 checking for diffs INFO : Path1 checking for diffs
@@ -86,7 +86,7 @@ INFO : Bisync successful
## Command line syntax ## Command line syntax
```sh ```console
$ rclone bisync --help $ rclone bisync --help
Usage: Usage:
rclone bisync remote1:path1 remote2:path2 [flags] rclone bisync remote1:path1 remote2:path2 [flags]
@@ -169,7 +169,7 @@ be copied to Path1, and the process will then copy the Path1 tree to Path2.
The `--resync` sequence is roughly equivalent to the following The `--resync` sequence is roughly equivalent to the following
(but see [`--resync-mode`](#resync-mode) for other options): (but see [`--resync-mode`](#resync-mode) for other options):
```sh ```console
rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs] rclone copy Path2 Path1 --ignore-existing [--create-empty-src-dirs]
rclone copy Path1 Path2 [--create-empty-src-dirs] rclone copy Path1 Path2 [--create-empty-src-dirs]
``` ```
@@ -225,7 +225,7 @@ Shutdown](#graceful-shutdown) mode, when needed) for a very robust
almost any interruption it might encounter. Consider adding something like the almost any interruption it might encounter. Consider adding something like the
following: following:
```sh ```text
--resilient --recover --max-lock 2m --conflict-resolve newer --resilient --recover --max-lock 2m --conflict-resolve newer
``` ```
@@ -353,13 +353,13 @@ simultaneously (or just `modtime` AND `checksum`).
being `size`, `modtime`, and `checksum`. For example, if you want to compare being `size`, `modtime`, and `checksum`. For example, if you want to compare
size and checksum, but not modtime, you would do: size and checksum, but not modtime, you would do:
```sh ```text
--compare size,checksum --compare size,checksum
``` ```
Or if you want to compare all three: Or if you want to compare all three:
```sh ```text
--compare size,modtime,checksum --compare size,modtime,checksum
``` ```
@@ -627,7 +627,7 @@ specified (or when two identical suffixes are specified.) i.e. with
`--conflict-loser pathname`, all of the following would produce exactly the `--conflict-loser pathname`, all of the following would produce exactly the
same result: same result:
```sh ```text
--conflict-suffix path --conflict-suffix path
--conflict-suffix path,path --conflict-suffix path,path
--conflict-suffix path1,path2 --conflict-suffix path1,path2
@@ -642,7 +642,7 @@ changed with the [`--suffix-keep-extension`](/docs/#suffix-keep-extension) flag
curly braces as globs. This can be helpful to track the date and/or time that curly braces as globs. This can be helpful to track the date and/or time that
each conflict was handled by bisync. For example: each conflict was handled by bisync. For example:
```sh ```text
--conflict-suffix {DateOnly}-conflict --conflict-suffix {DateOnly}-conflict
// result: myfile.txt.2006-01-02-conflict1 // result: myfile.txt.2006-01-02-conflict1
``` ```
@@ -667,7 +667,7 @@ conflicts with `..path1` and `..path2` (with two periods, and `path` instead of
additional dots can be added by including them in the specified suffix string. additional dots can be added by including them in the specified suffix string.
For example, for behavior equivalent to the previous default, use: For example, for behavior equivalent to the previous default, use:
```sh ```text
[--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path [--conflict-resolve none] --conflict-loser pathname --conflict-suffix .path
``` ```
@@ -707,13 +707,13 @@ For example, a possible sequence could look like this:
1. Normally scheduled bisync run: 1. Normally scheduled bisync run:
```sh ```console
rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient rclone bisync Path1 Path2 -MPc --check-access --max-delete 10 --filters-file /path/to/filters.txt -v --no-cleanup --ignore-listing-checksum --disable ListR --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient
``` ```
2. Periodic independent integrity check (perhaps scheduled nightly or weekly): 2. Periodic independent integrity check (perhaps scheduled nightly or weekly):
```sh ```console
rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt rclone check -MvPc Path1 Path2 --filter-from /path/to/filters.txt
``` ```
@@ -721,7 +721,7 @@ For example, a possible sequence could look like this:
If one side is more up-to-date and you want to make the other side match it, If one side is more up-to-date and you want to make the other side match it,
you could run: you could run:
```sh ```console
rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v rclone sync Path1 Path2 --filter-from /path/to/filters.txt --create-empty-src-dirs -MPc -v
``` ```
@@ -851,7 +851,7 @@ override `--backup-dir`.
Example: Example:
```sh ```console
rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case rclone bisync /Users/someuser/some/local/path/Bisync gdrive:Bisync --backup-dir1 /Users/someuser/some/local/path/BackupDir --backup-dir2 gdrive:BackupDir --suffix -2023-08-26 --suffix-keep-extension --check-access --max-delete 10 --filters-file /Users/someuser/some/local/path/bisync_filters.txt --no-cleanup --ignore-listing-checksum --checkers=16 --drive-pacer-min-sleep=10ms --create-empty-src-dirs --resilient -MvP --drive-skip-gdocs --fix-case
``` ```
@@ -1047,20 +1047,16 @@ encodings.)
The following backends have known issues that need more investigation: The following backends have known issues that need more investigation:
<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs ---> <!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestGoFile` (`gofile`) - `TestDropbox` (`dropbox`)
- [`TestBisyncRemoteLocal/all_changed`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - [`TestBisyncRemoteRemote/normalization`](https://pub.rclone.org/integration-tests/current/dropbox-cmd.bisync-TestDropbox-1.txt)
- [`TestBisyncRemoteLocal/backupdir`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt) - Updated: 2025-11-21-010037
- [`TestBisyncRemoteLocal/basic`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/changes`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [`TestBisyncRemoteLocal/check_access`](https://pub.rclone.org/integration-tests/current/gofile-cmd.bisync-TestGoFile-1.txt)
- [78 more](https://pub.rclone.org/integration-tests/current/)
- Updated: 2025-08-21-010015
<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs ---> <!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->
The following backends either have not been tested recently or have known issues The following backends either have not been tested recently or have known issues
that are deemed unfixable for the time being: that are deemed unfixable for the time being:
<!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs ---> <!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->
- `TestArchive` (`archive`)
- `TestCache` (`cache`) - `TestCache` (`cache`)
- `TestFileLu` (`filelu`) - `TestFileLu` (`filelu`)
- `TestFilesCom` (`filescom`) - `TestFilesCom` (`filescom`)
@@ -1383,7 +1379,7 @@ listings and thus not checked during the check access phase.
Here are two normal runs. The first one has a newer file on the remote. Here are two normal runs. The first one has a newer file on the remote.
The second has no deltas between local and remote. The second has no deltas between local and remote.
```sh ```text
2021/05/16 00:24:38 INFO : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/" 2021/05/16 00:24:38 INFO : Synching Path1 "/path/to/local/tree/" with Path2 "dropbox:/"
2021/05/16 00:24:38 INFO : Path1 checking for diffs 2021/05/16 00:24:38 INFO : Path1 checking for diffs
2021/05/16 00:24:38 INFO : - Path1 File is new - file.txt 2021/05/16 00:24:38 INFO : - Path1 File is new - file.txt
@@ -1433,7 +1429,7 @@ numerous such messages in the log.
Since there are no final error/warning messages on line *7*, rclone has Since there are no final error/warning messages on line *7*, rclone has
recovered from failure after a retry, and the overall sync was successful. recovered from failure after a retry, and the overall sync was successful.
```sh ```text
1: 2021/05/14 00:44:12 INFO : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:" 1: 2021/05/14 00:44:12 INFO : Synching Path1 "/path/to/local/tree" with Path2 "dropbox:"
2: 2021/05/14 00:44:12 INFO : Path1 checking for diffs 2: 2021/05/14 00:44:12 INFO : Path1 checking for diffs
3: 2021/05/14 00:44:12 INFO : Path2 checking for diffs 3: 2021/05/14 00:44:12 INFO : Path2 checking for diffs
@@ -1446,7 +1442,7 @@ recovered from failure after a retry, and the overall sync was successful.
This log shows a *Critical failure* which requires a `--resync` to recover from. This log shows a *Critical failure* which requires a `--resync` to recover from.
See the [Runtime Error Handling](#error-handling) section. See the [Runtime Error Handling](#error-handling) section.
```sh ```text
2021/05/12 00:49:40 INFO : Google drive root '': Waiting for checks to finish 2021/05/12 00:49:40 INFO : Google drive root '': Waiting for checks to finish
2021/05/12 00:49:40 INFO : Google drive root '': Waiting for transfers to finish 2021/05/12 00:49:40 INFO : Google drive root '': Waiting for transfers to finish
2021/05/12 00:49:40 INFO : Google drive root '': not deleting files as there were IO errors 2021/05/12 00:49:40 INFO : Google drive root '': not deleting files as there were IO errors
@@ -1531,7 +1527,7 @@ on Linux you can use *Cron* which is described below.
The 1st example runs a sync every 5 minutes between a local directory The 1st example runs a sync every 5 minutes between a local directory
and an OwnCloud server, with output logged to a runlog file: and an OwnCloud server, with output logged to a runlog file:
```sh ```text
# Minute (0-59) # Minute (0-59)
# Hour (0-23) # Hour (0-23)
# Day of Month (1-31) # Day of Month (1-31)
@@ -1548,7 +1544,7 @@ If you run `rclone bisync` as a cron job, redirect stdout/stderr to a file.
The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the `>>`) The 2nd example runs a sync to Dropbox every hour and logs all stdout (via the `>>`)
and stderr (via `2>&1`) to a log file. and stderr (via `2>&1`) to a log file.
```sh ```text
0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1 0 * * * * /path/to/rclone bisync /path/to/local/dropbox Dropbox: --check-access --filters-file /home/user/filters.txt >> /path/to/logs/dropbox-run.log 2>&1
``` ```
@@ -1630,7 +1626,7 @@ Rerunning the test will let it pass. Consider such failures as noise.
### Test command syntax ### Test command syntax
```sh ```text
usage: go test ./cmd/bisync [options...] usage: go test ./cmd/bisync [options...]
Options: Options:


@@ -18,7 +18,7 @@ to use JWT authentication. `rclone config` walks you through it.
Here is an example of how to make a remote called `remote`. First run: Here is an example of how to make a remote called `remote`. First run:
```sh ```console
rclone config rclone config
``` ```
@@ -92,23 +92,26 @@ your browser to the moment you get back the verification code. This
is on `http://127.0.0.1:53682/` and this may require you to unblock is on `http://127.0.0.1:53682/` and this may require you to unblock
it temporarily if you are running a host firewall. it temporarily if you are running a host firewall.
Once configured you can then use `rclone` like this, Once configured you can then use `rclone` like this (replace `remote` with the
name you gave your remote):
List directories in top level of your Box List directories in top level of your Box
```sh ```console
rclone lsd remote: rclone lsd remote:
``` ```
List all the files in your Box List all the files in your Box
```sh ```console
rclone ls remote: rclone ls remote:
``` ```
To copy a local directory to an Box directory called backup To copy a local directory to an Box directory called backup
rclone copy /home/source remote:backup ```console
rclone copy /home/source remote:backup
```
### Using rclone with an Enterprise account with SSO ### Using rclone with an Enterprise account with SSO
@@ -144,7 +147,7 @@ did the authentication on.
Here is how to do it. Here is how to do it.
```sh ```console
$ rclone config $ rclone config
Current remotes: Current remotes:
@@ -248,8 +251,8 @@ either be actually deleted from Box or moved to the trash.
Emptying the trash is supported via the rclone however cleanup command Emptying the trash is supported via the rclone `cleanup` command;
however this deletes every trashed file and folder individually so it however, this deletes every trashed file and folder individually so it
may take a very long time. may take a very long time.
Emptying the trash via the WebUI does not have this limitation Emptying the trash via the WebUI does not have this limitation
so it is advised to empty the trash via the WebUI. so it is advised to empty the trash via the WebUI.
### Root folder ID ### Root folder ID
@@ -274,7 +277,7 @@ So if the folder you want rclone to use has a URL which looks like
in the browser, then you use `11xxxxxxxxx8` as in the browser, then you use `11xxxxxxxxx8` as
the `root_folder_id` in the config. the `root_folder_id` in the config.
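A minimal sketch of setting this non-interactively, assuming your remote is
called `remote`:

```console
rclone config update remote root_folder_id 11xxxxxxxxx8
```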
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go then run make backenddocs" >}} <!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/box/box.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options ### Standard options
Here are the Standard options specific to box (Box). Here are the Standard options specific to box (Box).
@@ -320,6 +323,19 @@ Properties:
- Type: string - Type: string
- Required: false - Required: false
#### --box-config-credentials
Box App config.json contents.
Leave blank normally.
Properties:
- Config: config_credentials
- Env Var: RCLONE_BOX_CONFIG_CREDENTIALS
- Type: string
- Required: false
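As an illustrative sketch only (the remote name and file path are
placeholders), the config.json contents could be supplied when creating a
remote like this:

```console
rclone config create mybox box config_credentials "$(cat /path/to/box-app-config.json)"
```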
#### --box-access-token #### --box-access-token
Box App Primary Access Token Box App Primary Access Token
@@ -344,10 +360,10 @@ Properties:
- Type: string - Type: string
- Default: "user" - Default: "user"
- Examples: - Examples:
- "user" - "user"
- Rclone should act on behalf of a user. - Rclone should act on behalf of a user.
- "enterprise" - "enterprise"
- Rclone should act on behalf of a service account. - Rclone should act on behalf of a service account.
### Advanced options ### Advanced options
@@ -506,7 +522,7 @@ Properties:
- Type: string - Type: string
- Required: false - Required: false
{{< rem autogenerated options stop >}} <!-- autogenerated options stop -->
## Limitations ## Limitations
@@ -519,14 +535,16 @@ Reverse Solidus).
Box only supports filenames up to 255 characters in length. Box only supports filenames up to 255 characters in length.
Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/) that sometimes reduce the speed of rclone. Box has [API rate limits](https://developer.box.com/guides/api-calls/permissions-and-errors/rate-limits/)
that sometimes reduce the speed of rclone.
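If you hit these limits, one possible mitigation (a suggestion rather than an
official recommendation) is to throttle rclone's request rate with the global
`--tpslimit` flag, for example:

```console
rclone copy /home/source remote:backup --tpslimit 8
```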
`rclone about` is not supported by the Box backend. Backends without `rclone about` is not supported by the Box backend. Backends without
this capability cannot determine free space for an rclone mount or this capability cannot determine free space for an rclone mount or
use policy `mfs` (most free space) as a member of an rclone union use policy `mfs` (most free space) as a member of an rclone union
remote. remote.
See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features) and [rclone about](https://rclone.org/commands/rclone_about/) See [List of backends that do not support rclone about](https://rclone.org/overview/#optional-features)
and [rclone about](https://rclone.org/commands/rclone_about/).
## Get your own Box App ID ## Get your own Box App ID


@@ -31,7 +31,7 @@ with `cache`.
Here is an example of how to make a remote called `test-cache`. First run: Here is an example of how to make a remote called `test-cache`. First run:
```sh ```console
rclone config rclone config
``` ```
@@ -117,19 +117,19 @@ You can then use it like this,
List directories in top level of your drive List directories in top level of your drive
```sh ```console
rclone lsd test-cache: rclone lsd test-cache:
``` ```
List all the files in your drive List all the files in your drive
```sh ```console
rclone ls test-cache: rclone ls test-cache:
``` ```
To start a cached mount To start a cached mount
```sh ```console
rclone mount --allow-other test-cache: /var/tmp/test-cache rclone mount --allow-other test-cache: /var/tmp/test-cache
``` ```
@@ -325,7 +325,7 @@ Params:
- **withData** = true/false to delete cached data (chunks) as - **withData** = true/false to delete cached data (chunks) as
well *(optional, false by default)* well *(optional, false by default)*
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go then run make backenddocs" >}} <!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/cache/cache.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options ### Standard options
Here are the Standard options specific to cache (Cache a remote). Here are the Standard options specific to cache (Cache a remote).
@@ -394,12 +394,12 @@ Properties:
- Type: SizeSuffix - Type: SizeSuffix
- Default: 5Mi - Default: 5Mi
- Examples: - Examples:
- "1M" - "1M"
- 1 MiB - 1 MiB
- "5M" - "5M"
- 5 MiB - 5 MiB
- "10M" - "10M"
- 10 MiB - 10 MiB
#### --cache-info-age #### --cache-info-age
@@ -414,12 +414,12 @@ Properties:
- Type: Duration - Type: Duration
- Default: 6h0m0s - Default: 6h0m0s
- Examples: - Examples:
- "1h" - "1h"
- 1 hour - 1 hour
- "24h" - "24h"
- 24 hours - 24 hours
- "48h" - "48h"
- 48 hours - 48 hours
#### --cache-chunk-total-size #### --cache-chunk-total-size
@@ -435,12 +435,12 @@ Properties:
- Type: SizeSuffix - Type: SizeSuffix
- Default: 10Gi - Default: 10Gi
- Examples: - Examples:
- "500M" - "500M"
- 500 MiB - 500 MiB
- "1G" - "1G"
- 1 GiB - 1 GiB
- "10G" - "10G"
- 10 GiB - 10 GiB
### Advanced options ### Advanced options
@@ -698,9 +698,11 @@ Properties:
Here are the commands specific to the cache backend. Here are the commands specific to the cache backend.
Run them with Run them with:
rclone backend COMMAND remote: ```console
rclone backend COMMAND remote:
```
The help below will explain what arguments each command takes. The help below will explain what arguments each command takes.
@@ -714,6 +716,8 @@ These can be run on a running backend using the rc command
Print stats on the cache backend in JSON format. Print stats on the cache backend in JSON format.
rclone backend stats remote: [options] [<arguments>+] ```console
rclone backend stats remote: [options] [<arguments>+]
```
{{< rem autogenerated options stop >}} <!-- autogenerated options stop -->


@@ -6,6 +6,130 @@ description: "Rclone Changelog"
# Changelog # Changelog
## v1.72.0 - 2025-11-21
[See commits](https://github.com/rclone/rclone/compare/v1.71.0...v1.72.0)
- New backends
- [Archive](/archive) backend to read archives on cloud storage. (Nick Craig-Wood)
- New S3 providers
- [Cubbit Object Storage](/s3/#Cubbit) (Marco Ferretti)
- [FileLu S5 Object Storage](/s3/#filelu-s5) (kingston125)
- [Hetzner Object Storage](/s3/#hetzner) (spiffytech)
- [Intercolo Object Storage](/s3/#intercolo) (Robin Rolf)
- [Rabata S3-compatible secure cloud storage](/s3/#Rabata) (dougal)
- [Servercore Object Storage](/s3/#servercore) (dougal)
- [SpectraLogic](/s3/#spectralogic) (dougal)
- New commands
- [rclone archive](/commands/rclone_archive/): command to create and read archive files (Fawzib Rojas)
- [rclone config string](/commands/rclone_config_string/): for making connection strings (Nick Craig-Wood)
- [rclone test speed](/commands/rclone_test_speed/): Add command to test a specified remote's speed (dougal)
- New Features
- backends: many backends have had a paged listing (`ListP`) interface added
- this enables progress when listing large directories and reduced memory usage
- build
- Bump golang.org/x/crypto from 0.43.0 to 0.45.0 to fix CVE-2025-58181 (dependabot[bot])
- Modernize code and tests (Nick Craig-Wood, russcoss, juejinyuxitu, reddaisyy, dulanting, Oleksandr Redko)
- Update all dependencies (Nick Craig-Wood)
- Enable support for `aix/ppc64` (Lakshmi-Surekha)
- check: Improved reporting of differences in sizes and contents (albertony)
- copyurl: Added `--url` to read URLs from CSV file (S-Pegg1, dougal)
- docs:
- markdown linting (albertony)
- fixes (albertony, Andrew Gunnerson, anon-pradip, Claudius Ellsel, dougal, iTrooz, Jean-Christophe Cura, Joseph Brownlee, kapitainsky, Matt LaPaglia, n4n5, Nick Craig-Wood, nielash, SublimePeace, Ted Robertson, vastonus)
- fs: remove unnecessary Seek call on log file (Aneesh Agrawal)
- hashsum: Improved output format when listing algorithms (albertony)
- lib/http: Cleanup indentation and other whitespace in http serve template (albertony)
- lsf: Add support for `unix` and `unixnano` time formats (Motte)
- oauthutil: Improved debug logs from token refresh (albertony)
- rc
- Add [job/batch](/rc/#job-batch) for sending batches of rc commands to run concurrently (Nick Craig-Wood)
- Add `runningIds` and `finishedIds` to [job/list](/rc/#job-list) (n4n5)
- Add `osVersion`, `osKernel` and `osArch` to [core/version](/rc/#core-version) (Nick Craig-Wood)
- Make sure fatal errors run via the rc don't crash rclone (Nick Craig-Wood)
- Add `executeId` to job statuses in [job/list](/rc/#job-list) (Nikolay Kiryanov)
- `config/unlock`: rename parameter to `configPassword` but accept the old name as well (Nick Craig-Wood)
- serve http: Download folders as zip (dougal)
- Bug Fixes
- build
- Fix tls: failed to verify certificate: x509: negative serial number (Nick Craig-Wood)
- march
- Fix `--no-traverse` being very slow (Nick Craig-Wood)
- serve s3: Fix log output to remove the EXTRA messages (iTrooz)
- Mount
- Windows: improve error message on missing WinFSP (divinity76)
- Local
- Add `--skip-specials` to ignore special files (Adam Dinwoodie)
- Azure Blob
- Add ListP interface (dougal)
- Azurefiles
- Add ListP interface (Nick Craig-Wood)
- B2
- Add ListP interface (dougal)
- Add Server-Side encryption support (fries1234)
- Fix "expected a FileSseMode but found: ''" (dougal)
- Allow individual old versions to be deleted with `--b2-versions` (dougal)
- Box
- Add ListP interface (Nick Craig-Wood)
- Allow configuration with config file contents (Dominik Sander)
- Compress
- Add zstd compression (Alex)
- Drive
- Add ListP interface (Nick Craig-Wood)
- Dropbox
- Add ListP interface (Nick Craig-Wood)
- Fix error moving just created objects (Nick Craig-Wood)
- FTP
- Fix SOCKS proxy support (dougal)
- Fix transfers from servers that return 250 ok messages (jijamik)
- Google Cloud Storage
- Add ListP interface (dougal)
- Fix `--gcs-storage-class` to work with server side copy for objects (Riaz Arbi)
- HTTP
- Add basic metadata and provide it via serve (Oleg Kunitsyn)
- Jottacloud
- Add support for Let's Go Cloud (from MediaMarkt) as a whitelabel service (albertony)
- Add support for MediaMarkt Cloud as a whitelabel service (albertony)
- Added support for traditional oauth authentication also for the main service (albertony)
- Abort attempts to run unsupported rclone authorize command (albertony)
- Improved token refresh handling (albertony)
- Fix legacy authentication (albertony)
- Fix authentication for whitelabel services from Elkjøp subsidiaries (albertony)
- Mega
- Implement 2FA login (iTrooz)
- Memory
- Add ListP interface (dougal)
- Onedrive
- Add ListP interface (Nick Craig-Wood)
- Oracle Object Storage
- Add ListP interface (dougal)
- Pcloud
- Add ListP interface (Nick Craig-Wood)
- Proton Drive
- Automated 2FA login with OTP secret key (Microscotch)
- S3
- Make it easier to add new S3 providers (dougal)
- Add `--s3-use-data-integrity-protections` quirk to fix BadDigest error in Alibaba, Tencent (hunshcn)
- Add support for `--upload-header`, `If-Match` and `If-None-Match` (Sean Turner)
- Fix single file copying behavior with low permission (hunshcn)
- SFTP
- Fix zombie SSH processes with `--sftp-ssh` (Copilot)
- Smb
- Optimize smb mount performance by avoiding stat checks during initialization (Sudipto Baral)
- Swift
- Add ListP interface (dougal)
- If storage_policy isn't set, use the root container's policy (Andrew Ruthven)
- Report disk usage in segment containers (Andrew Ruthven)
- Ulozto
- Implement the About functionality (Lukas Krejci)
- Fix downloads returning HTML error page (aliaj1)
- WebDAV
- Optimize bearer token fetching with singleflight (hunshcn)
- Add ListP interface (Nick Craig-Wood)
- Use SpaceSepList to parse bearer token command (hunshcn)
- Add `Access-Control-Max-Age` header for CORS preflight caching (viocha)
- Fix out of memory with sharepoint-ntlm when uploading large file (Nick Craig-Wood)
## v1.71.2 - 2025-10-20 ## v1.71.2 - 2025-10-20
[See commits](https://github.com/rclone/rclone/compare/v1.71.1...v1.71.2) [See commits](https://github.com/rclone/rclone/compare/v1.71.1...v1.71.2)


@@ -313,7 +313,7 @@ to keep rclone up-to-date to avoid data corruption.
Changing `transactions` is dangerous and requires explicit migration. Changing `transactions` is dangerous and requires explicit migration.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/chunker/chunker.go then run make backenddocs" >}} <!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/chunker/chunker.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options ### Standard options
Here are the Standard options specific to chunker (Transparently chunk/split large files). Here are the Standard options specific to chunker (Transparently chunk/split large files).
@@ -356,22 +356,22 @@ Properties:
- Type: string - Type: string
- Default: "md5" - Default: "md5"
- Examples: - Examples:
- "none" - "none"
- Pass any hash supported by wrapped remote for non-chunked files. - Pass any hash supported by wrapped remote for non-chunked files.
- Return nothing otherwise. - Return nothing otherwise.
- "md5" - "md5"
- MD5 for composite files. - MD5 for composite files.
- "sha1" - "sha1"
- SHA1 for composite files. - SHA1 for composite files.
- "md5all" - "md5all"
- MD5 for all files. - MD5 for all files.
- "sha1all" - "sha1all"
- SHA1 for all files. - SHA1 for all files.
- "md5quick" - "md5quick"
- Copying a file to chunker will request MD5 from the source. - Copying a file to chunker will request MD5 from the source.
- Falling back to SHA1 if unsupported. - Falling back to SHA1 if unsupported.
- "sha1quick" - "sha1quick"
- Similar to "md5quick" but prefers SHA1 over MD5. - Similar to "md5quick" but prefers SHA1 over MD5.
### Advanced options ### Advanced options
@@ -421,13 +421,13 @@ Properties:
- Type: string - Type: string
- Default: "simplejson" - Default: "simplejson"
- Examples: - Examples:
- "none" - "none"
- Do not use metadata files at all. - Do not use metadata files at all.
- Requires hash type "none". - Requires hash type "none".
- "simplejson" - "simplejson"
- Simple JSON supports hash sums and chunk validation. - Simple JSON supports hash sums and chunk validation.
- -
- It has the following fields: ver, size, nchunks, md5, sha1. - It has the following fields: ver, size, nchunks, md5, sha1.
#### --chunker-fail-hard #### --chunker-fail-hard
@@ -440,10 +440,10 @@ Properties:
- Type: bool - Type: bool
- Default: false - Default: false
- Examples: - Examples:
- "true" - "true"
- Report errors and abort current command. - Report errors and abort current command.
- "false" - "false"
- Warn user, skip incomplete file and proceed. - Warn user, skip incomplete file and proceed.
#### --chunker-transactions #### --chunker-transactions
@@ -456,19 +456,19 @@ Properties:
- Type: string - Type: string
- Default: "rename" - Default: "rename"
- Examples: - Examples:
- "rename" - "rename"
- Rename temporary files after a successful transaction. - Rename temporary files after a successful transaction.
- "norename" - "norename"
- Leave temporary file names and write transaction ID to metadata file. - Leave temporary file names and write transaction ID to metadata file.
- Metadata is required for no rename transactions (meta format cannot be "none"). - Metadata is required for no rename transactions (meta format cannot be "none").
- If you are using norename transactions you should be careful not to downgrade Rclone - If you are using norename transactions you should be careful not to downgrade Rclone
- as older versions of Rclone don't support this transaction style and will misinterpret - as older versions of Rclone don't support this transaction style and will misinterpret
- files manipulated by norename transactions. - files manipulated by norename transactions.
- This method is EXPERIMENTAL, don't use on production systems. - This method is EXPERIMENTAL, don't use on production systems.
- "auto" - "auto"
- Rename or norename will be used depending on capabilities of the backend. - Rename or norename will be used depending on capabilities of the backend.
- If meta format is set to "none", rename transactions will always be used. - If meta format is set to "none", rename transactions will always be used.
- This method is EXPERIMENTAL, don't use on production systems. - This method is EXPERIMENTAL, don't use on production systems.
#### --chunker-description #### --chunker-description
@@ -481,4 +481,4 @@ Properties:
- Type: string - Type: string
- Required: false - Required: false
{{< rem autogenerated options stop >}} <!-- autogenerated options stop -->


@@ -38,7 +38,7 @@ from the developer section.
Now run Now run
```sh ```console
rclone config rclone config
``` ```
@@ -113,19 +113,19 @@ y/e/d> y
List directories in the top level of your Media Library List directories in the top level of your Media Library
```sh ```console
rclone lsd cloudinary-media-library: rclone lsd cloudinary-media-library:
``` ```
Make a new directory. Make a new directory.
```sh ```console
rclone mkdir cloudinary-media-library:directory rclone mkdir cloudinary-media-library:directory
``` ```
List the contents of a directory. List the contents of a directory.
```sh ```console
rclone ls cloudinary-media-library:directory rclone ls cloudinary-media-library:directory
``` ```
@@ -133,7 +133,7 @@ rclone ls cloudinary-media-library:directory
Cloudinary stores md5 and timestamps for any successful Put automatically and read-only. Cloudinary stores md5 and timestamps for any successful Put automatically and read-only.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/cloudinary/cloudinary.go then run make backenddocs" >}} <!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/cloudinary/cloudinary.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options ### Standard options
Here are the Standard options specific to cloudinary (Cloudinary). Here are the Standard options specific to cloudinary (Cloudinary).
@@ -254,4 +254,4 @@ Properties:
- Type: string - Type: string
- Required: false - Required: false
{{< rem autogenerated options stop >}} <!-- autogenerated options stop -->


@@ -11,7 +11,7 @@ tree.
For example you might have a remote for images on one provider: For example you might have a remote for images on one provider:
```sh ```console
$ rclone tree s3:imagesbucket $ rclone tree s3:imagesbucket
/ /
├── image1.jpg ├── image1.jpg
@@ -20,7 +20,7 @@ $ rclone tree s3:imagesbucket
And a remote for files on another: And a remote for files on another:
```sh ```console
$ rclone tree drive:important/files $ rclone tree drive:important/files
/ /
├── file1.txt ├── file1.txt
@@ -30,7 +30,7 @@ $ rclone tree drive:important/files
The `combine` backend can join these together into a synthetic The `combine` backend can join these together into a synthetic
directory structure like this: directory structure like this:
```sh ```console
$ rclone tree combined: $ rclone tree combined:
/ /
├── files ├── files
@@ -57,7 +57,7 @@ either be a local paths or other remotes.
Here is an example of how to make a combine called `remote` for the Here is an example of how to make a combine called `remote` for the
example above. First run: example above. First run:
```sh ```console
rclone config rclone config
``` ```
@@ -107,7 +107,7 @@ the shared drives you have access to.
Assuming your main (non shared drive) Google drive remote is called Assuming your main (non shared drive) Google drive remote is called
`drive:` you would run `drive:` you would run
```sh ```console
rclone backend -o config drives drive: rclone backend -o config drives drive:
``` ```
@@ -133,7 +133,7 @@ with the `AllDrives:` remote.
See [the Google Drive docs](/drive/#drives) for full info. See [the Google Drive docs](/drive/#drives) for full info.
{{< rem autogenerated options start" - DO NOT EDIT - instead edit fs.RegInfo in backend/combine/combine.go then run make backenddocs" >}} <!-- autogenerated options start - DO NOT EDIT - instead edit fs.RegInfo in backend/combine/combine.go and run make backenddocs to verify --> <!-- markdownlint-disable-line line-length -->
### Standard options ### Standard options
Here are the Standard options specific to combine (Combine several remotes into one). Here are the Standard options specific to combine (Combine several remotes into one).
@@ -183,4 +183,4 @@ Any metadata supported by the underlying remote is read and written.
See the [metadata](/docs/#metadata) docs for more info. See the [metadata](/docs/#metadata) docs for more info.
{{< rem autogenerated options stop >}} <!-- autogenerated options stop -->


@@ -15,8 +15,6 @@ mounting them, listing them in lots of different ways.
See the home page (https://rclone.org/) for installation, usage, See the home page (https://rclone.org/) for installation, usage,
documentation, changelog and configuration walkthroughs. documentation, changelog and configuration walkthroughs.
``` ```
rclone [flags] rclone [flags]
``` ```
@@ -26,6 +24,8 @@ rclone [flags]
``` ```
--alias-description string Description of the remote --alias-description string Description of the remote
--alias-remote string Remote or path to alias --alias-remote string Remote or path to alias
--archive-description string Description of the remote
--archive-remote string Remote to wrap to read archives from
--ask-password Allow prompt for password for encrypted configuration (default true) --ask-password Allow prompt for password for encrypted configuration (default true)
--auto-confirm If enabled, do not request console confirmation --auto-confirm If enabled, do not request console confirmation
--azureblob-access-tier string Access tier of blob: hot, cool, cold or archive --azureblob-access-tier string Access tier of blob: hot, cool, cold or archive
@@ -105,6 +105,10 @@ rclone [flags]
--b2-hard-delete Permanently delete files on remote removal, otherwise hide files --b2-hard-delete Permanently delete files on remote removal, otherwise hide files
--b2-key string Application Key --b2-key string Application Key
--b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket --b2-lifecycle int Set the number of days deleted files should be kept when creating a bucket
--b2-sse-customer-algorithm string If using SSE-C, the server-side encryption algorithm used when storing this object in B2
--b2-sse-customer-key string To use SSE-C, you may provide the secret encryption key encoded in a UTF-8 compatible string to encrypt/decrypt your data
--b2-sse-customer-key-base64 string To use SSE-C, you may provide the secret encryption key encoded in Base64 format to encrypt/decrypt your data
--b2-sse-customer-key-md5 string If using SSE-C you may provide the secret encryption key MD5 checksum (optional)
--b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging --b2-test-mode string A flag string for X-Bz-Test-Mode header for debugging
--b2-upload-concurrency int Concurrency for multipart uploads (default 4) --b2-upload-concurrency int Concurrency for multipart uploads (default 4)
--b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi) --b2-upload-cutoff SizeSuffix Cutoff for switching to chunked upload (default 200Mi)
@@ -181,7 +185,7 @@ rclone [flags]
--combine-upstreams SpaceSepList Upstreams for combining --combine-upstreams SpaceSepList Upstreams for combining
--compare-dest stringArray Include additional server-side paths during comparison --compare-dest stringArray Include additional server-side paths during comparison
--compress-description string Description of the remote --compress-description string Description of the remote
--compress-level int GZIP compression level (-2 to 9) (default -1) --compress-level string GZIP (levels -2 to 9):
--compress-mode string Compression mode (default "gzip") --compress-mode string Compression mode (default "gzip")
--compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi) --compress-ram-cache-limit SizeSuffix Some remotes don't allow the upload of files with unknown size (default 20Mi)
--compress-remote string Remote to compress --compress-remote string Remote to compress
@@ -549,6 +553,7 @@ rclone [flags]
--max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off) --max-size SizeSuffix Only transfer files smaller than this in KiB or suffix B|K|M|G|T|P (default off)
--max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000) --max-stats-groups int Maximum number of stats groups to keep in memory, on max oldest is discarded (default 1000)
--max-transfer SizeSuffix Maximum size of data to transfer (default off) --max-transfer SizeSuffix Maximum size of data to transfer (default off)
--mega-2fa string The 2FA code of your MEGA account if the account is set up with one
--mega-debug Output more debug from Mega --mega-debug Output more debug from Mega
--mega-description string Description of the remote --mega-description string Description of the remote
--mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot) --mega-encoding Encoding The encoding for the backend (default Slash,InvalidUtf8,Dot)
@@ -715,6 +720,7 @@ rclone [flags]
  --protondrive-encoding Encoding The encoding for the backend (default Slash,LeftSpace,RightSpace,InvalidUtf8,Dot)
  --protondrive-mailbox-password string The mailbox password of your two-password proton account (obscured)
  --protondrive-original-file-size Return the file size before encryption (default true)
+ --protondrive-otp-secret-key string The OTP secret key (obscured)
  --protondrive-password string The password of your proton account (obscured)
  --protondrive-replace-existing-draft Create a new revision when filename conflict is detected
  --protondrive-username string The username of your proton account
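`--protondrive-otp-secret-key` is marked as obscured, so like the other proton credentials the value would normally be run through `rclone obscure` first. A minimal sketch, assuming the secret is passed on the command line; the secret value and the remote name are placeholders:

```sh
# Obscure the OTP secret first, then pass it via the new flag.
# "ABCDEF123456" and "proton:" are placeholders.
OTP_OBSCURED="$(rclone obscure ABCDEF123456)"
rclone ls proton: --protondrive-otp-secret-key "$OTP_OBSCURED"
```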
@@ -831,6 +837,7 @@ rclone [flags]
  --s3-use-accept-encoding-gzip Accept-Encoding: gzip Whether to send Accept-Encoding: gzip header (default unset)
  --s3-use-already-exists Tristate Set if rclone should report BucketAlreadyExists errors on bucket creation (default unset)
  --s3-use-arn-region If true, enables arn region support for the service
+ --s3-use-data-integrity-protections Tristate If true use AWS S3 data integrity protections (default unset)
  --s3-use-dual-stack If true use AWS S3 dual-stack endpoint (IPv6 support)
  --s3-use-multipart-etag Tristate Whether to use ETag in multipart uploads for verification (default unset)
  --s3-use-multipart-uploads Tristate Set if rclone should use multipart uploads (default unset)
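The new `--s3-use-data-integrity-protections` flag is a tristate, so it can be left unset or forced on or off per invocation. For example (the bucket path is a placeholder):

```sh
# Explicitly enable the AWS S3 data integrity protections for this transfer.
# "s3:my-bucket/data" is a placeholder path.
rclone copy ./data s3:my-bucket/data --s3-use-data-integrity-protections=true
```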
@@ -915,6 +922,7 @@ rclone [flags]
  --sia-user-agent string Siad User Agent (default "Sia-Agent")
  --size-only Skip based on size only, not modtime or checksum
  --skip-links Don't warn about skipped symlinks
+ --skip-specials Don't warn about skipped pipes, sockets and device objects
  --smb-case-insensitive Whether the server is configured to be case-insensitive (default true)
  --smb-description string Description of the remote
  --smb-domain string Domain name for NTLM authentication (default "WORKGROUP")
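`--skip-specials` complements `--skip-links`: it only silences the warnings about special files, which are skipped in any case. For example (the destination is a placeholder):

```sh
# Sync a tree containing sockets and pipes without a warning per skipped file.
# "remote:backup" is a placeholder.
rclone sync /srv/app remote:backup --skip-links --skip-specials
```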
@@ -1015,7 +1023,7 @@ rclone [flags]
  --use-json-log Use json log format
  --use-mmap Use mmap allocator (see docs)
  --use-server-modtime Use server modified time instead of object metadata
- --user-agent string Set the user-agent to a specified string (default "rclone/v1.71.0")
+ --user-agent string Set the user-agent to a specified string (default "rclone/v1.72.0")
  -v, --verbose count Print lots more stuff (repeat for more)
  -V, --version Print the version number
  --webdav-auth-redirect Preserve authentication on redirect
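Only the default value changes here (the version string tracks the release), and it can still be overridden as before; the remote and the string are placeholders:

```sh
# Override the default "rclone/v1.72.0" user agent for this invocation.
rclone ls remote: --user-agent "my-backup-tool/1.0"
```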
@@ -1057,7 +1065,11 @@ rclone [flags]
  ## See Also
+ <!-- markdownlint-capture -->
+ <!-- markdownlint-disable ul-style line-length -->
  * [rclone about](/commands/rclone_about/) - Get quota information from the remote.
+ * [rclone archive](/commands/rclone_archive/) - Perform an action on an archive.
  * [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
  * [rclone backend](/commands/rclone_backend/) - Run a backend-specific command.
  * [rclone bisync](/commands/rclone_bisync/) - Perform bidirectional synchronization between two paths.
@@ -1111,3 +1123,5 @@ rclone [flags]
  * [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
  * [rclone version](/commands/rclone_version/) - Show the version number.
+ <!-- markdownlint-restore -->

Some files were not shown because too many files have changed in this diff.