mirror of https://github.com/rclone/rclone.git synced 2026-01-07 02:54:04 +00:00

Compare commits


37 Commits

Author SHA1 Message Date
Nick Craig-Wood
4133a197bc Version v1.70.3 2025-07-09 10:51:25 +01:00
Nick Craig-Wood
a30a4909fe azureblob: fix server side copy error "requires exactly one scope"
Before this change, if not using shared key or SAS URL authentication
for the source, rclone gave this error

    ManagedIdentityCredential.GetToken() requires exactly one scope

when doing server side copies.

This was introduced in:

3a5ddfcd3c azureblob: implement multipart server side copy

This fixes the problem by creating a temporary SAS URL using user
delegation to read the source blob when copying.

Fixes #8662
2025-07-09 10:32:12 +01:00
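As a rough illustration of the user delegation approach in the commit above, here is a minimal Go sketch using the `service` and `sas` packages of the Azure SDK for Go; the helper name, expiry window and final URL assembly are illustrative, not rclone's actual code:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/sas"
	"github.com/Azure/azure-sdk-for-go/sdk/storage/azblob/service"
)

// tempReadSASURL mints a short-lived user delegation SAS URL granting
// read-only access to one source blob, so the server side copy can
// authenticate to the source without needing a token scope of its own.
func tempReadSASURL(ctx context.Context, svc *service.Client, container, blob string) (string, error) {
	start := time.Now().UTC().Add(-10 * time.Second)
	expiry := start.Add(time.Hour)
	// Ask the service for a user delegation key valid for the window.
	udc, err := svc.GetUserDelegationCredential(ctx, service.KeyInfo{
		Start:  to.Ptr(start.Format(sas.TimeFormat)),
		Expiry: to.Ptr(expiry.Format(sas.TimeFormat)),
	}, nil)
	if err != nil {
		return "", err
	}
	// Sign read-only SAS query parameters for the single source blob.
	qp, err := sas.BlobSignatureValues{
		Protocol:      sas.ProtocolHTTPS,
		StartTime:     start,
		ExpiryTime:    expiry,
		Permissions:   to.Ptr(sas.BlobPermissions{Read: true}).String(),
		ContainerName: container,
		BlobName:      blob,
	}.SignWithUserDelegation(udc)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%s/%s/%s?%s", svc.URL(), container, blob, qp.Encode()), nil
}
```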
albertony
cdc6d22929 docs: explain the json log format in more detail 2025-07-09 10:32:12 +01:00
albertony
e319406f52 check: fix difference report (was reporting error counts) 2025-07-09 10:32:12 +01:00
Nick Craig-Wood
ac54cccced linkbox: fix upload error "user upload file not exist"
Linkbox have started issuing 302 redirects on some of their PUT
requests when rclone uploads a file.

This is problematic for several reasons:

1. This is the wrong redirect code - it should be 307 to preserve the method
2. Since Expect/100-Continue isn't supported, the whole body gets uploaded

This fixes the problem by first doing a HEAD request on the URL. This
will allow us to read the redirect Location and not upload the body to
the wrong place.

It should still work (albeit a little more inefficiently) if Linkbox
stop redirecting the PUT requests.

See: https://forum.rclone.org/t/linkbox-upload-error/51795
Fixes: #8606
2025-07-09 10:32:12 +01:00
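A sketch of the HEAD-first workaround described above, using only net/http; the client must not follow redirects itself so the Location header can be read, and the function and variable names here are illustrative:

```go
package main

import (
	"context"
	"net/http"
)

// resolveUploadURL sends a cheap HEAD first so any 302 can be read
// without uploading the body to the wrong place; the PUT then goes
// straight to the final location.
func resolveUploadURL(ctx context.Context, uploadURL string) (string, error) {
	client := &http.Client{
		// Surface the redirect instead of following it.
		CheckRedirect: func(*http.Request, []*http.Request) error {
			return http.ErrUseLastResponse
		},
	}
	req, err := http.NewRequestWithContext(ctx, http.MethodHead, uploadURL, nil)
	if err != nil {
		return "", err
	}
	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 && resp.StatusCode < 400 {
		loc, err := resp.Location() // resolved against the request URL
		if err != nil {
			return "", err
		}
		return loc.String(), nil // PUT the body here
	}
	return uploadURL, nil // no redirect: PUT to the original URL
}
```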
Nick Craig-Wood
4c4d366e29 march: fix deadlock when using --no-traverse - fixes #8656
This occurred whenever there were more than 100 files in the source due
to the output channel filling up.

The fix is not to use list.NewSorter but to take more care to output the
dst objects in the same order the src objects are delivered. As the
src objects are delivered sorted, no sorting is needed.

In order not to cause another deadlock, we need to send nil dst
objects which is safe since this adjusts the termination conditions
for the channels.

Thanks to @jeremy for the test script the Go tests are based on.
2025-07-09 10:32:12 +01:00
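A minimal, self-contained Go sketch of the lockstep emission described above; the types are simplified stand-ins for rclone's real ones. Sending a nil dst placeholder for every src object, instead of skipping it, keeps the receiver's termination condition simple and stops the channel backing up:

```go
package main

import "fmt"

type object struct{ remote string }

type pair struct{ src, dst *object }

// emitPairs sends one pair per src object, in src order. dst may be
// nil when the object only exists on the source side; the nil is a
// valid value, not a sentinel for closing the channel.
func emitPairs(srcs []*object, lookupDst func(string) *object, out chan<- pair) {
	defer close(out)
	for _, src := range srcs { // srcs arrive already sorted
		out <- pair{src: src, dst: lookupDst(src.remote)} // dst may be nil
	}
}

func main() {
	out := make(chan pair)
	srcs := []*object{{"a.txt"}, {"b.txt"}}
	go emitPairs(srcs, func(string) *object { return nil }, out)
	for p := range out {
		fmt.Println(p.src.remote, "dst missing:", p.dst == nil)
	}
}
```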
wiserain
64fc3d05ae pikpak: improve error handling for missing links and unrecoverable 500s
This commit improves error handling in two specific scenarios:

* Missing Download Links: A 5-second delay is introduced when a download
  link is missing, as low-level retries aren't enough. Empirically, it
  takes about 30s-1m for the link to become available. This resolves
  failed integration tests: backend: TestIntegration/FsMkdir/FsPutFiles/
  ObjectUpdate, vfs: TestFileReadAtNonZeroLength

* Unrecoverable 500 Errors: The shouldRetry method is updated to skip
  retries for 500 errors from "idx.shub.mypikpak.com" indicating "no
  record for gcid." These errors are non-recoverable, so retrying is futile.
2025-07-09 10:32:12 +01:00
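A sketch of both rules; the host, error text and delay come from the commit message, while the surrounding types are simplified from rclone's real ones:

```go
package main

import (
	"net/http"
	"strings"
	"time"
)

// shouldRetry treats a 500 from the gcid index host with
// "no record for gcid" in the error as permanent.
func shouldRetry(resp *http.Response, err error) (bool, error) {
	if resp != nil && resp.StatusCode == 500 && resp.Request != nil &&
		resp.Request.URL.Host == "idx.shub.mypikpak.com" &&
		err != nil && strings.Contains(err.Error(), "no record for gcid") {
		return false, err // unrecoverable: retrying is futile
	}
	return err != nil, err // otherwise fall back to normal retry logic
}

// For a missing download link, a fixed pause before re-requesting the
// link works where low-level retries do not.
func waitForLink() { time.Sleep(5 * time.Second) }
```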
WeidiDeng
90386efeb1 webdav: fix setting modtime to that of local object instead of remote
In this commit the source of the modtime was accidentally changed to the wrong object:

0b9671313b webdav: add an ownCloud Infinite Scale vendor that enables tus chunked upload support

This reverts that change and fixes the integration tests.
2025-07-09 10:32:12 +01:00
Davide Bizzarri
5f78b47295 fix: b2 versionAt read metadata 2025-07-09 10:32:12 +01:00
Nick Craig-Wood
775ee90fa5 Start v1.70.3-DEV development 2025-07-02 15:36:43 +01:00
Nick Craig-Wood
444392bf9c docs: fix filescom/filelu link mixup
See: https://forum.rclone.org/t/a-small-bug-in-rclone-documentation/51774
2025-07-02 15:35:18 +01:00
Nick Craig-Wood
d36259749f docs: update link for filescom 2025-06-30 11:10:31 +01:00
Nick Craig-Wood
4010380ea8 Version v1.70.2 2025-06-27 12:30:18 +01:00
Ali Zein Yousuf
c138e52a57 docs: update client ID instructions to current Azure AD portal - fixes #8027 2025-06-27 12:23:00 +01:00
necaran
e22ce597ad mega: fix tls handshake failure - fixes #8565
The cipher suites used by Mega's storage endpoints (https://github.com/meganz/webclient/issues/103)
are no longer enabled by default since Go 1.22 (https://tip.golang.org/doc/go1.22#minor_library_changes).
This change therefore assigns the cipher suites explicitly, including the one Mega needs.
2025-06-26 17:06:03 +01:00
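Go 1.22 dropped the RSA-key-exchange suites from the client defaults, so a host that still requires one must have it listed explicitly. A sketch of the idea with crypto/tls; the exact extra suite shown is an assumption for illustration:

```go
package main

import "crypto/tls"

// legacyCipherSuites returns Go's current secure defaults plus one
// RSA-key-exchange suite that was removed from the defaults in Go 1.22.
func legacyCipherSuites() []uint16 {
	var ids []uint16
	for _, cs := range tls.CipherSuites() { // the secure default set
		ids = append(ids, cs.ID)
	}
	return append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
}

// Usage: &tls.Config{CipherSuites: legacyCipherSuites()}
```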
Nick Craig-Wood
79bd9e7913 pacer: fix nil pointer deref in RetryError - fixes #8077
Before this change, if RetryAfterError was called with a nil err, then
its Error method would return this when wrapped in a fmt.Errorf
statement

    error %!v(PANIC=Error method: runtime error: invalid memory address or nil pointer dereference))

Looking at the code, it looks like RetryAfterError will usually be
called with a nil pointer, so this patch makes sure it has a sensible
error.
2025-06-26 17:05:37 +01:00
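A sketch of the guard, with simplified types; substituting a real error for nil means a later fmt.Errorf("error %v", err) can no longer panic inside the Error method. The placeholder text is an assumption, not rclone's actual wording:

```go
package main

import (
	"errors"
	"time"
)

type retryAfterErr struct {
	error
	retryAfter time.Duration
}

// RetryAfterError never stores a nil error, so wrapping the result
// with fmt.Errorf is always safe.
func RetryAfterError(err error, retryAfter time.Duration) error {
	if err == nil {
		err = errors.New("low level retry") // assumed placeholder text
	}
	return &retryAfterErr{error: err, retryAfter: retryAfter}
}
```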
nielash
32f9393ac8 convmv: fix moving to unicode-equivalent name - fixes #8634
Before this change, using convmv to convert filenames between NFD and NFC could
fail on certain backends (such as onedrive) that were insensitive to the
difference. This change fixes the issue by extending the existing
needsMoveCaseInsensitive logic for use in this scenario.
2025-06-26 17:05:37 +01:00
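The underlying problem is easy to see with golang.org/x/text/unicode/norm; this small sketch shows why two spellings of the same name are distinct to rclone but identical to a unicode-insensitive backend:

```go
package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	nfc := norm.NFC.String("é") // single code point U+00E9
	nfd := norm.NFD.String("é") // 'e' plus combining acute accent
	// Different byte sequences, so rclone sees two distinct names...
	fmt.Println(nfc == nfd) // false
	// ...but a unicode-insensitive backend maps both to one file, so a
	// direct move is refused and the case-insensitive two-step rename
	// (via a temporary name) is needed instead.
}
```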
nielash
f97c876eb1 convmv: make --dry-run logs less noisy
Before this change, convmv dry runs would log a SkipDestructive message for
every single object, even objects that would not really be moved during a real
run. This made it quite difficult to tell what would actually happen during the
real run. This change fixes that by returning silently in such cases (as would
happen during a real run).
2025-06-26 17:05:37 +01:00
nielash
9b43836e19 sync: avoid copying dir metadata to itself
In convmv, src and dst can point to the same directory. Unless a dir's name is
changing, we should leave it alone and not attempt to copy its metadata to
itself.
2025-06-26 17:05:37 +01:00
Nick Craig-Wood
ff817e8764 combine: fix directory not found errors with ListP interface - Fixes #8627
In

b1d774c2e3 combine: implement ListP interface

We introduced the ListP interface to the combine backend. This was
passing the wrong remote to the upstreams. This was picked up by the
integration tests but was ignored by accident.
2025-06-26 17:05:37 +01:00
Nick Craig-Wood
3c63dec849 local: fix --skip-links on Windows when skipping Junction points
Due to a change in Go enabled by the `go 1.22` directive in `go.mod`,
rclone has stopped skipping junction points ("My Documents" in
particular) if `--skip-links` is set on Windows.

This is because the output from os.Lstat has changed and junction
points are no longer marked with os.ModeSymlink but with
os.ModeIrregular instead.

This fix now skips os.ModeIrregular objects if --skip-links is set on
Windows only.

Fixes #8561
See: https://github.com/golang/go/issues/73827
2025-06-26 17:05:37 +01:00
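A sketch of the adjusted test; on Windows a junction point now surfaces from os.Lstat as os.ModeIrregular rather than os.ModeSymlink:

```go
package main

import (
	"os"
	"runtime"
)

// isLinkLike reports whether a file mode should be skipped under
// --skip-links.
func isLinkLike(mode os.FileMode) bool {
	if mode&os.ModeSymlink != 0 {
		return true // a genuine symlink on any OS
	}
	// Only treat irregular files as links on Windows, where they are
	// junction points; elsewhere irregular means something different.
	return runtime.GOOS == "windows" && mode&os.ModeIrregular != 0
}
```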
dependabot[bot]
33876c5806 build: bump github.com/go-chi/chi/v5 from 5.2.1 to 5.2.2 to fix GHSA-vrw8-fxc6-2r93
See: https://github.com/go-chi/chi/security/advisories/GHSA-vrw8-fxc6-2r93
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-26 17:05:37 +01:00
Nick Craig-Wood
fa3b444341 log: fix deadlock when using systemd logging - fixes #8621
In this commit the logging system was re-worked

dfa4d94827 fs: Remove github.com/sirupsen/logrus and replace with log/slog

Unfortunately the systemd logging was still using the plain log
package and this caused a deadlock as it was recursively calling the
logging package.

The fix was to use the dedicated systemd journal logging routines in
the process removing a TODO!
2025-06-26 17:05:37 +01:00
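A sketch of logging via the dedicated journal API (the go-systemd package), which writes to the journal socket directly instead of recursing back through the log package; the wrapper name is illustrative:

```go
package main

import (
	"fmt"

	"github.com/coreos/go-systemd/v22/journal"
)

// logToJournal writes directly to the systemd journal, so nothing
// re-enters the logging package and the deadlock cannot recur.
func logToJournal(msg string) error {
	if !journal.Enabled() {
		return fmt.Errorf("systemd journal not available")
	}
	return journal.Send(msg, journal.PriInfo, nil)
}
```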
Nick Craig-Wood
e5fc424955 docs: googlephotos: detail how to make your own client_id - fixes #8622 2025-06-26 17:05:37 +01:00
Nick Craig-Wood
06badeffa3 pikpak: fix uploads failing with "aws-chunked encoding is not supported" error
This downgrades the AWS SDK slightly (this is still an upgrade from
rclone v1.69.3) to work around a breakage in the upstream SDK when
used with pikpak. This isn't a long term solution - either they will
fix it upstream or we will implement a workaround.

See: https://github.com/aws/aws-sdk-go-v2/issues/3007
See: #8629
2025-06-26 16:58:02 +01:00
Nick Craig-Wood
eb71d1be18 Start v1.70.2-DEV development 2025-06-25 16:40:16 +01:00
Nick Craig-Wood
7506a3c84c docs: Remove Warp as a sponsor 2025-06-25 16:38:25 +01:00
Nick Craig-Wood
831abd3406 docs: add files.com as a Gold sponsor 2025-06-25 16:38:25 +01:00
Nick Craig-Wood
9c08cd80c7 docs: add links to SecureBuild docker image 2025-06-25 16:38:25 +01:00
Nick Craig-Wood
948db193a2 Version v1.70.1 2025-06-19 11:48:30 +01:00
Ed Craig-Wood
72bc3f5079 docs: DOI grammar error 2025-06-19 11:36:27 +01:00
albertony
bf8a428fbd docs: lib/transform: cleanup formatting 2025-06-19 11:36:27 +01:00
albertony
05cc6f829b lib/transform: avoid empty charmap entry 2025-06-19 11:36:27 +01:00
jinjingroad
af73833773 chore: fix function name
Signed-off-by: jinjingroad <jinjingroad@sina.com>
2025-06-19 11:36:27 +01:00
Nick Craig-Wood
3167a63780 convmv: fix spurious "error running command echo" on Windows
Before this change the help for convmv was generated by running the
examples each time rclone started up. Unfortunately this involved
running the echo command which did not work on Windows.

This pre-generates the help into `transform.md` and embeds it. It can
be re-generated with `go generate` which is a better solution.

See: https://forum.rclone.org/t/invoke-of-1-70-0-complains-of-echo-not-found/51618
2025-06-19 11:36:27 +01:00
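The pattern, roughly, assuming illustrative package and variable names: the help is rendered once by `go generate` on a developer machine and then compiled in, so startup never shells out to echo:

```go
package convmv

import _ "embed"

// transform.md is written by the generator invoked via `go generate`;
// at runtime the text is simply embedded in the binary.
//
//go:embed transform.md
var transformHelp string
```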
Ed Craig-Wood
1d9795daa6 docs: client-credentials is not supported by all backends 2025-06-19 11:36:27 +01:00
Nick Craig-Wood
03ea89adf0 Start v1.70.1-DEV development 2025-06-19 11:35:18 +01:00
368 changed files with 47596 additions and 81650 deletions

View File

@@ -23,18 +23,15 @@ jobs:
   build:
     if: inputs.manual || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name))
     timeout-minutes: 60
-    defaults:
-      run:
-        shell: bash
     strategy:
       fail-fast: false
       matrix:
-        job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.24']
+        job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.23']
         include:
           - job_name: linux
             os: ubuntu-latest
-            go: '>=1.25.0-rc.1'
+            go: '>=1.24.0-rc.1'
             gotags: cmount
             build_flags: '-include "^linux/"'
             check: true
@@ -45,14 +42,14 @@ jobs:
           - job_name: linux_386
             os: ubuntu-latest
-            go: '>=1.25.0-rc.1'
+            go: '>=1.24.0-rc.1'
             goarch: 386
             gotags: cmount
             quicktest: true
           - job_name: mac_amd64
             os: macos-latest
-            go: '>=1.25.0-rc.1'
+            go: '>=1.24.0-rc.1'
             gotags: 'cmount'
             build_flags: '-include "^darwin/amd64" -cgo'
             quicktest: true
@@ -61,14 +58,14 @@ jobs:
           - job_name: mac_arm64
             os: macos-latest
-            go: '>=1.25.0-rc.1'
+            go: '>=1.24.0-rc.1'
             gotags: 'cmount'
             build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
             deploy: true
           - job_name: windows
             os: windows-latest
-            go: '>=1.25.0-rc.1'
+            go: '>=1.24.0-rc.1'
             gotags: cmount
             cgo: '0'
             build_flags: '-include "^windows/"'
@@ -78,14 +75,14 @@ jobs:
           - job_name: other_os
             os: ubuntu-latest
-            go: '>=1.25.0-rc.1'
+            go: '>=1.24.0-rc.1'
             build_flags: '-exclude "^(windows/|darwin/|linux/)"'
             compile_all: true
             deploy: true
-          - job_name: go1.24
+          - job_name: go1.23
             os: ubuntu-latest
-            go: '1.24'
+            go: '1.23'
             quicktest: true
             racequicktest: true
@@ -95,17 +92,18 @@ jobs:
     steps:
       - name: Checkout
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - name: Install Go
-        uses: actions/setup-go@v6
+        uses: actions/setup-go@v5
         with:
           go-version: ${{ matrix.go }}
           check-latest: true
       - name: Set environment variables
+        shell: bash
         run: |
           echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
           echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV
@@ -114,6 +112,7 @@ jobs:
           if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi
       - name: Install Libraries on Linux
+        shell: bash
         run: |
           sudo modprobe fuse
           sudo chmod 666 /dev/fuse
@@ -123,6 +122,7 @@ jobs:
         if: matrix.os == 'ubuntu-latest'
       - name: Install Libraries on macOS
+        shell: bash
         run: |
           # https://github.com/Homebrew/brew/issues/15621#issuecomment-1619266788
          # https://github.com/orgs/Homebrew/discussions/4612#discussioncomment-6319008
@@ -151,6 +151,7 @@ jobs:
         if: matrix.os == 'windows-latest'
       - name: Print Go version and environment
+        shell: bash
         run: |
           printf "Using go at: $(which go)\n"
           printf "Go version: $(go version)\n"
@@ -162,24 +163,29 @@ jobs:
           env
       - name: Build rclone
+        shell: bash
         run: |
           make
       - name: Rclone version
+        shell: bash
         run: |
           rclone version
       - name: Run tests
+        shell: bash
         run: |
           make quicktest
         if: matrix.quicktest
       - name: Race test
+        shell: bash
         run: |
           make racequicktest
         if: matrix.racequicktest
       - name: Run librclone tests
+        shell: bash
         run: |
           make -C librclone/ctest test
           make -C librclone/ctest clean
@@ -187,12 +193,14 @@ jobs:
         if: matrix.librclonetest
       - name: Compile all architectures test
+        shell: bash
         run: |
           make
           make compile_all
         if: matrix.compile_all
       - name: Deploy built binaries
+        shell: bash
         run: |
           if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then make release_dep_linux ; fi
           make ci_beta
@@ -211,20 +219,21 @@ jobs:
     steps:
       - name: Get runner parameters
         id: get-runner-parameters
+        shell: bash
         run: |
           echo "year-week=$(/bin/date -u "+%Y%V")" >> $GITHUB_OUTPUT
           echo "runner-os-version=$ImageOS" >> $GITHUB_OUTPUT
       - name: Checkout
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - name: Install Go
         id: setup-go
-        uses: actions/setup-go@v6
+        uses: actions/setup-go@v5
         with:
-          go-version: '>=1.24.0-rc.1'
+          go-version: '>=1.23.0-rc.1'
           check-latest: true
           cache: false
@@ -239,13 +248,13 @@ jobs:
           restore-keys: golangci-lint-${{ steps.get-runner-parameters.outputs.runner-os-version }}-go${{ steps.setup-go.outputs.go-version }}-${{ steps.get-runner-parameters.outputs.year-week }}-
       - name: Code quality test (Linux)
-        uses: golangci/golangci-lint-action@v8
+        uses: golangci/golangci-lint-action@v6
         with:
           version: latest
           skip-cache: true
       - name: Code quality test (Windows)
-        uses: golangci/golangci-lint-action@v8
+        uses: golangci/golangci-lint-action@v6
         env:
           GOOS: "windows"
         with:
@@ -253,7 +262,7 @@ jobs:
           skip-cache: true
       - name: Code quality test (macOS)
-        uses: golangci/golangci-lint-action@v8
+        uses: golangci/golangci-lint-action@v6
         env:
           GOOS: "darwin"
         with:
@@ -261,7 +270,7 @@ jobs:
           skip-cache: true
       - name: Code quality test (FreeBSD)
-        uses: golangci/golangci-lint-action@v8
+        uses: golangci/golangci-lint-action@v6
         env:
           GOOS: "freebsd"
         with:
@@ -269,7 +278,7 @@ jobs:
           skip-cache: true
       - name: Code quality test (OpenBSD)
-        uses: golangci/golangci-lint-action@v8
+        uses: golangci/golangci-lint-action@v6
         env:
           GOOS: "openbsd"
         with:
@@ -282,19 +291,8 @@ jobs:
       - name: Scan for vulnerabilities
         run: govulncheck ./...
-      - name: Check Markdown format
-        uses: DavidAnson/markdownlint-cli2-action@v20
-        with:
-          globs: |
-            CONTRIBUTING.md
-            MAINTAINERS.md
-            README.md
-            RELEASE.md
-            CODE_OF_CONDUCT.md
-            docs/content/{authors,bugs,changelog,docs,downloads,faq,filtering,gui,install,licence,overview,privacy}.md
       - name: Scan edits of autogenerated files
-        run: bin/check_autogenerated_edits.py 'origin/${{ github.base_ref }}'
+        run: bin/check_autogenerated_edits.py
         if: github.event_name == 'pull_request'
   android:
@@ -305,17 +303,18 @@ jobs:
     steps:
       - name: Checkout
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       # Upgrade together with NDK version
       - name: Set up Go
-        uses: actions/setup-go@v6
+        uses: actions/setup-go@v5
         with:
-          go-version: '>=1.25.0-rc.1'
+          go-version: '>=1.24.0-rc.1'
       - name: Set global environment variables
+        shell: bash
         run: |
           echo "VERSION=$(make version)" >> $GITHUB_ENV
@@ -334,6 +333,7 @@ jobs:
         run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
       - name: arm-v7a Set environment variables
+        shell: bash
         run: |
           echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
           echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -347,6 +347,7 @@ jobs:
         run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
       - name: arm64-v8a Set environment variables
+        shell: bash
         run: |
           echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
           echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -359,6 +360,7 @@ jobs:
         run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
       - name: x86 Set environment variables
+        shell: bash
         run: |
           echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
           echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
@@ -371,6 +373,7 @@ jobs:
         run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
       - name: x64 Set environment variables
+        shell: bash
         run: |
           echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
           echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV

View File

@@ -52,7 +52,7 @@ jobs:
           df -h .
       - name: Checkout Repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
@@ -92,7 +92,7 @@ jobs:
         # There's no way around this, because "ImageOS" is only available to
         # processes, but the setup-go action uses it in its key.
         id: imageos
-        uses: actions/github-script@v8
+        uses: actions/github-script@v7
         with:
           result-encoding: string
           script: |
@@ -198,7 +198,7 @@ jobs:
     steps:
       - name: Download Image Digests
-        uses: actions/download-artifact@v5
+        uses: actions/download-artifact@v4
         with:
           path: /tmp/digests
           pattern: digests-*

View File

@@ -30,7 +30,7 @@ jobs:
           sudo rm -rf /usr/share/dotnet || true
           df -h .
       - name: Checkout master
-        uses: actions/checkout@v5
+        uses: actions/checkout@v4
         with:
           fetch-depth: 0
       - name: Build and publish docker plugin

View File

@@ -1,146 +1,144 @@
-version: "2"
+# golangci-lint configuration options
 linters:
-  # Configure the linter set. To avoid unexpected results the implicit default
-  # set is ignored and all the ones to use are explicitly enabled.
-  default: none
   enable:
-    # Default
     - errcheck
-    - govet
-    - ineffassign
-    - staticcheck
-    - unused
-    # Additional
-    - gocritic
-    - misspell
-    #- prealloc # TODO
-    - revive
-    - unconvert
-  # Configure checks. Mostly using defaults but with some commented exceptions.
-  settings:
-    staticcheck:
-      # With staticcheck there is only one setting, so to extend the implicit
-      # default value it must be explicitly included.
-      checks:
-        # Default
-        - all
-        - -ST1000
-        - -ST1003
-        - -ST1016
-        - -ST1020
-        - -ST1021
-        - -ST1022
-        # Disable quickfix checks
-        - -QF*
-    gocritic:
-      # With gocritic there are different settings, but since enabled-checks
-      # and disabled-checks cannot both be set, for full customization the
-      # alternative is to disable all defaults and explicitly enable the ones
-      # to use.
-      disable-all: true
-      enabled-checks:
-        #- appendAssign # Skip default
-        - argOrder
-        - assignOp
-        - badCall
-        - badCond
-        #- captLocal # Skip default
-        - caseOrder
-        - codegenComment
-        #- commentFormatting # Skip default
-        - defaultCaseOrder
-        - deprecatedComment
-        - dupArg
-        - dupBranchBody
-        - dupCase
-        - dupSubExpr
-        - elseif
-        #- exitAfterDefer # Skip default
-        - flagDeref
-        - flagName
-        #- ifElseChain # Skip default
-        - mapKey
-        - newDeref
-        - offBy1
-        - regexpMust
-        - ruleguard # Enable additional check that are not enabled by default
-        #- singleCaseSwitch # Skip default
-        - sloppyLen
-        - sloppyTypeAssert
-        - switchTrue
-        - typeSwitchVar
-        - underef
-        - unlambda
-        - unslice
-        - valSwap
-        - wrapperFunc
-      settings:
-        ruleguard:
-          rules: ${base-path}/bin/rules.go
-    revive:
-      # With revive there is in reality only one setting, and when at least one
-      # rule are specified then only these rules will be considered, defaults
-      # and all others are then implicitly disabled, so must explicitly enable
-      # all rules to be used.
-      rules:
-        - name: blank-imports
-          disabled: false
-        - name: context-as-argument
-          disabled: false
-        - name: context-keys-type
-          disabled: false
-        - name: dot-imports
-          disabled: false
-        #- name: empty-block # Skip default
-        #  disabled: true
-        - name: error-naming
-          disabled: false
-        - name: error-return
-          disabled: false
-        - name: error-strings
-          disabled: false
-        - name: errorf
-          disabled: false
-        - name: exported
-          disabled: false
-        #- name: increment-decrement # Skip default
-        #  disabled: true
-        - name: indent-error-flow
-          disabled: false
-        - name: package-comments
-          disabled: false
-        - name: range
-          disabled: false
-        - name: receiver-naming
-          disabled: false
-        #- name: redefines-builtin-id # Skip default
-        #  disabled: true
-        #- name: superfluous-else # Skip default
-        #  disabled: true
-        - name: time-naming
-          disabled: false
-        - name: unexported-return
-          disabled: false
-        #- name: unreachable-code # Skip default
-        #  disabled: true
-        #- name: unused-parameter # Skip default
-        #  disabled: true
-        - name: var-declaration
-          disabled: false
-        - name: var-naming
-          disabled: false
-formatters:
-  enable:
     - goimports
+    - revive
+    - ineffassign
+    - govet
+    - unconvert
+    - staticcheck
+    - gosimple
+    - stylecheck
+    - unused
+    - misspell
+    - gocritic
+    #- prealloc
+    #- maligned
+  disable-all: true
 issues:
+  # Enable some lints excluded by default
+  exclude-use-default: false
   # Maximum issues count per one linter. Set to 0 to disable. Default is 50.
   max-issues-per-linter: 0
   # Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
   max-same-issues: 0
+  exclude-rules:
+    - linters:
+        - staticcheck
+      text: 'SA1019: "github.com/rclone/rclone/cmd/serve/httplib" is deprecated'
+  # don't disable the revive messages about comments on exported functions
+  include:
+    - EXC0012
+    - EXC0013
+    - EXC0014
+    - EXC0015
 run:
-  # Timeout for total work, e.g. 30s, 5m, 5m30s. Default is 0 (disabled).
+  # timeout for analysis, e.g. 30s, 5m, default is 1m
   timeout: 10m
+linters-settings:
+  revive:
+    # setting rules seems to disable all the rules, so re-enable them here
+    rules:
+      - name: blank-imports
+        disabled: false
+      - name: context-as-argument
+        disabled: false
+      - name: context-keys-type
+        disabled: false
+      - name: dot-imports
+        disabled: false
+      - name: empty-block
+        disabled: true
+      - name: error-naming
+        disabled: false
+      - name: error-return
+        disabled: false
+      - name: error-strings
+        disabled: false
+      - name: errorf
+        disabled: false
+      - name: exported
+        disabled: false
+      - name: increment-decrement
+        disabled: true
+      - name: indent-error-flow
+        disabled: false
+      - name: package-comments
+        disabled: false
+      - name: range
+        disabled: false
+      - name: receiver-naming
+        disabled: false
+      - name: redefines-builtin-id
+        disabled: true
+      - name: superfluous-else
+        disabled: true
+      - name: time-naming
+        disabled: false
+      - name: unexported-return
+        disabled: false
+      - name: unreachable-code
+        disabled: true
+      - name: unused-parameter
+        disabled: true
+      - name: var-declaration
+        disabled: false
+      - name: var-naming
+        disabled: false
+  stylecheck:
+    # Only enable the checks performed by the staticcheck stand-alone tool,
+    # as documented here: https://staticcheck.io/docs/configuration/options/#checks
+    checks: ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1020", "-ST1021", "-ST1022", "-ST1023"]
+  gocritic:
+    # Enable all default checks with some exceptions and some additions (commented).
+    # Cannot use both enabled-checks and disabled-checks, so must specify all to be used.
+    disable-all: true
+    enabled-checks:
+      #- appendAssign # Enabled by default
+      - argOrder
+      - assignOp
+      - badCall
+      - badCond
+      #- captLocal # Enabled by default
+      - caseOrder
+      - codegenComment
+      #- commentFormatting # Enabled by default
+      - defaultCaseOrder
+      - deprecatedComment
+      - dupArg
+      - dupBranchBody
+      - dupCase
+      - dupSubExpr
+      - elseif
+      #- exitAfterDefer # Enabled by default
+      - flagDeref
+      - flagName
+      #- ifElseChain # Enabled by default
+      - mapKey
+      - newDeref
+      - offBy1
+      - regexpMust
+      - ruleguard # Not enabled by default
+      #- singleCaseSwitch # Enabled by default
+      - sloppyLen
+      - sloppyTypeAssert
+      - switchTrue
+      - typeSwitchVar
+      - underef
+      - unlambda
+      - unslice
+      - valSwap
+      - wrapperFunc
+    settings:
+      ruleguard:
+        rules: "${configDir}/bin/rules.go"

View File

@@ -1,43 +0,0 @@
default: true
# Use specific styles, to be consistent across all documents.
# Default is to accept any as long as it is consistent within the same document.
heading-style: # MD003
style: atx
ul-style: # MD004
style: dash
hr-style: # MD035
style: ---
code-block-style: # MD046
style: fenced
code-fence-style: # MD048
style: backtick
emphasis-style: # MD049
style: asterisk
strong-style: # MD050
style: asterisk
# Allow multiple headers with same text as long as they are not siblings.
no-duplicate-heading: # MD024
siblings_only: true
# Allow long lines in code blocks and tables.
line-length: # MD013
code_blocks: false
tables: false
# The Markdown files used to generate docs with Hugo contain a top level
# header, even though the YAML front matter has a title property (which is
# used for the HTML document title only). Suppress Markdownlint warning:
# Multiple top-level headings in the same document.
single-title: # MD025
level: 1
front_matter_title:
# The HTML docs generated by Hugo from Markdown files may have slightly
# different header anchors than GitHub rendered Markdown, e.g. Hugo trims
# leading dashes so "--config string" becomes "#config-string" while it is
# "#--config-string" in GitHub preview. When writing links to headers in the
# Markdown files we must use whatever works in the final HTML generated docs.
# Suppress Markdownlint warning: Link fragments should be valid.
link-fragments: false # MD051

View File

@@ -1,80 +0,0 @@
# Rclone Code of Conduct
Like the technical community as a whole, the Rclone team and community
is made up of a mixture of professionals and volunteers from all over
the world, working on every aspect of the mission - including
mentorship, teaching, and connecting people.
Diversity is one of our huge strengths, but it can also lead to
communication issues and unhappiness. To that end, we have a few
ground rules that we ask people to adhere to. This code applies
equally to founders, mentors and those seeking help and guidance.
This isn't an exhaustive list of things that you can't do. Rather,
take it in the spirit in which it's intended - a guide to make it
easier to enrich all of us and the technical communities in which we
participate.
This code of conduct applies to all spaces managed by the Rclone
project or Rclone Services Ltd. This includes the issue tracker, the
forum, the GitHub site, the wiki, any other online services or
in-person events. In addition, violations of this code outside these
spaces may affect a person's ability to participate within them.
- **Be friendly and patient.**
- **Be welcoming.** We strive to be a community that welcomes and
supports people of all backgrounds and identities. This includes,
but is not limited to members of any race, ethnicity, culture,
national origin, colour, immigration status, social and economic
class, educational level, sex, sexual orientation, gender identity
and expression, age, size, family status, political belief,
religion, and mental and physical ability.
- **Be considerate.** Your work will be used by other people, and you
in turn will depend on the work of others. Any decision you take
will affect users and colleagues, and you should take those
consequences into account when making decisions. Remember that we're
a world-wide community, so you might not be communicating in someone
else's primary language.
- **Be respectful.** Not all of us will agree all the time, but
disagreement is no excuse for poor behavior and poor manners. We
might all experience some frustration now and then, but we cannot
allow that frustration to turn into a personal attack. It's
important to remember that a community where people feel
uncomfortable or threatened is not a productive one. Members of the
Rclone community should be respectful when dealing with other
members as well as with people outside the Rclone community.
- **Be careful in the words that you choose.** We are a community of
professionals, and we conduct ourselves professionally. Be kind to
others. Do not insult or put down other participants. Harassment and
other exclusionary behavior aren't acceptable. This includes, but is
not limited to:
- Violent threats or language directed against another person.
- Discriminatory jokes and language.
- Posting sexually explicit or violent material.
- Posting (or threatening to post) other people's personally
identifying information ("doxing").
- Personal insults, especially those using racist or sexist terms.
- Unwelcome sexual attention.
- Advocating for, or encouraging, any of the above behavior.
- Repeated harassment of others. In general, if someone asks you to
stop, then stop.
- **When we disagree, try to understand why.** Disagreements, both
social and technical, happen all the time and Rclone is no
exception. It is important that we resolve disagreements and
differing views constructively. Remember that we're different. The
strength of Rclone comes from its varied community, people from a
wide range of backgrounds. Different people have different
perspectives on issues. Being unable to understand why someone holds
a viewpoint doesn't mean that they're wrong. Don't forget that it is
human to err and blaming each other doesn't get us anywhere.
Instead, focus on helping to resolve issues and learning from
mistakes.
If you believe someone is violating the code of conduct, we ask that
you report it by emailing [info@rclone.com](mailto:info@rclone.com).
Original text courtesy of the [Speak Up! project](http://web.archive.org/web/20141109123859/http://speakup.io/coc.html).
## Questions?
If you have questions, please feel free to [contact us](mailto:info@rclone.com).

View File

@@ -15,81 +15,61 @@ with the [latest beta of rclone](https://beta.rclone.org/):
- Rclone version (e.g. output from `rclone version`) - Rclone version (e.g. output from `rclone version`)
- Which OS you are using and how many bits (e.g. Windows 10, 64 bit) - Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
- The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`) - The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
- A log of the command with the `-vv` flag (e.g. output from - A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
`rclone -vv copy /tmp remote:tmp`) - if the log contains secrets then edit the file with a text editor first to obscure them
- if the log contains secrets then edit the file with a text editor first to
obscure them
## Submitting a new feature or bug fix ## Submitting a new feature or bug fix
If you find a bug that you'd like to fix, or a new feature that you'd If you find a bug that you'd like to fix, or a new feature that you'd
like to implement then please submit a pull request via GitHub. like to implement then please submit a pull request via GitHub.
If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.
first so it can be discussed.
To prepare your pull request first press the fork button on [rclone's GitHub To prepare your pull request first press the fork button on [rclone's GitHub
page](https://github.com/rclone/rclone). page](https://github.com/rclone/rclone).
Then [install Git](https://git-scm.com/downloads) and set your public contribution Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
[name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git)
and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
Next open your terminal, change directory to your preferred folder and initialise Next open your terminal, change directory to your preferred folder and initialise your local rclone project:
your local rclone project:
```sh git clone https://github.com/rclone/rclone.git
git clone https://github.com/rclone/rclone.git cd rclone
cd rclone git remote rename origin upstream
git remote rename origin upstream # if you have SSH keys setup in your GitHub account:
# if you have SSH keys setup in your GitHub account: git remote add origin git@github.com:YOURUSER/rclone.git
git remote add origin git@github.com:YOURUSER/rclone.git # otherwise:
# otherwise: git remote add origin https://github.com/YOURUSER/rclone.git
git remote add origin https://github.com/YOURUSER/rclone.git
```
Note that most of the terminal commands in the rest of this guide must be Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.
executed from the rclone folder created above.
Now [install Go](https://golang.org/doc/install) and verify your installation: Now [install Go](https://golang.org/doc/install) and verify your installation:
```sh go version
go version
```
Great, you can now compile and execute your own version of rclone: Great, you can now compile and execute your own version of rclone:
```sh go build
go build ./rclone version
./rclone version
```
(Note that you can also replace `go build` with `make`, which will include a (Note that you can also replace `go build` with `make`, which will include a
more accurate version number in the executable as well as enable you to specify more accurate version number in the executable as well as enable you to specify
more build options.) Finally make a branch to add your new feature more build options.) Finally make a branch to add your new feature
```sh git checkout -b my-new-feature
git checkout -b my-new-feature
```
And get hacking. And get hacking.
You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).
and a quick view on the rclone [code organisation](#code-organisation).
When ready - test the affected functionality and run the unit tests for the When ready - test the affected functionality and run the unit tests for the code you changed
code you changed
```sh cd folder/with/changed/files
cd folder/with/changed/files go test -v
go test -v
```
Note that you may need to make a test remote, e.g. `TestSwift` for some Note that you may need to make a test remote, e.g. `TestSwift` for some
of the unit tests. of the unit tests.
This is typically enough if you made a simple bug fix, otherwise please read This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.
the rclone [testing](#testing) section too.
Make sure you Make sure you
@@ -99,19 +79,14 @@ Make sure you
When you are done with that push your changes to GitHub: When you are done with that push your changes to GitHub:
```sh git push -u origin my-new-feature
git push -u origin my-new-feature
```
and open the GitHub website to [create your pull and open the GitHub website to [create your pull
request](https://help.github.com/articles/creating-a-pull-request/). request](https://help.github.com/articles/creating-a-pull-request/).
Your changes will then get reviewed and you might get asked to fix some stuff. Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.
If so, then make the changes in the same branch, commit and push your updates to
GitHub.
You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
or [squash your commits](#squashing-your-commits).
## Using Git and GitHub ## Using Git and GitHub
@@ -119,118 +94,87 @@ or [squash your commits](#squashing-your-commits).
Follow the guideline for [commit messages](#commit-messages) and then: Follow the guideline for [commit messages](#commit-messages) and then:
```sh git checkout my-new-feature # To switch to your branch
git checkout my-new-feature # To switch to your branch git status # To see the new and changed files
git status # To see the new and changed files git add FILENAME # To select FILENAME for the commit
git add FILENAME # To select FILENAME for the commit git status # To verify the changes to be committed
git status # To verify the changes to be committed git commit # To do the commit
git commit # To do the commit git log # To verify the commit. Use q to quit the log
git log # To verify the commit. Use q to quit the log
```
You can modify the message or changes in the latest commit using: You can modify the message or changes in the latest commit using:
```sh git commit --amend
git commit --amend
```
If you amend to commits that have been pushed to GitHub, then you will have to If you amend to commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Replacing your previously pushed commits ### Replacing your previously pushed commits
Note that you are about to rewrite the GitHub history of your branch. It is good Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
practice to involve your collaborators before modifying commits that have been
pushed to GitHub.
Your previously pushed commits are replaced by: Your previously pushed commits are replaced by:
```sh git push --force origin my-new-feature
git push --force origin my-new-feature
```
### Basing your changes on the latest master ### Basing your changes on the latest master
To base your changes on the latest version of the To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
[rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
```sh git checkout master
git checkout master git fetch upstream
git fetch upstream git merge --ff-only
git merge --ff-only git push origin --follow-tags # optional update of your fork in GitHub
git push origin --follow-tags # optional update of your fork in GitHub git checkout my-new-feature
git checkout my-new-feature git rebase master
git rebase master
```
If you rebase commits that have been pushed to GitHub, then you will have to If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
### Squashing your commits ### Squashing your commits ###
To combine your commits into one commit: To combine your commits into one commit:
```sh git log # To count the commits to squash, e.g. the last 2
git log # To count the commits to squash, e.g. the last 2 git reset --soft HEAD~2 # To undo the 2 latest commits
git reset --soft HEAD~2 # To undo the 2 latest commits git status # To check everything is as expected
git status # To check everything is as expected
```
If everything is fine, then make the new combined commit: If everything is fine, then make the new combined commit:
```sh git commit # To commit the undone commits as one
git commit # To commit the undone commits as one
```
otherwise, you may roll back using: otherwise, you may roll back using:
```sh git reflog # To check that HEAD{1} is your previous state
git reflog # To check that HEAD{1} is your previous state git reset --soft 'HEAD@{1}' # To roll back to your previous state
git reset --soft 'HEAD@{1}' # To roll back to your previous state
```
If you squash commits that have been pushed to GitHub, then you will have to If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
[replace your previously pushed commits](#replacing-your-previously-pushed-commits).
Tip: You may like to use `git rebase -i master` if you are experienced or have a Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
more complex situation.
### GitHub Continuous Integration ### GitHub Continuous Integration
rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
to build and test the project, which should be automatically available for your
fork too from the `Actions` tab in your repository.
## Testing ## Testing
### Code quality tests ### Code quality tests
If you install [golangci-lint](https://github.com/golangci/golangci-lint) then If you install [golangci-lint](https://github.com/golangci/golangci-lint) then you can run the same tests as get run in the CI which can be very helpful.
you can run the same tests as get run in the CI which can be very helpful.
You can run them with `make check` or with `golangci-lint run ./...`. You can run them with `make check` or with `golangci-lint run ./...`.
Using these tests ensures that the rclone codebase all uses the same coding Using these tests ensures that the rclone codebase all uses the same coding standards. These tests also check for easy mistakes to make (like forgetting to check an error return).
standards. These tests also check for easy mistakes to make (like forgetting
to check an error return).
### Quick testing ### Quick testing
rclone's tests are run from the go testing framework, so at the top rclone's tests are run from the go testing framework, so at the top
level you can run this to run all the tests. level you can run this to run all the tests.
```sh go test -v ./...
go test -v ./...
```
You can also use `make`, if supported by your platform You can also use `make`, if supported by your platform
```sh make quicktest
make quicktest
```
The quicktest is [automatically run by GitHub](#github-continuous-integration) The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
when you push your branch to GitHub.
### Backend testing ### Backend testing
@@ -246,51 +190,41 @@ need to make a remote called `TestDrive`.
You can then run the unit tests in the drive directory. These tests You can then run the unit tests in the drive directory. These tests
are skipped if `TestDrive:` isn't defined. are skipped if `TestDrive:` isn't defined.
```sh cd backend/drive
cd backend/drive go test -v
go test -v
```
You can then run the integration tests which test all of rclone's You can then run the integration tests which test all of rclone's
operations. Normally these get run against the local file system, operations. Normally these get run against the local file system,
but they can be run against any of the remotes. but they can be run against any of the remotes.
```sh cd fs/sync
cd fs/sync go test -v -remote TestDrive:
go test -v -remote TestDrive: go test -v -remote TestDrive: -fast-list
go test -v -remote TestDrive: -fast-list
cd fs/operations cd fs/operations
go test -v -remote TestDrive: go test -v -remote TestDrive:
```
If you want to use the integration test framework to run these tests If you want to use the integration test framework to run these tests
altogether with an HTML report and test retries then from the altogether with an HTML report and test retries then from the
project root: project root:
```sh go install github.com/rclone/rclone/fstest/test_all
go install github.com/rclone/rclone/fstest/test_all test_all -backends drive
test_all -backends drive
```
### Full integration testing ### Full integration testing
If you want to run all the integration tests against all the remotes, If you want to run all the integration tests against all the remotes,
then change into the project root and run then change into the project root and run
```sh make check
make check make test
make test
```
The commands may require some extra go packages which you can install with The commands may require some extra go packages which you can install with
```sh make build_dep
make build_dep
```
The full integration tests are run daily on the integration test server. You can The full integration tests are run daily on the integration test server. You can
find the results at <https://pub.rclone.org/integration-tests/> find the results at https://pub.rclone.org/integration-tests/
## Code Organisation ## Code Organisation
@@ -298,48 +232,46 @@ Rclone code is organised into a small number of top level directories
with modules beneath. with modules beneath.
- backend - the rclone backends for interfacing to cloud providers - - backend - the rclone backends for interfacing to cloud providers -
- all - import this to load all the cloud providers - all - import this to load all the cloud providers
- ...providers - ...providers
- bin - scripts for use while building or maintaining rclone - bin - scripts for use while building or maintaining rclone
- cmd - the rclone commands - cmd - the rclone commands
- all - import this to load all the commands - all - import this to load all the commands
- ...commands - ...commands
- cmdtest - end-to-end tests of commands, flags, environment variables,... - cmdtest - end-to-end tests of commands, flags, environment variables,...
- docs - the documentation and website - docs - the documentation and website
- content - adjust these docs only, except those marked autogenerated - content - adjust these docs only - everything else is autogenerated
or portions marked autogenerated where the corresponding .go file must be - command - these are auto-generated - edit the corresponding .go file
edited instead, and everything else is autogenerated
- commands - these are auto-generated, edit the corresponding .go file
- fs - main rclone definitions - minimal amount of code - fs - main rclone definitions - minimal amount of code
- accounting - bandwidth limiting and statistics - accounting - bandwidth limiting and statistics
- asyncreader - an io.Reader which reads ahead - asyncreader - an io.Reader which reads ahead
- config - manage the config file and flags - config - manage the config file and flags
- driveletter - detect if a name is a drive letter - driveletter - detect if a name is a drive letter
- filter - implements include/exclude filtering - filter - implements include/exclude filtering
- fserrors - rclone specific error handling - fserrors - rclone specific error handling
- fshttp - http handling for rclone - fshttp - http handling for rclone
- fspath - path handling for rclone - fspath - path handling for rclone
- hash - defines rclone's hash types and functions - hash - defines rclone's hash types and functions
- list - list a remote - list - list a remote
- log - logging facilities - log - logging facilities
- march - iterates directories in lock step - march - iterates directories in lock step
- object - in memory Fs objects - object - in memory Fs objects
- operations - primitives for sync, e.g. Copy, Move - operations - primitives for sync, e.g. Copy, Move
- sync - sync directories - sync - sync directories
- walk - walk a directory - walk - walk a directory
- fstest - provides integration test framework - fstest - provides integration test framework
- fstests - integration tests for the backends - fstests - integration tests for the backends
- mockdir - mocks an fs.Directory - mockdir - mocks an fs.Directory
- mockobject - mocks an fs.Object - mockobject - mocks an fs.Object
- test_all - Runs integration tests for everything - test_all - Runs integration tests for everything
- graphics - the images used in the website, etc. - graphics - the images used in the website, etc.
- lib - libraries used by the backend - lib - libraries used by the backend
- atexit - register functions to run when rclone exits - atexit - register functions to run when rclone exits
- dircache - directory ID to name caching - dircache - directory ID to name caching
- oauthutil - helpers for using oauth - oauthutil - helpers for using oauth
- pacer - retries with backoff and paces operations - pacer - retries with backoff and paces operations
- readers - a selection of useful io.Readers - readers - a selection of useful io.Readers
- rest - a thin abstraction over net/http for REST - rest - a thin abstraction over net/http for REST
- librclone - in memory interface to rclone's API for embedding rclone - librclone - in memory interface to rclone's API for embedding rclone
- vfs - Virtual FileSystem layer for implementing rclone mount and similar - vfs - Virtual FileSystem layer for implementing rclone mount and similar
@@ -347,36 +279,6 @@ with modules beneath.
If you are adding a new feature then please update the documentation. If you are adding a new feature then please update the documentation.
The documentation sources are generally in Markdown format, in conformance
with the CommonMark specification and compatible with GitHub Flavored
Markdown (GFM). The markdown format is checked as part of the lint operation
that runs automatically on pull requests, to enforce standards and consistency.
This is based on the [markdownlint](https://github.com/DavidAnson/markdownlint)
tool, which can also be integrated into editors so you can perform the same
checks while writing.
HTML pages, served as website <rclone.org>, are generated from the Markdown,
using [Hugo](https://gohugo.io). Note that when generating the HTML pages,
there is currently used a different algorithm for generating header anchors
than what GitHub uses for its Markdown rendering. For example, in the HTML docs
generated by Hugo any leading `-` characters are ignored, which means when
linking to a header with text `--config string` we therefore need to use the
link `#config-string` in our Markdown source, which will not work in GitHub's
preview where `#--config-string` would be the correct link.
Most of the documentation is written directly in text files with extension
`.md`, mainly within the `docs/content` folder. Note that several of these
files are autogenerated (e.g. the command documentation under
`docs/content/commands`, and `docs/content/flags.md`), or contain
autogenerated portions (e.g. the backend documentation). These are marked
with an `autogenerated` comment.
The sources of the autogenerated text are usually Markdown formatted text
embedded as string values in the Go source code, so you need to locate these
and edit the `.go` file instead. The `MANUAL.*`, `rclone.1` and other text
files in the root of the repository are also autogenerated. The autogeneration
of files, and the website, will be done during the release process. See the
`make doc` and `make website` targets in the Makefile if you are interested in
how. You don't need to run these when adding a feature.
If you add a new general flag (not for a backend), then document it in
`docs/content/docs.md` - the flags there are supposed to be in
alphabetical order.
@@ -385,40 +287,39 @@ If you add a new backend option/flag, then it should be documented in
the source file in the `Help:` field (see the sketch after this list).

- Start with the most important information about the option,
  as a single sentence on a single line.
  - This text will be used for the command-line flag help.
  - It will be combined with other information, such as any default value,
    and the result will look odd if not written as a single sentence.
  - It should end with a period/full stop character, which will be shown
    in docs but automatically removed when producing the flag help.
  - Try to keep it below 80 characters, to reduce text wrapping in the terminal.
- More details can be added in a new paragraph, after an empty line (`"\n\n"`).
  - Like with docs generated from Markdown, a single line break is ignored
    and two line breaks creates a new paragraph.
  - This text will be shown to the user in `rclone config`
    and in the docs (where it will be added by `make backenddocs`,
    normally run some time before next release).
- To create options of enumeration type use the `Examples:` field.
  - Each example value has its own `Help:` field, but they are treated
    a bit differently than the main option help text. They will be shown
    as an unordered list, therefore a single line break is enough to
    create a new list item. Also, for enumeration texts like names of
    countries, it looks better without an ending period/full stop character.
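
Putting this together, here is a sketch of what such an option might look like
in a backend's Go source. The option name, values and help texts are
illustrative, not from a real backend:

```go
package remote // illustrative backend package

import "github.com/rclone/rclone/fs"

// regionOption shows the help text conventions described above.
var regionOption = fs.Option{
	Name: "region",
	// The first line is a single sentence ending with a period - it becomes
	// the command-line flag help. Further detail goes in later paragraphs,
	// separated by an empty line ("\n\n").
	Help: `Region to connect to.

Leave blank to use the provider's default region.`,
	// Examples makes this an enumeration type option. Each value has its
	// own Help, shown as an unordered list item, without a trailing period.
	Examples: []fs.OptionExample{{
		Value: "eu-west-1",
		Help:  "Western Europe",
	}, {
		Value: "us-east-1",
		Help:  "Eastern United States",
	}},
}
```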
When writing documentation for an entirely new backend,
see [backend documentation](#backend-documentation).

If you are updating documentation for a command, you must do that in the
command source code, e.g. `cmd/ls/ls.go`. Write flag help strings as a single
sentence on a single line, without a period/full stop character at the end,
as it will be combined unmodified with other information (such as any default
value).

Note that you can use
[GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
for small changes in the docs, which makes it very easy. Just remember the
caveat when linking to header anchors, noted above, which means that GitHub's
Markdown preview may not be an entirely reliable verification of the results.
## Making a release
@@ -449,13 +350,13 @@ change will get linked into the issue.
Here is an example of a short commit message:

```text
drive: add team drive support - fixes #885
```

And here is an example of a longer one:

```text
mount: fix hang on errored upload

In certain circumstances, if an upload failed then the mount could hang
```
@@ -478,9 +379,7 @@ To add a dependency `github.com/ncw/new_dependency` see the
instructions below. These will fetch the dependency and add it to
`go.mod` and `go.sum`.

```sh
go get github.com/ncw/new_dependency
```

You can add constraints on that package when doing `go get` (see the
go docs linked above), but don't unless you really need to.
@@ -492,9 +391,7 @@ and `go.sum` in the same commit as your other changes.
If you need to update a dependency then run

```sh
go get golang.org/x/crypto
```

Check in a single commit as above.
@@ -537,38 +434,25 @@ remote or an fs.
### Getting going

- Create `backend/remote/remote.go` (copy this from a similar remote; a
  registration sketch follows this list)
  - box is a good one to start from if you have a directory-based remote (and
    shows how to use the directory cache)
  - b2 is a good one to start from if you have a bucket-based remote
- Add your remote to the imports in `backend/all/all.go`
- HTTP based remotes are easiest to maintain if they use rclone's
  [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest) module, but
  if there is a really good Go SDK from the provider then use that instead.
- Try to implement as many optional methods as possible as it makes the remote
  more usable.
- Use [lib/encoder](https://pkg.go.dev/github.com/rclone/rclone/lib/encoder) to
  make sure we can encode any path name and `rclone info` to help determine the
  encodings needed
  - `rclone purge -v TestRemote:rclone-info`
  - `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
  - `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
  - open `remote.csv` in a spreadsheet and examine
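
As mentioned in the first item above, the backend registers itself with the
`fs` layer in an `init` function. A minimal sketch follows - the name,
description and stub constructor are illustrative, so copy the real structure
from your example backend:

```go
// Package remote implements a new rclone backend (sketch).
package remote

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

// init registers the backend when the package is imported
// (which happens via backend/all/all.go).
func init() {
	fs.Register(&fs.RegInfo{
		Name:        "remote",
		Description: "My New Remote",
		NewFs:       NewFs,
		Options: []fs.Option{
			// backend options go here - see the Help guidance above
		},
	})
}

// NewFs constructs an Fs from the path, container:path - just a stub here.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	return nil, fs.ErrorNotImplemented
}
```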
### Guidelines for a speedy merge

- **Do** use [lib/rest](https://pkg.go.dev/github.com/rclone/rclone/lib/rest)
  if you are implementing a REST like backend and parsing XML/JSON in the backend.
- **Do** use rclone's Client or Transport from [fs/fshttp](https://pkg.go.dev/github.com/rclone/rclone/fs/fshttp)
  if your backend is HTTP based - this adds features like `--dump bodies`,
  `--tpslimit`, `--user-agent` without you having to code anything!
  (See the client sketch after this list.)
- **Do** follow your example backend exactly - use the same code order, function
  names, layout, structure. **Don't** move stuff around and **Don't** delete the
  comments.
- **Do not** split your backend up into `fs.go` and `object.go` (there are a few
  backends like that - don't follow them!)
- **Do** put your API type definitions in a separate file - by preference `api/types.go`
- **Remember** we have >50 backends to maintain so keeping them as similar as
  possible to each other is a high priority!
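
For the first two **Do**s above, the usual wiring looks something like this
sketch, where `rootURL` is illustrative:

```go
package remote

import (
	"context"

	"github.com/rclone/rclone/fs/fshttp"
	"github.com/rclone/rclone/lib/rest"
)

// newSrv builds a rest.Client on top of rclone's http.Client so that
// --dump bodies, --tpslimit, --user-agent etc. work without extra code.
func newSrv(ctx context.Context) *rest.Client {
	const rootURL = "https://api.example.com/v2" // illustrative
	return rest.NewClient(fshttp.NewClient(ctx)).SetRoot(rootURL)
}
```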
### Unit tests
@@ -579,20 +463,19 @@ remote or an fs.
### Integration tests

- Add your backend to `fstest/test_all/config.yaml`
- Once you've done that then you can use the integration test framework from
  the project root:
  - go install ./...
  - test_all -backends remote

Or if you want to run the integration tests manually:

- Make sure integration tests pass with
  - `cd fs/operations`
  - `go test -v -remote TestRemote:`
  - `cd fs/sync`
  - `go test -v -remote TestRemote:`
- If your remote defines `ListR` check with this also
  - `go test -v -remote TestRemote: -fast-list`
See the [testing](#testing) section for more information on integration tests.
@@ -604,13 +487,10 @@ alphabetical order of full name of remote (e.g. `drive` is ordered as
`Google Drive`) but with the local file system last.

- `README.md` - main GitHub page
- `docs/content/remote.md` - main docs page (note the backend options are
  automatically added to this file with `make backenddocs`)
  - make sure this has the `autogenerated options` comments in (see your
    reference backend docs)
  - update them in your backend with `bin/make_backend_docs.py remote`
- `docs/content/overview.md` - overview docs - add an entry into the Features
  table and the Optional Features table.
- `docs/content/docs.md` - list of remotes in config section
- `docs/content/_index.md` - front page of rclone.org
- `docs/layouts/chrome/navbar.html` - add it to the website navigation
@@ -626,22 +506,21 @@ It is quite easy to add a new S3 provider to rclone.
You'll need to modify the following files

- `backend/s3/s3.go`
  - Add the provider to `providerOption` at the top of the file (see the
    sketch after this list)
  - Add endpoints and other config for your provider gated on the provider in `fs.RegInfo`.
  - Exclude your provider from generic config questions (e.g. `region` and `endpoint`).
  - Add the provider to the `setQuirks` function - see the documentation there.
- `docs/content/s3.md`
  - Add the provider at the top of the page.
  - Add a section about the provider linked from there.
  - Make sure this is in alphabetical order in the `Providers` section.
  - Add a transcript of a trial `rclone config` session
  - Edit the transcript to remove things which might change in subsequent versions
  - **Do not** alter or add to the autogenerated parts of `s3.md`
  - **Do not** run `make backenddocs` or `bin/make_backend_docs.py s3`
- `README.md` - this is the home page in GitHub
  - Add the provider and a link to the section you wrote in `docs/content/s3.md`
- `docs/content/_index.md` - this is the home page of rclone.org
  - Add the provider and a link to the section you wrote in `docs/content/s3.md`
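
For the `providerOption` change, the new provider is one more
`fs.OptionExample` in its `Examples` list, as sketched below with an
illustrative name and help text:

```go
package s3 // sketch: this entry goes in backend/s3/s3.go

import "github.com/rclone/rclone/fs"

// Added to providerOption's Examples list, kept in alphabetical order by
// Provider name (with AWS first).
var exampleProviderEntry = fs.OptionExample{
	Value: "MyProvider",
	Help:  "MyProvider Object Storage",
}
```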
When adding the provider, endpoints, quirks, docs etc keep them in
alphabetical order by `Provider` name, but with `AWS` first and
@@ -662,39 +541,38 @@ For an example of adding an s3 provider see [eb3082a1](https://github.com/rclone
## Writing a plugin

New features (backends, commands) can also be added "out-of-tree", through Go
plugins. Changes will be kept in a dynamically loaded file instead of being
compiled into the main binary. This is useful if you can't merge your changes
upstream or don't want to maintain a fork of rclone.
### Usage

- Naming
  - Plugin names must have the pattern `librcloneplugin_KIND_NAME.so`.
  - `KIND` should be one of `backend`, `command` or `bundle`.
  - Example: A plugin with backend support for PiFS would be called
    `librcloneplugin_backend_pifs.so`.
- Loading
  - Supported on macOS & Linux as of now. ([Go issue for Windows support](https://github.com/golang/go/issues/19282))
  - Supported on rclone v1.50 or greater.
  - All plugins in the folder specified by variable `$RCLONE_PLUGIN_PATH` are loaded.
  - If this variable doesn't exist, plugin support is disabled.
  - Plugins must be compiled against the exact version of rclone to work.
    (The rclone used during building the plugin must be the same as the source
    of rclone)
### Building

To turn your existing additions into a Go plugin, move them to an external repository
and change the top-level package name to `main`.

Check `rclone --version` and make sure that the plugin's rclone dependency and
host Go version match.

Then, run `go build -buildmode=plugin -o PLUGIN_NAME.so .` to build the plugin.

[Go reference](https://godoc.org/github.com/rclone/rclone/lib/plugin)
[Minimal example](https://gist.github.com/terorie/21b517ee347828e899e1913efc1d684f)
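
As a minimal sketch, assuming your backend registers itself via `init()` and
lives at an illustrative import path, the plugin's top-level package might
look like:

```go
// A Go plugin must be built from a main package.
package main

import (
	// The blank import runs the backend's init(), which calls fs.Register
	// when rclone loads the plugin. The path is illustrative.
	_ "github.com/example/rcloneplugin/backend/pifs"
)

// main is never called for a plugin but package main must still compile.
func main() {}
```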
## Keeping a backend or command out of tree

Rclone was designed to be modular so it is very easy to keep a backend
@@ -707,6 +585,6 @@ add them out of tree.
This may be easier than using a plugin and is supported on all
platforms not just macOS and Linux.

This is explained further in <https://github.com/rclone/rclone_out_of_tree_example>
which has an example of an out of tree backend `ram` (which is a
renamed version of the `memory` backend).
MAINTAINERS.md
@@ -1,4 +1,4 @@
# Maintainers guide for rclone

Current active maintainers of rclone are:
@@ -24,108 +24,80 @@ Current active maintainers of rclone are:
| Dan McArdle | @dmcardle | gitannex |
| Sam Harrison | @childish-sambino | filescom |
## This is a work in progress draft

This is a guide for how to be an rclone maintainer. This is mostly a write-up
of what I (@ncw) attempt to do.

## Triaging Tickets

When a ticket comes in it should be triaged. This means it should be classified
by adding labels and placed into a milestone. Quite a lot of tickets need a bit
of back and forth to determine whether it is a valid ticket so tickets may
remain without labels or milestone for a while.
Rclone uses the labels like this:

- `bug` - a definitely verified bug
- `can't reproduce` - a problem which we can't reproduce
- `doc fix` - a bug in the documentation - if users need help understanding the
  docs add this label
- `duplicate` - normally close these and ask the user to subscribe to the original
- `enhancement: new remote` - a new rclone backend
- `enhancement` - a new feature
- `FUSE` - to do with `rclone mount` command
- `good first issue` - mark these if you find a small self-contained issue -
  these get shown to new visitors to the project
- `help wanted` - mark these if you find a self-contained issue - these get
  shown to new visitors to the project
- `IMPORTANT` - note to maintainers not to forget to fix this for the release
- `maintenance` - internal enhancement, code re-organisation, etc.
- `Needs Go 1.XX` - waiting for that version of Go to be released
- `question` - not a `bug` or `enhancement` - direct to the forum for next time
- `Remote: XXX` - which rclone backend this affects
- `thinking` - not decided on the course of action yet
If it turns out to be a bug or an enhancement it should be tagged as such, with
the appropriate other tags. Don't forget the "good first issue" tag to give new
contributors something easy to do to get going.

When a ticket is tagged it should be added to a milestone, either the next
release, the one after, Soon or Help Wanted. Bugs can be added to the
"Known Bugs" milestone if they aren't planned to be fixed or need to wait for
something (e.g. the next go release).

The milestones have these meanings:

- v1.XX - stuff we would like to fit into this release
- v1.XX+1 - stuff we are leaving until the next release
- Soon - stuff we think is a good idea - waiting to be scheduled for a release
- Help wanted - blue sky stuff that might get moved up, or someone could help with
- Known bugs - bugs waiting on external factors or we aren't going to fix for
  the moment

Tickets [with no milestone](https://github.com/rclone/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile)
are good candidates for ones that have slipped between the gaps and need
following up.
## Closing Tickets

Close tickets as soon as you can - make sure they are tagged with a release.
Post a link to a beta in the ticket with the fix in, asking for feedback.

## Pull requests

Try to process pull requests promptly!

Merging pull requests on GitHub itself works quite well nowadays so you can
squash and rebase or rebase pull requests. rclone doesn't use merge commits.
Use the squash and rebase option if you need to edit the commit message.

After merging the commit, in your local master branch, do `git pull` then run
`bin/update-authors.py` to update the authors file then `git push`.

Sometimes pull requests need to be left open for a while - this is especially
true of contributions of new backends which take a long time to get right.
## Merges

If you are merging a branch locally then do `git merge --ff-only branch-name` to
avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.

## Release cycle

Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
if there is something big to merge that didn't stabilize properly or for personal
reasons.

High impact regressions should be fixed before the next release.

Near the start of the release cycle, the dependencies should be updated with
`make update` to give time for bugs to surface.

Towards the end of the release cycle try not to merge anything too big to let
things settle down.

Follow the instructions in RELEASE.md for making the release. Note that the
testing part is the most time-consuming, often needing several rounds of test
and fix depending on exactly how many new features rclone has gained.
## Mailing list

There is now an invite-only mailing list for rclone developers `rclone-dev` on
Google Groups.

## TODO

I should probably make a <dev@rclone.org> to register with cloud providers.
MANUAL.html (generated) - diff suppressed because it is too large
MANUAL.md (generated) - diff suppressed because it is too large
MANUAL.txt (generated) - diff suppressed because it is too large

Makefile
@@ -100,7 +100,6 @@ compiletest:
check: rclone
	@echo "-- START CODE QUALITY REPORT -------------------------------"
	@golangci-lint run $(LINTTAGS) ./...
	@bin/markdown-lint
	@echo "-- END CODE QUALITY REPORT ---------------------------------"

# Get the build dependencies

@@ -145,11 +144,9 @@ MANUAL.txt: MANUAL.md
	pandoc -s --from markdown-smart --to plain MANUAL.md -o MANUAL.txt

commanddocs: rclone
	go generate ./lib/transform
	-@rmdir -p '$$HOME/.config/rclone'
	XDG_CACHE_HOME="" XDG_CONFIG_HOME="" HOME="\$$HOME" USER="\$$USER" rclone gendocs --config=/notfound docs/content/
	@[ ! -e '$$HOME' ] || (echo 'Error: created unwanted directory named $$HOME' && exit 1)
	go run bin/make_bisync_docs.go ./docs/content/

backenddocs: rclone bin/make_backend_docs.py
	-@rmdir -p '$$HOME/.config/rclone'

@@ -246,7 +243,7 @@ fetch_binaries:
	rclone -P sync --exclude "/testbuilds/**" --delete-excluded $(BETA_UPLOAD) build/

serve: website
	cd docs && hugo server --logLevel info -w --disableFastRender --ignoreCache

tag: retag doc
	bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new
README.md
@@ -1,6 +1,6 @@
<!-- markdownlint-disable-next-line first-line-heading no-inline-html -->
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only)
<!-- markdownlint-disable-next-line no-inline-html -->
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)

[Website](https://rclone.org) |
@@ -18,106 +18,101 @@
# Rclone

Rclone *("rsync for cloud storage")* is a command-line program to sync files and
directories to and from different cloud storage providers.

## Storage providers
- 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
- Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
- Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
- Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
- ArvanCloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
- Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
- Box [:page_facing_up:](https://rclone.org/box/)
- Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
- China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
- Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
- Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
- DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
- Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
- Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
- Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
- Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
- Exaba [:page_facing_up:](https://rclone.org/s3/#exaba)
- Fastmail Files [:page_facing_up:](https://rclone.org/webdav/#fastmail-files)
- FileLu [:page_facing_up:](https://rclone.org/filelu/)
- Files.com [:page_facing_up:](https://rclone.org/filescom/)
- FlashBlade [:page_facing_up:](https://rclone.org/s3/#pure-storage-flashblade)
- FTP [:page_facing_up:](https://rclone.org/ftp/)
- GoFile [:page_facing_up:](https://rclone.org/gofile/)
- Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
- Google Drive [:page_facing_up:](https://rclone.org/drive/)
- Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
- HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
- Hetzner Storage Box [:page_facing_up:](https://rclone.org/sftp/#hetzner-storage-box)
- HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
- HTTP [:page_facing_up:](https://rclone.org/http/)
- Huawei Cloud Object Storage Service (OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs)
- iCloud Drive [:page_facing_up:](https://rclone.org/iclouddrive/)
- ImageKit [:page_facing_up:](https://rclone.org/imagekit/)
- Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
- Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
- IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
- Intercolo Object Storage [:page_facing_up:](https://rclone.org/s3/#intercolo)
- IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
- Koofr [:page_facing_up:](https://rclone.org/koofr/)
- Leviia Object Storage [:page_facing_up:](https://rclone.org/s3/#leviia)
- Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
- Linkbox [:page_facing_up:](https://rclone.org/linkbox)
- Linode Object Storage [:page_facing_up:](https://rclone.org/s3/#linode)
- Magalu Object Storage [:page_facing_up:](https://rclone.org/s3/#magalu)
- Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
- Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
- MEGA [:page_facing_up:](https://rclone.org/mega/)
- MEGA S4 Object Storage [:page_facing_up:](https://rclone.org/s3/#mega)
- Memory [:page_facing_up:](https://rclone.org/memory/)
- Microsoft Azure Blob Storage [:page_facing_up:](https://rclone.org/azureblob/)
- Microsoft Azure Files Storage [:page_facing_up:](https://rclone.org/azurefiles/)
- Microsoft OneDrive [:page_facing_up:](https://rclone.org/onedrive/)
- Minio [:page_facing_up:](https://rclone.org/s3/#minio)
- Nextcloud [:page_facing_up:](https://rclone.org/webdav/#nextcloud)
- Blomp Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
- OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
- Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
- Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
- Outscale [:page_facing_up:](https://rclone.org/s3/#outscale)
- OVHcloud Object Storage (Swift) [:page_facing_up:](https://rclone.org/swift/)
- OVHcloud Object Storage (S3-compatible) [:page_facing_up:](https://rclone.org/s3/#ovhcloud)
- ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
- pCloud [:page_facing_up:](https://rclone.org/pcloud/)
- Petabox [:page_facing_up:](https://rclone.org/s3/#petabox)
- PikPak [:page_facing_up:](https://rclone.org/pikpak/)
- Pixeldrain [:page_facing_up:](https://rclone.org/pixeldrain/)
- premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
- put.io [:page_facing_up:](https://rclone.org/putio/)
- Proton Drive [:page_facing_up:](https://rclone.org/protondrive/)
- QingStor [:page_facing_up:](https://rclone.org/qingstor/)
- Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
- Quatrix [:page_facing_up:](https://rclone.org/quatrix/)
- Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
- RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
- rsync.net [:page_facing_up:](https://rclone.org/sftp/#rsync-net)
- Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
- Seafile [:page_facing_up:](https://rclone.org/seafile/)
- Seagate Lyve Cloud [:page_facing_up:](https://rclone.org/s3/#lyve)
- SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
- Selectel Object Storage [:page_facing_up:](https://rclone.org/s3/#selectel)
- SFTP [:page_facing_up:](https://rclone.org/sftp/)
- SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
- StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
- Storj [:page_facing_up:](https://rclone.org/storj/)
- SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
- Synology C2 Object Storage [:page_facing_up:](https://rclone.org/s3/#synology-c2)
- Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos)
- Uloz.to [:page_facing_up:](https://rclone.org/ulozto/)
- Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
- WebDAV [:page_facing_up:](https://rclone.org/webdav/)
- Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
- Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
- Zata.ai [:page_facing_up:](https://rclone.org/s3/#Zata)
- The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
@@ -125,54 +120,50 @@ Please see [the full list of all storage providers and their features](https://r
These backends adapt or modify other storage providers

- Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
- Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
- Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
- Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
- Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
- Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
- Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
- Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
## Features

- MD5/SHA-1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- [Copy](https://rclone.org/commands/rclone_copy/) mode to just copy new/changed
  files
- [Sync](https://rclone.org/commands/rclone_sync/) (one way) mode to make a directory
  identical
- [Bisync](https://rclone.org/bisync/) (two way) to keep two directories in sync
  bidirectionally
- [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash
  equality
- Can sync to and from network, e.g. two different cloud accounts
- Optional large file chunking ([Chunker](https://rclone.org/chunker/))
- Optional transparent compression ([Compress](https://rclone.org/compress/))
- Optional encryption ([Crypt](https://rclone.org/crypt/))
- Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
- Multi-threaded downloads to local disk
- Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files
  over HTTP/WebDAV/FTP/SFTP/DLNA
## Installation & documentation

Please see the [rclone website](https://rclone.org/) for:

- [Installation](https://rclone.org/install/)
- [Documentation & configuration](https://rclone.org/docs/)
- [Changelog](https://rclone.org/changelog/)
- [FAQ](https://rclone.org/faq/)
- [Storage providers](https://rclone.org/overview/)
- [Forum](https://forum.rclone.org/)
- ...and more

## Downloads

- <https://rclone.org/downloads/>
## License

This is free software under the terms of the MIT license (check the
[COPYING file](/COPYING) included in this package).
RELEASE.md
@@ -4,55 +4,52 @@ This file describes how to make the various kinds of releases
## Extra required software for making a release

- [gh the github cli](https://github.com/cli/cli) for uploading packages
- pandoc for making the html and man pages
## Making a release

- git checkout master # see below for stable branch
- git pull # IMPORTANT
- git status - make sure everything is checked in
- Check GitHub actions build for master is Green
- make test # see integration test server or run locally
- make tag
- edit docs/content/changelog.md # make sure to remove duplicate logs from point
  releases
- make tidy
- make doc
- git status - to check for new man pages - git add them
- git commit -a -v -m "Version v1.XX.0"
- make retag
- git push origin # without --follow-tags so it doesn't push the tag if it fails
- git push --follow-tags origin
- \# Wait for the GitHub builds to complete then...
- make fetch_binaries
- make tarball
- make vendorball
- make sign_upload
- make check_sign
- make upload
- make upload_website
- make upload_github
- make startdev # make startstable for stable branch
- \# announce with forum post, twitter post, patreon post
## Update dependencies

Early in the next release cycle update the dependencies.

- Review any pinned packages in go.mod and remove if possible
- `make updatedirect`
- `make GOTAGS=cmount`
- `make compiletest`
- Fix anything which doesn't compile at this point and commit changes here
- `git commit -a -v -m "build: update all dependencies"`

If the `make updatedirect` upgrades the version of go in the `go.mod`

```text
go 1.22.0
```

then go to manual mode. `go1.22` here is the lowest supported version
in the `go.mod`.
@@ -60,7 +57,7 @@ If `make updatedirect` added a `toolchain` directive then remove it.
We don't want to force a toolchain on our users. Linux packagers are
often using a version of Go that is a few versions out of date.

```sh
go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all > /tmp/potential-upgrades
go get -d $(cat /tmp/potential-upgrades)
go mod tidy -go=1.22 -compat=1.22
```
@@ -70,7 +67,7 @@ If the `go mod tidy` fails use the output from it to remove the
package which can't be upgraded from `/tmp/potential-upgrades` when
done

```sh
git co go.mod go.sum
```
@@ -80,12 +77,12 @@ Optionally upgrade the direct and indirect dependencies. This is very
likely to fail if the manual method was used above - in that case
ignore it as it is too time consuming to fix.

- `make update`
- `make GOTAGS=cmount`
- `make compiletest`
- roll back any updates which didn't compile
- `git commit -a -v --amend`
- **NB** watch out for this changing the default go version in `go.mod`
Note that `make update` updates all direct and indirect dependencies
and there can occasionally be forwards compatibility problems with
@@ -102,9 +99,7 @@ The above procedure will not upgrade major versions, so v2 to v3.
However this tool can show which major versions might need to be
upgraded:

```sh
go run github.com/icholy/gomajor@latest list -major
```

Expect API breakage when updating major versions.
@@ -112,9 +107,7 @@ Expect API breakage when updating major versions.
At some point after the release run

```sh
bin/tidy-beta v1.55
```

where the version number is that of a couple ago to remove old beta binaries.
@@ -124,64 +117,54 @@ If rclone needs a point release due to some horrendous bug:
Set vars

- BASE_TAG=v1.XX # e.g. v1.52
- NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
- echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
First make the release branch. If this is a second point release then
this will be done already.

- git co -b ${BASE_TAG}-stable ${BASE_TAG}.0
- make startstable

Now

- git co ${BASE_TAG}-stable
- git cherry-pick any fixes
- make startstable
- Do the steps as above
- git co master
- `#` cherry pick the changes to the changelog - check the diff to make sure it
  is correct
- git checkout ${BASE_TAG}-stable docs/content/changelog.md
- git commit -a -v -m "Changelog updates from Version ${NEW_TAG}"
- git push
## Sponsor logos

-If updating the website note that the sponsor logos have been moved out of the
-main repository.
+If updating the website note that the sponsor logos have been moved out of the main repository.

-You will need to checkout `/docs/static/img/logos` from <https://github.com/rclone/third-party-logos>
+You will need to checkout `/docs/static/img/logos` from https://github.com/rclone/third-party-logos
which is a private repo containing artwork from sponsors.

## Update the website between releases

Create an update website branch based off the last release

-```sh
-git co -b update-website
-```
+    git co -b update-website

If the branch already exists, double check there are no commits that need saving.

Now reset the branch to the last release

-```sh
-git reset --hard v1.64.0
-```
+    git reset --hard v1.64.0

Create the changes, check them in, test with `make serve` then

-```sh
-make upload_test_website
-```
+    make upload_test_website

-Check out <https://test.rclone.org> and when happy
+Check out https://test.rclone.org and when happy

-```sh
-make upload_website
-```
+    make upload_website

Cherry pick any changes back to master and the stable branch if it is active.

@@ -189,14 +172,14 @@ Cherry pick any changes back to master and the stable branch if it is active.
To do a basic build of rclone's docker image to debug builds locally:

-```sh
+```
docker buildx build --load -t rclone/rclone:testing --progress=plain .
docker run --rm rclone/rclone:testing version
```

To test the multiplatform build

-```sh
+```
docker buildx build -t rclone/rclone:testing --progress=plain --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 .
```

@@ -204,6 +187,6 @@ To make a full build then set the tags correctly and add `--push`
Note that you can't only build one architecture - you need to build them all.

-```sh
+```
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
```

View File

@@ -1 +1 @@
-v1.72.0
+v1.70.3

View File

@@ -51,7 +51,6 @@ import (
    "github.com/rclone/rclone/lib/env"
    "github.com/rclone/rclone/lib/multipart"
    "github.com/rclone/rclone/lib/pacer"
-   "github.com/rclone/rclone/lib/pool"
    "golang.org/x/sync/errgroup"
)

@@ -991,38 +990,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
        if err != nil {
            return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
        }
-   case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
-       // Workload Identity based authentication
-       var options azidentity.ManagedIdentityCredentialOptions
-       options.ID = azidentity.ClientID(opt.MSIClientID)
-       msiCred, err := azidentity.NewManagedIdentityCredential(&options)
-       if err != nil {
-           return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
-       }
-       getClientAssertions := func(context.Context) (string, error) {
-           token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
-               Scopes: []string{"api://AzureADTokenExchange"},
-           })
-           if err != nil {
-               return "", fmt.Errorf("failed to acquire MSI token: %w", err)
-           }
-           return token.Token, nil
-       }
-       assertOpts := &azidentity.ClientAssertionCredentialOptions{}
-       f.cred, err = azidentity.NewClientAssertionCredential(
-           opt.Tenant,
-           opt.ClientID,
-           getClientAssertions,
-           assertOpts)
-       if err != nil {
-           return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
-       }
    case opt.UseAZ:
        var options = azidentity.AzureCLICredentialOptions{}
        f.cred, err = azidentity.NewAzureCLICredential(&options)

@@ -1338,9 +1305,9 @@ func (f *Fs) containerOK(container string) bool {
}

// listDir lists a single directory
-func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) {
+func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
    if !f.containerOK(containerName) {
-       return fs.ErrorDirNotFound
+       return nil, fs.ErrorDirNotFound
    }
    err = f.list(ctx, containerName, directory, prefix, addContainer, false, int32(f.opt.ListChunkSize), func(remote string, object *container.BlobItem, isDirectory bool) error {
        entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -1348,16 +1315,16 @@ func (f *Fs) listDir(ctx context.Context, containerName, directory, prefix strin
            return err
        }
        if entry != nil {
-           return callback(entry)
+           entries = append(entries, entry)
        }
        return nil
    })
    if err != nil {
-       return err
+       return nil, err
    }
    // container must be present if listing succeeded
    f.cache.MarkOK(containerName)
-   return nil
+   return entries, nil
}

// listContainers returns all the containers to out
@@ -1393,47 +1360,14 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
-   return list.WithListP(ctx, dir, f)
-}
-
-// ListP lists the objects and directories of the Fs starting
-// from dir non recursively into out.
-//
-// dir should be "" to start from the root, and should not
-// have trailing slashes.
-//
-// This should return ErrDirNotFound if the directory isn't
-// found.
-//
-// It should call callback for each tranche of entries read.
-// These need not be returned in any particular order. If
-// callback returns an error then the listing will stop
-// immediately.
-func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
-   list := list.NewHelper(callback)
    container, directory := f.split(dir)
    if container == "" {
        if directory != "" {
-           return fs.ErrorListBucketRequired
+           return nil, fs.ErrorListBucketRequired
        }
-       entries, err := f.listContainers(ctx)
-       if err != nil {
-           return err
-       }
-       for _, entry := range entries {
-           err = list.Add(entry)
-           if err != nil {
-               return err
-           }
-       }
-   } else {
-       err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
-       if err != nil {
-           return err
-       }
+       return f.listContainers(ctx)
    }
-   return list.Flush()
+   return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
}

// ListR lists the objects and directories of the Fs starting
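The `List`/`ListP` refactor in this hunk repeats in the b2, gcs, qingstor and oracleobjectstorage files below: on the master side, `List` becomes a thin wrapper over a streaming `ListP`, with `list.NewHelper` batching entries into tranches for the callback. A minimal sketch of the shape of that pattern; `list.WithListP`, `list.NewHelper` and `fs.ListRCallback` are the real names shown in this diff, but `listAll` is a hypothetical stand-in for a backend's paged lister, not rclone's actual API:

```go
// List collects the tranches that ListP emits into one slice.
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
	return list.WithListP(ctx, dir, f)
}

// ListP streams directory entries to callback in batches instead of
// building the whole listing in memory first.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
	helper := list.NewHelper(callback)
	// listAll is a stand-in for the backend's paged lister, which
	// hands entries over one at a time as pages arrive.
	err := f.listAll(ctx, dir, func(entry fs.DirEntry) error {
		return helper.Add(entry) // buffers and flushes in tranches
	})
	if err != nil {
		return err
	}
	return helper.Flush() // deliver any entries still buffered
}
```

The point of the callback form is that large directories no longer have to be fully materialised before the first entry is delivered.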
@@ -2704,13 +2638,6 @@ func (w *azChunkWriter) WriteChunk(ctx context.Context, chunkNumber int, reader
        return -1, err
    }

-   // Only account after the checksum reads have been done
-   if do, ok := reader.(pool.DelayAccountinger); ok {
-       // To figure out this number, do a transfer and if the accounted size is 0 or a
-       // multiple of what it should be, increase or decrease this number.
-       do.DelayAccounting(2)
-   }

    // Upload the block, with MD5 for check
    m := md5.New()
    currentChunkSize, err := io.Copy(m, reader)

@@ -2798,8 +2725,6 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
        blockList  blockblob.GetBlockListResponse
        properties *blob.GetPropertiesResponse
        options    *blockblob.CommitBlockListOptions
-       // Use temporary pacer as this can be called recursively which can cause a deadlock with --max-connections
-       pacer = fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant)))
    )

    properties, err = o.readMetaDataAlways(ctx)

@@ -2811,7 +2736,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
    if objectExists {
        // Get the committed block list
-       err = pacer.Call(func() (bool, error) {
+       err = o.fs.pacer.Call(func() (bool, error) {
            blockList, err = blockBlobSVC.GetBlockList(ctx, blockblob.BlockListTypeAll, nil)
            return o.fs.shouldRetry(ctx, err)
        })

@@ -2853,7 +2778,7 @@ func (o *Object) clearUncommittedBlocks(ctx context.Context) (err error) {
    // Commit only the committed blocks
    fs.Debugf(o, "Committing %d blocks to remove uncommitted blocks", len(blockIDs))
-   err = pacer.Call(func() (bool, error) {
+   err = o.fs.pacer.Call(func() (bool, error) {
        _, err := blockBlobSVC.CommitBlockList(ctx, blockIDs, options)
        return o.fs.shouldRetry(ctx, err)
    })

@@ -3189,7 +3114,6 @@ var (
    _ fs.PutStreamer    = &Fs{}
    _ fs.Purger         = &Fs{}
    _ fs.ListRer        = &Fs{}
-   _ fs.ListPer        = &Fs{}
    _ fs.OpenChunkWriter = &Fs{}
    _ fs.Object         = &Object{}
    _ fs.MimeTyper      = &Object{}

View File

@@ -453,7 +453,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
            return nil, fmt.Errorf("create new shared key credential failed: %w", err)
        }
    case opt.UseAZ:
-       options := azidentity.AzureCLICredentialOptions{}
+       var options = azidentity.AzureCLICredentialOptions{}
        cred, err = azidentity.NewAzureCLICredential(&options)
        fmt.Println(cred)
        if err != nil {

@@ -550,7 +550,7 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
    case opt.UseMSI:
        // Specifying a user-assigned identity. Exactly one of the above IDs must be specified.
        // Validate and ensure exactly one is set. (To do: better validation.)
-       b2i := map[bool]int{false: 0, true: 1}
+       var b2i = map[bool]int{false: 0, true: 1}
        set := b2i[opt.MSIClientID != ""] + b2i[opt.MSIObjectID != ""] + b2i[opt.MSIResourceID != ""]
        if set > 1 {
            return nil, errors.New("more than one user-assigned identity ID is set")
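The `b2i` map here is a compact idiom for counting how many of several booleans are set. A self-contained sketch of the same trick (the variable names below are placeholders, not the backend's):

```go
package main

import "fmt"

// countTrue counts set flags using the same map[bool]int trick.
func countTrue(flags ...bool) int {
	b2i := map[bool]int{false: 0, true: 1}
	n := 0
	for _, f := range flags {
		n += b2i[f]
	}
	return n
}

func main() {
	clientID, objectID, resourceID := "some-id", "", ""
	if countTrue(clientID != "", objectID != "", resourceID != "") > 1 {
		fmt.Println("more than one user-assigned identity ID is set")
	}
}
```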
@@ -569,37 +569,6 @@ func newFsFromOptions(ctx context.Context, name, root string, opt *Options) (fs.
        if err != nil {
            return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
        }
-   case opt.ClientID != "" && opt.Tenant != "" && opt.MSIClientID != "":
-       // Workload Identity based authentication
-       var options azidentity.ManagedIdentityCredentialOptions
-       options.ID = azidentity.ClientID(opt.MSIClientID)
-       msiCred, err := azidentity.NewManagedIdentityCredential(&options)
-       if err != nil {
-           return nil, fmt.Errorf("failed to acquire MSI token: %w", err)
-       }
-       getClientAssertions := func(context.Context) (string, error) {
-           token, err := msiCred.GetToken(context.Background(), policy.TokenRequestOptions{
-               Scopes: []string{"api://AzureADTokenExchange"},
-           })
-           if err != nil {
-               return "", fmt.Errorf("failed to acquire MSI token: %w", err)
-           }
-           return token.Token, nil
-       }
-       assertOpts := &azidentity.ClientAssertionCredentialOptions{}
-       cred, err = azidentity.NewClientAssertionCredential(
-           opt.Tenant,
-           opt.ClientID,
-           getClientAssertions,
-           assertOpts)
-       if err != nil {
-           return nil, fmt.Errorf("failed to acquire client assertion token: %w", err)
-       }
    default:
        return nil, errors.New("no authentication method configured")
    }

@@ -854,7 +823,7 @@ func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
        return entries, err
    }
-   opt := &directory.ListFilesAndDirectoriesOptions{
+   var opt = &directory.ListFilesAndDirectoriesOptions{
        Include: directory.ListFilesInclude{
            Timestamps: true,
        },

@@ -1013,10 +982,6 @@ func (o *Object) SetModTime(ctx context.Context, t time.Time) error {
        SMBProperties: &file.SMBProperties{
            LastWriteTime: &t,
        },
-       HTTPHeaders: &file.HTTPHeaders{
-           ContentMD5:  o.md5,
-           ContentType: &o.contentType,
-       },
    }
    _, err := o.fileClient().SetHTTPHeaders(ctx, &opt)
    if err != nil {

View File

@@ -847,7 +847,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *api.File
}

// listDir lists a single directory
-func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
+func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
    last := ""
    err = f.list(ctx, bucket, directory, prefix, f.rootBucket == "", false, 0, f.opt.Versions, false, func(remote string, object *api.File, isDirectory bool) error {
        entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory, &last)
@@ -855,16 +855,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
            return err
        }
        if entry != nil {
-           return callback(entry)
+           entries = append(entries, entry)
        }
        return nil
    })
    if err != nil {
-       return err
+       return nil, err
    }
    // bucket must be present if listing succeeded
    f.cache.MarkOK(bucket)
-   return nil
+   return entries, nil
}

// listBuckets returns all the buckets to out
@@ -890,46 +890,14 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
-   return list.WithListP(ctx, dir, f)
-}
-
-// ListP lists the objects and directories of the Fs starting
-// from dir non recursively into out.
-//
-// dir should be "" to start from the root, and should not
-// have trailing slashes.
-//
-// This should return ErrDirNotFound if the directory isn't
-// found.
-//
-// It should call callback for each tranche of entries read.
-// These need not be returned in any particular order. If
-// callback returns an error then the listing will stop
-// immediately.
-func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
-   list := list.NewHelper(callback)
    bucket, directory := f.split(dir)
    if bucket == "" {
        if directory != "" {
-           return fs.ErrorListBucketRequired
+           return nil, fs.ErrorListBucketRequired
        }
-       entries, err := f.listBuckets(ctx)
-       if err != nil {
-           return err
-       }
-       for _, entry := range entries {
-           err = list.Add(entry)
-           if err != nil {
-               return err
-           }
-       }
-   } else {
-       err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
-       if err != nil {
-           return err
-       }
+       return f.listBuckets(ctx)
    }
-   return list.Flush()
+   return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}

// ListR lists the objects and directories of the Fs starting
@@ -2460,7 +2428,6 @@ var (
    _ fs.PutStreamer     = &Fs{}
    _ fs.CleanUpper      = &Fs{}
    _ fs.ListRer         = &Fs{}
-   _ fs.ListPer         = &Fs{}
    _ fs.PublicLinker    = &Fs{}
    _ fs.OpenChunkWriter = &Fs{}
    _ fs.Commander       = &Fs{}

View File

@@ -125,21 +125,10 @@ type FolderItems struct {
    Offset     int     `json:"offset"`
    Limit      int     `json:"limit"`
    NextMarker *string `json:"next_marker,omitempty"`
-   // There is some confusion about how this is actually
-   // returned. The []struct has worked for many years, but in
-   // https://github.com/rclone/rclone/issues/8776 box was
-   // returning it not as a list. We don't actually use
-   // this so comment it out.
-   //
-   // Order struct {
-   // 	By        string `json:"by"`
-   // 	Direction string `json:"direction"`
-   // } `json:"order"`
-   //
-   // Order []struct {
-   // 	By        string `json:"by"`
-   // 	Direction string `json:"direction"`
-   // } `json:"order"`
+   Order      []struct {
+       By        string `json:"by"`
+       Direction string `json:"direction"`
+   } `json:"order"`
}

// Parent defined the ID of the parent directory
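For context on the `Order` change above: a `[]struct` field only accepts a JSON array, so a response carrying `order` as a bare object fails to decode, which is what the linked issue #8776 reported. A self-contained sketch of the two shapes (sample payloads invented for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type folderItems struct {
	Order []struct {
		By        string `json:"by"`
		Direction string `json:"direction"`
	} `json:"order"`
}

func main() {
	var fi folderItems

	// The shape the []struct field expects: a JSON list.
	asList := []byte(`{"order":[{"by":"type","direction":"ASC"}]}`)
	fmt.Println(json.Unmarshal(asList, &fi)) // <nil>

	// The shape reported in the issue: a bare JSON object.
	// Fails with "cannot unmarshal object into Go struct field ...".
	asObject := []byte(`{"order":{"by":"type","direction":"ASC"}}`)
	fmt.Println(json.Unmarshal(asObject, &fi))
}
```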
@@ -282,9 +271,9 @@ type User struct {
    ModifiedAt    time.Time `json:"modified_at"`
    Language      string    `json:"language"`
    Timezone      string    `json:"timezone"`
-   SpaceAmount   float64   `json:"space_amount"`
-   SpaceUsed     float64   `json:"space_used"`
-   MaxUploadSize float64   `json:"max_upload_size"`
+   SpaceAmount   int64     `json:"space_amount"`
+   SpaceUsed     int64     `json:"space_used"`
+   MaxUploadSize int64     `json:"max_upload_size"`
    Status        string    `json:"status"`
    JobTitle      string    `json:"job_title"`
    Phone         string    `json:"phone"`

View File

@@ -241,22 +241,18 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs
        DirModTimeUpdatesOnWrite: true,
        PartialUploads:           true,
    }).Fill(ctx, f)
-   canMove, slowHash := true, false
+   canMove := true
    for _, u := range f.upstreams {
        features = features.Mask(ctx, u.f) // Mask all upstream fs
        if !operations.CanServerSideMove(u.f) {
            canMove = false
        }
-       slowHash = slowHash || u.f.Features().SlowHash
    }
    // We can move if all remotes support Move or Copy
    if canMove {
        features.Move = f.Move
    }
-   // If any of upstreams are SlowHash, propagate it
-   features.SlowHash = slowHash
    // Enable ListR when upstreams either support ListR or is local
    // But not when all upstreams are local
    if features.ListR == nil {

View File

@@ -1446,9 +1446,9 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
        }
    }
    usage = &fs.Usage{
-       Total: fs.NewUsageValue(total),        // quota of bytes that can be used
-       Used:  fs.NewUsageValue(used),         // bytes in use
-       Free:  fs.NewUsageValue(total - used), // bytes which can be uploaded before reaching the quota
+       Total: fs.NewUsageValue(int64(total)),        // quota of bytes that can be used
+       Used:  fs.NewUsageValue(int64(used)),         // bytes in use
+       Free:  fs.NewUsageValue(int64(total - used)), // bytes which can be uploaded before reaching the quota
    }
    return usage, nil
}

View File

@@ -14,7 +14,7 @@ import (
)

// errFileNotFound represents a file not found error
-var errFileNotFound = errors.New("file not found")
+var errFileNotFound error = errors.New("file not found")

// getFileCode retrieves the file code for a given file path
func (f *Fs) getFileCode(ctx context.Context, filePath string) (string, error) {

View File

@@ -163,16 +163,6 @@ Enabled by default. Use 0 to disable.`,
            Help:     "Disable TLS 1.3 (workaround for FTP servers with buggy TLS)",
            Default:  false,
            Advanced: true,
-       }, {
-           Name: "allow_insecure_tls_ciphers",
-           Help: `Allow insecure TLS ciphers
-
-Setting this flag will allow the usage of the following TLS ciphers in addition to the secure defaults:
-
-- TLS_RSA_WITH_AES_128_GCM_SHA256
-`,
-           Default:  false,
-           Advanced: true,
        }, {
            Name: "shut_timeout",
            Help: "Maximum time to wait for data connection closing status.",

@@ -246,30 +236,29 @@ a write only folder.
// Options defines the configuration for this backend
type Options struct {
    Host              string               `config:"host"`
    User              string               `config:"user"`
    Pass              string               `config:"pass"`
    Port              string               `config:"port"`
    TLS               bool                 `config:"tls"`
    ExplicitTLS       bool                 `config:"explicit_tls"`
    TLSCacheSize      int                  `config:"tls_cache_size"`
    DisableTLS13      bool                 `config:"disable_tls13"`
-   AllowInsecureTLSCiphers bool           `config:"allow_insecure_tls_ciphers"`
    Concurrency       int                  `config:"concurrency"`
    SkipVerifyTLSCert bool                 `config:"no_check_certificate"`
    DisableEPSV       bool                 `config:"disable_epsv"`
    DisableMLSD       bool                 `config:"disable_mlsd"`
    DisableUTF8       bool                 `config:"disable_utf8"`
    WritingMDTM       bool                 `config:"writing_mdtm"`
    ForceListHidden   bool                 `config:"force_list_hidden"`
    IdleTimeout       fs.Duration          `config:"idle_timeout"`
    CloseTimeout      fs.Duration          `config:"close_timeout"`
    ShutTimeout       fs.Duration          `config:"shut_timeout"`
    AskPassword       bool                 `config:"ask_password"`
    Enc               encoder.MultiEncoder `config:"encoding"`
    SocksProxy        string               `config:"socks_proxy"`
    HTTPProxy         string               `config:"http_proxy"`
    NoCheckUpload     bool                 `config:"no_check_upload"`
}

// Fs represents a remote FTP server
@@ -283,7 +272,6 @@ type Fs struct {
    user     string
    pass     string
    dialAddr string
-   tlsConf  *tls.Config // default TLS client config
    poolMu   sync.Mutex
    pool     []*ftp.ServerConn
    drain    *time.Timer // used to drain the pool when we stop using the connections

@@ -409,14 +397,9 @@ func shouldRetry(ctx context.Context, err error) (bool, error) {
func (f *Fs) tlsConfig() *tls.Config {
    var tlsConfig *tls.Config
    if f.opt.TLS || f.opt.ExplicitTLS {
-       if f.tlsConf != nil {
-           tlsConfig = f.tlsConf.Clone()
-       } else {
-           tlsConfig = new(tls.Config)
-       }
-       tlsConfig.ServerName = f.opt.Host
-       if f.opt.SkipVerifyTLSCert {
-           tlsConfig.InsecureSkipVerify = true
+       tlsConfig = &tls.Config{
+           ServerName:         f.opt.Host,
+           InsecureSkipVerify: f.opt.SkipVerifyTLSCert,
        }
        if f.opt.TLSCacheSize > 0 {
            tlsConfig.ClientSessionCache = tls.NewLRUClientSessionCache(f.opt.TLSCacheSize)

@@ -424,14 +407,6 @@ func (f *Fs) tlsConfig() *tls.Config {
        if f.opt.DisableTLS13 {
            tlsConfig.MaxVersion = tls.VersionTLS12
        }
-       if f.opt.AllowInsecureTLSCiphers {
-           var ids []uint16
-           // Read default ciphers
-           for _, cs := range tls.CipherSuites() {
-               ids = append(ids, cs.ID)
-           }
-           tlsConfig.CipherSuites = append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256)
-       }
    }
    return tlsConfig
}
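Note that the removed `allow_insecure_tls_ciphers` branch extends Go's secure default cipher suites rather than replacing them, so the legacy suite is only ever additive. A standalone sketch of the same idiom using only the standard library (the host name is a placeholder, and `CipherSuites` only applies to TLS 1.2 and below):

```go
package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Collect the IDs of Go's secure default cipher suites...
	var ids []uint16
	for _, cs := range tls.CipherSuites() {
		ids = append(ids, cs.ID)
	}
	// ...then append the single legacy suite the option allowed.
	cfg := &tls.Config{
		ServerName:   "ftp.example.com", // placeholder host
		CipherSuites: append(ids, tls.TLS_RSA_WITH_AES_128_GCM_SHA256),
	}
	fmt.Println(len(cfg.CipherSuites), "cipher suites enabled")
}
```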
@@ -677,7 +652,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (ff fs.Fs
        dialAddr: dialAddr,
        tokens:   pacer.NewTokenDispenser(opt.Concurrency),
        pacer:    fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
-       tlsConf:  fshttp.NewTransport(ctx).TLSClientConfig,
    }
    f.features = (&fs.Features{
        CanHaveEmptyDirectories: true,

View File

@@ -52,7 +52,7 @@ func (f *Fs) testUploadTimeout(t *testing.T) {
        ci.Timeout = saveTimeout
    }()
    ci.LowLevelRetries = 1
-   ci.Timeout = fs.Duration(idleTimeout)
+   ci.Timeout = idleTimeout
    upload := func(concurrency int, shutTimeout time.Duration) (obj fs.Object, err error) {
        fixFs := deriveFs(ctx, t, f, settings{

View File

@@ -760,7 +760,7 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *storage.
}

// listDir lists a single directory
-func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
+func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
    // List the objects
    err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, object *storage.Object, isDirectory bool) error {
        entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
@@ -768,16 +768,16 @@ func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addB
            return err
        }
        if entry != nil {
-           return callback(entry)
+           entries = append(entries, entry)
        }
        return nil
    })
    if err != nil {
-       return err
+       return nil, err
    }
    // bucket must be present if listing succeeded
    f.cache.MarkOK(bucket)
-   return err
+   return entries, err
}

// listBuckets lists the buckets
@@ -820,46 +820,14 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
-   return list.WithListP(ctx, dir, f)
-}
-
-// ListP lists the objects and directories of the Fs starting
-// from dir non recursively into out.
-//
-// dir should be "" to start from the root, and should not
-// have trailing slashes.
-//
-// This should return ErrDirNotFound if the directory isn't
-// found.
-//
-// It should call callback for each tranche of entries read.
-// These need not be returned in any particular order. If
-// callback returns an error then the listing will stop
-// immediately.
-func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
-   list := list.NewHelper(callback)
    bucket, directory := f.split(dir)
    if bucket == "" {
        if directory != "" {
-           return fs.ErrorListBucketRequired
+           return nil, fs.ErrorListBucketRequired
        }
-       entries, err := f.listBuckets(ctx)
-       if err != nil {
-           return err
-       }
-       for _, entry := range entries {
-           err = list.Add(entry)
-           if err != nil {
-               return err
-           }
-       }
-   } else {
-       err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
-       if err != nil {
-           return err
-       }
+       return f.listBuckets(ctx)
    }
-   return list.Flush()
+   return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}

// ListR lists the objects and directories of the Fs starting
@@ -1494,7 +1462,6 @@ var (
    _ fs.Copier      = &Fs{}
    _ fs.PutStreamer = &Fs{}
    _ fs.ListRer     = &Fs{}
-   _ fs.ListPer     = &Fs{}
    _ fs.Object      = &Object{}
    _ fs.MimeTyper   = &Object{}
)

View File

@@ -117,22 +117,16 @@ func init() {
    } else {
        oauthConfig.Scopes = scopesReadWrite
    }
-   return oauthutil.ConfigOut("warning1", &oauthutil.Options{
+   return oauthutil.ConfigOut("warning", &oauthutil.Options{
        OAuth2Config: oauthConfig,
    })
-case "warning1":
+case "warning":
    // Warn the user as required by google photos integration
-   return fs.ConfigConfirm("warning2", true, "config_warning", `Warning
+   return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning

IMPORTANT: All media items uploaded to Google Photos with rclone
are stored in full resolution at original quality. These uploads
will count towards storage in your Google Account.`)
-case "warning2":
-   // Warn the user that rclone can no longer download photos it didn't upload from google photos
-   return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
-
-IMPORTANT: Due to Google policy changes rclone can now only download photos it uploaded.`)
case "warning_done":
    return nil, nil
}

View File

@@ -371,9 +371,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
        return nil, err
    }
    return &fs.Usage{
-       Total: fs.NewUsageValue(info.Capacity),
-       Used:  fs.NewUsageValue(info.Used),
-       Free:  fs.NewUsageValue(info.Remaining),
+       Total: fs.NewUsageValue(int64(info.Capacity)),
+       Used:  fs.NewUsageValue(int64(info.Used)),
+       Free:  fs.NewUsageValue(int64(info.Remaining)),
    }, nil
}

View File

@@ -421,9 +421,6 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
-   if src.Size() == 0 {
-       return nil, fs.ErrorCantUploadEmptyFiles
-   }
    return uploadFile(ctx, f, in, src.Remote(), options...)
}

@@ -662,9 +659,6 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
// But for unknown-sized objects (indicated by src.Size() == -1), Upload should either
// return an error or update the object properly (rather than e.g. calling panic).
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
-   if src.Size() == 0 {
-       return fs.ErrorCantUploadEmptyFiles
-   }
    srcRemote := o.Remote()

@@ -676,7 +670,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
    var resp *client.UploadResult
-   err = o.fs.pacer.CallNoRetry(func() (bool, error) {
+   err = o.fs.pacer.Call(func() (bool, error) {
        var res *http.Response
        res, resp, err = o.fs.ik.Upload(ctx, in, client.UploadParam{
            FileName: fileName,

@@ -731,7 +725,7 @@ func uploadFile(ctx context.Context, f *Fs, in io.Reader, srcRemote string, opti
    UseUniqueFileName := new(bool)
    *UseUniqueFileName = false
-   err := f.pacer.CallNoRetry(func() (bool, error) {
+   err := f.pacer.Call(func() (bool, error) {
        var res *http.Response
        var err error
        res, _, err = f.ik.Upload(ctx, in, client.UploadParam{

@@ -800,10 +794,35 @@ func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error)
    return metadata, nil
}

+// Copy src to this remote using server-side copy operations.
+//
+// This is stored with the remote path given.
+//
+// It returns the destination Object and a possible error.
+//
+// Will only be called if src.Fs().Name() == f.Name()
+//
+// If it isn't possible then return fs.ErrorCantMove
+func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
+   srcObj, ok := src.(*Object)
+   if !ok {
+       return nil, fs.ErrorCantMove
+   }
+   file, err := srcObj.Open(ctx)
+   if err != nil {
+       return nil, err
+   }
+   return uploadFile(ctx, f, file, remote)
+}

// Check the interfaces are satisfied.
var (
    _ fs.Fs           = &Fs{}
    _ fs.Purger       = &Fs{}
    _ fs.PublicLinker = &Fs{}
    _ fs.Object       = &Object{}
+   _ fs.Copier       = &Fs{}
)

View File

@@ -590,7 +590,7 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
        return "", err
    }
    bucket, bucketPath := f.split(remote)
-   return path.Join(f.opt.FrontEndpoint, "/download/", bucket, rest.URLPathEscapeAll(bucketPath)), nil
+   return path.Join(f.opt.FrontEndpoint, "/download/", bucket, quotePath(bucketPath)), nil
}

// Copy src to this remote using server-side copy operations.
@@ -622,7 +622,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (_ fs.Objec
        "x-archive-auto-make-bucket": "1",
        "x-archive-queue-derive":     "0",
        "x-archive-keep-old-version": "0",
-       "x-amz-copy-source":          rest.URLPathEscapeAll(path.Join("/", srcBucket, srcPath)),
+       "x-amz-copy-source":          quotePath(path.Join("/", srcBucket, srcPath)),
        "x-amz-metadata-directive":   "COPY",
        "x-archive-filemeta-sha1":    srcObj.sha1,
        "x-archive-filemeta-md5":     srcObj.md5,

@@ -778,7 +778,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
    // make a GET request to (frontend)/download/:item/:path
    opts := rest.Opts{
        Method:  "GET",
-       Path:    path.Join("/download/", o.fs.root, rest.URLPathEscapeAll(o.fs.opt.Enc.FromStandardPath(o.remote))),
+       Path:    path.Join("/download/", o.fs.root, quotePath(o.fs.opt.Enc.FromStandardPath(o.remote))),
        Options: optionsFixed,
    }
    err = o.fs.pacer.Call(func() (bool, error) {

@@ -1334,6 +1334,16 @@ func trimPathPrefix(s, prefix string, enc encoder.MultiEncoder) string {
    return enc.ToStandardPath(strings.TrimPrefix(s, prefix+"/"))
}

+// mimics urllib.parse.quote() on Python; exclude / from url.PathEscape
+func quotePath(s string) string {
+   seg := strings.Split(s, "/")
+   newValues := []string{}
+   for _, v := range seg {
+       newValues = append(newValues, url.PathEscape(v))
+   }
+   return strings.Join(newValues, "/")
+}

var (
    _ fs.Fs     = &Fs{}
    _ fs.Copier = &Fs{}
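A quick illustration of what the added `quotePath` does compared with escaping the whole path at once: each segment is percent-encoded with `url.PathEscape`, but the `/` separators survive, mimicking Python's `urllib.parse.quote`. A self-contained sketch (the sample path is invented):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// quotePath escapes each path segment but keeps the / separators.
func quotePath(s string) string {
	seg := strings.Split(s, "/")
	out := make([]string, 0, len(seg))
	for _, v := range seg {
		out = append(out, url.PathEscape(v))
	}
	return strings.Join(out, "/")
}

func main() {
	fmt.Println(quotePath("dir one/file #2.txt"))
	// prints: dir%20one/file%20%232.txt
	fmt.Println(url.PathEscape("dir one/file #2.txt"))
	// prints: dir%20one%2Ffile%20%232.txt - the / is escaped too
}
```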

View File

@@ -461,7 +461,7 @@ func translateErrorsDir(err error) error {
    return err
}

-// translateErrorsObject translates Koofr errors to rclone errors (for an object operation)
+// translatesErrorsObject translates Koofr errors to rclone errors (for an object operation)
func translateErrorsObject(err error) error {
    switch err := err.(type) {
    case httpclient.InvalidStatusError:

View File

@@ -305,12 +305,6 @@ only useful for reading.
            Help: "The last status change time.",
        }},
    },
-   {
-       Name:     "hashes",
-       Help:     `Comma separated list of supported checksum types.`,
-       Default:  fs.CommaSepList{},
-       Advanced: true,
-   },
    {
        Name: config.ConfigEncoding,
        Help: config.ConfigEncodingHelp,

@@ -337,7 +331,6 @@ type Options struct {
    NoSparse     bool                 `config:"no_sparse"`
    NoSetModTime bool                 `config:"no_set_modtime"`
    TimeType     timeType             `config:"time_type"`
-   Hashes       fs.CommaSepList      `config:"hashes"`
    Enc          encoder.MultiEncoder `config:"encoding"`
    NoClone      bool                 `config:"no_clone"`
}

@@ -671,12 +664,8 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
        name := fi.Name()
        mode := fi.Mode()
        newRemote := f.cleanRemote(dir, name)
-       symlinkFlag := os.ModeSymlink
-       if runtime.GOOS == "windows" {
-           symlinkFlag |= os.ModeIrregular
-       }
        // Follow symlinks if required
-       if f.opt.FollowSymlinks && (mode&symlinkFlag) != 0 {
+       if f.opt.FollowSymlinks && (mode&os.ModeSymlink) != 0 {
            localPath := filepath.Join(fsDirPath, name)
            fi, err = os.Stat(localPath)
            // Quietly skip errors on excluded files and directories

@@ -698,13 +687,13 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
        if fi.IsDir() {
            // Ignore directories which are symlinks. These are junction points under windows which
            // are kind of a souped up symlink. Unix doesn't have directories which are symlinks.
-           if (mode&symlinkFlag) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
+           if (mode&os.ModeSymlink) == 0 && f.dev == readDevice(fi, f.opt.OneFileSystem) {
                d := f.newDirectory(newRemote, fi)
                entries = append(entries, d)
            }
        } else {
            // Check whether this link should be translated
-           if f.opt.TranslateSymlinks && fi.Mode()&symlinkFlag != 0 {
+           if f.opt.TranslateSymlinks && fi.Mode()&os.ModeSymlink != 0 {
                newRemote += fs.LinkSuffix
            }
            // Don't include non directory if not included

@@ -1032,19 +1021,6 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set {
-   if len(f.opt.Hashes) > 0 {
-       // Return only configured hashes.
-       // Note: Could have used hash.SupportOnly to limit supported hashes for all hash related features.
-       var supported hash.Set
-       for _, hashName := range f.opt.Hashes {
-           var ht hash.Type
-           if err := ht.Set(hashName); err != nil {
-               fs.Infof(nil, "Invalid token %q in hash string %q", hashName, f.opt.Hashes.String())
-           }
-           supported.Add(ht)
-       }
-       return supported
-   }
    return hash.Supported()
}

View File

@@ -634,7 +634,7 @@ func (f *Fs) readItemMetaData(ctx context.Context, path string) (entry fs.DirEnt
    return
}

-// itemToDirEntry converts API item to rclone directory entry
+// itemToEntry converts API item to rclone directory entry
// The dirSize return value is:
//
//	<0 - for a file or in case of error

View File

@@ -946,9 +946,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
        return nil, fmt.Errorf("failed to get Mega Quota: %w", err)
    }
    usage := &fs.Usage{
-       Total: fs.NewUsageValue(q.Mstrg),           // quota of bytes that can be used
-       Used:  fs.NewUsageValue(q.Cstrg),           // bytes in use
-       Free:  fs.NewUsageValue(q.Mstrg - q.Cstrg), // bytes which can be uploaded before reaching the quota
+       Total: fs.NewUsageValue(int64(q.Mstrg)),           // quota of bytes that can be used
+       Used:  fs.NewUsageValue(int64(q.Cstrg)),           // bytes in use
+       Free:  fs.NewUsageValue(int64(q.Mstrg - q.Cstrg)), // bytes which can be uploaded before reaching the quota
    }
    return usage, nil
}

View File

@@ -325,12 +325,13 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
}

// listDir lists the bucket to the entries
-func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
+func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
    // List the objects and directories
    err = f.list(ctx, bucket, directory, prefix, addBucket, false, func(remote string, entry fs.DirEntry, isDirectory bool) error {
-       return callback(entry)
+       entries = append(entries, entry)
+       return nil
    })
-   return err
+   return entries, err
}

// listBuckets lists the buckets to entries
@@ -353,46 +354,15 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
-   return list.WithListP(ctx, dir, f)
-}
-
-// ListP lists the objects and directories of the Fs starting
-// from dir non recursively into out.
-//
-// dir should be "" to start from the root, and should not
-// have trailing slashes.
-//
-// This should return ErrDirNotFound if the directory isn't
-// found.
-//
-// It should call callback for each tranche of entries read.
-// These need not be returned in any particular order. If
-// callback returns an error then the listing will stop
-// immediately.
-func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
-   list := list.NewHelper(callback)
+   // defer fslog.Trace(dir, "")("entries = %q, err = %v", &entries, &err)
    bucket, directory := f.split(dir)
    if bucket == "" {
        if directory != "" {
-           return fs.ErrorListBucketRequired
+           return nil, fs.ErrorListBucketRequired
        }
-       entries, err := f.listBuckets(ctx)
-       if err != nil {
-           return err
-       }
-       for _, entry := range entries {
-           err = list.Add(entry)
-           if err != nil {
-               return err
-           }
-       }
-   } else {
-       err := f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "", list.Add)
-       if err != nil {
-           return err
-       }
+       return f.listBuckets(ctx)
    }
-   return list.Flush()
+   return f.listDir(ctx, bucket, directory, f.rootDirectory, f.rootBucket == "")
}

// ListR lists the objects and directories of the Fs starting
@@ -659,7 +629,6 @@ var (
    _ fs.Copier      = &Fs{}
    _ fs.PutStreamer = &Fs{}
    _ fs.ListRer     = &Fs{}
-   _ fs.ListPer     = &Fs{}
    _ fs.Object      = &Object{}
    _ fs.MimeTyper   = &Object{}
)

View File

@@ -12,7 +12,6 @@ import (
    "strings"
    "time"

-   "github.com/ncw/swift/v2"
    "github.com/oracle/oci-go-sdk/v65/common"
    "github.com/oracle/oci-go-sdk/v65/objectstorage"
    "github.com/rclone/rclone/fs"

@@ -34,46 +33,9 @@ func init() {
        NewFs:       NewFs,
        CommandHelp: commandHelp,
        Options:     newOptions(),
-       MetadataInfo: &fs.MetadataInfo{
-           System: systemMetadataInfo,
-           Help:   `User metadata is stored as opc-meta- keys.`,
-       },
    })
}

-var systemMetadataInfo = map[string]fs.MetadataHelp{
-   "opc-meta-mode": {
-       Help:    "File type and mode",
-       Type:    "octal, unix style",
-       Example: "0100664",
-   },
-   "opc-meta-uid": {
-       Help:    "User ID of owner",
-       Type:    "decimal number",
-       Example: "500",
-   },
-   "opc-meta-gid": {
-       Help:    "Group ID of owner",
-       Type:    "decimal number",
-       Example: "500",
-   },
-   "opc-meta-atime": {
-       Help:    "Time of last access",
-       Type:    "ISO 8601",
-       Example: "2025-06-30T22:27:43-04:00",
-   },
-   "opc-meta-mtime": {
-       Help:    "Time of last modification",
-       Type:    "ISO 8601",
-       Example: "2025-06-30T22:27:43-04:00",
-   },
-   "opc-meta-btime": {
-       Help:    "Time of file birth (creation)",
-       Type:    "ISO 8601",
-       Example: "2025-06-30T22:27:43-04:00",
-   },
-}

// Fs represents a remote object storage server
type Fs struct {
    name string // name of this remote

@@ -120,7 +82,6 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
    }
    f.setRoot(root)
    f.features = (&fs.Features{
-       ReadMetadata:  true,
        ReadMimeType:  true,
        WriteMimeType: true,
        BucketBased:   true,

@@ -254,47 +215,15 @@ func (f *Fs) split(rootRelativePath string) (bucketName, bucketPath string) {
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
-   return list.WithListP(ctx, dir, f)
-}
-
-// ListP lists the objects and directories of the Fs starting
-// from dir non recursively into out.
-//
-// dir should be "" to start from the root, and should not
-// have trailing slashes.
-//
-// This should return ErrDirNotFound if the directory isn't
-// found.
-//
-// It should call callback for each tranche of entries read.
-// These need not be returned in any particular order. If
-// callback returns an error then the listing will stop
-// immediately.
-func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
-   list := list.NewHelper(callback)
    bucketName, directory := f.split(dir)
    fs.Debugf(f, "listing: bucket : %v, directory: %v", bucketName, dir)
    if bucketName == "" {
        if directory != "" {
-           return fs.ErrorListBucketRequired
+           return nil, fs.ErrorListBucketRequired
        }
-       entries, err := f.listBuckets(ctx)
-       if err != nil {
-           return err
-       }
-       for _, entry := range entries {
-           err = list.Add(entry)
-           if err != nil {
-               return err
-           }
-       }
-   } else {
-       err := f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "", list.Add)
-       if err != nil {
-           return err
-       }
+       return f.listBuckets(ctx)
    }
-   return list.Flush()
+   return f.listDir(ctx, bucketName, directory, f.rootDirectory, f.rootBucket == "")
}

// listFn is called from list to handle an object.
@@ -443,24 +372,24 @@ func (f *Fs) itemToDirEntry(ctx context.Context, remote string, object *objectst
}

// listDir lists a single directory
-func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool, callback func(fs.DirEntry) error) (err error) {
+func (f *Fs) listDir(ctx context.Context, bucket, directory, prefix string, addBucket bool) (entries fs.DirEntries, err error) {
    fn := func(remote string, object *objectstorage.ObjectSummary, isDirectory bool) error {
        entry, err := f.itemToDirEntry(ctx, remote, object, isDirectory)
        if err != nil {
            return err
        }
        if entry != nil {
-           return callback(entry)
+           entries = append(entries, entry)
        }
        return nil
    }
    err = f.list(ctx, bucket, directory, prefix, addBucket, false, 0, fn)
    if err != nil {
-       return err
+       return nil, err
    }
    // bucket must be present if listing succeeded
    f.cache.MarkOK(bucket)
-   return nil
+   return entries, nil
}

// listBuckets returns all the buckets to out
@@ -759,45 +688,12 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
    return list.Flush()
}

-// Metadata returns metadata for an object
-//
-// It should return nil if there is no Metadata
-func (o *Object) Metadata(ctx context.Context) (metadata fs.Metadata, err error) {
-   err = o.readMetaData(ctx)
-   if err != nil {
-       return nil, err
-   }
-   metadata = make(fs.Metadata, len(o.meta)+7)
-   for k, v := range o.meta {
-       switch k {
-       case metaMtime:
-           if modTime, err := swift.FloatStringToTime(v); err == nil {
-               metadata["mtime"] = modTime.Format(time.RFC3339Nano)
-           }
-       case metaMD5Hash:
-           // don't write hash metadata
-       default:
-           metadata[k] = v
-       }
-   }
-   if o.mimeType != "" {
-       metadata["content-type"] = o.mimeType
-   }
-   if !o.lastModified.IsZero() {
-       metadata["btime"] = o.lastModified.Format(time.RFC3339Nano)
-   }
-   return metadata, nil
-}

// Check the interfaces are satisfied
var (
    _ fs.Fs              = &Fs{}
    _ fs.Copier          = &Fs{}
    _ fs.PutStreamer     = &Fs{}
    _ fs.ListRer         = &Fs{}
-   _ fs.ListPer         = &Fs{}
    _ fs.Commander       = &Fs{}
    _ fs.CleanUpper      = &Fs{}
    _ fs.OpenChunkWriter = &Fs{}

View File

@@ -1,333 +0,0 @@
package pikpak
import (
"context"
"fmt"
"io"
"sort"
"strings"
"sync"
"time"
"github.com/aws/aws-sdk-go-v2/aws"
"github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/aws/aws-sdk-go-v2/service/s3/types"
"github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/pool"
"golang.org/x/sync/errgroup"
)
const (
bufferSize = 1024 * 1024 // default size of the pages used in the reader
bufferCacheSize = 64 // max number of buffers to keep in cache
bufferCacheFlushTime = 5 * time.Second // flush the cached buffers after this long
)
// bufferPool is a global pool of buffers
var (
bufferPool *pool.Pool
bufferPoolOnce sync.Once
)
// get a buffer pool
func getPool() *pool.Pool {
bufferPoolOnce.Do(func() {
ci := fs.GetConfig(context.Background())
// Initialise the buffer pool when used
bufferPool = pool.New(bufferCacheFlushTime, bufferSize, bufferCacheSize, ci.UseMmap)
})
return bufferPool
}
// NewRW gets a pool.RW using the multipart pool
func NewRW() *pool.RW {
return pool.NewRW(getPool())
}
// Upload does a multipart upload in parallel
func (w *pikpakChunkWriter) Upload(ctx context.Context) (err error) {
// make concurrency machinery
tokens := pacer.NewTokenDispenser(w.con)
uploadCtx, cancel := context.WithCancel(ctx)
defer cancel()
defer atexit.OnError(&err, func() {
cancel()
fs.Debugf(w.o, "multipart upload: Cancelling...")
errCancel := w.Abort(ctx)
if errCancel != nil {
fs.Debugf(w.o, "multipart upload: failed to cancel: %v", errCancel)
}
})()
var (
g, gCtx = errgroup.WithContext(uploadCtx)
finished = false
off int64
size = w.size
chunkSize = w.chunkSize
)
// Do the accounting manually
in, acc := accounting.UnWrapAccounting(w.in)
for partNum := int64(0); !finished; partNum++ {
// Get a block of memory from the pool and token which limits concurrency.
tokens.Get()
rw := NewRW()
if acc != nil {
rw.SetAccounting(acc.AccountRead)
}
free := func() {
// return the memory and token
_ = rw.Close() // Can't return an error
tokens.Put()
}
// Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil {
free()
break
}
// Read the chunk
var n int64
n, err = io.CopyN(rw, in, chunkSize)
if err == io.EOF {
if n == 0 && partNum != 0 { // end if no data and if not first chunk
free()
break
}
finished = true
} else if err != nil {
free()
return fmt.Errorf("multipart upload: failed to read source: %w", err)
}
partNum := partNum
partOff := off
off += n
g.Go(func() (err error) {
defer free()
fs.Debugf(w.o, "multipart upload: starting chunk %d size %v offset %v/%v", partNum, fs.SizeSuffix(n), fs.SizeSuffix(partOff), fs.SizeSuffix(size))
_, err = w.WriteChunk(gCtx, int32(partNum), rw)
return err
})
}
err = g.Wait()
if err != nil {
return err
}
err = w.Close(ctx)
if err != nil {
return fmt.Errorf("multipart upload: failed to finalise: %w", err)
}
return nil
}
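The concurrency bound above comes from the token-dispenser idiom; a stripped-down sketch under the same imports, where the worker body is hypothetical:

    tokens := pacer.NewTokenDispenser(4) // at most 4 uploads in flight
    var g errgroup.Group
    for part := 0; part < 10; part++ {
        tokens.Get() // blocks until a token is free
        g.Go(func() error {
            defer tokens.Put() // release the token when this part is done
            return nil         // the real worker uploads one part here
        })
    }
    if err := g.Wait(); err != nil {
        // handle the first worker error
    }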
var warnStreamUpload sync.Once
// state of ChunkWriter
type pikpakChunkWriter struct {
chunkSize int64
size int64
con int
f *Fs
o *Object
in io.Reader
mu sync.Mutex
completedParts []types.CompletedPart
client *s3.Client
mOut *s3.CreateMultipartUploadOutput
}
func (f *Fs) newChunkWriter(ctx context.Context, remote string, size int64, p *api.ResumableParams, in io.Reader, options ...fs.OpenOption) (w *pikpakChunkWriter, err error) {
// Temporary Object under construction
o := &Object{
fs: f,
remote: remote,
}
// calculate size of parts
chunkSize := f.opt.ChunkSize
// size can be -1 here meaning we don't know the size of the incoming file. We use ChunkSize
// buffers here (default 5 MiB). With a maximum number of parts (10,000) this will be a file of
// 48 GiB which seems like a not too unreasonable limit.
if size == -1 {
warnStreamUpload.Do(func() {
fs.Logf(f, "Streaming uploads using chunk size %v will have maximum file size of %v",
f.opt.ChunkSize, fs.SizeSuffix(int64(chunkSize)*int64(maxUploadParts)))
})
} else {
chunkSize = chunksize.Calculator(o, size, maxUploadParts, chunkSize)
}
client, err := f.newS3Client(ctx, p)
if err != nil {
return nil, fmt.Errorf("failed to create upload client: %w", err)
}
w = &pikpakChunkWriter{
chunkSize: int64(chunkSize),
size: size,
con: max(1, f.opt.UploadConcurrency),
f: f,
o: o,
in: in,
completedParts: make([]types.CompletedPart, 0),
client: client,
}
req := &s3.CreateMultipartUploadInput{
Bucket: &p.Bucket,
Key: &p.Key,
}
// Apply upload options
for _, option := range options {
key, value := option.Header()
lowerKey := strings.ToLower(key)
switch lowerKey {
case "":
// ignore
case "cache-control":
req.CacheControl = aws.String(value)
case "content-disposition":
req.ContentDisposition = aws.String(value)
case "content-encoding":
req.ContentEncoding = aws.String(value)
case "content-type":
req.ContentType = aws.String(value)
}
}
err = w.f.pacer.Call(func() (bool, error) {
w.mOut, err = w.client.CreateMultipartUpload(ctx, req)
return w.shouldRetry(ctx, err)
})
if err != nil {
return nil, fmt.Errorf("create multipart upload failed: %w", err)
}
fs.Debugf(w.o, "multipart upload: %q initiated", *w.mOut.UploadId)
return
}
// shouldRetry returns a boolean as to whether this err
// deserve to be retried. It returns the err as a convenience
func (w *pikpakChunkWriter) shouldRetry(ctx context.Context, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if fserrors.ShouldRetry(err) {
return true, err
}
return false, err
}
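shouldRetry is meant to sit at the tail of a pacer.Call loop, as used throughout this file; schematically (doUploadCall is hypothetical):

    err := w.f.pacer.Call(func() (bool, error) {
        err := doUploadCall(ctx)
        // true tells the pacer to sleep and retry; false returns err as-is
        return w.shouldRetry(ctx, err)
    })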
// add a part number and etag to the completed parts
func (w *pikpakChunkWriter) addCompletedPart(part types.CompletedPart) {
w.mu.Lock()
defer w.mu.Unlock()
w.completedParts = append(w.completedParts, part)
}
// WriteChunk will write chunk number with reader bytes, where chunk number >= 0
func (w *pikpakChunkWriter) WriteChunk(ctx context.Context, chunkNumber int32, reader io.ReadSeeker) (currentChunkSize int64, err error) {
if chunkNumber < 0 {
err := fmt.Errorf("invalid chunk number provided: %v", chunkNumber)
return -1, err
}
partNumber := chunkNumber + 1
var res *s3.UploadPartOutput
err = w.f.pacer.Call(func() (bool, error) {
// Discover the size by seeking to the end
currentChunkSize, err = reader.Seek(0, io.SeekEnd)
if err != nil {
return false, err
}
// rewind the reader on retry and after reading md5
_, err := reader.Seek(0, io.SeekStart)
if err != nil {
return false, err
}
res, err = w.client.UploadPart(ctx, &s3.UploadPartInput{
Bucket: w.mOut.Bucket,
Key: w.mOut.Key,
UploadId: w.mOut.UploadId,
PartNumber: &partNumber,
Body: reader,
})
if err != nil {
if chunkNumber <= 8 {
return w.shouldRetry(ctx, err)
}
// retry all chunks once have done the first few
return true, err
}
return false, nil
})
if err != nil {
return -1, fmt.Errorf("failed to upload chunk %d with %v bytes: %w", partNumber, currentChunkSize, err)
}
w.addCompletedPart(types.CompletedPart{
PartNumber: &partNumber,
ETag: res.ETag,
})
fs.Debugf(w.o, "multipart upload: wrote chunk %d with %v bytes", partNumber, currentChunkSize)
return currentChunkSize, err
}
// Abort the multipart upload
func (w *pikpakChunkWriter) Abort(ctx context.Context) (err error) {
// Abort the upload session
err = w.f.pacer.Call(func() (bool, error) {
_, err = w.client.AbortMultipartUpload(ctx, &s3.AbortMultipartUploadInput{
Bucket: w.mOut.Bucket,
Key: w.mOut.Key,
UploadId: w.mOut.UploadId,
})
return w.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to abort multipart upload %q: %w", *w.mOut.UploadId, err)
}
fs.Debugf(w.o, "multipart upload: %q aborted", *w.mOut.UploadId)
return
}
// Close and finalise the multipart upload
func (w *pikpakChunkWriter) Close(ctx context.Context) (err error) {
// sort the completed parts by part number
sort.Slice(w.completedParts, func(i, j int) bool {
return *w.completedParts[i].PartNumber < *w.completedParts[j].PartNumber
})
// Finalise the upload session
err = w.f.pacer.Call(func() (bool, error) {
_, err = w.client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{
Bucket: w.mOut.Bucket,
Key: w.mOut.Key,
UploadId: w.mOut.UploadId,
MultipartUpload: &types.CompletedMultipartUpload{
Parts: w.completedParts,
},
})
return w.shouldRetry(ctx, err)
})
if err != nil {
return fmt.Errorf("failed to complete multipart upload: %w", err)
}
fs.Debugf(w.o, "multipart upload: %q finished", *w.mOut.UploadId)
return
}
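Taken together, the writer above is driven in two steps; a minimal sketch, assuming resumable parameters p returned by the PikPak API (the wrapper name is illustrative):

    func (f *Fs) uploadMultipartSketch(ctx context.Context, name string, size int64, p *api.ResumableParams, in io.Reader) error {
        w, err := f.newChunkWriter(ctx, name, size, p, in)
        if err != nil {
            return fmt.Errorf("multipart upload failed to initialise: %w", err)
        }
        // Upload reads chunks from in, writes up to w.con parts in parallel,
        // aborts the S3 session on error and completes it on success
        return w.Upload(ctx)
    }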


@@ -41,10 +41,12 @@ import (
"github.com/aws/aws-sdk-go-v2/aws" "github.com/aws/aws-sdk-go-v2/aws"
awsconfig "github.com/aws/aws-sdk-go-v2/config" awsconfig "github.com/aws/aws-sdk-go-v2/config"
"github.com/aws/aws-sdk-go-v2/credentials" "github.com/aws/aws-sdk-go-v2/credentials"
"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
"github.com/aws/aws-sdk-go-v2/service/s3" "github.com/aws/aws-sdk-go-v2/service/s3"
"github.com/rclone/rclone/backend/pikpak/api" "github.com/rclone/rclone/backend/pikpak/api"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
@@ -64,22 +66,17 @@ import (
// Constants
const (
clientID = "YUMx5nI8ZU8Ap8pm"
clientVersion = "2.0.0"
packageName = "mypikpak.com"
defaultUserAgent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0"
minSleep = 100 * time.Millisecond
maxSleep = 2 * time.Second
taskWaitTime = 500 * time.Millisecond
decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api-drive.mypikpak.com"
-maxUploadParts = 10000 // Part number must be an integer between 1 and 10000, inclusive.
-defaultChunkSize = fs.SizeSuffix(1024 * 1024 * 5) // Part size should be in [100KB, 5GB]
-minChunkSize = 100 * fs.Kibi
-maxChunkSize = 5 * fs.Gibi
-defaultUploadCutoff = fs.SizeSuffix(200 * 1024 * 1024)
-maxUploadCutoff = 5 * fs.Gibi // maximum allowed size for singlepart uploads
+minChunkSize = fs.SizeSuffix(manager.MinUploadPartSize)
+defaultUploadConcurrency = manager.DefaultUploadConcurrency
)
// Globals
@@ -226,14 +223,6 @@ Fill in for rclone to use a non root folder as its starting point.
Help: "Files bigger than this will be cached on disk to calculate hash if required.", Help: "Files bigger than this will be cached on disk to calculate hash if required.",
Default: fs.SizeSuffix(10 * 1024 * 1024), Default: fs.SizeSuffix(10 * 1024 * 1024),
Advanced: true, Advanced: true,
}, {
Name: "upload_cutoff",
Help: `Cutoff for switching to chunked upload.
Any files larger than this will be uploaded in chunks of chunk_size.
The minimum is 0 and the maximum is 5 GiB.`,
Default: defaultUploadCutoff,
Advanced: true,
}, { }, {
Name: "chunk_size", Name: "chunk_size",
Help: `Chunk size for multipart uploads. Help: `Chunk size for multipart uploads.
@@ -252,7 +241,7 @@ large file of known size to stay below the 10,000 chunks limit.
Increasing the chunk size decreases the accuracy of the progress Increasing the chunk size decreases the accuracy of the progress
statistics displayed with "-P" flag.`, statistics displayed with "-P" flag.`,
Default: defaultChunkSize, Default: minChunkSize,
Advanced: true, Advanced: true,
}, { }, {
Name: "upload_concurrency", Name: "upload_concurrency",
@@ -268,7 +257,7 @@ in memory.
If you are uploading small numbers of large files over high-speed links If you are uploading small numbers of large files over high-speed links
and these uploads do not fully utilize your bandwidth, then increasing and these uploads do not fully utilize your bandwidth, then increasing
this may help to speed up the transfers.`, this may help to speed up the transfers.`,
Default: 4, Default: defaultUploadConcurrency,
Advanced: true, Advanced: true,
}, { }, {
Name: config.ConfigEncoding,
@@ -305,7 +294,6 @@ type Options struct {
NoMediaLink bool `config:"no_media_link"`
HashMemoryThreshold fs.SizeSuffix `config:"hash_memory_limit"`
ChunkSize fs.SizeSuffix `config:"chunk_size"`
-UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
UploadConcurrency int `config:"upload_concurrency"`
Enc encoder.MultiEncoder `config:"encoding"`
}
@@ -541,39 +529,6 @@ func (f *Fs) newClientWithPacer(ctx context.Context) (err error) {
return nil
}
-func checkUploadChunkSize(cs fs.SizeSuffix) error {
-if cs < minChunkSize {
-return fmt.Errorf("%s is less than %s", cs, minChunkSize)
-}
-if cs > maxChunkSize {
-return fmt.Errorf("%s is greater than %s", cs, maxChunkSize)
-}
-return nil
-}
-func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
-err = checkUploadChunkSize(cs)
-if err == nil {
-old, f.opt.ChunkSize = f.opt.ChunkSize, cs
-}
-return
-}
-func checkUploadCutoff(cs fs.SizeSuffix) error {
-if cs > maxUploadCutoff {
-return fmt.Errorf("%s is greater than %s", cs, maxUploadCutoff)
-}
-return nil
-}
-func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
-err = checkUploadCutoff(cs)
-if err == nil {
-old, f.opt.UploadCutoff = f.opt.UploadCutoff, cs
-}
-return
-}
// newFs partially constructs Fs from the path // newFs partially constructs Fs from the path
// //
// It constructs a valid Fs but doesn't attempt to figure out whether // It constructs a valid Fs but doesn't attempt to figure out whether
@@ -581,17 +536,11 @@ func (f *Fs) setUploadCutoff(cs fs.SizeSuffix) (old fs.SizeSuffix, err error) {
func newFs(ctx context.Context, name, path string, m configmap.Mapper) (*Fs, error) {
// Parse config into Options struct
opt := new(Options)
-err := configstruct.Set(m, opt)
-if err != nil {
+if err := configstruct.Set(m, opt); err != nil {
return nil, err
}
-err = checkUploadChunkSize(opt.ChunkSize)
-if err != nil {
-return nil, fmt.Errorf("pikpak: chunk size: %w", err)
-}
-err = checkUploadCutoff(opt.UploadCutoff)
-if err != nil {
-return nil, fmt.Errorf("pikpak: upload cutoff: %w", err)
-}
+if opt.ChunkSize < minChunkSize {
+return nil, fmt.Errorf("chunk size must be at least %s", minChunkSize)
+}
root := parsePath(path)
@@ -979,24 +928,6 @@ func (f *Fs) deleteObjects(ctx context.Context, IDs []string, useTrash bool) (er
return nil
}
-// untrash a file or directory by ID
-//
-// If a name collision occurs in the destination folder, PikPak might automatically
-// rename the restored item(s) by appending a numbered suffix. For example,
-// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
-func (f *Fs) untrashObjects(ctx context.Context, IDs []string) (err error) {
-if len(IDs) == 0 {
-return nil
-}
-req := api.RequestBatch{
-IDs: IDs,
-}
-if err := f.requestBatchAction(ctx, "batchUntrash", &req); err != nil {
-return fmt.Errorf("untrash object failed: %w", err)
-}
-return nil
-}
// purgeCheck removes the root directory, if check is set then it
// refuses to do so if it has anything in
func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
@@ -1081,14 +1012,7 @@ func (f *Fs) CleanUp(ctx context.Context) (err error) {
return f.waitTask(ctx, info.TaskID)
}
-// Move the object to a new parent folder
-//
-// Objects cannot be moved to their current folder.
-// "file_move_or_copy_to_cur" (9): Please don't move or copy to current folder or sub folder
-//
-// If a name collision occurs in the destination folder, PikPak might automatically
-// rename the moved item(s) by appending a numbered suffix. For example,
-// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
+// Move the object
func (f *Fs) moveObjects(ctx context.Context, IDs []string, dirID string) (err error) {
if len(IDs) == 0 {
return nil
@@ -1104,12 +1028,6 @@ func (f *Fs) moveObjects(ctx context.Context, IDs []string, dirID string) (err e
}
// renames the object
-//
-// The new name must be different from the current name.
-// "file_rename_to_same_name" (3): Name of file or folder is not changed
-//
-// Within the same folder, object names must be unique.
-// "file_duplicated_name" (3): File name cannot be repeated
func (f *Fs) renameObject(ctx context.Context, ID, newName string) (info *api.File, err error) {
req := api.File{
Name: f.opt.Enc.FromStandardName(newName),
@@ -1194,13 +1112,18 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
-func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
+func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
-err = srcObj.readMetaData(ctx)
+err := srcObj.readMetaData(ctx)
+if err != nil {
+return nil, err
+}
+srcLeaf, srcParentID, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false)
if err != nil {
return nil, err
}
@@ -1211,74 +1134,31 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (dst fs.Obj
return nil, err
}
-if srcObj.parent != dstParentID {
-// Perform the move. A numbered copy might be generated upon name collision.
+if srcParentID != dstParentID {
+// Do the move
if err = f.moveObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
-return nil, fmt.Errorf("move: failed to move object %s to new parent %s: %w", srcObj.id, dstParentID, err)
+return nil, err
}
-defer func() {
-if err != nil {
-// FIXME: Restored file might have a numbered name if a conflict occurs
-if mvErr := f.moveObjects(ctx, []string{srcObj.id}, srcObj.parent); mvErr != nil {
-fs.Logf(f, "move: couldn't restore original object %q to %q after move failure: %v", dstObj.id, src.Remote(), mvErr)
-}
-}
-}()
}
+// Manually update info of moved object to save API calls
+dstObj.id = srcObj.id
+dstObj.mimeType = srcObj.mimeType
+dstObj.gcid = srcObj.gcid
+dstObj.md5sum = srcObj.md5sum
+dstObj.hasMetaData = true
-// Find the moved object and any conflict object with the same name.
-var moved, conflict *api.File
-_, err = f.listAll(ctx, dstParentID, api.KindOfFile, "false", func(item *api.File) bool {
-if item.ID == srcObj.id {
-moved = item
-if item.Name == dstLeaf {
-return true
-}
-} else if item.Name == dstLeaf {
-conflict = item
-}
-// Stop early if both found
-return moved != nil && conflict != nil
-})
-if err != nil {
-return nil, fmt.Errorf("move: couldn't locate moved file %q in destination directory %q: %w", srcObj.id, dstParentID, err)
-}
-if moved == nil {
-return nil, fmt.Errorf("move: moved file %q not found in destination", srcObj.id)
-}
-// If moved object already has the correct name, return
-if moved.Name == dstLeaf {
-return dstObj, dstObj.setMetaData(moved)
-}
-// If name collision, delete conflicting file first
-if conflict != nil {
-if err = f.deleteObjects(ctx, []string{conflict.ID}, true); err != nil {
-return nil, fmt.Errorf("move: couldn't delete conflicting file: %w", err)
-}
-defer func() {
-if err != nil {
-if restoreErr := f.untrashObjects(ctx, []string{conflict.ID}); restoreErr != nil {
-fs.Logf(f, "move: couldn't restore conflicting file: %v", restoreErr)
-}
-}
-}()
-}
-info, err := f.renameObject(ctx, srcObj.id, dstLeaf)
-if err != nil {
-return nil, fmt.Errorf("move: couldn't rename moved file %q to %q: %w", dstObj.id, dstLeaf, err)
-}
-return dstObj, dstObj.setMetaData(info)
+if srcLeaf != dstLeaf {
+// Rename
+info, err := f.renameObject(ctx, srcObj.id, dstLeaf)
+if err != nil {
+return nil, fmt.Errorf("move: couldn't rename moved file: %w", err)
+}
+return dstObj, dstObj.setMetaData(info)
+}
+return dstObj, nil
}
// copy objects
-//
-// Objects cannot be copied to their current folder.
-// "file_move_or_copy_to_cur" (9): Please don't move or copy to current folder or sub folder
-//
-// If a name collision occurs in the destination folder, PikPak might automatically
-// rename the copied item(s) by appending a numbered suffix. For example,
-// foo.txt -> foo(1).txt or foo(2).txt if foo(1).txt already exists
func (f *Fs) copyObjects(ctx context.Context, IDs []string, dirID string) (err error) {
if len(IDs) == 0 {
return nil
@@ -1302,13 +1182,13 @@ func (f *Fs) copyObjects(ctx context.Context, IDs []string, dirID string) (err e
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
-func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Object, err error) {
+func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
-err = srcObj.readMetaData(ctx)
+err := srcObj.readMetaData(ctx)
if err != nil {
return nil, err
}
@@ -1323,55 +1203,31 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (dst fs.Obj
fs.Debugf(src, "Can't copy - same parent")
return nil, fs.ErrorCantCopy
}
-// Check for possible conflicts: Pikpak creates numbered copies on name collision.
-var conflict *api.File
-_, srcLeaf := dircache.SplitPath(srcObj.remote)
-if srcLeaf == dstLeaf {
-if conflict, err = f.readMetaDataForPath(ctx, remote); err == nil {
-// delete conflicting file
-if err = f.deleteObjects(ctx, []string{conflict.ID}, true); err != nil {
-return nil, fmt.Errorf("copy: couldn't delete conflicting file: %w", err)
-}
-defer func() {
-if err != nil {
-if restoreErr := f.untrashObjects(ctx, []string{conflict.ID}); restoreErr != nil {
-fs.Logf(f, "copy: couldn't restore conflicting file: %v", restoreErr)
-}
-}
-}()
-} else if err != fs.ErrorObjectNotFound {
-return nil, err
-}
-} else {
-dstDir, _ := dircache.SplitPath(remote)
-dstObj.remote = path.Join(dstDir, srcLeaf)
-if conflict, err = f.readMetaDataForPath(ctx, dstObj.remote); err == nil {
-tmpName := conflict.Name + "-rclone-copy-" + random.String(8)
-if _, err = f.renameObject(ctx, conflict.ID, tmpName); err != nil {
-return nil, fmt.Errorf("copy: couldn't rename conflicting file: %w", err)
-}
-defer func() {
-if _, renameErr := f.renameObject(ctx, conflict.ID, conflict.Name); renameErr != nil {
-fs.Logf(f, "copy: couldn't rename conflicting file back to original: %v", renameErr)
-}
-}()
-} else if err != fs.ErrorObjectNotFound {
-return nil, err
-}
-}
// Copy the object
if err := f.copyObjects(ctx, []string{srcObj.id}, dstParentID); err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err)
}
-err = dstObj.readMetaData(ctx)
-if err != nil {
-return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
-}
+// Update info of the copied object with new parent but source name
+if info, err := dstObj.fs.readMetaDataForPath(ctx, srcObj.remote); err != nil {
+return nil, fmt.Errorf("copy: couldn't locate copied file: %w", err)
+} else if err = dstObj.setMetaData(info); err != nil {
+return nil, err
+}
+// Can't copy and change name in one step so we have to check if we have
+// the correct name after copy
+srcLeaf, _, err := srcObj.fs.dirCache.FindPath(ctx, srcObj.remote, false)
+if err != nil {
+return nil, err
+}
if srcLeaf != dstLeaf {
-return f.Move(ctx, dstObj, remote)
+// Rename
+info, err := f.renameObject(ctx, dstObj.id, dstLeaf)
+if err != nil {
+return nil, fmt.Errorf("copy: couldn't rename copied file: %w", err)
+}
+return dstObj, dstObj.setMetaData(info)
}
return dstObj, nil
}
@@ -1409,7 +1265,9 @@ func (f *Fs) uploadByForm(ctx context.Context, in io.Reader, name string, size i
return
}
-func (f *Fs) newS3Client(ctx context.Context, p *api.ResumableParams) (s3Client *s3.Client, err error) {
+func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, size int64, resumable *api.Resumable) (err error) {
+p := resumable.Params
// Create a credentials provider
creds := credentials.NewStaticCredentialsProvider(p.AccessKeyID, p.AccessKeySecret, p.SecurityToken)
@@ -1419,64 +1277,22 @@ func (f *Fs) newS3Client(ctx context.Context, p *api.ResumableParams) (s3Client
if err != nil {
return
}
-ci := fs.GetConfig(ctx)
-cfg.RetryMaxAttempts = ci.LowLevelRetries
-cfg.HTTPClient = getClient(ctx, &f.opt)
client := s3.NewFromConfig(cfg, func(o *s3.Options) {
o.BaseEndpoint = aws.String("https://mypikpak.com/")
-o.RequestChecksumCalculation = aws.RequestChecksumCalculationWhenRequired
-o.ResponseChecksumValidation = aws.ResponseChecksumValidationWhenRequired
})
-return client, nil
-}
-func (f *Fs) uploadByResumable(ctx context.Context, in io.Reader, name string, size int64, resumable *api.Resumable, options ...fs.OpenOption) (err error) {
-p := resumable.Params
-if size < 0 || size >= int64(f.opt.UploadCutoff) {
-mu, err := f.newChunkWriter(ctx, name, size, p, in, options...)
-if err != nil {
-return fmt.Errorf("multipart upload failed to initialise: %w", err)
-}
-return mu.Upload(ctx)
-}
-// upload singlepart
-client, err := f.newS3Client(ctx, p)
-if err != nil {
-return fmt.Errorf("failed to create upload client: %w", err)
-}
-req := &s3.PutObjectInput{
+partSize := chunksize.Calculator(name, size, int(manager.MaxUploadParts), f.opt.ChunkSize)
+// Create an uploader with custom options
+uploader := manager.NewUploader(client, func(u *manager.Uploader) {
+u.PartSize = int64(partSize)
+u.Concurrency = f.opt.UploadConcurrency
+})
+// Perform an upload
+_, err = uploader.Upload(ctx, &s3.PutObjectInput{
Bucket: &p.Bucket,
Key: &p.Key,
-Body: io.NopCloser(in),
+Body: in,
-}
-// Apply upload options
-for _, option := range options {
-key, value := option.Header()
-lowerKey := strings.ToLower(key)
-switch lowerKey {
-case "":
-// ignore
-case "cache-control":
-req.CacheControl = aws.String(value)
-case "content-disposition":
-req.ContentDisposition = aws.String(value)
-case "content-encoding":
-req.ContentEncoding = aws.String(value)
-case "content-type":
-req.ContentType = aws.String(value)
-}
-}
-var s3opts = []func(*s3.Options){}
-// Can't retry single part uploads as only have an io.Reader
-s3opts = append(s3opts, func(o *s3.Options) {
-o.RetryMaxAttempts = 1
-})
-err = f.pacer.CallNoRetry(func() (bool, error) {
-_, err = client.PutObject(ctx, req, s3opts...)
-return f.shouldRetry(ctx, nil, err)
})
return
}
@@ -1508,30 +1324,8 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
}
if new.File == nil {
return nil, fmt.Errorf("invalid response: %+v", new)
-}
-defer atexit.OnError(&err, func() {
-fs.Debugf(leaf, "canceling upload: %v", err)
-if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
-fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
-}
-if new.Task != nil {
-if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
-fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
-}
-fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
-time.Sleep(taskWaitTime)
-}
-})()
-// Note: The API might automatically append a numbered suffix to the filename,
-// even if a file with the same name does not exist in the target directory.
-if upName := f.opt.Enc.ToStandardName(new.File.Name); leaf != upName {
-return nil, fserrors.NoRetryError(fmt.Errorf("uploaded file name mismatch: expected %q, got %q", leaf, upName))
-}
-// early return; in case of zero-byte objects or uploaded by matched gcid
-if new.File.Phase == api.PhaseTypeComplete {
+} else if new.File.Phase == api.PhaseTypeComplete {
+// early return; in case of zero-byte objects
if acc, ok := in.(*accounting.Account); ok && acc != nil {
// if `in io.Reader` is still in type of `*accounting.Account` (meaning that it is unused)
// it is considered as a server side copy as no incoming/outgoing traffic occur at all
@@ -1541,10 +1335,22 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
return new.File, nil
}
+defer atexit.OnError(&err, func() {
+fs.Debugf(leaf, "canceling upload: %v", err)
+if cancelErr := f.deleteObjects(ctx, []string{new.File.ID}, false); cancelErr != nil {
+fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
+}
+if cancelErr := f.deleteTask(ctx, new.Task.ID, false); cancelErr != nil {
+fs.Logf(leaf, "failed to cancel upload: %v", cancelErr)
+}
+fs.Debugf(leaf, "waiting %v for the cancellation to be effective", taskWaitTime)
+time.Sleep(taskWaitTime)
+})()
if uploadType == api.UploadTypeForm && new.Form != nil {
err = f.uploadByForm(ctx, in, req.Name, size, new.Form, options...)
} else if uploadType == api.UploadTypeResumable && new.Resumable != nil {
-err = f.uploadByResumable(ctx, in, leaf, size, new.Resumable, options...)
+err = f.uploadByResumable(ctx, in, leaf, size, new.Resumable)
} else {
err = fmt.Errorf("no method available for uploading: %+v", new)
}
@@ -1552,9 +1358,6 @@ func (f *Fs) upload(ctx context.Context, in io.Reader, leaf, dirID, gcid string,
if err != nil {
return nil, fmt.Errorf("failed to upload: %w", err)
}
-if new.Task == nil {
-return new.File, nil
-}
return new.File, f.waitTask(ctx, new.Task.ID)
}


@@ -1,10 +1,10 @@
// Test PikPak filesystem interface
-package pikpak
+package pikpak_test
import (
"testing"
-"github.com/rclone/rclone/fs"
+"github.com/rclone/rclone/backend/pikpak"
"github.com/rclone/rclone/fstest/fstests"
)
@@ -12,23 +12,6 @@ import (
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestPikPak:",
-NilObject: (*Object)(nil),
-ChunkedUpload: fstests.ChunkedUploadConfig{
-MinChunkSize: minChunkSize,
-MaxChunkSize: maxChunkSize,
-},
+NilObject: (*pikpak.Object)(nil),
})
}
-func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
-return f.setUploadChunkSize(cs)
-}
-func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
-return f.setUploadCutoff(cs)
-}
-var (
-_ fstests.SetUploadChunkSizer = (*Fs)(nil)
-_ fstests.SetUploadCutoffer = (*Fs)(nil)
-)


@@ -793,7 +793,7 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return nil, err
}
usage = &fs.Usage{
-Used: fs.NewUsageValue(info.SpaceUsed),
+Used: fs.NewUsageValue(int64(info.SpaceUsed)),
}
return usage, nil
}


@@ -119,9 +119,6 @@ var providerOption = fs.Option{
}, {
Value: "IDrive",
Help: "IDrive e2",
-}, {
-Value: "Intercolo",
-Help: "Intercolo Object Storage",
}, {
Value: "IONOS",
Help: "IONOS Cloud",
@@ -152,9 +149,6 @@ var providerOption = fs.Option{
}, {
Value: "Outscale",
Help: "OUTSCALE Object Storage (OOS)",
-}, {
-Value: "OVHcloud",
-Help: "OVHcloud Object Storage",
}, {
Value: "Petabox",
Help: "Petabox Object Storage",
@@ -191,9 +185,6 @@ var providerOption = fs.Option{
}, {
Value: "Qiniu",
Help: "Qiniu Object Storage (Kodo)",
-}, {
-Value: "Zata",
-Help: "Zata (S3 compatible Gateway)",
}, {
Value: "Other",
Help: "Any other S3 compatible provider",
@@ -499,22 +490,6 @@ func init() {
Value: "ap-northeast-1", Value: "ap-northeast-1",
Help: "Northeast Asia Region 1.\nNeeds location constraint ap-northeast-1.", Help: "Northeast Asia Region 1.\nNeeds location constraint ap-northeast-1.",
}}, }},
}, {
Name: "region",
Help: "Region where you can connect with.\n",
Provider: "Zata",
Examples: []fs.OptionExample{{
Value: "us-east-1",
Help: "Indore, Madhya Pradesh, India",
}},
}, {
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
Provider: "Intercolo",
Examples: []fs.OptionExample{{
Value: "de-fra",
Help: "Frankfurt, Germany",
}},
}, { }, {
Name: "region", Name: "region",
Help: "Region where your bucket will be created and your data stored.\n", Help: "Region where your bucket will be created and your data stored.\n",
@@ -549,59 +524,6 @@ func init() {
Value: "ap-northeast-1", Value: "ap-northeast-1",
Help: "Tokyo, Japan", Help: "Tokyo, Japan",
}}, }},
}, {
// References:
// https://help.ovhcloud.com/csm/en-public-cloud-storage-s3-location?id=kb_article_view&sysparm_article=KB0047384
// https://support.us.ovhcloud.com/hc/en-us/articles/10667991081107-Endpoints-and-Object-Storage-Geoavailability
Name: "region",
Help: "Region where your bucket will be created and your data stored.\n",
Provider: "OVHcloud",
Examples: []fs.OptionExample{{
Value: "gra",
Help: "Gravelines, France",
}, {
Value: "rbx",
Help: "Roubaix, France",
}, {
Value: "sbg",
Help: "Strasbourg, France",
}, {
Value: "eu-west-par",
Help: "Paris, France (3AZ)",
}, {
Value: "de",
Help: "Frankfurt, Germany",
}, {
Value: "uk",
Help: "London, United Kingdom",
}, {
Value: "waw",
Help: "Warsaw, Poland",
}, {
Value: "bhs",
Help: "Beauharnois, Canada",
}, {
Value: "ca-east-tor",
Help: "Toronto, Canada",
}, {
Value: "sgp",
Help: "Singapore",
}, {
Value: "ap-southeast-syd",
Help: "Sydney, Australia",
}, {
Value: "ap-south-mum",
Help: "Mumbai, India",
}, {
Value: "us-east-va",
Help: "Vint Hill, Virginia, USA",
}, {
Value: "us-west-or",
Help: "Hillsboro, Oregon, USA",
}, {
Value: "rbx-archive",
Help: "Roubaix, France (Cold Archive)",
}},
}, { }, {
Name: "region", Name: "region",
Help: "Region where your bucket will be created and your data stored.\n", Help: "Region where your bucket will be created and your data stored.\n",
@@ -654,7 +576,7 @@ func init() {
}, { }, {
Name: "region", Name: "region",
Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.", Help: "Region to connect to.\n\nLeave blank if you are using an S3 clone and you don't have a region.",
Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,FlashBlade,Intercolo,IONOS,Petabox,Liara,Linode,Magalu,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,Storj,Synology,TencentCOS,HuaweiOBS,IDrive,Mega,Zata", Provider: "!AWS,Alibaba,ArvanCloud,ChinaMobile,Cloudflare,FlashBlade,IONOS,Petabox,Liara,Linode,Magalu,Qiniu,RackCorp,Scaleway,Selectel,Storj,Synology,TencentCOS,HuaweiOBS,IDrive,Mega",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "", Value: "",
Help: "Use this if unsure.\nWill use v4 signatures and an empty region.", Help: "Use this if unsure.\nWill use v4 signatures and an empty region.",
@@ -965,14 +887,6 @@ func init() {
Value: "s3.private.sng01.cloud-object-storage.appdomain.cloud", Value: "s3.private.sng01.cloud-object-storage.appdomain.cloud",
Help: "Singapore Single Site Private Endpoint", Help: "Singapore Single Site Private Endpoint",
}}, }},
}, {
Name: "endpoint",
Help: "Endpoint for Intercolo Object Storage.",
Provider: "Intercolo",
Examples: []fs.OptionExample{{
Value: "de-fra.i3storage.com",
Help: "Frankfurt, Germany",
}},
}, { }, {
Name: "endpoint", Name: "endpoint",
Help: "Endpoint for IONOS S3 Object Storage.\n\nSpecify the endpoint from the same region.", Help: "Endpoint for IONOS S3 Object Storage.\n\nSpecify the endpoint from the same region.",
@@ -1249,71 +1163,6 @@ func init() {
Value: "obs.ru-northwest-2.myhuaweicloud.com", Value: "obs.ru-northwest-2.myhuaweicloud.com",
Help: "RU-Moscow2", Help: "RU-Moscow2",
}}, }},
-}, {
-Name: "endpoint",
-Help: "Endpoint for OVHcloud Object Storage.",
-Provider: "OVHcloud",
-Examples: []fs.OptionExample{{
-Value: "s3.gra.io.cloud.ovh.net",
-Help: "OVHcloud Gravelines, France",
-Provider: "OVHcloud",
-}, {
-Value: "s3.rbx.io.cloud.ovh.net",
-Help: "OVHcloud Roubaix, France",
-Provider: "OVHcloud",
-}, {
-Value: "s3.sbg.io.cloud.ovh.net",
-Help: "OVHcloud Strasbourg, France",
-Provider: "OVHcloud",
-}, {
-Value: "s3.eu-west-par.io.cloud.ovh.net",
-Help: "OVHcloud Paris, France (3AZ)",
-Provider: "OVHcloud",
-}, {
-Value: "s3.de.io.cloud.ovh.net",
-Help: "OVHcloud Frankfurt, Germany",
-Provider: "OVHcloud",
-}, {
-Value: "s3.uk.io.cloud.ovh.net",
-Help: "OVHcloud London, United Kingdom",
-Provider: "OVHcloud",
-}, {
-Value: "s3.waw.io.cloud.ovh.net",
-Help: "OVHcloud Warsaw, Poland",
-Provider: "OVHcloud",
-}, {
-Value: "s3.bhs.io.cloud.ovh.net",
-Help: "OVHcloud Beauharnois, Canada",
-Provider: "OVHcloud",
-}, {
-Value: "s3.ca-east-tor.io.cloud.ovh.net",
-Help: "OVHcloud Toronto, Canada",
-Provider: "OVHcloud",
-}, {
-Value: "s3.sgp.io.cloud.ovh.net",
-Help: "OVHcloud Singapore",
-Provider: "OVHcloud",
-}, {
-Value: "s3.ap-southeast-syd.io.cloud.ovh.net",
-Help: "OVHcloud Sydney, Australia",
-Provider: "OVHcloud",
-}, {
-Value: "s3.ap-south-mum.io.cloud.ovh.net",
-Help: "OVHcloud Mumbai, India",
-Provider: "OVHcloud",
-}, {
-Value: "s3.us-east-va.io.cloud.ovh.us",
-Help: "OVHcloud Vint Hill, Virginia, USA",
-Provider: "OVHcloud",
-}, {
-Value: "s3.us-west-or.io.cloud.ovh.us",
-Help: "OVHcloud Hillsboro, Oregon, USA",
-Provider: "OVHcloud",
-}, {
-Value: "s3.rbx-archive.io.cloud.ovh.net",
-Help: "OVHcloud Roubaix, France (Cold Archive)",
-Provider: "OVHcloud",
-}},
}, {
Name: "endpoint",
Help: "Endpoint for Scaleway Object Storage.",
@@ -1531,14 +1380,6 @@ func init() {
Value: "s3-ap-northeast-1.qiniucs.com", Value: "s3-ap-northeast-1.qiniucs.com",
Help: "Northeast Asia Endpoint 1", Help: "Northeast Asia Endpoint 1",
}}, }},
}, {
Name: "endpoint",
Help: "Endpoint for Zata Object Storage.",
Provider: "Zata",
Examples: []fs.OptionExample{{
Value: "idr01.zata.ai",
Help: "South Asia Endpoint",
}},
}, { }, {
// Selectel endpoints: https://docs.selectel.ru/en/cloud/object-storage/manage/domains/#s3-api-domains // Selectel endpoints: https://docs.selectel.ru/en/cloud/object-storage/manage/domains/#s3-api-domains
Name: "endpoint", Name: "endpoint",
@@ -1551,7 +1392,7 @@ func init() {
}, {
Name: "endpoint",
Help: "Endpoint for S3 API.\n\nRequired when using an S3 clone.",
-Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,Intercolo,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Linode,LyveCloud,Magalu,OVHcloud,Scaleway,Selectel,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox,Zata",
+Provider: "!AWS,ArvanCloud,IBMCOS,IDrive,IONOS,TencentCOS,HuaweiOBS,Alibaba,ChinaMobile,GCS,Liara,Linode,LyveCloud,Magalu,Scaleway,Selectel,StackPath,Storj,Synology,RackCorp,Qiniu,Petabox",
Examples: []fs.OptionExample{{
Value: "objects-us-east-1.dream.io",
Help: "Dream Objects endpoint",
@@ -2086,7 +1927,7 @@ func init() {
}, {
Name: "location_constraint",
Help: "Location constraint - must be set to match the Region.\n\nLeave blank if not sure. Used when creating buckets only.",
-Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,FlashBlade,IBMCOS,IDrive,Intercolo,IONOS,Leviia,Liara,Linode,Magalu,Outscale,OVHcloud,Qiniu,RackCorp,Scaleway,Selectel,StackPath,Storj,TencentCOS,Petabox,Mega",
+Provider: "!AWS,Alibaba,ArvanCloud,HuaweiOBS,ChinaMobile,Cloudflare,FlashBlade,IBMCOS,IDrive,IONOS,Leviia,Liara,Linode,Magalu,Outscale,Qiniu,RackCorp,Scaleway,Selectel,StackPath,Storj,TencentCOS,Petabox,Mega",
}, {
Name: "acl",
Help: `Canned ACL used when creating buckets and storing or copying objects.
@@ -2568,11 +2409,6 @@ See [AWS Docs on Dualstack Endpoints](https://docs.aws.amazon.com/AmazonS3/lates
See: [AWS S3 Transfer acceleration](https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration-examples.html)`,
Default: false,
Advanced: true,
-}, {
-Name: "use_arn_region",
-Help: `If true, enables arn region support for the service.`,
-Default: false,
-Advanced: true,
}, {
Name: "leave_parts_on_error",
Provider: "AWS",
@@ -2780,7 +2616,7 @@ The parameter should be a date, "2006-01-02", datetime "2006-01-02
Note that when using this no file write operations are permitted,
so you can't upload files or delete them.
-See [the time option docs](/docs/#time-options) for valid formats.
+See [the time option docs](/docs/#time-option) for valid formats.
`,
Default: fs.Time{},
Advanced: true,
@@ -3120,7 +2956,6 @@ type Options struct {
ForcePathStyle bool `config:"force_path_style"`
V2Auth bool `config:"v2_auth"`
UseAccelerateEndpoint bool `config:"use_accelerate_endpoint"`
-UseARNRegion bool `config:"use_arn_region"`
LeavePartsOnError bool `config:"leave_parts_on_error"`
ListChunk int32 `config:"list_chunk"`
ListVersion int `config:"list_version"`
@@ -3485,7 +3320,6 @@ func s3Connection(ctx context.Context, opt *Options, client *http.Client) (s3Cli
options = append(options, func(s3Opt *s3.Options) {
s3Opt.UsePathStyle = opt.ForcePathStyle
s3Opt.UseAccelerate = opt.UseAccelerateEndpoint
-s3Opt.UseARNRegion = opt.UseARNRegion
// FIXME maybe this should be a tristate so can default to DualStackEndpointStateUnset?
if opt.UseDualStack {
s3Opt.EndpointOptions.UseDualStackEndpoint = aws.DualStackEndpointStateEnabled
@@ -3696,9 +3530,6 @@ func setQuirks(opt *Options) {
case "IDrive": case "IDrive":
virtualHostStyle = false virtualHostStyle = false
useAlreadyExists = false // untested useAlreadyExists = false // untested
case "Intercolo":
// no quirks
useUnsignedPayload = false // Intercolo has trailer support
case "IONOS": case "IONOS":
// listObjectsV2 supported - https://api.ionos.com/docs/s3/#Basic-Operations-get-Bucket-list-type-2 // listObjectsV2 supported - https://api.ionos.com/docs/s3/#Basic-Operations-get-Bucket-list-type-2
virtualHostStyle = false virtualHostStyle = false
@@ -3739,8 +3570,6 @@ func setQuirks(opt *Options) {
useAlreadyExists = false // untested
case "Outscale":
virtualHostStyle = false
-case "OVHcloud":
-// No quirks
case "RackCorp":
// No quirks
useMultipartEtag = false // untested
@@ -3802,11 +3631,6 @@ func setQuirks(opt *Options) {
urlEncodeListings = false
virtualHostStyle = false
useAlreadyExists = false // untested
-case "Zata":
-useMultipartEtag = false
-mightGzip = false
-useUnsignedPayload = false
-useAlreadyExists = false
case "Exaba":
virtualHostStyle = false
case "GCS":
@@ -4610,7 +4434,7 @@ func (f *Fs) list(ctx context.Context, opt listOpt, fn listFn) error {
}
foundItems += len(resp.Contents)
for i, object := range resp.Contents {
-remote := *stringClone(deref(object.Key))
+remote := deref(object.Key)
if urlEncodeListings {
remote, err = url.QueryUnescape(remote)
if err != nil {
@@ -5213,11 +5037,8 @@ func (f *Fs) copyMultipart(ctx context.Context, copyReq *s3.CopyObjectInput, dst
MultipartUpload: &types.CompletedMultipartUpload{
Parts: parts,
},
RequestPayer: req.RequestPayer,
-SSECustomerAlgorithm: req.SSECustomerAlgorithm,
-SSECustomerKey: req.SSECustomerKey,
-SSECustomerKeyMD5: req.SSECustomerKeyMD5,
UploadId: uid,
})
return f.shouldRetry(ctx, err)
})
@@ -6066,7 +5887,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
func s3MetadataToMap(s3Meta map[string]string) map[string]string {
meta := make(map[string]string, len(s3Meta))
for k, v := range s3Meta {
-meta[strings.ToLower(k)] = *stringClone(v)
+meta[strings.ToLower(k)] = v
}
return meta
}
@@ -6109,14 +5930,14 @@ func (o *Object) setMetaData(resp *s3.HeadObjectOutput) {
o.lastModified = *resp.LastModified
}
}
-o.mimeType = strings.Clone(deref(resp.ContentType))
+o.mimeType = deref(resp.ContentType)
// Set system metadata
-o.storageClass = stringClone(string(resp.StorageClass))
+o.storageClass = (*string)(&resp.StorageClass)
-o.cacheControl = stringClonePointer(resp.CacheControl)
+o.cacheControl = resp.CacheControl
-o.contentDisposition = stringClonePointer(resp.ContentDisposition)
+o.contentDisposition = resp.ContentDisposition
-o.contentEncoding = stringClonePointer(removeAWSChunked(resp.ContentEncoding))
+o.contentEncoding = resp.ContentEncoding
-o.contentLanguage = stringClonePointer(resp.ContentLanguage)
+o.contentLanguage = resp.ContentLanguage
// If decompressing then size and md5sum are unknown
if o.fs.opt.Decompress && deref(o.contentEncoding) == "gzip" {
@@ -6183,36 +6004,6 @@ func (o *Object) Storable() bool {
return true
}
-// removeAWSChunked removes the "aws-chunked" content-coding from a
-// Content-Encoding field value (RFC 9110). Comparison is case-insensitive.
-// Returns nil if encoding is empty after removal.
-func removeAWSChunked(pv *string) *string {
-if pv == nil {
-return nil
-}
-v := *pv
-if v == "" {
-return nil
-}
-if !strings.Contains(strings.ToLower(v), "aws-chunked") {
-return pv
-}
-parts := strings.Split(v, ",")
-out := make([]string, 0, len(parts))
-for _, p := range parts {
-tok := strings.TrimSpace(p)
-if tok == "" || strings.EqualFold(tok, "aws-chunked") {
-continue
-}
-out = append(out, tok)
-}
-if len(out) == 0 {
-return nil
-}
-v = strings.Join(out, ",")
-return &v
-}
func (o *Object) downloadFromURL(ctx context.Context, bucketPath string, options ...fs.OpenOption) (in io.ReadCloser, err error) {
url := o.fs.opt.DownloadURL + bucketPath
var resp *http.Response
@@ -6381,7 +6172,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
o.setMetaData(&head)
// Decompress body if necessary
-if deref(removeAWSChunked(resp.ContentEncoding)) == "gzip" {
+if deref(resp.ContentEncoding) == "gzip" {
if o.fs.opt.Decompress || (resp.ContentLength == nil && o.fs.opt.MightGzip.Value) {
return readers.NewGzipReader(resp.Body)
}
@@ -6631,11 +6422,8 @@ func (w *s3ChunkWriter) Close(ctx context.Context) (err error) {
MultipartUpload: &types.CompletedMultipartUpload{
Parts: w.completedParts,
},
RequestPayer: w.multiPartUploadInput.RequestPayer,
-SSECustomerAlgorithm: w.multiPartUploadInput.SSECustomerAlgorithm,
-SSECustomerKey: w.multiPartUploadInput.SSECustomerKey,
-SSECustomerKeyMD5: w.multiPartUploadInput.SSECustomerKeyMD5,
UploadId: w.uploadID,
})
return w.f.shouldRetry(ctx, err)
})
@@ -6663,9 +6451,9 @@ func (o *Object) uploadMultipart(ctx context.Context, src fs.ObjectInfo, in io.R
return wantETag, gotETag, versionID, ui, err
}
-s3cw := chunkWriter.(*s3ChunkWriter)
+var s3cw *s3ChunkWriter = chunkWriter.(*s3ChunkWriter)
-gotETag = *stringClone(s3cw.eTag)
+gotETag = s3cw.eTag
-versionID = stringClone(s3cw.versionID)
+versionID = aws.String(s3cw.versionID)
hashOfHashes := md5.Sum(s3cw.md5s)
wantETag = fmt.Sprintf("%s-%d", hex.EncodeToString(hashOfHashes[:]), len(s3cw.completedParts))
@@ -6697,8 +6485,8 @@ func (o *Object) uploadSinglepartPutObject(ctx context.Context, req *s3.PutObjec
}
lastModified = time.Now()
if resp != nil {
-etag = *stringClone(deref(resp.ETag))
+etag = deref(resp.ETag)
-versionID = stringClonePointer(resp.VersionId)
+versionID = resp.VersionId
}
return etag, lastModified, versionID, nil
}
@@ -6750,8 +6538,8 @@ func (o *Object) uploadSinglepartPresignedRequest(ctx context.Context, req *s3.P
if date, err := http.ParseTime(resp.Header.Get("Date")); err != nil {
lastModified = date
}
-etag = *stringClone(resp.Header.Get("Etag"))
+etag = resp.Header.Get("Etag")
-vID := *stringClone(resp.Header.Get("x-amz-version-id"))
+vID := resp.Header.Get("x-amz-version-id")
if vID != "" {
versionID = &vID
}
@@ -6805,7 +6593,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
case "content-disposition": case "content-disposition":
ui.req.ContentDisposition = pv ui.req.ContentDisposition = pv
case "content-encoding": case "content-encoding":
ui.req.ContentEncoding = removeAWSChunked(pv) ui.req.ContentEncoding = pv
case "content-language": case "content-language":
ui.req.ContentLanguage = pv ui.req.ContentLanguage = pv
case "content-type": case "content-type":
@@ -6902,7 +6690,7 @@ func (o *Object) prepareUpload(ctx context.Context, src fs.ObjectInfo, options [
case "content-disposition": case "content-disposition":
ui.req.ContentDisposition = aws.String(value) ui.req.ContentDisposition = aws.String(value)
case "content-encoding": case "content-encoding":
ui.req.ContentEncoding = removeAWSChunked(aws.String(value)) ui.req.ContentEncoding = aws.String(value)
case "content-language": case "content-language":
ui.req.ContentLanguage = aws.String(value) ui.req.ContentLanguage = aws.String(value)
case "content-type": case "content-type":


@@ -248,47 +248,6 @@ func TestMergeDeleteMarkers(t *testing.T) {
}
}
-func TestRemoveAWSChunked(t *testing.T) {
-ps := func(s string) *string {
-return &s
-}
-tests := []struct {
-name string
-in *string
-want *string
-}{
-{"nil", nil, nil},
-{"empty", ps(""), nil},
-{"only aws", ps("aws-chunked"), nil},
-{"leading aws", ps("aws-chunked, gzip"), ps("gzip")},
-{"trailing aws", ps("gzip, aws-chunked"), ps("gzip")},
-{"middle aws", ps("gzip, aws-chunked, br"), ps("gzip,br")},
-{"case insensitive", ps("GZip, AwS-ChUnKeD, Br"), ps("GZip,Br")},
-{"duplicates", ps("aws-chunked , aws-chunked"), nil},
-{"no aws normalize spaces", ps(" gzip , br "), ps(" gzip , br ")},
-{"surrounding spaces", ps(" aws-chunked "), nil},
-{"no change", ps("gzip, br"), ps("gzip, br")},
-}
-for _, tc := range tests {
-t.Run(tc.name, func(t *testing.T) {
-got := removeAWSChunked(tc.in)
-check := func(want, got *string) {
-t.Helper()
-if want == nil {
-assert.Nil(t, got)
-} else {
-require.NotNil(t, got)
-assert.Equal(t, *want, *got)
-}
-}
-check(tc.want, got)
-// Idempotent
-got2 := removeAWSChunked(got)
-check(got, got2)
-})
-}
-}
func (f *Fs) InternalTestVersions(t *testing.T) {
ctx := context.Background()


@@ -111,8 +111,7 @@ func init() {
encoder.EncodeSlash |
encoder.EncodeBackSlash |
encoder.EncodeDoubleQuote |
-encoder.EncodeInvalidUtf8 |
-encoder.EncodeDot),
+encoder.EncodeInvalidUtf8),
}},
})
}


@@ -222,45 +222,15 @@ E.g. the second example above should be rewritten as:
Help: "Windows Command Prompt", Help: "Windows Command Prompt",
}, },
}, },
}, {
Name: "hashes",
Help: `Comma separated list of supported checksum types.`,
Default: fs.CommaSepList{},
Advanced: true,
}, { }, {
Name: "md5sum_command", Name: "md5sum_command",
Default: "", Default: "",
Help: "The command used to read MD5 hashes.\n\nLeave blank for autodetect.", Help: "The command used to read md5 hashes.\n\nLeave blank for autodetect.",
Advanced: true, Advanced: true,
}, { }, {
Name: "sha1sum_command", Name: "sha1sum_command",
Default: "", Default: "",
Help: "The command used to read SHA-1 hashes.\n\nLeave blank for autodetect.", Help: "The command used to read sha1 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "crc32sum_command",
Default: "",
Help: "The command used to read CRC-32 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "sha256sum_command",
Default: "",
Help: "The command used to read SHA-256 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "blake3sum_command",
Default: "",
Help: "The command used to read BLAKE3 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "xxh3sum_command",
Default: "",
Help: "The command used to read XXH3 hashes.\n\nLeave blank for autodetect.",
Advanced: true,
}, {
Name: "xxh128sum_command",
Default: "",
Help: "The command used to read XXH128 hashes.\n\nLeave blank for autodetect.",
Advanced: true, Advanced: true,
}, { }, {
Name: "skip_links", Name: "skip_links",
@@ -565,14 +535,8 @@ type Options struct {
PathOverride string `config:"path_override"`
SetModTime bool `config:"set_modtime"`
ShellType string `config:"shell_type"`
-Hashes fs.CommaSepList `config:"hashes"`
Md5sumCommand string `config:"md5sum_command"`
Sha1sumCommand string `config:"sha1sum_command"`
-Crc32sumCommand string `config:"crc32sum_command"`
-Sha256sumCommand string `config:"sha256sum_command"`
-Blake3sumCommand string `config:"blake3sum_command"`
-Xxh3sumCommand string `config:"xxh3sum_command"`
-Xxh128sumCommand string `config:"xxh128sum_command"`
SkipLinks bool `config:"skip_links"`
Subsystem string `config:"subsystem"`
ServerCommand string `config:"server_command"`
@@ -621,18 +585,13 @@ type Fs struct {
// Object is a remote SFTP file that has been stat'd (so it exists, but is not necessarily open for reading)
type Object struct {
fs *Fs
remote string
size int64 // size of the object
modTime uint32 // modification time of the object as unix time
mode os.FileMode // mode bits from the file
md5sum *string // Cached MD5 checksum
-sha1sum *string // Cached SHA-1 checksum
-crc32sum *string // Cached CRC-32 checksum
-sha256sum *string // Cached SHA-256 checksum
-blake3sum *string // Cached BLAKE3 checksum
-xxh3sum *string // Cached XXH3 checksum
-xxh128sum *string // Cached XXH128 checksum
+sha1sum *string // Cached SHA1 checksum
}
// conn encapsulates an ssh client and corresponding sftp client // conn encapsulates an ssh client and corresponding sftp client
@@ -932,7 +891,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
User: opt.User, User: opt.User,
Auth: []ssh.AuthMethod{}, Auth: []ssh.AuthMethod{},
HostKeyCallback: ssh.InsecureIgnoreHostKey(), HostKeyCallback: ssh.InsecureIgnoreHostKey(),
Timeout: time.Duration(f.ci.ConnectTimeout), Timeout: f.ci.ConnectTimeout,
ClientVersion: "SSH-2.0-" + f.ci.UserAgent, ClientVersion: "SSH-2.0-" + f.ci.UserAgent,
} }
@@ -1684,112 +1643,14 @@ func (f *Fs) Hashes() hash.Set {
return *f.cachedHashes return *f.cachedHashes
} }
hashTypesSupported := hash.NewHashSet() hashSet := hash.NewHashSet()
f.cachedHashes = &hashTypesSupported f.cachedHashes = &hashSet
if f.opt.DisableHashCheck || f.shellType == shellTypeNotSupported { if f.opt.DisableHashCheck || f.shellType == shellTypeNotSupported {
return hashTypesSupported return hashSet
}
hashTypes := hash.NewHashSet()
if len(f.opt.Hashes) > 0 {
for _, hashName := range f.opt.Hashes {
var hashType hash.Type
if err := hashType.Set(hashName); err != nil {
fs.Infof(nil, "Invalid token %q in hash string %q", hashName, f.opt.Hashes.String())
continue
}
hashTypes.Add(hashType)
}
} else {
hashTypes.Add(hash.MD5, hash.SHA1)
}
hashCommands := map[hash.Type]struct {
option *string
emptyHash string
hashCommands []struct{ hashFile, hashEmpty string }
}{
hash.MD5: {
&f.opt.Md5sumCommand,
"d41d8cd98f00b204e9800998ecf8427e",
[]struct{ hashFile, hashEmpty string }{
{"md5sum", "md5sum"},
{"md5 -r", "md5 -r"},
{"rclone md5sum", "rclone md5sum"},
},
},
hash.SHA1: {
&f.opt.Sha1sumCommand,
"da39a3ee5e6b4b0d3255bfef95601890afd80709",
[]struct{ hashFile, hashEmpty string }{
{"sha1sum", "sha1sum"},
{"sha1 -r", "sha1 -r"},
{"rclone sha1sum", "rclone sha1sum"},
},
},
hash.CRC32: {
&f.opt.Crc32sumCommand,
"00000000",
[]struct{ hashFile, hashEmpty string }{
{"crc32", "crc32"},
{"rclone hashsum crc32", "rclone hashsum crc32"},
},
},
hash.SHA256: {
&f.opt.Sha256sumCommand,
"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
[]struct{ hashFile, hashEmpty string }{
{"sha256sum", "sha1sum"},
{"sha256 -r", "sha1 -r"},
{"rclone hashsum sha256", "rclone hashsum sha256"},
},
},
hash.BLAKE3: {
&f.opt.Blake3sumCommand,
"af1349b9f5f9a1a6a0404dea36dcc9499bcb25c9adc112b7cc9a93cae41f3262",
[]struct{ hashFile, hashEmpty string }{
{"b3sum", "b3sum"},
{"rclone hashsum blake3", "rclone hashsum blake3"},
},
},
hash.XXH3: {
&f.opt.Xxh3sumCommand,
"2d06800538d394c2",
[]struct{ hashFile, hashEmpty string }{
// The xxhsum tool uses a non-standard prefix "XXH3_" preceding the hash output for the 64-bit variant
// of XXH3, to avoid confusion with the older 64-bit algorithm XXH64. This was introduced in version
// 0.8.3, released Dec 30, 2024. Older versions only supported the alternative BSD-style output format,
// which is otherwise optional via the --tag argument. We currently do not expect either of these output
// formats and therefore cannot use the "xxhsum -H3" command or its xxh3sum alias directly.
//{"xxh3sum", "xxh3sum"},
//{"xxhsum -H3", "xxhsum -H3"},
{"rclone hashsum xxh3", "rclone hashsum xxh3"},
},
},
hash.XXH128: {
&f.opt.Xxh128sumCommand,
"99aa06d3014798d86001c324468d497f",
[]struct{ hashFile, hashEmpty string }{
{"xxh128sum", "xxh128sum"},
{"xxhsum -H2", "xxhsum -H2"},
{"rclone hashsum xxh128", "rclone hashsum xxh128"},
},
},
}
if f.shellType == "powershell" {
for _, hashType := range []hash.Type{hash.MD5, hash.SHA1, hash.SHA256} {
if entry, ok := hashCommands[hashType]; ok {
entry.hashCommands = append(hashCommands[hashType].hashCommands, struct {
hashFile, hashEmpty string
}{
fmt.Sprintf("&{param($Path);Get-FileHash -Algorithm %v -LiteralPath $Path -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{\"$($_.ToLower()) ${Path}\"}}", hashType),
fmt.Sprintf("Get-FileHash -Algorithm %v -InputStream ([System.IO.MemoryStream]::new()) -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{$_.ToLower()}", hashType),
})
hashCommands[hashType] = entry
}
}
} }
// look for a hash command which works
checkHash := func(hashType hash.Type, commands []struct{ hashFile, hashEmpty string }, expected string, hashCommand *string, changed *bool) bool { checkHash := func(hashType hash.Type, commands []struct{ hashFile, hashEmpty string }, expected string, hashCommand *string, changed *bool) bool {
if *hashCommand == hashCommandNotSupported { if *hashCommand == hashCommandNotSupported {
return false return false
@@ -1818,25 +1679,55 @@ func (f *Fs) Hashes() hash.Set {
} }
changed := false changed := false
for _, hashType := range hashTypes.Array() { md5Commands := []struct {
if entry, ok := hashCommands[hashType]; ok { hashFile, hashEmpty string
if works := checkHash(hashType, entry.hashCommands, entry.emptyHash, entry.option, &changed); works { }{
hashTypesSupported.Add(hashType) {"md5sum", "md5sum"},
} {"md5 -r", "md5 -r"},
} {"rclone md5sum", "rclone md5sum"},
} }
sha1Commands := []struct {
hashFile, hashEmpty string
}{
{"sha1sum", "sha1sum"},
{"sha1 -r", "sha1 -r"},
{"rclone sha1sum", "rclone sha1sum"},
}
if f.shellType == "powershell" {
md5Commands = append(md5Commands, struct {
hashFile, hashEmpty string
}{
"&{param($Path);Get-FileHash -Algorithm MD5 -LiteralPath $Path -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{\"$($_.ToLower()) ${Path}\"}}",
"Get-FileHash -Algorithm MD5 -InputStream ([System.IO.MemoryStream]::new()) -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{$_.ToLower()}",
})
sha1Commands = append(sha1Commands, struct {
hashFile, hashEmpty string
}{
"&{param($Path);Get-FileHash -Algorithm SHA1 -LiteralPath $Path -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{\"$($_.ToLower()) ${Path}\"}}",
"Get-FileHash -Algorithm SHA1 -InputStream ([System.IO.MemoryStream]::new()) -ErrorAction Stop|Select-Object -First 1 -ExpandProperty Hash|ForEach-Object{$_.ToLower()}",
})
}
md5Works := checkHash(hash.MD5, md5Commands, "d41d8cd98f00b204e9800998ecf8427e", &f.opt.Md5sumCommand, &changed)
sha1Works := checkHash(hash.SHA1, sha1Commands, "da39a3ee5e6b4b0d3255bfef95601890afd80709", &f.opt.Sha1sumCommand, &changed)
if changed { if changed {
// Save permanently in config to avoid the extra work next time // Save permanently in config to avoid the extra work next time
for _, hashType := range hashTypes.Array() { fs.Debugf(f, "Setting hash command for %v to %q (set md5sum_command to override)", hash.MD5, f.opt.Md5sumCommand)
if entry, ok := hashCommands[hashType]; ok { f.m.Set("md5sum_command", f.opt.Md5sumCommand)
fs.Debugf(f, "Setting hash command for %v to %q (set %vsum_command to override)", hashType, *entry.option, hashType) fs.Debugf(f, "Setting hash command for %v to %q (set md5sum_command to override)", hash.SHA1, f.opt.Sha1sumCommand)
f.m.Set(fmt.Sprintf("%vsum_command", hashType), *entry.option) f.m.Set("sha1sum_command", f.opt.Sha1sumCommand)
}
}
} }
return hashTypesSupported if sha1Works {
hashSet.Add(hash.SHA1)
}
if md5Works {
hashSet.Add(hash.MD5)
}
return hashSet
} }
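The detection loop above boils down to one idea: a candidate hash command is trusted only if hashing empty input reproduces the well-known empty-file digest. A minimal sketch of that check, assuming a hypothetical runCommand helper that executes a command on the remote and returns its output (the real code feeds empty input differently):

```go
// Sketch only: probe candidate hash commands and keep the first one
// whose output on empty input matches the known empty-file digest.
func detectHashCommand(runCommand func(string) (string, error), candidates []string, emptyHash string) (string, bool) {
	for _, cmd := range candidates {
		out, err := runCommand(cmd + " </dev/null")
		if err != nil {
			continue // command missing or failed; try the next candidate
		}
		if strings.HasPrefix(strings.TrimSpace(out), emptyHash) {
			return cmd, true
		}
	}
	return "", false
}
```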
// About gets usage stats // About gets usage stats
@@ -1863,9 +1754,9 @@ func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
free := vfsStats.FreeSpace() free := vfsStats.FreeSpace()
used := total - free used := total - free
return &fs.Usage{ return &fs.Usage{
Total: fs.NewUsageValue(total), Total: fs.NewUsageValue(int64(total)),
Used: fs.NewUsageValue(used), Used: fs.NewUsageValue(int64(used)),
Free: fs.NewUsageValue(free), Free: fs.NewUsageValue(int64(free)),
}, nil }, nil
} else if err != nil { } else if err != nil {
if errors.Is(err, os.ErrNotExist) { if errors.Is(err, os.ErrNotExist) {
@@ -1971,43 +1862,17 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
_ = o.fs.Hashes() _ = o.fs.Hashes()
var hashCmd string var hashCmd string
switch r { if r == hash.MD5 {
case hash.MD5:
if o.md5sum != nil { if o.md5sum != nil {
return *o.md5sum, nil return *o.md5sum, nil
} }
hashCmd = o.fs.opt.Md5sumCommand hashCmd = o.fs.opt.Md5sumCommand
case hash.SHA1: } else if r == hash.SHA1 {
if o.sha1sum != nil { if o.sha1sum != nil {
return *o.sha1sum, nil return *o.sha1sum, nil
} }
hashCmd = o.fs.opt.Sha1sumCommand hashCmd = o.fs.opt.Sha1sumCommand
case hash.CRC32: } else {
if o.crc32sum != nil {
return *o.crc32sum, nil
}
hashCmd = o.fs.opt.Crc32sumCommand
case hash.SHA256:
if o.sha256sum != nil {
return *o.sha256sum, nil
}
hashCmd = o.fs.opt.Sha256sumCommand
case hash.BLAKE3:
if o.blake3sum != nil {
return *o.blake3sum, nil
}
hashCmd = o.fs.opt.Blake3sumCommand
case hash.XXH3:
if o.xxh3sum != nil {
return *o.xxh3sum, nil
}
hashCmd = o.fs.opt.Xxh3sumCommand
case hash.XXH128:
if o.xxh128sum != nil {
return *o.xxh128sum, nil
}
hashCmd = o.fs.opt.Xxh128sumCommand
default:
return "", hash.ErrUnsupported return "", hash.ErrUnsupported
} }
if hashCmd == "" || hashCmd == hashCommandNotSupported { if hashCmd == "" || hashCmd == hashCommandNotSupported {
@@ -2024,21 +1889,10 @@ func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
} }
hashString := parseHash(outBytes) hashString := parseHash(outBytes)
fs.Debugf(o, "Parsed hash: %s", hashString) fs.Debugf(o, "Parsed hash: %s", hashString)
switch r { if r == hash.MD5 {
case hash.MD5:
o.md5sum = &hashString o.md5sum = &hashString
case hash.SHA1: } else if r == hash.SHA1 {
o.sha1sum = &hashString o.sha1sum = &hashString
case hash.CRC32:
o.crc32sum = &hashString
case hash.SHA256:
o.sha256sum = &hashString
case hash.BLAKE3:
o.blake3sum = &hashString
case hash.XXH3:
o.xxh3sum = &hashString
case hash.XXH128:
o.xxh128sum = &hashString
} }
return hashString, nil return hashString, nil
} }
@@ -2103,7 +1957,7 @@ func (f *Fs) remoteShellPath(remote string) string {
} }
// Converts a byte array from the SSH session returned by // Converts a byte array from the SSH session returned by
// an invocation of a hash command to a hash string // an invocation of md5sum/sha1sum to a hash string
// as expected by the rest of this application // as expected by the rest of this application
func parseHash(bytes []byte) string { func parseHash(bytes []byte) string {
// For strings with backslash *sum writes a leading \ // For strings with backslash *sum writes a leading \
@@ -2332,11 +2186,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Clear the hash cache since we are about to update the object // Clear the hash cache since we are about to update the object
o.md5sum = nil o.md5sum = nil
o.sha1sum = nil o.sha1sum = nil
o.crc32sum = nil
o.sha256sum = nil
o.blake3sum = nil
o.xxh3sum = nil
o.xxh128sum = nil
c, err := o.fs.getSftpConnection(ctx) c, err := o.fs.getSftpConnection(ctx)
if err != nil { if err != nil {
return fmt.Errorf("Update: %w", err) return fmt.Errorf("Update: %w", err)


@@ -38,7 +38,7 @@ func (f *Fs) dial(ctx context.Context, network, addr string) (*conn, error) {
d := &smb2.Dialer{} d := &smb2.Dialer{}
if f.opt.UseKerberos { if f.opt.UseKerberos {
cl, err := NewKerberosFactory().GetClient(f.opt.KerberosCCache) cl, err := getKerberosClient()
if err != nil { if err != nil {
return nil, err return nil, err
} }


@@ -1,99 +0,0 @@
package smb
import (
"context"
"fmt"
"os"
"sync"
"github.com/cloudsoda/go-smb2"
"golang.org/x/sync/errgroup"
)
// FsInterface defines the methods that filePool needs from Fs
type FsInterface interface {
getConnection(ctx context.Context, share string) (*conn, error)
putConnection(pc **conn, err error)
removeSession()
}
type file struct {
*smb2.File
c *conn
}
type filePool struct {
ctx context.Context
fs FsInterface
share string
path string
mu sync.Mutex
pool []*file
}
func newFilePool(ctx context.Context, fs FsInterface, share, path string) *filePool {
return &filePool{
ctx: ctx,
fs: fs,
share: share,
path: path,
}
}
func (p *filePool) get() (*file, error) {
p.mu.Lock()
if len(p.pool) > 0 {
f := p.pool[len(p.pool)-1]
p.pool = p.pool[:len(p.pool)-1]
p.mu.Unlock()
return f, nil
}
p.mu.Unlock()
c, err := p.fs.getConnection(p.ctx, p.share)
if err != nil {
return nil, err
}
fl, err := c.smbShare.OpenFile(p.path, os.O_WRONLY, 0o644)
if err != nil {
p.fs.putConnection(&c, err)
return nil, fmt.Errorf("failed to open: %w", err)
}
return &file{File: fl, c: c}, nil
}
func (p *filePool) put(f *file, err error) {
if f == nil {
return
}
if err != nil {
_ = f.Close()
p.fs.putConnection(&f.c, err)
return
}
p.mu.Lock()
p.pool = append(p.pool, f)
p.mu.Unlock()
}
func (p *filePool) drain() error {
p.mu.Lock()
files := p.pool
p.pool = nil
p.mu.Unlock()
g, _ := errgroup.WithContext(p.ctx)
for _, f := range files {
g.Go(func() error {
err := f.Close()
p.fs.putConnection(&f.c, err)
return err
})
}
return g.Wait()
}
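The pool above exists so that concurrent WriteAt calls can reuse open SMB file handles instead of reopening the file for every write. A minimal sketch of the intended get/put lifecycle, using the types defined above:

```go
// Sketch only: borrow a handle, write one chunk, return the handle.
// On a write error the handle is discarded and its connection recycled
// by put; on success the handle goes back into the pool for reuse.
func writeChunk(p *filePool, buf []byte, off int64) (int, error) {
	f, err := p.get() // pooled handle, or a fresh one if the pool is empty
	if err != nil {
		return 0, err
	}
	n, werr := f.WriteAt(buf, off)
	p.put(f, werr)
	return n, werr
}
```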


@@ -1,228 +0,0 @@
package smb
import (
"context"
"errors"
"sync"
"testing"
"github.com/cloudsoda/go-smb2"
"github.com/stretchr/testify/assert"
)
// Mock Fs that implements FsInterface
type mockFs struct {
mu sync.Mutex
putConnectionCalled bool
putConnectionErr error
getConnectionCalled bool
getConnectionErr error
getConnectionResult *conn
removeSessionCalled bool
}
func (m *mockFs) putConnection(pc **conn, err error) {
m.mu.Lock()
defer m.mu.Unlock()
m.putConnectionCalled = true
m.putConnectionErr = err
}
func (m *mockFs) getConnection(ctx context.Context, share string) (*conn, error) {
m.mu.Lock()
defer m.mu.Unlock()
m.getConnectionCalled = true
if m.getConnectionErr != nil {
return nil, m.getConnectionErr
}
if m.getConnectionResult != nil {
return m.getConnectionResult, nil
}
return &conn{}, nil
}
func (m *mockFs) removeSession() {
m.mu.Lock()
defer m.mu.Unlock()
m.removeSessionCalled = true
}
func (m *mockFs) isPutConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionCalled
}
func (m *mockFs) getPutConnectionErr() error {
m.mu.Lock()
defer m.mu.Unlock()
return m.putConnectionErr
}
func (m *mockFs) isGetConnectionCalled() bool {
m.mu.Lock()
defer m.mu.Unlock()
return m.getConnectionCalled
}
func newMockFs() *mockFs {
return &mockFs{}
}
// Helper function to create a mock file
func newMockFile() *file {
return &file{
File: &smb2.File{},
c: &conn{},
}
}
// Test filePool creation
func TestNewFilePool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
share := "testshare"
path := "/test/path"
pool := newFilePool(ctx, fs, share, path)
assert.NotNil(t, pool)
assert.Equal(t, ctx, pool.ctx)
assert.Equal(t, fs, pool.fs)
assert.Equal(t, share, pool.share)
assert.Equal(t, path, pool.path)
assert.Empty(t, pool.pool)
}
// Test getting file from pool when pool has files
func TestFilePool_Get_FromPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add a mock file to the pool
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
// Get file from pool
f, err := pool.get()
assert.NoError(t, err)
assert.NotNil(t, f)
assert.Equal(t, mockFile, f)
assert.Empty(t, pool.pool)
}
// Test getting file when pool is empty
func TestFilePool_Get_EmptyPool(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
// Set up the mock to return an error from getConnection
// This tests that the pool calls getConnection when empty
fs.getConnectionErr = errors.New("connection failed")
pool := newFilePool(ctx, fs, "testshare", "test/path")
// This should call getConnection and return the error
f, err := pool.get()
assert.Error(t, err)
assert.Nil(t, f)
assert.True(t, fs.isGetConnectionCalled())
assert.Equal(t, "connection failed", err.Error())
}
// Test putting file successfully
func TestFilePool_Put_Success(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, nil)
assert.Len(t, pool.pool, 1)
assert.Equal(t, mockFile, pool.pool[0])
}
// Test putting file with error
func TestFilePool_Put_WithError(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
mockFile := newMockFile()
pool.put(mockFile, errors.New("write error"))
// Should call putConnection with error
assert.True(t, fs.isPutConnectionCalled())
assert.Equal(t, errors.New("write error"), fs.getPutConnectionErr())
assert.Empty(t, pool.pool)
}
// Test putting nil file
func TestFilePool_Put_NilFile(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Should not panic
pool.put(nil, nil)
pool.put(nil, errors.New("some error"))
assert.Empty(t, pool.pool)
}
// Test draining pool with files
func TestFilePool_Drain_WithFiles(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
// Add mock files to pool
mockFile1 := newMockFile()
mockFile2 := newMockFile()
pool.pool = append(pool.pool, mockFile1, mockFile2)
// Before draining
assert.Len(t, pool.pool, 2)
_ = pool.drain()
assert.Empty(t, pool.pool)
}
// Test concurrent access to pool
func TestFilePool_ConcurrentAccess(t *testing.T) {
ctx := context.Background()
fs := newMockFs()
pool := newFilePool(ctx, fs, "testshare", "/test/path")
const numGoroutines = 10
for i := 0; i < numGoroutines; i++ {
mockFile := newMockFile()
pool.pool = append(pool.pool, mockFile)
}
// Test concurrent get operations
done := make(chan bool, numGoroutines)
for i := 0; i < numGoroutines; i++ {
go func() {
defer func() { done <- true }()
f, err := pool.get()
if err == nil {
pool.put(f, nil)
}
}()
}
for i := 0; i < numGoroutines; i++ {
<-done
}
// Pool should be in a consistent state after the concurrent access
assert.Len(t, pool.pool, numGoroutines)
}


@@ -7,132 +7,72 @@ import (
"path/filepath" "path/filepath"
"strings" "strings"
"sync" "sync"
"time"
"github.com/jcmturner/gokrb5/v8/client" "github.com/jcmturner/gokrb5/v8/client"
"github.com/jcmturner/gokrb5/v8/config" "github.com/jcmturner/gokrb5/v8/config"
"github.com/jcmturner/gokrb5/v8/credentials" "github.com/jcmturner/gokrb5/v8/credentials"
) )
// KerberosFactory encapsulates dependencies and caches for Kerberos clients. var (
type KerberosFactory struct { kerberosClient *client.Client
// clientCache caches Kerberos clients keyed by resolved ccache path. kerberosErr error
// Clients are reused unless the associated ccache file changes. kerberosOnce sync.Once
clientCache sync.Map // map[string]*client.Client )
// errCache caches errors encountered when loading Kerberos clients. // getKerberosClient returns a Kerberos client that can be used to authenticate.
// Prevents repeated attempts for paths that previously failed. func getKerberosClient() (*client.Client, error) {
errCache sync.Map // map[string]error if kerberosClient == nil || kerberosErr == nil {
kerberosOnce.Do(func() {
kerberosClient, kerberosErr = createKerberosClient()
})
}
// modTimeCache tracks the last known modification time of ccache files. return kerberosClient, kerberosErr
// Used to detect changes and trigger credential refresh.
modTimeCache sync.Map // map[string]time.Time
loadCCache func(string) (*credentials.CCache, error)
newClient func(*credentials.CCache, *config.Config, ...func(*client.Settings)) (*client.Client, error)
loadConfig func() (*config.Config, error)
} }
// NewKerberosFactory creates a new instance of KerberosFactory with default dependencies. // createKerberosClient creates a new Kerberos client.
func NewKerberosFactory() *KerberosFactory { func createKerberosClient() (*client.Client, error) {
return &KerberosFactory{
loadCCache: credentials.LoadCCache,
newClient: client.NewFromCCache,
loadConfig: defaultLoadKerberosConfig,
}
}
// GetClient returns a cached Kerberos client or creates a new one if needed.
func (kf *KerberosFactory) GetClient(ccachePath string) (*client.Client, error) {
resolvedPath, err := resolveCcachePath(ccachePath)
if err != nil {
return nil, err
}
stat, err := os.Stat(resolvedPath)
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
mtime := stat.ModTime()
if oldMod, ok := kf.modTimeCache.Load(resolvedPath); ok {
if oldTime, ok := oldMod.(time.Time); ok && oldTime.Equal(mtime) {
if errVal, ok := kf.errCache.Load(resolvedPath); ok {
return nil, errVal.(error)
}
if clientVal, ok := kf.clientCache.Load(resolvedPath); ok {
return clientVal.(*client.Client), nil
}
}
}
// Load Kerberos config
cfg, err := kf.loadConfig()
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
// Load ccache
ccache, err := kf.loadCCache(resolvedPath)
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
// Create new client
cl, err := kf.newClient(ccache, cfg)
if err != nil {
kf.errCache.Store(resolvedPath, err)
return nil, err
}
// Cache and return
kf.clientCache.Store(resolvedPath, cl)
kf.errCache.Delete(resolvedPath)
kf.modTimeCache.Store(resolvedPath, mtime)
return cl, nil
}
// resolveCcachePath resolves the KRB5 ccache path.
func resolveCcachePath(ccachePath string) (string, error) {
if ccachePath == "" {
ccachePath = os.Getenv("KRB5CCNAME")
}
switch {
case strings.Contains(ccachePath, ":"):
parts := strings.SplitN(ccachePath, ":", 2)
prefix, path := parts[0], parts[1]
switch prefix {
case "FILE":
return path, nil
case "DIR":
primary, err := os.ReadFile(filepath.Join(path, "primary"))
if err != nil {
return "", err
}
return filepath.Join(path, strings.TrimSpace(string(primary))), nil
default:
return "", fmt.Errorf("unsupported KRB5CCNAME: %s", ccachePath)
}
case ccachePath == "":
u, err := user.Current()
if err != nil {
return "", err
}
return "/tmp/krb5cc_" + u.Uid, nil
default:
return ccachePath, nil
}
}
// defaultLoadKerberosConfig loads Kerberos config from default or env path.
func defaultLoadKerberosConfig() (*config.Config, error) {
cfgPath := os.Getenv("KRB5_CONFIG") cfgPath := os.Getenv("KRB5_CONFIG")
if cfgPath == "" { if cfgPath == "" {
cfgPath = "/etc/krb5.conf" cfgPath = "/etc/krb5.conf"
} }
return config.Load(cfgPath)
cfg, err := config.Load(cfgPath)
if err != nil {
return nil, err
}
// Determine the ccache location from the environment, falling back to the
// default location.
ccachePath := os.Getenv("KRB5CCNAME")
switch {
case strings.Contains(ccachePath, ":"):
parts := strings.SplitN(ccachePath, ":", 2)
switch parts[0] {
case "FILE":
ccachePath = parts[1]
case "DIR":
primary, err := os.ReadFile(filepath.Join(parts[1], "primary"))
if err != nil {
return nil, err
}
ccachePath = filepath.Join(parts[1], strings.TrimSpace(string(primary)))
default:
return nil, fmt.Errorf("unsupported KRB5CCNAME: %s", ccachePath)
}
case ccachePath == "":
u, err := user.Current()
if err != nil {
return nil, err
}
ccachePath = "/tmp/krb5cc_" + u.Uid
}
ccache, err := credentials.LoadCCache(ccachePath)
if err != nil {
return nil, err
}
return client.NewFromCCache(ccache, cfg)
} }


@@ -1,142 +0,0 @@
package smb
import (
"os"
"path/filepath"
"testing"
"time"
"github.com/jcmturner/gokrb5/v8/client"
"github.com/jcmturner/gokrb5/v8/config"
"github.com/jcmturner/gokrb5/v8/credentials"
"github.com/stretchr/testify/assert"
)
func TestResolveCcachePath(t *testing.T) {
tmpDir := t.TempDir()
// Setup: files for FILE and DIR modes
fileCcache := filepath.Join(tmpDir, "file_ccache")
err := os.WriteFile(fileCcache, []byte{}, 0600)
assert.NoError(t, err)
dirCcache := filepath.Join(tmpDir, "dir_ccache")
err = os.Mkdir(dirCcache, 0755)
assert.NoError(t, err)
err = os.WriteFile(filepath.Join(dirCcache, "primary"), []byte("ticket"), 0600)
assert.NoError(t, err)
dirCcacheTicket := filepath.Join(dirCcache, "ticket")
err = os.WriteFile(dirCcacheTicket, []byte{}, 0600)
assert.NoError(t, err)
tests := []struct {
name string
ccachePath string
envKRB5CCNAME string
expected string
expectError bool
}{
{
name: "FILE: prefix from env",
ccachePath: "",
envKRB5CCNAME: "FILE:" + fileCcache,
expected: fileCcache,
},
{
name: "DIR: prefix from env",
ccachePath: "",
envKRB5CCNAME: "DIR:" + dirCcache,
expected: dirCcacheTicket,
},
{
name: "Unsupported prefix",
ccachePath: "",
envKRB5CCNAME: "MEMORY:/bad/path",
expectError: true,
},
{
name: "Direct file path (no prefix)",
ccachePath: "/tmp/myccache",
expected: "/tmp/myccache",
},
{
name: "Default to /tmp/krb5cc_<uid>",
ccachePath: "",
envKRB5CCNAME: "",
expected: "/tmp/krb5cc_",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
t.Setenv("KRB5CCNAME", tt.envKRB5CCNAME)
result, err := resolveCcachePath(tt.ccachePath)
if tt.expectError {
assert.Error(t, err)
} else {
assert.NoError(t, err)
assert.Contains(t, result, tt.expected)
}
})
}
}
func TestKerberosFactory_GetClient_ReloadOnCcacheChange(t *testing.T) {
// Create temp ccache file
tmpFile, err := os.CreateTemp("", "krb5cc_test")
assert.NoError(t, err)
defer func() {
if err := os.Remove(tmpFile.Name()); err != nil {
t.Logf("Failed to remove temp file %s: %v", tmpFile.Name(), err)
}
}()
unixPath := filepath.ToSlash(tmpFile.Name())
ccachePath := "FILE:" + unixPath
initialContent := []byte("CCACHE_VERSION 4\n")
_, err = tmpFile.Write(initialContent)
assert.NoError(t, err)
assert.NoError(t, tmpFile.Close())
// Setup mocks
loadCallCount := 0
mockLoadCCache := func(path string) (*credentials.CCache, error) {
loadCallCount++
return &credentials.CCache{}, nil
}
mockNewClient := func(cc *credentials.CCache, cfg *config.Config, opts ...func(*client.Settings)) (*client.Client, error) {
return &client.Client{}, nil
}
mockLoadConfig := func() (*config.Config, error) {
return &config.Config{}, nil
}
factory := &KerberosFactory{
loadCCache: mockLoadCCache,
newClient: mockNewClient,
loadConfig: mockLoadConfig,
}
// First call — triggers loading
_, err = factory.GetClient(ccachePath)
assert.NoError(t, err)
assert.Equal(t, 1, loadCallCount, "expected 1 load call")
// Second call — should reuse cache, no additional load
_, err = factory.GetClient(ccachePath)
assert.NoError(t, err)
assert.Equal(t, 1, loadCallCount, "expected cached reuse, no new load")
// Simulate file update
time.Sleep(1 * time.Second) // ensure mtime changes
err = os.WriteFile(tmpFile.Name(), []byte("CCACHE_VERSION 4\n#updated"), 0600)
assert.NoError(t, err)
// Third call — should detect change, reload
_, err = factory.GetClient(ccachePath)
assert.NoError(t, err)
assert.Equal(t, 2, loadCallCount, "expected reload on changed ccache")
}


@@ -3,7 +3,6 @@ package smb
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"io" "io"
"os" "os"
@@ -108,20 +107,6 @@ Set to 0 to keep connections indefinitely.
Help: "Whether the server is configured to be case-insensitive.\n\nAlways true on Windows shares.", Help: "Whether the server is configured to be case-insensitive.\n\nAlways true on Windows shares.",
Default: true, Default: true,
Advanced: true, Advanced: true,
}, {
Name: "kerberos_ccache",
Help: `Path to the Kerberos credential cache (krb5cc).
Overrides the default KRB5CCNAME environment variable and allows this
instance of the SMB backend to use a different Kerberos cache file.
This is useful when mounting multiple SMB remotes with different credentials
or running in multi-user environments.
Supported formats:
- FILE:/path/to/ccache Use the specified file.
- DIR:/path/to/ccachedir Use the primary file inside the specified directory.
- /path/to/ccache Interpreted as a file path.`,
Advanced: true,
}, { }, {
Name: config.ConfigEncoding, Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp, Help: config.ConfigEncodingHelp,
@@ -152,7 +137,6 @@ type Options struct {
Domain string `config:"domain"` Domain string `config:"domain"`
SPN string `config:"spn"` SPN string `config:"spn"`
UseKerberos bool `config:"use_kerberos"` UseKerberos bool `config:"use_kerberos"`
KerberosCCache string `config:"kerberos_ccache"`
HideSpecial bool `config:"hide_special_share"` HideSpecial bool `config:"hide_special_share"`
CaseInsensitive bool `config:"case_insensitive"` CaseInsensitive bool `config:"case_insensitive"`
IdleTimeout fs.Duration `config:"idle_timeout"` IdleTimeout fs.Duration `config:"idle_timeout"`
@@ -495,82 +479,22 @@ func (f *Fs) About(ctx context.Context) (_ *fs.Usage, err error) {
return nil, err return nil, err
} }
bs := stat.BlockSize() bs := int64(stat.BlockSize())
usage := &fs.Usage{ usage := &fs.Usage{
Total: fs.NewUsageValue(bs * stat.TotalBlockCount()), Total: fs.NewUsageValue(bs * int64(stat.TotalBlockCount())),
Used: fs.NewUsageValue(bs * (stat.TotalBlockCount() - stat.FreeBlockCount())), Used: fs.NewUsageValue(bs * int64(stat.TotalBlockCount()-stat.FreeBlockCount())),
Free: fs.NewUsageValue(bs * stat.AvailableBlockCount()), Free: fs.NewUsageValue(bs * int64(stat.AvailableBlockCount())),
} }
return usage, nil return usage, nil
} }
type smbWriterAt struct {
pool *filePool
closed bool
closeMu sync.Mutex
wg sync.WaitGroup
}
func (w *smbWriterAt) WriteAt(p []byte, off int64) (int, error) {
w.closeMu.Lock()
if w.closed {
w.closeMu.Unlock()
return 0, errors.New("writer already closed")
}
w.wg.Add(1)
w.closeMu.Unlock()
defer w.wg.Done()
f, err := w.pool.get()
if err != nil {
return 0, fmt.Errorf("failed to get file from pool: %w", err)
}
n, writeErr := f.WriteAt(p, off)
w.pool.put(f, writeErr)
if writeErr != nil {
return n, fmt.Errorf("failed to write at offset %d: %w", off, writeErr)
}
return n, writeErr
}
func (w *smbWriterAt) Close() error {
w.closeMu.Lock()
defer w.closeMu.Unlock()
if w.closed {
return nil
}
w.closed = true
// Wait for all pending writes to finish
w.wg.Wait()
var errs []error
// Drain the pool
if err := w.pool.drain(); err != nil {
errs = append(errs, fmt.Errorf("failed to drain file pool: %w", err))
}
// Remove session
w.pool.fs.removeSession()
if len(errs) > 0 {
return errors.Join(errs...)
}
return nil
}
// OpenWriterAt opens with a handle for random access writes // OpenWriterAt opens with a handle for random access writes
// //
// Pass in the remote desired and the size if known. // Pass in the remote desired and the size if known.
// //
// It truncates any existing object // It truncates any existing object
func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) { func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.WriterAtCloser, error) {
var err error
o := &Object{ o := &Object{
fs: f, fs: f,
remote: remote, remote: remote,
@@ -580,42 +504,27 @@ func (f *Fs) OpenWriterAt(ctx context.Context, remote string, size int64) (fs.Wr
return nil, fs.ErrorIsDir return nil, fs.ErrorIsDir
} }
err := o.fs.ensureDirectory(ctx, share, filename) err = o.fs.ensureDirectory(ctx, share, filename)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to make parent directories: %w", err) return nil, fmt.Errorf("failed to make parent directories: %w", err)
} }
smbPath := o.fs.toSambaPath(filename) filename = o.fs.toSambaPath(filename)
o.fs.addSession() // Show session in use
defer o.fs.removeSession()
// One-time truncate
cn, err := o.fs.getConnection(ctx, share) cn, err := o.fs.getConnection(ctx, share)
if err != nil { if err != nil {
return nil, err return nil, err
} }
file, err := cn.smbShare.OpenFile(smbPath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o644)
fl, err := cn.smbShare.OpenFile(filename, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o644)
if err != nil { if err != nil {
o.fs.putConnection(&cn, err) return nil, fmt.Errorf("failed to open: %w", err)
return nil, err
} }
if size > 0 {
if truncateErr := file.Truncate(size); truncateErr != nil {
_ = file.Close()
o.fs.putConnection(&cn, truncateErr)
return nil, fmt.Errorf("failed to truncate file: %w", truncateErr)
}
}
if closeErr := file.Close(); closeErr != nil {
o.fs.putConnection(&cn, closeErr)
return nil, fmt.Errorf("failed to close file after truncate: %w", closeErr)
}
o.fs.putConnection(&cn, nil)
// Add a new session return fl, nil
o.fs.addSession()
return &smbWriterAt{
pool: newFilePool(ctx, o.fs, share, smbPath),
}, nil
} }
// Shutdown the backend, closing any background tasks and any // Shutdown the backend, closing any background tasks and any


@@ -6,7 +6,6 @@ import (
"testing" "testing"
"github.com/rclone/rclone/backend/smb" "github.com/rclone/rclone/backend/smb"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/fstest/fstests"
) )
@@ -19,9 +18,6 @@ func TestIntegration(t *testing.T) {
} }
func TestIntegration2(t *testing.T) { func TestIntegration2(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
krb5Dir := t.TempDir() krb5Dir := t.TempDir()
t.Setenv("KRB5_CONFIG", filepath.Join(krb5Dir, "krb5.conf")) t.Setenv("KRB5_CONFIG", filepath.Join(krb5Dir, "krb5.conf"))
t.Setenv("KRB5CCNAME", filepath.Join(krb5Dir, "ccache")) t.Setenv("KRB5CCNAME", filepath.Join(krb5Dir, "ccache"))
@@ -30,24 +26,3 @@ func TestIntegration2(t *testing.T) {
NilObject: (*smb.Object)(nil), NilObject: (*smb.Object)(nil),
}) })
} }
func TestIntegration3(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
krb5Dir := t.TempDir()
t.Setenv("KRB5_CONFIG", filepath.Join(krb5Dir, "krb5.conf"))
ccache := filepath.Join(krb5Dir, "ccache")
t.Setenv("RCLONE_TEST_CUSTOM_CCACHE_LOCATION", ccache)
name := "TestSMBKerberosCcache"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":rclone",
NilObject: (*smb.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "kerberos_ccache", Value: ccache},
},
})
}


@@ -491,8 +491,8 @@ func swiftConnection(ctx context.Context, opt *Options, name string) (*swift.Con
ApplicationCredentialName: opt.ApplicationCredentialName, ApplicationCredentialName: opt.ApplicationCredentialName,
ApplicationCredentialSecret: opt.ApplicationCredentialSecret, ApplicationCredentialSecret: opt.ApplicationCredentialSecret,
EndpointType: swift.EndpointType(opt.EndpointType), EndpointType: swift.EndpointType(opt.EndpointType),
ConnectTimeout: time.Duration(10 * ci.ConnectTimeout), // Use the timeouts in the transport ConnectTimeout: 10 * ci.ConnectTimeout, // Use the timeouts in the transport
Timeout: time.Duration(10 * ci.Timeout), // Use the timeouts in the transport Timeout: 10 * ci.Timeout, // Use the timeouts in the transport
Transport: fshttp.NewTransport(ctx), Transport: fshttp.NewTransport(ctx),
FetchUntilEmptyPage: opt.FetchUntilEmptyPage, FetchUntilEmptyPage: opt.FetchUntilEmptyPage,
PartialPageFetchThreshold: opt.PartialPageFetchThreshold, PartialPageFetchThreshold: opt.PartialPageFetchThreshold,
@@ -773,20 +773,21 @@ func (f *Fs) list(ctx context.Context, container, directory, prefix string, addC
} }
// listDir lists a single directory // listDir lists a single directory
func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool, callback func(fs.DirEntry) error) (err error) { func (f *Fs) listDir(ctx context.Context, container, directory, prefix string, addContainer bool) (entries fs.DirEntries, err error) {
if container == "" { if container == "" {
return fs.ErrorListBucketRequired return nil, fs.ErrorListBucketRequired
} }
// List the objects // List the objects
err = f.list(ctx, container, directory, prefix, addContainer, false, false, func(entry fs.DirEntry) error { err = f.list(ctx, container, directory, prefix, addContainer, false, false, func(entry fs.DirEntry) error {
return callback(entry) entries = append(entries, entry)
return nil
}) })
if err != nil { if err != nil {
return err return nil, err
} }
// container must be present if listing succeeded // container must be present if listing succeeded
f.cache.MarkOK(container) f.cache.MarkOK(container)
return nil return entries, nil
} }
// listContainers lists the containers // listContainers lists the containers
@@ -817,46 +818,14 @@ func (f *Fs) listContainers(ctx context.Context) (entries fs.DirEntries, err err
// This should return ErrDirNotFound if the directory isn't // This should return ErrDirNotFound if the directory isn't
// found. // found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) { func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
return list.WithListP(ctx, dir, f)
}
// ListP lists the objects and directories of the Fs starting
// from dir non recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
func (f *Fs) ListP(ctx context.Context, dir string, callback fs.ListRCallback) error {
list := list.NewHelper(callback)
container, directory := f.split(dir) container, directory := f.split(dir)
if container == "" { if container == "" {
if directory != "" { if directory != "" {
return fs.ErrorListBucketRequired return nil, fs.ErrorListBucketRequired
}
entries, err := f.listContainers(ctx)
if err != nil {
return err
}
for _, entry := range entries {
err = list.Add(entry)
if err != nil {
return err
}
}
} else {
err := f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "", list.Add)
if err != nil {
return err
} }
return f.listContainers(ctx)
} }
return list.Flush() return f.listDir(ctx, container, directory, f.rootDirectory, f.rootContainer == "")
} }
// ListR lists the objects and directories of the Fs starting // ListR lists the objects and directories of the Fs starting
@@ -1681,7 +1650,6 @@ var (
_ fs.PutStreamer = &Fs{} _ fs.PutStreamer = &Fs{}
_ fs.Copier = &Fs{} _ fs.Copier = &Fs{}
_ fs.ListRer = &Fs{} _ fs.ListRer = &Fs{}
_ fs.ListPer = &Fs{}
_ fs.Object = &Object{} _ fs.Object = &Object{}
_ fs.MimeTyper = &Object{} _ fs.MimeTyper = &Object{}
) )
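The removed ListP implemented rclone's paged-listing interface: entries are handed to a callback in tranches rather than accumulated in memory. A small sketch of how a caller consumes it — any fs.ListPer works here:

```go
// Sketch only: count entries as tranches arrive instead of
// materialising the whole directory listing in memory.
func countEntries(ctx context.Context, f fs.ListPer, dir string) (n int, err error) {
	err = f.ListP(ctx, dir, func(entries fs.DirEntries) error {
		n += len(entries)
		return nil // returning an error here stops the listing early
	})
	return n, err
}
```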


@@ -1,7 +1,7 @@
// Package common defines code common to the union and the policies // Package common defines code common to the union and the policies
// //
// These need to be defined in a separate package to avoid import loops // These need to be defined in a separate package to avoid import loops
package common //nolint:revive // Don't include revive when running golangci-lint because this triggers var-naming: avoid meaningless package names package common
import "github.com/rclone/rclone/fs" import "github.com/rclone/rclone/fs"


@@ -12,5 +12,4 @@
<seb•ɑƬ•chezwam•ɖɵʈ•org> <seb•ɑƬ•chezwam•ɖɵʈ•org>
<allllaboutyou@gmail.com> <allllaboutyou@gmail.com>
<psycho@feltzv.fr> <psycho@feltzv.fr>
<afw5059@gmail.com> <afw5059@gmail.com>
<piyushgarg80>


@@ -4,12 +4,12 @@ This script checks for unauthorized modifications in autogenerated sections of m
It is designed to be used in a GitHub Actions workflow or a local pre-commit hook. It is designed to be used in a GitHub Actions workflow or a local pre-commit hook.
Features: Features:
- Detects markdown files changed between a commit and one of its ancestors. Default is to - Detects markdown files changed in the last commit.
check the last commit only. When triggered on a pull request it should typically compare the
pull request branch head and its merge base - the commit on the main branch before it diverged.
- Identifies modified autogenerated sections marked by specific comments. - Identifies modified autogenerated sections marked by specific comments.
- Reports violations using GitHub Actions error messages. - Reports violations using GitHub Actions error messages.
- Exits with a nonzero status code if unauthorized changes are found. - Exits with a nonzero status code if unauthorized changes are found.
It currently only checks the last commit.
""" """
import re import re
@@ -22,18 +22,18 @@ def run_git(args):
""" """
return subprocess.run(["git"] + args, stdout=subprocess.PIPE, text=True, check=True).stdout.strip() return subprocess.run(["git"] + args, stdout=subprocess.PIPE, text=True, check=True).stdout.strip()
def get_changed_files(base, head): def get_changed_files():
""" """
Retrieve a list of markdown files that were changed between the base and head commits. Retrieve a list of markdown files that were changed in the last commit.
""" """
files = run_git(["diff", "--name-only", f"{base}...{head}"]).splitlines() files = run_git(["diff", "--name-only", "HEAD~1", "HEAD"]).splitlines()
return [f for f in files if f.endswith(".md")] return [f for f in files if f.endswith(".md")]
def get_diff(file, base, head): def get_diff(file):
""" """
Get the diff of a given file between the base and head commits. Get the diff of a given file between the last commit and the current version.
""" """
return run_git(["diff", "-U0", f"{base}...{head}", "--", file]).splitlines() return run_git(["diff", "-U0", "HEAD~1", "HEAD", "--", file]).splitlines()
def get_file_content(ref, file): def get_file_content(ref, file):
""" """
@@ -70,7 +70,7 @@ def show_error(file_name, line, message):
""" """
print(f"::error file={file_name},line={line}::{message} at {file_name} line {line}") print(f"::error file={file_name},line={line}::{message} at {file_name} line {line}")
def check_file(file, base, head): def check_file(file):
""" """
Check a markdown file for modifications in autogenerated regions. Check a markdown file for modifications in autogenerated regions.
""" """
@@ -84,7 +84,7 @@ def check_file(file, base, head):
# Entire autogenerated file check. # Entire autogenerated file check.
if any("autogenerated - DO NOT EDIT" in l for l in new_lines[:10]): if any("autogenerated - DO NOT EDIT" in l for l in new_lines[:10]):
if get_diff(file, base, head): if get_diff(file):
show_error(file, 1, "Autogenerated file modified") show_error(file, 1, "Autogenerated file modified")
return True return True
return False return False
@@ -92,7 +92,7 @@ def check_file(file, base, head):
# Partial autogenerated regions. # Partial autogenerated regions.
regions_new = find_regions(new_lines) regions_new = find_regions(new_lines)
regions_old = find_regions(old_lines) regions_old = find_regions(old_lines)
diff = get_diff(file, base, head) diff = get_diff(file)
hunk_re = re.compile(r"^@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@") hunk_re = re.compile(r"^@@ -(\d+),?(\d*) \+(\d+),?(\d*) @@")
new_ln = old_ln = None new_ln = old_ln = None
@@ -124,15 +124,9 @@ def main():
""" """
Main function that iterates over changed files and checks them for violations. Main function that iterates over changed files and checks them for violations.
""" """
base = "HEAD~1"
head = "HEAD"
if len(sys.argv) > 1:
base = sys.argv[1]
if len(sys.argv) > 2:
head = sys.argv[2]
found = False found = False
for f in get_changed_files(base, head): for f in get_changed_files():
if check_file(f, base, head): if check_file(f):
found = True found = True
if found: if found:
sys.exit(1) sys.exit(1)


@@ -1,119 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# Create test TLS certificates for use with rclone.
OUT_DIR="${OUT_DIR:-./tls-test}"
CA_SUBJ="${CA_SUBJ:-/C=US/ST=Test/L=Test/O=Test Org/OU=Test Unit/CN=Test Root CA}"
SERVER_CN="${SERVER_CN:-localhost}"
CLIENT_CN="${CLIENT_CN:-Test Client}"
CLIENT_KEY_PASS="${CLIENT_KEY_PASS:-testpassword}"
CA_DAYS=${CA_DAYS:-3650}
SERVER_DAYS=${SERVER_DAYS:-825}
CLIENT_DAYS=${CLIENT_DAYS:-825}
mkdir -p "$OUT_DIR"
cd "$OUT_DIR"
# Create OpenSSL config
# CA extensions
cat > ca_openssl.cnf <<'EOF'
[ ca_ext ]
basicConstraints = critical, CA:true, pathlen:1
keyUsage = critical, keyCertSign, cRLSign
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
EOF
# Server extensions (SAN includes localhost + loopback IP)
cat > server_openssl.cnf <<EOF
[ server_ext ]
basicConstraints = critical, CA:false
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
subjectAltName = @alt_names
[ alt_names ]
DNS.1 = ${SERVER_CN}
IP.1 = 127.0.0.1
EOF
# Client extensions (for mTLS client auth)
cat > client_openssl.cnf <<'EOF'
[ client_ext ]
basicConstraints = critical, CA:false
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
EOF
echo "Create CA key, CSR, and self-signed CA cert"
if [ ! -f ca.key.pem ]; then
openssl genrsa -out ca.key.pem 4096
chmod 600 ca.key.pem
fi
openssl req -new -key ca.key.pem -subj "$CA_SUBJ" -out ca.csr.pem
openssl x509 -req -in ca.csr.pem -signkey ca.key.pem \
-sha256 -days "$CA_DAYS" \
-extfile ca_openssl.cnf -extensions ca_ext \
-out ca.cert.pem
echo "Create server key (NO PASSWORD) and cert signed by CA"
openssl genrsa -out server.key.pem 2048
chmod 600 server.key.pem
openssl req -new -key server.key.pem -subj "/CN=${SERVER_CN}" -out server.csr.pem
openssl x509 -req -in server.csr.pem \
-CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial \
-out server.cert.pem -days "$SERVER_DAYS" -sha256 \
-extfile server_openssl.cnf -extensions server_ext
echo "Create client key (PASSWORD-PROTECTED), CSR, and cert"
openssl genrsa -aes256 -passout pass:"$CLIENT_KEY_PASS" -out client.key.pem 2048
chmod 600 client.key.pem
openssl req -new -key client.key.pem -passin pass:"$CLIENT_KEY_PASS" \
-subj "/CN=${CLIENT_CN}" -out client.csr.pem
openssl x509 -req -in client.csr.pem \
-CA ca.cert.pem -CAkey ca.key.pem -CAcreateserial \
-out client.cert.pem -days "$CLIENT_DAYS" -sha256 \
-extfile client_openssl.cnf -extensions client_ext
echo "Verify chain"
openssl verify -CAfile ca.cert.pem server.cert.pem client.cert.pem
echo "Done"
echo
echo "Summary"
echo "-------"
printf "%-22s %s\n" \
"CA key:" "ca.key.pem" \
"CA cert:" "ca.cert.pem" \
"Server key:" "server.key.pem (no password)" \
"Server CSR:" "server.csr.pem" \
"Server cert:" "server.cert.pem (SAN: ${SERVER_CN}, 127.0.0.1)" \
"Client key:" "client.key.pem (encrypted)" \
"Client CSR:" "client.csr.pem" \
"Client cert:" "client.cert.pem" \
"Client key password:" "$CLIENT_KEY_PASS"
echo
echo "Test rclone server"
echo
echo "rclone serve http -vv --addr :8080 --cert ${OUT_DIR}/server.cert.pem --key ${OUT_DIR}/server.key.pem --client-ca ${OUT_DIR}/ca.cert.pem ."
echo
echo "Test rclone client"
echo
echo "rclone lsf :http: --http-url 'https://localhost:8080' --ca-cert ${OUT_DIR}/ca.cert.pem --client-cert ${OUT_DIR}/client.cert.pem --client-key ${OUT_DIR}/client.key.pem --client-pass \$(rclone obscure $CLIENT_KEY_PASS)"
echo


@@ -1,159 +0,0 @@
//go:build ignore
package main
import (
"bytes"
"cmp"
"context"
"encoding/json"
"flag"
"fmt"
"os"
"path/filepath"
"slices"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest/runs"
"github.com/stretchr/testify/assert/yaml"
)
var path = flag.String("path", "./docs/content/", "root path")
const (
configFile = "fstest/test_all/config.yaml"
startListIgnores = "<!--- start list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListIgnores = "<!--- end list_ignores - DO NOT EDIT THIS SECTION - use make commanddocs --->"
startListFailures = "<!--- start list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
endListFailures = "<!--- end list_failures - DO NOT EDIT THIS SECTION - use make commanddocs --->"
integrationTestsJSONURL = "https://pub.rclone.org/integration-tests/current/index.json"
integrationTestsHTMLURL = "https://pub.rclone.org/integration-tests/current/"
)
func main() {
err := replaceBetween(*path, startListIgnores, endListIgnores, getIgnores)
if err != nil {
fs.Errorf(*path, "error replacing ignores: %v", err)
}
err = replaceBetween(*path, startListFailures, endListFailures, getFailures)
if err != nil {
fs.Errorf(*path, "error replacing failures: %v", err)
}
}
// replaceBetween replaces the text between startSep and endSep with fn()
func replaceBetween(path, startSep, endSep string, fn func() (string, error)) error {
b, err := os.ReadFile(filepath.Join(path, "bisync.md"))
if err != nil {
return err
}
doc := string(b)
before, after, found := strings.Cut(doc, startSep)
if !found {
return fmt.Errorf("could not find: %v", startSep)
}
_, after, found = strings.Cut(after, endSep)
if !found {
return fmt.Errorf("could not find: %v", endSep)
}
replaceSection, err := fn()
if err != nil {
return err
}
newDoc := before + startSep + "\n" + strings.TrimSpace(replaceSection) + "\n" + endSep + after
err = os.WriteFile(filepath.Join(path, "bisync.md"), []byte(newDoc), 0777)
if err != nil {
return err
}
return nil
}
// getIgnores updates the list of ignores from config.yaml
func getIgnores() (string, error) {
config, err := parseConfig()
if err != nil {
return "", fmt.Errorf("failed to parse config: %v", err)
}
s := ""
slices.SortFunc(config.Backends, func(a, b runs.Backend) int {
return cmp.Compare(a.Remote, b.Remote)
})
for _, backend := range config.Backends {
include := false
if slices.Contains(backend.IgnoreTests, "cmd/bisync") {
include = true
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
for _, ignore := range backend.Ignore {
if strings.Contains(strings.ToLower(ignore), "bisync") {
if !include { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(backend.Remote, ":"), backend.Backend)
}
include = true
s += fmt.Sprintf(" - `%s`\n", ignore)
// TODO: might be neat to add a "reason" param displaying the reason the test is ignored
}
}
}
return s, nil
}
// getFailures updates the list of currently failing tests from the integration tests server
func getFailures() (string, error) {
var buf bytes.Buffer
err := operations.CopyURLToWriter(context.Background(), integrationTestsJSONURL, &buf)
if err != nil {
return "", err
}
r := runs.Report{}
err = json.Unmarshal(buf.Bytes(), &r)
if err != nil {
return "", fmt.Errorf("failed to unmarshal json: %v", err)
}
s := ""
for _, run := range r.Failed {
for i, t := range run.FailedTests {
if strings.Contains(strings.ToLower(t), "bisync") {
if i == 0 { // don't have header row yet
s += fmt.Sprintf("- `%s` (`%s`)\n", strings.TrimSuffix(run.Remote, ":"), run.Backend)
}
url := integrationTestsHTMLURL + run.TrialName
url = url[:len(url)-5] + "1.txt" // numbers higher than 1 could change from night to night
s += fmt.Sprintf(" - [`%s`](%v)\n", t, url)
if i == 4 && len(run.FailedTests) > 5 { // stop after 5
s += fmt.Sprintf(" - [%v more](%v)\n", len(run.FailedTests)-5, integrationTestsHTMLURL)
break
}
}
}
}
s += fmt.Sprintf("- Updated: %v", r.DateTime)
return s, nil
}
// parseConfig reads and parses the config.yaml file
func parseConfig() (*runs.Config, error) {
d, err := os.ReadFile(configFile)
if err != nil {
return nil, fmt.Errorf("failed to read config file: %w", err)
}
config := &runs.Config{}
err = yaml.Unmarshal(d, &config)
if err != nil {
return nil, fmt.Errorf("failed to parse config file: %w", err)
}
return config, nil
}
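replaceBetween only rewrites the text between a start/end marker pair, so the rest of the document survives regeneration. An illustrative call, with a hypothetical generator in place of getIgnores:

```go
// Illustrative only: regenerate the ignores section of bisync.md.
func regenerate() error {
	return replaceBetween("./docs/content/", startListIgnores, endListIgnores,
		func() (string, error) {
			return "- `TestExample:` (`example`)", nil // hypothetical content
		})
}
```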


@@ -57,11 +57,11 @@ def make_out(data, indent=""):
return return
del(data[category]) del(data[category])
if indent != "" and len(lines) == 1: if indent != "" and len(lines) == 1:
out_lines.append(indent+"- " + title+": " + lines[0]) out_lines.append(indent+"* " + title+": " + lines[0])
return return
out_lines.append(indent+"- " + title) out_lines.append(indent+"* " + title)
for line in lines: for line in lines:
out_lines.append(indent+" - " + line) out_lines.append(indent+" * " + line)
return out, out_lines return out, out_lines
@@ -129,12 +129,12 @@ def main():
new_features[name].append(message) new_features[name].append(message)
# Output new features # Output new features
out, new_features_lines = make_out(new_features, indent=" ") out, new_features_lines = make_out(new_features, indent=" ")
for name in sorted(new_features.keys()): for name in sorted(new_features.keys()):
out(name) out(name)
# Output bugfixes # Output bugfixes
out, bugfix_lines = make_out(bugfixes, indent=" ") out, bugfix_lines = make_out(bugfixes, indent=" ")
for name in sorted(bugfixes.keys()): for name in sorted(bugfixes.keys()):
out(name) out(name)
@@ -163,15 +163,15 @@ def main():
[See commits](https://github.com/rclone/rclone/compare/%(version)s...%(next_version)s) [See commits](https://github.com/rclone/rclone/compare/%(version)s...%(next_version)s)
- New backends * New backends
- New commands * New commands
- New Features * New Features
%(new_features)s %(new_features)s
- Bug Fixes * Bug Fixes
%(bugfixes)s %(bugfixes)s
%(backend_changes)s""" % locals()) %(backend_changes)s""" % locals())
sys.stdout.write(old_tail) sys.stdout.write(old_tail)
if __name__ == "__main__": if __name__ == "__main__":
main() main()


@@ -1,17 +0,0 @@
#!/usr/bin/env bash
#
# Run markdown linting locally
set -e
# Workflow
build=.github/workflows/build.yml
# Globs read from $build
globs=$(awk '/- name: Check Markdown format/{f=1;next} f && /globs:/{f=2;next} f==2 && NF{if($1=="-"){exit} print $0}' $build)
if [ -z "$globs" ]; then
echo "Error: No globs found in Check Markdown step in $build" >&2
exit 1
fi
docker run -v $PWD:/workdir --user $(id -u):$(id -g) davidanson/markdownlint-cli2 $globs


@@ -23,7 +23,7 @@ def add_email(name, email):
""" """
print("Adding %s <%s>" % (name, email)) print("Adding %s <%s>" % (name, email))
with open(AUTHORS, "a+") as fd: with open(AUTHORS, "a+") as fd:
print("- %s <%s>" % (name, email), file=fd) print(" * %s <%s>" % (name, email), file=fd)
subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS]) subprocess.check_call(["git", "commit", "-m", "Add %s to contributors" % name, AUTHORS])
def main(): def main():


@@ -51,52 +51,47 @@ output. The output is typically used, free, quota and trash contents.
E.g. Typical output from ` + "`rclone about remote:`" + ` is: E.g. Typical output from ` + "`rclone about remote:`" + ` is:
` + "```text" + ` Total: 17 GiB
Total: 17 GiB Used: 7.444 GiB
Used: 7.444 GiB Free: 1.315 GiB
Free: 1.315 GiB Trashed: 100.000 MiB
Trashed: 100.000 MiB Other: 8.241 GiB
Other: 8.241 GiB
` + "```" + `
Where the fields are: Where the fields are:
- Total: Total size available. * Total: Total size available.
- Used: Total size used. * Used: Total size used.
- Free: Total space available to this user. * Free: Total space available to this user.
- Trashed: Total space used by trash. * Trashed: Total space used by trash.
- Other: Total amount in other storage (e.g. Gmail, Google Photos). * Other: Total amount in other storage (e.g. Gmail, Google Photos).
- Objects: Total number of objects in the storage. * Objects: Total number of objects in the storage.
All sizes are in number of bytes. All sizes are in number of bytes.
Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g. Applying a ` + "`--full`" + ` flag to the command prints the bytes in full, e.g.
` + "```text" + ` Total: 18253611008
Total: 18253611008 Used: 7993453766
Used: 7993453766 Free: 1411001220
Free: 1411001220 Trashed: 104857602
Trashed: 104857602 Other: 8849156022
Other: 8849156022
` + "```" + `
A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g. A ` + "`--json`" + ` flag generates conveniently machine-readable output, e.g.
` + "```json" + ` {
{ "total": 18253611008,
"total": 18253611008, "used": 7993453766,
"used": 7993453766, "trashed": 104857602,
"trashed": 104857602, "other": 8849156022,
"other": 8849156022, "free": 1411001220
"free": 1411001220 }
}
` + "```" + `
Not all backends print all fields. Information is not included if it is not Not all backends print all fields. Information is not included if it is not
provided by a backend. Where the value is unlimited it is omitted. provided by a backend. Where the value is unlimited it is omitted.
Some backends do not support the ` + "`rclone about`" + ` command at all, Some backends do not support the ` + "`rclone about`" + ` command at all,
see complete list in [documentation](https://rclone.org/overview/#optional-features).`, see complete list in [documentation](https://rclone.org/overview/#optional-features).
`,
Annotations: map[string]string{ Annotations: map[string]string{
"versionIntroduced": "v1.41", "versionIntroduced": "v1.41",
// "groups": "", // "groups": "",


@@ -30,16 +30,14 @@ rclone from a machine with a browser - use as instructed by
 rclone config.
 
 The command requires 1-3 arguments:
-
 - fs name (e.g., "drive", "s3", etc.)
 - Either a base64 encoded JSON blob obtained from a previous rclone config session
 - Or a client_id and client_secret pair obtained from the remote service
 
 Use --auth-no-open-browser to prevent rclone to open auth
 link in default browser automatically.
 
-Use --template to generate HTML output via a custom Go template. If a blank
-string is provided as an argument to this flag, the default template is used.`,
+Use --template to generate HTML output via a custom Go template. If a blank string is provided as an argument to this flag, the default template is used.`,
 Annotations: map[string]string{
     "versionIntroduced": "v1.27",
 },

View File

@@ -37,33 +37,26 @@ see the backend docs for definitions.
 You can discover what commands a backend implements by using
 
-` + "```sh" + `
-rclone backend help remote:
-rclone backend help <backendname>
-` + "```" + `
+    rclone backend help remote:
+    rclone backend help <backendname>
 
 You can also discover information about the backend using (see
 [operations/fsinfo](/rc/#operations-fsinfo) in the remote control docs
 for more info).
 
-` + "```sh" + `
-rclone backend features remote:
-` + "```" + `
+    rclone backend features remote:
 
 Pass options to the backend command with -o. This should be key=value or key, e.g.:
 
-` + "```sh" + `
-rclone backend stats remote:path stats -o format=json -o long
-` + "```" + `
+    rclone backend stats remote:path stats -o format=json -o long
 
 Pass arguments to the backend by placing them on the end of the line
 
-` + "```sh" + `
-rclone backend cleanup remote:path file1 file2 file3
-` + "```" + `
+    rclone backend cleanup remote:path file1 file2 file3
 
 Note to run these commands on a running backend then see
-[backend/command](/rc/#backend-command) in the rc docs.`,
+[backend/command](/rc/#backend-command) in the rc docs.
+`,
 Annotations: map[string]string{
     "versionIntroduced": "v1.52",
     "groups":            "Important",

View File

@@ -176,8 +176,6 @@ var (
     // Flag -refresh-times helps with Dropbox tests failing with message
     // "src and dst identical but can't set mod time without deleting and re-uploading"
     argRefreshTimes = flag.Bool("refresh-times", false, "Force refreshing the target modtime, useful for Dropbox (default: false)")
-    ignoreLogs      = flag.Bool("ignore-logs", false, "skip comparing log lines but still compare listings")
-    argPCount       = flag.Int("pcount", 2, "number of parallel subtests to run for TestBisyncConcurrent") // go test ./cmd/bisync -race -pcount 10
 )
// bisyncTest keeps all test data in a single place
@@ -228,18 +226,6 @@ var color = bisync.Color
 
 // TestMain drives the tests
 func TestMain(m *testing.M) {
-    bisync.LogTZ = time.UTC
-    ci := fs.GetConfig(context.TODO())
-    ciSave := *ci
-    defer func() {
-        *ci = ciSave
-    }()
-    // need to set context.TODO() here as we cannot pass a ctx to fs.LogLevelPrintf
-    ci.LogLevel = fs.LogLevelInfo
-    if *argDebug {
-        ci.LogLevel = fs.LogLevelDebug
-    }
-    fstest.Initialise()
     fstest.TestMain(m)
 }
@@ -252,8 +238,7 @@ func TestBisyncRemoteLocal(t *testing.T) {
     fs.Logf(nil, "remote: %v", remote)
     require.NoError(t, err)
    defer cleanup()
-    ctx, _ := fs.AddConfig(context.TODO())
-    testBisync(ctx, t, remote, *argRemote2)
+    testBisync(t, remote, *argRemote2)
 }
// Path1 is local, Path2 is remote // Path1 is local, Path2 is remote
@@ -265,8 +250,7 @@ func TestBisyncLocalRemote(t *testing.T) {
     fs.Logf(nil, "remote: %v", remote)
     require.NoError(t, err)
     defer cleanup()
-    ctx, _ := fs.AddConfig(context.TODO())
-    testBisync(ctx, t, *argRemote2, remote)
+    testBisync(t, *argRemote2, remote)
 }
// Path1 and Path2 are both different directories on remote // Path1 and Path2 are both different directories on remote
@@ -276,44 +260,14 @@ func TestBisyncRemoteRemote(t *testing.T) {
     fs.Logf(nil, "remote: %v", remote)
     require.NoError(t, err)
     defer cleanup()
-    ctx, _ := fs.AddConfig(context.TODO())
-    testBisync(ctx, t, remote, remote)
-}
-
-// make sure rc can cope with running concurrent jobs
-func TestBisyncConcurrent(t *testing.T) {
-    if !isLocal(*fstest.RemoteName) {
-        t.Skip("TestBisyncConcurrent is skipped on non-local")
-    }
-    if *argTestCase != "" && *argTestCase != "basic" {
-        t.Skip("TestBisyncConcurrent only tests 'basic'")
-    }
-    if *argPCount < 2 {
-        t.Skip("TestBisyncConcurrent is pointless with -pcount < 2")
-    }
-    if *argGolden {
-        t.Skip("skip TestBisyncConcurrent when goldenizing")
-    }
-    oldArgTestCase := argTestCase
-    *argTestCase = "basic"
-    *ignoreLogs = true // not useful to compare logs here because both runs will be logging at once
-    t.Cleanup(func() {
-        argTestCase = oldArgTestCase
-        *ignoreLogs = false
-    })
-    for i := 0; i < *argPCount; i++ {
-        t.Run(fmt.Sprintf("test%v", i), testParallel)
-    }
-}
-
-func testParallel(t *testing.T) {
-    t.Parallel()
-    TestBisyncRemoteRemote(t)
+    testBisync(t, remote, remote)
 }
 
 // TestBisync is a test engine for bisync test cases.
-func testBisync(ctx context.Context, t *testing.T, path1, path2 string) {
+func testBisync(t *testing.T, path1, path2 string) {
+    ctx := context.Background()
+    fstest.Initialise()
     ci := fs.GetConfig(ctx)
     ciSave := *ci
     defer func() {
@@ -322,10 +276,9 @@ func testBisync(ctx context.Context, t *testing.T, path1, path2 string) {
     if *argRefreshTimes {
         ci.RefreshTimes = true
     }
-    bisync.ColorsLock.Lock()
     bisync.Colors = true
-    bisync.ColorsLock.Unlock()
-    ci.FsCacheExpireDuration = fs.Duration(5 * time.Hour)
+    time.Local = bisync.TZ
+    ci.FsCacheExpireDuration = 5 * time.Hour
     baseDir, err := os.Getwd()
     require.NoError(t, err, "get current directory")
@@ -476,7 +429,6 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
     // Prepare initial content
     b.cleanupCase(ctx)
-    ctx = accounting.WithStatsGroup(ctx, random.String(8))
 
     fstest.CheckListingWithPrecision(b.t, b.fs1, []fstest.Item{}, []string{}, b.fs1.Precision()) // verify starting from empty
     fstest.CheckListingWithPrecision(b.t, b.fs2, []fstest.Item{}, []string{}, b.fs2.Precision())
     initFs, err := cache.Get(ctx, b.initDir)
@@ -611,15 +563,11 @@ func (b *bisyncTest) runTestCase(ctx context.Context, t *testing.T, testCase str
     }
 }
 
-func isLocal(remote string) bool {
-    return bilib.IsLocalPath(remote) && !strings.HasPrefix(remote, ":") && !strings.Contains(remote, ",")
-}
-
 // makeTempRemote creates temporary folder and makes a filesystem
 // if a local path is provided, it's ignored (the test will run under system temp)
 func (b *bisyncTest) makeTempRemote(ctx context.Context, remote, subdir string) (f, parent fs.Fs, path, canon string) {
     var err error
-    if isLocal(remote) {
+    if bilib.IsLocalPath(remote) && !strings.HasPrefix(remote, ":") && !strings.Contains(remote, ",") {
         if remote != "" && !strings.HasPrefix(remote, "local") && *fstest.RemoteName != "" {
             b.t.Fatalf(`Missing ":" in remote %q. Use "local" to test with local filesystem.`, remote)
         }
@@ -650,14 +598,20 @@ func (b *bisyncTest) makeTempRemote(ctx context.Context, remote, subdir string)
 }
 
 func (b *bisyncTest) cleanupCase(ctx context.Context) {
-    _ = operations.Purge(ctx, b.fs1, "")
-    _ = operations.Purge(ctx, b.fs2, "")
+    // Silence "directory not found" errors from the ftp backend
+    _ = bilib.CaptureOutput(func() {
+        _ = operations.Purge(ctx, b.fs1, "")
+    })
+    _ = bilib.CaptureOutput(func() {
+        _ = operations.Purge(ctx, b.fs2, "")
+    })
     _ = os.RemoveAll(b.workDir)
+    accounting.Stats(ctx).ResetCounters()
 }
 
 func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
     var fsrc, fdst fs.Fs
-    ctx = accounting.WithStatsGroup(ctx, random.String(8))
+    accounting.Stats(ctx).ResetErrors()
     b.logPrintf("%s %s", color(terminal.CyanFg, b.stepStr), color(terminal.BlueFg, line))
     ci := fs.GetConfig(ctx)
@@ -665,6 +619,11 @@ func (b *bisyncTest) runTestStep(ctx context.Context, line string) (err error) {
     defer func() {
         *ci = ciSave
     }()
 
+    ci.LogLevel = fs.LogLevelInfo
+    if b.debug {
+        ci.LogLevel = fs.LogLevelDebug
+    }
+
     testFunc := func() {
         src := filepath.Join(b.dataDir, "file7.txt")
@@ -994,12 +953,6 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
         b.fs2.Features().Disable("Copy") // API has longstanding bug for conflictBehavior=replace https://github.com/rclone/rclone/issues/4590
         b.fs2.Features().Disable("Move")
     }
-    if strings.HasPrefix(b.fs1.String(), "sftp") {
-        b.fs1.Features().Disable("Copy") // disable --sftp-copy-is-hardlink as hardlinks are not truly copies
-    }
-    if strings.HasPrefix(b.fs2.String(), "sftp") {
-        b.fs2.Features().Disable("Copy") // disable --sftp-copy-is-hardlink as hardlinks are not truly copies
-    }
     if strings.Contains(strings.ToLower(fs.ConfigString(b.fs1)), "mailru") || strings.Contains(strings.ToLower(fs.ConfigString(b.fs2)), "mailru") {
         fs.GetConfig(ctx).TPSLimit = 10 // https://github.com/rclone/rclone/issues/7768#issuecomment-2060888980
     }
@@ -1018,33 +971,21 @@ func (b *bisyncTest) checkPreReqs(ctx context.Context, opt *bisync.Options) (con
     }
 
     // test if modtimes are writeable
     testSetModtime := func(f fs.Fs) {
-        ctx := accounting.WithStatsGroup(ctx, random.String(8)) // keep stats separate
         in := bytes.NewBufferString("modtime_write_test")
         objinfo := object.NewStaticObjectInfo("modtime_write_test", initDate, int64(len("modtime_write_test")), true, nil, nil)
         obj, err := f.Put(ctx, in, objinfo)
         require.NoError(b.t, err)
-        if !f.Features().IsLocal {
-            time.Sleep(time.Second) // avoid GoogleCloudStorage Error 429 rateLimitExceeded
-        }
         err = obj.SetModTime(ctx, initDate)
         if err == fs.ErrorCantSetModTime {
-            b.t.Skip("skipping test as at least one remote does not support setting modtime")
-        }
-        if err == fs.ErrorCantSetModTimeWithoutDelete { // transfers stats expected to differ on this backend
-            logReplacements = append(logReplacements, `^.*There was nothing to transfer.*$`, dropMe)
-        } else {
-            require.NoError(b.t, err)
-        }
-        if !f.Features().IsLocal {
-            time.Sleep(time.Second) // avoid GoogleCloudStorage Error 429 rateLimitExceeded
+            if b.testCase != "nomodtime" {
+                b.t.Skip("skipping test as at least one remote does not support setting modtime")
+            }
         }
         err = obj.Remove(ctx)
         require.NoError(b.t, err)
     }
-    if b.testCase != "nomodtime" {
-        testSetModtime(b.fs1)
-        testSetModtime(b.fs2)
-    }
+    testSetModtime(b.fs1)
+    testSetModtime(b.fs2)
 
     if b.testCase == "normalization" || b.testCase == "extended_char_paths" || b.testCase == "extended_filenames" {
         // test whether remote is capable of running test
@@ -1488,9 +1429,6 @@ func (b *bisyncTest) compareResults() int {
     resultText := b.mangleResult(b.workDir, file, false)
 
     if fileType(file) == "log" {
-        if *ignoreLogs {
-            continue
-        }
         // save mangled logs so difference is easier on eyes
         goldenFile := filepath.Join(b.logDir, "mangled.golden.log")
         resultFile := filepath.Join(b.logDir, "mangled.result.log")
@@ -1636,14 +1574,6 @@ func (b *bisyncTest) mangleResult(dir, file string, golden bool) string {
         `^.*not equal on recheck.*$`, dropMe,
     )
     }
-    if b.ignoreBlankHash || !b.fs1.Hashes().Contains(hash.MD5) || !b.fs2.Hashes().Contains(hash.MD5) {
-        // if either side lacks support for md5, need to ignore the "nothing to transfer" log,
-        // as sync may in fact need to transfer, where it would otherwise skip based on hash or just update modtime.
-        // transfer stats will also differ in fs.ErrorCantSetModTimeWithoutDelete scenario, and where --download-hash is needed.
-        logReplacements = append(logReplacements,
-            `^.*There was nothing to transfer.*$`, dropMe,
-        )
-    }
     rep := logReplacements
     if b.testCase == "dry_run" {
         rep = append(rep, dryrunReplacements...)
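
(Aside: the TestBisyncConcurrent test removed above builds on Go's stock parallel-subtest machinery. A minimal sketch of that pattern, illustrative rather than rclone's code:)

    package example

    import (
        "fmt"
        "testing"
    )

    // Each t.Run subtest calls t.Parallel, so all pcount copies of the
    // body run concurrently and the parent test waits for them to finish.
    func TestConcurrent(t *testing.T) {
        const pcount = 2
        for i := 0; i < pcount; i++ {
            t.Run(fmt.Sprintf("test%v", i), func(t *testing.T) {
                t.Parallel()
                // body under test goes here
            })
        }
    }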

View File

@@ -16,17 +16,15 @@ import (
     "github.com/rclone/rclone/fs/operations"
 )
 
-type bisyncCheck = struct {
-    hashType   hash.Type
-    fsrc, fdst fs.Fs
-    fcrypt     *crypt.Fs
-}
+var hashType hash.Type
+var fsrc, fdst fs.Fs
+var fcrypt *crypt.Fs
 
 // WhichCheck determines which CheckFn we should use based on the Fs types
 // It is more robust and accurate than Check because
 // it will fallback to CryptCheck or DownloadCheck instead of --size-only!
 // it returns the *operations.CheckOpt with the CheckFn set.
-func (b *bisyncRun) WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.CheckOpt {
+func WhichCheck(ctx context.Context, opt *operations.CheckOpt) *operations.CheckOpt {
     ci := fs.GetConfig(ctx)
     common := opt.Fsrc.Hashes().Overlap(opt.Fdst.Hashes())
@@ -42,32 +40,32 @@ func (b *bisyncRun) WhichCheck(ctx context.Context, opt *operations.CheckOpt) *o
     if (srcIsCrypt && dstIsCrypt) || (!srcIsCrypt && dstIsCrypt) {
         // if both are crypt or only dst is crypt
-        b.check.hashType = FdstCrypt.UnWrap().Hashes().GetOne()
-        if b.check.hashType != hash.None {
+        hashType = FdstCrypt.UnWrap().Hashes().GetOne()
+        if hashType != hash.None {
             // use cryptcheck
-            b.check.fsrc = opt.Fsrc
-            b.check.fdst = opt.Fdst
-            b.check.fcrypt = FdstCrypt
-            fs.Infof(b.check.fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
-            opt.Check = b.CryptCheckFn
+            fsrc = opt.Fsrc
+            fdst = opt.Fdst
+            fcrypt = FdstCrypt
+            fs.Infof(fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
+            opt.Check = CryptCheckFn
             return opt
         }
     } else if srcIsCrypt && !dstIsCrypt {
         // if only src is crypt
-        b.check.hashType = FsrcCrypt.UnWrap().Hashes().GetOne()
-        if b.check.hashType != hash.None {
+        hashType = FsrcCrypt.UnWrap().Hashes().GetOne()
+        if hashType != hash.None {
             // use reverse cryptcheck
-            b.check.fsrc = opt.Fdst
-            b.check.fdst = opt.Fsrc
-            b.check.fcrypt = FsrcCrypt
-            fs.Infof(b.check.fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
-            opt.Check = b.ReverseCryptCheckFn
+            fsrc = opt.Fdst
+            fdst = opt.Fsrc
+            fcrypt = FsrcCrypt
+            fs.Infof(fdst, "Crypt detected! Using cryptcheck instead of check. (Use --size-only or --ignore-checksum to disable)")
+            opt.Check = ReverseCryptCheckFn
             return opt
         }
     }
 
     // if we've gotten this far, neither check or cryptcheck will work, so use --download
-    fs.Infof(b.check.fdst, "Can't compare hashes, so using check --download for safety. (Use --size-only or --ignore-checksum to disable)")
+    fs.Infof(fdst, "Can't compare hashes, so using check --download for safety. (Use --size-only or --ignore-checksum to disable)")
     opt.Check = DownloadCheckFn
     return opt
 }
@@ -90,17 +88,17 @@ func CheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool,
 }
 
 // CryptCheckFn is a slightly modified version of CryptCheck
-func (b *bisyncRun) CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
+func CryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
     cryptDst := dst.(*crypt.Object)
     underlyingDst := cryptDst.UnWrap()
-    underlyingHash, err := underlyingDst.Hash(ctx, b.check.hashType)
+    underlyingHash, err := underlyingDst.Hash(ctx, hashType)
     if err != nil {
         return true, false, fmt.Errorf("error reading hash from underlying %v: %w", underlyingDst, err)
     }
     if underlyingHash == "" {
         return false, true, nil
     }
-    cryptHash, err := b.check.fcrypt.ComputeHash(ctx, cryptDst, src, b.check.hashType)
+    cryptHash, err := fcrypt.ComputeHash(ctx, cryptDst, src, hashType)
     if err != nil {
         return true, false, fmt.Errorf("error computing hash: %w", err)
     }
@@ -108,10 +106,10 @@ func (b *bisyncRun) CryptCheckFn(ctx context.Context, dst, src fs.Object) (diffe
     return false, true, nil
     }
     if cryptHash != underlyingHash {
-        err = fmt.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", b.check.fdst.Name(), b.check.fdst.Root(), cryptHash, b.check.fsrc.Name(), b.check.fsrc.Root(), underlyingHash)
+        err = fmt.Errorf("hashes differ (%s:%s) %q vs (%s:%s) %q", fdst.Name(), fdst.Root(), cryptHash, fsrc.Name(), fsrc.Root(), underlyingHash)
         fs.Debugf(src, "%s", err.Error())
         // using same error msg as CheckFn so integration tests match
-        err = fmt.Errorf("%v differ", b.check.hashType)
+        err = fmt.Errorf("%v differ", hashType)
         fs.Errorf(src, "%s", err.Error())
         return true, false, nil
     }
@@ -120,8 +118,8 @@ func (b *bisyncRun) CryptCheckFn(ctx context.Context, dst, src fs.Object) (diffe
 // ReverseCryptCheckFn is like CryptCheckFn except src and dst are switched
 // result: src is crypt, dst is non-crypt
-func (b *bisyncRun) ReverseCryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
-    return b.CryptCheckFn(ctx, src, dst)
+func ReverseCryptCheckFn(ctx context.Context, dst, src fs.Object) (differ bool, noHash bool, err error) {
+    return CryptCheckFn(ctx, src, dst)
 }
 
 // DownloadCheckFn is a slightly modified version of Check with --download
@@ -139,7 +137,7 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
     if filterCheck.HaveFilesFrom() {
         fs.Debugf(nil, "There are potential conflicts to check.")
-        opt, close, checkopterr := check.GetCheckOpt(fs1, fs2)
+        opt, close, checkopterr := check.GetCheckOpt(b.fs1, b.fs2)
         if checkopterr != nil {
             b.critical = true
             b.retryable = true
@@ -150,16 +148,16 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
     opt.Match = new(bytes.Buffer)
-    opt = b.WhichCheck(ctxCheck, opt)
+    opt = WhichCheck(ctxCheck, opt)
 
     fs.Infof(nil, "Checking potential conflicts...")
     check := operations.CheckFn(ctxCheck, opt)
     fs.Infof(nil, "Finished checking the potential conflicts. %s", check)
 
-    // reset error count, because we don't want to count check errors as bisync errors
+    //reset error count, because we don't want to count check errors as bisync errors
     accounting.Stats(ctxCheck).ResetErrors()
 
-    // return the list of identical files to check against later
+    //return the list of identical files to check against later
     if len(fmt.Sprint(opt.Match)) > 0 {
         matches = bilib.ToNames(strings.Split(fmt.Sprint(opt.Match), "\n"))
     }
@@ -175,14 +173,14 @@ func (b *bisyncRun) checkconflicts(ctxCheck context.Context, filterCheck *filter
 // WhichEqual is similar to WhichCheck, but checks a single object.
 // Returns true if the objects are equal, false if they differ or if we don't know
-func (b *bisyncRun) WhichEqual(ctx context.Context, src, dst fs.Object, Fsrc, Fdst fs.Fs) bool {
+func WhichEqual(ctx context.Context, src, dst fs.Object, Fsrc, Fdst fs.Fs) bool {
     opt, close, checkopterr := check.GetCheckOpt(Fsrc, Fdst)
     if checkopterr != nil {
         fs.Debugf(nil, "GetCheckOpt error: %v", checkopterr)
     }
     defer close()
-    opt = b.WhichCheck(ctx, opt)
+    opt = WhichCheck(ctx, opt)
     differ, noHash, err := opt.Check(ctx, dst, src)
     if err != nil {
         fs.Errorf(src, "failed to check: %v", err)
@@ -219,7 +217,7 @@ func (b *bisyncRun) EqualFn(ctx context.Context) context.Context {
     equal, skipHash = timeSizeEqualFn()
     if equal && !skipHash {
         whichHashType := func(f fs.Info) hash.Type {
-            ht := b.getHashType(f.Name())
+            ht := getHashType(f.Name())
             if ht == hash.None && b.opt.Compare.SlowHashSyncOnly && !b.opt.Resync {
                 ht = f.Hashes().GetOne()
             }
@@ -227,9 +225,9 @@ func (b *bisyncRun) EqualFn(ctx context.Context) context.Context {
         }
         srcHash, _ := src.Hash(ctx, whichHashType(src.Fs()))
         dstHash, _ := dst.Hash(ctx, whichHashType(dst.Fs()))
-        srcHash, _ = b.tryDownloadHash(ctx, src, srcHash)
-        dstHash, _ = b.tryDownloadHash(ctx, dst, dstHash)
-        equal = !b.hashDiffers(srcHash, dstHash, whichHashType(src.Fs()), whichHashType(dst.Fs()), src.Size(), dst.Size())
+        srcHash, _ = tryDownloadHash(ctx, src, srcHash)
+        dstHash, _ = tryDownloadHash(ctx, dst, dstHash)
+        equal = !hashDiffers(srcHash, dstHash, whichHashType(src.Fs()), whichHashType(dst.Fs()), src.Size(), dst.Size())
     }
     if equal {
         logger(ctx, operations.Match, src, dst, nil)
@@ -249,7 +247,7 @@ func (b *bisyncRun) resyncTimeSizeEqual(ctxNoLogger context.Context, src fs.Obje
     // note that arg order is path1, path2, regardless of src/dst
     path1, path2 := b.resyncWhichIsWhich(src, dst)
     if sizeDiffers(path1.Size(), path2.Size()) {
-        winningPath := b.resolveLargerSmaller(path1.Size(), path2.Size(), path1.Remote(), b.opt.ResyncMode)
+        winningPath := b.resolveLargerSmaller(path1.Size(), path2.Size(), path1.Remote(), path2.Remote(), b.opt.ResyncMode)
         // don't need to check/update modtime here, as sizes definitely differ and something will be transferred
         return b.resyncWinningPathToEqual(winningPath), b.resyncWinningPathToEqual(winningPath) // skip hash check if true
     }
@@ -259,7 +257,7 @@ func (b *bisyncRun) resyncTimeSizeEqual(ctxNoLogger context.Context, src fs.Obje
     // note that arg order is path1, path2, regardless of src/dst
     path1, path2 := b.resyncWhichIsWhich(src, dst)
     if timeDiffers(ctxNoLogger, path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Fs(), path2.Fs()) {
-        winningPath := b.resolveNewerOlder(path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Remote(), b.opt.ResyncMode)
+        winningPath := b.resolveNewerOlder(path1.ModTime(ctxNoLogger), path2.ModTime(ctxNoLogger), path1.Remote(), path2.Remote(), b.opt.ResyncMode)
         // if src is winner, proceed with equal to check size/hash and possibly just update dest modtime instead of transferring
         if !b.resyncWinningPathToEqual(winningPath) {
             return operations.Equal(ctxNoLogger, src, dst), false // note we're back to src/dst, not path1/path2
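
(Aside: the running theme in this file's changes is moving package-level state such as hashType, fsrc, fdst, and fcrypt onto the per-run struct so concurrent runs stop sharing it. Sketched generically, with illustrative names:)

    package example

    // With a package-level variable, two runs in the same process
    // overwrite each other's state:
    var sharedHashType string

    // Keeping the state on a per-run struct gives every run its own
    // copy, which is what makes concurrent runs safe:
    type run struct {
        hashType string
    }

    func (r *run) setHashType(ht string) {
        r.hashType = ht // scoped to this run only
    }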

View File

@@ -55,7 +55,7 @@ type Options struct {
     Compare            CompareOpt
     CompareFlag        string
     DebugName          string
-    MaxLock            fs.Duration
+    MaxLock            time.Duration
     ConflictResolve    Prefer
     ConflictLoser      ConflictLoserAction
     ConflictSuffixFlag string
@@ -115,7 +115,6 @@ func (x *CheckSyncMode) Type() string {
 }
 
 // Opt keeps command line options
-// internal functions should use b.opt instead
 var Opt Options
 
 func init() {
@@ -141,13 +140,13 @@ func init() {
     flags.BoolVarP(cmdFlags, &tzLocal, "localtime", "", tzLocal, "Use local time in listings (default: UTC)", "")
     flags.BoolVarP(cmdFlags, &Opt.NoCleanup, "no-cleanup", "", Opt.NoCleanup, "Retain working files (useful for troubleshooting and testing).", "")
     flags.BoolVarP(cmdFlags, &Opt.IgnoreListingChecksum, "ignore-listing-checksum", "", Opt.IgnoreListingChecksum, "Do not use checksums for listings (add --ignore-checksum to additionally skip post-copy checksum checks)", "")
-    flags.BoolVarP(cmdFlags, &Opt.Resilient, "resilient", "", Opt.Resilient, "Allow future runs to retry after certain less-serious errors, instead of requiring --resync.", "")
+    flags.BoolVarP(cmdFlags, &Opt.Resilient, "resilient", "", Opt.Resilient, "Allow future runs to retry after certain less-serious errors, instead of requiring --resync. Use at your own risk!", "")
     flags.BoolVarP(cmdFlags, &Opt.Recover, "recover", "", Opt.Recover, "Automatically recover from interruptions without requiring --resync.", "")
     flags.StringVarP(cmdFlags, &Opt.CompareFlag, "compare", "", Opt.CompareFlag, "Comma-separated list of bisync-specific compare options ex. 'size,modtime,checksum' (default: 'size,modtime')", "")
     flags.BoolVarP(cmdFlags, &Opt.Compare.NoSlowHash, "no-slow-hash", "", Opt.Compare.NoSlowHash, "Ignore listing checksums only on backends where they are slow", "")
     flags.BoolVarP(cmdFlags, &Opt.Compare.SlowHashSyncOnly, "slow-hash-sync-only", "", Opt.Compare.SlowHashSyncOnly, "Ignore slow checksums for listings and deltas, but still consider them during sync calls.", "")
     flags.BoolVarP(cmdFlags, &Opt.Compare.DownloadHash, "download-hash", "", Opt.Compare.DownloadHash, "Compute hash by downloading when otherwise unavailable. (warning: may be slow and use lots of data!)", "")
-    flags.FVarP(cmdFlags, &Opt.MaxLock, "max-lock", "", "Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m)", "")
+    flags.DurationVarP(cmdFlags, &Opt.MaxLock, "max-lock", "", Opt.MaxLock, "Consider lock files older than this to be expired (default: 0 (never expire)) (minimum: 2m)", "")
     flags.FVarP(cmdFlags, &Opt.ConflictResolve, "conflict-resolve", "", "Automatically resolve conflicts by preferring the version that is: "+ConflictResolveList+" (default: none)", "")
     flags.FVarP(cmdFlags, &Opt.ConflictLoser, "conflict-loser", "", "Action to take on the loser of a sync conflict (when there is a winner) or on both files (when there is no winner): "+ConflictLoserList+" (default: num)", "")
     flags.StringVarP(cmdFlags, &Opt.ConflictSuffixFlag, "conflict-suffix", "", Opt.ConflictSuffixFlag, "Suffix to use when renaming a --conflict-loser. Can be either one string or two comma-separated strings to assign different suffixes to Path1/Path2. (default: 'conflict')", "")
@@ -163,6 +162,7 @@ var commandDefinition = &cobra.Command{
     Annotations: map[string]string{
         "versionIntroduced": "v1.58",
         "groups":            "Filter,Copy,Important",
+        "status":            "Beta",
     },
     RunE: func(command *cobra.Command, args []string) error {
         // NOTE: avoid putting too much handling here, as it won't apply to the rc.
@@ -190,6 +190,7 @@ var commandDefinition = &cobra.Command{
             }
         }
 
+        fs.Logf(nil, "bisync is IN BETA. Don't use in production!")
         cmd.Run(false, true, command, func() error {
             err := Bisync(ctx, fs1, fs2, &opt)
             if err == ErrBisyncAborted {
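
(Aside: the MaxLock hunk above swaps rclone's fs.Duration, registered through flags.FVarP, for a plain time.Duration registered through flags.DurationVarP. With stock spf13/pflag the two registration styles look roughly like this; MyDuration is a hypothetical stand-in for a custom flag type:)

    package main

    import (
        "time"

        "github.com/spf13/pflag"
    )

    // MyDuration is a hypothetical custom type implementing pflag.Value,
    // the same role a custom type like fs.Duration plays for FVarP.
    type MyDuration time.Duration

    func (d *MyDuration) String() string { return time.Duration(*d).String() }
    func (d *MyDuration) Type() string   { return "MyDuration" }
    func (d *MyDuration) Set(s string) error {
        v, err := time.ParseDuration(s)
        *d = MyDuration(v)
        return err
    }

    func main() {
        var plain time.Duration
        var custom MyDuration
        pflag.DurationVarP(&plain, "max-lock-plain", "", 0, "plain time.Duration flag")
        pflag.VarP(&custom, "max-lock-custom", "", "custom pflag.Value flag")
        pflag.Parse()
    }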

View File

@@ -28,7 +28,7 @@ type CompareOpt = struct {
     DownloadHash bool
 }
 
-func (b *bisyncRun) setCompareDefaults(ctx context.Context) (err error) {
+func (b *bisyncRun) setCompareDefaults(ctx context.Context) error {
     ci := fs.GetConfig(ctx)
 
     // defaults
@@ -120,25 +120,25 @@ func sizeDiffers(a, b int64) bool {
 // returns true if the hashes are definitely different.
 // returns false if equal, or if either is unknown.
-func (b *bisyncRun) hashDiffers(stringA, stringB string, ht1, ht2 hash.Type, size1, size2 int64) bool {
-    if stringA == "" || stringB == "" {
+func hashDiffers(a, b string, ht1, ht2 hash.Type, size1, size2 int64) bool {
+    if a == "" || b == "" {
         if ht1 != hash.None && ht2 != hash.None && !(size1 <= 0 || size2 <= 0) {
-            fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), stringA, stringB)
+            fs.Logf(nil, Color(terminal.YellowFg, "WARNING: hash unexpectedly blank despite Fs support (%s, %s) (you may need to --resync!)"), a, b)
         }
         return false
     }
     if ht1 != ht2 {
-        if !(b.downloadHashOpt.downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
+        if !(downloadHash && ((ht1 == hash.MD5 && ht2 == hash.None) || (ht1 == hash.None && ht2 == hash.MD5))) {
             fs.Infof(nil, Color(terminal.YellowFg, "WARNING: Can't compare hashes of different types (%s, %s)"), ht1.String(), ht2.String())
             return false
         }
     }
-    return stringA != stringB
+    return a != b
 }
 
 // chooses hash type, giving priority to types both sides have in common
 func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
-    b.downloadHashOpt.downloadHash = b.opt.Compare.DownloadHash
+    downloadHash = b.opt.Compare.DownloadHash
     if b.opt.Compare.NoSlowHash && b.opt.Compare.SlowHashDetected {
         fs.Infof(nil, "Not checking for common hash as at least one slow hash detected.")
     } else {
@@ -177,7 +177,7 @@ func (b *bisyncRun) setHashType(ci *fs.ConfigInfo) {
     }
     if (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.fs2.Features().SlowHash {
         fs.Infoc(nil, Color(terminal.YellowFg, "Slow hash detected on Path2. Will ignore checksum due to slow-hash settings"))
-        b.opt.Compare.HashType2 = hash.None
+        b.opt.Compare.HashType1 = hash.None
     } else {
         b.opt.Compare.HashType2 = b.fs2.Hashes().GetOne()
         if b.opt.Compare.HashType2 != hash.None {
@@ -268,15 +268,13 @@ func (b *bisyncRun) setFromCompareFlag(ctx context.Context) error {
     return nil
 }
 
-// b.downloadHashOpt.downloadHash is true if we should attempt to compute hash by downloading when otherwise unavailable
-type downloadHashOpt struct {
-    downloadHash      bool
-    downloadHashWarn  mutex.Once
-    firstDownloadHash mutex.Once
-}
+// downloadHash is true if we should attempt to compute hash by downloading when otherwise unavailable
+var downloadHash bool
+var downloadHashWarn mutex.Once
+var firstDownloadHash mutex.Once
 
-func (b *bisyncRun) tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string, error) {
-    if hashVal != "" || !b.downloadHashOpt.downloadHash {
+func tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal string) (string, error) {
+    if hashVal != "" || !downloadHash {
         return hashVal, nil
     }
     obj, ok := o.(fs.Object)
@@ -285,14 +283,14 @@ func (b *bisyncRun) tryDownloadHash(ctx context.Context, o fs.DirEntry, hashVal
     return hashVal, fs.ErrorObjectNotFound
     }
     if o.Size() < 0 {
-        b.downloadHashOpt.downloadHashWarn.Do(func() {
+        downloadHashWarn.Do(func() {
             fs.Log(o, Color(terminal.YellowFg, "Skipping hash download as checksum not reliable with files of unknown length."))
         })
         fs.Debugf(o, "Skipping hash download as checksum not reliable with files of unknown length.")
         return hashVal, hash.ErrUnsupported
     }
 
-    b.downloadHashOpt.firstDownloadHash.Do(func() {
+    firstDownloadHash.Do(func() {
         fs.Infoc(obj.Fs().Name(), Color(terminal.Dim, "Downloading hashes..."))
     })
     tr := accounting.Stats(ctx).NewCheckingTransfer(o, "computing hash with --download-hash")
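
(Aside: downloadHashWarn and firstDownloadHash above are once-guards, so each notice is logged a single time no matter how many objects hit the condition. The Once type here plays the same role as the standard library's sync.Once, whose pattern is:)

    package main

    import (
        "fmt"
        "sync"
    )

    var warnOnce sync.Once

    func maybeWarn() {
        // Do runs its callback at most once, however many calls
        // or goroutines reach this point.
        warnOnce.Do(func() {
            fmt.Println("warning: printed a single time")
        })
    }

    func main() {
        for i := 0; i < 3; i++ {
            maybeWarn()
        }
    }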

View File

@@ -219,7 +219,7 @@ func (b *bisyncRun) findDeltas(fctx context.Context, f fs.Fs, oldListing string,
     }
     }
     if b.opt.Compare.Checksum {
-        if b.hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
+        if hashDiffers(old.getHash(file), now.getHash(file), old.hash, now.hash, old.getSize(file), now.getSize(file)) {
             fs.Debugf(file, "(old: %v current: %v)", old.getHash(file), now.getHash(file))
             whatchanged = append(whatchanged, Color(terminal.MagentaFg, "hash"))
             d |= deltaHash
@@ -346,7 +346,7 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
     if d2.is(deltaOther) {
         // if size or hash differ, skip this, as we already know they're not equal
         if (b.opt.Compare.Size && sizeDiffers(ds1.size[file], ds2.size[file2])) ||
-            (b.opt.Compare.Checksum && b.hashDiffers(ds1.hash[file], ds2.hash[file2], b.opt.Compare.HashType1, b.opt.Compare.HashType2, ds1.size[file], ds2.size[file2])) {
+            (b.opt.Compare.Checksum && hashDiffers(ds1.hash[file], ds2.hash[file2], b.opt.Compare.HashType1, b.opt.Compare.HashType2, ds1.size[file], ds2.size[file2])) {
             fs.Debugf(file, "skipping equality check as size/hash definitely differ")
         } else {
             checkit := func(filename string) {
@@ -393,10 +393,10 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
     // if files are identical, leave them alone instead of renaming
     if (dirs1.has(file) || dirs1.has(alias)) && (dirs2.has(file) || dirs2.has(alias)) {
         fs.Infof(nil, "This is a directory, not a file. Skipping equality check and will not rename: %s", file)
-        b.march.ls1.getPut(file, skippedDirs1)
-        b.march.ls2.getPut(file, skippedDirs2)
+        ls1.getPut(file, skippedDirs1)
+        ls2.getPut(file, skippedDirs2)
         b.debugFn(file, func() {
-            b.debug(file, fmt.Sprintf("deltas dir: %s, ls1 has name?: %v, ls2 has name?: %v", file, b.march.ls1.has(b.DebugName), b.march.ls2.has(b.DebugName)))
+            b.debug(file, fmt.Sprintf("deltas dir: %s, ls1 has name?: %v, ls2 has name?: %v", file, ls1.has(b.DebugName), ls2.has(b.DebugName)))
         })
     } else {
         equal := matches.Has(file)
@@ -409,16 +409,16 @@ func (b *bisyncRun) applyDeltas(ctx context.Context, ds1, ds2 *deltaSet) (result
     // the Path1 version is deemed "correct" in this scenario
     fs.Infof(alias, "Files are equal but will copy anyway to fix case to %s", file)
     copy1to2.Add(file)
-    } else if b.opt.Compare.Modtime && timeDiffers(ctx, b.march.ls1.getTime(b.march.ls1.getTryAlias(file, alias)), b.march.ls2.getTime(b.march.ls2.getTryAlias(file, alias)), b.fs1, b.fs2) {
+    } else if b.opt.Compare.Modtime && timeDiffers(ctx, ls1.getTime(ls1.getTryAlias(file, alias)), ls2.getTime(ls2.getTryAlias(file, alias)), b.fs1, b.fs2) {
         fs.Infof(file, "Files are equal but will copy anyway to update modtime (will not rename)")
-        if b.march.ls1.getTime(b.march.ls1.getTryAlias(file, alias)).Before(b.march.ls2.getTime(b.march.ls2.getTryAlias(file, alias))) {
+        if ls1.getTime(ls1.getTryAlias(file, alias)).Before(ls2.getTime(ls2.getTryAlias(file, alias))) {
             // Path2 is newer
             b.indent("Path2", p1, "Queue copy to Path1")
-            copy2to1.Add(b.march.ls2.getTryAlias(file, alias))
+            copy2to1.Add(ls2.getTryAlias(file, alias))
         } else {
             // Path1 is newer
             b.indent("Path1", p2, "Queue copy to Path2")
-            copy1to2.Add(b.march.ls1.getTryAlias(file, alias))
+            copy1to2.Add(ls1.getTryAlias(file, alias))
         }
     } else {
         fs.Infof(nil, "Files are equal! Skipping: %s", file)
@@ -590,10 +590,10 @@ func (b *bisyncRun) updateAliases(ctx context.Context, ds1, ds2 *deltaSet) {
     fullMap1 := map[string]string{} // [transformedname]originalname
     fullMap2 := map[string]string{} // [transformedname]originalname
 
-    for _, name := range b.march.ls1.list {
+    for _, name := range ls1.list {
         fullMap1[transform(name)] = name
     }
-    for _, name := range b.march.ls2.list {
+    for _, name := range ls2.list {
         fullMap2[transform(name)] = name
     }

View File

@@ -35,7 +35,8 @@ var rcHelp = makeHelp(`This takes the following parameters
 - removeEmptyDirs - remove empty directories at the final cleanup step
 - filtersFile - read filtering patterns from a file
 - ignoreListingChecksum - Do not use checksums for listings
 - resilient - Allow future runs to retry after certain less-serious errors, instead of requiring resync.
+Use at your own risk!
 - workdir - server directory for history files (default: |~/.cache/rclone/bisync|)
 - backupdir1 - --backup-dir for Path1. Must be a non-overlapping path on the same remote.
 - backupdir2 - --backup-dir for Path2. Must be a non-overlapping path on the same remote.
@@ -51,15 +52,14 @@
 bidirectional cloud sync solution in rclone.
 It retains the Path1 and Path2 filesystem listings from the prior run.
 On each successive run it will:
 
 - list files on Path1 and Path2, and check for changes on each side.
   Changes include |New|, |Newer|, |Older|, and |Deleted| files.
 - Propagate changes on Path1 to Path2, and vice-versa.
 
-Bisync is considered an **advanced command**, so use with care.
+Bisync is **in beta** and is considered an **advanced command**, so use with care.
 Make sure you have read and understood the entire [manual](https://rclone.org/bisync)
-(especially the [Limitations](https://rclone.org/bisync/#limitations) section)
-before using, or data loss can result. Questions can be asked in the
-[Rclone Forum](https://forum.rclone.org/).
+(especially the [Limitations](https://rclone.org/bisync/#limitations) section) before using,
+or data loss can result. Questions can be asked in the [Rclone Forum](https://forum.rclone.org/).
 
-See [full bisync description](https://rclone.org/bisync/) for details.`)
+See [full bisync description](https://rclone.org/bisync/) for details.
+`)

View File

@@ -42,14 +42,10 @@ var lineRegex = regexp.MustCompile(`^(\S) +(-?\d+) (\S+) (\S+) (\d{4}-\d\d-\d\dT
 
 // timeFormat defines time format used in listings
 const timeFormat = "2006-01-02T15:04:05.000000000-0700"
 
+// TZ defines time zone used in listings
 var (
-    // TZ defines time zone used in listings
     TZ      = time.UTC
     tzLocal = false
-    // LogTZ defines time zone used in logs (which may be different than that used in listings).
-    // time.Local by default, but we force UTC on tests to make them deterministic regardless of tester's location.
-    LogTZ = time.Local
 )
 
 // fileInfo describes a file
@@ -202,8 +198,8 @@ func (b *bisyncRun) fileInfoEqual(file1, file2 string, ls1, ls2 *fileList) bool
         equal = false
     }
     }
-    if b.opt.Compare.Checksum && !b.queueOpt.ignoreListingChecksum {
-        if b.hashDiffers(ls1.getHash(file1), ls2.getHash(file2), b.opt.Compare.HashType1, b.opt.Compare.HashType2, ls1.getSize(file1), ls2.getSize(file2)) {
+    if b.opt.Compare.Checksum && !ignoreListingChecksum {
+        if hashDiffers(ls1.getHash(file1), ls2.getHash(file2), b.opt.Compare.HashType1, b.opt.Compare.HashType2, ls1.getSize(file1), ls2.getSize(file2)) {
             b.indent("ERROR", file1, fmt.Sprintf("Checksum not equal in listing. Path1: %v, Path2: %v", ls1.getHash(file1), ls2.getHash(file2)))
             equal = false
         }
     }
@@ -247,7 +243,7 @@ func (ls *fileList) sort() {
 }
 
 // save will save listing to a file.
-func (ls *fileList) save(listing string) error {
+func (ls *fileList) save(ctx context.Context, listing string) error {
     file, err := os.Create(listing)
     if err != nil {
         return err
@@ -712,9 +708,9 @@ func (b *bisyncRun) modifyListing(ctx context.Context, src fs.Fs, dst fs.Fs, res
     b.debug(b.DebugName, fmt.Sprintf("%s pre-save dstList has it?: %v", direction, dstList.has(b.DebugName)))
     }
 
     // update files
-    err = srcList.save(srcListing)
+    err = srcList.save(ctx, srcListing)
     b.handleErr(srcList, "error saving srcList from modifyListing", err, true, true)
-    err = dstList.save(dstListing)
+    err = dstList.save(ctx, dstListing)
     b.handleErr(dstList, "error saving dstList from modifyListing", err, true, true)
 
     return err
@@ -745,7 +741,7 @@ func (b *bisyncRun) recheck(ctxRecheck context.Context, src, dst fs.Fs, srcList,
     if hashType != hash.None {
         hashVal, _ = obj.Hash(ctxRecheck, hashType)
     }
-    hashVal, _ = b.tryDownloadHash(ctxRecheck, obj, hashVal)
+    hashVal, _ = tryDownloadHash(ctxRecheck, obj, hashVal)
     }
     var modtime time.Time
     if b.opt.Compare.Modtime {
@@ -759,7 +755,7 @@ func (b *bisyncRun) recheck(ctxRecheck context.Context, src, dst fs.Fs, srcList,
     for _, dstObj := range dstObjs {
         if srcObj.Remote() == dstObj.Remote() || srcObj.Remote() == b.aliases.Alias(dstObj.Remote()) {
             // note: unlike Equal(), WhichEqual() does not update the modtime in dest if sums match but modtimes don't.
-            if b.opt.DryRun || b.WhichEqual(ctxRecheck, srcObj, dstObj, src, dst) {
+            if b.opt.DryRun || WhichEqual(ctxRecheck, srcObj, dstObj, src, dst) {
                 putObj(srcObj, srcList)
                 putObj(dstObj, dstList)
                 resolved = append(resolved, srcObj.Remote())
@@ -773,7 +769,7 @@ func (b *bisyncRun) recheck(ctxRecheck context.Context, src, dst fs.Fs, srcList,
     // skip and error during --resync, as rollback is not possible
     if !slices.Contains(resolved, srcObj.Remote()) && !b.opt.DryRun {
         if b.opt.Resync {
-            err := errors.New("no dstObj match or files not equal")
+            err = errors.New("no dstObj match or files not equal")
             b.handleErr(srcObj, "Unable to rollback during --resync", err, true, false)
         } else {
             toRollback = append(toRollback, srcObj.Remote())
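
(Aside: save gaining a ctx parameter in this file is the usual Go idiom of threading a context through helpers so they can observe per-call config and cancellation. A generic sketch, with illustrative names:)

    package main

    import (
        "context"
        "os"
    )

    // save checks the context before doing I/O so a cancelled run
    // stops promptly instead of writing a stale listing.
    func save(ctx context.Context, path string, data []byte) error {
        if err := ctx.Err(); err != nil {
            return err
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()
        _ = save(ctx, "listing.lst", []byte("example\n"))
    }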

View File

@@ -14,19 +14,18 @@ import (
     "github.com/rclone/rclone/lib/terminal"
 )
 
-const basicallyforever = fs.Duration(200 * 365 * 24 * time.Hour)
+const basicallyforever = 200 * 365 * 24 * time.Hour
 
-type lockFileOpt struct {
-    stopRenewal func()
-    data        struct {
-        Session     string
-        PID         string
-        TimeRenewed time.Time
-        TimeExpires time.Time
-    }
-}
+var stopRenewal func()
 
-func (b *bisyncRun) setLockFile() (err error) {
+var data = struct {
+    Session     string
+    PID         string
+    TimeRenewed time.Time
+    TimeExpires time.Time
+}{}
+
+func (b *bisyncRun) setLockFile() error {
     b.lockFile = ""
     b.setLockFileExpiration()
     if !b.opt.DryRun {
@@ -46,29 +45,30 @@ func (b *bisyncRun) setLockFile() (err error) {
     }
     fs.Debugf(nil, "Lock file created: %s", b.lockFile)
     b.renewLockFile()
-    b.lockFileOpt.stopRenewal = b.startLockRenewal()
+    stopRenewal = b.startLockRenewal()
     }
     return nil
 }
 
-func (b *bisyncRun) removeLockFile() (err error) {
+func (b *bisyncRun) removeLockFile() {
     if b.lockFile != "" {
-        b.lockFileOpt.stopRenewal()
-        err = os.Remove(b.lockFile)
-        if err == nil {
+        stopRenewal()
+        errUnlock := os.Remove(b.lockFile)
+        if errUnlock == nil {
             fs.Debugf(nil, "Lock file removed: %s", b.lockFile)
+        } else if err == nil {
+            err = errUnlock
         } else {
-            fs.Errorf(nil, "cannot remove lockfile %s: %v", b.lockFile, err)
+            fs.Errorf(nil, "cannot remove lockfile %s: %v", b.lockFile, errUnlock)
         }
         b.lockFile = "" // block removing it again
     }
-    return err
 }
 
 func (b *bisyncRun) setLockFileExpiration() {
-    if b.opt.MaxLock > 0 && b.opt.MaxLock < fs.Duration(2*time.Minute) {
+    if b.opt.MaxLock > 0 && b.opt.MaxLock < 2*time.Minute {
         fs.Logf(nil, Color(terminal.YellowFg, "--max-lock cannot be shorter than 2 minutes (unless 0.) Changing --max-lock from %v to %v"), b.opt.MaxLock, 2*time.Minute)
-        b.opt.MaxLock = fs.Duration(2 * time.Minute)
+        b.opt.MaxLock = 2 * time.Minute
     } else if b.opt.MaxLock <= 0 {
         b.opt.MaxLock = basicallyforever
     }
@@ -77,18 +77,18 @@ func (b *bisyncRun) setLockFileExpiration() {
 
 func (b *bisyncRun) renewLockFile() {
     if b.lockFile != "" && bilib.FileExists(b.lockFile) {
-        b.lockFileOpt.data.Session = b.basePath
-        b.lockFileOpt.data.PID = strconv.Itoa(os.Getpid())
-        b.lockFileOpt.data.TimeRenewed = time.Now()
-        b.lockFileOpt.data.TimeExpires = time.Now().Add(time.Duration(b.opt.MaxLock))
+        data.Session = b.basePath
+        data.PID = strconv.Itoa(os.Getpid())
+        data.TimeRenewed = time.Now()
+        data.TimeExpires = time.Now().Add(b.opt.MaxLock)
 
         // save data file
         df, err := os.Create(b.lockFile)
         b.handleErr(b.lockFile, "error renewing lock file", err, true, true)
-        b.handleErr(b.lockFile, "error encoding JSON to lock file", json.NewEncoder(df).Encode(b.lockFileOpt.data), true, true)
+        b.handleErr(b.lockFile, "error encoding JSON to lock file", json.NewEncoder(df).Encode(data), true, true)
         b.handleErr(b.lockFile, "error closing lock file", df.Close(), true, true)
         if b.opt.MaxLock < basicallyforever {
-            fs.Infof(nil, Color(terminal.HiBlueFg, "lock file renewed for %v. New expiration: %v"), b.opt.MaxLock, b.lockFileOpt.data.TimeExpires)
+            fs.Infof(nil, Color(terminal.HiBlueFg, "lock file renewed for %v. New expiration: %v"), b.opt.MaxLock, data.TimeExpires)
         }
     }
 }
@@ -99,7 +99,7 @@ func (b *bisyncRun) lockFileIsExpired() bool {
b.handleErr(b.lockFile, "error reading lock file", err, true, true) b.handleErr(b.lockFile, "error reading lock file", err, true, true)
dec := json.NewDecoder(rdf) dec := json.NewDecoder(rdf)
for { for {
if err := dec.Decode(&b.lockFileOpt.data); err != nil { if err := dec.Decode(&data); err != nil {
if err != io.EOF { if err != io.EOF {
fs.Errorf(b.lockFile, "err: %v", err) fs.Errorf(b.lockFile, "err: %v", err)
} }
@@ -107,14 +107,14 @@ func (b *bisyncRun) lockFileIsExpired() bool {
} }
} }
b.handleErr(b.lockFile, "error closing file", rdf.Close(), true, true) b.handleErr(b.lockFile, "error closing file", rdf.Close(), true, true)
if !b.lockFileOpt.data.TimeExpires.IsZero() && b.lockFileOpt.data.TimeExpires.Before(time.Now()) { if !data.TimeExpires.IsZero() && data.TimeExpires.Before(time.Now()) {
fs.Infof(b.lockFile, Color(terminal.GreenFg, "Lock file found, but it expired at %v. Will delete it and proceed."), b.lockFileOpt.data.TimeExpires) fs.Infof(b.lockFile, Color(terminal.GreenFg, "Lock file found, but it expired at %v. Will delete it and proceed."), data.TimeExpires)
markFailed(b.listing1) // listing is untrusted so force revert to prior (if --recover) or create new ones (if --resync) markFailed(b.listing1) // listing is untrusted so force revert to prior (if --recover) or create new ones (if --resync)
markFailed(b.listing2) markFailed(b.listing2)
return true return true
} }
fs.Infof(b.lockFile, Color(terminal.RedFg, "Valid lock file found. Expires at %v. (%v from now)"), b.lockFileOpt.data.TimeExpires, time.Since(b.lockFileOpt.data.TimeExpires).Abs().Round(time.Second)) fs.Infof(b.lockFile, Color(terminal.RedFg, "Valid lock file found. Expires at %v. (%v from now)"), data.TimeExpires, time.Since(data.TimeExpires).Abs().Round(time.Second))
prettyprint(b.lockFileOpt.data, "Lockfile info", fs.LogLevelInfo) prettyprint(data, "Lockfile info", fs.LogLevelInfo)
} }
return false return false
} }
@@ -131,7 +131,7 @@ func (b *bisyncRun) startLockRenewal() func() {
wg.Add(1) wg.Add(1)
go func() { go func() {
defer wg.Done() defer wg.Done()
ticker := time.NewTicker(time.Duration(b.opt.MaxLock) - time.Minute) ticker := time.NewTicker(b.opt.MaxLock - time.Minute)
for { for {
select { select {
case <-ticker.C: case <-ticker.C:
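The pattern is the same on both sides of this hunk: `startLockRenewal` spawns a goroutine that rewrites the lock file shortly before each expiry, and hands back a stop function for `removeLockFile` to call. A minimal, self-contained sketch of that shape (names and intervals here are illustrative, not rclone's exact code):

package main

import (
	"fmt"
	"sync"
	"time"
)

// startRenewal calls renew on a ticker until the returned stop
// function is invoked, mirroring the startLockRenewal shape above.
func startRenewal(maxLock time.Duration, renew func()) (stop func()) {
	done := make(chan struct{})
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		// renew one interval before expiry, as in the diff above
		ticker := time.NewTicker(maxLock - time.Minute)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				renew()
			case <-done:
				return
			}
		}
	}()
	return func() {
		close(done)
		wg.Wait() // don't return until the renewer has exited
	}
}

func main() {
	stop := startRenewal(2*time.Minute, func() { fmt.Println("renewed") })
	defer stop()
}

Returning a closure rather than exposing the channel keeps the caller's side to a single call, which is why a bare `var stopRenewal func()` global is enough on the release branch.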

View File

@@ -6,7 +6,6 @@ import (
 	"runtime"
 	"strconv"
 	"strings"
-	"sync"
 	"github.com/rclone/rclone/fs"
 	"github.com/rclone/rclone/lib/encoder"
@@ -68,15 +67,10 @@ func quotePath(path string) string {
 }
 // Colors controls whether terminal colors are enabled
-var (
-	Colors     bool
-	ColorsLock sync.Mutex
-)
+var Colors bool
 // Color handles terminal colors for bisync
 func Color(style string, s string) string {
-	ColorsLock.Lock()
-	defer ColorsLock.Unlock()
 	if !Colors {
 		return s
 	}
@@ -86,8 +80,6 @@ func Color(style string, s string) string {
 // ColorX handles terminal colors for bisync
 func ColorX(style string, s string) string {
-	ColorsLock.Lock()
-	defer ColorsLock.Unlock()
 	if !Colors {
 		return s
 	}
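Dropping `ColorsLock` leaves `Colors` as a bare global bool, which is only race-free because it is written once at startup (in `Bisync`, below) before any concurrent readers run. If the flag ever had to change while goroutines were reading it, `sync/atomic.Bool` would keep the read path lock-free without the data race; a hedged sketch, not rclone code:

package main

import (
	"fmt"
	"sync/atomic"
)

// colors is safe to flip and read from any goroutine without a mutex.
var colors atomic.Bool

func colorize(style, s string) string {
	if !colors.Load() { // lock-free read on the hot path
		return s
	}
	return style + s + "\x1b[0m"
}

func main() {
	colors.Store(true) // e.g. when the terminal supports ANSI colors
	fmt.Println(colorize("\x1b[33m", "warning"))
}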

View File

@@ -12,20 +12,18 @@ import (
 	"github.com/rclone/rclone/fs/march"
 )
-type bisyncMarch struct {
-	ls1            *fileList
-	ls2            *fileList
-	err            error
-	firstErr       error
-	marchAliasLock sync.Mutex
-	marchLsLock    sync.Mutex
-	marchErrLock   sync.Mutex
-	marchCtx       context.Context
-}
+var ls1 = newFileList()
+var ls2 = newFileList()
+var err error
+var firstErr error
+var marchAliasLock sync.Mutex
+var marchLsLock sync.Mutex
+var marchErrLock sync.Mutex
+var marchCtx context.Context
 func (b *bisyncRun) makeMarchListing(ctx context.Context) (*fileList, *fileList, error) {
 	ci := fs.GetConfig(ctx)
-	b.march.marchCtx = ctx
+	marchCtx = ctx
 	b.setupListing()
 	fs.Debugf(b, "starting to march!")
@@ -41,31 +39,31 @@ func (b *bisyncRun) makeMarchListing(ctx context.Context) (*fileList, *fileList,
 		NoCheckDest:            false,
 		NoUnicodeNormalization: ci.NoUnicodeNormalization,
 	}
-	b.march.err = m.Run(ctx)
-	fs.Debugf(b, "march completed. err: %v", b.march.err)
-	if b.march.err == nil {
-		b.march.err = b.march.firstErr
+	err = m.Run(ctx)
+	fs.Debugf(b, "march completed. err: %v", err)
+	if err == nil {
+		err = firstErr
 	}
-	if b.march.err != nil {
-		b.handleErr("march", "error during march", b.march.err, true, true)
+	if err != nil {
+		b.handleErr("march", "error during march", err, true, true)
 		b.abort = true
-		return b.march.ls1, b.march.ls2, b.march.err
+		return ls1, ls2, err
 	}
 	// save files
-	if b.opt.Compare.DownloadHash && b.march.ls1.hash == hash.None {
-		b.march.ls1.hash = hash.MD5
+	if b.opt.Compare.DownloadHash && ls1.hash == hash.None {
+		ls1.hash = hash.MD5
 	}
-	if b.opt.Compare.DownloadHash && b.march.ls2.hash == hash.None {
-		b.march.ls2.hash = hash.MD5
+	if b.opt.Compare.DownloadHash && ls2.hash == hash.None {
+		ls2.hash = hash.MD5
 	}
-	b.march.err = b.march.ls1.save(b.newListing1)
-	b.handleErr(b.march.ls1, "error saving b.march.ls1 from march", b.march.err, true, true)
-	b.march.err = b.march.ls2.save(b.newListing2)
-	b.handleErr(b.march.ls2, "error saving b.march.ls2 from march", b.march.err, true, true)
-	return b.march.ls1, b.march.ls2, b.march.err
+	err = ls1.save(ctx, b.newListing1)
+	b.handleErr(ls1, "error saving ls1 from march", err, true, true)
+	err = ls2.save(ctx, b.newListing2)
+	b.handleErr(ls2, "error saving ls2 from march", err, true, true)
+	return ls1, ls2, err
 }
 // SrcOnly have an object which is on path1 only
@@ -85,9 +83,9 @@ func (b *bisyncRun) DstOnly(o fs.DirEntry) (recurse bool) {
 // Match is called when object exists on both path1 and path2 (whether equal or not)
 func (b *bisyncRun) Match(ctx context.Context, o2, o1 fs.DirEntry) (recurse bool) {
 	fs.Debugf(o1, "both path1 and path2")
-	b.march.marchAliasLock.Lock()
+	marchAliasLock.Lock()
 	b.aliases.Add(o1.Remote(), o2.Remote())
-	b.march.marchAliasLock.Unlock()
+	marchAliasLock.Unlock()
 	b.parse(o1, true)
 	b.parse(o2, false)
 	return isDir(o1)
@@ -121,76 +119,76 @@ func (b *bisyncRun) parse(e fs.DirEntry, isPath1 bool) {
 }
 func (b *bisyncRun) setupListing() {
-	b.march.ls1 = newFileList()
-	b.march.ls2 = newFileList()
+	ls1 = newFileList()
+	ls2 = newFileList()
 	// note that --ignore-listing-checksum is different from --ignore-checksum
 	// and we already checked it when we set b.opt.Compare.HashType1 and 2
-	b.march.ls1.hash = b.opt.Compare.HashType1
-	b.march.ls2.hash = b.opt.Compare.HashType2
+	ls1.hash = b.opt.Compare.HashType1
+	ls2.hash = b.opt.Compare.HashType2
 }
 func (b *bisyncRun) ForObject(o fs.Object, isPath1 bool) {
-	tr := accounting.Stats(b.march.marchCtx).NewCheckingTransfer(o, "listing file - "+whichPath(isPath1))
+	tr := accounting.Stats(marchCtx).NewCheckingTransfer(o, "listing file - "+whichPath(isPath1))
 	defer func() {
-		tr.Done(b.march.marchCtx, nil)
+		tr.Done(marchCtx, nil)
 	}()
 	var (
 		hashVal string
 		hashErr error
 	)
-	ls := b.whichLs(isPath1)
+	ls := whichLs(isPath1)
 	hashType := ls.hash
 	if hashType != hash.None {
-		hashVal, hashErr = o.Hash(b.march.marchCtx, hashType)
-		b.march.marchErrLock.Lock()
-		if b.march.firstErr == nil {
-			b.march.firstErr = hashErr
+		hashVal, hashErr = o.Hash(marchCtx, hashType)
+		marchErrLock.Lock()
+		if firstErr == nil {
+			firstErr = hashErr
 		}
-		b.march.marchErrLock.Unlock()
+		marchErrLock.Unlock()
 	}
-	hashVal, hashErr = b.tryDownloadHash(b.march.marchCtx, o, hashVal)
-	b.march.marchErrLock.Lock()
-	if b.march.firstErr == nil {
-		b.march.firstErr = hashErr
+	hashVal, hashErr = tryDownloadHash(marchCtx, o, hashVal)
+	marchErrLock.Lock()
+	if firstErr == nil {
+		firstErr = hashErr
 	}
-	if b.march.firstErr != nil {
-		b.handleErr(hashType, "error hashing during march", b.march.firstErr, false, true)
+	if firstErr != nil {
+		b.handleErr(hashType, "error hashing during march", firstErr, false, true)
 	}
-	b.march.marchErrLock.Unlock()
+	marchErrLock.Unlock()
 	var modtime time.Time
 	if b.opt.Compare.Modtime {
-		modtime = o.ModTime(b.march.marchCtx).In(TZ)
+		modtime = o.ModTime(marchCtx).In(TZ)
 	}
 	id := ""     // TODO: ID(o)
 	flags := "-" // "-" for a file and "d" for a directory
-	b.march.marchLsLock.Lock()
+	marchLsLock.Lock()
 	ls.put(o.Remote(), o.Size(), modtime, hashVal, id, flags)
-	b.march.marchLsLock.Unlock()
+	marchLsLock.Unlock()
 }
 func (b *bisyncRun) ForDir(o fs.Directory, isPath1 bool) {
-	tr := accounting.Stats(b.march.marchCtx).NewCheckingTransfer(o, "listing dir - "+whichPath(isPath1))
+	tr := accounting.Stats(marchCtx).NewCheckingTransfer(o, "listing dir - "+whichPath(isPath1))
 	defer func() {
-		tr.Done(b.march.marchCtx, nil)
+		tr.Done(marchCtx, nil)
 	}()
-	ls := b.whichLs(isPath1)
+	ls := whichLs(isPath1)
 	var modtime time.Time
 	if b.opt.Compare.Modtime {
-		modtime = o.ModTime(b.march.marchCtx).In(TZ)
+		modtime = o.ModTime(marchCtx).In(TZ)
 	}
 	id := ""     // TODO
 	flags := "d" // "-" for a file and "d" for a directory
-	b.march.marchLsLock.Lock()
+	marchLsLock.Lock()
 	ls.put(o.Remote(), -1, modtime, "", id, flags)
-	b.march.marchLsLock.Unlock()
+	marchLsLock.Unlock()
 }
-func (b *bisyncRun) whichLs(isPath1 bool) *fileList {
-	ls := b.march.ls1
+func whichLs(isPath1 bool) *fileList {
+	ls := ls1
 	if !isPath1 {
-		ls = b.march.ls2
+		ls = ls2
 	}
 	return ls
 }
@@ -208,7 +206,7 @@ func (b *bisyncRun) findCheckFiles(ctx context.Context) (*fileList, *fileList, e
 	b.handleErr(b.opt.CheckFilename, "error adding CheckFilename to filter", filterCheckFile.Add(true, b.opt.CheckFilename), true, true)
 	b.handleErr(b.opt.CheckFilename, "error adding ** exclusion to filter", filterCheckFile.Add(false, "**"), true, true)
 	ci := fs.GetConfig(ctxCheckFile)
-	b.march.marchCtx = ctxCheckFile
+	marchCtx = ctxCheckFile
 	b.setupListing()
 	fs.Debugf(b, "starting to march!")
@@ -225,18 +223,18 @@ func (b *bisyncRun) findCheckFiles(ctx context.Context) (*fileList, *fileList, e
 		NoCheckDest:            false,
 		NoUnicodeNormalization: ci.NoUnicodeNormalization,
 	}
-	b.march.err = m.Run(ctxCheckFile)
-	fs.Debugf(b, "march completed. err: %v", b.march.err)
-	if b.march.err == nil {
-		b.march.err = b.march.firstErr
+	err = m.Run(ctxCheckFile)
+	fs.Debugf(b, "march completed. err: %v", err)
+	if err == nil {
+		err = firstErr
 	}
-	if b.march.err != nil {
-		b.handleErr("march", "error during findCheckFiles", b.march.err, true, true)
+	if err != nil {
+		b.handleErr("march", "error during findCheckFiles", err, true, true)
 		b.abort = true
 	}
-	return b.march.ls1, b.march.ls2, b.march.err
+	return ls1, ls2, err
 }
 // ID returns the ID of the Object if known, or "" if not
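Whether the synchronization lives in struct fields (left) or package globals (right), the invariant is identical: the march callbacks run concurrently, so every `ls.put` and every write to the shared first-error slot must be serialized behind a mutex. A reduced, self-contained sketch of that pattern (the types here are invented for illustration):

package main

import (
	"fmt"
	"sync"
)

type fileList struct {
	mu    sync.Mutex
	names []string
}

// put is called concurrently by listing workers, so it locks.
func (l *fileList) put(name string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.names = append(l.names, name)
}

type firstError struct {
	mu  sync.Mutex
	err error
}

// set keeps only the first error seen, like the firstErr handling above.
func (f *firstError) set(err error) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.err == nil {
		f.err = err
	}
}

func main() {
	var ls fileList
	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			ls.put(fmt.Sprintf("file%d.txt", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(len(ls.names), "entries")
}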

View File

@@ -51,11 +51,6 @@ type bisyncRun struct {
 	lockFile        string
 	renames         renames
 	resyncIs1to2    bool
-	march           bisyncMarch
-	check           bisyncCheck
-	queueOpt        bisyncQueueOpt
-	downloadHashOpt downloadHashOpt
-	lockFileOpt     lockFileOpt
 }
 type queues struct {
@@ -69,6 +64,7 @@ type queues struct {
 // Bisync handles lock file, performs bisync run and checks exit status
 func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
+	defer resetGlobals()
 	opt := *optArg // ensure that input is never changed
 	b := &bisyncRun{
 		fs1: fs1,
@@ -87,9 +83,7 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
 	opt.OrigBackupDir = ci.BackupDir
 	if ci.TerminalColorMode == fs.TerminalColorModeAlways || (ci.TerminalColorMode == fs.TerminalColorModeAuto && !log.Redirected()) {
-		ColorsLock.Lock()
 		Colors = true
-		ColorsLock.Unlock()
 	}
 	err = b.setCompareDefaults(ctx)
@@ -99,7 +93,7 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
 	b.setResyncDefaults()
-	err = b.setResolveDefaults()
+	err = b.setResolveDefaults(ctx)
 	if err != nil {
 		return err
 	}
@@ -130,8 +124,6 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
 		return err
 	}
-	b.queueOpt.logger = operations.NewLoggerOpt()
 	// Handle SIGINT
 	var finaliseOnce gosync.Once
@@ -146,7 +138,7 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
 			if b.SyncCI != nil {
 				fs.Infoc(nil, Color(terminal.YellowFg, "Telling Sync to wrap up early."))
 				b.SyncCI.MaxTransfer = 1
-				b.SyncCI.MaxDuration = fs.Duration(1 * time.Second)
+				b.SyncCI.MaxDuration = 1 * time.Second
 				b.SyncCI.CutoffMode = fs.CutoffModeSoft
 				gracePeriod := 30 * time.Second // TODO: flag to customize this?
 				if !waitFor("Canceling Sync if not done in", gracePeriod, func() bool { return b.CleanupCompleted }) {
@@ -169,7 +161,7 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
 				markFailed(b.listing1)
 				markFailed(b.listing2)
 			}
-			err = b.removeLockFile()
+			b.removeLockFile()
 		}
 	})
 }
@@ -179,10 +171,7 @@ func Bisync(ctx context.Context, fs1, fs2 fs.Fs, optArg *Options) (err error) {
 	// run bisync
 	err = b.runLocked(ctx)
-	removeLockErr := b.removeLockFile()
-	if err == nil {
-		err = removeLockErr
-	}
+	b.removeLockFile()
 	b.CleanupCompleted = true
 	if b.InGracefulShutdown {
@@ -273,7 +262,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
 	// Generate Path1 and Path2 listings and copy any unique Path2 files to Path1
 	if opt.Resync {
-		return b.resync(fctx)
+		return b.resync(octx, fctx)
 	}
 	// Check for existence of prior Path1 and Path2 listings
@@ -308,7 +297,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
 	}
 	fs.Infof(nil, "Building Path1 and Path2 listings")
-	b.march.ls1, b.march.ls2, err = b.makeMarchListing(fctx)
+	ls1, ls2, err = b.makeMarchListing(fctx)
 	if err != nil || accounting.Stats(fctx).Errored() {
 		fs.Error(nil, Color(terminal.RedFg, "There were errors while building listings. Aborting as it is too dangerous to continue."))
 		b.critical = true
@@ -318,7 +307,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
 	// Check for Path1 deltas relative to the prior sync
 	fs.Infof(nil, "Path1 checking for diffs")
-	ds1, err := b.findDeltas(fctx, b.fs1, b.listing1, b.march.ls1, "Path1")
+	ds1, err := b.findDeltas(fctx, b.fs1, b.listing1, ls1, "Path1")
 	if err != nil {
 		return err
 	}
@@ -326,7 +315,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
 	// Check for Path2 deltas relative to the prior sync
 	fs.Infof(nil, "Path2 checking for diffs")
-	ds2, err := b.findDeltas(fctx, b.fs2, b.listing2, b.march.ls2, "Path2")
+	ds2, err := b.findDeltas(fctx, b.fs2, b.listing2, ls2, "Path2")
 	if err != nil {
 		return err
 	}
@@ -400,7 +389,7 @@ func (b *bisyncRun) runLocked(octx context.Context) (err error) {
 		newl1, _ := b.loadListing(b.newListing1)
 		newl2, _ := b.loadListing(b.newListing2)
 		b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, ls1 has name?: %v, ls2 has name?: %v", l1.has(b.DebugName), l2.has(b.DebugName)))
-		b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, newls1 has name?: %v, ls2 has name?: %v", newl1.has(b.DebugName), newl2.has(b.DebugName)))
+		b.debug(b.DebugName, fmt.Sprintf("pre-saveOldListings, newls1 has name?: %v, newls2 has name?: %v", newl1.has(b.DebugName), newl2.has(b.DebugName)))
 	}
 	b.saveOldListings()
 	// save new listings
@@ -564,7 +553,7 @@ func (b *bisyncRun) setBackupDir(ctx context.Context, destPath int) context.Cont
 	return ctx
 }
-func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs) (err error) {
+func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs) error {
 	if operations.OverlappingFilterCheck(fctx, fs2, fs1) {
 		err = errors.New(Color(terminal.RedFg, "Overlapping paths detected. Cannot bisync between paths that overlap, unless excluded by filters."))
 		return err
@@ -597,7 +586,7 @@ func (b *bisyncRun) overlappingPathsCheck(fctx context.Context, fs1, fs2 fs.Fs)
 	return nil
 }
-func (b *bisyncRun) checkSyntax() (err error) {
+func (b *bisyncRun) checkSyntax() error {
 	// check for odd number of quotes in path, usually indicating an escaping issue
 	path1 := bilib.FsPath(b.fs1)
 	path2 := bilib.FsPath(b.fs2)
@@ -645,3 +634,25 @@ func waitFor(msg string, totalWait time.Duration, fn func() bool) (ok bool) {
 	}
 	return false
 }
+
+// mainly to make sure tests don't interfere with each other when running more than one
+func resetGlobals() {
+	downloadHash = false
+	logger = operations.NewLoggerOpt()
+	ignoreListingChecksum = false
+	ignoreListingModtime = false
+	hashTypes = nil
+	queueCI = nil
+	hashType = 0
+	fsrc, fdst = nil, nil
+	fcrypt = nil
+	Opt = Options{}
+	once = gosync.Once{}
+	downloadHashWarn = gosync.Once{}
+	firstDownloadHash = gosync.Once{}
+	ls1 = newFileList()
+	ls2 = newFileList()
+	err = nil
+	firstErr = nil
+	marchCtx = nil
+}
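`resetGlobals` is the release branch's way of keeping shared package state from leaking between consecutive runs (and between tests): every global is manually restored to a clean value when `Bisync` returns. The struct-based refactor on the left removes the need for it, since each `bisyncRun` starts with fresh zero values by construction. A toy comparison of the two approaches (all names invented):

package main

import "fmt"

// Global-state style: callers must remember to reset between runs.
var counter int

func runGlobal() int {
	counter++
	return counter
}

func resetGlobalsExample() { counter = 0 } // easy to forget

// Struct style: a fresh value per run, nothing to reset.
type run struct{ counter int }

func (r *run) run() int {
	r.counter++
	return r.counter
}

func main() {
	runGlobal()
	resetGlobalsExample()
	fmt.Println(runGlobal()) // 1, but only because we reset

	fmt.Println((&run{}).run()) // always 1
}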

View File

@@ -51,19 +51,19 @@ func (rs *ResultsSlice) has(name string) bool {
 	return false
 }
-type bisyncQueueOpt struct {
-	logger                operations.LoggerOpt
+var (
+	logger                = operations.NewLoggerOpt()
 	lock                  mutex.Mutex
 	once                  mutex.Once
 	ignoreListingChecksum bool
 	ignoreListingModtime  bool
 	hashTypes             map[string]hash.Type
 	queueCI               *fs.ConfigInfo
-}
+)
 // allows us to get the right hashtype during the LoggerFn without knowing whether it's Path1/Path2
-func (b *bisyncRun) getHashType(fname string) hash.Type {
-	ht, ok := b.queueOpt.hashTypes[fname]
+func getHashType(fname string) hash.Type {
+	ht, ok := hashTypes[fname]
 	if ok {
 		return ht
 	}
@@ -106,9 +106,9 @@ func altName(name string, src, dst fs.DirEntry) string {
 }
 // WriteResults is Bisync's LoggerFn
-func (b *bisyncRun) WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEntry, err error) {
-	b.queueOpt.lock.Lock()
-	defer b.queueOpt.lock.Unlock()
+func WriteResults(ctx context.Context, sigil operations.Sigil, src, dst fs.DirEntry, err error) {
+	lock.Lock()
+	defer lock.Unlock()
 	opt := operations.GetLoggerOpt(ctx)
 	result := Results{
@@ -131,14 +131,14 @@ func (b *bisyncRun) WriteResults(ctx context.Context, sigil operations.Sigil, sr
 	result.Flags = "-"
 	if side != nil {
 		result.Size = side.Size()
-		if !b.queueOpt.ignoreListingModtime {
+		if !ignoreListingModtime {
 			result.Modtime = side.ModTime(ctx).In(TZ)
 		}
-		if !b.queueOpt.ignoreListingChecksum {
+		if !ignoreListingChecksum {
 			sideObj, ok := side.(fs.ObjectInfo)
 			if ok {
-				result.Hash, _ = sideObj.Hash(ctx, b.getHashType(sideObj.Fs().Name()))
-				result.Hash, _ = b.tryDownloadHash(ctx, sideObj, result.Hash)
+				result.Hash, _ = sideObj.Hash(ctx, getHashType(sideObj.Fs().Name()))
+				result.Hash, _ = tryDownloadHash(ctx, sideObj, result.Hash)
 			}
 		}
@@ -159,8 +159,8 @@ func (b *bisyncRun) WriteResults(ctx context.Context, sigil operations.Sigil, sr
 	}
 	prettyprint(result, "writing result", fs.LogLevelDebug)
-	if result.Size < 0 && result.Flags != "d" && ((b.queueOpt.queueCI.CheckSum && !b.downloadHashOpt.downloadHash) || b.queueOpt.queueCI.SizeOnly) {
-		b.queueOpt.once.Do(func() {
+	if result.Size < 0 && result.Flags != "d" && ((queueCI.CheckSum && !downloadHash) || queueCI.SizeOnly) {
+		once.Do(func() {
 			fs.Log(result.Name, Color(terminal.YellowFg, "Files of unknown size (such as Google Docs) do not sync reliably with --checksum or --size-only. Consider using modtime instead (the default) or --drive-skip-gdocs"))
 		})
 	}
@@ -189,14 +189,14 @@ func ReadResults(results io.Reader) []Results {
 // for setup code shared by both fastCopy and resyncDir
 func (b *bisyncRun) preCopy(ctx context.Context) context.Context {
-	b.queueOpt.queueCI = fs.GetConfig(ctx)
-	b.queueOpt.ignoreListingChecksum = b.opt.IgnoreListingChecksum
-	b.queueOpt.ignoreListingModtime = !b.opt.Compare.Modtime
-	b.queueOpt.hashTypes = map[string]hash.Type{
+	queueCI = fs.GetConfig(ctx)
+	ignoreListingChecksum = b.opt.IgnoreListingChecksum
+	ignoreListingModtime = !b.opt.Compare.Modtime
+	hashTypes = map[string]hash.Type{
 		b.fs1.Name(): b.opt.Compare.HashType1,
 		b.fs2.Name(): b.opt.Compare.HashType2,
 	}
-	b.queueOpt.logger.LoggerFn = b.WriteResults
+	logger.LoggerFn = WriteResults
 	overridingEqual := false
 	if (b.opt.Compare.Modtime && b.opt.Compare.Checksum) || b.opt.Compare.DownloadHash {
 		overridingEqual = true
@@ -209,15 +209,15 @@ func (b *bisyncRun) preCopy(ctx context.Context) context.Context {
 		fs.Debugf(nil, "overriding equal")
 		ctx = b.EqualFn(ctx)
 	}
-	ctxCopyLogger := operations.WithSyncLogger(ctx, b.queueOpt.logger)
+	ctxCopyLogger := operations.WithSyncLogger(ctx, logger)
 	if b.opt.Compare.Checksum && (b.opt.Compare.NoSlowHash || b.opt.Compare.SlowHashSyncOnly) && b.opt.Compare.SlowHashDetected {
 		// set here in case !b.opt.Compare.Modtime
-		b.queueOpt.queueCI = fs.GetConfig(ctxCopyLogger)
+		queueCI = fs.GetConfig(ctxCopyLogger)
 		if b.opt.Compare.NoSlowHash {
-			b.queueOpt.queueCI.CheckSum = false
+			queueCI.CheckSum = false
 		}
 		if b.opt.Compare.SlowHashSyncOnly && !overridingEqual {
-			b.queueOpt.queueCI.CheckSum = true
+			queueCI.CheckSum = true
 		}
 	}
 	return ctxCopyLogger
@@ -245,16 +245,14 @@ func (b *bisyncRun) fastCopy(ctx context.Context, fsrc, fdst fs.Fs, files bilib.
 		}
 	}
 	b.SyncCI = fs.GetConfig(ctxCopy) // allows us to request graceful shutdown
-	if accounting.MaxCompletedTransfers != -1 {
-		accounting.MaxCompletedTransfers = -1 // we need a complete list in the event of graceful shutdown
-	}
+	accounting.MaxCompletedTransfers = -1 // we need a complete list in the event of graceful shutdown
 	ctxCopy, b.CancelSync = context.WithCancel(ctxCopy)
 	b.testFn()
 	err := sync.Sync(ctxCopy, fdst, fsrc, b.opt.CreateEmptySrcDirs)
-	prettyprint(b.queueOpt.logger, "b.queueOpt.logger", fs.LogLevelDebug)
-	getResults := ReadResults(b.queueOpt.logger.JSON)
+	prettyprint(logger, "logger", fs.LogLevelDebug)
+	getResults := ReadResults(logger.JSON)
 	fs.Debugf(nil, "Got %v results for %v", len(getResults), queueName)
 	lineFormat := "%s %8d %s %s %s %q\n"
@@ -294,9 +292,9 @@ func (b *bisyncRun) resyncDir(ctx context.Context, fsrc, fdst fs.Fs) ([]Results,
 	ctx = b.preCopy(ctx)
 	err := sync.CopyDir(ctx, fdst, fsrc, b.opt.CreateEmptySrcDirs)
-	prettyprint(b.queueOpt.logger, "b.queueOpt.logger", fs.LogLevelDebug)
-	getResults := ReadResults(b.queueOpt.logger.JSON)
+	prettyprint(logger, "logger", fs.LogLevelDebug)
+	getResults := ReadResults(logger.JSON)
 	fs.Debugf(nil, "Got %v results for %v", len(getResults), "resync")
 	return getResults, err
@@ -378,8 +376,8 @@ func (b *bisyncRun) saveQueue(files bilib.Names, jobName string) error {
 	return files.Save(queueFile)
 }
-func naptime(totalWait fs.Duration) {
-	expireTime := time.Now().Add(time.Duration(totalWait))
+func naptime(totalWait time.Duration) {
+	expireTime := time.Now().Add(totalWait)
 	fs.Logf(nil, "will retry in %v at %v", totalWait, expireTime.Format("2006-01-02 15:04:05 MST"))
 	for i := 0; time.Until(expireTime) > 0; i++ {
 		if i > 0 && i%10 == 0 {
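Much of the churn in these hunks is `fs.Duration(...)` / `time.Duration(...)` conversions: `fs.Duration` is a distinct named type over `time.Duration` (it customizes flag parsing and printing), and Go never converts named types implicitly, so every hand-off to the standard library needs an explicit cast. A stripped-down sketch of that boundary (the local `Duration` type here stands in for `fs.Duration`):

package main

import (
	"fmt"
	"time"
)

// Duration is a named type over time.Duration, as fs.Duration is in
// rclone; named types do not convert implicitly in Go.
type Duration time.Duration

func (d Duration) String() string { // custom rendering, e.g. "2m0s"
	return time.Duration(d).String()
}

func main() {
	maxLock := Duration(2 * time.Minute)

	// Explicit conversion is required to call stdlib APIs:
	expires := time.Now().Add(time.Duration(maxLock))
	fmt.Println(maxLock, "expires at", expires.Format(time.RFC3339))

	// ...and in the other direction when assigning from stdlib values:
	maxLock = Duration(90 * time.Second)
	_ = maxLock
}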

View File

@@ -77,7 +77,7 @@ func (conflictLoserChoices) Type() string {
 // ConflictLoserList is a list of --conflict-loser flag choices used in the help
 var ConflictLoserList = Opt.ConflictLoser.Help()
-func (b *bisyncRun) setResolveDefaults() error {
+func (b *bisyncRun) setResolveDefaults(ctx context.Context) error {
 	if b.opt.ConflictLoser == ConflictLoserSkip {
 		b.opt.ConflictLoser = ConflictLoserNumber
 	}
@@ -135,7 +135,7 @@ type namePair struct {
 	newName string
 }
-func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias string, renameSkipped, copy1to2, copy2to1 *bilib.Names, ds1, ds2 *deltaSet) (err error) {
+func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias string, renameSkipped, copy1to2, copy2to1 *bilib.Names, ds1, ds2 *deltaSet) error {
 	winningPath := 0
 	if b.opt.ConflictResolve != PreferNone {
 		winningPath = b.conflictWinner(ds1, ds2, file, alias)
@@ -197,7 +197,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
 	// note also that deletes and renames are mutually exclusive -- we never delete one path and rename the other.
 	if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 1 {
 		// delete 2, copy 1 to 2
-		err = b.delete(ctxMove, r.path2, path2, b.fs2, 2, renameSkipped)
+		err = b.delete(ctxMove, r.path2, path2, path1, b.fs2, 2, 1, renameSkipped)
 		if err != nil {
 			return err
 		}
@@ -207,7 +207,7 @@ func (b *bisyncRun) resolve(ctxMove context.Context, path1, path2, file, alias s
 		copy1to2.Add(r.path1.oldName)
 	} else if b.opt.ConflictLoser == ConflictLoserDelete && winningPath == 2 {
 		// delete 1, copy 2 to 1
-		err = b.delete(ctxMove, r.path1, path1, b.fs1, 1, renameSkipped)
+		err = b.delete(ctxMove, r.path1, path1, path2, b.fs1, 1, 2, renameSkipped)
 		if err != nil {
 			return err
 		}
@@ -261,15 +261,15 @@ func (ri *renamesInfo) getNames(is1to2 bool) (srcOldName, srcNewName, dstOldName
 func (b *bisyncRun) numerate(ctx context.Context, startnum int, file, alias string) int {
 	for i := startnum; i < math.MaxInt; i++ {
 		iStr := fmt.Sprint(i)
-		if !b.march.ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
-			!b.march.ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) &&
-			!b.march.ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
-			!b.march.ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) {
+		if !ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
+			!ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) &&
+			!ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
+			!ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) {
 			// make sure it still holds true with suffixes switched (it should)
-			if !b.march.ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
-				!b.march.ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) &&
-				!b.march.ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
-				!b.march.ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) {
+			if !ls1.has(SuffixName(ctx, file, b.opt.ConflictSuffix2+iStr)) &&
+				!ls1.has(SuffixName(ctx, alias, b.opt.ConflictSuffix2+iStr)) &&
+				!ls2.has(SuffixName(ctx, file, b.opt.ConflictSuffix1+iStr)) &&
+				!ls2.has(SuffixName(ctx, alias, b.opt.ConflictSuffix1+iStr)) {
 				fs.Debugf(file, "The first available suffix is: %s", iStr)
 				return i
 			}
@@ -280,10 +280,10 @@ func (b *bisyncRun) numerate(ctx context.Context, startnum int, file, alias stri
 // like numerate, but consider only one side's suffix (for when suffixes are different)
 func (b *bisyncRun) numerateSingle(ctx context.Context, startnum int, file, alias string, path int) int {
-	lsA, lsB := b.march.ls1, b.march.ls2
+	lsA, lsB := ls1, ls2
 	suffix := b.opt.ConflictSuffix1
 	if path == 2 {
-		lsA, lsB = b.march.ls2, b.march.ls1
+		lsA, lsB = ls2, ls1
 		suffix = b.opt.ConflictSuffix2
 	}
 	for i := startnum; i < math.MaxInt; i++ {
@@ -299,7 +299,7 @@ func (b *bisyncRun) numerateSingle(ctx context.Context, startnum int, file, alia
 	return 0 // not really possible, as no one has 9223372036854775807 conflicts, and if they do, they have bigger problems
 }
-func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum, winningPath int, q, renameSkipped *bilib.Names) (err error) {
+func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum, winningPath int, q, renameSkipped *bilib.Names) error {
 	if winningPath == thisPathNum {
 		b.indent(fmt.Sprintf("!Path%d", thisPathNum), thisPath+thisNamePair.newName, fmt.Sprintf("Not renaming Path%d copy, as it was determined the winner", thisPathNum))
 	} else {
@@ -321,7 +321,7 @@ func (b *bisyncRun) rename(ctx context.Context, thisNamePair namePair, thisPath,
 	return nil
 }
-func (b *bisyncRun) delete(ctx context.Context, thisNamePair namePair, thisPath string, thisFs fs.Fs, thisPathNum int, renameSkipped *bilib.Names) (err error) {
+func (b *bisyncRun) delete(ctx context.Context, thisNamePair namePair, thisPath, thatPath string, thisFs fs.Fs, thisPathNum, thatPathNum int, renameSkipped *bilib.Names) error {
 	skip := operations.SkipDestructive(ctx, thisNamePair.oldName, "delete")
 	if !skip {
 		b.indent(fmt.Sprintf("!Path%d", thisPathNum), thisPath+thisNamePair.oldName, fmt.Sprintf("Deleting Path%d copy", thisPathNum))
@@ -359,17 +359,17 @@ func (b *bisyncRun) conflictWinner(ds1, ds2 *deltaSet, remote1, remote2 string)
 		return 2
 	case PreferNewer, PreferOlder:
 		t1, t2 := ds1.time[remote1], ds2.time[remote2]
-		return b.resolveNewerOlder(t1, t2, remote1, b.opt.ConflictResolve)
+		return b.resolveNewerOlder(t1, t2, remote1, remote2, b.opt.ConflictResolve)
 	case PreferLarger, PreferSmaller:
 		s1, s2 := ds1.size[remote1], ds2.size[remote2]
-		return b.resolveLargerSmaller(s1, s2, remote1, b.opt.ConflictResolve)
+		return b.resolveLargerSmaller(s1, s2, remote1, remote2, b.opt.ConflictResolve)
 	default:
 		return 0
 	}
 }
 // returns the winning path number, or 0 if winner can't be determined
-func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1 string, prefer Prefer) int {
+func (b *bisyncRun) resolveNewerOlder(t1, t2 time.Time, remote1, remote2 string, prefer Prefer) int {
 	if fs.GetModifyWindow(b.octx, b.fs1, b.fs2) == fs.ModTimeNotSupported {
 		fs.Infof(remote1, "Winner cannot be determined as at least one path lacks modtime support.")
 		return 0
@@ -380,31 +380,31 @@
 	}
 	if t1.After(t2) {
 		if prefer == PreferNewer {
-			fs.Infof(remote1, "Path1 is newer. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t1.Sub(t2))
+			fs.Infof(remote1, "Path1 is newer. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t1.Sub(t2))
 			return 1
 		} else if prefer == PreferOlder {
-			fs.Infof(remote1, "Path2 is older. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t1.Sub(t2))
+			fs.Infof(remote1, "Path2 is older. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t1.Sub(t2))
 			return 2
 		}
 	} else if t1.Before(t2) {
 		if prefer == PreferNewer {
-			fs.Infof(remote1, "Path2 is newer. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
+			fs.Infof(remote1, "Path2 is newer. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
 			return 2
 		} else if prefer == PreferOlder {
-			fs.Infof(remote1, "Path1 is older. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
+			fs.Infof(remote1, "Path1 is older. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
 			return 1
 		}
 	}
 	if t1.Equal(t2) {
-		fs.Infof(remote1, "Winner cannot be determined as times are equal. Path1: %v, Path2: %v, Difference: %s", t1.In(LogTZ), t2.In(LogTZ), t2.Sub(t1))
+		fs.Infof(remote1, "Winner cannot be determined as times are equal. Path1: %v, Path2: %v, Difference: %s", t1.Local(), t2.Local(), t2.Sub(t1))
 		return 0
 	}
-	fs.Errorf(remote1, "Winner cannot be determined. Path1: %v, Path2: %v", t1.In(LogTZ), t2.In(LogTZ)) // shouldn't happen unless prefer is of wrong type
+	fs.Errorf(remote1, "Winner cannot be determined. Path1: %v, Path2: %v", t1.Local(), t2.Local()) // shouldn't happen unless prefer is of wrong type
 	return 0
 }
 // returns the winning path number, or 0 if winner can't be determined
-func (b *bisyncRun) resolveLargerSmaller(s1, s2 int64, remote1 string, prefer Prefer) int {
+func (b *bisyncRun) resolveLargerSmaller(s1, s2 int64, remote1, remote2 string, prefer Prefer) int {
 	if s1 < 0 || s2 < 0 {
 		fs.Infof(remote1, "Winner cannot be determined as at least one size is unknown. Path1: %v, Path2: %v", s1, s2)
 		return 0
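`resolveNewerOlder` boils down to a three-way modtime comparison that returns 1, 2, or 0 (undecidable) depending on the `--conflict-resolve` preference. A compact sketch of just that decision, leaving out the modify-window and logging details above:

package main

import (
	"fmt"
	"time"
)

// winner returns 1 if path1 wins, 2 if path2 wins, 0 if undecidable.
func winner(t1, t2 time.Time, preferNewer bool) int {
	switch {
	case t1.Equal(t2):
		return 0 // equal times: no winner
	case t1.After(t2) == preferNewer:
		return 1 // path1 matches the preference (newer/newer or older/older)
	default:
		return 2
	}
}

func main() {
	now := time.Now()
	fmt.Println(winner(now, now.Add(-time.Hour), true))  // 1: path1 is newer
	fmt.Println(winner(now, now.Add(-time.Hour), false)) // 2: path2 is older
	fmt.Println(winner(now, now, true))                  // 0: undecidable
}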

View File

@@ -20,6 +20,7 @@ func (b *bisyncRun) setResyncDefaults() {
 	}
 	if b.opt.ResyncMode != PreferNone {
 		b.opt.Resync = true
+		Opt.Resync = true // shouldn't be using this one, but set to be safe
 	}
 	// checks and warnings
@@ -40,18 +41,18 @@ func (b *bisyncRun) setResyncDefaults() {
 // It will generate path1 and path2 listings,
 // copy any unique files to the opposite path,
 // and resolve any differing files according to the --resync-mode.
-func (b *bisyncRun) resync(fctx context.Context) (err error) {
+func (b *bisyncRun) resync(octx, fctx context.Context) error {
 	fs.Infof(nil, "Copying Path2 files to Path1")
 	// Save blank filelists (will be filled from sync results)
-	ls1 := newFileList()
-	ls2 := newFileList()
-	err = ls1.save(b.newListing1)
+	var ls1 = newFileList()
+	var ls2 = newFileList()
+	err = ls1.save(fctx, b.newListing1)
 	if err != nil {
 		b.handleErr(ls1, "error saving ls1 from resync", err, true, true)
 		b.abort = true
 	}
-	err = ls2.save(b.newListing2)
+	err = ls2.save(fctx, b.newListing2)
 	if err != nil {
 		b.handleErr(ls2, "error saving ls2 from resync", err, true, true)
 		b.abort = true

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -61,7 +59,6 @@ INFO : - Path1 Queue copy to Path2 - {
 INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
 INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
 INFO : - Path1 Do queued copies to - Path2
-INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -136,7 +133,6 @@ INFO : - Path1 Queue copy to Path2 - {
 INFO : - Path1 Queue copy to Path2 - {path2/}file1.txt
 INFO : - Path1 Queue copy to Path2 - {path2/}subdir/file20.txt
 INFO : - Path1 Do queued copies to - Path2
-INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -89,7 +87,6 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -21,9 +21,7 @@ INFO : Using filters file {workdir/}exclude-other-filtersfile.txt
 INFO : Storing filters file hash to {workdir/}exclude-other-filtersfile.txt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -138,9 +136,7 @@ INFO : Using filters file {workdir/}include-other-filtersfile.txt
 INFO : Storing filters file hash to {workdir/}include-other-filtersfile.txt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -92,9 +90,7 @@ INFO : Copying Path2 files to Path1
 INFO : Checking access health
 INFO : Found 2 matching ".chk_file" files on both paths
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -104,9 +102,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -15,9 +15,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -15,9 +15,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -23,9 +23,7 @@ INFO : Bisyncing with Comparison Settings:
INFO : Synching Path1 "{path1/}" with Path2 "{path2/}" INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
INFO : Copying Path2 files to Path1 INFO : Copying Path2 files to Path1
INFO : - Path2 Resync is copying files to - Path1 INFO : - Path2 Resync is copying files to - Path1
INFO : There was nothing to transfer
INFO : - Path1 Resync is copying files to - Path2 INFO : - Path1 Resync is copying files to - Path2
INFO : There was nothing to transfer
INFO : Resync updating listings INFO : Resync updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}" INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful INFO : Bisync successful
@@ -82,7 +80,7 @@ INFO : Path2 checking for diffs
INFO : Applying changes INFO : Applying changes
INFO : - Path1 Queue copy to Path2 - {path2/}subdir INFO : - Path1 Queue copy to Path2 - {path2/}subdir
INFO : - Path1 Do queued copies to - Path2 INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer INFO : subdir: Making directory
INFO : Updating listings INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}" INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful INFO : Bisync successful
@@ -126,7 +124,6 @@ INFO : Path2: 1 changes:  0 new,  0 modified, 
INFO : Applying changes INFO : Applying changes
INFO : - Path2 Queue delete - {path2/}RCLONE_TEST INFO : - Path2 Queue delete - {path2/}RCLONE_TEST
INFO : - Path1 Do queued copies to - Path2 INFO : - Path1 Do queued copies to - Path2
INFO : There was nothing to transfer
INFO : Updating listings INFO : Updating listings
INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}" INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
INFO : Bisync successful INFO : Bisync successful
@@ -151,9 +148,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -193,7 +188,6 @@ INFO : Path2 checking for diffs
 INFO : Applying changes
 INFO : - Path2 Queue delete - {path2/}subdir
 INFO : - Path1 Do queued copies to - Path2
-INFO : There was nothing to transfer
 INFO : subdir: Removing directory
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -27,9 +27,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}測試Русский ěáñ/" with Path2 "{path2/}測試Русский ěáñ/"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}測試Русский ěáñ/" vs Path2 "{path2/}測試Русский ěáñ/"
 INFO : Bisync successful
@@ -86,9 +84,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -178,9 +174,7 @@ INFO : Using filters file {workdir/}測試_filtersfile.txt
 INFO : Storing filters file hash to {workdir/}測試_filtersfile.txt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -20,9 +20,7 @@ INFO : Using filters file {workdir/}filtersfile.flt
 INFO : Storing filters file hash to {workdir/}filtersfile.flt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -83,9 +81,7 @@ INFO : Using filters file {workdir/}filtersfile.txt
 INFO : Storing filters file hash to {workdir/}filtersfile.txt.{hashtype}
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -150,9 +146,7 @@ INFO : Using filters file {workdir/}filtersfile.txt
 INFO : Skipped storing filters file hash to {workdir/}filtersfile.txt.{hashtype} as --dry-run is set
 INFO : Copying Path2 files to Path1
 NOTICE: - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 NOTICE: - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -35,9 +33,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -86,7 +84,6 @@ INFO : - Path2 Queue delete - {
 INFO : - Path2 Queue delete - {path2/}file4.txt
 INFO : - Path2 Queue delete - {path2/}file5.txt
 INFO : - Path1 Do queued copies to - Path2
-INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -86,7 +84,6 @@ INFO : - Path1 Queue delete - {
 INFO : - Path1 Queue delete - {path1/}file4.txt
 INFO : - Path1 Queue delete - {path1/}file5.txt
 INFO : - Path2 Do queued copies to - Path1
-INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -15,9 +15,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -17,9 +17,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -117,9 +115,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -158,7 +154,6 @@ INFO : Applying changes
 INFO : - Path2 Queue copy to Path1 - {path1/}file2.txt
 INFO : - Path2 Queue copy to Path1 - {path1/}subdir/file21.txt
 INFO : - Path2 Do queued copies to - Path1
-INFO : There was nothing to transfer
 INFO : Updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful
@@ -176,7 +171,6 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -16,9 +16,7 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
-INFO : There was nothing to transfer
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
 INFO : Bisync successful

View File

@@ -39,7 +39,6 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

View File

@@ -22,7 +22,6 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"
@@ -130,7 +129,6 @@ INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
 INFO : file1.txt: Path1 is smaller. Path1: 33, Path2: 42, Difference: 9
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
 INFO : file1.txt: Path1 is smaller. Path1: 33, Path2: 42, Difference: 9
 INFO : Resync updating listings
@@ -160,7 +158,6 @@ INFO : Bisyncing with Comparison Settings:
 INFO : Synching Path1 "{path1/}" with Path2 "{path2/}"
 INFO : Copying Path2 files to Path1
 INFO : - Path2 Resync is copying files to - Path1
-INFO : There was nothing to transfer
 INFO : - Path1 Resync is copying files to - Path2
 INFO : Resync updating listings
 INFO : Validating listings for Path1 "{path1/}" vs Path2 "{path2/}"

Some files were not shown because too many files have changed in this diff