Mirror of https://github.com/rclone/rclone.git (synced 2025-12-17 08:43:19 +00:00)

Compare commits


1 commit

Author: Nick Craig-Wood
Commit: 394a4b0afe
Date: 2020-07-15 14:57:21 +01:00

vfs: remove virtual directory entries on backends which can have empty dirs

Before this change we only removed virtual directory entries when they
appeared in the listing.

This works fine except when virtual directory entries are deleted
outside rclone.

This change deletes virtual directory entries for backends which can
have empty directories before reading the directory.

See: https://forum.rclone.org/t/google-drive-rclone-rc-operations-mkdir-fails-on-repeats/17787
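To make the behaviour concrete, here is a minimal, self-contained sketch of the idea, not rclone's actual VFS code (the type and field names below are illustrative stand-ins): for a backend that can hold empty directories, virtual directory entries are forgotten before the directory cache is refreshed from a backend listing, so a directory deleted outside rclone does not linger.

```go
package main

import "fmt"

// cachedDir is a toy model of a VFS directory cache, not rclone's real Dir type.
// virtualDirs holds directories created locally (e.g. via mkdir) that have not
// yet been confirmed by a backend listing.
type cachedDir struct {
	canHaveEmptyDirs bool            // whether the backend supports empty directories
	virtualDirs      map[string]bool // locally created, unconfirmed directories
	entries          map[string]bool // what the cached directory currently shows
}

// readDir refreshes the cache from a backend listing. For backends that can
// have empty directories, virtual directory entries are dropped up front, so
// anything deleted outside rclone disappears instead of lingering until it
// happens to show up in a listing.
func (d *cachedDir) readDir(backendListing []string) {
	if d.canHaveEmptyDirs {
		for name := range d.virtualDirs {
			delete(d.virtualDirs, name)
			delete(d.entries, name)
		}
	}
	for _, name := range backendListing {
		d.entries[name] = true
	}
}

func main() {
	d := &cachedDir{
		canHaveEmptyDirs: true,
		virtualDirs:      map[string]bool{"newdir": true},
		entries:          map[string]bool{"newdir": true, "file.txt": true},
	}
	// "newdir" was deleted outside rclone, so the backend listing omits it.
	d.readDir([]string{"file.txt"})
	fmt.Println(d.entries) // map[file.txt:true] - the stale virtual entry is gone
}
```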
4642 changed files with 955271 additions and 266309 deletions


@@ -5,31 +5,19 @@ about: Report a problem with rclone
 <!--
-We understand you are having a problem with rclone; we want to help you with that! Welcome :-)
+We understand you are having a problem with rclone; we want to help you with that!
-**STOP and READ**
+If you've just got a question or aren't sure if you've found a bug then please use the rclone forum:
-**YOUR POST WILL BE REMOVED IF IT IS LOW QUALITY**:
-Please show the effort you've put into solving the problem and please be specific.
-People are volunteering their time to help! Low effort posts are not likely to get good answers!
-If you think you might have found a bug, try to replicate it with the latest beta (or stable).
-The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
-If you can still replicate it or just got a question then please use the rclone forum:
 https://forum.rclone.org/
-for a quick response instead of filing an issue on this repo.
+instead of filing an issue for a quick response.
-If nothing else helps, then please fill in the info below which helps us help you.
+If you think you might have found a bug, please can you try to replicate it with the latest beta?
-**DO NOT REDACT** any information except passwords/keys/personal info.
+https://beta.rclone.org/
-You should use 3 backticks to begin and end your paste to make it readable.
+If you can still replicate it with the latest beta, then please fill in the info below which makes our lives much easier. A log with -vv will make our day :-)
-Make sure to include a log obtained with '-vv'.
-You can also use '-vv --log-file bug.log' and a service such as https://pastebin.com or https://gist.github.com/
 Thank you
@@ -37,10 +25,6 @@ The Rclone Developers
 -->
-#### The associated forum post URL from `https://forum.rclone.org`
 #### What is the problem you are having with rclone?
@@ -49,26 +33,18 @@ The Rclone Developers
-#### Which OS you are using and how many bits (e.g. Windows 7, 64 bit)
+#### Which OS you are using and how many bits (eg Windows 7, 64 bit)
-#### Which cloud storage system are you using? (e.g. Google Drive)
+#### Which cloud storage system are you using? (eg Google Drive)
-#### The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
+#### The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
-#### A log from the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
+#### A log from the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)
-<!--- Please keep the note below for others who read your bug report. -->
-#### How to use GitHub
-* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
-* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
-* Subscribe to receive notifications on status change and new comments.


@@ -7,16 +7,12 @@ about: Suggest a new feature or enhancement for rclone
 Welcome :-)
-So you've got an idea to improve rclone? We love that!
+So you've got an idea to improve rclone? We love that! You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
-You'll be glad to hear we've incorporated hundreds of ideas from contributors already.
-Probably the latest beta (or stable) release has your feature, so try to update your rclone.
+Here is a checklist of things to do:
-The update instructions are available at https://rclone.org/commands/rclone_selfupdate/
-If it still isn't there, here is a checklist of things to do:
+1. Please search the old issues first for your idea and +1 or comment on an existing issue if possible.
+2. Discuss on the forum first: https://forum.rclone.org/
-1. Search the old issues for your idea and +1 or comment on an existing issue if possible.
-2. Discuss on the forum: https://forum.rclone.org/
 3. Make a feature request issue (this is the right place!).
 4. Be prepared to get involved making the feature :-)
@@ -26,9 +22,6 @@ The Rclone Developers
 -->
-#### The associated forum post URL from `https://forum.rclone.org`
 #### What is your current rclone version (output from `rclone version`)?
@@ -41,11 +34,3 @@ The Rclone Developers
 #### How do you think rclone should be changed to solve that?
-<!--- Please keep the note below for others who read your feature request. -->
-#### How to use GitHub
-* Please use the 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to show that you are affected by the same issue.
-* Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
-* Subscribe to receive notifications on status change and new comments.


@@ -22,7 +22,7 @@ Link issues and relevant forum posts here.
 #### Checklist
-- [ ] I have read the [contribution guidelines](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#submitting-a-new-feature-or-bug-fix).
+- [ ] I have read the [contribution guidelines](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#submitting-a-pull-request).
 - [ ] I have added tests for all changes in this PR if appropriate.
 - [ ] I have added documentation for the changes if appropriate.
 - [ ] All commit messages are in [house style](https://github.com/rclone/rclone/blob/master/CONTRIBUTING.md#commit-messages).


@@ -12,86 +12,89 @@ on:
 tags:
 - '*'
 pull_request:
-workflow_dispatch:
-inputs:
-manual:
-description: Manual run (bypass default conditions)
-type: boolean
-required: true
-default: true
 jobs:
 build:
-if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
 timeout-minutes: 60
 strategy:
 fail-fast: false
 matrix:
-job_name: ['linux', 'linux_386', 'mac_amd64', 'mac_arm64', 'windows', 'other_os', 'go1.18', 'go1.19']
+job_name: ['linux', 'mac', 'windows_amd64', 'windows_386', 'other_os', 'modules_race', 'go1.11', 'go1.12', 'go1.13']
 include:
 - job_name: linux
 os: ubuntu-latest
-go: '1.20'
+go: '1.14.x'
+modules: 'off'
 gotags: cmount
 build_flags: '-include "^linux/"'
 check: true
 quicktest: true
-racequicktest: true
-librclonetest: true
 deploy: true
-- job_name: linux_386
+- job_name: mac
-os: ubuntu-latest
+os: macOS-latest
-go: '1.20'
+go: '1.14.x'
-goarch: 386
+modules: 'off'
-gotags: cmount
+gotags: '' # cmount doesn't work on osx travis for some reason
-quicktest: true
-- job_name: mac_amd64
-os: macos-11
-go: '1.20'
-gotags: 'cmount'
 build_flags: '-include "^darwin/amd64" -cgo'
 quicktest: true
 racequicktest: true
 deploy: true
-- job_name: mac_arm64
+- job_name: windows_amd64
-os: macos-11
+os: windows-latest
-go: '1.20'
+go: '1.14.x'
-gotags: 'cmount'
+modules: 'off'
-build_flags: '-include "^darwin/arm64" -cgo -macos-arch arm64 -cgo-cflags=-I/usr/local/include -cgo-ldflags=-L/usr/local/lib'
+gotags: cmount
+build_flags: '-include "^windows/amd64" -cgo'
+quicktest: true
+racequicktest: true
 deploy: true
-- job_name: windows
+- job_name: windows_386
 os: windows-latest
-go: '1.20'
+go: '1.14.x'
+modules: 'off'
 gotags: cmount
-cgo: '0'
+goarch: '386'
-build_flags: '-include "^windows/"'
+cgo: '1'
-build_args: '-buildmode exe'
+build_flags: '-include "^windows/386" -cgo'
 quicktest: true
 deploy: true
 - job_name: other_os
 os: ubuntu-latest
-go: '1.20'
+go: '1.14.x'
-build_flags: '-exclude "^(windows/|darwin/|linux/)"'
+modules: 'off'
+build_flags: '-exclude "^(windows/|darwin/amd64|linux/)"'
 compile_all: true
 deploy: true
-- job_name: go1.18
+- job_name: modules_race
 os: ubuntu-latest
-go: '1.18'
+go: '1.14.x'
+modules: 'on'
 quicktest: true
 racequicktest: true
-- job_name: go1.19
+- job_name: go1.11
 os: ubuntu-latest
-go: '1.19'
+go: '1.11.x'
+modules: 'off'
+quicktest: true
+- job_name: go1.12
+os: ubuntu-latest
+go: '1.12.x'
+modules: 'off'
+quicktest: true
+- job_name: go1.13
+os: ubuntu-latest
+go: '1.13.x'
+modules: 'off'
 quicktest: true
-racequicktest: true
 name: ${{ matrix.job_name }}
@@ -99,24 +102,26 @@ jobs:
 steps:
 - name: Checkout
-uses: actions/checkout@v3
+uses: actions/checkout@v1
 with:
-fetch-depth: 0
+# Checkout into a fixed path to avoid import path problems on go < 1.11
+path: ./src/github.com/rclone/rclone
 - name: Install Go
-uses: actions/setup-go@v3
+uses: actions/setup-go@v1
 with:
 go-version: ${{ matrix.go }}
-check-latest: true
 - name: Set environment variables
 shell: bash
 run: |
-echo 'GOTAGS=${{ matrix.gotags }}' >> $GITHUB_ENV
+echo '::set-env name=GOPATH::${{ runner.workspace }}'
-echo 'BUILD_FLAGS=${{ matrix.build_flags }}' >> $GITHUB_ENV
+echo '::add-path::${{ runner.workspace }}/bin'
-echo 'BUILD_ARGS=${{ matrix.build_args }}' >> $GITHUB_ENV
+echo '::set-env name=GO111MODULE::${{ matrix.modules }}'
-if [[ "${{ matrix.goarch }}" != "" ]]; then echo 'GOARCH=${{ matrix.goarch }}' >> $GITHUB_ENV ; fi
+echo '::set-env name=GOTAGS::${{ matrix.gotags }}'
-if [[ "${{ matrix.cgo }}" != "" ]]; then echo 'CGO_ENABLED=${{ matrix.cgo }}' >> $GITHUB_ENV ; fi
+echo '::set-env name=BUILD_FLAGS::${{ matrix.build_flags }}'
+if [[ "${{ matrix.goarch }}" != "" ]]; then echo '::set-env name=GOARCH::${{ matrix.goarch }}' ; fi
+if [[ "${{ matrix.cgo }}" != "" ]]; then echo '::set-env name=CGO_ENABLED::${{ matrix.cgo }}' ; fi
 - name: Install Libraries on Linux
 shell: bash
@@ -124,25 +129,25 @@ jobs:
 sudo modprobe fuse
 sudo chmod 666 /dev/fuse
 sudo chown root:$USER /etc/fuse.conf
-sudo apt-get install fuse3 libfuse-dev rpm pkg-config
+sudo apt-get install fuse libfuse-dev rpm pkg-config
 if: matrix.os == 'ubuntu-latest'
 - name: Install Libraries on macOS
 shell: bash
 run: |
 brew update
-brew install --cask macfuse
+brew cask install osxfuse
-if: matrix.os == 'macos-11'
+if: matrix.os == 'macOS-latest'
 - name: Install Libraries on Windows
 shell: powershell
 run: |
 $ProgressPreference = 'SilentlyContinue'
 choco install -y winfsp zip
-echo "CPATH=C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse" | Out-File -FilePath $env:GITHUB_ENV -Encoding utf8 -Append
+Write-Host "::set-env name=CPATH::C:\Program Files\WinFsp\inc\fuse;C:\Program Files (x86)\WinFsp\inc\fuse"
 if ($env:GOARCH -eq "386") {
 choco install -y mingw --forcex86 --force
-echo "C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin" | Out-File -FilePath $env:GITHUB_PATH -Encoding utf8 -Append
+Write-Host "::add-path::C:\\ProgramData\\chocolatey\\lib\\mingw\\tools\\install\\mingw32\\bin"
 }
 # Copy mingw32-make.exe to make.exe so the same command line
 # can be used on Windows as on macOS and Linux
@@ -162,27 +167,10 @@ jobs:
 printf "\n\nSystem environment:\n\n"
 env
-- name: Go module cache
-uses: actions/cache@v3
-with:
-path: ~/go/pkg/mod
-key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
-restore-keys: |
-${{ runner.os }}-go-
-- name: Build rclone
-shell: bash
-run: |
-make
-- name: Rclone version
-shell: bash
-run: |
-rclone version
 - name: Run tests
 shell: bash
 run: |
+make
 make quicktest
 if: matrix.quicktest
@@ -192,13 +180,12 @@ jobs:
 make racequicktest
 if: matrix.racequicktest
-- name: Run librclone tests
+- name: Code quality test
 shell: bash
 run: |
-make -C librclone/ctest test
+make build_dep
-make -C librclone/ctest clean
+make check
-librclone/python/test_rclone.py
+if: matrix.check
-if: matrix.librclonetest
 - name: Compile all architectures test
 shell: bash
@@ -219,132 +206,45 @@ jobs:
 # Deploy binaries if enabled in config && not a PR && not a fork
 if: matrix.deploy && github.head_ref == '' && github.repository == 'rclone/rclone'
-lint:
+xgo:
-if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
+timeout-minutes: 60
-timeout-minutes: 30
+name: "xgo cross compile"
-name: "lint"
 runs-on: ubuntu-latest
 steps:
 - name: Checkout
-uses: actions/checkout@v3
+uses: actions/checkout@v1
-- name: Code quality test
-uses: golangci/golangci-lint-action@v3
 with:
-# Optional: version of golangci-lint to use in form of v1.2 or v1.2.3 or `latest` to use the latest version
+# Checkout into a fixed path to avoid import path problems on go < 1.11
-version: latest
+path: ./src/github.com/rclone/rclone
-# Run govulncheck on the latest go version, the one we build binaries with
+- name: Set environment variables
-- name: Install Go
-uses: actions/setup-go@v3
-with:
-go-version: '1.20'
-check-latest: true
-- name: Install govulncheck
-run: go install golang.org/x/vuln/cmd/govulncheck@latest
-- name: Scan for vulnerabilities
-run: govulncheck ./...
-android:
-if: ${{ github.event.inputs.manual == 'true' || (github.repository == 'rclone/rclone' && (github.event_name != 'pull_request' || github.event.pull_request.head.repo.full_name != github.event.pull_request.base.repo.full_name)) }}
-timeout-minutes: 30
-name: "android-all"
-runs-on: ubuntu-latest
-steps:
-- name: Checkout
-uses: actions/checkout@v3
-with:
-fetch-depth: 0
-# Upgrade together with NDK version
-- name: Set up Go
-uses: actions/setup-go@v3
-with:
-go-version: '1.20'
-- name: Go module cache
-uses: actions/cache@v3
-with:
-path: ~/go/pkg/mod
-key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
-restore-keys: |
-${{ runner.os }}-go-
-- name: Set global environment variables
 shell: bash
 run: |
-echo "VERSION=$(make version)" >> $GITHUB_ENV
+echo '::set-env name=GOPATH::${{ runner.workspace }}'
+echo '::add-path::${{ runner.workspace }}/bin'
-- name: build native rclone
+- name: Cross-compile rclone
 run: |
-make
+docker pull billziss/xgo-cgofuse
+GO111MODULE=off go get -v github.com/karalabe/xgo # don't add to go.mod
+# xgo \
+# -image=billziss/xgo-cgofuse \
+# -targets=darwin/386,darwin/amd64,linux/386,linux/amd64,windows/386,windows/amd64 \
+# -tags cmount \
+# -dest build \
+# .
+xgo \
+-image=billziss/xgo-cgofuse \
+-targets=android/*,ios/* \
+-dest build \
+.
-- name: install gomobile
+- name: Build rclone
 run: |
-go install golang.org/x/mobile/cmd/gobind@latest
+docker pull golang
-go install golang.org/x/mobile/cmd/gomobile@latest
+docker run --rm -v "$PWD":/usr/src/rclone -w /usr/src/rclone golang go build -mod=vendor -v
-env PATH=$PATH:~/go/bin gomobile init
-echo "RCLONE_NDK_VERSION=21" >> $GITHUB_ENV
-- name: arm-v7a gomobile build
-run: env PATH=$PATH:~/go/bin gomobile bind -androidapi ${RCLONE_NDK_VERSION} -v -target=android/arm -javapkg=org.rclone -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} github.com/rclone/rclone/librclone/gomobile
-- name: arm-v7a Set environment variables
-shell: bash
-run: |
-echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/armv7a-linux-androideabi${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
-echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
-echo 'GOOS=android' >> $GITHUB_ENV
-echo 'GOARCH=arm' >> $GITHUB_ENV
-echo 'GOARM=7' >> $GITHUB_ENV
-echo 'CGO_ENABLED=1' >> $GITHUB_ENV
-echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
-- name: arm-v7a build
-run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv7a .
-- name: arm64-v8a Set environment variables
-shell: bash
-run: |
-echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/aarch64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
-echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
-echo 'GOOS=android' >> $GITHUB_ENV
-echo 'GOARCH=arm64' >> $GITHUB_ENV
-echo 'CGO_ENABLED=1' >> $GITHUB_ENV
-echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
-- name: arm64-v8a build
-run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-armv8a .
-- name: x86 Set environment variables
-shell: bash
-run: |
-echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/i686-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
-echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
-echo 'GOOS=android' >> $GITHUB_ENV
-echo 'GOARCH=386' >> $GITHUB_ENV
-echo 'CGO_ENABLED=1' >> $GITHUB_ENV
-echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
-- name: x86 build
-run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x86 .
-- name: x64 Set environment variables
-shell: bash
-run: |
-echo "CC=$(echo $ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64/bin/x86_64-linux-android${RCLONE_NDK_VERSION}-clang)" >> $GITHUB_ENV
-echo "CC_FOR_TARGET=$CC" >> $GITHUB_ENV
-echo 'GOOS=android' >> $GITHUB_ENV
-echo 'GOARCH=amd64' >> $GITHUB_ENV
-echo 'CGO_ENABLED=1' >> $GITHUB_ENV
-echo 'CGO_LDFLAGS=-fuse-ld=lld -s -w' >> $GITHUB_ENV
-- name: x64 build
-run: go build -v -tags android -trimpath -ldflags '-s -X github.com/rclone/rclone/fs.Version='${VERSION} -o build/rclone-android-${RCLONE_NDK_VERSION}-x64 .
 - name: Upload artifacts
 run: |


@@ -7,20 +7,19 @@ on:
 jobs:
 build:
-if: github.repository == 'rclone/rclone'
 runs-on: ubuntu-latest
 name: Build image job
 steps:
 - name: Checkout master
-uses: actions/checkout@v3
+uses: actions/checkout@v2
 with:
 fetch-depth: 0
 - name: Build and publish image
-uses: ilteoood/docker_buildx@1.1.0
+uses: ilteoood/docker_buildx@439099796bfc03dd9cedeb72a0c7cb92be5cc92c
 with:
 tag: beta
 imageName: rclone/rclone
-platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
+platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7
 publish: true
 dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
 dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}


@@ -6,12 +6,11 @@ on:
 jobs:
 build:
-if: github.repository == 'rclone/rclone'
 runs-on: ubuntu-latest
 name: Build image job
 steps:
 - name: Checkout master
-uses: actions/checkout@v3
+uses: actions/checkout@v2
 with:
 fetch-depth: 0
 - name: Get actual patch version
@@ -24,36 +23,11 @@ jobs:
 id: actual_major_version
 run: echo ::set-output name=ACTUAL_MAJOR_VERSION::$(echo $GITHUB_REF | cut -d / -f 3 | sed 's/v//g' | cut -d "." -f 1)
 - name: Build and publish image
-uses: ilteoood/docker_buildx@1.1.0
+uses: ilteoood/docker_buildx@439099796bfc03dd9cedeb72a0c7cb92be5cc92c
 with:
 tag: latest,${{ steps.actual_patch_version.outputs.ACTUAL_PATCH_VERSION }},${{ steps.actual_minor_version.outputs.ACTUAL_MINOR_VERSION }},${{ steps.actual_major_version.outputs.ACTUAL_MAJOR_VERSION }}
 imageName: rclone/rclone
-platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6
+platform: linux/amd64,linux/386,linux/arm64,linux/arm/v7
 publish: true
 dockerHubUser: ${{ secrets.DOCKER_HUB_USER }}
 dockerHubPassword: ${{ secrets.DOCKER_HUB_PASSWORD }}
-build_docker_volume_plugin:
-if: github.repository == 'rclone/rclone'
-needs: build
-runs-on: ubuntu-latest
-name: Build docker plugin job
-steps:
-- name: Checkout master
-uses: actions/checkout@v3
-with:
-fetch-depth: 0
-- name: Build and publish docker plugin
-shell: bash
-run: |
-VER=${GITHUB_REF#refs/tags/}
-PLUGIN_USER=rclone
-docker login --username ${{ secrets.DOCKER_HUB_USER }} \
---password-stdin <<< "${{ secrets.DOCKER_HUB_PASSWORD }}"
-for PLUGIN_ARCH in amd64 arm64 arm/v7 arm/v6 ;do
-export PLUGIN_USER PLUGIN_ARCH
-make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}
-make docker-plugin PLUGIN_TAG=${PLUGIN_ARCH/\//-}-${VER#v}
-done
-make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=latest
-make docker-plugin PLUGIN_ARCH=amd64 PLUGIN_TAG=${VER#v}

.gitignore

@@ -1,7 +1,6 @@
 *~
 _junk/
 rclone
-rclone.exe
 build
 docs/public
 rclone.iml
@@ -10,8 +9,3 @@ rclone.iml
 *.test
 *.log
 *.iml
-fuzz-build.zip
-*.orig
-*.rej
-Thumbs.db
-__pycache__


@@ -5,7 +5,7 @@ linters:
 - deadcode
 - errcheck
 - goimports
-- revive
+- golint
 - ineffassign
 - structcheck
 - varcheck
@@ -20,11 +20,7 @@ issues:
 exclude-use-default: false
 # Maximum issues count per one linter. Set to 0 to disable. Default is 50.
-max-issues-per-linter: 0
+max-per-linter: 0
 # Maximum count of issues with the same text. Set to 0 to disable. Default is 3.
 max-same-issues: 0
-run:
-# timeout for analysis, e.g. 30s, 5m, default is 1m
-timeout: 10m


@@ -12,164 +12,94 @@ When filing an issue, please include the following information if
 possible as well as a description of the problem. Make sure you test
 with the [latest beta of rclone](https://beta.rclone.org/):
-* Rclone version (e.g. output from `rclone version`)
+* Rclone version (eg output from `rclone -V`)
-* Which OS you are using and how many bits (e.g. Windows 10, 64 bit)
+* Which OS you are using and how many bits (eg Windows 7, 64 bit)
-* The command you were trying to run (e.g. `rclone copy /tmp remote:tmp`)
+* The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
-* A log of the command with the `-vv` flag (e.g. output from `rclone -vv copy /tmp remote:tmp`)
+* A log of the command with the `-vv` flag (eg output from `rclone -vv copy /tmp remote:tmp`)
 * if the log contains secrets then edit the file with a text editor first to obscure them
-## Submitting a new feature or bug fix ##
+## Submitting a pull request ##
 If you find a bug that you'd like to fix, or a new feature that you'd
 like to implement then please submit a pull request via GitHub.
-If it is a big feature, then [make an issue](https://github.com/rclone/rclone/issues) first so it can be discussed.
+If it is a big feature then make an issue first so it can be discussed.
-To prepare your pull request first press the fork button on [rclone's GitHub
+You'll need a Go environment set up with GOPATH set. See [the Go
+getting started docs](https://golang.org/doc/install) for more info.
+First in your web browser press the fork button on [rclone's GitHub
 page](https://github.com/rclone/rclone).
-Then [install Git](https://git-scm.com/downloads) and set your public contribution [name](https://docs.github.com/en/github/getting-started-with-github/setting-your-username-in-git) and [email](https://docs.github.com/en/github/setting-up-and-managing-your-github-user-account/setting-your-commit-email-address#setting-your-commit-email-address-in-git).
+Now in your terminal
-Next open your terminal, change directory to your preferred folder and initialise your local rclone project:
+go get -u github.com/rclone/rclone
+cd $GOPATH/src/github.com/rclone/rclone
-git clone https://github.com/rclone/rclone.git
-cd rclone
 git remote rename origin upstream
-# if you have SSH keys setup in your GitHub account:
 git remote add origin git@github.com:YOURUSER/rclone.git
-# otherwise:
-git remote add origin https://github.com/YOURUSER/rclone.git
-Note that most of the terminal commands in the rest of this guide must be executed from the rclone folder created above.
+Make a branch to add your new feature
-Now [install Go](https://golang.org/doc/install) and verify your installation:
-go version
-Great, you can now compile and execute your own version of rclone:
-go build
-./rclone version
-(Note that you can also replace `go build` with `make`, which will include a
-more accurate version number in the executable as well as enable you to specify
-more build options.)
-Finally make a branch to add your new feature
 git checkout -b my-new-feature
 And get hacking.
-You may like one of the [popular editors/IDE's for Go](https://github.com/golang/go/wiki/IDEsAndTextEditorPlugins) and a quick view on the rclone [code organisation](#code-organisation).
+When ready - run the unit tests for the code you changed
-When ready - test the affected functionality and run the unit tests for the code you changed
-cd folder/with/changed/files
 go test -v
-Note that you may need to make a test remote, e.g. `TestSwift` for some
+Note that you may need to make a test remote, eg `TestSwift` for some
 of the unit tests.
-This is typically enough if you made a simple bug fix, otherwise please read the rclone [testing](#testing) section too.
+Note the top level Makefile targets
+* make check
+* make test
+Both of these will be run by Travis when you make a pull request but
+you can do this yourself locally too. These require some extra go
+packages which you can install with
+* make build_dep
 Make sure you
-* Add [unit tests](#testing) for a new feature.
 * Add [documentation](#writing-documentation) for a new feature.
-* [Commit your changes](#committing-your-changes) using the [message guideline](#commit-messages).
+* Follow the [commit message guidelines](#commit-messages).
+* Add [unit tests](#testing) for a new feature
+* squash commits down to one per feature
+* rebase to master with `git rebase master`
-When you are done with that push your changes to GitHub:
+When you are done with that
-git push -u origin my-new-feature
+git push origin my-new-feature
-and open the GitHub website to [create your pull
+Go to the GitHub website and click [Create pull
 request](https://help.github.com/articles/creating-a-pull-request/).
-Your changes will then get reviewed and you might get asked to fix some stuff. If so, then make the changes in the same branch, commit and push your updates to GitHub.
+You patch will get reviewed and you might get asked to fix some stuff.
-You may sometimes be asked to [base your changes on the latest master](#basing-your-changes-on-the-latest-master) or [squash your commits](#squashing-your-commits).
+If so, then make the changes in the same branch, squash the commits (make multiple commits one commit) by running:
+```
+git log # See how many commits you want to squash
+git reset --soft HEAD~2 # This squashes the 2 latest commits together.
+git status # Check what will happen, if you made a mistake resetting, you can run git reset 'HEAD@{1}' to undo.
+git commit # Add a new commit message.
+git push --force # Push the squashed commit to your GitHub repo.
+# For more, see Stack Overflow, Git docs, or generally Duck around the web. jtagcat also reccommends wizardzines.com
+```
-## Using Git and GitHub ##
+## CI for your fork ##
-### Committing your changes ###
-Follow the guideline for [commit messages](#commit-messages) and then:
-git checkout my-new-feature # To switch to your branch
-git status # To see the new and changed files
-git add FILENAME # To select FILENAME for the commit
-git status # To verify the changes to be committed
-git commit # To do the commit
-git log # To verify the commit. Use q to quit the log
-You can modify the message or changes in the latest commit using:
-git commit --amend
-If you amend to commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
-### Replacing your previously pushed commits ###
-Note that you are about to rewrite the GitHub history of your branch. It is good practice to involve your collaborators before modifying commits that have been pushed to GitHub.
-Your previously pushed commits are replaced by:
-git push --force origin my-new-feature
-### Basing your changes on the latest master ###
-To base your changes on the latest version of the [rclone master](https://github.com/rclone/rclone/tree/master) (upstream):
-git checkout master
-git fetch upstream
-git merge --ff-only
-git push origin --follow-tags # optional update of your fork in GitHub
-git checkout my-new-feature
-git rebase master
-If you rebase commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
-### Squashing your commits ###
-To combine your commits into one commit:
-git log # To count the commits to squash, e.g. the last 2
-git reset --soft HEAD~2 # To undo the 2 latest commits
-git status # To check everything is as expected
-If everything is fine, then make the new combined commit:
-git commit # To commit the undone commits as one
-otherwise, you may roll back using:
-git reflog # To check that HEAD{1} is your previous state
-git reset --soft 'HEAD@{1}' # To roll back to your previous state
-If you squash commits that have been pushed to GitHub, then you will have to [replace your previously pushed commits](#replacing-your-previously-pushed-commits).
-Tip: You may like to use `git rebase -i master` if you are experienced or have a more complex situation.
-### GitHub Continuous Integration ###
 rclone currently uses [GitHub Actions](https://github.com/rclone/rclone/actions) to build and test the project, which should be automatically available for your fork too from the `Actions` tab in your repository.
 ## Testing ##
-### Quick testing ###
 rclone's tests are run from the go testing framework, so at the top
 level you can run this to run all the tests.
 go test -v ./...
-You can also use `make`, if supported by your platform
-make quicktest
-The quicktest is [automatically run by GitHub](#github-continuous-integration) when you push your branch to GitHub.
-### Backend testing ###
 rclone contains a mixture of unit tests and integration tests.
 Because it is difficult (and in some respects pointless) to test cloud
 storage systems by mocking all their interfaces, rclone unit tests can
@@ -185,8 +115,8 @@ are skipped if `TestDrive:` isn't defined.
 cd backend/drive
 go test -v
-You can then run the integration tests which test all of rclone's
+You can then run the integration tests which tests all of rclone's
-operations. Normally these get run against the local file system,
+operations. Normally these get run against the local filing system,
 but they can be run against any of the remotes.
 cd fs/sync
@@ -197,25 +127,18 @@ but they can be run against any of the remotes.
 go test -v -remote TestDrive:
 If you want to use the integration test framework to run these tests
-altogether with an HTML report and test retries then from the
+all together with an HTML report and test retries then from the
 project root:
 go install github.com/rclone/rclone/fstest/test_all
 test_all -backend drive
-### Full integration testing ###
 If you want to run all the integration tests against all the remotes,
 then change into the project root and run
-make check
 make test
-The commands may require some extra go packages which you can install with
+This command is run daily on the integration test server. You can
-make build_dep
-The full integration tests are run daily on the integration test server. You can
 find the results at https://pub.rclone.org/integration-tests/
 ## Code Organisation ##
@@ -230,10 +153,9 @@ with modules beneath.
 * cmd - the rclone commands
 * all - import this to load all the commands
 * ...commands
-* cmdtest - end-to-end tests of commands, flags, environment variables,...
 * docs - the documentation and website
 * content - adjust these docs only - everything else is autogenerated
-* command - these are auto-generated - edit the corresponding .go file
+* command - these are auto generated - edit the corresponding .go file
 * fs - main rclone definitions - minimal amount of code
 * accounting - bandwidth limiting and statistics
 * asyncreader - an io.Reader which reads ahead
@@ -248,7 +170,7 @@ with modules beneath.
 * log - logging facilities
 * march - iterates directories in lock step
 * object - in memory Fs objects
-* operations - primitives for sync, e.g. Copy, Move
+* operations - primitives for sync, eg Copy, Move
 * sync - sync directories
 * walk - walk a directory
 * fstest - provides integration test framework
@@ -256,7 +178,7 @@ with modules beneath.
 * mockdir - mocks an fs.Directory
 * mockobject - mocks an fs.Object
 * test_all - Runs integration tests for everything
-* graphics - the images used in the website, etc.
+* graphics - the images used in the website etc
 * lib - libraries used by the backend
 * atexit - register functions to run when rclone exits
 * dircache - directory ID to name caching
@@ -264,6 +186,7 @@ with modules beneath.
 * pacer - retries with backoff and paces operations
 * readers - a selection of useful io.Readers
 * rest - a thin abstraction over net/http for REST
+* vendor - 3rd party code managed by `go mod`
 * vfs - Virtual FileSystem layer for implementing rclone mount and similar
 ## Writing Documentation ##
@@ -275,39 +198,18 @@ If you add a new general flag (not for a backend), then document it in
 alphabetical order.
 If you add a new backend option/flag, then it should be documented in
-the source file in the `Help:` field.
+the source file in the `Help:` field. The first line of this is used
+for the flag help, the remainder is shown to the user in `rclone
-* Start with the most important information about the option,
+config` and is added to the docs with `make backenddocs`.
-as a single sentence on a single line.
-* This text will be used for the command-line flag help.
-* It will be combined with other information, such as any default value,
-and the result will look odd if not written as a single sentence.
-* It should end with a period/full stop character, which will be shown
-in docs but automatically removed when producing the flag help.
-* Try to keep it below 80 characters, to reduce text wrapping in the terminal.
-* More details can be added in a new paragraph, after an empty line (`"\n\n"`).
-* Like with docs generated from Markdown, a single line break is ignored
-and two line breaks creates a new paragraph.
-* This text will be shown to the user in `rclone config`
-and in the docs (where it will be added by `make backenddocs`,
-normally run some time before next release).
-* To create options of enumeration type use the `Examples:` field.
-* Each example value have their own `Help:` field, but they are treated
-a bit different than the main option help text. They will be shown
-as an unordered list, therefore a single line break is enough to
-create a new list item. Also, for enumeration texts like name of
-countries, it looks better without an ending period/full stop character.
 The only documentation you need to edit are the `docs/content/*.md`
-files. The `MANUAL.*`, `rclone.1`, website, etc. are all auto-generated
+files. The MANUAL.*, rclone.1, web site etc are all auto generated
 from those during the release process. See the `make doc` and `make
 website` targets in the Makefile if you are interested in how. You
 don't need to run these when adding a feature.
-Documentation for rclone sub commands is with their code, e.g.
+Documentation for rclone sub commands is with their code, eg
-`cmd/ls/ls.go`. Write flag help strings as a single sentence on a single
+`cmd/ls/ls.go`.
-line, without a period/full stop character at the end, as it will be
-combined unmodified with other information (such as any default value).
 Note that you can use [GitHub's online editor](https://help.github.com/en/github/managing-files-in-a-repository/editing-files-in-another-users-repository)
 for small changes in the docs which makes it very easy.
@@ -350,7 +252,7 @@ And here is an example of a longer one:
 ```
 mount: fix hang on errored upload
-In certain circumstances, if an upload failed then the mount could hang
+In certain circumstances if an upload failed then the mount could hang
 indefinitely. This was fixed by closing the read pipe after the Put
 completed. This will cause the write side to return a pipe closed
 error fixing the hang.
@@ -364,25 +266,41 @@ rclone uses the [go
 modules](https://tip.golang.org/cmd/go/#hdr-Modules__module_versions__and_more)
 support in go1.11 and later to manage its dependencies.
-rclone can be built with modules outside of the `GOPATH`.
+**NB** you must be using go1.11 or above to add a dependency to
+rclone. Rclone will still build with older versions of go, but we use
+the `go mod` command for dependencies which is only in go1.11 and
+above.
+rclone can be built with modules outside of the GOPATH, but for
+backwards compatibility with older go versions, rclone also maintains
+a `vendor` directory with all the external code rclone needs for
+building.
+The `vendor` directory is entirely managed by the `go mod` tool, do
+not add things manually.
 To add a dependency `github.com/ncw/new_dependency` see the
-instructions below. These will fetch the dependency and add it to
+instructions below. These will fetch the dependency, add it to
-`go.mod` and `go.sum`.
+`go.mod` and `go.sum` and vendor it for older go versions.
 GO111MODULE=on go get github.com/ncw/new_dependency
+GO111MODULE=on go mod vendor
 You can add constraints on that package when doing `go get` (see the
 go docs linked above), but don't unless you really need to.
-Please check in the changes generated by `go mod` including `go.mod`
+Please check in the changes generated by `go mod` including the
-and `go.sum` in the same commit as your other changes.
+`vendor` directory and `go.mod` and `go.sum` in a single commit
+separate from any other code changes with the title "vendor: add
+github.com/ncw/new_dependency". Remember to `git add` any new files
+in `vendor`.
 ## Updating a dependency ##
 If you need to update a dependency then run
-GO111MODULE=on go get -u golang.org/x/crypto
+GO111MODULE=on go get -u github.com/pkg/errors
+GO111MODULE=on go mod vendor
 Check in a single commit as above.
@@ -425,15 +343,15 @@ Research
 Getting going
 * Create `backend/remote/remote.go` (copy this from a similar remote)
-* box is a good one to start from if you have a directory-based remote
+* box is a good one to start from if you have a directory based remote
-* b2 is a good one to start from if you have a bucket-based remote
+* b2 is a good one to start from if you have a bucket based remote
 * Add your remote to the imports in `backend/all/all.go`
 * HTTP based remotes are easiest to maintain if they use rclone's rest module, but if there is a really good go SDK then use that instead.
 * Try to implement as many optional methods as possible as it makes the remote more usable.
 * Use lib/encoder to make sure we can encode any path name and `rclone info` to help determine the encodings needed
 * `rclone purge -v TestRemote:rclone-info`
-* `rclone test info --all --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
+* `rclone info --remote-encoding None -vv --write-json remote.json TestRemote:rclone-info`
-* `go run cmd/test/info/internal/build_csv/main.go -o remote.csv remote.json`
+* `go run cmd/info/internal/build_csv/main.go -o remote.csv remote.json`
 * open `remote.csv` in a spreadsheet and examine
 Unit tests
@@ -463,7 +381,7 @@ See the [testing](#testing) section for more information on integration tests.
 Add your fs to the docs - you'll need to pick an icon for it from
 [fontawesome](http://fontawesome.io/icons/). Keep lists of remotes in
-alphabetical order of full name of remote (e.g. `drive` is ordered as
+alphabetical order of full name of remote (eg `drive` is ordered as
 `Google Drive`) but with the local file system last.
 * `README.md` - main GitHub page
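As an aside on the `Help:` guidance quoted in the CONTRIBUTING.md diff above, a backend option declaration looks roughly like the following. This is a hedged sketch, not code from the diff: the backend name and options are invented, and the `fs.Option`/`fs.OptionExample` field names reflect rclone's `fs` package as I understand it.

```go
package myremote // hypothetical example backend, for illustration only

import "github.com/rclone/rclone/fs"

func init() {
	// Real backends also set NewFs and more metadata here; this sketch only
	// shows how Help: and Examples: are typically filled in.
	fs.Register(&fs.RegInfo{
		Name:        "myremote",
		Description: "My Example Remote",
		Options: []fs.Option{{
			Name: "region",
			// First sentence becomes the flag help; the paragraph after the
			// blank line is shown in `rclone config` and the generated docs.
			Help: "Region to connect to.\n\nOnly needed if your provider has more than one region.",
			Examples: []fs.OptionExample{{
				Value: "eu",
				Help:  "Europe",
			}, {
				Value: "us",
				Help:  "United States",
			}},
		}, {
			Name:     "chunk_size",
			Help:     "Chunk size to use for uploading.",
			Default:  fs.SizeSuffix(64 * 1024 * 1024),
			Advanced: true,
		}},
	})
}
```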


@@ -16,8 +16,6 @@ RUN apk --no-cache add ca-certificates fuse tzdata && \
 COPY --from=builder /go/src/github.com/rclone/rclone/rclone /usr/local/bin/
-RUN addgroup -g 1009 rclone && adduser -u 1009 -Ds /bin/sh -G rclone rclone
 ENTRYPOINT [ "rclone" ]
 WORKDIR /data


@@ -11,15 +11,15 @@ Current active maintainers of rclone are:
 | Fabian Möller | @B4dM4n | |
 | Alex Chen | @Cnly | onedrive backend |
 | Sandeep Ummadi | @sandeepkru | azureblob backend |
-| Sebastian Bünger | @buengese | jottacloud, yandex & compress backends |
+| Sebastian Bünger | @buengese | jottacloud & yandex backends |
 | Ivan Andreev | @ivandeex | chunker & mailru backends |
 | Max Sum | @Max-Sum | union backend |
 | Fred | @creativeprojects | seafile backend |
-| Caleb Case | @calebcase | storj backend |
+| Caleb Case | @calebcase | tardigrade backend |
 **This is a work in progress Draft**
-This is a guide for how to be an rclone maintainer. This is mostly a write-up of what I (@ncw) attempt to do.
+This is a guide for how to be an rclone maintainer. This is mostly a writeup of what I (@ncw) attempt to do.
 ## Triaging Tickets ##
@@ -27,17 +27,17 @@ When a ticket comes in it should be triaged. This means it should be classified
 Rclone uses the labels like this:
-* `bug` - a definitely verified bug
+* `bug` - a definite verified bug
 * `can't reproduce` - a problem which we can't reproduce
 * `doc fix` - a bug in the documentation - if users need help understanding the docs add this label
 * `duplicate` - normally close these and ask the user to subscribe to the original
 * `enhancement: new remote` - a new rclone backend
 * `enhancement` - a new feature
 * `FUSE` - to do with `rclone mount` command
-* `good first issue` - mark these if you find a small self-contained issue - these get shown to new visitors to the project
+* `good first issue` - mark these if you find a small self contained issue - these get shown to new visitors to the project
-* `help` wanted - mark these if you find a self-contained issue - these get shown to new visitors to the project
+* `help` wanted - mark these if you find a self contained issue - these get shown to new visitors to the project
 * `IMPORTANT` - note to maintainers not to forget to fix this for the release
-* `maintenance` - internal enhancement, code re-organisation, etc.
+* `maintenance` - internal enhancement, code re-organisation etc
 * `Needs Go 1.XX` - waiting for that version of Go to be released
 * `question` - not a `bug` or `enhancement` - direct to the forum for next time
 * `Remote: XXX` - which rclone backend this affects
@@ -45,13 +45,13 @@ Rclone uses the labels like this:
 If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "good first issue" tag to give new contributors something easy to do to get going.
-When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (e.g. the next go release).
+When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Help Wanted. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (eg the next go release).
 The milestones have these meanings:
 * v1.XX - stuff we would like to fit into this release
 * v1.XX+1 - stuff we are leaving until the next release
-* Soon - stuff we think is a good idea - waiting to be scheduled for a release
+* Soon - stuff we think is a good idea - waiting to be scheduled to a release
 * Help wanted - blue sky stuff that might get moved up, or someone could help with
 * Known bugs - bugs waiting on external factors or we aren't going to fix for the moment
@@ -65,7 +65,7 @@ Close tickets as soon as you can - make sure they are tagged with a release. Po
 Try to process pull requests promptly!
-Merging pull requests on GitHub itself works quite well nowadays so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
+Merging pull requests on GitHub itself works quite well now-a-days so you can squash and rebase or rebase pull requests. rclone doesn't use merge commits. Use the squash and rebase option if you need to edit the commit message.
 After merging the commit, in your local master branch, do `git pull` then run `bin/update-authors.py` to update the authors file then `git push`.
@@ -81,15 +81,15 @@ Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer
 High impact regressions should be fixed before the next release.
-Near the start of the release cycle, the dependencies should be updated with `make update` to give time for bugs to surface.
+Near the start of the release cycle the dependencies should be updated with `make update` to give time for bugs to surface.
 Towards the end of the release cycle try not to merge anything too big so let things settle down.
-Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time-consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
+Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time consuming often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
 ## Mailing list ##
-There is now an invite-only mailing list for rclone developers `rclone-dev` on google groups.
+There is now an invite only mailing list for rclone developers `rclone-dev` on google groups.
 ## TODO ##

MANUAL.html (generated): 26557 lines changed; diff too large to show

MANUAL.md (generated): 29922 lines changed; diff too large to show

MANUAL.txt (generated): 35348 lines changed; diff too large to show

Makefile (131 lines changed)

@@ -7,29 +7,27 @@ RELEASE_TAG := $(shell git tag -l --points-at HEAD)
VERSION := $(shell cat VERSION) VERSION := $(shell cat VERSION)
# Last tag on this branch # Last tag on this branch
LAST_TAG := $(shell git describe --tags --abbrev=0) LAST_TAG := $(shell git describe --tags --abbrev=0)
# Next version
NEXT_VERSION := $(shell echo $(VERSION) | awk -F. -v OFS=. '{print $$1,$$2+1,0}')
NEXT_PATCH_VERSION := $(shell echo $(VERSION) | awk -F. -v OFS=. '{print $$1,$$2,$$3+1}')
# If we are working on a release, override branch to master # If we are working on a release, override branch to master
ifdef RELEASE_TAG ifdef RELEASE_TAG
BRANCH := master BRANCH := master
LAST_TAG := $(shell git describe --abbrev=0 --tags $(VERSION)^)
endif endif
TAG_BRANCH := .$(BRANCH) TAG_BRANCH := -$(BRANCH)
BRANCH_PATH := branch/$(BRANCH)/ BRANCH_PATH := branch/
# If building HEAD or master then unset TAG_BRANCH and BRANCH_PATH # If building HEAD or master then unset TAG_BRANCH and BRANCH_PATH
ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),) ifeq ($(subst HEAD,,$(subst master,,$(BRANCH))),)
TAG_BRANCH := TAG_BRANCH :=
BRANCH_PATH := BRANCH_PATH :=
endif endif
# Make version suffix -beta.NNNN.CCCCCCCC (N=Commit number, C=Commit) # Make version suffix -DDD-gCCCCCCCC (D=commits since last relase, C=Commit) or blank
VERSION_SUFFIX := -beta.$(shell git rev-list --count HEAD).$(shell git show --no-patch --no-notes --pretty='%h' HEAD) VERSION_SUFFIX := $(shell git describe --abbrev=8 --tags | perl -lpe 's/^v\d+\.\d+\.\d+//; s/^-(\d+)/"-".sprintf("%03d",$$1)/e;')
# TAG is current version + commit number + commit + branch # TAG is current version + number of commits since last release + branch
TAG := $(VERSION)$(VERSION_SUFFIX)$(TAG_BRANCH) TAG := $(VERSION)$(VERSION_SUFFIX)$(TAG_BRANCH)
ifdef RELEASE_TAG NEXT_VERSION := $(shell echo $(VERSION) | perl -lpe 's/v//; $$_ += 0.01; $$_ = sprintf("v%.2f.0", $$_)')
TAG := $(RELEASE_TAG) ifndef RELEASE_TAG
TAG := $(TAG)-beta
endif endif
GO_VERSION := $(shell go version) GO_VERSION := $(shell go version)
GO_FILES := $(shell go list ./... | grep -v /vendor/ )
ifdef BETA_SUBDIR ifdef BETA_SUBDIR
BETA_SUBDIR := /$(BETA_SUBDIR) BETA_SUBDIR := /$(BETA_SUBDIR)
endif endif
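
The `NEXT_VERSION` and `NEXT_PATCH_VERSION` recipes in the hunk above derive the next minor and patch releases from the contents of the VERSION file with awk. As a rough illustration only (this program is not part of the repository), the same bump logic in Go:

```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// nextVersions mirrors the NEXT_VERSION / NEXT_PATCH_VERSION awk recipes above:
// bump the minor version (resetting patch to 0) and bump the patch version.
func nextVersions(version string) (nextMinor, nextPatch string) {
	parts := strings.SplitN(strings.TrimPrefix(version, "v"), ".", 3)
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	patch, _ := strconv.Atoi(parts[2])
	nextMinor = fmt.Sprintf("v%d.%d.0", major, minor+1)
	nextPatch = fmt.Sprintf("v%d.%d.%d", major, minor, patch+1)
	return nextMinor, nextPatch
}

func main() {
	fmt.Println(nextVersions("v1.62.0")) // v1.63.0 v1.62.1
}
```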
@@ -46,19 +44,20 @@ endif
.PHONY: rclone test_all vars version .PHONY: rclone test_all vars version
rclone: rclone:
go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS) go build -v --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS)
mkdir -p `go env GOPATH`/bin/ mkdir -p `go env GOPATH`/bin/
cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new cp -av rclone`go env GOEXE` `go env GOPATH`/bin/rclone`go env GOEXE`.new
mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE` mv -v `go env GOPATH`/bin/rclone`go env GOEXE`.new `go env GOPATH`/bin/rclone`go env GOEXE`
test_all: test_all:
go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) $(BUILD_ARGS) github.com/rclone/rclone/fstest/test_all go install --ldflags "-s -X github.com/rclone/rclone/fs.Version=$(TAG)" $(BUILDTAGS) github.com/rclone/rclone/fstest/test_all
vars: vars:
@echo SHELL="'$(SHELL)'" @echo SHELL="'$(SHELL)'"
@echo BRANCH="'$(BRANCH)'" @echo BRANCH="'$(BRANCH)'"
@echo TAG="'$(TAG)'" @echo TAG="'$(TAG)'"
@echo VERSION="'$(VERSION)'" @echo VERSION="'$(VERSION)'"
@echo NEXT_VERSION="'$(NEXT_VERSION)'"
@echo GO_VERSION="'$(GO_VERSION)'" @echo GO_VERSION="'$(GO_VERSION)'"
@echo BETA_URL="'$(BETA_URL)'" @echo BETA_URL="'$(BETA_URL)'"
@@ -76,13 +75,10 @@ test: rclone test_all
# Quick test # Quick test
quicktest: quicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) ./... RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) $(GO_FILES)
racequicktest: racequicktest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race ./... RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -cpu=2 -race $(GO_FILES)
compiletest:
RCLONE_CONFIG="/notfound" go test $(BUILDTAGS) -run XXX ./...
# Do source code quality checks # Do source code quality checks
check: rclone check: rclone
@@ -96,37 +92,30 @@ build_dep:
# Get the release dependencies we only install on linux # Get the release dependencies we only install on linux
release_dep_linux: release_dep_linux:
go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64\.tar\.gz' go run bin/get-github-release.go -extract nfpm goreleaser/nfpm 'nfpm_.*_Linux_x86_64.tar.gz'
go run bin/get-github-release.go -extract github-release aktau/github-release 'linux-amd64-github-release.tar.bz2'
# Get the release dependencies we only install on Windows # Get the release dependencies we only install on Windows
release_dep_windows: release_dep_windows:
GOOS="" GOARCH="" go install github.com/josephspurrier/goversioninfo/cmd/goversioninfo@latest GO111MODULE=off GOOS="" GOARCH="" go get github.com/josephspurrier/goversioninfo/cmd/goversioninfo
# Update dependencies # Update dependencies
showupdates:
@echo "*** Direct dependencies that could be updated ***"
@GO111MODULE=on go list -u -f '{{if (and (not (or .Main .Indirect)) .Update)}}{{.Path}}: {{.Version}} -> {{.Update.Version}}{{end}}' -m all 2> /dev/null
# Update direct dependencies only
updatedirect:
GO111MODULE=on go get -d $$(go list -m -f '{{if not (or .Main .Indirect)}}{{.Path}}{{end}}' all)
GO111MODULE=on go mod tidy
# Update direct and indirect dependencies and test dependencies
update: update:
GO111MODULE=on go get -d -u -t ./... GO111MODULE=on go get -u ./...
GO111MODULE=on go mod tidy GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
# Tidy the module dependencies # Tidy the module dependencies
tidy: tidy:
GO111MODULE=on go mod tidy GO111MODULE=on go mod tidy
GO111MODULE=on go mod vendor
doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs doc: rclone.1 MANUAL.html MANUAL.txt rcdocs commanddocs
rclone.1: MANUAL.md rclone.1: MANUAL.md
pandoc -s --from markdown-smart --to man MANUAL.md -o rclone.1 pandoc -s --from markdown-smart --to man MANUAL.md -o rclone.1
MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs rcdocs MANUAL.md: bin/make_manual.py docs/content/*.md commanddocs backenddocs
./bin/make_manual.py ./bin/make_manual.py
MANUAL.html: MANUAL.md MANUAL.html: MANUAL.md
@@ -171,11 +160,6 @@ validate_website: website
tarball: tarball:
git archive -9 --format=tar.gz --prefix=rclone-$(TAG)/ -o build/rclone-$(TAG).tar.gz $(TAG) git archive -9 --format=tar.gz --prefix=rclone-$(TAG)/ -o build/rclone-$(TAG).tar.gz $(TAG)
vendorball:
go mod vendor
tar -zcf build/rclone-$(TAG)-vendor.tar.gz vendor
rm -rf vendor
sign_upload: sign_upload:
cd build && md5sum rclone-v* | gpg --clearsign > MD5SUMS cd build && md5sum rclone-v* | gpg --clearsign > MD5SUMS
cd build && sha1sum rclone-v* | gpg --clearsign > SHA1SUMS cd build && sha1sum rclone-v* | gpg --clearsign > SHA1SUMS
@@ -194,10 +178,10 @@ upload_github:
./bin/upload-github $(TAG) ./bin/upload-github $(TAG)
cross: doc cross: doc
go run bin/cross-compile.go -release current $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG) go run bin/cross-compile.go -release current $(BUILDTAGS) $(TAG)
beta: beta:
go run bin/cross-compile.go $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG) go run bin/cross-compile.go $(BUILDTAGS) $(TAG)
rclone -v copy build/ memstore:pub-rclone-org/$(TAG) rclone -v copy build/ memstore:pub-rclone-org/$(TAG)
@echo Beta release ready at https://pub.rclone.org/$(TAG)/ @echo Beta release ready at https://pub.rclone.org/$(TAG)/
@@ -205,23 +189,23 @@ log_since_last_release:
git log $(LAST_TAG).. git log $(LAST_TAG)..
compile_all: compile_all:
go run bin/cross-compile.go -compile-only $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG) go run bin/cross-compile.go -compile-only $(BUILDTAGS) $(TAG)
ci_upload: ci_upload:
sudo chown -R $$USER build sudo chown -R $$USER build
find build -type l -delete find build -type l -delete
gzip -r9v build gzip -r9v build
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds ./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD)/testbuilds
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),) ifndef BRANCH_PATH
./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest ./rclone --config bin/travis.rclone.conf -v copy build/ $(BETA_UPLOAD_ROOT)/test/testbuilds-latest
endif endif
@echo Beta release ready at $(BETA_URL)/testbuilds @echo Beta release ready at $(BETA_URL)/testbuilds
ci_beta: ci_beta:
git log $(LAST_TAG).. > /tmp/git-log.txt git log $(LAST_TAG).. > /tmp/git-log.txt
go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(BUILD_ARGS) $(TAG) go run bin/cross-compile.go -release beta-latest -git-log /tmp/git-log.txt $(BUILD_FLAGS) $(BUILDTAGS) $(TAG)
rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD) rclone --config bin/travis.rclone.conf -v copy --exclude '*beta-latest*' build/ $(BETA_UPLOAD)
ifeq ($(or $(BRANCH_PATH),$(RELEASE_TAG)),) ifndef BRANCH_PATH
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR) rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' --include version.txt build/ $(BETA_UPLOAD_ROOT)$(BETA_SUBDIR)
endif endif
@echo Beta release ready at $(BETA_URL) @echo Beta release ready at $(BETA_URL)
@@ -233,63 +217,26 @@ fetch_binaries:
serve: website serve: website
cd docs && hugo server -v -w --disableFastRender cd docs && hugo server -v -w --disableFastRender
tag: retag doc tag: doc
bin/make_changelog.py $(LAST_TAG) $(VERSION) > docs/content/changelog.md.new @echo "Old tag is $(VERSION)"
@echo "New tag is $(NEXT_VERSION)"
echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(NEXT_VERSION)\"\n" | gofmt > fs/version.go
echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_VERSION)" > VERSION
git tag -s -m "Version $(NEXT_VERSION)" $(NEXT_VERSION)
bin/make_changelog.py $(LAST_TAG) $(NEXT_VERSION) > docs/content/changelog.md.new
mv docs/content/changelog.md.new docs/content/changelog.md mv docs/content/changelog.md.new docs/content/changelog.md
@echo "Edit the new changelog in docs/content/changelog.md" @echo "Edit the new changelog in docs/content/changelog.md"
@echo "Then commit all the changes" @echo "Then commit all the changes"
@echo git commit -m \"Version $(VERSION)\" -a -v @echo git commit -m \"Version $(NEXT_VERSION)\" -a -v
@echo "And finally run make retag before make cross, etc." @echo "And finally run make retag before make cross etc"
retag: retag:
@echo "Version is $(VERSION)"
git tag -f -s -m "Version $(VERSION)" $(VERSION) git tag -f -s -m "Version $(VERSION)" $(VERSION)
startdev: startdev:
@echo "Version is $(VERSION)" echo -e "package fs\n\n// Version of rclone\nvar Version = \"$(VERSION)-DEV\"\n" | gofmt > fs/version.go
@echo "Next version is $(NEXT_VERSION)" git commit -m "Start $(VERSION)-DEV development" fs/version.go
echo -e "package fs\n\n// VersionTag of rclone\nvar VersionTag = \"$(NEXT_VERSION)\"\n" | gofmt > fs/versiontag.go
echo -n "$(NEXT_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_VERSION)" > VERSION
git commit -m "Start $(NEXT_VERSION)-DEV development" fs/versiontag.go VERSION docs/layouts/partials/version.html
startstable:
@echo "Version is $(VERSION)"
@echo "Next stable version is $(NEXT_PATCH_VERSION)"
echo -e "package fs\n\n// VersionTag of rclone\nvar VersionTag = \"$(NEXT_PATCH_VERSION)\"\n" | gofmt > fs/versiontag.go
echo -n "$(NEXT_PATCH_VERSION)" > docs/layouts/partials/version.html
echo "$(NEXT_PATCH_VERSION)" > VERSION
git commit -m "Start $(NEXT_PATCH_VERSION)-DEV development" fs/versiontag.go VERSION docs/layouts/partials/version.html
winzip: winzip:
zip -9 rclone-$(TAG).zip rclone.exe zip -9 rclone-$(TAG).zip rclone.exe
# docker volume plugin
PLUGIN_USER ?= rclone
PLUGIN_TAG ?= latest
PLUGIN_BASE_TAG ?= latest
PLUGIN_ARCH ?= amd64
PLUGIN_IMAGE := $(PLUGIN_USER)/docker-volume-rclone:$(PLUGIN_TAG)
PLUGIN_BASE := $(PLUGIN_USER)/rclone:$(PLUGIN_BASE_TAG)
PLUGIN_BUILD_DIR := ./build/docker-plugin
PLUGIN_CONTRIB_DIR := ./contrib/docker-plugin/managed
docker-plugin-create:
docker buildx inspect |grep -q /${PLUGIN_ARCH} || \
docker run --rm --privileged tonistiigi/binfmt --install all
rm -rf ${PLUGIN_BUILD_DIR}
docker buildx build \
--no-cache --pull \
--build-arg BASE_IMAGE=${PLUGIN_BASE} \
--platform linux/${PLUGIN_ARCH} \
--output ${PLUGIN_BUILD_DIR}/rootfs \
${PLUGIN_CONTRIB_DIR}
cp ${PLUGIN_CONTRIB_DIR}/config.json ${PLUGIN_BUILD_DIR}
docker plugin rm --force ${PLUGIN_IMAGE} 2>/dev/null || true
docker plugin create ${PLUGIN_IMAGE} ${PLUGIN_BUILD_DIR}
docker-plugin-push:
docker plugin push ${PLUGIN_IMAGE}
docker plugin rm ${PLUGIN_IMAGE}
docker-plugin: docker-plugin-create docker-plugin-push
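
The `startdev` and `startstable` targets above generate `fs/versiontag.go` by piping an echo through gofmt. Assuming a next version of v1.63.0 (an invented value, purely for illustration), the generated file would contain:

```
package fs

// VersionTag of rclone
var VersionTag = "v1.63.0"
```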

README.md

@@ -1,5 +1,4 @@
[<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-light-mode-only) [<img src="https://rclone.org/img/logo_on_light__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/)
[<img src="https://rclone.org/img/logo_on_dark__horizontal_color.svg" width="50%" alt="rclone logo">](https://rclone.org/#gh-dark-mode-only)
[Website](https://rclone.org) | [Website](https://rclone.org) |
[Documentation](https://rclone.org/docs/) | [Documentation](https://rclone.org/docs/) |
@@ -16,41 +15,31 @@
# Rclone # Rclone
Rclone *("rsync for cloud storage")* is a command-line program to sync files and directories to and from different cloud storage providers. Rclone *("rsync for cloud storage")* is a command line program to sync files and directories to and from different cloud storage providers.
## Storage providers ## Storage providers
* 1Fichier [:page_facing_up:](https://rclone.org/fichier/) * 1Fichier [:page_facing_up:](https://rclone.org/fichier/)
* Akamai Netstorage [:page_facing_up:](https://rclone.org/netstorage/)
* Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss) * Alibaba Cloud (Aliyun) Object Storage System (OSS) [:page_facing_up:](https://rclone.org/s3/#alibaba-oss)
* Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status)) * Amazon Drive [:page_facing_up:](https://rclone.org/amazonclouddrive/) ([See note](https://rclone.org/amazonclouddrive/#status))
* Amazon S3 [:page_facing_up:](https://rclone.org/s3/) * Amazon S3 [:page_facing_up:](https://rclone.org/s3/)
* Backblaze B2 [:page_facing_up:](https://rclone.org/b2/) * Backblaze B2 [:page_facing_up:](https://rclone.org/b2/)
* Box [:page_facing_up:](https://rclone.org/box/) * Box [:page_facing_up:](https://rclone.org/box/)
* Ceph [:page_facing_up:](https://rclone.org/s3/#ceph) * Ceph [:page_facing_up:](https://rclone.org/s3/#ceph)
* China Mobile Ecloud Elastic Object Storage (EOS) [:page_facing_up:](https://rclone.org/s3/#china-mobile-ecloud-eos)
* Cloudflare R2 [:page_facing_up:](https://rclone.org/s3/#cloudflare-r2)
* Arvan Cloud Object Storage (AOS) [:page_facing_up:](https://rclone.org/s3/#arvan-cloud-object-storage-aos)
* Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/) * Citrix ShareFile [:page_facing_up:](https://rclone.org/sharefile/)
* DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces) * DigitalOcean Spaces [:page_facing_up:](https://rclone.org/s3/#digitalocean-spaces)
* Digi Storage [:page_facing_up:](https://rclone.org/koofr/#digi-storage)
* Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost) * Dreamhost [:page_facing_up:](https://rclone.org/s3/#dreamhost)
* Dropbox [:page_facing_up:](https://rclone.org/dropbox/) * Dropbox [:page_facing_up:](https://rclone.org/dropbox/)
* Enterprise File Fabric [:page_facing_up:](https://rclone.org/filefabric/)
* FTP [:page_facing_up:](https://rclone.org/ftp/) * FTP [:page_facing_up:](https://rclone.org/ftp/)
* GetSky [:page_facing_up:](https://rclone.org/jottacloud/)
* Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/) * Google Cloud Storage [:page_facing_up:](https://rclone.org/googlecloudstorage/)
* Google Drive [:page_facing_up:](https://rclone.org/drive/) * Google Drive [:page_facing_up:](https://rclone.org/drive/)
* Google Photos [:page_facing_up:](https://rclone.org/googlephotos/) * Google Photos [:page_facing_up:](https://rclone.org/googlephotos/)
* HDFS (Hadoop Distributed Filesystem) [:page_facing_up:](https://rclone.org/hdfs/)
* HiDrive [:page_facing_up:](https://rclone.org/hidrive/)
* HTTP [:page_facing_up:](https://rclone.org/http/) * HTTP [:page_facing_up:](https://rclone.org/http/)
* Huawei Cloud Object Storage Service(OBS) [:page_facing_up:](https://rclone.org/s3/#huawei-obs) * Hubic [:page_facing_up:](https://rclone.org/hubic/)
* Internet Archive [:page_facing_up:](https://rclone.org/internetarchive/)
* Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/) * Jottacloud [:page_facing_up:](https://rclone.org/jottacloud/)
* IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3) * IBM COS S3 [:page_facing_up:](https://rclone.org/s3/#ibm-cos-s3)
* IONOS Cloud [:page_facing_up:](https://rclone.org/s3/#ionos)
* Koofr [:page_facing_up:](https://rclone.org/koofr/) * Koofr [:page_facing_up:](https://rclone.org/koofr/)
* Liara Object Storage [:page_facing_up:](https://rclone.org/s3/#liara-object-storage)
* Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/) * Mail.ru Cloud [:page_facing_up:](https://rclone.org/mailru/)
* Memset Memstore [:page_facing_up:](https://rclone.org/swift/) * Memset Memstore [:page_facing_up:](https://rclone.org/swift/)
* Mega [:page_facing_up:](https://rclone.org/mega/) * Mega [:page_facing_up:](https://rclone.org/mega/)
@@ -63,45 +52,25 @@ Rclone *("rsync for cloud storage")* is a command-line program to sync files and
* OpenDrive [:page_facing_up:](https://rclone.org/opendrive/) * OpenDrive [:page_facing_up:](https://rclone.org/opendrive/)
* OpenStack Swift [:page_facing_up:](https://rclone.org/swift/) * OpenStack Swift [:page_facing_up:](https://rclone.org/swift/)
* Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/) * Oracle Cloud Storage [:page_facing_up:](https://rclone.org/swift/)
* Oracle Object Storage [:page_facing_up:](https://rclone.org/oracleobjectstorage/)
* ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud) * ownCloud [:page_facing_up:](https://rclone.org/webdav/#owncloud)
* pCloud [:page_facing_up:](https://rclone.org/pcloud/) * pCloud [:page_facing_up:](https://rclone.org/pcloud/)
* premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/) * premiumize.me [:page_facing_up:](https://rclone.org/premiumizeme/)
* put.io [:page_facing_up:](https://rclone.org/putio/) * put.io [:page_facing_up:](https://rclone.org/putio/)
* QingStor [:page_facing_up:](https://rclone.org/qingstor/) * QingStor [:page_facing_up:](https://rclone.org/qingstor/)
* Qiniu Cloud Object Storage (Kodo) [:page_facing_up:](https://rclone.org/s3/#qiniu)
* Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/) * Rackspace Cloud Files [:page_facing_up:](https://rclone.org/swift/)
* RackCorp Object Storage [:page_facing_up:](https://rclone.org/s3/#RackCorp)
* Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway) * Scaleway [:page_facing_up:](https://rclone.org/s3/#scaleway)
* Seafile [:page_facing_up:](https://rclone.org/seafile/) * Seafile [:page_facing_up:](https://rclone.org/seafile/)
* SeaweedFS [:page_facing_up:](https://rclone.org/s3/#seaweedfs)
* SFTP [:page_facing_up:](https://rclone.org/sftp/) * SFTP [:page_facing_up:](https://rclone.org/sftp/)
* SMB / CIFS [:page_facing_up:](https://rclone.org/smb/)
* StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath) * StackPath [:page_facing_up:](https://rclone.org/s3/#stackpath)
* Storj [:page_facing_up:](https://rclone.org/storj/)
* SugarSync [:page_facing_up:](https://rclone.org/sugarsync/) * SugarSync [:page_facing_up:](https://rclone.org/sugarsync/)
* Tencent Cloud Object Storage (COS) [:page_facing_up:](https://rclone.org/s3/#tencent-cos) * Tardigrade [:page_facing_up:](https://rclone.org/tardigrade/)
* Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi) * Wasabi [:page_facing_up:](https://rclone.org/s3/#wasabi)
* WebDAV [:page_facing_up:](https://rclone.org/webdav/) * WebDAV [:page_facing_up:](https://rclone.org/webdav/)
* Yandex Disk [:page_facing_up:](https://rclone.org/yandex/) * Yandex Disk [:page_facing_up:](https://rclone.org/yandex/)
* Zoho WorkDrive [:page_facing_up:](https://rclone.org/zoho/)
* The local filesystem [:page_facing_up:](https://rclone.org/local/) * The local filesystem [:page_facing_up:](https://rclone.org/local/)
Please see [the full list of all storage providers and their features](https://rclone.org/overview/) Please see [the full list of all storage providers and their features](https://rclone.org/overview/)
### Virtual storage providers
These backends adapt or modify other storage providers
* Alias: rename existing remotes [:page_facing_up:](https://rclone.org/alias/)
* Cache: cache remotes (DEPRECATED) [:page_facing_up:](https://rclone.org/cache/)
* Chunker: split large files [:page_facing_up:](https://rclone.org/chunker/)
* Combine: combine multiple remotes into a directory tree [:page_facing_up:](https://rclone.org/combine/)
* Compress: compress files [:page_facing_up:](https://rclone.org/compress/)
* Crypt: encrypt files [:page_facing_up:](https://rclone.org/crypt/)
* Hasher: hash files [:page_facing_up:](https://rclone.org/hasher/)
* Union: join multiple remotes to work together [:page_facing_up:](https://rclone.org/union/)
## Features ## Features
* MD5/SHA-1 hashes checked at all times for file integrity * MD5/SHA-1 hashes checked at all times for file integrity
@@ -112,11 +81,11 @@ These backends adapt or modify other storage providers
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality * [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, e.g. two different cloud accounts * Can sync to and from network, e.g. two different cloud accounts
* Optional large file chunking ([Chunker](https://rclone.org/chunker/)) * Optional large file chunking ([Chunker](https://rclone.org/chunker/))
* Optional transparent compression ([Compress](https://rclone.org/compress/))
* Optional encryption ([Crypt](https://rclone.org/crypt/)) * Optional encryption ([Crypt](https://rclone.org/crypt/))
* Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/)) * Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
* Multi-threaded downloads to local disk * Multi-threaded downloads to local disk
* Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA * Can [serve](https://rclone.org/commands/rclone_serve/) local or remote files over HTTP/WebDav/FTP/SFTP/dlna
## Installation & documentation ## Installation & documentation
@@ -137,5 +106,5 @@ Please see the [rclone website](https://rclone.org/) for:
License License
------- -------
This is free software under the terms of the MIT license (check the This is free software under the terms of MIT the license (check the
[COPYING file](/COPYING) included in this package). [COPYING file](/COPYING) included in this package).

RELEASE.md

@@ -4,12 +4,12 @@ This file describes how to make the various kinds of releases
## Extra required software for making a release ## Extra required software for making a release
* [gh the github cli](https://github.com/cli/cli) for uploading packages * [github-release](https://github.com/aktau/github-release) for uploading packages
* pandoc for making the html and man pages * pandoc for making the html and man pages
## Making a release ## Making a release
* git checkout master # see below for stable branch * git checkout master
* git pull * git pull
* git status - make sure everything is checked in * git status - make sure everything is checked in
* Check GitHub actions build for master is Green * Check GitHub actions build for master is Green
@@ -21,97 +21,87 @@ This file describes how to make the various kinds of releases
* git status - to check for new man pages - git add them * git status - to check for new man pages - git add them
* git commit -a -v -m "Version v1.XX.0" * git commit -a -v -m "Version v1.XX.0"
* make retag * make retag
* git push --follow-tags origin * git push --tags origin master
* # Wait for the GitHub builds to complete then... * # Wait for the GitHub builds to complete then...
* make fetch_binaries * make fetch_binaries
* make tarball * make tarball
* make vendorball
* make sign_upload * make sign_upload
* make check_sign * make check_sign
* make upload * make upload
* make upload_website * make upload_website
* make upload_github * make upload_github
* make startdev # make startstable for stable branch * make startdev
* # announce with forum post, twitter post, patreon post * # announce with forum post, twitter post, patreon post
## Update dependencies Early in the next release cycle update the vendored dependencies
Early in the next release cycle update the dependencies
* Review any pinned packages in go.mod and remove if possible * Review any pinned packages in go.mod and remove if possible
* make updatedirect
* make
* git commit -a -v
* make update * make update
* make * git status
* roll back any updates which didn't compile * git add new files
* git commit -a -v --amend * git commit -a -v
Note that `make update` updates all direct and indirect dependencies If `make update` fails with errors like this:
and there can occasionally be forwards compatibility problems with
doing that so it may be necessary to roll back dependencies to the
version specified by `make updatedirect` in order to get rclone to
build.
## Tidy beta ```
# github.com/cpuguy83/go-md2man/md2man
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:11:16: undefined: blackfriday.EXTENSION_NO_INTRA_EMPHASIS
../../../../pkg/mod/github.com/cpuguy83/go-md2man@v1.0.8/md2man/md2man.go:12:16: undefined: blackfriday.EXTENSION_TABLES
```
At some point after the release run Can be fixed with
bin/tidy-beta v1.55 * GO111MODULE=on go get -u github.com/russross/blackfriday@v1.5.2
* GO111MODULE=on go mod tidy
* GO111MODULE=on go mod vendor
where the version number is that of a couple ago to remove old beta binaries.
## Making a point release ## Making a point release
If rclone needs a point release due to some horrendous bug: If rclone needs a point release due to some horrendous bug:
Set vars
* BASE_TAG=v1.XX # e.g. v1.52
* NEW_TAG=${BASE_TAG}.Y # e.g. v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
First make the release branch. If this is a second point release then First make the release branch. If this is a second point release then
this will be done already. this will be done already.
* git co -b ${BASE_TAG}-stable ${BASE_TAG}.0 * BASE_TAG=v1.XX # eg v1.52
* make startstable * NEW_TAG=${BASE_TAG}.Y # eg v1.52.1
* echo $BASE_TAG $NEW_TAG # v1.52 v1.52.1
* git branch ${BASE_TAG} ${BASE_TAG}-stable
Now Now
* git co ${BASE_TAG}-stable * git co ${BASE_TAG}-stable
* git cherry-pick any fixes * git cherry-pick any fixes
* Do the steps as above * Test (see above)
* make startstable * make NEXT_VERSION=${NEW_TAG} tag
* edit docs/content/changelog.md
* make TAG=${NEW_TAG} doc
* git commit -a -v -m "Version ${NEW_TAG}"
* git tag -d ${NEW_TAG}
* git tag -s -m "Version ${NEW_TAG}" ${NEW_TAG}
* git push --tags -u origin ${BASE_TAG}-stable
* Wait for builds to complete
* make BRANCH_PATH= TAG=${NEW_TAG} fetch_binaries
* make TAG=${NEW_TAG} tarball
* make TAG=${NEW_TAG} sign_upload
* make TAG=${NEW_TAG} check_sign
* make TAG=${NEW_TAG} upload
* make TAG=${NEW_TAG} upload_website
* make TAG=${NEW_TAG} upload_github
* NB this overwrites the current beta so we need to do this
* git co master * git co master
* `#` cherry pick the changes to the changelog - check the diff to make sure it is correct * make VERSION=${NEW_TAG} startdev
* git checkout ${BASE_TAG}-stable docs/content/changelog.md * # cherry pick the changes to the changelog and VERSION
* git commit -a -v -m "Changelog updates from Version ${NEW_TAG}" * git checkout ${BASE_TAG}-stable VERSION docs/content/changelog.md
* git commit --amend
* git push * git push
* Announce!
## Making a manual build of docker ## Making a manual build of docker
The rclone docker image should autobuild on via GitHub actions. If it doesn't The rclone docker image should autobuild on via GitHub actions. If it doesn't
or needs to be updated then rebuild like this. or needs to be updated then rebuild like this.
See: https://github.com/ilteoood/docker_buildx/issues/19
See: https://github.com/ilteoood/docker_buildx/blob/master/scripts/install_buildx.sh
```
git co v1.54.1
docker pull golang
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx create --name actions_builder --use
docker run --rm --privileged docker/binfmt:820fdd95a9972a5308930a2bdfb8573dd4447ad3
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
SUPPORTED_PLATFORMS=$(docker buildx inspect --bootstrap | grep 'Platforms:*.*' | cut -d : -f2,3)
echo "Supported platforms: $SUPPORTED_PLATFORMS"
docker buildx build --platform linux/amd64,linux/386,linux/arm64,linux/arm/v7 -t rclone/rclone:1.54.1 -t rclone/rclone:1.54 -t rclone/rclone:1 -t rclone/rclone:latest --push .
docker buildx stop actions_builder
```
### Old build for linux/amd64 only
``` ```
docker pull golang docker pull golang
docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest . docker build --rm --ulimit memlock=67108864 -t rclone/rclone:1.52.0 -t rclone/rclone:1.52 -t rclone/rclone:1 -t rclone/rclone:latest .

VERSION

@@ -1 +1 @@
v1.62.0 v1.52.2

backend/alias/alias.go

@@ -1,13 +1,10 @@
// Package alias implements a virtual provider to rename existing remotes.
package alias package alias
import ( import (
"context"
"errors" "errors"
"strings" "strings"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct" "github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath" "github.com/rclone/rclone/fs/fspath"
@@ -21,7 +18,7 @@ func init() {
NewFs: NewFs, NewFs: NewFs,
Options: []fs.Option{{ Options: []fs.Option{{
Name: "remote", Name: "remote",
Help: "Remote or path to alias.\n\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".", Help: "Remote or path to alias.\nCan be \"myremote:path/to/dir\", \"myremote:bucket\", \"myremote:\" or \"/local/path\".",
Required: true, Required: true,
}}, }},
} }
@@ -36,7 +33,7 @@ type Options struct {
// NewFs constructs an Fs from the path. // NewFs constructs an Fs from the path.
// //
// The returned Fs is the actual Fs, referenced by remote in the config // The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
@@ -49,5 +46,9 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if strings.HasPrefix(opt.Remote, name+":") { if strings.HasPrefix(opt.Remote, name+":") {
return nil, errors.New("can't point alias remote at itself - check the value of the remote setting") return nil, errors.New("can't point alias remote at itself - check the value of the remote setting")
} }
return cache.Get(ctx, fspath.JoinRootPath(opt.Remote, root)) fsInfo, configName, fsPath, config, err := fs.ConfigFs(opt.Remote)
if err != nil {
return nil, err
}
return fsInfo.NewFs(configName, fspath.JoinRootPath(fsPath, root), config)
} }
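
To spell out what the new body of `NewFs` above does: the alias backend joins its configured `remote` with the requested root and asks the Fs cache for the combined path, so the wrapped backend is created once and shared. A hedged sketch of that resolution step (the function name and example values are invented for illustration):

```
package example

import (
	"context"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/cache"
	"github.com/rclone/rclone/fs/fspath"
)

// openAliasTarget resolves an alias the same way as the new NewFs above:
// e.g. remote "gdrive:backup" and root "photos/2020" become
// "gdrive:backup/photos/2020", which is then fetched from the Fs cache.
func openAliasTarget(ctx context.Context, remote, root string) (fs.Fs, error) {
	return cache.Get(ctx, fspath.JoinRootPath(remote, root))
}
```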

backend/alias (test file)

@@ -11,7 +11,6 @@ import (
_ "github.com/rclone/rclone/backend/local" // pull in test backend _ "github.com/rclone/rclone/backend/local" // pull in test backend
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configfile"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@@ -20,7 +19,7 @@ var (
) )
func prepare(t *testing.T, root string) { func prepare(t *testing.T, root string) {
configfile.Install() config.LoadConfig()
// Configure the remote // Configure the remote
config.FileSet(remoteName, "type", "alias") config.FileSet(remoteName, "type", "alias")
@@ -55,22 +54,21 @@ func TestNewFS(t *testing.T) {
{"four/under four.txt", 9, false}, {"four/under four.txt", 9, false},
}}, }},
{"four", "..", "", true, []testEntry{ {"four", "..", "", true, []testEntry{
{"five", -1, true}, {"four", -1, true},
{"under four.txt", 9, false}, {"one%.txt", 6, false},
{"three", -1, true},
{"two.html", 7, false},
}}, }},
{"", "../../three", "", true, []testEntry{ {"four", "../three", "", true, []testEntry{
{"underthree.txt", 9, false}, {"underthree.txt", 9, false},
}}, }},
{"four", "../../five", "", true, []testEntry{
{"underfive.txt", 6, false},
}},
} { } {
what := fmt.Sprintf("test %d remoteRoot=%q, fsRoot=%q, fsList=%q", testi, test.remoteRoot, test.fsRoot, test.fsList) what := fmt.Sprintf("test %d remoteRoot=%q, fsRoot=%q, fsList=%q", testi, test.remoteRoot, test.fsRoot, test.fsList)
remoteRoot, err := filepath.Abs(filepath.FromSlash(path.Join("test/files", test.remoteRoot))) remoteRoot, err := filepath.Abs(filepath.FromSlash(path.Join("test/files", test.remoteRoot)))
require.NoError(t, err, what) require.NoError(t, err, what)
prepare(t, remoteRoot) prepare(t, remoteRoot)
f, err := fs.NewFs(context.Background(), fmt.Sprintf("%s:%s", remoteName, test.fsRoot)) f, err := fs.NewFs(fmt.Sprintf("%s:%s", remoteName, test.fsRoot))
require.NoError(t, err, what) require.NoError(t, err, what)
gotEntries, err := f.List(context.Background(), test.fsList) gotEntries, err := f.List(context.Background(), test.fsList)
require.NoError(t, err, what) require.NoError(t, err, what)
@@ -92,7 +90,7 @@ func TestNewFS(t *testing.T) {
func TestNewFSNoRemote(t *testing.T) { func TestNewFSNoRemote(t *testing.T) {
prepare(t, "") prepare(t, "")
f, err := fs.NewFs(context.Background(), fmt.Sprintf("%s:", remoteName)) f, err := fs.NewFs(fmt.Sprintf("%s:", remoteName))
require.Error(t, err) require.Error(t, err)
require.Nil(t, f) require.Nil(t, f)
@@ -100,7 +98,7 @@ func TestNewFSNoRemote(t *testing.T) {
func TestNewFSInvalidRemote(t *testing.T) { func TestNewFSInvalidRemote(t *testing.T) {
prepare(t, "not_existing_test_remote:") prepare(t, "not_existing_test_remote:")
f, err := fs.NewFs(context.Background(), fmt.Sprintf("%s:", remoteName)) f, err := fs.NewFs(fmt.Sprintf("%s:", remoteName))
require.Error(t, err) require.Error(t, err)
require.Nil(t, f) require.Nil(t, f)

backend/all/all.go

@@ -1,4 +1,3 @@
// Package all imports all the backends
package all package all
import ( import (
@@ -10,31 +9,23 @@ import (
_ "github.com/rclone/rclone/backend/box" _ "github.com/rclone/rclone/backend/box"
_ "github.com/rclone/rclone/backend/cache" _ "github.com/rclone/rclone/backend/cache"
_ "github.com/rclone/rclone/backend/chunker" _ "github.com/rclone/rclone/backend/chunker"
_ "github.com/rclone/rclone/backend/combine"
_ "github.com/rclone/rclone/backend/compress"
_ "github.com/rclone/rclone/backend/crypt" _ "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive" _ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/dropbox" _ "github.com/rclone/rclone/backend/dropbox"
_ "github.com/rclone/rclone/backend/fichier" _ "github.com/rclone/rclone/backend/fichier"
_ "github.com/rclone/rclone/backend/filefabric"
_ "github.com/rclone/rclone/backend/ftp" _ "github.com/rclone/rclone/backend/ftp"
_ "github.com/rclone/rclone/backend/googlecloudstorage" _ "github.com/rclone/rclone/backend/googlecloudstorage"
_ "github.com/rclone/rclone/backend/googlephotos" _ "github.com/rclone/rclone/backend/googlephotos"
_ "github.com/rclone/rclone/backend/hasher"
_ "github.com/rclone/rclone/backend/hdfs"
_ "github.com/rclone/rclone/backend/hidrive"
_ "github.com/rclone/rclone/backend/http" _ "github.com/rclone/rclone/backend/http"
_ "github.com/rclone/rclone/backend/internetarchive" _ "github.com/rclone/rclone/backend/hubic"
_ "github.com/rclone/rclone/backend/jottacloud" _ "github.com/rclone/rclone/backend/jottacloud"
_ "github.com/rclone/rclone/backend/koofr" _ "github.com/rclone/rclone/backend/koofr"
_ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/mailru" _ "github.com/rclone/rclone/backend/mailru"
_ "github.com/rclone/rclone/backend/mega" _ "github.com/rclone/rclone/backend/mega"
_ "github.com/rclone/rclone/backend/memory" _ "github.com/rclone/rclone/backend/memory"
_ "github.com/rclone/rclone/backend/netstorage"
_ "github.com/rclone/rclone/backend/onedrive" _ "github.com/rclone/rclone/backend/onedrive"
_ "github.com/rclone/rclone/backend/opendrive" _ "github.com/rclone/rclone/backend/opendrive"
_ "github.com/rclone/rclone/backend/oracleobjectstorage"
_ "github.com/rclone/rclone/backend/pcloud" _ "github.com/rclone/rclone/backend/pcloud"
_ "github.com/rclone/rclone/backend/premiumizeme" _ "github.com/rclone/rclone/backend/premiumizeme"
_ "github.com/rclone/rclone/backend/putio" _ "github.com/rclone/rclone/backend/putio"
@@ -43,14 +34,10 @@ import (
_ "github.com/rclone/rclone/backend/seafile" _ "github.com/rclone/rclone/backend/seafile"
_ "github.com/rclone/rclone/backend/sftp" _ "github.com/rclone/rclone/backend/sftp"
_ "github.com/rclone/rclone/backend/sharefile" _ "github.com/rclone/rclone/backend/sharefile"
_ "github.com/rclone/rclone/backend/sia"
_ "github.com/rclone/rclone/backend/smb"
_ "github.com/rclone/rclone/backend/storj"
_ "github.com/rclone/rclone/backend/sugarsync" _ "github.com/rclone/rclone/backend/sugarsync"
_ "github.com/rclone/rclone/backend/swift" _ "github.com/rclone/rclone/backend/swift"
_ "github.com/rclone/rclone/backend/tardigrade"
_ "github.com/rclone/rclone/backend/union" _ "github.com/rclone/rclone/backend/union"
_ "github.com/rclone/rclone/backend/uptobox"
_ "github.com/rclone/rclone/backend/webdav" _ "github.com/rclone/rclone/backend/webdav"
_ "github.com/rclone/rclone/backend/yandex" _ "github.com/rclone/rclone/backend/yandex"
_ "github.com/rclone/rclone/backend/zoho"
) )
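
Each blank import above is there purely for its side effect: the backend package's init function registers the backend with rclone, which is what makes it selectable by name. A minimal, entirely hypothetical backend illustrating that pattern (all names here are invented):

```
// Package example is a hypothetical backend, shown only to illustrate the
// registration pattern behind the blank imports above: importing a backend
// package runs its init function, which calls fs.Register.
package example

import (
	"context"
	"errors"

	"github.com/rclone/rclone/fs"
	"github.com/rclone/rclone/fs/config/configmap"
)

func init() {
	fs.Register(&fs.RegInfo{
		Name:        "example",
		Description: "Hypothetical example backend",
		NewFs:       NewFs,
	})
}

// NewFs is a stub constructor with the signature used elsewhere in this diff.
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
	return nil, errors.New("not implemented")
}
```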

backend/amazonclouddrive/amazonclouddrive.go

@@ -14,15 +14,16 @@ we ignore assets completely!
import ( import (
"context" "context"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io" "io"
"log"
"net/http" "net/http"
"path" "path"
"strings" "strings"
"time" "time"
acd "github.com/ncw/go-acd" acd "github.com/ncw/go-acd"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
@@ -69,28 +70,45 @@ func init() {
Prefix: "acd", Prefix: "acd",
Description: "Amazon Drive", Description: "Amazon Drive",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) { Config: func(name string, m configmap.Mapper) {
return oauthutil.ConfigOut("", &oauthutil.Options{ err := oauthutil.Config("amazon cloud drive", name, m, acdConfig, nil)
OAuth2Config: acdConfig, if err != nil {
}) log.Fatalf("Failed to configure token: %v", err)
}
}, },
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Amazon Application Client ID.",
Required: true,
}, {
Name: config.ConfigClientSecret,
Help: "Amazon Application Client Secret.",
Required: true,
}, {
Name: config.ConfigAuthURL,
Help: "Auth server URL.\nLeave blank to use Amazon's.",
Advanced: true,
}, {
Name: config.ConfigTokenURL,
Help: "Token server url.\nleave blank to use Amazon's.",
Advanced: true,
}, {
Name: "checkpoint", Name: "checkpoint",
Help: "Checkpoint for internal polling (debug).", Help: "Checkpoint for internal polling (debug).",
Hide: fs.OptionHideBoth, Hide: fs.OptionHideBoth,
Advanced: true, Advanced: true,
}, { }, {
Name: "upload_wait_per_gb", Name: "upload_wait_per_gb",
Help: `Additional time per GiB to wait after a failed complete upload to see if it appears. Help: `Additional time per GB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1 GiB in size and nearly every time for happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10 GiB. This parameter controls the time rclone waits files bigger than 10GB. This parameter controls the time rclone waits
for the file to appear. for the file to appear.
The default value for this parameter is 3 minutes per GiB, so by The default value for this parameter is 3 minutes per GB, so by
default it will wait 3 minutes for every GiB uploaded to see if the default it will wait 3 minutes for every GB uploaded to see if the
file appears. file appears.
You can disable this feature by setting it to 0. This may cause You can disable this feature by setting it to 0. This may cause
@@ -110,7 +128,7 @@ in this situation.`,
Files this size or more will be downloaded via their "tempLink". This Files this size or more will be downloaded via their "tempLink". This
is to work around a problem with Amazon Drive which blocks downloads is to work around a problem with Amazon Drive which blocks downloads
of files bigger than about 10 GiB. The default for this is 9 GiB which of files bigger than about 10GB. The default for this is 9GB which
shouldn't need to be changed. shouldn't need to be changed.
To download files above this threshold, rclone requests a "tempLink" To download files above this threshold, rclone requests a "tempLink"
@@ -125,7 +143,7 @@ underlying S3 storage.`,
// Encode invalid UTF-8 bytes as json doesn't handle them properly. // Encode invalid UTF-8 bytes as json doesn't handle them properly.
Default: (encoder.Base | Default: (encoder.Base |
encoder.EncodeInvalidUtf8), encoder.EncodeInvalidUtf8),
}}...), }},
}) })
} }
@@ -142,7 +160,6 @@ type Fs struct {
name string // name of this remote name string // name of this remote
features *fs.Features // optional features features *fs.Features // optional features
opt Options // options for this Fs opt Options // options for this Fs
ci *fs.ConfigInfo // global config
c *acd.Client // the connection to the acd server c *acd.Client // the connection to the acd server
noAuthClient *http.Client // unauthenticated http client noAuthClient *http.Client // unauthenticated http client
root string // the path we are working on root string // the path we are working on
@@ -203,10 +220,7 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) { func (f *Fs) shouldRetry(resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if resp != nil { if resp != nil {
if resp.StatusCode == 401 { if resp.StatusCode == 401 {
f.tokenRenewer.Invalidate() f.tokenRenewer.Invalidate()
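
The hunk above threads a context into shouldRetry so that a cancelled or expired operation is never retried. A simplified sketch of that pattern using only the standard library (this is not rclone's actual helper):

```
package example

import (
	"context"
	"net/http"
)

// shouldRetry reports whether a failed HTTP call is worth retrying.
// The first check mirrors the change above: once the context has been
// cancelled or has passed its deadline, give up instead of retrying.
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
	if ctxErr := ctx.Err(); ctxErr != nil {
		return false, ctxErr
	}
	if resp != nil && (resp.StatusCode == 429 || resp.StatusCode >= 500) {
		return true, err // rate limited or transient server error: retry
	}
	return false, err
}
```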
@@ -241,7 +255,8 @@ func filterRequest(req *http.Request) {
} }
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
@@ -249,7 +264,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err return nil, err
} }
root = parsePath(root) root = parsePath(root)
baseClient := fshttp.NewClient(ctx) baseClient := fshttp.NewClient(fs.Config)
if do, ok := baseClient.Transport.(interface { if do, ok := baseClient.Transport.(interface {
SetRequestFilter(f func(req *http.Request)) SetRequestFilter(f func(req *http.Request))
}); ok { }); ok {
@@ -257,31 +272,29 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} else { } else {
fs.Debugf(name+":", "Couldn't add request filter - large file downloads will fail") fs.Debugf(name+":", "Couldn't add request filter - large file downloads will fail")
} }
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(ctx, name, m, acdConfig, baseClient) oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, acdConfig, baseClient)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure Amazon Drive: %w", err) return nil, errors.Wrap(err, "failed to configure Amazon Drive")
} }
c := acd.NewClient(oAuthClient) c := acd.NewClient(oAuthClient)
ci := fs.GetConfig(ctx)
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,
opt: *opt, opt: *opt,
ci: ci,
c: c, c: c,
pacer: fs.NewPacer(ctx, pacer.NewAmazonCloudDrive(pacer.MinSleep(minSleep))), pacer: fs.NewPacer(pacer.NewAmazonCloudDrive(pacer.MinSleep(minSleep))),
noAuthClient: fshttp.NewClient(ctx), noAuthClient: fshttp.NewClient(fs.Config),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CaseInsensitive: true, CaseInsensitive: true,
ReadMimeType: true, ReadMimeType: true,
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
}).Fill(ctx, f) }).Fill(f)
// Renew the token in the background // Renew the token in the background
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
_, err := f.getRootInfo(ctx) _, err := f.getRootInfo()
return err return err
}) })
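
This hunk is part of a wider move from github.com/pkg/errors to the standard library: `errors.Wrap(err, "failed to configure Amazon Drive")` becomes `fmt.Errorf("failed to configure Amazon Drive: %w", err)`. Both add context to the message; the `%w` verb additionally keeps the error chain unwrappable. A small standalone illustration (the sentinel error is invented):

```
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("node not found") // hypothetical sentinel error

func lookup() error {
	// Equivalent, in spirit, to errors.Wrap(errNotFound, "failed to get root"):
	return fmt.Errorf("failed to get root: %w", errNotFound)
}

func main() {
	err := lookup()
	fmt.Println(err)                         // failed to get root: node not found
	fmt.Println(errors.Is(err, errNotFound)) // true: %w keeps the chain intact
}
```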
@@ -289,16 +302,16 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, resp, err = f.c.Account.GetEndpoints() _, resp, err = f.c.Account.GetEndpoints()
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to get endpoints: %w", err) return nil, errors.Wrap(err, "failed to get endpoints")
} }
// Get rootID // Get rootID
rootInfo, err := f.getRootInfo(ctx) rootInfo, err := f.getRootInfo()
if err != nil || rootInfo.Id == nil { if err != nil || rootInfo.Id == nil {
return nil, fmt.Errorf("failed to get root: %w", err) return nil, errors.Wrap(err, "failed to get root")
} }
f.trueRootID = *rootInfo.Id f.trueRootID = *rootInfo.Id
@@ -338,11 +351,11 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
// getRootInfo gets the root folder info // getRootInfo gets the root folder info
func (f *Fs) getRootInfo(ctx context.Context) (rootInfo *acd.Folder, err error) { func (f *Fs) getRootInfo() (rootInfo *acd.Folder, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
rootInfo, resp, err = f.c.Nodes.GetRoot() rootInfo, resp, err = f.c.Nodes.GetRoot()
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
return rootInfo, err return rootInfo, err
} }
@@ -381,7 +394,7 @@ func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut strin
var subFolder *acd.Folder var subFolder *acd.Folder
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
subFolder, resp, err = folder.GetFolder(f.opt.Enc.FromStandardName(leaf)) subFolder, resp, err = folder.GetFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
if err == acd.ErrorNodeNotFound { if err == acd.ErrorNodeNotFound {
@@ -408,7 +421,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
var info *acd.Folder var info *acd.Folder
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
info, resp, err = folder.CreateFolder(f.opt.Enc.FromStandardName(leaf)) info, resp, err = folder.CreateFolder(f.opt.Enc.FromStandardName(leaf))
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
//fmt.Printf("...Error %v\n", err) //fmt.Printf("...Error %v\n", err)
@@ -429,13 +442,13 @@ type listAllFn func(*acd.Node) bool
// Lists the directory required calling the user function on each item found // Lists the directory required calling the user function on each item found
// //
// If the user fn ever returns true then it early exits with found = true // If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(ctx context.Context, dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) { func (f *Fs) listAll(dirID string, title string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
query := "parents:" + dirID query := "parents:" + dirID
if directoriesOnly { if directoriesOnly {
query += " AND kind:" + folderKind query += " AND kind:" + folderKind
} else if filesOnly { } else if filesOnly {
query += " AND kind:" + fileKind query += " AND kind:" + fileKind
//} else { } else {
// FIXME none of these work // FIXME none of these work
//query += " AND kind:(" + fileKind + " OR " + folderKind + ")" //query += " AND kind:(" + fileKind + " OR " + folderKind + ")"
//query += " AND (kind:" + fileKind + " OR kind:" + folderKind + ")" //query += " AND (kind:" + fileKind + " OR kind:" + folderKind + ")"
@@ -450,7 +463,7 @@ func (f *Fs) listAll(ctx context.Context, dirID string, title string, directorie
var resp *http.Response var resp *http.Response
err = f.pacer.CallNoRetry(func() (bool, error) { err = f.pacer.CallNoRetry(func() (bool, error) {
nodes, resp, err = f.c.Nodes.GetNodes(&opts) nodes, resp, err = f.c.Nodes.GetNodes(&opts)
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return false, err return false, err
@@ -505,11 +518,11 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
if err != nil { if err != nil {
return nil, err return nil, err
} }
maxTries := f.ci.LowLevelRetries maxTries := fs.Config.LowLevelRetries
var iErr error var iErr error
for tries := 1; tries <= maxTries; tries++ { for tries := 1; tries <= maxTries; tries++ {
entries = nil entries = nil
_, err = f.listAll(ctx, directoryID, "", false, false, func(node *acd.Node) bool { _, err = f.listAll(directoryID, "", false, false, func(node *acd.Node) bool {
remote := path.Join(dir, *node.Name) remote := path.Join(dir, *node.Name)
switch *node.Kind { switch *node.Kind {
case folderKind: case folderKind:
@@ -526,7 +539,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
} }
entries = append(entries, o) entries = append(entries, o)
default: default:
// ignore ASSET, etc. // ignore ASSET etc
} }
return false return false
}) })
@@ -556,9 +569,9 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// //
// This is a workaround for Amazon sometimes returning // This is a workaround for Amazon sometimes returning
// //
// - 408 REQUEST_TIMEOUT // * 408 REQUEST_TIMEOUT
// - 504 GATEWAY_TIMEOUT // * 504 GATEWAY_TIMEOUT
// - 500 Internal server error // * 500 Internal server error
// //
// At the end of large uploads. The speculation is that the timeout // At the end of large uploads. The speculation is that the timeout
// is waiting for the sha1 hashing to complete and the file may well // is waiting for the sha1 hashing to complete and the file may well
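
The comment above documents the workaround behind `upload_wait_per_gb`: when Amazon returns 408, 504 or 500 at the end of a big upload, the file often appears anyway, so rclone waits a configurable time per GiB and checks whether the object turned up before reporting failure. A rough standard-library sketch of that idea, not the actual checkUpload code (names, durations and the lookup callback are illustrative):

```
package example

import (
	"context"
	"errors"
	"time"
)

// waitForUpload polls lookup until the uploaded object shows up or the
// per-size deadline expires. waitPerGiB corresponds to upload_wait_per_gb.
func waitForUpload(ctx context.Context, sizeBytes int64, waitPerGiB time.Duration,
	lookup func(ctx context.Context) error) error {
	gib := float64(sizeBytes) / (1 << 30)
	deadline := time.Now().Add(time.Duration(gib * float64(waitPerGiB)))
	for {
		if err := lookup(ctx); err == nil {
			return nil // the file appeared after all: treat the upload as a success
		}
		if time.Now().After(deadline) {
			return errors.New("upload did not appear before the deadline")
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(10 * time.Second):
		}
	}
}
```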
@@ -626,7 +639,7 @@ func (f *Fs) checkUpload(ctx context.Context, resp *http.Response, in io.Reader,
// Put the object into the container // Put the object into the container
// //
// Copy the reader in to the new object which is returned. // Copy the reader in to the new object which is returned
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
@@ -668,7 +681,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
if ok { if ok {
return false, nil return false, nil
} }
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -683,11 +696,11 @@ func (f *Fs) Mkdir(ctx context.Context, dir string) error {
return err return err
} }
// Move src to this remote using server-side move operations. // Move src to this remote using server side move operations.
// //
// This is stored with the remote path given. // This is stored with the remote path given
// //
// It returns the destination Object and a possible error. // It returns the destination Object and a possible error
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -709,7 +722,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
if err != nil { if err != nil {
return nil, err return nil, err
} }
err = f.moveNode(ctx, srcObj.remote, dstLeaf, dstDirectoryID, srcObj.info, srcLeaf, srcDirectoryID, false) err = f.moveNode(srcObj.remote, dstLeaf, dstDirectoryID, srcObj.info, srcLeaf, srcDirectoryID, false)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -720,7 +733,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
dstObj fs.Object dstObj fs.Object
srcErr, dstErr error srcErr, dstErr error
) )
for i := 1; i <= f.ci.LowLevelRetries; i++ { for i := 1; i <= fs.Config.LowLevelRetries; i++ {
_, srcErr = srcObj.fs.NewObject(ctx, srcObj.remote) // try reading the object _, srcErr = srcObj.fs.NewObject(ctx, srcObj.remote) // try reading the object
if srcErr != nil && srcErr != fs.ErrorObjectNotFound { if srcErr != nil && srcErr != fs.ErrorObjectNotFound {
// exit if error on source // exit if error on source
@@ -735,7 +748,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
// finished if src not found and dst found // finished if src not found and dst found
break break
} }
fs.Debugf(src, "Wait for directory listing to update after move %d/%d", i, f.ci.LowLevelRetries) fs.Debugf(src, "Wait for directory listing to update after move %d/%d", i, fs.Config.LowLevelRetries)
time.Sleep(1 * time.Second) time.Sleep(1 * time.Second)
} }
return dstObj, dstErr return dstObj, dstErr
@@ -748,7 +761,7 @@ func (f *Fs) DirCacheFlush() {
} }
// DirMove moves src, srcRemote to this remote at dstRemote // DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations. // using server side move operations.
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -804,7 +817,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
var jsonStr string var jsonStr string
err = srcFs.pacer.Call(func() (bool, error) { err = srcFs.pacer.Call(func() (bool, error) {
jsonStr, err = srcInfo.GetMetadata() jsonStr, err = srcInfo.GetMetadata()
return srcFs.shouldRetry(ctx, nil, err) return srcFs.shouldRetry(nil, err)
}) })
if err != nil { if err != nil {
fs.Debugf(src, "DirMove error: error reading src metadata: %v", err) fs.Debugf(src, "DirMove error: error reading src metadata: %v", err)
@@ -816,7 +829,7 @@ func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string
return err return err
} }
err = f.moveNode(ctx, srcPath, dstLeaf, dstDirectoryID, srcInfo, srcLeaf, srcDirectoryID, true) err = f.moveNode(srcPath, dstLeaf, dstDirectoryID, srcInfo, srcLeaf, srcDirectoryID, true)
if err != nil { if err != nil {
return err return err
} }
@@ -841,7 +854,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
if check { if check {
// check directory is empty // check directory is empty
empty := true empty := true
_, err = f.listAll(ctx, rootID, "", false, false, func(node *acd.Node) bool { _, err = f.listAll(rootID, "", false, false, func(node *acd.Node) bool {
switch *node.Kind { switch *node.Kind {
case folderKind: case folderKind:
empty = false empty = false
@@ -866,7 +879,7 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = node.Trash() resp, err = node.Trash()
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -896,7 +909,7 @@ func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.MD5) return hash.Set(hash.MD5)
} }
// Copy src to this remote using server-side copy operations. // Copy src to this remote using server side copy operations.
// //
// This is stored with the remote path given // This is stored with the remote path given
// //
@@ -924,8 +937,8 @@ func (f *Fs) Hashes() hash.Set {
// Optional interface: Only implement this if you have a way of // Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the // deleting all the files quicker than just running Remove() on the
// result of List() // result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error { func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, dir, false) return f.purgeCheck(ctx, "", false)
} }
// ------------------------------------------------------------ // ------------------------------------------------------------
@@ -988,7 +1001,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var info *acd.File var info *acd.File
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
info, resp, err = folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf)) info, resp, err = folder.GetFile(o.fs.opt.Enc.FromStandardName(leaf))
return o.fs.shouldRetry(ctx, resp, err) return o.fs.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
if err == acd.ErrorNodeNotFound { if err == acd.ErrorNodeNotFound {
@@ -1002,6 +1015,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
// ModTime returns the modification time of the object // ModTime returns the modification time of the object
// //
//
// It attempts to read the objects mtime and if that isn't present the // It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers // LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time { func (o *Object) ModTime(ctx context.Context) time.Time {
@@ -1044,7 +1058,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} else { } else {
in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers) in, resp, err = file.OpenTempURLHeaders(o.fs.noAuthClient, headers)
} }
return o.fs.shouldRetry(ctx, resp, err) return o.fs.shouldRetry(resp, err)
}) })
return in, err return in, err
} }
@@ -1067,7 +1081,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if ok { if ok {
return false, nil return false, nil
} }
return o.fs.shouldRetry(ctx, resp, err) return o.fs.shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1077,70 +1091,70 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
} }
// Remove a node // Remove a node
func (f *Fs) removeNode(ctx context.Context, info *acd.Node) error { func (f *Fs) removeNode(info *acd.Node) error {
var resp *http.Response var resp *http.Response
var err error var err error
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = info.Trash() resp, err = info.Trash()
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
return err return err
} }
// Remove an object // Remove an object
func (o *Object) Remove(ctx context.Context) error { func (o *Object) Remove(ctx context.Context) error {
return o.fs.removeNode(ctx, o.info) return o.fs.removeNode(o.info)
} }
// Restore a node // Restore a node
func (f *Fs) restoreNode(ctx context.Context, info *acd.Node) (newInfo *acd.Node, err error) { func (f *Fs) restoreNode(info *acd.Node) (newInfo *acd.Node, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Restore() newInfo, resp, err = info.Restore()
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
return newInfo, err return newInfo, err
} }
// Changes name of given node // Changes name of given node
func (f *Fs) renameNode(ctx context.Context, info *acd.Node, newName string) (newInfo *acd.Node, err error) { func (f *Fs) renameNode(info *acd.Node, newName string) (newInfo *acd.Node, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
newInfo, resp, err = info.Rename(f.opt.Enc.FromStandardName(newName)) newInfo, resp, err = info.Rename(f.opt.Enc.FromStandardName(newName))
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
return newInfo, err return newInfo, err
} }
// Replaces one parent with another, effectively moving the file. Leaves other // Replaces one parent with another, effectively moving the file. Leaves other
// parents untouched. ReplaceParent cannot be used when the file is trashed. // parents untouched. ReplaceParent cannot be used when the file is trashed.
func (f *Fs) replaceParent(ctx context.Context, info *acd.Node, oldParentID string, newParentID string) error { func (f *Fs) replaceParent(info *acd.Node, oldParentID string, newParentID string) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := info.ReplaceParent(oldParentID, newParentID) resp, err := info.ReplaceParent(oldParentID, newParentID)
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
} }
// Adds one additional parent to object. // Adds one additional parent to object.
func (f *Fs) addParent(ctx context.Context, info *acd.Node, newParentID string) error { func (f *Fs) addParent(info *acd.Node, newParentID string) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := info.AddParent(newParentID) resp, err := info.AddParent(newParentID)
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
} }
// Remove given parent from object, leaving the other possible // Remove given parent from object, leaving the other possible
// parents untouched. Object can end up having no parents. // parents untouched. Object can end up having no parents.
func (f *Fs) removeParent(ctx context.Context, info *acd.Node, parentID string) error { func (f *Fs) removeParent(info *acd.Node, parentID string) error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := info.RemoveParent(parentID) resp, err := info.RemoveParent(parentID)
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(resp, err)
}) })
} }
// moveNode moves the node given from the srcLeaf,srcDirectoryID to // moveNode moves the node given from the srcLeaf,srcDirectoryID to
// the dstLeaf,dstDirectoryID // the dstLeaf,dstDirectoryID
func (f *Fs) moveNode(ctx context.Context, name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, srcLeaf, srcDirectoryID string, useDirErrorMsgs bool) (err error) { func (f *Fs) moveNode(name, dstLeaf, dstDirectoryID string, srcInfo *acd.Node, srcLeaf, srcDirectoryID string, useDirErrorMsgs bool) (err error) {
// fs.Debugf(name, "moveNode dst(%q,%s) <- src(%q,%s)", dstLeaf, dstDirectoryID, srcLeaf, srcDirectoryID) // fs.Debugf(name, "moveNode dst(%q,%s) <- src(%q,%s)", dstLeaf, dstDirectoryID, srcLeaf, srcDirectoryID)
cantMove := fs.ErrorCantMove cantMove := fs.ErrorCantMove
if useDirErrorMsgs { if useDirErrorMsgs {
@@ -1154,7 +1168,7 @@ func (f *Fs) moveNode(ctx context.Context, name, dstLeaf, dstDirectoryID string,
if srcLeaf != dstLeaf { if srcLeaf != dstLeaf {
// fs.Debugf(name, "renaming") // fs.Debugf(name, "renaming")
_, err = f.renameNode(ctx, srcInfo, dstLeaf) _, err = f.renameNode(srcInfo, dstLeaf)
if err != nil { if err != nil {
fs.Debugf(name, "Move: quick path rename failed: %v", err) fs.Debugf(name, "Move: quick path rename failed: %v", err)
goto OnConflict goto OnConflict
@@ -1162,7 +1176,7 @@ func (f *Fs) moveNode(ctx context.Context, name, dstLeaf, dstDirectoryID string,
} }
if srcDirectoryID != dstDirectoryID { if srcDirectoryID != dstDirectoryID {
// fs.Debugf(name, "trying parent replace: %s -> %s", oldParentID, newParentID) // fs.Debugf(name, "trying parent replace: %s -> %s", oldParentID, newParentID)
err = f.replaceParent(ctx, srcInfo, srcDirectoryID, dstDirectoryID) err = f.replaceParent(srcInfo, srcDirectoryID, dstDirectoryID)
if err != nil { if err != nil {
fs.Debugf(name, "Move: quick path parent replace failed: %v", err) fs.Debugf(name, "Move: quick path parent replace failed: %v", err)
return err return err
@@ -1175,13 +1189,13 @@ OnConflict:
fs.Debugf(name, "Could not directly rename file, presumably because there was a file with the same name already. Instead, the file will now be trashed where such operations do not cause errors. It will be restored to the correct parent after. If any of the subsequent calls fails, the rename/move will be in an invalid state.") fs.Debugf(name, "Could not directly rename file, presumably because there was a file with the same name already. Instead, the file will now be trashed where such operations do not cause errors. It will be restored to the correct parent after. If any of the subsequent calls fails, the rename/move will be in an invalid state.")
// fs.Debugf(name, "Trashing file") // fs.Debugf(name, "Trashing file")
err = f.removeNode(ctx, srcInfo) err = f.removeNode(srcInfo)
if err != nil { if err != nil {
fs.Debugf(name, "Move: remove node failed: %v", err) fs.Debugf(name, "Move: remove node failed: %v", err)
return err return err
} }
// fs.Debugf(name, "Renaming file") // fs.Debugf(name, "Renaming file")
_, err = f.renameNode(ctx, srcInfo, dstLeaf) _, err = f.renameNode(srcInfo, dstLeaf)
if err != nil { if err != nil {
fs.Debugf(name, "Move: rename node failed: %v", err) fs.Debugf(name, "Move: rename node failed: %v", err)
return err return err
@@ -1189,19 +1203,19 @@ OnConflict:
// note: replacing parent is forbidden by API, modifying them individually is // note: replacing parent is forbidden by API, modifying them individually is
// okay though // okay though
// fs.Debugf(name, "Adding target parent") // fs.Debugf(name, "Adding target parent")
err = f.addParent(ctx, srcInfo, dstDirectoryID) err = f.addParent(srcInfo, dstDirectoryID)
if err != nil { if err != nil {
fs.Debugf(name, "Move: addParent failed: %v", err) fs.Debugf(name, "Move: addParent failed: %v", err)
return err return err
} }
// fs.Debugf(name, "removing original parent") // fs.Debugf(name, "removing original parent")
err = f.removeParent(ctx, srcInfo, srcDirectoryID) err = f.removeParent(srcInfo, srcDirectoryID)
if err != nil { if err != nil {
fs.Debugf(name, "Move: removeParent failed: %v", err) fs.Debugf(name, "Move: removeParent failed: %v", err)
return err return err
} }
// fs.Debugf(name, "Restoring") // fs.Debugf(name, "Restoring")
_, err = f.restoreNode(ctx, srcInfo) _, err = f.restoreNode(srcInfo)
if err != nil { if err != nil {
fs.Debugf(name, "Move: restoreNode node failed: %v", err) fs.Debugf(name, "Move: restoreNode node failed: %v", err)
return err return err
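The OnConflict path above works around a name collision by trashing the node, renaming it while trashed, swapping its parents one at a time, and finally restoring it. The sketch below mirrors that sequence; the nodeOps interface is an assumption for illustration only and is not the acd SDK's method set.

package sketch

import "fmt"

// nodeOps abstracts the per-node calls used in the conflict path above.
type nodeOps interface {
	Trash() error
	Rename(newName string) error
	AddParent(parentID string) error
	RemoveParent(parentID string) error
	Restore() error
}

// moveViaTrash renames and re-parents a node while it is trashed, then
// restores it. If any step fails the node can be left trashed or
// mis-parented, which is why the diff logs a warning before starting.
func moveViaTrash(n nodeOps, newName, oldParentID, newParentID string) error {
	if err := n.Trash(); err != nil {
		return fmt.Errorf("trash: %w", err)
	}
	if err := n.Rename(newName); err != nil {
		return fmt.Errorf("rename: %w", err)
	}
	// Replacing a parent in one call is forbidden by the API, so add the
	// new parent and remove the old one individually.
	if err := n.AddParent(newParentID); err != nil {
		return fmt.Errorf("add parent: %w", err)
	}
	if err := n.RemoveParent(oldParentID); err != nil {
		return fmt.Errorf("remove parent: %w", err)
	}
	if err := n.Restore(); err != nil {
		return fmt.Errorf("restore: %w", err)
	}
	return nil
}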


@@ -1,6 +1,5 @@
// Test AmazonCloudDrive filesystem interface // Test AmazonCloudDrive filesystem interface
//go:build acd
// +build acd // +build acd
package amazonclouddrive_test package amazonclouddrive_test

File diff suppressed because it is too large


@@ -1,5 +1,4 @@
//go:build !plan9 && !solaris && !js && go1.18 // +build !plan9,!solaris,go1.13
// +build !plan9,!solaris,!js,go1.18
package azureblob package azureblob


@@ -1,7 +1,6 @@
// Test AzureBlob filesystem interface // Test AzureBlob filesystem interface
//go:build !plan9 && !solaris && !js && go1.18 // +build !plan9,!solaris,go1.13
// +build !plan9,!solaris,!js,go1.18
package azureblob package azureblob
@@ -10,7 +9,6 @@ import (
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/fstest/fstests"
"github.com/stretchr/testify/assert"
) )
// TestIntegration runs integration tests against the remote // TestIntegration runs integration tests against the remote
@@ -20,7 +18,7 @@ func TestIntegration(t *testing.T) {
NilObject: (*Object)(nil), NilObject: (*Object)(nil),
TiersToTest: []string{"Hot", "Cool"}, TiersToTest: []string{"Hot", "Cool"},
ChunkedUpload: fstests.ChunkedUploadConfig{ ChunkedUpload: fstests.ChunkedUploadConfig{
MinChunkSize: defaultChunkSize, MaxChunkSize: maxChunkSize,
}, },
}) })
} }
@@ -29,28 +27,11 @@ func (f *Fs) SetUploadChunkSize(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadChunkSize(cs) return f.setUploadChunkSize(cs)
} }
func (f *Fs) SetUploadCutoff(cs fs.SizeSuffix) (fs.SizeSuffix, error) {
return f.setUploadCutoff(cs)
}
var ( var (
_ fstests.SetUploadChunkSizer = (*Fs)(nil) _ fstests.SetUploadChunkSizer = (*Fs)(nil)
_ fstests.SetUploadCutoffer = (*Fs)(nil)
) )
func TestValidateAccessTier(t *testing.T) {
tests := map[string]struct {
accessTier string
want bool
}{
"hot": {"hot", true},
"HOT": {"HOT", true},
"Hot": {"Hot", true},
"cool": {"cool", true},
"archive": {"archive", true},
"empty": {"", false},
"unknown": {"unknown", false},
}
for name, test := range tests {
t.Run(name, func(t *testing.T) {
got := validateAccessTier(test.accessTier)
assert.Equal(t, test.want, got)
})
}
}


@@ -1,7 +1,6 @@
// Build for azureblob for unsupported platforms to stop go complaining // Build for azureblob for unsupported platforms to stop go complaining
// about "no buildable Go source files " // about "no buildable Go source files "
//go:build plan9 || solaris || js || !go1.18 // +build plan9 solaris !go1.13
// +build plan9 solaris js !go1.18
package azureblob package azureblob


@@ -1,13 +1,13 @@
// Package api provides types used by the Backblaze B2 API.
package api package api
import ( import (
"fmt" "fmt"
"path"
"strconv" "strconv"
"strings"
"time" "time"
"github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/version"
) )
// Error describes a B2 error response // Error describes a B2 error response
@@ -63,17 +63,16 @@ func (t *Timestamp) UnmarshalJSON(data []byte) error {
return nil return nil
} }
// HasVersion returns true if it looks like the passed filename has a timestamp on it. const versionFormat = "-v2006-01-02-150405.000"
//
// Note that the passed filename's timestamp may still be invalid even if this
// function returns true.
func HasVersion(remote string) bool {
return version.Match(remote)
}
// AddVersion adds the timestamp as a version string into the filename passed in. // AddVersion adds the timestamp as a version string into the filename passed in.
func (t Timestamp) AddVersion(remote string) string { func (t Timestamp) AddVersion(remote string) string {
return version.Add(remote, time.Time(t)) ext := path.Ext(remote)
base := remote[:len(remote)-len(ext)]
s := time.Time(t).Format(versionFormat)
// Replace the '.' with a '-'
s = strings.Replace(s, ".", "-", -1)
return base + s + ext
} }
// RemoveVersion removes the timestamp from a filename as a version string. // RemoveVersion removes the timestamp from a filename as a version string.
@@ -81,9 +80,24 @@ func (t Timestamp) AddVersion(remote string) string {
// It returns the new file name and a timestamp, or the old filename // It returns the new file name and a timestamp, or the old filename
// and a zero timestamp. // and a zero timestamp.
func RemoveVersion(remote string) (t Timestamp, newRemote string) { func RemoveVersion(remote string) (t Timestamp, newRemote string) {
time, newRemote := version.Remove(remote) newRemote = remote
t = Timestamp(time) ext := path.Ext(remote)
return base := remote[:len(remote)-len(ext)]
if len(base) < len(versionFormat) {
return
}
versionStart := len(base) - len(versionFormat)
// Check it ends in -xxx
if base[len(base)-4] != '-' {
return
}
// Replace with .xxx for parsing
base = base[:len(base)-4] + "." + base[len(base)-3:]
newT, err := time.Parse(versionFormat, base[versionStart:])
if err != nil {
return
}
return Timestamp(newT), base[:versionStart] + ext
} }
// IsZero returns true if the timestamp is uninitialized // IsZero returns true if the timestamp is uninitialized
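The older branch shown on the right builds the version suffix by hand: it formats the timestamp with the layout "-v2006-01-02-150405.000", swaps the '.' before the milliseconds for a '-', and splices the result in front of the file extension. A small self-contained sketch of that AddVersion behaviour, matching the expectations in the tests further down (the function name here is chosen for illustration):

package main

import (
	"fmt"
	"path"
	"strings"
	"time"
)

// versionFormat matches the layout used in the old code above.
const versionFormat = "-v2006-01-02-150405.000"

// addVersion inserts the timestamp before the file extension, with the
// '.' in the milliseconds replaced by '-'.
func addVersion(remote string, t time.Time) string {
	ext := path.Ext(remote)
	base := remote[:len(remote)-len(ext)]
	s := strings.Replace(t.Format(versionFormat), ".", "-", -1)
	return base + s + ext
}

func main() {
	t := time.Date(2001, 2, 3, 4, 5, 6, 123000000, time.UTC)
	fmt.Println(addVersion("potato.txt", t)) // potato-v2001-02-03-040506-123.txt
}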
@@ -239,7 +253,7 @@ type GetFileInfoRequest struct {
// If the original source of the file being uploaded has a last // If the original source of the file being uploaded has a last
// modified time concept, Backblaze recommends using // modified time concept, Backblaze recommends using
// src_last_modified_millis as the name, and a string holding the base // src_last_modified_millis as the name, and a string holding the base
// 10 number of milliseconds since midnight, January 1, 1970 // 10 number number of milliseconds since midnight, January 1, 1970
// UTC. This fits in a 64 bit integer such as the type "long" in the // UTC. This fits in a 64 bit integer such as the type "long" in the
// programming language Java. It is intended to be compatible with // programming language Java. It is intended to be compatible with
// Java's time long. For example, it can be passed directly into the // Java's time long. For example, it can be passed directly into the


@@ -13,6 +13,7 @@ import (
var ( var (
emptyT api.Timestamp emptyT api.Timestamp
t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z")) t0 = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123456789Z"))
t0r = api.Timestamp(fstest.Time("1970-01-01T01:01:01.123000000Z"))
t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z")) t1 = api.Timestamp(fstest.Time("2001-02-03T04:05:06.123000000Z"))
) )
@@ -35,6 +36,40 @@ func TestTimestampUnmarshalJSON(t *testing.T) {
assert.Equal(t, (time.Time)(t1), (time.Time)(tActual)) assert.Equal(t, (time.Time)(t1), (time.Time)(tActual))
} }
func TestTimestampAddVersion(t *testing.T) {
for _, test := range []struct {
t api.Timestamp
in string
expected string
}{
{t0, "potato.txt", "potato-v1970-01-01-010101-123.txt"},
{t1, "potato", "potato-v2001-02-03-040506-123"},
{t1, "", "-v2001-02-03-040506-123"},
} {
actual := test.t.AddVersion(test.in)
assert.Equal(t, test.expected, actual, test.in)
}
}
func TestTimestampRemoveVersion(t *testing.T) {
for _, test := range []struct {
in string
expectedT api.Timestamp
expectedRemote string
}{
{"potato.txt", emptyT, "potato.txt"},
{"potato-v1970-01-01-010101-123.txt", t0r, "potato.txt"},
{"potato-v2001-02-03-040506-123", t1, "potato"},
{"-v2001-02-03-040506-123", t1, ""},
{"potato-v2A01-02-03-040506-123", emptyT, "potato-v2A01-02-03-040506-123"},
{"potato-v2001-02-03-040506=123", emptyT, "potato-v2001-02-03-040506=123"},
} {
actualT, actualRemote := api.RemoveVersion(test.in)
assert.Equal(t, test.expectedT, actualT, test.in)
assert.Equal(t, test.expectedRemote, actualRemote, test.in)
}
}
func TestTimestampIsZero(t *testing.T) { func TestTimestampIsZero(t *testing.T) {
assert.True(t, emptyT.IsZero()) assert.True(t, emptyT.IsZero())
assert.False(t, t0.IsZero()) assert.False(t, t0.IsZero())


@@ -1,4 +1,4 @@
// Package b2 provides an interface to the Backblaze B2 object storage system. // Package b2 provides an interface to the Backblaze B2 object storage system
package b2 package b2
// FIXME should we remove sha1 checks from here as rclone now supports // FIXME should we remove sha1 checks from here as rclone now supports
@@ -9,7 +9,6 @@ import (
"bytes" "bytes"
"context" "context"
"crypto/sha1" "crypto/sha1"
"errors"
"fmt" "fmt"
gohash "hash" gohash "hash"
"io" "io"
@@ -20,6 +19,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/b2/api" "github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/accounting"
@@ -44,28 +44,25 @@ const (
timeHeader = headerPrefix + timeKey timeHeader = headerPrefix + timeKey
sha1Key = "large_file_sha1" sha1Key = "large_file_sha1"
sha1Header = "X-Bz-Content-Sha1" sha1Header = "X-Bz-Content-Sha1"
sha1InfoHeader = headerPrefix + sha1Key
testModeHeader = "X-Bz-Test-Mode" testModeHeader = "X-Bz-Test-Mode"
idHeader = "X-Bz-File-Id"
nameHeader = "X-Bz-File-Name"
timestampHeader = "X-Bz-Upload-Timestamp"
retryAfterHeader = "Retry-After" retryAfterHeader = "Retry-After"
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
maxSleep = 5 * time.Minute maxSleep = 5 * time.Minute
decayConstant = 1 // bigger for slower decay, exponential decayConstant = 1 // bigger for slower decay, exponential
maxParts = 10000 maxParts = 10000
maxVersions = 100 // maximum number of versions we search in --b2-versions mode maxVersions = 100 // maximum number of versions we search in --b2-versions mode
minChunkSize = 5 * fs.Mebi minChunkSize = 5 * fs.MebiByte
defaultChunkSize = 96 * fs.Mebi defaultChunkSize = 96 * fs.MebiByte
defaultUploadCutoff = 200 * fs.Mebi defaultUploadCutoff = 200 * fs.MebiByte
largeFileCopyCutoff = 4 * fs.Gibi // 5E9 is the max largeFileCopyCutoff = 4 * fs.GibiByte // 5E9 is the max
memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long memoryPoolFlushTime = fs.Duration(time.Minute) // flush the cached buffers after this long
memoryPoolUseMmap = false memoryPoolUseMmap = false
) )
// Globals // Globals
var ( var (
errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode") errNotWithVersions = errors.New("can't modify or delete files in --b2-versions mode")
errNotWithVersionAt = errors.New("can't modify or delete files in --b2-version-at mode")
) )
// Register with Fs // Register with Fs
@@ -76,15 +73,15 @@ func init() {
NewFs: NewFs, NewFs: NewFs,
Options: []fs.Option{{ Options: []fs.Option{{
Name: "account", Name: "account",
Help: "Account ID or Application Key ID.", Help: "Account ID or Application Key ID",
Required: true, Required: true,
}, { }, {
Name: "key", Name: "key",
Help: "Application Key.", Help: "Application Key",
Required: true, Required: true,
}, { }, {
Name: "endpoint", Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.", Help: "Endpoint for the service.\nLeave blank normally.",
Advanced: true, Advanced: true,
}, { }, {
Name: "test_mode", Name: "test_mode",
@@ -104,14 +101,9 @@ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration
Advanced: true, Advanced: true,
}, { }, {
Name: "versions", Name: "versions",
Help: "Include old versions in directory listings.\n\nNote that when using this no file write operations are permitted,\nso you can't upload files or delete them.", Help: "Include old versions in directory listings.\nNote that when using this no file write operations are permitted,\nso you can't upload files or delete them.",
Default: false, Default: false,
Advanced: true, Advanced: true,
}, {
Name: "version_at",
Help: "Show file versions as they were at the specified time.\n\nNote that when using this no file write operations are permitted,\nso you can't upload files or delete them.",
Default: fs.Time{},
Advanced: true,
}, { }, {
Name: "hard_delete", Name: "hard_delete",
Help: "Permanently delete files on remote removal, otherwise hide files.", Help: "Permanently delete files on remote removal, otherwise hide files.",
@@ -122,34 +114,32 @@ in the [b2 integrations checklist](https://www.backblaze.com/b2/docs/integration
Files above this size will be uploaded in chunks of "--b2-chunk-size". Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657 GiB (== 5 GB).`, This value should be set no larger than 4.657GiB (== 5GB).`,
Default: defaultUploadCutoff, Default: defaultUploadCutoff,
Advanced: true, Advanced: true,
}, { }, {
Name: "copy_cutoff", Name: "copy_cutoff",
Help: `Cutoff for switching to multipart copy. Help: `Cutoff for switching to multipart copy
Any files larger than this that need to be server-side copied will be Any files larger than this that need to be server side copied will be
copied in chunks of this size. copied in chunks of this size.
The minimum is 0 and the maximum is 4.6 GiB.`, The minimum is 0 and the maximum is 4.6GB.`,
Default: largeFileCopyCutoff, Default: largeFileCopyCutoff,
Advanced: true, Advanced: true,
}, { }, {
Name: "chunk_size", Name: "chunk_size",
Help: `Upload chunk size. Help: `Upload chunk size. Must fit in memory.
When uploading large files, chunk the file into this size. When uploading large files, chunk the file into this size. Note that
these chunks are buffered in memory and there might a maximum of
Must fit in memory. These chunks are buffered in memory and there "--transfers" chunks in progress at once. 5,000,000 Bytes is the
might a maximum of "--transfers" chunks in progress at once. minimum size.`,
5,000,000 Bytes is the minimum size.`,
Default: defaultChunkSize, Default: defaultChunkSize,
Advanced: true, Advanced: true,
}, { }, {
Name: "disable_checksum", Name: "disable_checksum",
Help: `Disable checksums for large (> upload cutoff) files. Help: `Disable checksums for large (> upload cutoff) files
Normally rclone will calculate the SHA1 checksum of the input before Normally rclone will calculate the SHA1 checksum of the input before
uploading it so it can add it to metadata on the object. This is great uploading it so it can add it to metadata on the object. This is great
@@ -163,18 +153,8 @@ to start uploading.`,
This is usually set to a Cloudflare CDN URL as Backblaze offers This is usually set to a Cloudflare CDN URL as Backblaze offers
free egress for data downloaded through the Cloudflare network. free egress for data downloaded through the Cloudflare network.
Rclone works with private buckets by sending an "Authorization" header. This is probably only useful for a public bucket.
If the custom endpoint rewrites the requests for authentication, Leave blank if you want to use the endpoint provided by Backblaze.`,
e.g., in Cloudflare Workers, this header needs to be handled properly.
Leave blank if you want to use the endpoint provided by Backblaze.
The URL provided here SHOULD have the protocol and SHOULD NOT have
a trailing slash or specify the /file/bucket subpath as rclone will
request files with "{download_url}/file/{bucket_name}/{path}".
Example:
> https://mysubdomain.mydomain.tld
(No trailing "/", "file" or "bucket")`,
Advanced: true, Advanced: true,
}, { }, {
Name: "download_auth_duration", Name: "download_auth_duration",
@@ -217,7 +197,6 @@ type Options struct {
Endpoint string `config:"endpoint"` Endpoint string `config:"endpoint"`
TestMode string `config:"test_mode"` TestMode string `config:"test_mode"`
Versions bool `config:"versions"` Versions bool `config:"versions"`
VersionAt fs.Time `config:"version_at"`
HardDelete bool `config:"hard_delete"` HardDelete bool `config:"hard_delete"`
UploadCutoff fs.SizeSuffix `config:"upload_cutoff"` UploadCutoff fs.SizeSuffix `config:"upload_cutoff"`
CopyCutoff fs.SizeSuffix `config:"copy_cutoff"` CopyCutoff fs.SizeSuffix `config:"copy_cutoff"`
@@ -235,7 +214,6 @@ type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on if any root string // the path we are working on if any
opt Options // parsed config options opt Options // parsed config options
ci *fs.ConfigInfo // global config
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the b2 server srv *rest.Client // the connection to the b2 server
rootBucket string // bucket part of root (if any) rootBucket string // bucket part of root (if any)
@@ -280,7 +258,7 @@ func (f *Fs) Root() string {
// String converts this Fs to a string // String converts this Fs to a string
func (f *Fs) String() string { func (f *Fs) String() string {
if f.rootBucket == "" { if f.rootBucket == "" {
return "B2 root" return fmt.Sprintf("B2 root")
} }
if f.rootDirectory == "" { if f.rootDirectory == "" {
return fmt.Sprintf("B2 bucket %s", f.rootBucket) return fmt.Sprintf("B2 bucket %s", f.rootBucket)
@@ -312,7 +290,7 @@ func (o *Object) split() (bucket, bucketPath string) {
// retryErrorCodes is a slice of error codes that we will retry // retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{ var retryErrorCodes = []int{
401, // Unauthorized (e.g. "Token has expired") 401, // Unauthorized (eg "Token has expired")
408, // Request Timeout 408, // Request Timeout
429, // Rate exceeded. 429, // Rate exceeded.
500, // Get occasional 500 Internal Server Error 500, // Get occasional 500 Internal Server Error
@@ -322,10 +300,7 @@ var retryErrorCodes = []int{
// shouldRetryNoAuth returns a boolean as to whether this resp and err // shouldRetryNoAuth returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func (f *Fs) shouldRetryNoReauth(ctx context.Context, resp *http.Response, err error) (bool, error) { func (f *Fs) shouldRetryNoReauth(resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// For 429 or 503 errors look at the Retry-After: header and // For 429 or 503 errors look at the Retry-After: header and
// set the retry appropriately, starting with a minimum of 1 // set the retry appropriately, starting with a minimum of 1
// second if it isn't set. // second if it isn't set.
@@ -356,7 +331,7 @@ func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (b
} }
return true, err return true, err
} }
return f.shouldRetryNoReauth(ctx, resp, err) return f.shouldRetryNoReauth(resp, err)
} }
// errorHandler parses a non 2xx error response into an error // errorHandler parses a non 2xx error response into an error
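shouldRetryNoReauth above treats 429 and 503 responses specially: it reads the Retry-After header and waits at least one second before retrying, and otherwise falls back to the fixed list of retryable status codes. A minimal sketch of that decision, under the assumption of a plain net/http response; it is an illustration, not the rclone pacer/fserrors API.

package sketch

import (
	"net/http"
	"strconv"
	"time"
)

// shouldRetryHTTP reports whether a response deserves a retry and how
// long to wait first: 429/503 use the server-suggested Retry-After delay
// (minimum one second), other listed codes retry immediately.
func shouldRetryHTTP(resp *http.Response, retryCodes []int) (retry bool, wait time.Duration) {
	if resp == nil {
		return false, 0
	}
	if resp.StatusCode == http.StatusTooManyRequests || resp.StatusCode == http.StatusServiceUnavailable {
		wait = time.Second
		if s := resp.Header.Get("Retry-After"); s != "" {
			if secs, err := strconv.Atoi(s); err == nil && secs > 1 {
				wait = time.Duration(secs) * time.Second
			}
		}
		return true, wait
	}
	for _, code := range retryCodes {
		if resp.StatusCode == code {
			return true, 0
		}
	}
	return false, 0
}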
@@ -381,7 +356,7 @@ func errorHandler(resp *http.Response) error {
func checkUploadChunkSize(cs fs.SizeSuffix) error { func checkUploadChunkSize(cs fs.SizeSuffix) error {
if cs < minChunkSize { if cs < minChunkSize {
return fmt.Errorf("%s is less than %s", cs, minChunkSize) return errors.Errorf("%s is less than %s", cs, minChunkSize)
} }
return nil return nil
} }
@@ -396,7 +371,7 @@ func (f *Fs) setUploadChunkSize(cs fs.SizeSuffix) (old fs.SizeSuffix, err error)
func checkUploadCutoff(opt *Options, cs fs.SizeSuffix) error { func checkUploadCutoff(opt *Options, cs fs.SizeSuffix) error {
if cs < opt.ChunkSize { if cs < opt.ChunkSize {
return fmt.Errorf("%v is less than chunk size %v", cs, opt.ChunkSize) return errors.Errorf("%v is less than chunk size %v", cs, opt.ChunkSize)
} }
return nil return nil
} }
@@ -416,24 +391,21 @@ func (f *Fs) setRoot(root string) {
} }
// NewFs constructs an Fs from the path, bucket:path // NewFs constructs an Fs from the path, bucket:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
if err != nil { if err != nil {
return nil, err return nil, err
} }
if opt.UploadCutoff < opt.ChunkSize {
opt.UploadCutoff = opt.ChunkSize
fs.Infof(nil, "b2: raising upload cutoff to chunk size: %v", opt.UploadCutoff)
}
err = checkUploadCutoff(opt, opt.UploadCutoff) err = checkUploadCutoff(opt, opt.UploadCutoff)
if err != nil { if err != nil {
return nil, fmt.Errorf("b2: upload cutoff: %w", err) return nil, errors.Wrap(err, "b2: upload cutoff")
} }
err = checkUploadChunkSize(opt.ChunkSize) err = checkUploadChunkSize(opt.ChunkSize)
if err != nil { if err != nil {
return nil, fmt.Errorf("b2: chunk size: %w", err) return nil, errors.Wrap(err, "b2: chunk size")
} }
if opt.Account == "" { if opt.Account == "" {
return nil, errors.New("account not found") return nil, errors.New("account not found")
@@ -444,22 +416,20 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
if opt.Endpoint == "" { if opt.Endpoint == "" {
opt.Endpoint = defaultEndpoint opt.Endpoint = defaultEndpoint
} }
ci := fs.GetConfig(ctx)
f := &Fs{ f := &Fs{
name: name, name: name,
opt: *opt, opt: *opt,
ci: ci, srv: rest.NewClient(fshttp.NewClient(fs.Config)).SetErrorHandler(errorHandler),
srv: rest.NewClient(fshttp.NewClient(ctx)).SetErrorHandler(errorHandler),
cache: bucket.NewCache(), cache: bucket.NewCache(),
_bucketID: make(map[string]string, 1), _bucketID: make(map[string]string, 1),
_bucketType: make(map[string]string, 1), _bucketType: make(map[string]string, 1),
uploads: make(map[string][]*api.GetUploadURLResponse), uploads: make(map[string][]*api.GetUploadURLResponse),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(ci.Transfers), uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
pool: pool.New( pool: pool.New(
time.Duration(opt.MemoryPoolFlushTime), time.Duration(opt.MemoryPoolFlushTime),
int(opt.ChunkSize), int(opt.ChunkSize),
ci.Transfers, fs.Config.Transfers,
opt.MemoryPoolUseMmap, opt.MemoryPoolUseMmap,
), ),
} }
@@ -469,7 +439,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
WriteMimeType: true, WriteMimeType: true,
BucketBased: true, BucketBased: true,
BucketBasedRootOK: true, BucketBasedRootOK: true,
}).Fill(ctx, f) }).Fill(f)
// Set the test flag if required // Set the test flag if required
if opt.TestMode != "" { if opt.TestMode != "" {
testMode := strings.TrimSpace(opt.TestMode) testMode := strings.TrimSpace(opt.TestMode)
@@ -478,7 +448,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
err = f.authorizeAccount(ctx) err = f.authorizeAccount(ctx)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to authorize account: %w", err) return nil, errors.Wrap(err, "failed to authorize account")
} }
// If this is a key limited to a single bucket, it must exist already // If this is a key limited to a single bucket, it must exist already
if f.rootBucket != "" && f.info.Allowed.BucketID != "" { if f.rootBucket != "" && f.info.Allowed.BucketID != "" {
@@ -487,7 +457,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, errors.New("bucket that application key is restricted to no longer exists") return nil, errors.New("bucket that application key is restricted to no longer exists")
} }
if allowedBucket != f.rootBucket { if allowedBucket != f.rootBucket {
return nil, fmt.Errorf("you must use bucket %q with this application key", allowedBucket) return nil, errors.Errorf("you must use bucket %q with this application key", allowedBucket)
} }
f.cache.MarkOK(f.rootBucket) f.cache.MarkOK(f.rootBucket)
f.setBucketID(f.rootBucket, f.info.Allowed.BucketID) f.setBucketID(f.rootBucket, f.info.Allowed.BucketID)
@@ -499,9 +469,12 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
f.setRoot(newRoot) f.setRoot(newRoot)
_, err := f.NewObject(ctx, leaf) _, err := f.NewObject(ctx, leaf)
if err != nil { if err != nil {
// File doesn't exist so return old f if err == fs.ErrorObjectNotFound {
f.setRoot(oldRoot) // File doesn't exist so return old f
return f, nil f.setRoot(oldRoot)
return f, nil
}
return nil, err
} }
// return an error with an fs which points to the parent // return an error with an fs which points to the parent
return f, fs.ErrorIsFile return f, fs.ErrorIsFile
@@ -524,10 +497,10 @@ func (f *Fs) authorizeAccount(ctx context.Context) error {
} }
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info) resp, err := f.srv.CallJSON(ctx, &opts, nil, &f.info)
return f.shouldRetryNoReauth(ctx, resp, err) return f.shouldRetryNoReauth(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("failed to authenticate: %w", err) return errors.Wrap(err, "failed to authenticate")
} }
f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken) f.srv.SetRoot(f.info.APIURL+"/b2api/v1").SetHeader("Authorization", f.info.AuthorizationToken)
return nil return nil
@@ -573,7 +546,7 @@ func (f *Fs) getUploadURL(ctx context.Context, bucket string) (upload *api.GetUp
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err) return nil, errors.Wrap(err, "failed to get upload URL")
} }
return upload, nil return upload, nil
} }
@@ -656,15 +629,15 @@ var errEndList = errors.New("end list")
// //
// (bucket, directory) is the starting directory // (bucket, directory) is the starting directory
// //
// If prefix is set then it is removed from all file names. // If prefix is set then it is removed from all file names
// //
// If addBucket is set then it adds the bucket to the start of the // If addBucket is set then it adds the bucket to the start of the
// remotes generated. // remotes generated
// //
// If recurse is set the function will recursively list. // If recurse is set the function will recursively list
// //
// If limit is > 0 then it limits to that many files (must be less // If limit is > 0 then it limits to that many files (must be less
// than 1000). // than 1000)
// //
// If hidden is set then it will list the hidden (deleted) files too. // If hidden is set then it will list the hidden (deleted) files too.
// //
@@ -703,12 +676,9 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
Method: "POST", Method: "POST",
Path: "/b2_list_file_names", Path: "/b2_list_file_names",
} }
if hidden || f.opt.VersionAt.IsSet() { if hidden {
opts.Path = "/b2_list_file_versions" opts.Path = "/b2_list_file_versions"
} }
lastFileName := ""
for { for {
var response api.ListFileNamesResponse var response api.ListFileNamesResponse
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
@@ -732,27 +702,13 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
remote := file.Name[len(prefix):] remote := file.Name[len(prefix):]
// Check for directory // Check for directory
isDirectory := remote == "" || strings.HasSuffix(remote, "/") isDirectory := remote == "" || strings.HasSuffix(remote, "/")
if isDirectory && len(remote) > 1 { if isDirectory {
remote = remote[:len(remote)-1] remote = remote[:len(remote)-1]
} }
if addBucket { if addBucket {
remote = path.Join(bucket, remote) remote = path.Join(bucket, remote)
} }
if f.opt.VersionAt.IsSet() {
if time.Time(file.UploadTimestamp).After(time.Time(f.opt.VersionAt)) {
// Ignore versions that were created after the specified time
continue
}
if file.Name == lastFileName {
// Ignore versions before the already returned version
continue
}
}
// Send object // Send object
lastFileName = file.Name
err = fn(remote, file, isDirectory) err = fn(remote, file, isDirectory)
if err != nil { if err != nil {
if err == errEndList { if err == errEndList {
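The --b2-version-at filtering added in this hunk skips any version uploaded after the requested time and, once a version of a name has been returned, skips the remaining (older) versions of that name. A reduced sketch of the same selection, assuming the input is ordered the way b2_list_file_versions returns it (grouped by name, newest first); the types and names here are placeholders.

package sketch

import "time"

// fileVersion is a minimal stand-in for a B2 file-version listing entry.
type fileVersion struct {
	Name     string
	Uploaded time.Time
}

// versionsAt keeps, for each file name, the newest version that already
// existed at the requested time.
func versionsAt(files []fileVersion, at time.Time) []fileVersion {
	var out []fileVersion
	lastName := ""
	for _, f := range files {
		if f.Uploaded.After(at) {
			continue // version created after the requested point in time
		}
		if f.Name == lastName {
			continue // a newer acceptable version was already kept
		}
		lastName = f.Name
		out = append(out, f)
	}
	return out
}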
@@ -1025,7 +981,7 @@ func (f *Fs) clearBucketID(bucket string) {
// Put the object into the bucket // Put the object into the bucket
// //
// Copy the reader in to the new object which is returned. // Copy the reader in to the new object which is returned
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
@@ -1080,7 +1036,7 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) error {
} }
} }
} }
return fmt.Errorf("failed to create bucket: %w", err) return errors.Wrap(err, "failed to create bucket")
} }
f.setBucketID(bucket, response.ID) f.setBucketID(bucket, response.ID)
f.setBucketType(bucket, response.Type) f.setBucketType(bucket, response.Type)
@@ -1115,7 +1071,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("failed to delete bucket: %w", err) return errors.Wrap(err, "failed to delete bucket")
} }
f.clearBucketID(bucket) f.clearBucketID(bucket)
f.clearBucketType(bucket) f.clearBucketType(bucket)
@@ -1156,7 +1112,7 @@ func (f *Fs) hide(ctx context.Context, bucket, bucketPath string) error {
return nil return nil
} }
} }
return fmt.Errorf("failed to hide %q: %w", bucketPath, err) return errors.Wrapf(err, "failed to hide %q", bucketPath)
} }
return nil return nil
} }
@@ -1177,7 +1133,7 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("failed to delete %q: %w", Name, err) return errors.Wrapf(err, "failed to delete %q", Name)
} }
return nil return nil
} }
@@ -1187,8 +1143,7 @@ func (f *Fs) deleteByID(ctx context.Context, ID, Name string) error {
// if oldOnly is true then it deletes only non current files. // if oldOnly is true then it deletes only non current files.
// //
// Implemented here so we can make sure we delete old versions. // Implemented here so we can make sure we delete old versions.
func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error { func (f *Fs) purge(ctx context.Context, bucket, directory string, oldOnly bool) error {
bucket, directory := f.split(dir)
if bucket == "" { if bucket == "" {
return errors.New("can't purge from root") return errors.New("can't purge from root")
} }
@@ -1205,14 +1160,17 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
} }
} }
var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool { var isUnfinishedUploadStale = func(timestamp api.Timestamp) bool {
return time.Since(time.Time(timestamp)).Hours() > 24 if time.Since(time.Time(timestamp)).Hours() > 24 {
return true
}
return false
} }
// Delete Config.Transfers in parallel // Delete Config.Transfers in parallel
toBeDeleted := make(chan *api.File, f.ci.Transfers) toBeDeleted := make(chan *api.File, fs.Config.Transfers)
var wg sync.WaitGroup var wg sync.WaitGroup
wg.Add(f.ci.Transfers) wg.Add(fs.Config.Transfers)
for i := 0; i < f.ci.Transfers; i++ { for i := 0; i < fs.Config.Transfers; i++ {
go func() { go func() {
defer wg.Done() defer wg.Done()
for object := range toBeDeleted { for object := range toBeDeleted {
@@ -1221,10 +1179,10 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
fs.Errorf(object.Name, "Can't create object %v", err) fs.Errorf(object.Name, "Can't create object %v", err)
continue continue
} }
tr := accounting.Stats(ctx).NewCheckingTransfer(oi, "deleting") tr := accounting.Stats(ctx).NewCheckingTransfer(oi)
err = f.deleteByID(ctx, object.ID, object.Name) err = f.deleteByID(ctx, object.ID, object.Name)
checkErr(err) checkErr(err)
tr.Done(ctx, err) tr.Done(err)
} }
}() }()
} }
@@ -1235,7 +1193,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
if err != nil { if err != nil {
fs.Errorf(object, "Can't create object %+v", err) fs.Errorf(object, "Can't create object %+v", err)
} }
tr := accounting.Stats(ctx).NewCheckingTransfer(oi, "checking") tr := accounting.Stats(ctx).NewCheckingTransfer(oi)
if oldOnly && last != remote { if oldOnly && last != remote {
// Check current version of the file // Check current version of the file
if object.Action == "hide" { if object.Action == "hide" {
@@ -1252,7 +1210,7 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
toBeDeleted <- object toBeDeleted <- object
} }
last = remote last = remote
tr.Done(ctx, nil) tr.Done(nil)
} }
return nil return nil
})) }))
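purge above deletes objects in parallel by feeding them to Transfers worker goroutines over a buffered channel and waiting on a WaitGroup before removing the bucket. A stripped-down sketch of that fan-out shape, with the rclone accounting removed and deleteFn standing in for the per-object delete call:

package sketch

import (
	"log"
	"sync"
)

// deleteAll deletes names concurrently with a fixed number of workers
// fed from a channel, then waits for all of them to finish.
func deleteAll(names []string, workers int, deleteFn func(string) error) {
	toDelete := make(chan string, workers)
	var wg sync.WaitGroup
	wg.Add(workers)
	for i := 0; i < workers; i++ {
		go func() {
			defer wg.Done()
			for name := range toDelete {
				if err := deleteFn(name); err != nil {
					log.Printf("failed to delete %q: %v", name, err)
				}
			}
		}()
	}
	for _, name := range names {
		toDelete <- name
	}
	close(toDelete)
	wg.Wait()
}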
@@ -1260,22 +1218,22 @@ func (f *Fs) purge(ctx context.Context, dir string, oldOnly bool) error {
wg.Wait() wg.Wait()
if !oldOnly { if !oldOnly {
checkErr(f.Rmdir(ctx, dir)) checkErr(f.Rmdir(ctx, ""))
} }
return errReturn return errReturn
} }
// Purge deletes all the files and directories including the old versions. // Purge deletes all the files and directories including the old versions.
func (f *Fs) Purge(ctx context.Context, dir string) error { func (f *Fs) Purge(ctx context.Context) error {
return f.purge(ctx, dir, false) return f.purge(ctx, f.rootBucket, f.rootDirectory, false)
} }
// CleanUp deletes all the hidden files. // CleanUp deletes all the hidden files.
func (f *Fs) CleanUp(ctx context.Context) error { func (f *Fs) CleanUp(ctx context.Context) error {
return f.purge(ctx, "", true) return f.purge(ctx, f.rootBucket, f.rootDirectory, true)
} }
// copy does a server-side copy from dstObj <- srcObj // copy does a server side copy from dstObj <- srcObj
// //
// If newInfo is nil then the metadata will be copied otherwise it // If newInfo is nil then the metadata will be copied otherwise it
// will be replaced with newInfo // will be replaced with newInfo
@@ -1332,11 +1290,11 @@ func (f *Fs) copy(ctx context.Context, dstObj *Object, srcObj *Object, newInfo *
return dstObj.decodeMetaDataFileInfo(&response) return dstObj.decodeMetaDataFileInfo(&response)
} }
// Copy src to this remote using server-side copy operations. // Copy src to this remote using server side copy operations.
// //
// This is stored with the remote path given. // This is stored with the remote path given
// //
// It returns the destination Object and a possible error. // It returns the destination Object and a possible error
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -1384,7 +1342,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
} }
var request = api.GetDownloadAuthorizationRequest{ var request = api.GetDownloadAuthorizationRequest{
BucketID: bucketID, BucketID: bucketID,
FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.rootDirectory, remote)), FileNamePrefix: f.opt.Enc.FromStandardPath(path.Join(f.root, remote)),
ValidDurationInSeconds: validDurationInSeconds, ValidDurationInSeconds: validDurationInSeconds,
} }
var response api.GetDownloadAuthorizationResponse var response api.GetDownloadAuthorizationResponse
@@ -1393,7 +1351,7 @@ func (f *Fs) getDownloadAuthorization(ctx context.Context, bucket, remote string
return f.shouldRetry(ctx, resp, err) return f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return "", fmt.Errorf("failed to get download authorization: %w", err) return "", errors.Wrap(err, "failed to get download authorization")
} }
return response.AuthorizationToken, nil return response.AuthorizationToken, nil
} }
@@ -1478,23 +1436,26 @@ func (o *Object) Size() int64 {
// Clean the SHA1 // Clean the SHA1
// //
// Make sure it is lower case. // Make sure it is lower case
// //
// Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html // Remove unverified prefix - see https://www.backblaze.com/b2/docs/uploading.html
// Some tools (e.g. Cyberduck) use this // Some tools (eg Cyberduck) use this
func cleanSHA1(sha1 string) string { func cleanSHA1(sha1 string) (out string) {
out = strings.ToLower(sha1)
const unverified = "unverified:" const unverified = "unverified:"
return strings.TrimPrefix(strings.ToLower(sha1), unverified) if strings.HasPrefix(out, unverified) {
out = out[len(unverified):]
}
return out
} }
// decodeMetaDataRaw sets the metadata from the data passed in // decodeMetaDataRaw sets the metadata from the data passed in
// //
// Sets // Sets
// // o.id
// o.id // o.modTime
// o.modTime // o.size
// o.size // o.sha1
// o.sha1
func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp api.Timestamp, Info map[string]string, mimeType string) (err error) { func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp api.Timestamp, Info map[string]string, mimeType string) (err error) {
o.id = ID o.id = ID
o.sha1 = SHA1 o.sha1 = SHA1
@@ -1513,11 +1474,10 @@ func (o *Object) decodeMetaDataRaw(ID, SHA1 string, Size int64, UploadTimestamp
// decodeMetaData sets the metadata in the object from an api.File // decodeMetaData sets the metadata in the object from an api.File
// //
// Sets // Sets
// // o.id
// o.id // o.modTime
// o.modTime // o.size
// o.size // o.sha1
// o.sha1
func (o *Object) decodeMetaData(info *api.File) (err error) { func (o *Object) decodeMetaData(info *api.File) (err error) {
return o.decodeMetaDataRaw(info.ID, info.SHA1, info.Size, info.UploadTimestamp, info.Info, info.ContentType) return o.decodeMetaDataRaw(info.ID, info.SHA1, info.Size, info.UploadTimestamp, info.Info, info.ContentType)
} }
@@ -1525,20 +1485,16 @@ func (o *Object) decodeMetaData(info *api.File) (err error) {
// decodeMetaDataFileInfo sets the metadata in the object from an api.FileInfo // decodeMetaDataFileInfo sets the metadata in the object from an api.FileInfo
// //
// Sets // Sets
// // o.id
// o.id // o.modTime
// o.modTime // o.size
// o.size // o.sha1
// o.sha1
func (o *Object) decodeMetaDataFileInfo(info *api.FileInfo) (err error) { func (o *Object) decodeMetaDataFileInfo(info *api.FileInfo) (err error) {
return o.decodeMetaDataRaw(info.ID, info.SHA1, info.Size, info.UploadTimestamp, info.Info, info.ContentType) return o.decodeMetaDataRaw(info.ID, info.SHA1, info.Size, info.UploadTimestamp, info.Info, info.ContentType)
} }
// getMetaDataListing gets the metadata from the object unconditionally from the listing // getMetaData gets the metadata from the object unconditionally
// func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
// Note that listing is a class C transaction which costs more than
// the B transaction used in getMetaData
func (o *Object) getMetaDataListing(ctx context.Context) (info *api.File, err error) {
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
maxSearched := 1 maxSearched := 1
var timestamp api.Timestamp var timestamp api.Timestamp
@@ -1571,27 +1527,13 @@ func (o *Object) getMetaDataListing(ctx context.Context) (info *api.File, err er
return info, nil return info, nil
} }
// getMetaData gets the metadata from the object unconditionally
func (o *Object) getMetaData(ctx context.Context) (info *api.File, err error) {
// If using versions and have a version suffix, need to list the directory to find the correct versions
if o.fs.opt.Versions {
timestamp, _ := api.RemoveVersion(o.remote)
if !timestamp.IsZero() {
return o.getMetaDataListing(ctx)
}
}
_, info, err = o.getOrHead(ctx, "HEAD", nil)
return info, err
}
// readMetaData gets the metadata if it hasn't already been fetched // readMetaData gets the metadata if it hasn't already been fetched
// //
// Sets // Sets
// // o.id
// o.id // o.modTime
// o.modTime // o.size
// o.size // o.sha1
// o.sha1
func (o *Object) readMetaData(ctx context.Context) (err error) { func (o *Object) readMetaData(ctx context.Context) (err error) {
if o.id != "" { if o.id != "" {
return nil return nil
@@ -1698,14 +1640,14 @@ func (file *openFile) Close() (err error) {
// Check to see we read the correct number of bytes // Check to see we read the correct number of bytes
if file.o.Size() != file.bytes { if file.o.Size() != file.bytes {
return fmt.Errorf("object corrupted on transfer - length mismatch (want %d got %d)", file.o.Size(), file.bytes) return errors.Errorf("object corrupted on transfer - length mismatch (want %d got %d)", file.o.Size(), file.bytes)
} }
// Check the SHA1 // Check the SHA1
receivedSHA1 := file.o.sha1 receivedSHA1 := file.o.sha1
calculatedSHA1 := fmt.Sprintf("%x", file.hash.Sum(nil)) calculatedSHA1 := fmt.Sprintf("%x", file.hash.Sum(nil))
if receivedSHA1 != "" && receivedSHA1 != calculatedSHA1 { if receivedSHA1 != "" && receivedSHA1 != calculatedSHA1 {
return fmt.Errorf("object corrupted on transfer - SHA1 mismatch (want %q got %q)", receivedSHA1, calculatedSHA1) return errors.Errorf("object corrupted on transfer - SHA1 mismatch (want %q got %q)", receivedSHA1, calculatedSHA1)
} }
return nil return nil
@@ -1714,11 +1656,12 @@ func (file *openFile) Close() (err error) {
// Check it satisfies the interfaces // Check it satisfies the interfaces
var _ io.ReadCloser = &openFile{} var _ io.ReadCloser = &openFile{}
func (o *Object) getOrHead(ctx context.Context, method string, options []fs.OpenOption) (resp *http.Response, info *api.File, err error) { // Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
opts := rest.Opts{ opts := rest.Opts{
Method: method, Method: "GET",
Options: options, Options: options,
NoResponse: method == "HEAD",
} }
// Use downloadUrl from backblaze if downloadUrl is not set // Use downloadUrl from backblaze if downloadUrl is not set
@@ -1729,81 +1672,44 @@ func (o *Object) getOrHead(ctx context.Context, method string, options []fs.Open
opts.RootURL = o.fs.opt.DownloadURL opts.RootURL = o.fs.opt.DownloadURL
} }
// Download by id if set and not using DownloadURL otherwise by name // Download by id if set otherwise by name
if o.id != "" && o.fs.opt.DownloadURL == "" { if o.id != "" {
opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id) opts.Path += "/b2api/v1/b2_download_file_by_id?fileId=" + urlEncode(o.id)
} else { } else {
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
opts.Path += "/file/" + urlEncode(o.fs.opt.Enc.FromStandardName(bucket)) + "/" + urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath)) opts.Path += "/file/" + urlEncode(o.fs.opt.Enc.FromStandardName(bucket)) + "/" + urlEncode(o.fs.opt.Enc.FromStandardPath(bucketPath))
} }
var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return o.fs.shouldRetry(ctx, resp, err) return o.fs.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
// 404 for files, 400 for directories return nil, errors.Wrap(err, "failed to open for download")
if resp != nil && (resp.StatusCode == http.StatusNotFound || resp.StatusCode == http.StatusBadRequest) {
return nil, nil, fs.ErrorObjectNotFound
}
return nil, nil, fmt.Errorf("failed to %s for download: %w", method, err)
} }
// NB resp may be Open here - don't return err != nil without closing // Parse the time out of the headers if possible
err = o.parseTimeString(resp.Header.Get(timeHeader))
// Convert the Headers into an api.File
var uploadTimestamp api.Timestamp
err = uploadTimestamp.UnmarshalJSON([]byte(resp.Header.Get(timestampHeader)))
if err != nil {
fs.Debugf(o, "Bad "+timestampHeader+" header: %v", err)
}
var Info = make(map[string]string)
for k, vs := range resp.Header {
k = strings.ToLower(k)
for _, v := range vs {
if strings.HasPrefix(k, headerPrefix) {
Info[k[len(headerPrefix):]] = v
}
}
}
info = &api.File{
ID: resp.Header.Get(idHeader),
Name: resp.Header.Get(nameHeader),
Action: "upload",
Size: resp.ContentLength,
UploadTimestamp: uploadTimestamp,
SHA1: resp.Header.Get(sha1Header),
ContentType: resp.Header.Get("Content-Type"),
Info: Info,
}
// When reading files from B2 via cloudflare using
// --b2-download-url cloudflare strips the Content-Length
// headers (presumably so it can inject stuff) so use the old
// length read from the listing.
if info.Size < 0 {
info.Size = o.size
}
return resp, info, nil
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
fs.FixRangeOption(options, o.size)
resp, info, err := o.getOrHead(ctx, "GET", options)
if err != nil {
return nil, err
}
// Don't check length or hash or metadata on partial content
if resp.StatusCode == http.StatusPartialContent {
return resp.Body, nil
}
err = o.decodeMetaData(info)
if err != nil { if err != nil {
_ = resp.Body.Close() _ = resp.Body.Close()
return nil, err return nil, err
} }
// Read sha1 from header if it isn't set
if o.sha1 == "" {
o.sha1 = resp.Header.Get(sha1Header)
fs.Debugf(o, "Reading sha1 from header - %q", o.sha1)
// if sha1 header is "none" (in big files), then need
// to read it from the metadata
if o.sha1 == "none" {
o.sha1 = resp.Header.Get(sha1InfoHeader)
fs.Debugf(o, "Reading sha1 from info - %q", o.sha1)
}
o.sha1 = cleanSHA1(o.sha1)
}
// Don't check length or hash on partial content
if resp.StatusCode == http.StatusPartialContent {
return resp.Body, nil
}
return newOpenFile(o, resp), nil return newOpenFile(o, resp), nil
} }
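getOrHead in the newer branch rebuilds an api.File from the download response headers (X-Bz-File-Id, X-Bz-File-Name, X-Bz-Upload-Timestamp, X-Bz-Content-Sha1 and the X-Bz-Info-* pairs), falling back to the previously listed size when a CDN strips Content-Length. The sketch below shows only the info-header collection step; the prefix constant is assumed to match headerPrefix in the backend.

package sketch

import (
	"net/http"
	"strings"
)

// infoFromHeaders collects the X-Bz-Info-* metadata pairs from a
// download response into a map keyed by the lower-cased suffix.
func infoFromHeaders(resp *http.Response) map[string]string {
	const prefix = "x-bz-info-" // assumed value of headerPrefix
	info := make(map[string]string)
	for k, vs := range resp.Header {
		k = strings.ToLower(k)
		if !strings.HasPrefix(k, prefix) {
			continue
		}
		for _, v := range vs {
			info[strings.TrimPrefix(k, prefix)] = v
		}
	}
	return info
}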
@@ -1849,9 +1755,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
if o.fs.opt.Versions { if o.fs.opt.Versions {
return errNotWithVersions return errNotWithVersions
} }
if o.fs.opt.VersionAt.IsSet() {
return errNotWithVersionAt
}
size := src.Size() size := src.Size()
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
@@ -2007,9 +1910,6 @@ func (o *Object) Remove(ctx context.Context) error {
if o.fs.opt.Versions { if o.fs.opt.Versions {
return errNotWithVersions return errNotWithVersions
} }
if o.fs.opt.VersionAt.IsSet() {
return errNotWithVersionAt
}
if o.fs.opt.HardDelete { if o.fs.opt.HardDelete {
return o.fs.deleteByID(ctx, o.id, bucketPath) return o.fs.deleteByID(ctx, o.id, bucketPath)
} }


@@ -14,15 +14,13 @@ import (
"io" "io"
"strings" "strings"
"sync" "sync"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/b2/api" "github.com/rclone/rclone/backend/b2/api"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/chunksize"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/atexit" "github.com/rclone/rclone/lib/atexit"
"github.com/rclone/rclone/lib/pool"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup" "golang.org/x/sync/errgroup"
) )
@@ -91,19 +89,21 @@ type largeUpload struct {
// newLargeUpload starts an upload of object o from in with metadata in src // newLargeUpload starts an upload of object o from in with metadata in src
// //
// If newInfo is set then metadata from that will be used instead of reading it from src // If newInfo is set then metadata from that will be used instead of reading it from src
func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, defaultChunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) { func (f *Fs) newLargeUpload(ctx context.Context, o *Object, in io.Reader, src fs.ObjectInfo, chunkSize fs.SizeSuffix, doCopy bool, newInfo *api.File) (up *largeUpload, err error) {
remote := o.remote
size := src.Size() size := src.Size()
parts := int64(0) parts := int64(0)
sha1SliceSize := int64(maxParts) sha1SliceSize := int64(maxParts)
chunkSize := defaultChunkSize
if size == -1 { if size == -1 {
fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize) fs.Debugf(o, "Streaming upload with --b2-chunk-size %s allows uploads of up to %s and will fail only when that limit is reached.", f.opt.ChunkSize, maxParts*f.opt.ChunkSize)
} else { } else {
chunkSize = chunksize.Calculator(o, size, maxParts, defaultChunkSize)
parts = size / int64(chunkSize) parts = size / int64(chunkSize)
if size%int64(chunkSize) != 0 { if size%int64(chunkSize) != 0 {
parts++ parts++
} }
if parts > maxParts {
return nil, errors.Errorf("%q too big (%d bytes) makes too many parts %d > %d - increase --b2-chunk-size", remote, size, parts, maxParts)
}
sha1SliceSize = parts sha1SliceSize = parts
} }
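The part count above is a plain ceiling division of the object size by the chunk size: one side of this hunk delegates to chunksize.Calculator so the chunk size grows until the count fits, while the other side returns an error once maxParts is exceeded. A small sketch of the arithmetic, with maxParts assumed to be B2's 10000-part limit:

package main

import "fmt"

const maxParts = 10000 // assumed B2 limit on parts per large file

// partsFor returns the number of chunks needed and whether the file
// would exceed maxParts at this chunk size.
func partsFor(size, chunkSize int64) (parts int64, tooMany bool) {
	parts = size / chunkSize
	if size%chunkSize != 0 {
		parts++ // round up for the final short chunk
	}
	return parts, parts > maxParts
}

func main() {
	// A 10 GiB file with 96 MiB chunks needs 107 parts, well under the limit.
	parts, tooMany := partsFor(10<<30, 96<<20)
	fmt.Println(parts, tooMany) // 107 false
}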
@@ -185,7 +185,7 @@ func (up *largeUpload) getUploadURL(ctx context.Context) (upload *api.GetUploadP
return up.f.shouldRetry(ctx, resp, err) return up.f.shouldRetry(ctx, resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to get upload URL: %w", err) return nil, errors.Wrap(err, "failed to get upload URL")
} }
} else { } else {
upload, up.uploads = up.uploads[0], up.uploads[1:] upload, up.uploads = up.uploads[0], up.uploads[1:]
@@ -230,14 +230,14 @@ func (up *largeUpload) transferChunk(ctx context.Context, part int64, body []byt
// //
// The number of bytes in the file being uploaded. Note that // The number of bytes in the file being uploaded. Note that
// this header is required; you cannot leave it out and just // this header is required; you cannot leave it out and just
// use chunked encoding. The minimum size of every part but // use chunked encoding. The minimum size of every part but
// the last one is 100 MB (100,000,000 bytes) // the last one is 100MB.
// //
// X-Bz-Content-Sha1 // X-Bz-Content-Sha1
// //
	// The SHA1 checksum of this part of the file. B2 will	// The SHA1 checksum of this part of the file. B2 will
// check this when the part is uploaded, to make sure that the // check this when the part is uploaded, to make sure that the
// data arrived correctly. The same SHA1 checksum must be // data arrived correctly. The same SHA1 checksum must be
// passed to b2_finish_large_file. // passed to b2_finish_large_file.
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
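The comment block above describes what each b2_upload_part request must carry: an explicit Content-Length (chunked encoding is not allowed) and the X-Bz-Content-Sha1 of the part, which B2 verifies and later expects again in b2_finish_large_file. A hedged sketch of assembling such a request with net/http; the URL and token are placeholders, not values rclone produces:

package main

import (
	"bytes"
	"crypto/sha1"
	"encoding/hex"
	"fmt"
	"net/http"
)

// buildPartRequest prepares one part-upload request with the headers B2 requires.
func buildPartRequest(uploadURL, authToken string, part int, chunk []byte) (*http.Request, error) {
	req, err := http.NewRequest("POST", uploadURL, bytes.NewReader(chunk))
	if err != nil {
		return nil, err
	}
	sum := sha1.Sum(chunk)
	req.Header.Set("Authorization", authToken)
	req.Header.Set("X-Bz-Part-Number", fmt.Sprint(part))
	req.Header.Set("X-Bz-Content-Sha1", hex.EncodeToString(sum[:]))
	req.ContentLength = int64(len(chunk)) // explicit length, no chunked encoding
	return req, nil
}

func main() {
	req, _ := buildPartRequest("https://pod.example/b2api/v1/b2_upload_part", "token", 1, []byte("data"))
	fmt.Println(req.Header.Get("X-Bz-Content-Sha1"), req.ContentLength)
}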
@@ -406,7 +406,7 @@ func (up *largeUpload) Stream(ctx context.Context, initialUploadBlock []byte) (e
up.size += int64(n) up.size += int64(n)
if part > maxParts { if part > maxParts {
up.f.putBuf(buf, false) up.f.putBuf(buf, false)
return fmt.Errorf("%q too big (%d bytes so far) makes too many parts %d > %d - increase --b2-chunk-size", up.o, up.size, up.parts, maxParts) return errors.Errorf("%q too big (%d bytes so far) makes too many parts %d > %d - increase --b2-chunk-size", up.o, up.size, up.parts, maxParts)
} }
part := part // for the closure part := part // for the closure
@@ -430,47 +430,18 @@ func (up *largeUpload) Upload(ctx context.Context) (err error) {
defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })() defer atexit.OnError(&err, func() { _ = up.cancel(ctx) })()
fs.Debugf(up.o, "Starting %s of large file in %d chunks (id %q)", up.what, up.parts, up.id) fs.Debugf(up.o, "Starting %s of large file in %d chunks (id %q)", up.what, up.parts, up.id)
var ( var (
g, gCtx = errgroup.WithContext(ctx) g, gCtx = errgroup.WithContext(ctx)
remaining = up.size remaining = up.size
uploadPool *pool.Pool
ci = fs.GetConfig(ctx)
) )
// If using large chunk size then make a temporary pool
if up.chunkSize <= int64(up.f.opt.ChunkSize) {
uploadPool = up.f.pool
} else {
uploadPool = pool.New(
time.Duration(up.f.opt.MemoryPoolFlushTime),
int(up.chunkSize),
ci.Transfers,
up.f.opt.MemoryPoolUseMmap,
)
defer uploadPool.Flush()
}
// Get an upload token and a buffer
getBuf := func() (buf []byte) {
up.f.getBuf(true)
if !up.doCopy {
buf = uploadPool.Get()
}
return buf
}
// Put an upload token and a buffer
putBuf := func(buf []byte) {
if !up.doCopy {
uploadPool.Put(buf)
}
up.f.putBuf(nil, true)
}
g.Go(func() error { g.Go(func() error {
for part := int64(1); part <= up.parts; part++ { for part := int64(1); part <= up.parts; part++ {
// Get a block of memory from the pool and token which limits concurrency. // Get a block of memory from the pool and token which limits concurrency.
buf := getBuf() buf := up.f.getBuf(up.doCopy)
// Fail fast, in case an errgroup managed function returns an error // Fail fast, in case an errgroup managed function returns an error
// gCtx is cancelled. There is no point in uploading all the other parts. // gCtx is cancelled. There is no point in uploading all the other parts.
if gCtx.Err() != nil { if gCtx.Err() != nil {
putBuf(buf) up.f.putBuf(buf, up.doCopy)
return nil return nil
} }
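The getBuf/putBuf wrappers on one side of this hunk pair a transfer token with a pooled buffer, so upload concurrency and buffer memory are bounded together (a server-side copy takes only the token, since it sends no body). A reduced sketch of that pairing, with rclone's token dispenser and lib/pool stood in for by buffered channels:

package main

import "fmt"

// uploader bounds concurrency with a token channel and reuses buffers
// through a free list; both are simplified stand-ins, not rclone's APIs.
type uploader struct {
	tokens chan struct{}
	free   chan []byte
}

func newUploader(workers, chunkSize int) *uploader {
	u := &uploader{
		tokens: make(chan struct{}, workers),
		free:   make(chan []byte, workers),
	}
	for i := 0; i < workers; i++ {
		u.tokens <- struct{}{}
		u.free <- make([]byte, chunkSize)
	}
	return u
}

func (u *uploader) getBuf() []byte  { <-u.tokens; return <-u.free }
func (u *uploader) putBuf(b []byte) { u.free <- b; u.tokens <- struct{}{} }

func main() {
	u := newUploader(2, 4)
	buf := u.getBuf()
	copy(buf, "data")
	fmt.Printf("%s\n", buf)
	u.putBuf(buf)
}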
@@ -484,14 +455,14 @@ func (up *largeUpload) Upload(ctx context.Context) (err error) {
buf = buf[:reqSize] buf = buf[:reqSize]
_, err = io.ReadFull(up.in, buf) _, err = io.ReadFull(up.in, buf)
if err != nil { if err != nil {
putBuf(buf) up.f.putBuf(buf, up.doCopy)
return err return err
} }
} }
part := part // for the closure part := part // for the closure
g.Go(func() (err error) { g.Go(func() (err error) {
defer putBuf(buf) defer up.f.putBuf(buf, up.doCopy)
if !up.doCopy { if !up.doCopy {
err = up.transferChunk(gCtx, part, buf) err = up.transferChunk(gCtx, part, buf)
} else { } else {


@@ -14,7 +14,7 @@ const (
timeFormat = `"` + time.RFC3339 + `"` timeFormat = `"` + time.RFC3339 + `"`
) )
// Time represents date and time information for the // Time represents represents date and time information for the
// box API, by using RFC3339 // box API, by using RFC3339
type Time time.Time type Time time.Time
@@ -36,13 +36,13 @@ func (t *Time) UnmarshalJSON(data []byte) error {
// Error is returned from box when things go wrong // Error is returned from box when things go wrong
type Error struct { type Error struct {
Type string `json:"type"` Type string `json:"type"`
Status int `json:"status"` Status int `json:"status"`
Code string `json:"code"` Code string `json:"code"`
ContextInfo json.RawMessage `json:"context_info"` ContextInfo json.RawMessage
HelpURL string `json:"help_url"` HelpURL string `json:"help_url"`
Message string `json:"message"` Message string `json:"message"`
RequestID string `json:"request_id"` RequestID string `json:"request_id"`
} }
// Error returns a string for the error and satisfies the error interface // Error returns a string for the error and satisfies the error interface
@@ -61,7 +61,7 @@ func (e *Error) Error() string {
var _ error = (*Error)(nil) var _ error = (*Error)(nil)
// ItemFields are the fields needed for FileInfo // ItemFields are the fields needed for FileInfo
var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status,shared_link,owned_by" var ItemFields = "type,id,sequence_id,etag,sha1,name,size,created_at,modified_at,content_created_at,content_modified_at,item_status,shared_link"
// Types of things in Item // Types of things in Item
const ( const (
@@ -90,12 +90,6 @@ type Item struct {
URL string `json:"url,omitempty"` URL string `json:"url,omitempty"`
Access string `json:"access,omitempty"` Access string `json:"access,omitempty"`
} `json:"shared_link"` } `json:"shared_link"`
OwnedBy struct {
Type string `json:"type"`
ID string `json:"id"`
Name string `json:"name"`
Login string `json:"login"`
} `json:"owned_by"`
} }
// ModTime returns the modification time of the item // ModTime returns the modification time of the item
@@ -109,11 +103,10 @@ func (i *Item) ModTime() (t time.Time) {
// FolderItems is returned from the GetFolderItems call // FolderItems is returned from the GetFolderItems call
type FolderItems struct { type FolderItems struct {
TotalCount int `json:"total_count"` TotalCount int `json:"total_count"`
Entries []Item `json:"entries"` Entries []Item `json:"entries"`
Offset int `json:"offset"` Offset int `json:"offset"`
Limit int `json:"limit"` Limit int `json:"limit"`
NextMarker *string `json:"next_marker,omitempty"`
Order []struct { Order []struct {
By string `json:"by"` By string `json:"by"`
Direction string `json:"direction"` Direction string `json:"direction"`
@@ -139,38 +132,6 @@ type UploadFile struct {
ContentModifiedAt Time `json:"content_modified_at"` ContentModifiedAt Time `json:"content_modified_at"`
} }
// PreUploadCheck is the request for upload preflight check
type PreUploadCheck struct {
Name string `json:"name"`
Parent Parent `json:"parent"`
Size *int64 `json:"size,omitempty"`
}
// PreUploadCheckResponse is the response from upload preflight check
// if successful
type PreUploadCheckResponse struct {
UploadToken string `json:"upload_token"`
UploadURL string `json:"upload_url"`
}
// PreUploadCheckConflict is returned in the ContextInfo error field
// from PreUploadCheck when the error code is "item_name_in_use"
type PreUploadCheckConflict struct {
Conflicts struct {
Type string `json:"type"`
ID string `json:"id"`
FileVersion struct {
Type string `json:"type"`
ID string `json:"id"`
Sha1 string `json:"sha1"`
} `json:"file_version"`
SequenceID string `json:"sequence_id"`
Etag string `json:"etag"`
Sha1 string `json:"sha1"`
Name string `json:"name"`
} `json:"conflicts"`
}
// UpdateFileModTime is used in Update File Info // UpdateFileModTime is used in Update File Info
type UpdateFileModTime struct { type UpdateFileModTime struct {
ContentModifiedAt Time `json:"content_modified_at"` ContentModifiedAt Time `json:"content_modified_at"`


@@ -14,19 +14,24 @@ import (
"crypto/rsa" "crypto/rsa"
"encoding/json" "encoding/json"
"encoding/pem" "encoding/pem"
"errors"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"log"
"net/http" "net/http"
"net/url" "net/url"
"os"
"path" "path"
"strconv" "strconv"
"strings" "strings"
"sync"
"sync/atomic"
"time" "time"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/jwtutil"
"github.com/youmark/pkcs8"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/box/api" "github.com/rclone/rclone/backend/box/api"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
@@ -37,13 +42,9 @@ import (
"github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/dircache" "github.com/rclone/rclone/lib/dircache"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/env"
"github.com/rclone/rclone/lib/jwtutil"
"github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
"github.com/youmark/pkcs8"
"golang.org/x/oauth2" "golang.org/x/oauth2"
"golang.org/x/oauth2/jws" "golang.org/x/oauth2/jws"
) )
@@ -56,6 +57,7 @@ const (
decayConstant = 2 // bigger for slower decay, exponential decayConstant = 2 // bigger for slower decay, exponential
rootURL = "https://api.box.com/2.0" rootURL = "https://api.box.com/2.0"
uploadURL = "https://upload.box.com/api/2.0" uploadURL = "https://upload.box.com/api/2.0"
listChunks = 1000 // chunk size to read directory listings
minUploadCutoff = 50000000 // upload cutoff can be no lower than this minUploadCutoff = 50000000 // upload cutoff can be no lower than this
defaultUploadCutoff = 50 * 1024 * 1024 defaultUploadCutoff = 50 * 1024 * 1024
tokenURL = "https://api.box.com/oauth2/token" tokenURL = "https://api.box.com/oauth2/token"
@@ -82,49 +84,49 @@ func init() {
Name: "box", Name: "box",
Description: "Box", Description: "Box",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) { Config: func(name string, m configmap.Mapper) {
jsonFile, ok := m.Get("box_config_file") jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type") boxSubType, boxSubTypeOk := m.Get("box_sub_type")
boxAccessToken, boxAccessTokenOk := m.Get("access_token")
var err error var err error
// If using box config.json, use JWT auth
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" { if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
err = refreshJWTToken(ctx, jsonFile, boxSubType, name, m) err = refreshJWTToken(jsonFile, boxSubType, name, m)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure token with jwt authentication: %w", err) log.Fatalf("Failed to configure token with jwt authentication: %v", err)
}
} else {
err = oauthutil.Config("box", name, m, oauthConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token with oauth authentication: %v", err)
} }
// Else, if not using an access token, use oauth2
} else if boxAccessToken == "" || !boxAccessTokenOk {
return oauthutil.ConfigOut("", &oauthutil.Options{
OAuth2Config: oauthConfig,
})
} }
return nil, nil
}, },
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Box App Client Id.\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Box App Client Secret\nLeave blank normally.",
}, {
Name: "root_folder_id", Name: "root_folder_id",
Help: "Fill in for rclone to use a non root folder as its starting point.", Help: "Fill in for rclone to use a non root folder as its starting point.",
Default: "0", Default: "0",
Advanced: true, Advanced: true,
}, { }, {
Name: "box_config_file", Name: "box_config_file",
Help: "Box App config.json location\n\nLeave blank normally." + env.ShellExpandHelp, Help: "Box App config.json location\nLeave blank normally." + env.ShellExpandHelp,
}, {
Name: "access_token",
Help: "Box App Primary Access Token\n\nLeave blank normally.",
}, { }, {
Name: "box_sub_type", Name: "box_sub_type",
Default: "user", Default: "user",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "user", Value: "user",
Help: "Rclone should act on behalf of a user.", Help: "Rclone should act on behalf of a user",
}, { }, {
Value: "enterprise", Value: "enterprise",
Help: "Rclone should act on behalf of a service account.", Help: "Rclone should act on behalf of a service account",
}}, }},
}, { }, {
Name: "upload_cutoff", Name: "upload_cutoff",
Help: "Cutoff for switching to multipart upload (>= 50 MiB).", Help: "Cutoff for switching to multipart upload (>= 50MB).",
Default: fs.SizeSuffix(defaultUploadCutoff), Default: fs.SizeSuffix(defaultUploadCutoff),
Advanced: true, Advanced: true,
}, { }, {
@@ -132,16 +134,6 @@ func init() {
Help: "Max number of times to try committing a multipart file.", Help: "Max number of times to try committing a multipart file.",
Default: 100, Default: 100,
Advanced: true, Advanced: true,
}, {
Name: "list_chunk",
Default: 1000,
Help: "Size of listing chunk 1-1000.",
Advanced: true,
}, {
Name: "owned_by",
Default: "",
Help: "Only show items owned by the login (email address) passed in.",
Advanced: true,
}, { }, {
Name: config.ConfigEncoding, Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp, Help: config.ConfigEncodingHelp,
@@ -157,39 +149,39 @@ func init() {
encoder.EncodeBackSlash | encoder.EncodeBackSlash |
encoder.EncodeRightSpace | encoder.EncodeRightSpace |
encoder.EncodeInvalidUtf8), encoder.EncodeInvalidUtf8),
}}...), }},
}) })
} }
func refreshJWTToken(ctx context.Context, jsonFile string, boxSubType string, name string, m configmap.Mapper) error { func refreshJWTToken(jsonFile string, boxSubType string, name string, m configmap.Mapper) error {
jsonFile = env.ShellExpand(jsonFile) jsonFile = env.ShellExpand(jsonFile)
boxConfig, err := getBoxConfig(jsonFile) boxConfig, err := getBoxConfig(jsonFile)
if err != nil { if err != nil {
return fmt.Errorf("get box config: %w", err) log.Fatalf("Failed to configure token: %v", err)
} }
privateKey, err := getDecryptedPrivateKey(boxConfig) privateKey, err := getDecryptedPrivateKey(boxConfig)
if err != nil { if err != nil {
return fmt.Errorf("get decrypted private key: %w", err) log.Fatalf("Failed to configure token: %v", err)
} }
claims, err := getClaims(boxConfig, boxSubType) claims, err := getClaims(boxConfig, boxSubType)
if err != nil { if err != nil {
return fmt.Errorf("get claims: %w", err) log.Fatalf("Failed to configure token: %v", err)
} }
signingHeaders := getSigningHeaders(boxConfig) signingHeaders := getSigningHeaders(boxConfig)
queryParams := getQueryParams(boxConfig) queryParams := getQueryParams(boxConfig)
client := fshttp.NewClient(ctx) client := fshttp.NewClient(fs.Config)
err = jwtutil.Config("box", name, claims, signingHeaders, queryParams, privateKey, m, client) err = jwtutil.Config("box", name, claims, signingHeaders, queryParams, privateKey, m, client)
return err return err
} }
func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) { func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
file, err := os.ReadFile(configFile) file, err := ioutil.ReadFile(configFile)
if err != nil { if err != nil {
return nil, fmt.Errorf("box: failed to read Box config: %w", err) return nil, errors.Wrap(err, "box: failed to read Box config")
} }
err = json.Unmarshal(file, &boxConfig) err = json.Unmarshal(file, &boxConfig)
if err != nil { if err != nil {
return nil, fmt.Errorf("box: failed to parse Box config: %w", err) return nil, errors.Wrap(err, "box: failed to parse Box config")
} }
return boxConfig, nil return boxConfig, nil
} }
@@ -197,7 +189,7 @@ func getBoxConfig(configFile string) (boxConfig *api.ConfigJSON, err error) {
func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *jws.ClaimSet, err error) { func getClaims(boxConfig *api.ConfigJSON, boxSubType string) (claims *jws.ClaimSet, err error) {
val, err := jwtutil.RandomHex(20) val, err := jwtutil.RandomHex(20)
if err != nil { if err != nil {
return nil, fmt.Errorf("box: failed to generate random string for jti: %w", err) return nil, errors.Wrap(err, "box: failed to generate random string for jti")
} }
claims = &jws.ClaimSet{ claims = &jws.ClaimSet{
@@ -238,12 +230,12 @@ func getDecryptedPrivateKey(boxConfig *api.ConfigJSON) (key *rsa.PrivateKey, err
block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey)) block, rest := pem.Decode([]byte(boxConfig.BoxAppSettings.AppAuth.PrivateKey))
if len(rest) > 0 { if len(rest) > 0 {
return nil, fmt.Errorf("box: extra data included in private key: %w", err) return nil, errors.Wrap(err, "box: extra data included in private key")
} }
rsaKey, err := pkcs8.ParsePKCS8PrivateKey(block.Bytes, []byte(boxConfig.BoxAppSettings.AppAuth.Passphrase)) rsaKey, err := pkcs8.ParsePKCS8PrivateKey(block.Bytes, []byte(boxConfig.BoxAppSettings.AppAuth.Passphrase))
if err != nil { if err != nil {
return nil, fmt.Errorf("box: failed to decrypt private key: %w", err) return nil, errors.Wrap(err, "box: failed to decrypt private key")
} }
return rsaKey.(*rsa.PrivateKey), nil return rsaKey.(*rsa.PrivateKey), nil
@@ -255,9 +247,6 @@ type Options struct {
CommitRetries int `config:"commit_retries"` CommitRetries int `config:"commit_retries"`
Enc encoder.MultiEncoder `config:"encoding"` Enc encoder.MultiEncoder `config:"encoding"`
RootFolderID string `config:"root_folder_id"` RootFolderID string `config:"root_folder_id"`
AccessToken string `config:"access_token"`
ListChunk int `config:"list_chunk"`
OwnedBy string `config:"owned_by"`
} }
// Fs represents a remote box // Fs represents a remote box
@@ -266,7 +255,7 @@ type Fs struct {
root string // the path we are working on root string // the path we are working on
opt Options // parsed options opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
srv *rest.Client // the connection to the server srv *rest.Client // the connection to the one drive server
dirCache *dircache.DirCache // Map of directory path to directory id dirCache *dircache.DirCache // Map of directory path to directory id
pacer *fs.Pacer // pacer for API calls pacer *fs.Pacer // pacer for API calls
tokenRenewer *oauthutil.Renew // renew the token on expiry tokenRenewer *oauthutil.Renew // renew the token on expiry
@@ -327,23 +316,13 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) { func shouldRetry(resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
authRetry := false authRetry := false
if resp != nil && resp.StatusCode == 401 && strings.Contains(resp.Header.Get("Www-Authenticate"), "expired_token") { if resp != nil && resp.StatusCode == 401 && len(resp.Header["Www-Authenticate"]) == 1 && strings.Index(resp.Header["Www-Authenticate"][0], "expired_token") >= 0 {
authRetry = true authRetry = true
fs.Debugf(nil, "Should retry: %v", err) fs.Debugf(nil, "Should retry: %v", err)
} }
// Box API errors which should be retries
if apiErr, ok := err.(*api.Error); ok && apiErr.Code == "operation_blocked_temporary" {
fs.Debugf(nil, "Retrying API error %v", err)
return true, err
}
return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return authRetry || fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
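Beyond the generic retry rules, the Box retry logic above has two special cases on one side of the diff: a 401 whose Www-Authenticate header mentions expired_token, and an api.Error whose code is operation_blocked_temporary. A trimmed sketch of just those two checks, using a stand-in error type rather than the backend's api.Error:

package main

import (
	"fmt"
	"net/http"
	"strings"
)

// apiError is a trimmed stand-in for backend/box/api.Error.
type apiError struct{ Code string }

func (e *apiError) Error() string { return e.Code }

func boxShouldRetry(resp *http.Response, err error) bool {
	// Expired token: retry after the token renewer refreshes credentials.
	if resp != nil && resp.StatusCode == 401 &&
		strings.Contains(resp.Header.Get("Www-Authenticate"), "expired_token") {
		return true
	}
	// Temporary server-side block: retry the call.
	if apiErr, ok := err.(*apiError); ok && apiErr.Code == "operation_blocked_temporary" {
		return true
	}
	return false
}

func main() {
	fmt.Println(boxShouldRetry(nil, &apiError{Code: "operation_blocked_temporary"})) // true
}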
@@ -358,8 +337,8 @@ func (f *Fs) readMetaDataForPath(ctx context.Context, path string) (info *api.It
return nil, err return nil, err
} }
found, err := f.listAll(ctx, directoryID, false, true, true, func(item *api.Item) bool { found, err := f.listAll(ctx, directoryID, false, true, func(item *api.Item) bool {
if strings.EqualFold(item.Name, leaf) { if item.Name == leaf {
info = item info = item
return true return true
} }
@@ -392,7 +371,8 @@ func errorHandler(resp *http.Response) error {
} }
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.Background()
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
@@ -401,60 +381,46 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
if opt.UploadCutoff < minUploadCutoff { if opt.UploadCutoff < minUploadCutoff {
return nil, fmt.Errorf("box: upload cutoff (%v) must be greater than equal to %v", opt.UploadCutoff, fs.SizeSuffix(minUploadCutoff)) return nil, errors.Errorf("box: upload cutoff (%v) must be greater than equal to %v", opt.UploadCutoff, fs.SizeSuffix(minUploadCutoff))
} }
root = parsePath(root) root = parsePath(root)
oAuthClient, ts, err := oauthutil.NewClient(name, m, oauthConfig)
client := fshttp.NewClient(ctx) if err != nil {
var ts *oauthutil.TokenSource return nil, errors.Wrap(err, "failed to configure Box")
// If not using an accessToken, create an oauth client and tokensource
if opt.AccessToken == "" {
client, ts, err = oauthutil.NewClient(ctx, name, m, oauthConfig)
if err != nil {
return nil, fmt.Errorf("failed to configure Box: %w", err)
}
} }
ci := fs.GetConfig(ctx)
f := &Fs{ f := &Fs{
name: name, name: name,
root: root, root: root,
opt: *opt, opt: *opt,
srv: rest.NewClient(client).SetRoot(rootURL), srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant))),
uploadToken: pacer.NewTokenDispenser(ci.Transfers), uploadToken: pacer.NewTokenDispenser(fs.Config.Transfers),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CaseInsensitive: true, CaseInsensitive: true,
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
}).Fill(ctx, f) }).Fill(f)
f.srv.SetErrorHandler(errorHandler) f.srv.SetErrorHandler(errorHandler)
// If using an accessToken, set the Authorization header
if f.opt.AccessToken != "" {
f.srv.SetHeader("Authorization", "Bearer "+f.opt.AccessToken)
}
jsonFile, ok := m.Get("box_config_file") jsonFile, ok := m.Get("box_config_file")
boxSubType, boxSubTypeOk := m.Get("box_sub_type") boxSubType, boxSubTypeOk := m.Get("box_sub_type")
if ts != nil { // If using box config.json and JWT, renewing should just refresh the token and
// If using box config.json and JWT, renewing should just refresh the token and // should do so whether there are uploads pending or not.
// should do so whether there are uploads pending or not. if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" {
if ok && boxSubTypeOk && jsonFile != "" && boxSubType != "" { f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { err := refreshJWTToken(jsonFile, boxSubType, name, m)
err := refreshJWTToken(ctx, jsonFile, boxSubType, name, m) return err
return err })
}) f.tokenRenewer.Start()
f.tokenRenewer.Start() } else {
} else { // Renew the token in the background
// Renew the token in the background f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error {
f.tokenRenewer = oauthutil.NewRenew(f.String(), ts, func() error { _, err := f.readMetaDataForPath(ctx, "")
_, err := f.readMetaDataForPath(ctx, "") return err
return err })
})
}
} }
// Get rootFolderID // Get rootFolderID
@@ -483,7 +449,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
} }
return nil, err return nil, err
} }
f.features.Fill(ctx, &tempF) f.features.Fill(&tempF)
// XXX: update the old f here instead of returning tempF, since // XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver. // `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182 // See https://github.com/rclone/rclone/issues/2182
@@ -533,8 +499,8 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// FindLeaf finds a directory of name leaf in the folder with ID pathID // FindLeaf finds a directory of name leaf in the folder with ID pathID
func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) { func (f *Fs) FindLeaf(ctx context.Context, pathID, leaf string) (pathIDOut string, found bool, err error) {
// Find the leaf in pathID // Find the leaf in pathID
found, err = f.listAll(ctx, pathID, true, false, true, func(item *api.Item) bool { found, err = f.listAll(ctx, pathID, true, false, func(item *api.Item) bool {
if strings.EqualFold(item.Name, leaf) { if item.Name == leaf {
pathIDOut = item.ID pathIDOut = item.ID
return true return true
} }
@@ -568,7 +534,7 @@ func (f *Fs) CreateDir(ctx context.Context, pathID, leaf string) (newID string,
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info) resp, err = f.srv.CallJSON(ctx, &opts, &mkdir, &info)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
//fmt.Printf("...Error %v\n", err) //fmt.Printf("...Error %v\n", err)
@@ -589,29 +555,26 @@ type listAllFn func(*api.Item) bool
// Lists the directory required calling the user function on each item found // Lists the directory required calling the user function on each item found
// //
// If the user fn ever returns true then it early exits with found = true // If the user fn ever returns true then it early exits with found = true
func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, activeOnly bool, fn listAllFn) (found bool, err error) { func (f *Fs) listAll(ctx context.Context, dirID string, directoriesOnly bool, filesOnly bool, fn listAllFn) (found bool, err error) {
opts := rest.Opts{ opts := rest.Opts{
Method: "GET", Method: "GET",
Path: "/folders/" + dirID + "/items", Path: "/folders/" + dirID + "/items",
Parameters: fieldsValue(), Parameters: fieldsValue(),
} }
opts.Parameters.Set("limit", strconv.Itoa(f.opt.ListChunk)) opts.Parameters.Set("limit", strconv.Itoa(listChunks))
opts.Parameters.Set("usemarker", "true") offset := 0
var marker *string
OUTER: OUTER:
for { for {
if marker != nil { opts.Parameters.Set("offset", strconv.Itoa(offset))
opts.Parameters.Set("marker", *marker)
}
var result api.FolderItems var result api.FolderItems
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return found, fmt.Errorf("couldn't list files: %w", err) return found, errors.Wrap(err, "couldn't list files")
} }
for i := range result.Entries { for i := range result.Entries {
item := &result.Entries[i] item := &result.Entries[i]
@@ -627,10 +590,7 @@ OUTER:
fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type) fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type)
continue continue
} }
if activeOnly && item.ItemStatus != api.ItemStatusActive { if item.ItemStatus != api.ItemStatusActive {
continue
}
if f.opt.OwnedBy != "" && f.opt.OwnedBy != item.OwnedBy.Login {
continue continue
} }
item.Name = f.opt.Enc.ToStandardName(item.Name) item.Name = f.opt.Enc.ToStandardName(item.Name)
@@ -639,8 +599,8 @@ OUTER:
break OUTER break OUTER
} }
} }
marker = result.NextMarker offset += result.Limit
if marker == nil { if offset >= result.TotalCount {
break break
} }
} }
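One side of the listing loop above uses Box's marker paging (usemarker=true, feed each response's next_marker into the next request until it comes back absent), while the other pages by offset against the total count. A minimal sketch of the marker loop shape against a hypothetical listPage callback standing in for the paced CallJSON:

package main

import "fmt"

// page is a trimmed stand-in for api.FolderItems.
type page struct {
	Entries    []string
	NextMarker *string
}

// listAllEntries keeps requesting pages until no next marker is returned.
func listAllEntries(listPage func(marker *string) page) (all []string) {
	var marker *string
	for {
		p := listPage(marker)
		all = append(all, p.Entries...)
		marker = p.NextMarker
		if marker == nil {
			break
		}
	}
	return all
}

func main() {
	m := "m1"
	pages := []page{{Entries: []string{"a", "b"}, NextMarker: &m}, {Entries: []string{"c"}}}
	i := 0
	fmt.Println(listAllEntries(func(*string) page { p := pages[i]; i++; return p })) // [a b c]
}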
@@ -662,7 +622,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
return nil, err return nil, err
} }
var iErr error var iErr error
_, err = f.listAll(ctx, directoryID, false, false, true, func(info *api.Item) bool { _, err = f.listAll(ctx, directoryID, false, false, func(info *api.Item) bool {
remote := path.Join(dir, info.Name) remote := path.Join(dir, info.Name)
if info.Type == api.ItemTypeFolder { if info.Type == api.ItemTypeFolder {
// cache the directory ID for later lookups // cache the directory ID for later lookups
@@ -692,7 +652,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Creates from the parameters passed in a half finished Object which // Creates from the parameters passed in a half finished Object which
// must have setMetaData called on it // must have setMetaData called on it
// //
// Returns the object, leaf, directoryID and error. // Returns the object, leaf, directoryID and error
// //
// Used to create new objects // Used to create new objects
func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) { func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time, size int64) (o *Object, leaf string, directoryID string, err error) {
@@ -709,80 +669,22 @@ func (f *Fs) createObject(ctx context.Context, remote string, modTime time.Time,
return o, leaf, directoryID, nil return o, leaf, directoryID, nil
} }
// preUploadCheck checks to see if a file can be uploaded
//
// It returns "", nil if the file is good to go
// It returns "ID", nil if the file must be updated
func (f *Fs) preUploadCheck(ctx context.Context, leaf, directoryID string, size int64) (ID string, err error) {
check := api.PreUploadCheck{
Name: f.opt.Enc.FromStandardName(leaf),
Parent: api.Parent{
ID: directoryID,
},
}
if size >= 0 {
check.Size = &size
}
opts := rest.Opts{
Method: "OPTIONS",
Path: "/files/content/",
}
var result api.PreUploadCheckResponse
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &check, &result)
return shouldRetry(ctx, resp, err)
})
if err != nil {
if apiErr, ok := err.(*api.Error); ok && apiErr.Code == "item_name_in_use" {
var conflict api.PreUploadCheckConflict
err = json.Unmarshal(apiErr.ContextInfo, &conflict)
if err != nil {
return "", fmt.Errorf("pre-upload check: JSON decode failed: %w", err)
}
if conflict.Conflicts.Type != api.ItemTypeFile {
return "", fmt.Errorf("pre-upload check: can't overwrite non file with file: %w", err)
}
return conflict.Conflicts.ID, nil
}
return "", fmt.Errorf("pre-upload check: %w", err)
}
return "", nil
}
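When the preflight OPTIONS call fails with item_name_in_use, the conflicting file's ID arrives inside the error's context_info, which the code above decodes into PreUploadCheckConflict. A small sketch of that decode against an illustrative payload; the field names follow the struct above, the values are invented:

package main

import (
	"encoding/json"
	"fmt"
)

// conflict is a trimmed stand-in for api.PreUploadCheckConflict.
type conflict struct {
	Conflicts struct {
		Type string `json:"type"`
		ID   string `json:"id"`
		Sha1 string `json:"sha1"`
	} `json:"conflicts"`
}

func main() {
	// Illustrative context_info body for an item_name_in_use error.
	raw := []byte(`{"conflicts":{"type":"file","id":"12345","sha1":"da39a3ee5e6b4b0d3255bfef95601890afd80709"}}`)
	var c conflict
	if err := json.Unmarshal(raw, &c); err != nil {
		panic(err)
	}
	fmt.Println(c.Conflicts.ID) // 12345 - the existing file to update instead of create
}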
// Put the object // Put the object
// //
// Copy the reader in to the new object which is returned. // Copy the reader in to the new object which is returned
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
// If directory doesn't exist, file doesn't exist so can upload existingObj, err := f.newObjectWithInfo(ctx, src.Remote(), nil)
remote := src.Remote() switch err {
leaf, directoryID, err := f.dirCache.FindPath(ctx, remote, false) case nil:
if err != nil { return existingObj, existingObj.Update(ctx, in, src, options...)
if err == fs.ErrorDirNotFound { case fs.ErrorObjectNotFound:
return f.PutUnchecked(ctx, in, src, options...) // Not found so create it
} return f.PutUnchecked(ctx, in, src)
default:
return nil, err return nil, err
} }
// Preflight check the upload, which returns the ID if the
// object already exists
ID, err := f.preUploadCheck(ctx, leaf, directoryID, src.Size())
if err != nil {
return nil, err
}
if ID == "" {
return f.PutUnchecked(ctx, in, src, options...)
}
// If object exists then create a skeleton one with just id
o := &Object{
fs: f,
remote: remote,
id: ID,
}
return o, o.Update(ctx, in, src, options...)
} }
// PutStream uploads to the remote path with the modTime given of indeterminate size // PutStream uploads to the remote path with the modTime given of indeterminate size
@@ -792,9 +694,9 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
// PutUnchecked the object into the container // PutUnchecked the object into the container
// //
// This will produce an error if the object already exists. // This will produce an error if the object already exists
// //
// Copy the reader in to the new object which is returned. // Copy the reader in to the new object which is returned
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
@@ -824,7 +726,7 @@ func (f *Fs) deleteObject(ctx context.Context, id string) error {
} }
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
} }
@@ -851,10 +753,10 @@ func (f *Fs) purgeCheck(ctx context.Context, dir string, check bool) error {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts) resp, err = f.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("rmdir failed: %w", err) return errors.Wrap(err, "rmdir failed")
} }
f.dirCache.FlushDir(dir) f.dirCache.FlushDir(dir)
if err != nil { if err != nil {
@@ -875,11 +777,11 @@ func (f *Fs) Precision() time.Duration {
return time.Second return time.Second
} }
// Copy src to this remote using server-side copy operations. // Copy src to this remote using server side copy operations.
// //
// This is stored with the remote path given. // This is stored with the remote path given
// //
// It returns the destination Object and a possible error. // It returns the destination Object and a possible error
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -897,8 +799,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
srcPath := srcObj.fs.rootSlash() + srcObj.remote srcPath := srcObj.fs.rootSlash() + srcObj.remote
dstPath := f.rootSlash() + remote dstPath := f.rootSlash() + remote
if strings.EqualFold(srcPath, dstPath) { if strings.ToLower(srcPath) == strings.ToLower(dstPath) {
return nil, fmt.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath) return nil, errors.Errorf("can't copy %q -> %q as are same name when lowercase", srcPath, dstPath)
} }
// Create temporary object // Create temporary object
@@ -923,7 +825,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
var info *api.Item var info *api.Item
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &info) resp, err = f.srv.CallJSON(ctx, &opts, &copyFile, &info)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -940,8 +842,8 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
// Optional interface: Only implement this if you have a way of // Optional interface: Only implement this if you have a way of
// deleting all the files quicker than just running Remove() on the // deleting all the files quicker than just running Remove() on the
// result of List() // result of List()
func (f *Fs) Purge(ctx context.Context, dir string) error { func (f *Fs) Purge(ctx context.Context) error {
return f.purgeCheck(ctx, dir, false) return f.purgeCheck(ctx, "", false)
} }
// move a file or folder // move a file or folder
@@ -961,7 +863,7 @@ func (f *Fs) move(ctx context.Context, endpoint, id, leaf, directoryID string) (
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &move, &info) resp, err = f.srv.CallJSON(ctx, &opts, &move, &info)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -979,10 +881,10 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &user) resp, err = f.srv.CallJSON(ctx, &opts, nil, &user)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to read user info: %w", err) return nil, errors.Wrap(err, "failed to read user info")
} }
// FIXME max upload size would be useful to use in Update // FIXME max upload size would be useful to use in Update
usage = &fs.Usage{ usage = &fs.Usage{
@@ -993,11 +895,11 @@ func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
return usage, nil return usage, nil
} }
// Move src to this remote using server-side move operations. // Move src to this remote using server side move operations.
// //
// This is stored with the remote path given. // This is stored with the remote path given
// //
// It returns the destination Object and a possible error. // It returns the destination Object and a possible error
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -1029,7 +931,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
} }
// DirMove moves src, srcRemote to this remote at dstRemote // DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations. // using server side move operations.
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -1092,12 +994,12 @@ func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration,
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info) resp, err = f.srv.CallJSON(ctx, &opts, &shareLink, &info)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
return info.SharedLink.URL, err return info.SharedLink.URL, err
} }
// deletePermanently permanently deletes a trashed file // deletePermanently permenently deletes a trashed file
func (f *Fs) deletePermanently(ctx context.Context, itemType, id string) error { func (f *Fs) deletePermanently(ctx context.Context, itemType, id string) error {
opts := rest.Opts{ opts := rest.Opts{
Method: "DELETE", Method: "DELETE",
@@ -1110,42 +1012,51 @@ func (f *Fs) deletePermanently(ctx context.Context, itemType, id string) error {
} }
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
resp, err := f.srv.Call(ctx, &opts) resp, err := f.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
} }
// CleanUp empties the trash // CleanUp empties the trash
func (f *Fs) CleanUp(ctx context.Context) (err error) { func (f *Fs) CleanUp(ctx context.Context) (err error) {
var ( opts := rest.Opts{
deleteErrors = int64(0) Method: "GET",
concurrencyControl = make(chan struct{}, fs.GetConfig(ctx).Checkers) Path: "/folders/trash/items",
wg sync.WaitGroup Parameters: url.Values{
) "fields": []string{"type", "id"},
_, err = f.listAll(ctx, "trash", false, false, false, func(item *api.Item) bool { },
if item.Type == api.ItemTypeFolder || item.Type == api.ItemTypeFile { }
wg.Add(1) opts.Parameters.Set("limit", strconv.Itoa(listChunks))
concurrencyControl <- struct{}{} offset := 0
go func() { for {
defer func() { opts.Parameters.Set("offset", strconv.Itoa(offset))
<-concurrencyControl
wg.Done() var result api.FolderItems
}() var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(resp, err)
})
if err != nil {
return errors.Wrap(err, "couldn't list trash")
}
for i := range result.Entries {
item := &result.Entries[i]
if item.Type == api.ItemTypeFolder || item.Type == api.ItemTypeFile {
err := f.deletePermanently(ctx, item.Type, item.ID) err := f.deletePermanently(ctx, item.Type, item.ID)
if err != nil { if err != nil {
fs.Errorf(f, "failed to delete trash item %q (%q): %v", item.Name, item.ID, err) return errors.Wrap(err, "failed to delete file")
atomic.AddInt64(&deleteErrors, 1)
} }
}() } else {
} else { fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type)
fs.Debugf(f, "Ignoring %q - unknown type %q", item.Name, item.Type) continue
}
}
offset += result.Limit
if offset >= result.TotalCount {
break
} }
return false
})
wg.Wait()
if deleteErrors != 0 {
return fmt.Errorf("failed to delete %d trash items", deleteErrors)
} }
return err return
} }
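The concurrent variant of CleanUp above fans the permanent deletes out across goroutines, caps concurrency with a buffered channel sized to the configured number of checkers, and only counts failures instead of aborting the walk. A reduced sketch of that pattern with a stubbed delete function:

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// deleteAll deletes items with at most `workers` concurrent calls to del,
// reporting how many deletes failed rather than stopping at the first error.
func deleteAll(items []string, workers int, del func(string) error) error {
	var (
		errCount int64
		sem      = make(chan struct{}, workers) // concurrency limit
		wg       sync.WaitGroup
	)
	for _, item := range items {
		item := item
		wg.Add(1)
		sem <- struct{}{}
		go func() {
			defer func() { <-sem; wg.Done() }()
			if err := del(item); err != nil {
				atomic.AddInt64(&errCount, 1)
			}
		}()
	}
	wg.Wait()
	if errCount != 0 {
		return fmt.Errorf("failed to delete %d items", errCount)
	}
	return nil
}

func main() {
	err := deleteAll([]string{"a", "b", "c"}, 2, func(s string) error { return nil })
	fmt.Println(err) // <nil>
}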
// DirCacheFlush resets the directory cache - used in testing as an // DirCacheFlush resets the directory cache - used in testing as an
@@ -1199,11 +1110,8 @@ func (o *Object) Size() int64 {
// setMetaData sets the metadata from info // setMetaData sets the metadata from info
func (o *Object) setMetaData(info *api.Item) (err error) { func (o *Object) setMetaData(info *api.Item) (err error) {
if info.Type == api.ItemTypeFolder {
return fs.ErrorIsDir
}
if info.Type != api.ItemTypeFile { if info.Type != api.ItemTypeFile {
return fmt.Errorf("%q is %q: %w", o.remote, info.Type, fs.ErrorNotAFile) return errors.Wrapf(fs.ErrorNotAFile, "%q is %q", o.remote, info.Type)
} }
o.hasMetaData = true o.hasMetaData = true
o.size = int64(info.Size) o.size = int64(info.Size)
@@ -1235,6 +1143,7 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
// ModTime returns the modification time of the object // ModTime returns the modification time of the object
// //
//
// It attempts to read the objects mtime and if that isn't present the // It attempts to read the objects mtime and if that isn't present the
// LastModified returned in the http headers // LastModified returned in the http headers
func (o *Object) ModTime(ctx context.Context) time.Time { func (o *Object) ModTime(ctx context.Context) time.Time {
@@ -1259,7 +1168,7 @@ func (o *Object) setModTime(ctx context.Context, modTime time.Time) (*api.Item,
var info *api.Item var info *api.Item
err := o.fs.pacer.Call(func() (bool, error) { err := o.fs.pacer.Call(func() (bool, error) {
resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info) resp, err := o.fs.srv.CallJSON(ctx, &opts, &update, &info)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
return info, err return info, err
} }
@@ -1292,7 +1201,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1302,7 +1211,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// upload does a single non-multipart upload // upload does a single non-multipart upload
// //
// This is recommended for less than 50 MiB of content // This is recommended for less than 50 MB of content
func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time, options ...fs.OpenOption) (err error) { func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID string, modTime time.Time, options ...fs.OpenOption) (err error) {
upload := api.UploadFile{ upload := api.UploadFile{
Name: o.fs.opt.Enc.FromStandardName(leaf), Name: o.fs.opt.Enc.FromStandardName(leaf),
@@ -1332,27 +1241,25 @@ func (o *Object) upload(ctx context.Context, in io.Reader, leaf, directoryID str
} }
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result) resp, err = o.fs.srv.CallJSON(ctx, &opts, &upload, &result)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return err return err
} }
if result.TotalCount != 1 || len(result.Entries) != 1 { if result.TotalCount != 1 || len(result.Entries) != 1 {
return fmt.Errorf("failed to upload %v - not sure why", o) return errors.Errorf("failed to upload %v - not sure why", o)
} }
return o.setMetaData(&result.Entries[0]) return o.setMetaData(&result.Entries[0])
} }
// Update the object with the contents of the io.Reader, modTime and size // Update the object with the contents of the io.Reader, modTime and size
// //
// If existing is set then it updates the object rather than creating a new one. // If existing is set then it updates the object rather than creating a new one
// //
// The new object may have been created if an error is returned. // The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) { func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (err error) {
if o.fs.tokenRenewer != nil { o.fs.tokenRenewer.Start()
o.fs.tokenRenewer.Start() defer o.fs.tokenRenewer.Stop()
defer o.fs.tokenRenewer.Stop()
}
size := src.Size() size := src.Size()
modTime := src.ModTime(ctx) modTime := src.ModTime(ctx)


@@ -1,4 +1,4 @@
// multipart upload for box // multpart upload for box
package box package box
@@ -8,7 +8,6 @@ import (
"crypto/sha1" "crypto/sha1"
"encoding/base64" "encoding/base64"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
@@ -16,6 +15,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/box/api" "github.com/rclone/rclone/backend/box/api"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/accounting"
@@ -44,7 +44,7 @@ func (o *Object) createUploadSession(ctx context.Context, leaf, directoryID stri
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response) resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, &response)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
return return
} }
@@ -74,7 +74,7 @@ func (o *Object) uploadPart(ctx context.Context, SessionID string, offset, total
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
opts.Body = wrap(bytes.NewReader(chunk)) opts.Body = wrap(bytes.NewReader(chunk))
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &response)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -109,10 +109,10 @@ outer:
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil) resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
if err != nil { if err != nil {
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
} }
body, err = rest.ReadBody(resp) body, err = rest.ReadBody(resp)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
delay := defaultDelay delay := defaultDelay
var why string var why string
@@ -140,7 +140,7 @@ outer:
} }
} }
default: default:
return nil, fmt.Errorf("unknown HTTP status return %q (%d)", resp.Status, resp.StatusCode) return nil, errors.Errorf("unknown HTTP status return %q (%d)", resp.Status, resp.StatusCode)
} }
} }
fs.Debugf(o, "commit multipart upload failed %d/%d - trying again in %d seconds (%s)", tries+1, maxTries, delay, why) fs.Debugf(o, "commit multipart upload failed %d/%d - trying again in %d seconds (%s)", tries+1, maxTries, delay, why)
@@ -151,7 +151,7 @@ outer:
} }
err = json.Unmarshal(body, &result) err = json.Unmarshal(body, &result)
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't decode commit response: %q: %w", body, err) return nil, errors.Wrapf(err, "couldn't decode commit response: %q", body)
} }
return result, nil return result, nil
} }
@@ -167,7 +167,7 @@ func (o *Object) abortUpload(ctx context.Context, SessionID string) (err error)
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
return err return err
} }
@@ -177,7 +177,7 @@ func (o *Object) uploadMultipart(ctx context.Context, in io.Reader, leaf, direct
// Create upload session // Create upload session
session, err := o.createUploadSession(ctx, leaf, directoryID, size) session, err := o.createUploadSession(ctx, leaf, directoryID, size)
if err != nil { if err != nil {
return fmt.Errorf("multipart upload create session failed: %w", err) return errors.Wrap(err, "multipart upload create session failed")
} }
chunkSize := session.PartSize chunkSize := session.PartSize
fs.Debugf(o, "Multipart upload session started for %d parts of size %v", session.TotalParts, fs.SizeSuffix(chunkSize)) fs.Debugf(o, "Multipart upload session started for %d parts of size %v", session.TotalParts, fs.SizeSuffix(chunkSize))
@@ -222,7 +222,7 @@ outer:
// Read the chunk // Read the chunk
_, err = io.ReadFull(in, buf) _, err = io.ReadFull(in, buf)
if err != nil { if err != nil {
err = fmt.Errorf("multipart upload failed to read source: %w", err) err = errors.Wrap(err, "multipart upload failed to read source")
break outer break outer
} }
@@ -238,7 +238,7 @@ outer:
fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, session.TotalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize)) fs.Debugf(o, "Uploading part %d/%d offset %v/%v part size %v", part+1, session.TotalParts, fs.SizeSuffix(position), fs.SizeSuffix(size), fs.SizeSuffix(chunkSize))
partResponse, err := o.uploadPart(ctx, session.ID, position, size, buf, wrap, options...) partResponse, err := o.uploadPart(ctx, session.ID, position, size, buf, wrap, options...)
if err != nil { if err != nil {
err = fmt.Errorf("multipart upload failed to upload part: %w", err) err = errors.Wrap(err, "multipart upload failed to upload part")
select { select {
case errs <- err: case errs <- err:
default: default:
@@ -266,11 +266,11 @@ outer:
// Finalise the upload session // Finalise the upload session
result, err := o.commitUpload(ctx, session.ID, parts, modTime, hash.Sum(nil)) result, err := o.commitUpload(ctx, session.ID, parts, modTime, hash.Sum(nil))
if err != nil { if err != nil {
return fmt.Errorf("multipart upload failed to finalize: %w", err) return errors.Wrap(err, "multipart upload failed to finalize")
} }
if result.TotalCount != 1 || len(result.Entries) != 1 { if result.TotalCount != 1 || len(result.Entries) != 1 {
return fmt.Errorf("multipart upload failed %v - not sure why", o) return errors.Errorf("multipart upload failed %v - not sure why", o)
} }
return o.setMetaData(&result.Entries[0]) return o.setMetaData(&result.Entries[0])
} }

backend/cache/cache.go vendored

@@ -1,12 +1,9 @@
//go:build !plan9 && !js // +build !plan9
// +build !plan9,!js
// Package cache implements a virtual provider to cache existing remotes.
package cache package cache
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"io" "io"
"math" "math"
@@ -21,6 +18,7 @@ import (
"syscall" "syscall"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt" "github.com/rclone/rclone/backend/crypt"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache" "github.com/rclone/rclone/fs/cache"
@@ -70,26 +68,26 @@ func init() {
CommandHelp: commandHelp, CommandHelp: commandHelp,
Options: []fs.Option{{ Options: []fs.Option{{
Name: "remote", Name: "remote",
Help: "Remote to cache.\n\nNormally should contain a ':' and a path, e.g. \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).", Help: "Remote to cache.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true, Required: true,
}, { }, {
Name: "plex_url", Name: "plex_url",
Help: "The URL of the Plex server.", Help: "The URL of the Plex server",
}, { }, {
Name: "plex_username", Name: "plex_username",
Help: "The username of the Plex user.", Help: "The username of the Plex user",
}, { }, {
Name: "plex_password", Name: "plex_password",
Help: "The password of the Plex user.", Help: "The password of the Plex user",
IsPassword: true, IsPassword: true,
}, { }, {
Name: "plex_token", Name: "plex_token",
Help: "The plex token for authentication - auto set normally.", Help: "The plex token for authentication - auto set normally",
Hide: fs.OptionHideBoth, Hide: fs.OptionHideBoth,
Advanced: true, Advanced: true,
}, { }, {
Name: "plex_insecure", Name: "plex_insecure",
Help: "Skip all certificate verification when connecting to the Plex server.", Help: "Skip all certificate verification when connecting to the Plex server",
Advanced: true, Advanced: true,
}, { }, {
Name: "chunk_size", Name: "chunk_size",
@@ -100,18 +98,18 @@ changed, any downloaded chunks will be invalid and cache-chunk-path
will need to be cleared or unexpected EOF errors will occur.`, will need to be cleared or unexpected EOF errors will occur.`,
Default: DefCacheChunkSize, Default: DefCacheChunkSize,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "1M", Value: "1m",
Help: "1 MiB", Help: "1MB",
}, { }, {
Value: "5M", Value: "5M",
Help: "5 MiB", Help: "5 MB",
}, { }, {
Value: "10M", Value: "10M",
Help: "10 MiB", Help: "10 MB",
}}, }},
}, { }, {
Name: "info_age", Name: "info_age",
Help: `How long to cache file structure information (directory listings, file size, times, etc.). Help: `How long to cache file structure information (directory listings, file size, times etc).
If all write operations are done through the cache then you can safely make If all write operations are done through the cache then you can safely make
this value very large as the cache store will also be updated in real time.`, this value very large as the cache store will also be updated in real time.`,
Default: DefCacheInfoAge, Default: DefCacheInfoAge,
@@ -134,22 +132,22 @@ oldest chunks until it goes under this value.`,
Default: DefCacheTotalChunkSize, Default: DefCacheTotalChunkSize,
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "500M", Value: "500M",
Help: "500 MiB", Help: "500 MB",
}, { }, {
Value: "1G", Value: "1G",
Help: "1 GiB", Help: "1 GB",
}, { }, {
Value: "10G", Value: "10G",
Help: "10 GiB", Help: "10 GB",
}}, }},
}, { }, {
Name: "db_path", Name: "db_path",
Default: filepath.Join(config.GetCacheDir(), "cache-backend"), Default: filepath.Join(config.CacheDir, "cache-backend"),
Help: "Directory to store file structure metadata DB.\n\nThe remote name is used as the DB file name.", Help: "Directory to store file structure metadata DB.\nThe remote name is used as the DB file name.",
Advanced: true, Advanced: true,
}, { }, {
Name: "chunk_path", Name: "chunk_path",
Default: filepath.Join(config.GetCacheDir(), "cache-backend"), Default: filepath.Join(config.CacheDir, "cache-backend"),
Help: `Directory to cache chunk files. Help: `Directory to cache chunk files.
Path to where partial file data (chunks) are stored locally. The remote Path to where partial file data (chunks) are stored locally. The remote
@@ -169,7 +167,6 @@ then "--cache-chunk-path" will use the same path as "--cache-db-path".`,
Name: "chunk_clean_interval", Name: "chunk_clean_interval",
Default: DefCacheChunkCleanInterval, Default: DefCacheChunkCleanInterval,
Help: `How often should the cache perform cleanups of the chunk storage. Help: `How often should the cache perform cleanups of the chunk storage.
The default value should be ok for most people. If you find that the The default value should be ok for most people. If you find that the
cache goes over "cache-chunk-total-size" too often then try to lower cache goes over "cache-chunk-total-size" too often then try to lower
this value to force it to perform cleanups more often.`, this value to force it to perform cleanups more often.`,
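
The option help changes above ("5 MB" to "5 MiB" and similar) track the fact that rclone size suffixes such as "5M" are binary multiples. A short, standalone arithmetic sketch of the difference the relabelling is about:

package main

import "fmt"

func main() {
	const (
		MB  = 1000 * 1000 // SI megabyte
		MiB = 1024 * 1024 // binary mebibyte, which is what a "5M" size suffix denotes
	)
	fmt.Println(5*MiB, 5*MB, 5*MiB-5*MB) // 5242880 5000000 242880
}
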
@@ -223,7 +220,7 @@ available on the local machine.`,
}, { }, {
Name: "rps", Name: "rps",
Default: int(DefCacheRps), Default: int(DefCacheRps),
Help: `Limits the number of requests per second to the source FS (-1 to disable). Help: `Limits the number of requests per second to the source FS (-1 to disable)
This setting places a hard limit on the number of requests per second This setting places a hard limit on the number of requests per second
that cache will be doing to the cloud provider remote and try to that cache will be doing to the cloud provider remote and try to
@@ -244,7 +241,7 @@ still pass.`,
}, { }, {
Name: "writes", Name: "writes",
Default: DefCacheWrites, Default: DefCacheWrites,
Help: `Cache file data on writes through the FS. Help: `Cache file data on writes through the FS
If you need to read files immediately after you upload them through If you need to read files immediately after you upload them through
cache you can enable this flag to have their data stored in the cache you can enable this flag to have their data stored in the
@@ -265,7 +262,7 @@ provider`,
}, { }, {
Name: "tmp_wait_time", Name: "tmp_wait_time",
Default: DefCacheTmpWaitTime, Default: DefCacheTmpWaitTime,
Help: `How long should files be stored in local cache before being uploaded. Help: `How long should files be stored in local cache before being uploaded
This is the duration that a file must wait in the temporary location This is the duration that a file must wait in the temporary location
_cache-tmp-upload-path_ before it is selected for upload. _cache-tmp-upload-path_ before it is selected for upload.
@@ -276,7 +273,7 @@ to start the upload if a queue formed for this purpose.`,
}, { }, {
Name: "db_wait_time", Name: "db_wait_time",
Default: DefCacheDbWaitTime, Default: DefCacheDbWaitTime,
Help: `How long to wait for the DB to be available - 0 is unlimited. Help: `How long to wait for the DB to be available - 0 is unlimited
Only one process can have the DB open at any one time, so rclone waits Only one process can have the DB open at any one time, so rclone waits
for this duration for the DB to become available before it gives an for this duration for the DB to become available before it gives an
@@ -342,14 +339,8 @@ func parseRootPath(path string) (string, error) {
return strings.Trim(path, "/"), nil return strings.Trim(path, "/"), nil
} }
var warnDeprecated sync.Once
// NewFs constructs an Fs from the path, container:path // NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, rootPath string, m configmap.Mapper) (fs.Fs, error) {
warnDeprecated.Do(func() {
fs.Logf(nil, "WARNING: Cache backend is deprecated and may be removed in future. Please use VFS instead.")
})
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
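
One side of this hunk adds a package-level sync.Once so the cache-deprecation warning is logged a single time even if several cache remotes are constructed in one process, and threads a context.Context into NewFs. A self-contained sketch of the once-only warning pattern (names are illustrative):

package main

import (
	"log"
	"sync"
)

var warnDeprecated sync.Once

func newFs(name string) {
	// Do runs the function at most once for the lifetime of the process.
	warnDeprecated.Do(func() {
		log.Printf("WARNING: %s backend is deprecated, please use VFS instead", name)
	})
}

func main() {
	newFs("cache")
	newFs("cache") // second call logs nothing
}
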
@@ -357,7 +348,7 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
return nil, err return nil, err
} }
if opt.ChunkTotalSize < opt.ChunkSize*fs.SizeSuffix(opt.TotalWorkers) { if opt.ChunkTotalSize < opt.ChunkSize*fs.SizeSuffix(opt.TotalWorkers) {
return nil, fmt.Errorf("don't set cache-chunk-total-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)", return nil, errors.Errorf("don't set cache-chunk-total-size(%v) less than cache-chunk-size(%v) * cache-workers(%v)",
opt.ChunkTotalSize, opt.ChunkSize, opt.TotalWorkers) opt.ChunkTotalSize, opt.ChunkSize, opt.TotalWorkers)
} }
@@ -367,13 +358,18 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
rpath, err := parseRootPath(rootPath) rpath, err := parseRootPath(rootPath)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to clean root path %q: %w", rootPath, err) return nil, errors.Wrapf(err, "failed to clean root path %q", rootPath)
} }
remotePath := fspath.JoinRootPath(opt.Remote, rootPath) wInfo, wName, wPath, wConfig, err := fs.ConfigFs(opt.Remote)
wrappedFs, wrapErr := cache.Get(ctx, remotePath) if err != nil {
return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", opt.Remote)
}
remotePath := fspath.JoinRootPath(wPath, rootPath)
wrappedFs, wrapErr := wInfo.NewFs(wName, remotePath, wConfig)
if wrapErr != nil && wrapErr != fs.ErrorIsFile { if wrapErr != nil && wrapErr != fs.ErrorIsFile {
return nil, fmt.Errorf("failed to make remote %q to wrap: %w", remotePath, wrapErr) return nil, errors.Wrapf(wrapErr, "failed to make remote %s:%s to wrap", wName, remotePath)
} }
var fsErr error var fsErr error
fs.Debugf(name, "wrapped %v:%v at root %v", wrappedFs.Name(), wrappedFs.Root(), rpath) fs.Debugf(name, "wrapped %v:%v at root %v", wrappedFs.Name(), wrappedFs.Root(), rpath)
@@ -394,19 +390,14 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
cleanupChan: make(chan bool, 1), cleanupChan: make(chan bool, 1),
notifiedRemotes: make(map[string]bool), notifiedRemotes: make(map[string]bool),
} }
cache.PinUntilFinalized(f.Fs, f) f.rateLimiter = rate.NewLimiter(rate.Limit(float64(opt.Rps)), opt.TotalWorkers)
rps := rate.Inf
if opt.Rps > 0 {
rps = rate.Limit(float64(opt.Rps))
}
f.rateLimiter = rate.NewLimiter(rps, opt.TotalWorkers)
f.plexConnector = &plexConnector{} f.plexConnector = &plexConnector{}
if opt.PlexURL != "" { if opt.PlexURL != "" {
if opt.PlexToken != "" { if opt.PlexToken != "" {
f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken, opt.PlexInsecure) f.plexConnector, err = newPlexConnectorWithToken(f, opt.PlexURL, opt.PlexToken, opt.PlexInsecure)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err) return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
} }
} else { } else {
if opt.PlexPassword != "" && opt.PlexUsername != "" { if opt.PlexPassword != "" && opt.PlexUsername != "" {
@@ -418,7 +409,7 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
m.Set("plex_token", token) m.Set("plex_token", token)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to connect to the Plex API %v: %w", opt.PlexURL, err) return nil, errors.Wrapf(err, "failed to connect to the Plex API %v", opt.PlexURL)
} }
} }
} }
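
The rate-limiter hunk above guards against a non-positive rps setting: instead of feeding it straight into rate.NewLimiter (a zero rate never refills tokens), it falls back to rate.Inf, which disables limiting. A standalone sketch of that guard using golang.org/x/time/rate:

package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

// newLimiter mirrors the guard above: rps <= 0 means "no limit" via rate.Inf.
func newLimiter(rps, burst int) *rate.Limiter {
	limit := rate.Inf
	if rps > 0 {
		limit = rate.Limit(float64(rps))
	}
	return rate.NewLimiter(limit, burst)
}

func main() {
	l := newLimiter(-1, 4)
	_ = l.Wait(context.Background()) // returns immediately, the limiter is effectively off
	fmt.Println("request allowed")
}
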
@@ -427,8 +418,8 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
dbPath := f.opt.DbPath dbPath := f.opt.DbPath
chunkPath := f.opt.ChunkPath chunkPath := f.opt.ChunkPath
// if the dbPath is non default but the chunk path is default, we overwrite the last to follow the same one as dbPath // if the dbPath is non default but the chunk path is default, we overwrite the last to follow the same one as dbPath
if dbPath != filepath.Join(config.GetCacheDir(), "cache-backend") && if dbPath != filepath.Join(config.CacheDir, "cache-backend") &&
chunkPath == filepath.Join(config.GetCacheDir(), "cache-backend") { chunkPath == filepath.Join(config.CacheDir, "cache-backend") {
chunkPath = dbPath chunkPath = dbPath
} }
if filepath.Ext(dbPath) != "" { if filepath.Ext(dbPath) != "" {
@@ -439,11 +430,11 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
} }
err = os.MkdirAll(dbPath, os.ModePerm) err = os.MkdirAll(dbPath, os.ModePerm)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to create cache directory %v: %w", dbPath, err) return nil, errors.Wrapf(err, "failed to create cache directory %v", dbPath)
} }
err = os.MkdirAll(chunkPath, os.ModePerm) err = os.MkdirAll(chunkPath, os.ModePerm)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to create cache directory %v: %w", chunkPath, err) return nil, errors.Wrapf(err, "failed to create cache directory %v", chunkPath)
} }
dbPath = filepath.Join(dbPath, name+".db") dbPath = filepath.Join(dbPath, name+".db")
@@ -455,7 +446,7 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
DbWaitTime: time.Duration(opt.DbWaitTime), DbWaitTime: time.Duration(opt.DbWaitTime),
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to start cache db: %w", err) return nil, errors.Wrapf(err, "failed to start cache db")
} }
// Trap SIGINT and SIGTERM to close the DB handle gracefully // Trap SIGINT and SIGTERM to close the DB handle gracefully
c := make(chan os.Signal, 1) c := make(chan os.Signal, 1)
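
The comment at the end of this hunk refers to trapping SIGINT and SIGTERM so the database handle can be closed before the process exits. A minimal sketch of that signal-trap shape; closeDB here is only a stand-in for the real cleanup:

package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	closeDB := func() { log.Println("closing db handle") } // stand-in for the real cleanup

	c := make(chan os.Signal, 1)
	signal.Notify(c, syscall.SIGINT, syscall.SIGTERM)
	go func() {
		s := <-c
		log.Printf("got %v, shutting down cleanly", s)
		closeDB()
		os.Exit(0)
	}()

	select {} // block forever; a real backend keeps serving requests here
}
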
@@ -489,12 +480,12 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if f.opt.TempWritePath != "" { if f.opt.TempWritePath != "" {
err = os.MkdirAll(f.opt.TempWritePath, os.ModePerm) err = os.MkdirAll(f.opt.TempWritePath, os.ModePerm)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to create cache directory %v: %w", f.opt.TempWritePath, err) return nil, errors.Wrapf(err, "failed to create cache directory %v", f.opt.TempWritePath)
} }
f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath) f.opt.TempWritePath = filepath.ToSlash(f.opt.TempWritePath)
f.tempFs, err = cache.Get(ctx, f.opt.TempWritePath) f.tempFs, err = cache.Get(f.opt.TempWritePath)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to create temp fs: %w", err) return nil, errors.Wrapf(err, "failed to create temp fs: %v", err)
} }
fs.Infof(name, "Upload Temp Rest Time: %v", f.opt.TempWaitTime) fs.Infof(name, "Upload Temp Rest Time: %v", f.opt.TempWaitTime)
fs.Infof(name, "Upload Temp FS: %v", f.opt.TempWritePath) fs.Infof(name, "Upload Temp FS: %v", f.opt.TempWritePath)
@@ -519,13 +510,13 @@ func NewFs(ctx context.Context, name, rootPath string, m configmap.Mapper) (fs.F
if doChangeNotify := wrappedFs.Features().ChangeNotify; doChangeNotify != nil { if doChangeNotify := wrappedFs.Features().ChangeNotify; doChangeNotify != nil {
pollInterval := make(chan time.Duration, 1) pollInterval := make(chan time.Duration, 1)
pollInterval <- time.Duration(f.opt.ChunkCleanInterval) pollInterval <- time.Duration(f.opt.ChunkCleanInterval)
doChangeNotify(ctx, f.receiveChangeNotify, pollInterval) doChangeNotify(context.Background(), f.receiveChangeNotify, pollInterval)
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
DuplicateFiles: false, // storage doesn't permit this DuplicateFiles: false, // storage doesn't permit this
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs) }).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
// override only those features that use a temp fs and it doesn't support them // override only those features that use a temp fs and it doesn't support them
//f.features.ChangeNotify = f.ChangeNotify //f.features.ChangeNotify = f.ChangeNotify
if f.opt.TempWritePath != "" { if f.opt.TempWritePath != "" {
@@ -594,7 +585,7 @@ Some valid examples are:
"0:10" -> the first ten chunks "0:10" -> the first ten chunks
Any parameter with a key that starts with "file" can be used to Any parameter with a key that starts with "file" can be used to
specify files to fetch, e.g. specify files to fetch, eg
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
@@ -611,7 +602,7 @@ func (f *Fs) httpStats(ctx context.Context, in rc.Params) (out rc.Params, err er
out = make(rc.Params) out = make(rc.Params)
m, err := f.Stats() m, err := f.Stats()
if err != nil { if err != nil {
return out, fmt.Errorf("error while getting cache stats") return out, errors.Errorf("error while getting cache stats")
} }
out["status"] = "ok" out["status"] = "ok"
out["stats"] = m out["stats"] = m
@@ -638,7 +629,7 @@ func (f *Fs) httpExpireRemote(ctx context.Context, in rc.Params) (out rc.Params,
out = make(rc.Params) out = make(rc.Params)
remoteInt, ok := in["remote"] remoteInt, ok := in["remote"]
if !ok { if !ok {
return out, fmt.Errorf("remote is needed") return out, errors.Errorf("remote is needed")
} }
remote := remoteInt.(string) remote := remoteInt.(string)
withData := false withData := false
@@ -649,7 +640,7 @@ func (f *Fs) httpExpireRemote(ctx context.Context, in rc.Params) (out rc.Params,
remote = f.unwrapRemote(remote) remote = f.unwrapRemote(remote)
if !f.cache.HasEntry(path.Join(f.Root(), remote)) { if !f.cache.HasEntry(path.Join(f.Root(), remote)) {
return out, fmt.Errorf("%s doesn't exist in cache", remote) return out, errors.Errorf("%s doesn't exist in cache", remote)
} }
co := NewObject(f, remote) co := NewObject(f, remote)
@@ -658,7 +649,7 @@ func (f *Fs) httpExpireRemote(ctx context.Context, in rc.Params) (out rc.Params,
cd := NewDirectory(f, remote) cd := NewDirectory(f, remote)
err := f.cache.ExpireDir(cd) err := f.cache.ExpireDir(cd)
if err != nil { if err != nil {
return out, fmt.Errorf("error expiring directory: %w", err) return out, errors.WithMessage(err, "error expiring directory")
} }
// notify vfs too // notify vfs too
f.notifyChangeUpstream(cd.Remote(), fs.EntryDirectory) f.notifyChangeUpstream(cd.Remote(), fs.EntryDirectory)
@@ -669,7 +660,7 @@ func (f *Fs) httpExpireRemote(ctx context.Context, in rc.Params) (out rc.Params,
// expire the entry // expire the entry
err = f.cache.ExpireObject(co, withData) err = f.cache.ExpireObject(co, withData)
if err != nil { if err != nil {
return out, fmt.Errorf("error expiring file: %w", err) return out, errors.WithMessage(err, "error expiring file")
} }
// notify vfs too // notify vfs too
f.notifyChangeUpstream(co.Remote(), fs.EntryObject) f.notifyChangeUpstream(co.Remote(), fs.EntryObject)
@@ -690,24 +681,24 @@ func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) {
case 1: case 1:
start, err = strconv.ParseInt(ints[0], 10, 64) start, err = strconv.ParseInt(ints[0], 10, 64)
if err != nil { if err != nil {
return nil, fmt.Errorf("invalid range: %q", part) return nil, errors.Errorf("invalid range: %q", part)
} }
end = start + 1 end = start + 1
case 2: case 2:
if ints[0] != "" { if ints[0] != "" {
start, err = strconv.ParseInt(ints[0], 10, 64) start, err = strconv.ParseInt(ints[0], 10, 64)
if err != nil { if err != nil {
return nil, fmt.Errorf("invalid range: %q", part) return nil, errors.Errorf("invalid range: %q", part)
} }
} }
if ints[1] != "" { if ints[1] != "" {
end, err = strconv.ParseInt(ints[1], 10, 64) end, err = strconv.ParseInt(ints[1], 10, 64)
if err != nil { if err != nil {
return nil, fmt.Errorf("invalid range: %q", part) return nil, errors.Errorf("invalid range: %q", part)
} }
} }
default: default:
return nil, fmt.Errorf("invalid range: %q", part) return nil, errors.Errorf("invalid range: %q", part)
} }
crs = append(crs, chunkRange{start: start, end: end}) crs = append(crs, chunkRange{start: start, end: end})
} }
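
The parsing above accepts chunk specifications such as "3", "2:5", ":5" or "2:" (the command help earlier in the file lists the accepted forms). A simplified, standalone version of that range parser, written with the newer side's error style:

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
)

type chunkRange struct{ start, end int64 }

// parseRange handles one "chunks=" item: "3" is a single chunk, "2:5" a
// half-open range, and an empty side leaves that bound open.
func parseRange(part string) (chunkRange, error) {
	cr := chunkRange{start: 0, end: math.MaxInt64}
	ints := strings.Split(part, ":")
	var err error
	switch len(ints) {
	case 1:
		if cr.start, err = strconv.ParseInt(ints[0], 10, 64); err != nil {
			return cr, fmt.Errorf("invalid range: %q", part)
		}
		cr.end = cr.start + 1
	case 2:
		if ints[0] != "" {
			if cr.start, err = strconv.ParseInt(ints[0], 10, 64); err != nil {
				return cr, fmt.Errorf("invalid range: %q", part)
			}
		}
		if ints[1] != "" {
			if cr.end, err = strconv.ParseInt(ints[1], 10, 64); err != nil {
				return cr, fmt.Errorf("invalid range: %q", part)
			}
		}
	default:
		return cr, fmt.Errorf("invalid range: %q", part)
	}
	return cr, nil
}

func main() {
	fmt.Println(parseRange("0:10"))
	fmt.Println(parseRange("3"))
}
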
@@ -762,18 +753,18 @@ func (f *Fs) rcFetch(ctx context.Context, in rc.Params) (rc.Params, error) {
delete(in, "chunks") delete(in, "chunks")
crs, err := parseChunks(s) crs, err := parseChunks(s)
if err != nil { if err != nil {
return nil, fmt.Errorf("invalid chunks parameter: %w", err) return nil, errors.Wrap(err, "invalid chunks parameter")
} }
var files [][2]string var files [][2]string
for k, v := range in { for k, v := range in {
if !strings.HasPrefix(k, "file") { if !strings.HasPrefix(k, "file") {
return nil, fmt.Errorf("invalid parameter %s=%s", k, v) return nil, errors.Errorf("invalid parameter %s=%s", k, v)
} }
switch v := v.(type) { switch v := v.(type) {
case string: case string:
files = append(files, [2]string{v, f.unwrapRemote(v)}) files = append(files, [2]string{v, f.unwrapRemote(v)})
default: default:
return nil, fmt.Errorf("invalid parameter %s=%s", k, v) return nil, errors.Errorf("invalid parameter %s=%s", k, v)
} }
} }
type fileStatus struct { type fileStatus struct {
@@ -1038,7 +1029,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
} }
fs.Debugf(dir, "list: remove entry: %v", entryRemote) fs.Debugf(dir, "list: remove entry: %v", entryRemote)
} }
entries = nil //nolint:ineffassign entries = nil
// and then iterate over the ones from source (temp Objects will override source ones) // and then iterate over the ones from source (temp Objects will override source ones)
var batchDirectories []*Directory var batchDirectories []*Directory
@@ -1129,7 +1120,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
case fs.Directory: case fs.Directory:
_ = f.cache.AddDir(DirectoryFromOriginal(ctx, f, o)) _ = f.cache.AddDir(DirectoryFromOriginal(ctx, f, o))
default: default:
return fmt.Errorf("unknown object type %T", entry) return errors.Errorf("Unknown object type %T", entry)
} }
} }
@@ -1249,7 +1240,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
} }
// DirMove moves src, srcRemote to this remote at dstRemote // DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations. // using server side move operations.
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error { func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
fs.Debugf(f, "move dir '%s'/'%s' -> '%s'/'%s'", src.Root(), srcRemote, f.Root(), dstRemote) fs.Debugf(f, "move dir '%s'/'%s' -> '%s'/'%s'", src.Root(), srcRemote, f.Root(), dstRemote)
@@ -1530,7 +1521,7 @@ func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, opt
return f.put(ctx, in, src, options, do) return f.put(ctx, in, src, options, do)
} }
// Copy src to this remote using server-side copy operations. // Copy src to this remote using server side copy operations.
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
fs.Debugf(f, "copy obj '%s' -> '%s'", src, remote) fs.Debugf(f, "copy obj '%s' -> '%s'", src, remote)
@@ -1607,7 +1598,7 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return co, nil return co, nil
} }
// Move src to this remote using server-side move operations. // Move src to this remote using server side move operations.
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
fs.Debugf(f, "moving obj '%s' -> %s", src, remote) fs.Debugf(f, "moving obj '%s' -> %s", src, remote)
@@ -1711,20 +1702,17 @@ func (f *Fs) Hashes() hash.Set {
return f.Fs.Hashes() return f.Fs.Hashes()
} }
// Purge all files in the directory // Purge all files in the root and the root directory
func (f *Fs) Purge(ctx context.Context, dir string) error { func (f *Fs) Purge(ctx context.Context) error {
if dir == "" { fs.Infof(f, "purging cache")
// FIXME this isn't quite right as it should purge the dir prefix f.cache.Purge()
fs.Infof(f, "purging cache")
f.cache.Purge()
}
do := f.Fs.Features().Purge do := f.Fs.Features().Purge
if do == nil { if do == nil {
return fs.ErrorCantPurge return nil
} }
err := do(ctx, dir) err := do(ctx)
if err != nil { if err != nil {
return err return err
} }
@@ -1748,7 +1736,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
do := f.Fs.Features().About do := f.Fs.Features().About
if do == nil { if do == nil {
return nil, errors.New("not supported by underlying remote") return nil, errors.New("About not supported")
} }
return do(ctx) return do(ctx)
} }
@@ -1908,16 +1896,6 @@ func (f *Fs) Disconnect(ctx context.Context) error {
return do(ctx) return do(ctx)
} }
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
do := f.Fs.Features().Shutdown
if do == nil {
return nil
}
return do(ctx)
}
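
One side of this hunk adds a Shutdown method that only forwards to the wrapped backend when that backend actually implements the optional feature. Stripped of rclone's Features plumbing, this is the usual optional-interface delegation; a generic sketch with illustrative types:

package main

import "fmt"

// Shutdowner is an optional interface a wrapped value may or may not implement.
type Shutdowner interface{ Shutdown() error }

type wrapper struct{ inner interface{} }

// Shutdown forwards only when the wrapped value supports it, otherwise it is a
// no-op, matching the "do := Features().Shutdown; if do == nil" shape above.
func (w *wrapper) Shutdown() error {
	if s, ok := w.inner.(Shutdowner); ok {
		return s.Shutdown()
	}
	return nil
}

type db struct{}

func (db) Shutdown() error { fmt.Println("db closed"); return nil }

func main() {
	_ = (&wrapper{inner: db{}}).Shutdown()    // prints "db closed"
	_ = (&wrapper{inner: "plain"}).Shutdown() // silently does nothing
}
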
var commandHelp = []fs.CommandHelp{ var commandHelp = []fs.CommandHelp{
{ {
Name: "stats", Name: "stats",
@@ -1962,5 +1940,4 @@ var (
_ fs.Disconnecter = (*Fs)(nil) _ fs.Disconnecter = (*Fs)(nil)
_ fs.Commander = (*Fs)(nil) _ fs.Commander = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil) _ fs.MergeDirser = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
) )


@@ -1,5 +1,5 @@
//go:build !plan9 && !js && !race // +build !plan9
// +build !plan9,!js,!race // +build !race
package cache_test package cache_test
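
The hunk above swaps build-constraint styles: one side carries the //go:build form introduced in Go 1.17 (here !plan9 && !js && !race) together with the legacy // +build comments, the other only the legacy form. A tiny sketch of how the two spellings correspond (package name illustrative):

//go:build !plan9 && !js && !race
// +build !plan9,!js,!race

// Package example is skipped on plan9, js, and race-enabled builds; when both
// forms are present they must express the same constraint, and gofmt keeps
// them in sync on Go 1.17+.
package example
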
@@ -7,10 +7,10 @@ import (
"bytes" "bytes"
"context" "context"
"encoding/base64" "encoding/base64"
"errors"
goflag "flag" goflag "flag"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"log" "log"
"math/rand" "math/rand"
"os" "os"
@@ -22,6 +22,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/cache" "github.com/rclone/rclone/backend/cache"
"github.com/rclone/rclone/backend/crypt" "github.com/rclone/rclone/backend/crypt"
_ "github.com/rclone/rclone/backend/drive" _ "github.com/rclone/rclone/backend/drive"
@@ -30,10 +31,13 @@ import (
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object" "github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/rc"
"github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/testy" "github.com/rclone/rclone/fstest/testy"
"github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/random"
"github.com/rclone/rclone/vfs"
"github.com/rclone/rclone/vfs/vfsflags" "github.com/rclone/rclone/vfs/vfsflags"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@@ -49,7 +53,9 @@ const (
var ( var (
remoteName string remoteName string
mountDir string
uploadDir string uploadDir string
useMount bool
runInstance *run runInstance *run
errNotSupported = errors.New("not supported") errNotSupported = errors.New("not supported")
decryptedToEncryptedRemotes = map[string]string{ decryptedToEncryptedRemotes = map[string]string{
@@ -85,7 +91,9 @@ var (
func init() { func init() {
goflag.StringVar(&remoteName, "remote-internal", "TestInternalCache", "Remote to test with, defaults to local filesystem") goflag.StringVar(&remoteName, "remote-internal", "TestInternalCache", "Remote to test with, defaults to local filesystem")
goflag.StringVar(&mountDir, "mount-dir-internal", "", "")
goflag.StringVar(&uploadDir, "upload-dir-internal", "", "") goflag.StringVar(&uploadDir, "upload-dir-internal", "", "")
goflag.BoolVar(&useMount, "cache-use-mount", false, "Test only with mount")
} }
// TestMain drives the tests // TestMain drives the tests
@@ -93,7 +101,7 @@ func TestMain(m *testing.M) {
goflag.Parse() goflag.Parse()
var rc int var rc int
log.Printf("Running with the following params: \n remote: %v", remoteName) log.Printf("Running with the following params: \n remote: %v, \n mount: %v", remoteName, useMount)
runInstance = newRun() runInstance = newRun()
rc = m.Run() rc = m.Run()
os.Exit(rc) os.Exit(rc)
@@ -101,12 +109,14 @@ func TestMain(m *testing.M) {
func TestInternalListRootAndInnerRemotes(t *testing.T) { func TestInternalListRootAndInnerRemotes(t *testing.T) {
id := fmt.Sprintf("tilrair%v", time.Now().Unix()) id := fmt.Sprintf("tilrair%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
// Instantiate inner fs // Instantiate inner fs
innerFolder := "inner" innerFolder := "inner"
runInstance.mkdir(t, rootFs, innerFolder) runInstance.mkdir(t, rootFs, innerFolder)
rootFs2, _ := runInstance.newCacheFs(t, remoteName, id+"/"+innerFolder, true, true, nil) rootFs2, boltDb2 := runInstance.newCacheFs(t, remoteName, id+"/"+innerFolder, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs2, boltDb2)
runInstance.writeObjectString(t, rootFs2, "one", "content") runInstance.writeObjectString(t, rootFs2, "one", "content")
listRoot, err := runInstance.list(t, rootFs, "") listRoot, err := runInstance.list(t, rootFs, "")
@@ -164,7 +174,7 @@ func TestInternalVfsCache(t *testing.T) {
li2 := [2]string{path.Join("test", "one"), path.Join("test", "second")} li2 := [2]string{path.Join("test", "one"), path.Join("test", "second")}
for _, r := range li2 { for _, r := range li2 {
var err error var err error
ci, err := os.ReadDir(path.Join(runInstance.chunkPath, runInstance.encryptRemoteIfNeeded(t, path.Join(id, r)))) ci, err := ioutil.ReadDir(path.Join(runInstance.chunkPath, runInstance.encryptRemoteIfNeeded(t, path.Join(id, r))))
if err != nil || len(ci) == 0 { if err != nil || len(ci) == 0 {
log.Printf("========== '%v' not in cache", r) log.Printf("========== '%v' not in cache", r)
} else { } else {
@@ -223,7 +233,8 @@ func TestInternalVfsCache(t *testing.T) {
func TestInternalObjWrapFsFound(t *testing.T) { func TestInternalObjWrapFsFound(t *testing.T) {
id := fmt.Sprintf("tiowff%v", time.Now().Unix()) id := fmt.Sprintf("tiowff%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs) cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err) require.NoError(t, err)
@@ -255,17 +266,44 @@ func TestInternalObjWrapFsFound(t *testing.T) {
func TestInternalObjNotFound(t *testing.T) { func TestInternalObjNotFound(t *testing.T) {
id := fmt.Sprintf("tionf%v", time.Now().Unix()) id := fmt.Sprintf("tionf%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
obj, err := rootFs.NewObject(context.Background(), "404") obj, err := rootFs.NewObject(context.Background(), "404")
require.Error(t, err) require.Error(t, err)
require.Nil(t, obj) require.Nil(t, obj)
} }
func TestInternalRemoteWrittenFileFoundInMount(t *testing.T) {
if !runInstance.useMount {
t.Skip("test needs mount mode")
}
id := fmt.Sprintf("tirwffim%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
var testData []byte
if runInstance.rootIsCrypt {
testData, err = base64.StdEncoding.DecodeString(cryptedTextBase64)
require.NoError(t, err)
} else {
testData = []byte("test content")
}
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test"), testData)
data, err := runInstance.readDataFromRemote(t, rootFs, "test", 0, int64(len([]byte("test content"))), false)
require.NoError(t, err)
require.Equal(t, "test content", string(data))
}
func TestInternalCachedWrittenContentMatches(t *testing.T) { func TestInternalCachedWrittenContentMatches(t *testing.T) {
testy.SkipUnreliable(t) testy.SkipUnreliable(t)
id := fmt.Sprintf("ticwcm%v", time.Now().Unix()) id := fmt.Sprintf("ticwcm%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs) cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err) require.NoError(t, err)
@@ -288,11 +326,9 @@ func TestInternalCachedWrittenContentMatches(t *testing.T) {
} }
func TestInternalDoubleWrittenContentMatches(t *testing.T) { func TestInternalDoubleWrittenContentMatches(t *testing.T) {
if runtime.GOOS == "windows" && runtime.GOARCH == "386" {
t.Skip("Skip test on windows/386")
}
id := fmt.Sprintf("tidwcm%v", time.Now().Unix()) id := fmt.Sprintf("tidwcm%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
// write the object // write the object
runInstance.writeRemoteString(t, rootFs, "one", "one content") runInstance.writeRemoteString(t, rootFs, "one", "one content")
@@ -310,7 +346,8 @@ func TestInternalDoubleWrittenContentMatches(t *testing.T) {
func TestInternalCachedUpdatedContentMatches(t *testing.T) { func TestInternalCachedUpdatedContentMatches(t *testing.T) {
testy.SkipUnreliable(t) testy.SkipUnreliable(t)
id := fmt.Sprintf("ticucm%v", time.Now().Unix()) id := fmt.Sprintf("ticucm%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
var err error var err error
// create some rand test data // create some rand test data
@@ -339,7 +376,8 @@ func TestInternalCachedUpdatedContentMatches(t *testing.T) {
func TestInternalWrappedWrittenContentMatches(t *testing.T) { func TestInternalWrappedWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tiwwcm%v", time.Now().Unix()) id := fmt.Sprintf("tiwwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second vfsflags.Opt.DirCacheTime = time.Second
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if runInstance.rootIsCrypt { if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote") t.Skip("test skipped with crypt remote")
} }
@@ -369,7 +407,8 @@ func TestInternalWrappedWrittenContentMatches(t *testing.T) {
func TestInternalLargeWrittenContentMatches(t *testing.T) { func TestInternalLargeWrittenContentMatches(t *testing.T) {
id := fmt.Sprintf("tilwcm%v", time.Now().Unix()) id := fmt.Sprintf("tilwcm%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second vfsflags.Opt.DirCacheTime = time.Second
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if runInstance.rootIsCrypt { if runInstance.rootIsCrypt {
t.Skip("test skipped with crypt remote") t.Skip("test skipped with crypt remote")
} }
@@ -395,7 +434,8 @@ func TestInternalLargeWrittenContentMatches(t *testing.T) {
func TestInternalWrappedFsChangeNotSeen(t *testing.T) { func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
id := fmt.Sprintf("tiwfcns%v", time.Now().Unix()) id := fmt.Sprintf("tiwfcns%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs) cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err) require.NoError(t, err)
@@ -435,7 +475,7 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
return err return err
} }
if coSize != expectedSize { if coSize != expectedSize {
return fmt.Errorf("%v <> %v", coSize, expectedSize) return errors.Errorf("%v <> %v", coSize, expectedSize)
} }
return nil return nil
}, 12, time.Second*10) }, 12, time.Second*10)
@@ -449,7 +489,8 @@ func TestInternalWrappedFsChangeNotSeen(t *testing.T) {
func TestInternalMoveWithNotify(t *testing.T) { func TestInternalMoveWithNotify(t *testing.T) {
id := fmt.Sprintf("timwn%v", time.Now().Unix()) id := fmt.Sprintf("timwn%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.wrappedIsExternal { if !runInstance.wrappedIsExternal {
t.Skipf("Not external") t.Skipf("Not external")
} }
@@ -490,7 +531,7 @@ func TestInternalMoveWithNotify(t *testing.T) {
} }
if len(li) != 2 { if len(li) != 2 {
log.Printf("not expected listing /test: %v", li) log.Printf("not expected listing /test: %v", li)
return fmt.Errorf("not expected listing /test: %v", li) return errors.Errorf("not expected listing /test: %v", li)
} }
li, err = runInstance.list(t, rootFs, "test/one") li, err = runInstance.list(t, rootFs, "test/one")
@@ -500,7 +541,7 @@ func TestInternalMoveWithNotify(t *testing.T) {
} }
if len(li) != 0 { if len(li) != 0 {
log.Printf("not expected listing /test/one: %v", li) log.Printf("not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li) return errors.Errorf("not expected listing /test/one: %v", li)
} }
li, err = runInstance.list(t, rootFs, "test/second") li, err = runInstance.list(t, rootFs, "test/second")
@@ -510,21 +551,21 @@ func TestInternalMoveWithNotify(t *testing.T) {
} }
if len(li) != 1 { if len(li) != 1 {
log.Printf("not expected listing /test/second: %v", li) log.Printf("not expected listing /test/second: %v", li)
return fmt.Errorf("not expected listing /test/second: %v", li) return errors.Errorf("not expected listing /test/second: %v", li)
} }
if fi, ok := li[0].(os.FileInfo); ok { if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "data.bin" { if fi.Name() != "data.bin" {
log.Printf("not expected name: %v", fi.Name()) log.Printf("not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name()) return errors.Errorf("not expected name: %v", fi.Name())
} }
} else if di, ok := li[0].(fs.DirEntry); ok { } else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/second/data.bin" { if di.Remote() != "test/second/data.bin" {
log.Printf("not expected remote: %v", di.Remote()) log.Printf("not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote()) return errors.Errorf("not expected remote: %v", di.Remote())
} }
} else { } else {
log.Printf("unexpected listing: %v", li) log.Printf("unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li) return errors.Errorf("unexpected listing: %v", li)
} }
log.Printf("complete listing: %v", li) log.Printf("complete listing: %v", li)
@@ -535,7 +576,8 @@ func TestInternalMoveWithNotify(t *testing.T) {
func TestInternalNotifyCreatesEmptyParts(t *testing.T) { func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
id := fmt.Sprintf("tincep%v", time.Now().Unix()) id := fmt.Sprintf("tincep%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.wrappedIsExternal { if !runInstance.wrappedIsExternal {
t.Skipf("Not external") t.Skipf("Not external")
} }
@@ -578,17 +620,17 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"))) found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test")))
if !found { if !found {
log.Printf("not found /test") log.Printf("not found /test")
return fmt.Errorf("not found /test") return errors.Errorf("not found /test")
} }
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"))) found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one")))
if !found { if !found {
log.Printf("not found /test/one") log.Printf("not found /test/one")
return fmt.Errorf("not found /test/one") return errors.Errorf("not found /test/one")
} }
found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2"))) found = boltDb.HasEntry(path.Join(cfs.Root(), runInstance.encryptRemoteIfNeeded(t, "test"), runInstance.encryptRemoteIfNeeded(t, "one"), runInstance.encryptRemoteIfNeeded(t, "test2")))
if !found { if !found {
log.Printf("not found /test/one/test2") log.Printf("not found /test/one/test2")
return fmt.Errorf("not found /test/one/test2") return errors.Errorf("not found /test/one/test2")
} }
li, err := runInstance.list(t, rootFs, "test/one") li, err := runInstance.list(t, rootFs, "test/one")
if err != nil { if err != nil {
@@ -597,21 +639,21 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
} }
if len(li) != 1 { if len(li) != 1 {
log.Printf("not expected listing /test/one: %v", li) log.Printf("not expected listing /test/one: %v", li)
return fmt.Errorf("not expected listing /test/one: %v", li) return errors.Errorf("not expected listing /test/one: %v", li)
} }
if fi, ok := li[0].(os.FileInfo); ok { if fi, ok := li[0].(os.FileInfo); ok {
if fi.Name() != "test2" { if fi.Name() != "test2" {
log.Printf("not expected name: %v", fi.Name()) log.Printf("not expected name: %v", fi.Name())
return fmt.Errorf("not expected name: %v", fi.Name()) return errors.Errorf("not expected name: %v", fi.Name())
} }
} else if di, ok := li[0].(fs.DirEntry); ok { } else if di, ok := li[0].(fs.DirEntry); ok {
if di.Remote() != "test/one/test2" { if di.Remote() != "test/one/test2" {
log.Printf("not expected remote: %v", di.Remote()) log.Printf("not expected remote: %v", di.Remote())
return fmt.Errorf("not expected remote: %v", di.Remote()) return errors.Errorf("not expected remote: %v", di.Remote())
} }
} else { } else {
log.Printf("unexpected listing: %v", li) log.Printf("unexpected listing: %v", li)
return fmt.Errorf("unexpected listing: %v", li) return errors.Errorf("unexpected listing: %v", li)
} }
log.Printf("complete listing /test/one/test2") log.Printf("complete listing /test/one/test2")
return nil return nil
@@ -621,7 +663,8 @@ func TestInternalNotifyCreatesEmptyParts(t *testing.T) {
func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) { func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
id := fmt.Sprintf("ticsadcf%v", time.Now().Unix()) id := fmt.Sprintf("ticsadcf%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs) cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err) require.NoError(t, err)
@@ -651,9 +694,83 @@ func TestInternalChangeSeenAfterDirCacheFlush(t *testing.T) {
require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix()) require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix())
} }
func TestInternalChangeSeenAfterRc(t *testing.T) {
cacheExpire := rc.Calls.Get("cache/expire")
assert.NotNil(t, cacheExpire)
id := fmt.Sprintf("ticsarc%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
if !runInstance.useMount {
t.Skipf("needs mount")
}
if !runInstance.wrappedIsExternal {
t.Skipf("needs drive")
}
cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err)
chunkSize := cfs.ChunkSize()
// create some rand test data
testData := randStringBytes(int(chunkSize*4 + chunkSize/2))
runInstance.writeRemoteBytes(t, rootFs, "data.bin", testData)
// update in the wrapped fs
o, err := cfs.UnWrap().NewObject(context.Background(), runInstance.encryptRemoteIfNeeded(t, "data.bin"))
require.NoError(t, err)
wrappedTime := time.Now().Add(-1 * time.Hour)
err = o.SetModTime(context.Background(), wrappedTime)
require.NoError(t, err)
// get a new instance from the cache
co, err := rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.NotEqual(t, o.ModTime(context.Background()).String(), co.ModTime(context.Background()).String())
// Call the rc function
m, err := cacheExpire.Fn(context.Background(), rc.Params{"remote": "data.bin"})
require.NoError(t, err)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
require.Contains(t, m["message"], "cached file cleared")
// get a new instance from the cache
co, err = rootFs.NewObject(context.Background(), "data.bin")
require.NoError(t, err)
require.Equal(t, wrappedTime.Unix(), co.ModTime(context.Background()).Unix())
_, err = runInstance.list(t, rootFs, "")
require.NoError(t, err)
// create some rand test data
testData2 := randStringBytes(int(chunkSize))
runInstance.writeObjectBytes(t, cfs.UnWrap(), runInstance.encryptRemoteIfNeeded(t, "test2"), testData2)
// list should have 1 item only
li1, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li1, 1)
// Call the rc function
m, err = cacheExpire.Fn(context.Background(), rc.Params{"remote": "/"})
require.NoError(t, err)
require.Contains(t, m, "status")
require.Contains(t, m, "message")
require.Equal(t, "ok", m["status"])
require.Contains(t, m["message"], "cached directory cleared")
// list should have 2 items now
li2, err := runInstance.list(t, rootFs, "")
require.NoError(t, err)
require.Len(t, li2, 2)
}
func TestInternalCacheWrites(t *testing.T) { func TestInternalCacheWrites(t *testing.T) {
id := "ticw" id := "ticw"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"writes": "true"}) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"writes": "true"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs) cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err) require.NoError(t, err)
@@ -670,11 +787,9 @@ func TestInternalCacheWrites(t *testing.T) {
} }
func TestInternalMaxChunkSizeRespected(t *testing.T) { func TestInternalMaxChunkSizeRespected(t *testing.T) {
if runtime.GOOS == "windows" && runtime.GOARCH == "386" {
t.Skip("Skip test on windows/386")
}
id := fmt.Sprintf("timcsr%v", time.Now().Unix()) id := fmt.Sprintf("timcsr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"workers": "1"}) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil, map[string]string{"workers": "1"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs) cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err) require.NoError(t, err)
@@ -709,7 +824,8 @@ func TestInternalMaxChunkSizeRespected(t *testing.T) {
func TestInternalExpiredEntriesRemoved(t *testing.T) { func TestInternalExpiredEntriesRemoved(t *testing.T) {
id := fmt.Sprintf("tieer%v", time.Now().Unix()) id := fmt.Sprintf("tieer%v", time.Now().Unix())
vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined vfsflags.Opt.DirCacheTime = time.Second * 4 // needs to be lower than the defined
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, nil) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, map[string]string{"info_age": "5s"}, nil)
defer runInstance.cleanupFs(t, rootFs, boltDb)
cfs, err := runInstance.getCacheFs(rootFs) cfs, err := runInstance.getCacheFs(rootFs)
require.NoError(t, err) require.NoError(t, err)
@@ -746,7 +862,9 @@ func TestInternalBug2117(t *testing.T) {
vfsflags.Opt.DirCacheTime = time.Second * 10 vfsflags.Opt.DirCacheTime = time.Second * 10
id := fmt.Sprintf("tib2117%v", time.Now().Unix()) id := fmt.Sprintf("tib2117%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, false, true, map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"}) rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true, nil,
map[string]string{"info_age": "72h", "chunk_clean_interval": "15m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
if runInstance.rootIsCrypt { if runInstance.rootIsCrypt {
t.Skipf("skipping crypt") t.Skipf("skipping crypt")
@@ -796,9 +914,15 @@ func TestInternalBug2117(t *testing.T) {
type run struct { type run struct {
okDiff time.Duration okDiff time.Duration
runDefaultCfgMap configmap.Simple runDefaultCfgMap configmap.Simple
mntDir string
tmpUploadDir string tmpUploadDir string
useMount bool
isMounted bool
rootIsCrypt bool rootIsCrypt bool
wrappedIsExternal bool wrappedIsExternal bool
unmountFn func() error
unmountRes chan error
vfs *vfs.VFS
tempFiles []*os.File tempFiles []*os.File
dbPath string dbPath string
chunkPath string chunkPath string
@@ -808,7 +932,9 @@ type run struct {
func newRun() *run { func newRun() *run {
var err error var err error
r := &run{ r := &run{
okDiff: time.Second * 9, // really big diff here but the build machines seem to be slow. need a different way for this okDiff: time.Second * 9, // really big diff here but the build machines seem to be slow. need a different way for this
useMount: useMount,
isMounted: false,
} }
// Read in all the defaults for all the options // Read in all the defaults for all the options
@@ -821,10 +947,36 @@ func newRun() *run {
r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default)) r.runDefaultCfgMap.Set(option.Name, fmt.Sprint(option.Default))
} }
if mountDir == "" {
if runtime.GOOS != "windows" {
r.mntDir, err = ioutil.TempDir("", "rclonecache-mount")
if err != nil {
log.Fatalf("Failed to create mount dir: %v", err)
return nil
}
} else {
// Find a free drive letter
drive := ""
for letter := 'E'; letter <= 'Z'; letter++ {
drive = string(letter) + ":"
_, err := os.Stat(drive + "\\")
if os.IsNotExist(err) {
goto found
}
}
log.Print("Couldn't find free drive letter for test")
found:
r.mntDir = drive
}
} else {
r.mntDir = mountDir
}
log.Printf("Mount Dir: %v", r.mntDir)
if uploadDir == "" { if uploadDir == "" {
r.tmpUploadDir, err = os.MkdirTemp("", "rclonecache-tmp") r.tmpUploadDir, err = ioutil.TempDir("", "rclonecache-tmp")
if err != nil { if err != nil {
panic(fmt.Sprintf("Failed to create temp dir: %v", err)) log.Fatalf("Failed to create temp dir: %v", err)
} }
} else { } else {
r.tmpUploadDir = uploadDir r.tmpUploadDir = uploadDir
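
The older side of this hunk probes drive letters E: through Z: with a goto to find a free mount point on Windows. A small sketch of the same probe written as a plain function, which avoids the label:

package main

import (
	"fmt"
	"os"
)

// freeDriveLetter returns the first of E: .. Z: that does not exist yet,
// or "" if every letter is taken.
func freeDriveLetter() string {
	for letter := 'E'; letter <= 'Z'; letter++ {
		drive := string(letter) + ":"
		if _, err := os.Stat(drive + "\\"); os.IsNotExist(err) {
			return drive
		}
	}
	return ""
}

func main() {
	fmt.Println(freeDriveLetter()) // on a non-Windows machine this simply prints "E:"
}
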
@@ -847,7 +999,7 @@ func (r *run) encryptRemoteIfNeeded(t *testing.T, remote string) string {
return enc return enc
} }
func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, flags map[string]string) (fs.Fs, *cache.Persistent) { func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool, cfg map[string]string, flags map[string]string) (fs.Fs, *cache.Persistent) {
fstest.Initialise() fstest.Initialise()
remoteExists := false remoteExists := false
for _, s := range config.FileSections() { for _, s := range config.FileSections() {
@@ -880,7 +1032,7 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("type", "cache") m.Set("type", "cache")
m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote)) m.Set("remote", localRemote+":"+filepath.Join(os.TempDir(), localRemote))
} else { } else {
remoteType := config.FileGet(remote, "type") remoteType := config.FileGet(remote, "type", "")
if remoteType == "" { if remoteType == "" {
t.Skipf("skipped due to invalid remote type for %v", remote) t.Skipf("skipped due to invalid remote type for %v", remote)
return nil, nil return nil, nil
@@ -891,14 +1043,14 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
m.Set("password", cryptPassword1) m.Set("password", cryptPassword1)
m.Set("password2", cryptPassword2) m.Set("password2", cryptPassword2)
} }
remoteRemote := config.FileGet(remote, "remote") remoteRemote := config.FileGet(remote, "remote", "")
if remoteRemote == "" { if remoteRemote == "" {
t.Skipf("skipped due to invalid remote wrapper for %v", remote) t.Skipf("skipped due to invalid remote wrapper for %v", remote)
return nil, nil return nil, nil
} }
remoteRemoteParts := strings.Split(remoteRemote, ":") remoteRemoteParts := strings.Split(remoteRemote, ":")
remoteWrapping := remoteRemoteParts[0] remoteWrapping := remoteRemoteParts[0]
remoteType := config.FileGet(remoteWrapping, "type") remoteType := config.FileGet(remoteWrapping, "type", "")
if remoteType != "cache" { if remoteType != "cache" {
t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType) t.Skipf("skipped due to invalid remote type for %v: '%v'", remoteWrapping, remoteType)
return nil, nil return nil, nil
@@ -907,21 +1059,20 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
} }
} }
runInstance.rootIsCrypt = rootIsCrypt runInstance.rootIsCrypt = rootIsCrypt
runInstance.dbPath = filepath.Join(config.GetCacheDir(), "cache-backend", cacheRemote+".db") runInstance.dbPath = filepath.Join(config.CacheDir, "cache-backend", cacheRemote+".db")
runInstance.chunkPath = filepath.Join(config.GetCacheDir(), "cache-backend", cacheRemote) runInstance.chunkPath = filepath.Join(config.CacheDir, "cache-backend", cacheRemote)
runInstance.vfsCachePath = filepath.Join(config.GetCacheDir(), "vfs", remote) runInstance.vfsCachePath = filepath.Join(config.CacheDir, "vfs", remote)
boltDb, err := cache.GetPersistent(runInstance.dbPath, runInstance.chunkPath, &cache.Features{PurgeDb: true}) boltDb, err := cache.GetPersistent(runInstance.dbPath, runInstance.chunkPath, &cache.Features{PurgeDb: true})
require.NoError(t, err) require.NoError(t, err)
ci := fs.GetConfig(context.Background()) fs.Config.LowLevelRetries = 1
ci.LowLevelRetries = 1
// Instantiate root // Instantiate root
if purge { if purge {
boltDb.PurgeTempUploads() boltDb.PurgeTempUploads()
_ = os.RemoveAll(path.Join(runInstance.tmpUploadDir, id)) _ = os.RemoveAll(path.Join(runInstance.tmpUploadDir, id))
} }
f, err := cache.NewFs(context.Background(), remote, id, m) f, err := cache.NewFs(remote, id, m)
require.NoError(t, err) require.NoError(t, err)
cfs, err := r.getCacheFs(f) cfs, err := r.getCacheFs(f)
require.NoError(t, err) require.NoError(t, err)
@@ -935,26 +1086,33 @@ func (r *run) newCacheFs(t *testing.T, remote, id string, needRemote, purge bool
} }
if purge { if purge {
_ = f.Features().Purge(context.Background(), "") _ = f.Features().Purge(context.Background())
require.NoError(t, err) require.NoError(t, err)
} }
err = f.Mkdir(context.Background(), "") err = f.Mkdir(context.Background(), "")
require.NoError(t, err) require.NoError(t, err)
if r.useMount && !r.isMounted {
t.Cleanup(func() { r.mountFs(t, f)
runInstance.cleanupFs(t, f) }
})
return f, boltDb return f, boltDb
} }
func (r *run) cleanupFs(t *testing.T, f fs.Fs) { func (r *run) cleanupFs(t *testing.T, f fs.Fs, b *cache.Persistent) {
err := f.Features().Purge(context.Background(), "") if r.useMount && r.isMounted {
r.unmountFs(t, f)
}
err := f.Features().Purge(context.Background())
require.NoError(t, err) require.NoError(t, err)
cfs, err := r.getCacheFs(f) cfs, err := r.getCacheFs(f)
require.NoError(t, err) require.NoError(t, err)
cfs.StopBackgroundRunners() cfs.StopBackgroundRunners()
if r.useMount && runtime.GOOS != "windows" {
err = os.RemoveAll(r.mntDir)
require.NoError(t, err)
}
err = os.RemoveAll(r.tmpUploadDir) err = os.RemoveAll(r.tmpUploadDir)
require.NoError(t, err) require.NoError(t, err)
@@ -970,7 +1128,7 @@ func (r *run) randomReader(t *testing.T, size int64) io.ReadCloser {
chunk := int64(1024) chunk := int64(1024)
cnt := size / chunk cnt := size / chunk
left := size % chunk left := size % chunk
f, err := os.CreateTemp("", "rclonecache-tempfile") f, err := ioutil.TempFile("", "rclonecache-tempfile")
require.NoError(t, err) require.NoError(t, err)
for i := 0; i < int(cnt); i++ { for i := 0; i < int(cnt); i++ {
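
Several hunks in this test file swap deprecated io/ioutil helpers for their os equivalents (ioutil.TempDir, ioutil.TempFile, ioutil.ReadDir on one side; os.MkdirTemp, os.CreateTemp, os.ReadDir on the other). A standalone sketch of the replacements, available since Go 1.16:

package main

import (
	"fmt"
	"log"
	"os"
)

func main() {
	dir, err := os.MkdirTemp("", "rclonecache-tmp") // was ioutil.TempDir
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	f, err := os.CreateTemp(dir, "rclonecache-tempfile") // was ioutil.TempFile
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	entries, err := os.ReadDir(dir) // was ioutil.ReadDir; returns fs.DirEntry values instead of fs.FileInfo
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(len(entries), "entry in", dir)
}
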
@@ -994,11 +1152,37 @@ func (r *run) writeObjectString(t *testing.T, f fs.Fs, remote, content string) f
} }
func (r *run) writeRemoteBytes(t *testing.T, f fs.Fs, remote string, data []byte) { func (r *run) writeRemoteBytes(t *testing.T, f fs.Fs, remote string, data []byte) {
r.writeObjectBytes(t, f, remote, data) var err error
if r.useMount {
err = r.retryBlock(func() error {
return ioutil.WriteFile(path.Join(r.mntDir, remote), data, 0600)
}, 3, time.Second*3)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
} else {
r.writeObjectBytes(t, f, remote, data)
}
} }
func (r *run) writeRemoteReader(t *testing.T, f fs.Fs, remote string, in io.ReadCloser) { func (r *run) writeRemoteReader(t *testing.T, f fs.Fs, remote string, in io.ReadCloser) {
r.writeObjectReader(t, f, remote, in) defer func() {
_ = in.Close()
}()
if r.useMount {
out, err := os.Create(path.Join(r.mntDir, remote))
require.NoError(t, err)
defer func() {
_ = out.Close()
}()
_, err = io.Copy(out, in)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
} else {
r.writeObjectReader(t, f, remote, in)
}
} }
func (r *run) writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Object { func (r *run) writeObjectBytes(t *testing.T, f fs.Fs, remote string, data []byte) fs.Object {
@@ -1015,6 +1199,10 @@ func (r *run) writeObjectReader(t *testing.T, f fs.Fs, remote string, in io.Read
objInfo := object.NewStaticObjectInfo(remote, modTime, -1, true, nil, f) objInfo := object.NewStaticObjectInfo(remote, modTime, -1, true, nil, f)
obj, err := f.Put(context.Background(), in, objInfo) obj, err := f.Put(context.Background(), in, objInfo)
require.NoError(t, err) require.NoError(t, err)
if r.useMount {
r.vfs.WaitForWriters(10 * time.Second)
}
return obj return obj
} }
@@ -1022,16 +1210,26 @@ func (r *run) updateObjectRemote(t *testing.T, f fs.Fs, remote string, data1 []b
var err error var err error
var obj fs.Object var obj fs.Object
in1 := bytes.NewReader(data1) if r.useMount {
in2 := bytes.NewReader(data2) err = ioutil.WriteFile(path.Join(r.mntDir, remote), data1, 0600)
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f) require.NoError(t, err)
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f) r.vfs.WaitForWriters(10 * time.Second)
err = ioutil.WriteFile(path.Join(r.mntDir, remote), data2, 0600)
require.NoError(t, err)
r.vfs.WaitForWriters(10 * time.Second)
obj, err = f.NewObject(context.Background(), remote)
} else {
in1 := bytes.NewReader(data1)
in2 := bytes.NewReader(data2)
objInfo1 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data1)), true, nil, f)
objInfo2 := object.NewStaticObjectInfo(remote, time.Now(), int64(len(data2)), true, nil, f)
_, err = f.Put(context.Background(), in1, objInfo1) obj, err = f.Put(context.Background(), in1, objInfo1)
require.NoError(t, err) require.NoError(t, err)
obj, err = f.NewObject(context.Background(), remote) obj, err = f.NewObject(context.Background(), remote)
require.NoError(t, err) require.NoError(t, err)
err = obj.Update(context.Background(), in2, objInfo2) err = obj.Update(context.Background(), in2, objInfo2)
}
require.NoError(t, err) require.NoError(t, err)
return obj return obj
@@ -1041,14 +1239,32 @@ func (r *run) readDataFromRemote(t *testing.T, f fs.Fs, remote string, offset, e
size := end - offset size := end - offset
checkSample := make([]byte, size) checkSample := make([]byte, size)
co, err := f.NewObject(context.Background(), remote) if r.useMount {
if err != nil { f, err := os.Open(path.Join(r.mntDir, remote))
return checkSample, err defer func() {
_ = f.Close()
}()
if err != nil {
return checkSample, err
}
_, _ = f.Seek(offset, io.SeekStart)
totalRead, err := io.ReadFull(f, checkSample)
checkSample = checkSample[:totalRead]
if err == io.EOF || err == io.ErrUnexpectedEOF {
err = nil
}
if err != nil {
return checkSample, err
}
} else {
co, err := f.NewObject(context.Background(), remote)
if err != nil {
return checkSample, err
}
checkSample = r.readDataFromObj(t, co, offset, end, noLengthCheck)
} }
checkSample = r.readDataFromObj(t, co, offset, end, noLengthCheck)
if !noLengthCheck && size != int64(len(checkSample)) { if !noLengthCheck && size != int64(len(checkSample)) {
return checkSample, fmt.Errorf("read size doesn't match expected: %v <> %v", len(checkSample), size) return checkSample, errors.Errorf("read size doesn't match expected: %v <> %v", len(checkSample), size)
} }
return checkSample, nil return checkSample, nil
} }
@@ -1069,19 +1285,28 @@ func (r *run) readDataFromObj(t *testing.T, o fs.Object, offset, end int64, noLe
} }
func (r *run) mkdir(t *testing.T, f fs.Fs, remote string) { func (r *run) mkdir(t *testing.T, f fs.Fs, remote string) {
err := f.Mkdir(context.Background(), remote) var err error
if r.useMount {
err = os.Mkdir(path.Join(r.mntDir, remote), 0700)
} else {
err = f.Mkdir(context.Background(), remote)
}
require.NoError(t, err) require.NoError(t, err)
} }
func (r *run) rm(t *testing.T, f fs.Fs, remote string) error { func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
var err error var err error
var obj fs.Object if r.useMount {
obj, err = f.NewObject(context.Background(), remote) err = os.Remove(path.Join(r.mntDir, remote))
if err != nil {
err = f.Rmdir(context.Background(), remote)
} else { } else {
err = obj.Remove(context.Background()) var obj fs.Object
obj, err = f.NewObject(context.Background(), remote)
if err != nil {
err = f.Rmdir(context.Background(), remote)
} else {
err = obj.Remove(context.Background())
}
} }
return err return err
@@ -1090,10 +1315,18 @@ func (r *run) rm(t *testing.T, f fs.Fs, remote string) error {
func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error) { func (r *run) list(t *testing.T, f fs.Fs, remote string) ([]interface{}, error) {
var err error var err error
var l []interface{} var l []interface{}
var list fs.DirEntries if r.useMount {
list, err = f.List(context.Background(), remote) var list []os.FileInfo
for _, ll := range list { list, err = ioutil.ReadDir(path.Join(r.mntDir, remote))
l = append(l, ll) for _, ll := range list {
l = append(l, ll)
}
} else {
var list fs.DirEntries
list, err = f.List(context.Background(), remote)
for _, ll := range list {
l = append(l, ll)
}
} }
return l, err return l, err
} }
@@ -1122,7 +1355,13 @@ func (r *run) copyFile(t *testing.T, f fs.Fs, src, dst string) error {
func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error { func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error {
var err error var err error
if rootFs.Features().DirMove != nil { if runInstance.useMount {
err = os.Rename(path.Join(runInstance.mntDir, src), path.Join(runInstance.mntDir, dst))
if err != nil {
return err
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().DirMove != nil {
err = rootFs.Features().DirMove(context.Background(), rootFs, src, dst) err = rootFs.Features().DirMove(context.Background(), rootFs, src, dst)
if err != nil { if err != nil {
return err return err
@@ -1138,7 +1377,13 @@ func (r *run) dirMove(t *testing.T, rootFs fs.Fs, src, dst string) error {
func (r *run) move(t *testing.T, rootFs fs.Fs, src, dst string) error { func (r *run) move(t *testing.T, rootFs fs.Fs, src, dst string) error {
var err error var err error
if rootFs.Features().Move != nil { if runInstance.useMount {
err = os.Rename(path.Join(runInstance.mntDir, src), path.Join(runInstance.mntDir, dst))
if err != nil {
return err
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().Move != nil {
obj1, err := rootFs.NewObject(context.Background(), src) obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil { if err != nil {
return err return err
@@ -1158,7 +1403,13 @@ func (r *run) move(t *testing.T, rootFs fs.Fs, src, dst string) error {
func (r *run) copy(t *testing.T, rootFs fs.Fs, src, dst string) error { func (r *run) copy(t *testing.T, rootFs fs.Fs, src, dst string) error {
var err error var err error
if rootFs.Features().Copy != nil { if r.useMount {
err = r.copyFile(t, rootFs, path.Join(r.mntDir, src), path.Join(r.mntDir, dst))
if err != nil {
return err
}
r.vfs.WaitForWriters(10 * time.Second)
} else if rootFs.Features().Copy != nil {
obj, err := rootFs.NewObject(context.Background(), src) obj, err := rootFs.NewObject(context.Background(), src)
if err != nil { if err != nil {
return err return err
@@ -1178,6 +1429,13 @@ func (r *run) copy(t *testing.T, rootFs fs.Fs, src, dst string) error {
func (r *run) modTime(t *testing.T, rootFs fs.Fs, src string) (time.Time, error) { func (r *run) modTime(t *testing.T, rootFs fs.Fs, src string) (time.Time, error) {
var err error var err error
if r.useMount {
fi, err := os.Stat(path.Join(runInstance.mntDir, src))
if err != nil {
return time.Time{}, err
}
return fi.ModTime(), nil
}
obj1, err := rootFs.NewObject(context.Background(), src) obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil { if err != nil {
return time.Time{}, err return time.Time{}, err
@@ -1188,6 +1446,13 @@ func (r *run) modTime(t *testing.T, rootFs fs.Fs, src string) (time.Time, error)
func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) { func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) {
var err error var err error
if r.useMount {
fi, err := os.Stat(path.Join(runInstance.mntDir, src))
if err != nil {
return int64(0), err
}
return fi.Size(), nil
}
obj1, err := rootFs.NewObject(context.Background(), src) obj1, err := rootFs.NewObject(context.Background(), src)
if err != nil { if err != nil {
return int64(0), err return int64(0), err
@@ -1198,15 +1463,28 @@ func (r *run) size(t *testing.T, rootFs fs.Fs, src string) (int64, error) {
func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) error { func (r *run) updateData(t *testing.T, rootFs fs.Fs, src, data, append string) error {
var err error var err error
var obj1 fs.Object if r.useMount {
obj1, err = rootFs.NewObject(context.Background(), src) var f *os.File
if err != nil { f, err = os.OpenFile(path.Join(runInstance.mntDir, src), os.O_TRUNC|os.O_CREATE|os.O_WRONLY, 0644)
return err if err != nil {
return err
}
defer func() {
_ = f.Close()
r.vfs.WaitForWriters(10 * time.Second)
}()
_, err = f.WriteString(data + append)
} else {
var obj1 fs.Object
obj1, err = rootFs.NewObject(context.Background(), src)
if err != nil {
return err
}
data1 := []byte(data + append)
r := bytes.NewReader(data1)
objInfo1 := object.NewStaticObjectInfo(src, time.Now(), int64(len(data1)), true, nil, rootFs)
err = obj1.Update(context.Background(), r, objInfo1)
} }
data1 := []byte(data + append)
reader := bytes.NewReader(data1)
objInfo1 := object.NewStaticObjectInfo(src, time.Now(), int64(len(data1)), true, nil, rootFs)
err = obj1.Update(context.Background(), reader, objInfo1)
return err return err
} }
@@ -1243,7 +1521,7 @@ func (r *run) listenForBackgroundUpload(t *testing.T, f fs.Fs, remote string) ch
case state = <-buCh: case state = <-buCh:
// continue // continue
case <-time.After(maxDuration): case <-time.After(maxDuration):
waitCh <- fmt.Errorf("Timed out waiting for background upload: %v", remote) waitCh <- errors.Errorf("Timed out waiting for background upload: %v", remote)
return return
} }
checkRemote := state.Remote checkRemote := state.Remote
@@ -1260,7 +1538,7 @@ func (r *run) listenForBackgroundUpload(t *testing.T, f fs.Fs, remote string) ch
return return
} }
} }
waitCh <- fmt.Errorf("Too many attempts to wait for the background upload: %v", remote) waitCh <- errors.Errorf("Too many attempts to wait for the background upload: %v", remote)
}() }()
return waitCh return waitCh
} }

backend/cache/cache_mount_other_test.go (new file, 21 lines)

@@ -0,0 +1,21 @@
// +build !linux !go1.13
// +build !darwin !go1.13
// +build !freebsd !go1.13
// +build !windows
// +build !race
package cache_test
import (
"testing"
"github.com/rclone/rclone/fs"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
panic("mountFs not defined for this platform")
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
panic("unmountFs not defined for this platform")
}

backend/cache/cache_mount_unix_test.go (new file, 79 lines)

@@ -0,0 +1,79 @@
// +build linux,go1.13 darwin,go1.13 freebsd,go1.13
// +build !race
package cache_test
import (
"os"
"testing"
"time"
"bazil.org/fuse"
fusefs "bazil.org/fuse/fs"
"github.com/rclone/rclone/cmd/mount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
func (r *run) mountFs(t *testing.T, f fs.Fs) {
device := f.Name() + ":" + f.Root()
var options = []fuse.MountOption{
fuse.MaxReadahead(uint32(mountlib.MaxReadAhead)),
fuse.Subtype("rclone"),
fuse.FSName(device), fuse.VolumeName(device),
fuse.NoAppleDouble(),
fuse.NoAppleXattr(),
//fuse.AllowOther(),
}
err := os.MkdirAll(r.mntDir, os.ModePerm)
require.NoError(t, err)
c, err := fuse.Mount(r.mntDir, options...)
require.NoError(t, err)
filesys := mount.NewFS(f)
server := fusefs.New(c, nil)
// Serve the mount point in the background returning error to errChan
r.unmountRes = make(chan error, 1)
go func() {
err := server.Serve(filesys)
closeErr := c.Close()
if err == nil {
err = closeErr
}
r.unmountRes <- err
}()
// check if the mount process has an error to report
<-c.Ready
require.NoError(t, c.MountError)
r.unmountFn = func() error {
// Shutdown the VFS
filesys.VFS.Shutdown()
return fuse.Unmount(r.mntDir)
}
r.vfs = filesys.VFS
r.isMounted = true
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
var err error
for i := 0; i < 4; i++ {
err = r.unmountFn()
if err != nil {
//log.Printf("signal to umount failed - retrying: %v", err)
time.Sleep(3 * time.Second)
continue
}
break
}
require.NoError(t, err)
err = <-r.unmountRes
require.NoError(t, err)
err = r.vfs.CleanUp()
require.NoError(t, err)
r.isMounted = false
}


@@ -0,0 +1,125 @@
// +build windows
// +build !race
package cache_test
import (
"fmt"
"os"
"testing"
"time"
"github.com/billziss-gh/cgofuse/fuse"
"github.com/pkg/errors"
"github.com/rclone/rclone/cmd/cmount"
"github.com/rclone/rclone/cmd/mountlib"
"github.com/rclone/rclone/fs"
"github.com/stretchr/testify/require"
)
// waitFor runs fn() until it returns true or the timeout expires
func waitFor(fn func() bool) (ok bool) {
const totalWait = 10 * time.Second
const individualWait = 10 * time.Millisecond
for i := 0; i < int(totalWait/individualWait); i++ {
ok = fn()
if ok {
return ok
}
time.Sleep(individualWait)
}
return false
}
func (r *run) mountFs(t *testing.T, f fs.Fs) {
// FIXME implement cmount
t.Skip("windows not supported yet")
device := f.Name() + ":" + f.Root()
options := []string{
"-o", "fsname=" + device,
"-o", "subtype=rclone",
"-o", fmt.Sprintf("max_readahead=%d", mountlib.MaxReadAhead),
"-o", "uid=-1",
"-o", "gid=-1",
"-o", "allow_other",
// This causes FUSE to supply O_TRUNC with the Open
// call which is more efficient for cmount. However
// it does not work with cgofuse on Windows with
// WinFSP so cmount must work with or without it.
"-o", "atomic_o_trunc",
"--FileSystemName=rclone",
}
fsys := cmount.NewFS(f)
host := fuse.NewFileSystemHost(fsys)
// Serve the mount point in the background returning error to errChan
r.unmountRes = make(chan error, 1)
go func() {
var err error
ok := host.Mount(r.mntDir, options)
if !ok {
err = errors.New("mount failed")
}
r.unmountRes <- err
}()
// unmount
r.unmountFn = func() error {
// Shutdown the VFS
fsys.VFS.Shutdown()
if host.Unmount() {
if !waitFor(func() bool {
_, err := os.Stat(r.mntDir)
return err != nil
}) {
t.Fatalf("mountpoint %q didn't disappear after unmount - continuing anyway", r.mntDir)
}
return nil
}
return errors.New("host unmount failed")
}
// Wait for the filesystem to become ready, checking the file
// system didn't blow up before starting
select {
case err := <-r.unmountRes:
require.NoError(t, err)
case <-time.After(time.Second * 3):
}
// Wait for the mount point to be available on Windows
// On Windows the Init signal comes slightly before the mount is ready
if !waitFor(func() bool {
_, err := os.Stat(r.mntDir)
return err == nil
}) {
t.Errorf("mountpoint %q didn't became available on mount", r.mntDir)
}
r.vfs = fsys.VFS
r.isMounted = true
}
func (r *run) unmountFs(t *testing.T, f fs.Fs) {
// FIXME implement cmount
t.Skip("windows not supported yet")
var err error
for i := 0; i < 4; i++ {
err = r.unmountFn()
if err != nil {
//log.Printf("signal to umount failed - retrying: %v", err)
time.Sleep(3 * time.Second)
continue
}
break
}
require.NoError(t, err)
err = <-r.unmountRes
require.NoError(t, err)
err = r.vfs.CleanUp()
require.NoError(t, err)
r.isMounted = false
}


@@ -1,7 +1,7 @@
// Test Cache filesystem interface // Test Cache filesystem interface
//go:build !plan9 && !js && !race // +build !plan9
// +build !plan9,!js,!race // +build !race
package cache_test package cache_test
@@ -19,7 +19,7 @@ func TestIntegration(t *testing.T) {
RemoteName: "TestCache:", RemoteName: "TestCache:",
NilObject: (*cache.Object)(nil), NilObject: (*cache.Object)(nil),
UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt"}, UnimplementableFsMethods: []string{"PublicLink", "OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier", "Metadata"}, UnimplementableObjectMethods: []string{"MimeType", "ID", "GetTier", "SetTier"},
SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache SkipInvalidUTF8: true, // invalid UTF-8 confuses the cache
}) })
} }


@@ -1,7 +1,6 @@
// Build for cache for unsupported platforms to stop go complaining // Build for cache for unsupported platforms to stop go complaining
// about "no buildable Go source files " // about "no buildable Go source files "
//go:build plan9 || js // +build plan9
// +build plan9 js
package cache package cache
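Note: throughout this diff the newer //go:build constraints sit alongside the legacy // +build form. Both express the same condition; within a +build line a space means OR and a comma means AND, and separate +build lines are ANDed together. An illustrative pair (not taken from the repository):

// New-style constraint (Go 1.17+): a boolean expression.
//go:build (linux || darwin) && !race

// Old-style equivalent: space = OR, comma = AND, extra lines = AND.
// +build linux darwin
// +build !race

package example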


@@ -1,5 +1,5 @@
//go:build !plan9 && !js && !race // +build !plan9
// +build !plan9,!js,!race // +build !race
package cache_test package cache_test
@@ -21,8 +21,10 @@ import (
func TestInternalUploadTempDirCreated(t *testing.T) { func TestInternalUploadTempDirCreated(t *testing.T) {
id := fmt.Sprintf("tiutdc%v", time.Now().Unix()) id := fmt.Sprintf("tiutdc%v", time.Now().Unix())
runInstance.newCacheFs(t, remoteName, id, false, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, false, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)}) map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id)})
defer runInstance.cleanupFs(t, rootFs, boltDb)
_, err := os.Stat(path.Join(runInstance.tmpUploadDir, id)) _, err := os.Stat(path.Join(runInstance.tmpUploadDir, id))
require.NoError(t, err) require.NoError(t, err)
@@ -61,7 +63,9 @@ func testInternalUploadQueueOneFile(t *testing.T, id string, rootFs fs.Fs, boltD
func TestInternalUploadQueueOneFileNoRest(t *testing.T) { func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix()) id := fmt.Sprintf("tiuqofnr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"}) map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "0s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb) testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
} }
@@ -69,15 +73,19 @@ func TestInternalUploadQueueOneFileNoRest(t *testing.T) {
func TestInternalUploadQueueOneFileWithRest(t *testing.T) { func TestInternalUploadQueueOneFileWithRest(t *testing.T) {
id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix()) id := fmt.Sprintf("tiuqofwr%v", time.Now().Unix())
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"}) map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1m"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
testInternalUploadQueueOneFile(t, id, rootFs, boltDb) testInternalUploadQueueOneFile(t, id, rootFs, boltDb)
} }
func TestInternalUploadMoveExistingFile(t *testing.T) { func TestInternalUploadMoveExistingFile(t *testing.T) {
id := fmt.Sprintf("tiumef%v", time.Now().Unix()) id := fmt.Sprintf("tiumef%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"}) map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "3s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one") err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err) require.NoError(t, err)
@@ -111,8 +119,10 @@ func TestInternalUploadMoveExistingFile(t *testing.T) {
func TestInternalUploadTempPathCleaned(t *testing.T) { func TestInternalUploadTempPathCleaned(t *testing.T) {
id := fmt.Sprintf("tiutpc%v", time.Now().Unix()) id := fmt.Sprintf("tiutpc%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"}) map[string]string{"cache-tmp-upload-path": path.Join(runInstance.tmpUploadDir, id), "cache-tmp-wait-time": "5s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "one") err := rootFs.Mkdir(context.Background(), "one")
require.NoError(t, err) require.NoError(t, err)
@@ -152,8 +162,10 @@ func TestInternalUploadTempPathCleaned(t *testing.T) {
func TestInternalUploadQueueMoreFiles(t *testing.T) { func TestInternalUploadQueueMoreFiles(t *testing.T) {
id := fmt.Sprintf("tiuqmf%v", time.Now().Unix()) id := fmt.Sprintf("tiuqmf%v", time.Now().Unix())
rootFs, _ := runInstance.newCacheFs(t, remoteName, id, true, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"}) map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1s"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
err := rootFs.Mkdir(context.Background(), "test") err := rootFs.Mkdir(context.Background(), "test")
require.NoError(t, err) require.NoError(t, err)
@@ -201,7 +213,9 @@ func TestInternalUploadQueueMoreFiles(t *testing.T) {
func TestInternalUploadTempFileOperations(t *testing.T) { func TestInternalUploadTempFileOperations(t *testing.T) {
id := "tiutfo" id := "tiutfo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"}) map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads() boltDb.PurgeTempUploads()
@@ -329,7 +343,9 @@ func TestInternalUploadTempFileOperations(t *testing.T) {
func TestInternalUploadUploadingFileOperations(t *testing.T) { func TestInternalUploadUploadingFileOperations(t *testing.T) {
id := "tiuufo" id := "tiuufo"
rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true, rootFs, boltDb := runInstance.newCacheFs(t, remoteName, id, true, true,
nil,
map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"}) map[string]string{"tmp_upload_path": path.Join(runInstance.tmpUploadDir, id), "tmp_wait_time": "1h"})
defer runInstance.cleanupFs(t, rootFs, boltDb)
boltDb.PurgeTempUploads() boltDb.PurgeTempUploads()


@@ -1,5 +1,4 @@
//go:build !plan9 && !js // +build !plan9
// +build !plan9,!js
package cache package cache


@@ -1,11 +1,9 @@
//go:build !plan9 && !js // +build !plan9
// +build !plan9,!js
package cache package cache
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"io" "io"
"path" "path"
@@ -14,6 +12,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/operations"
) )
@@ -243,7 +242,7 @@ func (r *Handle) getChunk(chunkStart int64) ([]byte, error) {
return nil, io.ErrUnexpectedEOF return nil, io.ErrUnexpectedEOF
} }
return nil, fmt.Errorf("chunk not found %v", chunkStart) return nil, errors.Errorf("chunk not found %v", chunkStart)
} }
// first chunk will be aligned with the start // first chunk will be aligned with the start
@@ -323,7 +322,7 @@ func (r *Handle) Seek(offset int64, whence int) (int64, error) {
fs.Debugf(r, "moving offset end (%v) from %v to %v", r.cachedObject.Size(), r.offset, r.cachedObject.Size()+offset) fs.Debugf(r, "moving offset end (%v) from %v to %v", r.cachedObject.Size(), r.offset, r.cachedObject.Size()+offset)
r.offset = r.cachedObject.Size() + offset r.offset = r.cachedObject.Size() + offset
default: default:
err = fmt.Errorf("cache: unimplemented seek whence %v", whence) err = errors.Errorf("cache: unimplemented seek whence %v", whence)
} }
chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize)) chunkStart := r.offset - (r.offset % int64(r.cacheFs().opt.ChunkSize))

View File

@@ -1,16 +1,15 @@
//go:build !plan9 && !js // +build !plan9
// +build !plan9,!js
package cache package cache
import ( import (
"context" "context"
"fmt"
"io" "io"
"path" "path"
"sync" "sync"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/readers"
@@ -178,14 +177,10 @@ func (o *Object) refreshFromSource(ctx context.Context, force bool) error {
} }
if o.isTempFile() { if o.isTempFile() {
liveObject, err = o.ParentFs.NewObject(ctx, o.Remote()) liveObject, err = o.ParentFs.NewObject(ctx, o.Remote())
if err != nil { err = errors.Wrapf(err, "in parent fs %v", o.ParentFs)
err = fmt.Errorf("in parent fs %v: %w", o.ParentFs, err)
}
} else { } else {
liveObject, err = o.CacheFs.Fs.NewObject(ctx, o.Remote()) liveObject, err = o.CacheFs.Fs.NewObject(ctx, o.Remote())
if err != nil { err = errors.Wrapf(err, "in cache fs %v", o.CacheFs.Fs)
err = fmt.Errorf("in cache fs %v: %w", o.CacheFs.Fs, err)
}
} }
if err != nil { if err != nil {
fs.Errorf(o, "error refreshing object in : %v", err) fs.Errorf(o, "error refreshing object in : %v", err)
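Note: this hunk swaps github.com/pkg/errors wrapping for the standard library's %w verb. One behavioural difference explains the extra if err != nil guard on the left: errors.Wrapf returns nil when given a nil error, while fmt.Errorf always returns a non-nil error. A small self-contained sketch of the equivalence:

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("object not found")

func wrap(err error) error {
	// Guard first, mirroring the hunk above: fmt.Errorf with %w would
	// otherwise turn a nil error into a non-nil one.
	if err == nil {
		return nil
	}
	return fmt.Errorf("in parent fs %v: %w", "remote:", err)
}

func main() {
	err := wrap(errNotFound)
	fmt.Println(errors.Is(err, errNotFound)) // true: %w keeps the chain inspectable
	fmt.Println(wrap(nil))                   // <nil>
}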
@@ -257,7 +252,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
defer o.CacheFs.backgroundRunner.play() defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads // don't allow started uploads
if o.isTempFile() && o.tempFileStartedUpload() { if o.isTempFile() && o.tempFileStartedUpload() {
return fmt.Errorf("%v is currently uploading, can't update", o) return errors.Errorf("%v is currently uploading, can't update", o)
} }
} }
fs.Debugf(o, "updating object contents with size %v", src.Size()) fs.Debugf(o, "updating object contents with size %v", src.Size())
@@ -296,7 +291,7 @@ func (o *Object) Remove(ctx context.Context) error {
defer o.CacheFs.backgroundRunner.play() defer o.CacheFs.backgroundRunner.play()
// don't allow started uploads // don't allow started uploads
if o.isTempFile() && o.tempFileStartedUpload() { if o.isTempFile() && o.tempFileStartedUpload() {
return fmt.Errorf("%v is currently uploading, can't delete", o) return errors.Errorf("%v is currently uploading, can't delete", o)
} }
} }
err := o.Object.Remove(ctx) err := o.Object.Remove(ctx)


@@ -1,5 +1,4 @@
//go:build !plan9 && !js // +build !plan9
// +build !plan9,!js
package cache package cache
@@ -8,7 +7,7 @@ import (
"crypto/tls" "crypto/tls"
"encoding/json" "encoding/json"
"fmt" "fmt"
"io" "io/ioutil"
"net/http" "net/http"
"net/url" "net/url"
"strings" "strings"
@@ -167,7 +166,7 @@ func (p *plexConnector) listenWebsocket() {
continue continue
} }
var data []byte var data []byte
data, err = io.ReadAll(resp.Body) data, err = ioutil.ReadAll(resp.Body)
if err != nil { if err != nil {
continue continue
} }
@@ -213,7 +212,7 @@ func (p *plexConnector) authenticate() error {
var data map[string]interface{} var data map[string]interface{}
err = json.NewDecoder(resp.Body).Decode(&data) err = json.NewDecoder(resp.Body).Decode(&data)
if err != nil { if err != nil {
return fmt.Errorf("failed to obtain token: %w", err) return fmt.Errorf("failed to obtain token: %v", err)
} }
tokenGen, ok := get(data, "user", "authToken") tokenGen, ok := get(data, "user", "authToken")
if !ok { if !ok {


@@ -1,15 +1,14 @@
//go:build !plan9 && !js // +build !plan9
// +build !plan9,!js
package cache package cache
import ( import (
"fmt"
"strconv" "strconv"
"strings" "strings"
"time" "time"
cache "github.com/patrickmn/go-cache" cache "github.com/patrickmn/go-cache"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
) )
@@ -53,7 +52,7 @@ func (m *Memory) GetChunk(cachedObject *Object, offset int64) ([]byte, error) {
return data, nil return data, nil
} }
return nil, fmt.Errorf("couldn't get cached object data at offset %v", offset) return nil, errors.Errorf("couldn't get cached object data at offset %v", offset)
} }
// AddChunk adds a new chunk of a cached object // AddChunk adds a new chunk of a cached object
@@ -76,7 +75,10 @@ func (m *Memory) CleanChunksByAge(chunkAge time.Duration) {
// CleanChunksByNeed will cleanup chunks after the FS passes a specific chunk // CleanChunksByNeed will cleanup chunks after the FS passes a specific chunk
func (m *Memory) CleanChunksByNeed(offset int64) { func (m *Memory) CleanChunksByNeed(offset int64) {
for key := range m.db.Items() { var items map[string]cache.Item
items = m.db.Items()
for key := range items {
sepIdx := strings.LastIndex(key, "-") sepIdx := strings.LastIndex(key, "-")
keyOffset, err := strconv.ParseInt(key[sepIdx+1:], 10, 64) keyOffset, err := strconv.ParseInt(key[sepIdx+1:], 10, 64)
if err != nil { if err != nil {


@@ -1,5 +1,4 @@
//go:build !plan9 && !js // +build !plan9
// +build !plan9,!js
package cache package cache
@@ -9,6 +8,7 @@ import (
"encoding/binary" "encoding/binary"
"encoding/json" "encoding/json"
"fmt" "fmt"
"io/ioutil"
"os" "os"
"path" "path"
"strconv" "strconv"
@@ -16,6 +16,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/walk" "github.com/rclone/rclone/fs/walk"
bolt "go.etcd.io/bbolt" bolt "go.etcd.io/bbolt"
@@ -118,11 +119,11 @@ func (b *Persistent) connect() error {
err = os.MkdirAll(b.dataPath, os.ModePerm) err = os.MkdirAll(b.dataPath, os.ModePerm)
if err != nil { if err != nil {
return fmt.Errorf("failed to create a data directory %q: %w", b.dataPath, err) return errors.Wrapf(err, "failed to create a data directory %q", b.dataPath)
} }
b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: b.features.DbWaitTime}) b.db, err = bolt.Open(b.dbPath, 0644, &bolt.Options{Timeout: b.features.DbWaitTime})
if err != nil { if err != nil {
return fmt.Errorf("failed to open a cache connection to %q: %w", b.dbPath, err) return errors.Wrapf(err, "failed to open a cache connection to %q", b.dbPath)
} }
if b.features.PurgeDb { if b.features.PurgeDb {
b.Purge() b.Purge()
@@ -174,7 +175,7 @@ func (b *Persistent) GetDir(remote string) (*Directory, error) {
err := b.db.View(func(tx *bolt.Tx) error { err := b.db.View(func(tx *bolt.Tx) error {
bucket := b.getBucket(remote, false, tx) bucket := b.getBucket(remote, false, tx)
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open bucket (%v)", remote) return errors.Errorf("couldn't open bucket (%v)", remote)
} }
data := bucket.Get([]byte(".")) data := bucket.Get([]byte("."))
@@ -182,7 +183,7 @@ func (b *Persistent) GetDir(remote string) (*Directory, error) {
return json.Unmarshal(data, cd) return json.Unmarshal(data, cd)
} }
return fmt.Errorf("%v not found", remote) return errors.Errorf("%v not found", remote)
}) })
return cd, err return cd, err
@@ -207,7 +208,7 @@ func (b *Persistent) AddBatchDir(cachedDirs []*Directory) error {
bucket = b.getBucket(cachedDirs[0].Dir, true, tx) bucket = b.getBucket(cachedDirs[0].Dir, true, tx)
} }
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open bucket (%v)", cachedDirs[0].Dir) return errors.Errorf("couldn't open bucket (%v)", cachedDirs[0].Dir)
} }
for _, cachedDir := range cachedDirs { for _, cachedDir := range cachedDirs {
@@ -224,7 +225,7 @@ func (b *Persistent) AddBatchDir(cachedDirs []*Directory) error {
encoded, err := json.Marshal(cachedDir) encoded, err := json.Marshal(cachedDir)
if err != nil { if err != nil {
return fmt.Errorf("couldn't marshal object (%v): %v", cachedDir, err) return errors.Errorf("couldn't marshal object (%v): %v", cachedDir, err)
} }
err = b.Put([]byte("."), encoded) err = b.Put([]byte("."), encoded)
if err != nil { if err != nil {
@@ -242,17 +243,17 @@ func (b *Persistent) GetDirEntries(cachedDir *Directory) (fs.DirEntries, error)
err := b.db.View(func(tx *bolt.Tx) error { err := b.db.View(func(tx *bolt.Tx) error {
bucket := b.getBucket(cachedDir.abs(), false, tx) bucket := b.getBucket(cachedDir.abs(), false, tx)
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open bucket (%v)", cachedDir.abs()) return errors.Errorf("couldn't open bucket (%v)", cachedDir.abs())
} }
val := bucket.Get([]byte(".")) val := bucket.Get([]byte("."))
if val != nil { if val != nil {
err := json.Unmarshal(val, cachedDir) err := json.Unmarshal(val, cachedDir)
if err != nil { if err != nil {
return fmt.Errorf("error during unmarshalling obj: %w", err) return errors.Errorf("error during unmarshalling obj: %v", err)
} }
} else { } else {
return fmt.Errorf("missing cached dir: %v", cachedDir) return errors.Errorf("missing cached dir: %v", cachedDir)
} }
c := bucket.Cursor() c := bucket.Cursor()
@@ -267,7 +268,7 @@ func (b *Persistent) GetDirEntries(cachedDir *Directory) (fs.DirEntries, error)
// we try to find a cached meta for the dir // we try to find a cached meta for the dir
currentBucket := c.Bucket().Bucket(k) currentBucket := c.Bucket().Bucket(k)
if currentBucket == nil { if currentBucket == nil {
return fmt.Errorf("couldn't open bucket (%v)", string(k)) return errors.Errorf("couldn't open bucket (%v)", string(k))
} }
metaKey := currentBucket.Get([]byte(".")) metaKey := currentBucket.Get([]byte("."))
@@ -316,7 +317,7 @@ func (b *Persistent) RemoveDir(fp string) error {
err = b.db.Update(func(tx *bolt.Tx) error { err = b.db.Update(func(tx *bolt.Tx) error {
bucket := b.getBucket(cleanPath(parentDir), false, tx) bucket := b.getBucket(cleanPath(parentDir), false, tx)
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open bucket (%v)", fp) return errors.Errorf("couldn't open bucket (%v)", fp)
} }
// delete the cached dir // delete the cached dir
err := bucket.DeleteBucket([]byte(cleanPath(dirName))) err := bucket.DeleteBucket([]byte(cleanPath(dirName)))
@@ -376,13 +377,13 @@ func (b *Persistent) GetObject(cachedObject *Object) (err error) {
return b.db.View(func(tx *bolt.Tx) error { return b.db.View(func(tx *bolt.Tx) error {
bucket := b.getBucket(cachedObject.Dir, false, tx) bucket := b.getBucket(cachedObject.Dir, false, tx)
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open parent bucket for %v", cachedObject.Dir) return errors.Errorf("couldn't open parent bucket for %v", cachedObject.Dir)
} }
val := bucket.Get([]byte(cachedObject.Name)) val := bucket.Get([]byte(cachedObject.Name))
if val != nil { if val != nil {
return json.Unmarshal(val, cachedObject) return json.Unmarshal(val, cachedObject)
} }
return fmt.Errorf("couldn't find object (%v)", cachedObject.Name) return errors.Errorf("couldn't find object (%v)", cachedObject.Name)
}) })
} }
@@ -391,16 +392,16 @@ func (b *Persistent) AddObject(cachedObject *Object) error {
return b.db.Update(func(tx *bolt.Tx) error { return b.db.Update(func(tx *bolt.Tx) error {
bucket := b.getBucket(cachedObject.Dir, true, tx) bucket := b.getBucket(cachedObject.Dir, true, tx)
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open parent bucket for %v", cachedObject) return errors.Errorf("couldn't open parent bucket for %v", cachedObject)
} }
// cache Object Info // cache Object Info
encoded, err := json.Marshal(cachedObject) encoded, err := json.Marshal(cachedObject)
if err != nil { if err != nil {
return fmt.Errorf("couldn't marshal object (%v) info: %v", cachedObject, err) return errors.Errorf("couldn't marshal object (%v) info: %v", cachedObject, err)
} }
err = bucket.Put([]byte(cachedObject.Name), encoded) err = bucket.Put([]byte(cachedObject.Name), encoded)
if err != nil { if err != nil {
return fmt.Errorf("couldn't cache object (%v) info: %v", cachedObject, err) return errors.Errorf("couldn't cache object (%v) info: %v", cachedObject, err)
} }
return nil return nil
}) })
@@ -412,7 +413,7 @@ func (b *Persistent) RemoveObject(fp string) error {
return b.db.Update(func(tx *bolt.Tx) error { return b.db.Update(func(tx *bolt.Tx) error {
bucket := b.getBucket(cleanPath(parentDir), false, tx) bucket := b.getBucket(cleanPath(parentDir), false, tx)
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open parent bucket for %v", cleanPath(parentDir)) return errors.Errorf("couldn't open parent bucket for %v", cleanPath(parentDir))
} }
err := bucket.Delete([]byte(cleanPath(objName))) err := bucket.Delete([]byte(cleanPath(objName)))
if err != nil { if err != nil {
@@ -444,7 +445,7 @@ func (b *Persistent) HasEntry(remote string) bool {
err := b.db.View(func(tx *bolt.Tx) error { err := b.db.View(func(tx *bolt.Tx) error {
bucket := b.getBucket(dir, false, tx) bucket := b.getBucket(dir, false, tx)
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't open parent bucket for %v", remote) return errors.Errorf("couldn't open parent bucket for %v", remote)
} }
if f := bucket.Bucket([]byte(name)); f != nil { if f := bucket.Bucket([]byte(name)); f != nil {
return nil return nil
@@ -453,9 +454,12 @@ func (b *Persistent) HasEntry(remote string) bool {
return nil return nil
} }
return fmt.Errorf("couldn't find object (%v)", remote) return errors.Errorf("couldn't find object (%v)", remote)
}) })
return err == nil if err == nil {
return true
}
return false
} }
// HasChunk confirms the existence of a single chunk of an object // HasChunk confirms the existence of a single chunk of an object
@@ -472,7 +476,7 @@ func (b *Persistent) GetChunk(cachedObject *Object, offset int64) ([]byte, error
var data []byte var data []byte
fp := path.Join(b.dataPath, cachedObject.abs(), strconv.FormatInt(offset, 10)) fp := path.Join(b.dataPath, cachedObject.abs(), strconv.FormatInt(offset, 10))
data, err := os.ReadFile(fp) data, err := ioutil.ReadFile(fp)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -485,7 +489,7 @@ func (b *Persistent) AddChunk(fp string, data []byte, offset int64) error {
_ = os.MkdirAll(path.Join(b.dataPath, fp), os.ModePerm) _ = os.MkdirAll(path.Join(b.dataPath, fp), os.ModePerm)
filePath := path.Join(b.dataPath, fp, strconv.FormatInt(offset, 10)) filePath := path.Join(b.dataPath, fp, strconv.FormatInt(offset, 10))
err := os.WriteFile(filePath, data, os.ModePerm) err := ioutil.WriteFile(filePath, data, os.ModePerm)
if err != nil { if err != nil {
return err return err
} }
@@ -550,7 +554,7 @@ func (b *Persistent) CleanChunksBySize(maxSize int64) {
err := b.db.Update(func(tx *bolt.Tx) error { err := b.db.Update(func(tx *bolt.Tx) error {
dataTsBucket := tx.Bucket([]byte(DataTsBucket)) dataTsBucket := tx.Bucket([]byte(DataTsBucket))
if dataTsBucket == nil { if dataTsBucket == nil {
return fmt.Errorf("couldn't open (%v) bucket", DataTsBucket) return errors.Errorf("Couldn't open (%v) bucket", DataTsBucket)
} }
// iterate through ts // iterate through ts
c := dataTsBucket.Cursor() c := dataTsBucket.Cursor()
@@ -728,7 +732,7 @@ func (b *Persistent) GetChunkTs(path string, offset int64) (time.Time, error) {
return nil return nil
} }
} }
return fmt.Errorf("not found %v-%v", path, offset) return errors.Errorf("not found %v-%v", path, offset)
}) })
return t, err return t, err
@@ -768,7 +772,7 @@ func (b *Persistent) addPendingUpload(destPath string, started bool) error {
return b.db.Update(func(tx *bolt.Tx) error { return b.db.Update(func(tx *bolt.Tx) error {
bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket))
if err != nil { if err != nil {
return fmt.Errorf("couldn't bucket for %v", tempBucket) return errors.Errorf("couldn't bucket for %v", tempBucket)
} }
tempObj := &tempUploadInfo{ tempObj := &tempUploadInfo{
DestPath: destPath, DestPath: destPath,
@@ -779,11 +783,11 @@ func (b *Persistent) addPendingUpload(destPath string, started bool) error {
// cache Object Info // cache Object Info
encoded, err := json.Marshal(tempObj) encoded, err := json.Marshal(tempObj)
if err != nil { if err != nil {
return fmt.Errorf("couldn't marshal object (%v) info: %v", destPath, err) return errors.Errorf("couldn't marshal object (%v) info: %v", destPath, err)
} }
err = bucket.Put([]byte(destPath), encoded) err = bucket.Put([]byte(destPath), encoded)
if err != nil { if err != nil {
return fmt.Errorf("couldn't cache object (%v) info: %v", destPath, err) return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
} }
return nil return nil
@@ -798,7 +802,7 @@ func (b *Persistent) getPendingUpload(inRoot string, waitTime time.Duration) (de
err = b.db.Update(func(tx *bolt.Tx) error { err = b.db.Update(func(tx *bolt.Tx) error {
bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket))
if err != nil { if err != nil {
return fmt.Errorf("couldn't bucket for %v", tempBucket) return errors.Errorf("couldn't bucket for %v", tempBucket)
} }
c := bucket.Cursor() c := bucket.Cursor()
@@ -831,7 +835,7 @@ func (b *Persistent) getPendingUpload(inRoot string, waitTime time.Duration) (de
return nil return nil
} }
return fmt.Errorf("no pending upload found") return errors.Errorf("no pending upload found")
}) })
return destPath, err return destPath, err
@@ -842,14 +846,14 @@ func (b *Persistent) SearchPendingUpload(remote string) (started bool, err error
err = b.db.View(func(tx *bolt.Tx) error { err = b.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket([]byte(tempBucket)) bucket := tx.Bucket([]byte(tempBucket))
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't bucket for %v", tempBucket) return errors.Errorf("couldn't bucket for %v", tempBucket)
} }
var tempObj = &tempUploadInfo{} var tempObj = &tempUploadInfo{}
v := bucket.Get([]byte(remote)) v := bucket.Get([]byte(remote))
err = json.Unmarshal(v, tempObj) err = json.Unmarshal(v, tempObj)
if err != nil { if err != nil {
return fmt.Errorf("pending upload (%v) not found %v", remote, err) return errors.Errorf("pending upload (%v) not found %v", remote, err)
} }
started = tempObj.Started started = tempObj.Started
@@ -864,7 +868,7 @@ func (b *Persistent) searchPendingUploadFromDir(dir string) (remotes []string, e
err = b.db.View(func(tx *bolt.Tx) error { err = b.db.View(func(tx *bolt.Tx) error {
bucket := tx.Bucket([]byte(tempBucket)) bucket := tx.Bucket([]byte(tempBucket))
if bucket == nil { if bucket == nil {
return fmt.Errorf("couldn't bucket for %v", tempBucket) return errors.Errorf("couldn't bucket for %v", tempBucket)
} }
c := bucket.Cursor() c := bucket.Cursor()
@@ -894,22 +898,22 @@ func (b *Persistent) rollbackPendingUpload(remote string) error {
return b.db.Update(func(tx *bolt.Tx) error { return b.db.Update(func(tx *bolt.Tx) error {
bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket))
if err != nil { if err != nil {
return fmt.Errorf("couldn't bucket for %v", tempBucket) return errors.Errorf("couldn't bucket for %v", tempBucket)
} }
var tempObj = &tempUploadInfo{} var tempObj = &tempUploadInfo{}
v := bucket.Get([]byte(remote)) v := bucket.Get([]byte(remote))
err = json.Unmarshal(v, tempObj) err = json.Unmarshal(v, tempObj)
if err != nil { if err != nil {
return fmt.Errorf("pending upload (%v) not found: %w", remote, err) return errors.Errorf("pending upload (%v) not found %v", remote, err)
} }
tempObj.Started = false tempObj.Started = false
v2, err := json.Marshal(tempObj) v2, err := json.Marshal(tempObj)
if err != nil { if err != nil {
return fmt.Errorf("pending upload not updated: %w", err) return errors.Errorf("pending upload not updated %v", err)
} }
err = bucket.Put([]byte(tempObj.DestPath), v2) err = bucket.Put([]byte(tempObj.DestPath), v2)
if err != nil { if err != nil {
return fmt.Errorf("pending upload not updated: %w", err) return errors.Errorf("pending upload not updated %v", err)
} }
return nil return nil
}) })
@@ -922,7 +926,7 @@ func (b *Persistent) removePendingUpload(remote string) error {
return b.db.Update(func(tx *bolt.Tx) error { return b.db.Update(func(tx *bolt.Tx) error {
bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket))
if err != nil { if err != nil {
return fmt.Errorf("couldn't bucket for %v", tempBucket) return errors.Errorf("couldn't bucket for %v", tempBucket)
} }
return bucket.Delete([]byte(remote)) return bucket.Delete([]byte(remote))
}) })
@@ -937,17 +941,17 @@ func (b *Persistent) updatePendingUpload(remote string, fn func(item *tempUpload
return b.db.Update(func(tx *bolt.Tx) error { return b.db.Update(func(tx *bolt.Tx) error {
bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket)) bucket, err := tx.CreateBucketIfNotExists([]byte(tempBucket))
if err != nil { if err != nil {
return fmt.Errorf("couldn't bucket for %v", tempBucket) return errors.Errorf("couldn't bucket for %v", tempBucket)
} }
var tempObj = &tempUploadInfo{} var tempObj = &tempUploadInfo{}
v := bucket.Get([]byte(remote)) v := bucket.Get([]byte(remote))
err = json.Unmarshal(v, tempObj) err = json.Unmarshal(v, tempObj)
if err != nil { if err != nil {
return fmt.Errorf("pending upload (%v) not found %v", remote, err) return errors.Errorf("pending upload (%v) not found %v", remote, err)
} }
if tempObj.Started { if tempObj.Started {
return fmt.Errorf("pending upload already started %v", remote) return errors.Errorf("pending upload already started %v", remote)
} }
err = fn(tempObj) err = fn(tempObj)
if err != nil { if err != nil {
@@ -965,11 +969,11 @@ func (b *Persistent) updatePendingUpload(remote string, fn func(item *tempUpload
} }
v2, err := json.Marshal(tempObj) v2, err := json.Marshal(tempObj)
if err != nil { if err != nil {
return fmt.Errorf("pending upload not updated: %w", err) return errors.Errorf("pending upload not updated %v", err)
} }
err = bucket.Put([]byte(tempObj.DestPath), v2) err = bucket.Put([]byte(tempObj.DestPath), v2)
if err != nil { if err != nil {
return fmt.Errorf("pending upload not updated: %w", err) return errors.Errorf("pending upload not updated %v", err)
} }
return nil return nil
@@ -1010,11 +1014,11 @@ func (b *Persistent) ReconcileTempUploads(ctx context.Context, cacheFs *Fs) erro
// cache Object Info // cache Object Info
encoded, err := json.Marshal(tempObj) encoded, err := json.Marshal(tempObj)
if err != nil { if err != nil {
return fmt.Errorf("couldn't marshal object (%v) info: %v", queuedEntry, err) return errors.Errorf("couldn't marshal object (%v) info: %v", queuedEntry, err)
} }
err = bucket.Put([]byte(destPath), encoded) err = bucket.Put([]byte(destPath), encoded)
if err != nil { if err != nil {
return fmt.Errorf("couldn't cache object (%v) info: %v", destPath, err) return errors.Errorf("couldn't cache object (%v) info: %v", destPath, err)
} }
fs.Debugf(cacheFs, "reconciled temporary upload: %v", destPath) fs.Debugf(cacheFs, "reconciled temporary upload: %v", destPath)
} }

File diff suppressed because it is too large


@@ -5,17 +5,14 @@ import (
"context" "context"
"flag" "flag"
"fmt" "fmt"
"io" "io/ioutil"
"path" "path"
"regexp" "regexp"
"strings" "strings"
"testing" "testing"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/fstest/fstests"
@@ -35,35 +32,11 @@ func testPutLarge(t *testing.T, f *Fs, kilobytes int) {
fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{ fstests.TestPutLarge(context.Background(), t, f, &fstest.Item{
ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"), ModTime: fstest.Time("2001-02-03T04:05:06.499999999Z"),
Path: fmt.Sprintf("chunker-upload-%dk", kilobytes), Path: fmt.Sprintf("chunker-upload-%dk", kilobytes),
Size: int64(kilobytes) * int64(fs.Kibi), Size: int64(kilobytes) * int64(fs.KibiByte),
}) })
}) })
} }
type settings map[string]interface{}
func deriveFs(ctx context.Context, t *testing.T, f fs.Fs, path string, opts settings) fs.Fs {
fsName := strings.Split(f.Name(), "{")[0] // strip off hash
configMap := configmap.Simple{}
for key, val := range opts {
configMap[key] = fmt.Sprintf("%v", val)
}
rpath := fspath.JoinRootPath(f.Root(), path)
remote := fmt.Sprintf("%s,%s:%s", fsName, configMap.String(), rpath)
fixFs, err := fs.NewFs(ctx, remote)
require.NoError(t, err)
return fixFs
}
var mtime1 = fstest.Time("2001-02-03T04:05:06.499999999Z")
func testPutFile(ctx context.Context, t *testing.T, f fs.Fs, name, contents, message string, check bool) fs.Object {
item := fstest.Item{Path: name, ModTime: mtime1}
obj := fstests.PutTestContents(ctx, t, f, &item, contents, check)
assert.NotNil(t, obj, message)
return obj
}
// test chunk name parser // test chunk name parser
func testChunkNameFormat(t *testing.T, f *Fs) { func testChunkNameFormat(t *testing.T, f *Fs) {
saveOpt := f.opt saveOpt := f.opt
@@ -413,7 +386,7 @@ func testSmallFileInternals(t *testing.T, f *Fs) {
if r == nil { if r == nil {
return return
} }
data, err := io.ReadAll(r) data, err := ioutil.ReadAll(r)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, contents, string(data)) assert.Equal(t, contents, string(data))
_ = r.Close() _ = r.Close()
@@ -440,7 +413,7 @@ func testSmallFileInternals(t *testing.T, f *Fs) {
checkSmallFile := func(name, contents string) { checkSmallFile := func(name, contents string) {
filename := path.Join(dir, name) filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime} item := fstest.Item{Path: filename, ModTime: modTime}
put := fstests.PutTestContents(ctx, t, f, &item, contents, false) _, put := fstests.PutTestContents(ctx, t, f, &item, contents, false)
assert.NotNil(t, put) assert.NotNil(t, put)
checkSmallFileInternals(put) checkSmallFileInternals(put)
checkContents(put, contents) checkContents(put, contents)
@@ -489,20 +462,14 @@ func testPreventCorruption(t *testing.T, f *Fs) {
newFile := func(name string) fs.Object { newFile := func(name string) fs.Object {
item := fstest.Item{Path: path.Join(dir, name), ModTime: modTime} item := fstest.Item{Path: path.Join(dir, name), ModTime: modTime}
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true) _, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj) require.NotNil(t, obj)
return obj return obj
} }
billyObj := newFile("billy") billyObj := newFile("billy")
billyTxn := billyObj.(*Object).xactID
if f.useNoRename {
require.True(t, billyTxn != "")
} else {
require.True(t, billyTxn == "")
}
billyChunkName := func(chunkNo int) string { billyChunkName := func(chunkNo int) string {
return f.makeChunkName(billyObj.Remote(), chunkNo, "", billyTxn) return f.makeChunkName(billyObj.Remote(), chunkNo, "", "")
} }
err := f.Mkdir(ctx, billyChunkName(1)) err := f.Mkdir(ctx, billyChunkName(1))
@@ -519,13 +486,11 @@ func testPreventCorruption(t *testing.T, f *Fs) {
// accessing chunks in strict mode is prohibited // accessing chunks in strict mode is prohibited
f.opt.FailHard = true f.opt.FailHard = true
billyChunk4Name := billyChunkName(4) billyChunk4Name := billyChunkName(4)
_, err = f.base.NewObject(ctx, billyChunk4Name) billyChunk4, err := f.NewObject(ctx, billyChunk4Name)
require.NoError(t, err)
_, err = f.NewObject(ctx, billyChunk4Name)
assertOverlapError(err) assertOverlapError(err)
f.opt.FailHard = false f.opt.FailHard = false
billyChunk4, err := f.NewObject(ctx, billyChunk4Name) billyChunk4, err = f.NewObject(ctx, billyChunk4Name)
assert.NoError(t, err) assert.NoError(t, err)
require.NotNil(t, billyChunk4) require.NotNil(t, billyChunk4)
@@ -538,7 +503,7 @@ func testPreventCorruption(t *testing.T, f *Fs) {
assert.NoError(t, err) assert.NoError(t, err)
var chunkContents []byte var chunkContents []byte
assert.NotPanics(t, func() { assert.NotPanics(t, func() {
chunkContents, err = io.ReadAll(r) chunkContents, err = ioutil.ReadAll(r)
_ = r.Close() _ = r.Close()
}) })
assert.NoError(t, err) assert.NoError(t, err)
@@ -554,8 +519,7 @@ func testPreventCorruption(t *testing.T, f *Fs) {
// recreate billy in case it was anyhow corrupted // recreate billy in case it was anyhow corrupted
willyObj := newFile("willy") willyObj := newFile("willy")
willyTxn := willyObj.(*Object).xactID willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", "")
willyChunkName := f.makeChunkName(willyObj.Remote(), 1, "", willyTxn)
f.opt.FailHard = false f.opt.FailHard = false
willyChunk, err := f.NewObject(ctx, willyChunkName) willyChunk, err := f.NewObject(ctx, willyChunkName)
f.opt.FailHard = true f.opt.FailHard = true
@@ -573,7 +537,7 @@ func testPreventCorruption(t *testing.T, f *Fs) {
r, err = willyChunk.Open(ctx) r, err = willyChunk.Open(ctx)
assert.NoError(t, err) assert.NoError(t, err)
assert.NotPanics(t, func() { assert.NotPanics(t, func() {
_, err = io.ReadAll(r) _, err = ioutil.ReadAll(r)
_ = r.Close() _ = r.Close()
}) })
assert.NoError(t, err) assert.NoError(t, err)
@@ -596,20 +560,17 @@ func testChunkNumberOverflow(t *testing.T, f *Fs) {
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z") modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(100) contents := random.String(100)
newFile := func(f fs.Fs, name string) (obj fs.Object, filename string, txnID string) { newFile := func(f fs.Fs, name string) (fs.Object, string) {
filename = path.Join(dir, name) filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime} item := fstest.Item{Path: filename, ModTime: modTime}
obj = fstests.PutTestContents(ctx, t, f, &item, contents, true) _, obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj) require.NotNil(t, obj)
if chunkObj, isChunkObj := obj.(*Object); isChunkObj { return obj, filename
txnID = chunkObj.xactID
}
return
} }
f.opt.FailHard = false f.opt.FailHard = false
file, fileName, fileTxn := newFile(f, "wreaker") file, fileName := newFile(f, "wreaker")
wreak, _, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", fileTxn)) wreak, _ := newFile(f.base, f.makeChunkName("wreaker", wreakNumber, "", ""))
f.opt.FailHard = false f.opt.FailHard = false
fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision()) fstest.CheckListingWithRoot(t, f, dir, nil, nil, f.Precision())
@@ -643,13 +604,22 @@ func testMetadataInput(t *testing.T, f *Fs) {
}() }()
f.opt.FailHard = false f.opt.FailHard = false
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
putFile := func(f fs.Fs, name, contents, message string, check bool) fs.Object {
item := fstest.Item{Path: name, ModTime: modTime}
_, obj := fstests.PutTestContents(ctx, t, f, &item, contents, check)
assert.NotNil(t, obj, message)
return obj
}
runSubtest := func(contents, name string) { runSubtest := func(contents, name string) {
description := fmt.Sprintf("file with %s metadata", name) description := fmt.Sprintf("file with %s metadata", name)
filename := path.Join(dir, name) filename := path.Join(dir, name)
require.True(t, len(contents) > 2 && len(contents) < minChunkForTest, description+" test data is correct") require.True(t, len(contents) > 2 && len(contents) < minChunkForTest, description+" test data is correct")
part := testPutFile(ctx, t, f.base, f.makeChunkName(filename, 0, "", ""), "oops", "", true) part := putFile(f.base, f.makeChunkName(filename, 0, "", ""), "oops", "", true)
_ = testPutFile(ctx, t, f, filename, contents, "upload "+description, false) _ = putFile(f, filename, contents, "upload "+description, false)
obj, err := f.NewObject(ctx, filename) obj, err := f.NewObject(ctx, filename)
assert.NoError(t, err, "access "+description) assert.NoError(t, err, "access "+description)
@@ -672,14 +642,14 @@ func testMetadataInput(t *testing.T, f *Fs) {
assert.NoError(t, err, "open "+description) assert.NoError(t, err, "open "+description)
assert.NotNil(t, r, "open stream of "+description) assert.NotNil(t, r, "open stream of "+description)
if err == nil && r != nil { if err == nil && r != nil {
data, err := io.ReadAll(r) data, err := ioutil.ReadAll(r)
assert.NoError(t, err, "read all of "+description) assert.NoError(t, err, "read all of "+description)
assert.Equal(t, contents, string(data), description+" contents is ok") assert.Equal(t, contents, string(data), description+" contents is ok")
_ = r.Close() _ = r.Close()
} }
} }
metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "", "") metaData, err := marshalSimpleJSON(ctx, 3, 1, "", "")
require.NoError(t, err) require.NoError(t, err)
todaysMeta := string(metaData) todaysMeta := string(metaData)
runSubtest(todaysMeta, "today") runSubtest(todaysMeta, "today")
@@ -693,212 +663,6 @@ func testMetadataInput(t *testing.T, f *Fs) {
runSubtest(futureMeta, "future") runSubtest(futureMeta, "future")
} }
// Test that chunker refuses to change on objects with future/unknown metadata
func testFutureProof(t *testing.T, f *Fs) {
if !f.useMeta {
t.Skip("this test requires metadata support")
}
saveOpt := f.opt
ctx := context.Background()
f.opt.FailHard = true
const dir = "future"
const file = dir + "/test"
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
}()
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
putPart := func(name string, part int, data, msg string) {
if part > 0 {
name = f.makeChunkName(name, part-1, "", "")
}
item := fstest.Item{Path: name, ModTime: modTime}
obj := fstests.PutTestContents(ctx, t, f.base, &item, data, true)
assert.NotNil(t, obj, msg)
}
// simulate chunked object from future
meta := `{"ver":999,"nchunks":3,"size":9,"garbage":"litter","sha1":"0707f2970043f9f7c22029482db27733deaec029"}`
putPart(file, 0, meta, "metaobject")
putPart(file, 1, "abc", "chunk1")
putPart(file, 2, "def", "chunk2")
putPart(file, 3, "ghi", "chunk3")
// List should succeed
ls, err := f.List(ctx, dir)
assert.NoError(t, err)
assert.Equal(t, 1, len(ls))
assert.Equal(t, int64(9), ls[0].Size())
// NewObject should succeed
obj, err := f.NewObject(ctx, file)
assert.NoError(t, err)
assert.Equal(t, file, obj.Remote())
assert.Equal(t, int64(9), obj.Size())
// Hash must fail
_, err = obj.Hash(ctx, hash.SHA1)
assert.Equal(t, ErrMetaUnknown, err)
// Move must fail
mobj, err := operations.Move(ctx, f, nil, file+"2", obj)
assert.Nil(t, mobj)
assert.Error(t, err)
if err != nil {
assert.Contains(t, err.Error(), "please upgrade rclone")
}
// Put must fail
oi := object.NewStaticObjectInfo(file, modTime, 3, true, nil, nil)
buf := bytes.NewBufferString("abc")
_, err = f.Put(ctx, buf, oi)
assert.Error(t, err)
// Rcat must fail
in := io.NopCloser(bytes.NewBufferString("abc"))
robj, err := operations.Rcat(ctx, f, file, in, modTime, nil)
assert.Nil(t, robj)
assert.NotNil(t, err)
if err != nil {
assert.Contains(t, err.Error(), "please upgrade rclone")
}
}
// The newer method of doing transactions without renaming should still be able to correctly process chunks that were created with renaming.
// If you attempt the inverse, however, the data chunks will be ignored, causing commands to behave incorrectly.
func testBackwardsCompatibility(t *testing.T, f *Fs) {
if !f.useMeta {
t.Skip("Can't do norename transactions without metadata")
}
const dir = "backcomp"
ctx := context.Background()
saveOpt := f.opt
saveUseNoRename := f.useNoRename
defer func() {
f.opt.FailHard = false
_ = operations.Purge(ctx, f.base, dir)
f.opt = saveOpt
f.useNoRename = saveUseNoRename
}()
f.opt.ChunkSize = fs.SizeSuffix(10)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
contents := random.String(250)
newFile := func(f fs.Fs, name string) (fs.Object, string) {
filename := path.Join(dir, name)
item := fstest.Item{Path: filename, ModTime: modTime}
obj := fstests.PutTestContents(ctx, t, f, &item, contents, true)
require.NotNil(t, obj)
return obj, filename
}
f.opt.FailHard = false
f.useNoRename = false
file, fileName := newFile(f, "renamefile")
f.opt.FailHard = false
item := fstest.NewItem(fileName, contents, modTime)
var items []fstest.Item
items = append(items, item)
f.useNoRename = true
fstest.CheckListingWithRoot(t, f, dir, items, nil, f.Precision())
_, err := f.NewObject(ctx, fileName)
assert.NoError(t, err)
f.opt.FailHard = true
_, err = f.List(ctx, dir)
assert.NoError(t, err)
f.opt.FailHard = false
_ = file.Remove(ctx)
}
func testChunkerServerSideMove(t *testing.T, f *Fs) {
if !f.useMeta {
t.Skip("Can't test norename transactions without metadata")
}
ctx := context.Background()
const dir = "servermovetest"
subRemote := fmt.Sprintf("%s:%s/%s", f.Name(), f.Root(), dir)
subFs1, err := fs.NewFs(ctx, subRemote+"/subdir1")
assert.NoError(t, err)
fs1, isChunkerFs := subFs1.(*Fs)
assert.True(t, isChunkerFs)
fs1.useNoRename = false
fs1.opt.ChunkSize = fs.SizeSuffix(3)
subFs2, err := fs.NewFs(ctx, subRemote+"/subdir2")
assert.NoError(t, err)
fs2, isChunkerFs := subFs2.(*Fs)
assert.True(t, isChunkerFs)
fs2.useNoRename = true
fs2.opt.ChunkSize = fs.SizeSuffix(3)
modTime := fstest.Time("2001-02-03T04:05:06.499999999Z")
item := fstest.Item{Path: "movefile", ModTime: modTime}
contents := "abcdef"
file := fstests.PutTestContents(ctx, t, fs1, &item, contents, true)
dstOverwritten, _ := fs2.NewObject(ctx, "movefile")
dstFile, err := operations.Move(ctx, fs2, dstOverwritten, "movefile", file)
assert.NoError(t, err)
assert.Equal(t, int64(len(contents)), dstFile.Size())
r, err := dstFile.Open(ctx)
assert.NoError(t, err)
assert.NotNil(t, r)
data, err := io.ReadAll(r)
assert.NoError(t, err)
assert.Equal(t, contents, string(data))
_ = r.Close()
_ = operations.Purge(ctx, f.base, dir)
}
// Test that md5all creates metadata even for small files
func testMD5AllSlow(t *testing.T, f *Fs) {
ctx := context.Background()
fsResult := deriveFs(ctx, t, f, "md5all", settings{
"chunk_size": "1P",
"name_format": "*.#",
"hash_type": "md5all",
"transactions": "rename",
"meta_format": "simplejson",
})
chunkFs, ok := fsResult.(*Fs)
require.True(t, ok, "fs must be a chunker remote")
baseFs := chunkFs.base
if !baseFs.Features().SlowHash {
t.Skipf("this test needs a base fs with slow hash, e.g. local")
}
assert.True(t, chunkFs.useMD5, "must use md5")
assert.True(t, chunkFs.hashAll, "must hash all files")
_ = testPutFile(ctx, t, chunkFs, "file", "-", "error", true)
obj, err := chunkFs.NewObject(ctx, "file")
require.NoError(t, err)
sum, err := obj.Hash(ctx, hash.MD5)
assert.NoError(t, err)
assert.Equal(t, "336d5ebc5436534e61d16e63ddfca327", sum)
list, err := baseFs.List(ctx, "")
require.NoError(t, err)
assert.Equal(t, 2, len(list))
_, err = baseFs.NewObject(ctx, "file")
assert.NoError(t, err, "metadata must be created")
_, err = baseFs.NewObject(ctx, "file.1")
assert.NoError(t, err, "first chunk must be created")
require.NoError(t, operations.Purge(ctx, baseFs, ""))
}
// InternalTest dispatches all internal tests // InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) { func (f *Fs) InternalTest(t *testing.T) {
t.Run("PutLarge", func(t *testing.T) { t.Run("PutLarge", func(t *testing.T) {
@@ -922,18 +686,6 @@ func (f *Fs) InternalTest(t *testing.T) {
t.Run("MetadataInput", func(t *testing.T) { t.Run("MetadataInput", func(t *testing.T) {
testMetadataInput(t, f) testMetadataInput(t, f)
}) })
t.Run("FutureProof", func(t *testing.T) {
testFutureProof(t, f)
})
t.Run("BackwardsCompatibility", func(t *testing.T) {
testBackwardsCompatibility(t, f)
})
t.Run("ChunkerServerSideMove", func(t *testing.T) {
testChunkerServerSideMove(t, f)
})
t.Run("MD5AllSlow", func(t *testing.T) {
testMD5AllSlow(t, f)
})
} }
var _ fstests.InternalTester = (*Fs)(nil) var _ fstests.InternalTester = (*Fs)(nil)


@@ -15,10 +15,10 @@ import (
// Command line flags // Command line flags
var ( var (
// Invalid characters are not supported by some remotes, e.g. Mailru. // Invalid characters are not supported by some remotes, eg. Mailru.
// We enable testing with invalid characters when -remote is not set, so // We enable testing with invalid characters when -remote is not set, so
// chunker overlays a local directory, but invalid characters are disabled // chunker overlays a local directory, but invalid characters are disabled
// by default when -remote is set, e.g. when test_all runs backend tests. // by default when -remote is set, eg. when test_all runs backend tests.
// You can still test with invalid characters using the below flag. // You can still test with invalid characters using the below flag.
UseBadChars = flag.Bool("bad-chars", false, "Set to test bad characters in file names when -remote is set") UseBadChars = flag.Bool("bad-chars", false, "Set to test bad characters in file names when -remote is set")
) )
@@ -35,7 +35,6 @@ func TestIntegration(t *testing.T) {
"MimeType", "MimeType",
"GetTier", "GetTier",
"SetTier", "SetTier",
"Metadata",
}, },
UnimplementableFsMethods: []string{ UnimplementableFsMethods: []string{
"PublicLink", "PublicLink",
@@ -54,7 +53,6 @@ func TestIntegration(t *testing.T) {
{Name: name, Key: "type", Value: "chunker"}, {Name: name, Key: "type", Value: "chunker"},
{Name: name, Key: "remote", Value: tempDir}, {Name: name, Key: "remote", Value: tempDir},
} }
opt.QuickTestOK = true
} }
fstests.Run(t, &opt) fstests.Run(t, &opt)
} }


@@ -1,992 +0,0 @@
// Package combine implements a backend to combine multiple remotes in a directory tree
package combine
/*
Have API to add/remove branches in the combine
*/
import (
"context"
"errors"
"fmt"
"io"
"path"
"strings"
"sync"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/walk"
"golang.org/x/sync/errgroup"
)
// Register with Fs
func init() {
fsi := &fs.RegInfo{
Name: "combine",
Description: "Combine several remotes into one",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
Help: `Any metadata supported by the underlying remote is read and written.`,
},
Options: []fs.Option{{
Name: "upstreams",
Help: `Upstreams for combining
These should be in the form
dir=remote:path dir2=remote2:path
The part before the = specifies the root directory and the part after is the
remote to put there.
Embedded spaces can be added using quotes
"dir=remote:path with space" "dir2=remote2:path with space"
`,
Required: true,
Default: fs.SpaceSepList(nil),
}},
}
fs.Register(fsi)
}
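As an illustration only (the remote names here are hypothetical): a combine remote configured with upstreams = docs=drive:docs media=s3:media-bucket would present drive:docs under docs/ and s3:media-bucket under media/ as a single directory tree.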
// Options defines the configuration for this backend
type Options struct {
Upstreams fs.SpaceSepList `config:"upstreams"`
}
// Fs represents a combine of upstreams
type Fs struct {
name string // name of this remote
features *fs.Features // optional features
opt Options // options for this Fs
root string // the path we are working on
hashSet hash.Set // common hashes
when time.Time // directory times
upstreams map[string]*upstream // map of upstreams
}
// adjustment stores the info to add a prefix to a path or chop characters off
type adjustment struct {
root string
rootSlash string
mountpoint string
mountpointSlash string
}
// newAdjustment makes a new path adjustment adjusting between mountpoint and root
//
// mountpoint is the point the upstream is mounted and root is the combine root
func newAdjustment(root, mountpoint string) (a adjustment) {
return adjustment{
root: root,
rootSlash: root + "/",
mountpoint: mountpoint,
mountpointSlash: mountpoint + "/",
}
}
var errNotUnderRoot = errors.New("file not under root")
// do makes the adjustment on s, mapping an upstream path into a combine path
func (a *adjustment) do(s string) (string, error) {
absPath := join(a.mountpoint, s)
if a.root == "" {
return absPath, nil
}
if absPath == a.root {
return "", nil
}
if !strings.HasPrefix(absPath, a.rootSlash) {
return "", errNotUnderRoot
}
return absPath[len(a.rootSlash):], nil
}
// undo makes the adjustment on s, mapping a combine path into an upstream path
func (a *adjustment) undo(s string) (string, error) {
absPath := join(a.root, s)
if absPath == a.mountpoint {
return "", nil
}
if !strings.HasPrefix(absPath, a.mountpointSlash) {
return "", errNotUnderRoot
}
return absPath[len(a.mountpointSlash):], nil
}
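As a rough worked example of the two mappings above (paths are hypothetical and mirror the unit tests later in this diff):
a := newAdjustment("mountpoint/path", "mountpoint")
combinePath, _ := a.do("path/to/file.txt") // "to/file.txt" (upstream path -> combine path)
upstreamPath, _ := a.undo("to/file.txt")   // "path/to/file.txt" (combine path -> upstream path)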
// upstream represents an upstream Fs
type upstream struct {
f fs.Fs
parent *Fs
dir string // directory the upstream is mounted
pathAdjustment adjustment // how to fiddle with the path
}
// Create an upstream from the directory it is mounted on and the remote
func (f *Fs) newUpstream(ctx context.Context, dir, remote string) (*upstream, error) {
uFs, err := cache.Get(ctx, remote)
if err == fs.ErrorIsFile {
return nil, fmt.Errorf("can't combine files yet, only directories %q: %w", remote, err)
}
if err != nil {
return nil, fmt.Errorf("failed to create upstream %q: %w", remote, err)
}
u := &upstream{
f: uFs,
parent: f,
dir: dir,
pathAdjustment: newAdjustment(f.root, dir),
}
cache.PinUntilFinalized(u.f, u)
return u, nil
}
// NewFs constructs an Fs from the path.
//
// The returned Fs is the actual Fs, referenced by remote in the config
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (outFs fs.Fs, err error) {
// defer log.Trace(nil, "name=%q, root=%q, m=%v", name, root, m)("f=%+v, err=%v", &outFs, &err)
// Parse config into Options struct
opt := new(Options)
err = configstruct.Set(m, opt)
if err != nil {
return nil, err
}
// Backward compatible to old config
if len(opt.Upstreams) == 0 {
return nil, errors.New("combine can't point to an empty upstream - check the value of the upstreams setting")
}
for _, u := range opt.Upstreams {
if strings.HasPrefix(u, name+":") {
return nil, errors.New("can't point combine remote at itself - check the value of the upstreams setting")
}
}
isDir := false
for strings.HasSuffix(root, "/") {
root = root[:len(root)-1]
isDir = true
}
f := &Fs{
name: name,
root: root,
opt: *opt,
upstreams: make(map[string]*upstream, len(opt.Upstreams)),
when: time.Now(),
}
g, gCtx := errgroup.WithContext(ctx)
var mu sync.Mutex
for _, upstream := range opt.Upstreams {
upstream := upstream
g.Go(func() (err error) {
equal := strings.IndexRune(upstream, '=')
if equal < 0 {
return fmt.Errorf("no \"=\" in upstream definition %q", upstream)
}
dir, remote := upstream[:equal], upstream[equal+1:]
if dir == "" {
return fmt.Errorf("empty dir in upstream definition %q", upstream)
}
if remote == "" {
return fmt.Errorf("empty remote in upstream definition %q", upstream)
}
if strings.ContainsRune(dir, '/') {
return fmt.Errorf("dirs can't contain / (yet): %q", dir)
}
u, err := f.newUpstream(gCtx, dir, remote)
if err != nil {
return err
}
mu.Lock()
if _, found := f.upstreams[dir]; found {
err = fmt.Errorf("duplicate directory name %q", dir)
} else {
f.upstreams[dir] = u
}
mu.Unlock()
return err
})
}
err = g.Wait()
if err != nil {
return nil, err
}
// check features
var features = (&fs.Features{
CaseInsensitive: true,
DuplicateFiles: false,
ReadMimeType: true,
WriteMimeType: true,
CanHaveEmptyDirectories: true,
BucketBased: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
}).Fill(ctx, f)
canMove := true
for _, u := range f.upstreams {
features = features.Mask(ctx, u.f) // Mask all upstream fs
if !operations.CanServerSideMove(u.f) {
canMove = false
}
}
// We can move if all remotes support Move or Copy
if canMove {
features.Move = f.Move
}
// Enable ListR when upstreams either support ListR or are local
// But not when all upstreams are local
if features.ListR == nil {
for _, u := range f.upstreams {
if u.f.Features().ListR != nil {
features.ListR = f.ListR
} else if !u.f.Features().IsLocal {
features.ListR = nil
break
}
}
}
// Enable Purge when any upstreams support it
if features.Purge == nil {
for _, u := range f.upstreams {
if u.f.Features().Purge != nil {
features.Purge = f.Purge
break
}
}
}
// Enable Shutdown when any upstreams support it
if features.Shutdown == nil {
for _, u := range f.upstreams {
if u.f.Features().Shutdown != nil {
features.Shutdown = f.Shutdown
break
}
}
}
// Enable DirCacheFlush when any upstreams support it
if features.DirCacheFlush == nil {
for _, u := range f.upstreams {
if u.f.Features().DirCacheFlush != nil {
features.DirCacheFlush = f.DirCacheFlush
break
}
}
}
// Enable ChangeNotify when any upstreams support it
if features.ChangeNotify == nil {
for _, u := range f.upstreams {
if u.f.Features().ChangeNotify != nil {
features.ChangeNotify = f.ChangeNotify
break
}
}
}
f.features = features
// Get common intersection of hashes
var hashSet hash.Set
var first = true
for _, u := range f.upstreams {
if first {
hashSet = u.f.Hashes()
first = false
} else {
hashSet = hashSet.Overlap(u.f.Hashes())
}
}
f.hashSet = hashSet
// Check to see if the root is actually a file
if f.root != "" && !isDir {
_, err := f.NewObject(ctx, "")
if err != nil {
if err == fs.ErrorObjectNotFound || err == fs.ErrorNotAFile || err == fs.ErrorIsDir {
// File doesn't exist or is a directory so return old f
return f, nil
}
return nil, err
}
// Check to see if the root path is actually an existing file
f.root = path.Dir(f.root)
if f.root == "." {
f.root = ""
}
// Adjust path adjustment to remove leaf
for _, u := range f.upstreams {
u.pathAdjustment = newAdjustment(f.root, u.dir)
}
return f, fs.ErrorIsFile
}
return f, nil
}
// Run a function over all the upstreams in parallel
func (f *Fs) multithread(ctx context.Context, fn func(context.Context, *upstream) error) error {
g, gCtx := errgroup.WithContext(ctx)
for _, u := range f.upstreams {
u := u
g.Go(func() (err error) {
return fn(gCtx, u)
})
}
return g.Wait()
}
// join joins the path elements like path.Join, but returns an empty string instead of "." and strips any leading slash
func join(elem ...string) string {
result := path.Join(elem...)
if result == "." {
return ""
}
if len(result) > 0 && result[0] == '/' {
result = result[1:]
}
return result
}
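A quick sketch of the wrapper's behaviour, with purely illustrative inputs:
join("a", "..")  // ""      (path.Join would return ".")
join("/", "dir") // "dir"   (leading slash stripped)
join("a", "b/c") // "a/b/c" (same as path.Join)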
// find the upstream for the remote passed in, returning the upstream and the adjusted path
func (f *Fs) findUpstream(remote string) (u *upstream, uRemote string, err error) {
// defer log.Trace(remote, "")("f=%v, uRemote=%q, err=%v", &u, &uRemote, &err)
for _, u := range f.upstreams {
uRemote, err = u.pathAdjustment.undo(remote)
if err == nil {
return u, uRemote, nil
}
}
return nil, "", fmt.Errorf("combine for remote %q: %w", remote, fs.ErrorDirNotFound)
}
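Roughly, assuming an Fs with an empty root and upstreams mounted at dir1 and dir2 (hypothetical values):
u, uRemote, _ := f.findUpstream("dir2/sub/file.txt") // u is the dir2 upstream, uRemote == "sub/file.txt"
_, _, err := f.findUpstream("unknown/file.txt")      // err wraps fs.ErrorDirNotFound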
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String converts this Fs to a string
func (f *Fs) String() string {
return fmt.Sprintf("combine root '%s'", f.root)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Rmdir removes the root directory of the Fs object
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
// The root always exists
if f.root == "" && dir == "" {
return nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
return u.f.Rmdir(ctx, uRemote)
}
// Hashes returns the hash types supported by every upstream
func (f *Fs) Hashes() hash.Set {
return f.hashSet
}
// Mkdir makes the root directory of the Fs object
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
// The root always exists
if f.root == "" && dir == "" {
return nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
return u.f.Mkdir(ctx, uRemote)
}
// purge the upstream or fall back to a slow way
func (u *upstream) purge(ctx context.Context, dir string) (err error) {
if do := u.f.Features().Purge; do != nil {
err = do(ctx, dir)
} else {
err = operations.Purge(ctx, u.f, dir)
}
return err
}
// Purge all files in the directory
//
// Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List()
//
// Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) error {
if f.root == "" && dir == "" {
return f.multithread(ctx, func(ctx context.Context, u *upstream) error {
return u.purge(ctx, "")
})
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
return u.purge(ctx, uRemote)
}
// Copy src to this remote using server-side copy operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't copy - not same remote type")
return nil, fs.ErrorCantCopy
}
dstU, dstRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
do := dstU.f.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
}
o, err := do(ctx, srcObj.Object, dstRemote)
if err != nil {
return nil, err
}
return dstU.newObject(o), nil
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
dstU, dstRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
do := dstU.f.Features().Move
useCopy := false
if do == nil {
do = dstU.f.Features().Copy
if do == nil {
return nil, fs.ErrorCantMove
}
useCopy = true
}
o, err := do(ctx, srcObj.Object, dstRemote)
if err != nil {
return nil, err
}
// If did Copy then remove the source object
if useCopy {
err = srcObj.Remove(ctx)
if err != nil {
return nil, err
}
}
return dstU.newObject(o), nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
// defer log.Trace(f, "src=%v, srcRemote=%q, dstRemote=%q", src, srcRemote, dstRemote)("err=%v", &err)
srcFs, ok := src.(*Fs)
if !ok {
fs.Debugf(src, "Can't move directory - not same remote type")
return fs.ErrorCantDirMove
}
dstU, dstURemote, err := f.findUpstream(dstRemote)
if err != nil {
return err
}
srcU, srcURemote, err := srcFs.findUpstream(srcRemote)
if err != nil {
return err
}
do := dstU.f.Features().DirMove
if do == nil {
return fs.ErrorCantDirMove
}
fs.Logf(dstU.f, "srcU.f=%v, srcURemote=%q, dstURemote=%q", srcU.f, srcURemote, dstURemote)
return do(ctx, srcU.f, srcURemote, dstURemote)
}
// ChangeNotify calls the passed function with a path
// that has had changes. If the implementation
// uses polling, it should adhere to the given interval.
// At least one value will be written to the channel,
// specifying the initial value and updated values might
// follow. A 0 Duration should pause the polling.
// The ChangeNotify implementation must empty the channel
// regularly. When the channel gets closed, the implementation
// should stop polling and release resources.
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), ch <-chan time.Duration) {
var uChans []chan time.Duration
for _, u := range f.upstreams {
u := u
if do := u.f.Features().ChangeNotify; do != nil {
ch := make(chan time.Duration)
uChans = append(uChans, ch)
wrappedNotifyFunc := func(path string, entryType fs.EntryType) {
newPath, err := u.pathAdjustment.do(path)
if err != nil {
fs.Logf(f, "ChangeNotify: unable to process %q: %s", path, err)
return
}
fs.Debugf(f, "ChangeNotify: path %q entryType %d", newPath, entryType)
notifyFunc(newPath, entryType)
}
do(ctx, wrappedNotifyFunc, ch)
}
}
go func() {
for i := range ch {
for _, c := range uChans {
c <- i
}
}
for _, c := range uChans {
close(c)
}
}()
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
ctx := context.Background()
_ = f.multithread(ctx, func(ctx context.Context, u *upstream) error {
if do := u.f.Features().DirCacheFlush; do != nil {
do()
}
return nil
})
}
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, stream bool, options ...fs.OpenOption) (fs.Object, error) {
srcPath := src.Remote()
u, uRemote, err := f.findUpstream(srcPath)
if err != nil {
return nil, err
}
uSrc := fs.NewOverrideRemote(src, uRemote)
var o fs.Object
if stream {
o, err = u.f.Features().PutStream(ctx, in, uSrc, options...)
} else {
o, err = u.f.Put(ctx, in, uSrc, options...)
}
if err != nil {
return nil, err
}
return u.newObject(o), nil
}
// Put in to the remote path with the modTime given of the given size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, false, options...)
default:
return nil, err
}
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
//
// May create the object even if it returns an error - if so
// will return the object and the error, otherwise will return
// nil and the error
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o, err := f.NewObject(ctx, src.Remote())
switch err {
case nil:
return o, o.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound:
return f.put(ctx, in, src, true, options...)
default:
return nil, err
}
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
usage := &fs.Usage{
Total: new(int64),
Used: new(int64),
Trashed: new(int64),
Other: new(int64),
Free: new(int64),
Objects: new(int64),
}
for _, u := range f.upstreams {
doAbout := u.f.Features().About
if doAbout == nil {
continue
}
usg, err := doAbout(ctx)
if errors.Is(err, fs.ErrorDirNotFound) {
continue
}
if err != nil {
return nil, err
}
if usg.Total != nil && usage.Total != nil {
*usage.Total += *usg.Total
} else {
usage.Total = nil
}
if usg.Used != nil && usage.Used != nil {
*usage.Used += *usg.Used
} else {
usage.Used = nil
}
if usg.Trashed != nil && usage.Trashed != nil {
*usage.Trashed += *usg.Trashed
} else {
usage.Trashed = nil
}
if usg.Other != nil && usage.Other != nil {
*usage.Other += *usg.Other
} else {
usage.Other = nil
}
if usg.Free != nil && usage.Free != nil {
*usage.Free += *usg.Free
} else {
usage.Free = nil
}
if usg.Objects != nil && usage.Objects != nil {
*usage.Objects += *usg.Objects
} else {
usage.Objects = nil
}
}
return usage, nil
}
// Wraps entries for this upstream
func (u *upstream) wrapEntries(ctx context.Context, entries fs.DirEntries) (fs.DirEntries, error) {
for i, entry := range entries {
switch x := entry.(type) {
case fs.Object:
entries[i] = u.newObject(x)
case fs.Directory:
newDir := fs.NewDirCopy(ctx, x)
newPath, err := u.pathAdjustment.do(newDir.Remote())
if err != nil {
return nil, err
}
newDir.SetRemote(newPath)
entries[i] = newDir
default:
return nil, fmt.Errorf("unknown entry type %T", entry)
}
}
return entries, nil
}
// List the objects and directories in dir into entries. The
// entries can be returned in any order but should be for a
// complete directory.
//
// dir should be "" to list the root, and should not have
// trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
// defer log.Trace(f, "dir=%q", dir)("entries = %v, err=%v", &entries, &err)
if f.root == "" && dir == "" {
entries = make(fs.DirEntries, 0, len(f.upstreams))
for combineDir := range f.upstreams {
d := fs.NewDir(combineDir, f.when)
entries = append(entries, d)
}
return entries, nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return nil, err
}
entries, err = u.f.List(ctx, uRemote)
if err != nil {
return nil, err
}
return u.wrapEntries(ctx, entries)
}
// ListR lists the objects and directories of the Fs starting
// from dir recursively into out.
//
// dir should be "" to start from the root, and should not
// have trailing slashes.
//
// This should return ErrDirNotFound if the directory isn't
// found.
//
// It should call callback for each tranche of entries read.
// These need not be returned in any particular order. If
// callback returns an error then the listing will stop
// immediately.
//
// Don't implement this unless you have a more efficient way
// of listing recursively than doing a directory traversal.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
// defer log.Trace(f, "dir=%q, callback=%v", dir, callback)("err=%v", &err)
if f.root == "" && dir == "" {
rootEntries, err := f.List(ctx, "")
if err != nil {
return err
}
err = callback(rootEntries)
if err != nil {
return err
}
var mu sync.Mutex
syncCallback := func(entries fs.DirEntries) error {
mu.Lock()
defer mu.Unlock()
return callback(entries)
}
err = f.multithread(ctx, func(ctx context.Context, u *upstream) error {
return f.ListR(ctx, u.dir, syncCallback)
})
if err != nil {
return err
}
return nil
}
u, uRemote, err := f.findUpstream(dir)
if err != nil {
return err
}
wrapCallback := func(entries fs.DirEntries) error {
entries, err := u.wrapEntries(ctx, entries)
if err != nil {
return err
}
return callback(entries)
}
if do := u.f.Features().ListR; do != nil {
err = do(ctx, uRemote, wrapCallback)
} else {
err = walk.ListR(ctx, u.f, uRemote, true, -1, walk.ListAll, wrapCallback)
}
if err == fs.ErrorDirNotFound {
err = nil
}
return err
}
// NewObject creates a new remote combine file object
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
u, uRemote, err := f.findUpstream(remote)
if err != nil {
return nil, err
}
if uRemote == "" || strings.HasSuffix(uRemote, "/") {
return nil, fs.ErrorIsDir
}
o, err := u.f.NewObject(ctx, uRemote)
if err != nil {
return nil, err
}
return u.newObject(o), nil
}
// Precision is the greatest Precision of all upstreams
func (f *Fs) Precision() time.Duration {
var greatestPrecision time.Duration
for _, u := range f.upstreams {
uPrecision := u.f.Precision()
if uPrecision > greatestPrecision {
greatestPrecision = uPrecision
}
}
return greatestPrecision
}
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
return f.multithread(ctx, func(ctx context.Context, u *upstream) error {
if do := u.f.Features().Shutdown; do != nil {
return do(ctx)
}
return nil
})
}
// Object describes a wrapped Object
//
// This is a wrapped Object which knows its path prefix
type Object struct {
fs.Object
u *upstream
}
func (u *upstream) newObject(o fs.Object) *Object {
return &Object{
Object: o,
u: u,
}
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info {
return o.u.parent
}
// String returns the remote path
func (o *Object) String() string {
return o.Remote()
}
// Remote returns the remote path
func (o *Object) Remote() string {
newPath, err := o.u.pathAdjustment.do(o.Object.String())
if err != nil {
fs.Errorf(o, "Bad object: %v", err)
return err.Error()
}
return newPath
}
// MimeType returns the content type of the Object if known
func (o *Object) MimeType(ctx context.Context) (mimeType string) {
if do, ok := o.Object.(fs.MimeTyper); ok {
mimeType = do.MimeType(ctx)
}
return mimeType
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *Object) UnWrap() fs.Object {
return o.Object
}
// GetTier returns storage tier or class of the Object
func (o *Object) GetTier() string {
do, ok := o.Object.(fs.GetTierer)
if !ok {
return ""
}
return do.GetTier()
}
// ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string {
do, ok := o.Object.(fs.IDer)
if !ok {
return ""
}
return do.ID()
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
do, ok := o.Object.(fs.Metadataer)
if !ok {
return nil, nil
}
return do.Metadata(ctx)
}
// SetTier performs changing storage tier of the Object if
// multiple storage classes supported
func (o *Object) SetTier(tier string) error {
do, ok := o.Object.(fs.SetTierer)
if !ok {
return errors.New("underlying remote does not support SetTier")
}
return do.SetTier(tier)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.FullObject = (*Object)(nil)
)


@@ -1,94 +0,0 @@
package combine
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
)
func TestAdjustmentDo(t *testing.T) {
for _, test := range []struct {
root string
mountpoint string
in string
want string
wantErr error
}{
{
root: "",
mountpoint: "mountpoint",
in: "path/to/file.txt",
want: "mountpoint/path/to/file.txt",
},
{
root: "mountpoint",
mountpoint: "mountpoint",
in: "path/to/file.txt",
want: "path/to/file.txt",
},
{
root: "mountpoint/path",
mountpoint: "mountpoint",
in: "path/to/file.txt",
want: "to/file.txt",
},
{
root: "mountpoint/path",
mountpoint: "mountpoint",
in: "wrongpath/to/file.txt",
want: "",
wantErr: errNotUnderRoot,
},
} {
what := fmt.Sprintf("%+v", test)
a := newAdjustment(test.root, test.mountpoint)
got, gotErr := a.do(test.in)
assert.Equal(t, test.wantErr, gotErr)
assert.Equal(t, test.want, got, what)
}
}
func TestAdjustmentUndo(t *testing.T) {
for _, test := range []struct {
root string
mountpoint string
in string
want string
wantErr error
}{
{
root: "",
mountpoint: "mountpoint",
in: "mountpoint/path/to/file.txt",
want: "path/to/file.txt",
},
{
root: "mountpoint",
mountpoint: "mountpoint",
in: "path/to/file.txt",
want: "path/to/file.txt",
},
{
root: "mountpoint/path",
mountpoint: "mountpoint",
in: "to/file.txt",
want: "path/to/file.txt",
},
{
root: "wrongmountpoint/path",
mountpoint: "mountpoint",
in: "to/file.txt",
want: "",
wantErr: errNotUnderRoot,
},
} {
what := fmt.Sprintf("%+v", test)
a := newAdjustment(test.root, test.mountpoint)
got, gotErr := a.undo(test.in)
assert.Equal(t, test.wantErr, gotErr)
assert.Equal(t, test.want, got, what)
}
}


@@ -1,81 +0,0 @@
// Test Combine filesystem interface
package combine_test
import (
"testing"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/memory"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" {
t.Skip("Skipping as -remote not set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: *fstest.RemoteName,
UnimplementableFsMethods: []string{"OpenWriterAt", "DuplicateFiles"},
UnimplementableObjectMethods: []string{"MimeType"},
})
}
func TestLocal(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs := MakeTestDirs(t, 3)
upstreams := "dir1=" + dirs[0] + " dir2=" + dirs[1] + " dir3=" + dirs[2]
name := "TestCombineLocal"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":dir1",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
QuickTestOK: true,
})
}
func TestMemory(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
upstreams := "dir1=:memory:dir1 dir2=:memory:dir2 dir3=:memory:dir3"
name := "TestCombineMemory"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":dir1",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
QuickTestOK: true,
})
}
func TestMixed(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
dirs := MakeTestDirs(t, 2)
upstreams := "dir1=" + dirs[0] + " dir2=" + dirs[1] + " dir3=:memory:dir3"
name := "TestCombineMixed"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":dir1",
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "combine"},
{Name: name, Key: "upstreams", Value: upstreams},
},
})
}
// MakeTestDirs makes temporary directories for testing
func MakeTestDirs(t *testing.T, n int) (dirs []string) {
for i := 1; i <= n; i++ {
dir := t.TempDir()
dirs = append(dirs, dir)
}
return dirs
}


@@ -1 +0,0 @@
test

File diff suppressed because it is too large


@@ -1,66 +0,0 @@
// Test Compress filesystem interface
package compress
import (
"os"
"path/filepath"
"testing"
_ "github.com/rclone/rclone/backend/drive"
_ "github.com/rclone/rclone/backend/local"
_ "github.com/rclone/rclone/backend/s3"
_ "github.com/rclone/rclone/backend/swift"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
opt := fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*Object)(nil),
UnimplementableFsMethods: []string{
"OpenWriterAt",
"MergeDirs",
"DirCacheFlush",
"PutUnchecked",
"PutStream",
"UserInfo",
"Disconnect",
},
TiersToTest: []string{"STANDARD", "STANDARD_IA"},
UnimplementableObjectMethods: []string{}}
fstests.Run(t, &opt)
}
// TestRemoteGzip tests GZIP compression
func TestRemoteGzip(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-compress-test-gzip")
name := "TestCompressGzip"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*Object)(nil),
UnimplementableFsMethods: []string{
"OpenWriterAt",
"MergeDirs",
"DirCacheFlush",
"PutUnchecked",
"PutStream",
"UserInfo",
"Disconnect",
},
UnimplementableObjectMethods: []string{
"GetTier",
"SetTier",
},
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "compress"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "compression_mode", Value: "gzip"},
},
QuickTestOK: true,
})
}


@@ -7,21 +7,17 @@ import (
gocipher "crypto/cipher" gocipher "crypto/cipher"
"crypto/rand" "crypto/rand"
"encoding/base32" "encoding/base32"
"encoding/base64"
"errors"
"fmt" "fmt"
"io" "io"
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
"time"
"unicode/utf8" "unicode/utf8"
"github.com/Max-Sum/base32768" "github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt/pkcs7" "github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting" "github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/lib/version"
"github.com/rfjakob/eme" "github.com/rfjakob/eme"
"golang.org/x/crypto/nacl/secretbox" "golang.org/x/crypto/nacl/secretbox"
"golang.org/x/crypto/scrypt" "golang.org/x/crypto/scrypt"
@@ -96,12 +92,12 @@ func NewNameEncryptionMode(s string) (mode NameEncryptionMode, err error) {
case "obfuscate": case "obfuscate":
mode = NameEncryptionObfuscated mode = NameEncryptionObfuscated
default: default:
err = fmt.Errorf("unknown file name encryption mode %q", s) err = errors.Errorf("Unknown file name encryption mode %q", s)
} }
return mode, err return mode, err
} }
// String turns mode into a human-readable string // String turns mode into a human readable string
func (mode NameEncryptionMode) String() (out string) { func (mode NameEncryptionMode) String() (out string) {
switch mode { switch mode {
case NameEncryptionOff: case NameEncryptionOff:
@@ -116,57 +112,6 @@ func (mode NameEncryptionMode) String() (out string) {
return out return out
} }
// fileNameEncoding are the encoding methods dealing with encrypted file names
type fileNameEncoding interface {
EncodeToString(src []byte) string
DecodeString(s string) ([]byte, error)
}
// caseInsensitiveBase32Encoding defines a file name encoding
// using a modified version of standard base32 as described in
// RFC4648
//
// The standard encoding is modified in two ways
// - it becomes lower case (no-one likes upper case filenames!)
// - we strip the padding character `=`
type caseInsensitiveBase32Encoding struct{}
// EncodeToString encodes a string using the modified version of
// base32 encoding.
func (caseInsensitiveBase32Encoding) EncodeToString(src []byte) string {
encoded := base32.HexEncoding.EncodeToString(src)
encoded = strings.TrimRight(encoded, "=")
return strings.ToLower(encoded)
}
// DecodeString decodes a string as encoded by EncodeToString
func (caseInsensitiveBase32Encoding) DecodeString(s string) ([]byte, error) {
if strings.HasSuffix(s, "=") {
return nil, ErrorBadBase32Encoding
}
// First figure out how many padding characters to add
roundUpToMultipleOf8 := (len(s) + 7) &^ 7
equals := roundUpToMultipleOf8 - len(s)
s = strings.ToUpper(s) + "========"[:equals]
return base32.HexEncoding.DecodeString(s)
}
// NewNameEncoding creates a NameEncoding from a string
func NewNameEncoding(s string) (enc fileNameEncoding, err error) {
s = strings.ToLower(s)
switch s {
case "base32":
enc = caseInsensitiveBase32Encoding{}
case "base64":
enc = base64.RawURLEncoding
case "base32768":
enc = base32768.SafeEncoding
default:
err = fmt.Errorf("unknown file name encoding mode %q", s)
}
return enc, err
}
// Cipher defines an encoding and decoding cipher for the crypt backend // Cipher defines an encoding and decoding cipher for the crypt backend
type Cipher struct { type Cipher struct {
dataKey [32]byte // Key for secretbox dataKey [32]byte // Key for secretbox
@@ -174,17 +119,15 @@ type Cipher struct {
nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto nameTweak [nameCipherBlockSize]byte // used to tweak the name crypto
block gocipher.Block block gocipher.Block
mode NameEncryptionMode mode NameEncryptionMode
fileNameEnc fileNameEncoding
buffers sync.Pool // encrypt/decrypt buffers buffers sync.Pool // encrypt/decrypt buffers
cryptoRand io.Reader // read crypto random numbers from here cryptoRand io.Reader // read crypto random numbers from here
dirNameEncrypt bool dirNameEncrypt bool
} }
// newCipher initialises the cipher. If salt is "" then it uses a built in salt val // newCipher initialises the cipher. If salt is "" then it uses a built in salt val
func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bool, enc fileNameEncoding) (*Cipher, error) { func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bool) (*Cipher, error) {
c := &Cipher{ c := &Cipher{
mode: mode, mode: mode,
fileNameEnc: enc,
cryptoRand: rand.Reader, cryptoRand: rand.Reader,
dirNameEncrypt: dirNameEncrypt, dirNameEncrypt: dirNameEncrypt,
} }
@@ -204,7 +147,7 @@ func newCipher(mode NameEncryptionMode, password, salt string, dirNameEncrypt bo
// If salt is "" we use a fixed salt just to make attackers lives // If salt is "" we use a fixed salt just to make attackers lives
// slightly harder than using no salt. // slightly harder than using no salt.
// //
// Note that empty password makes all 0x00 keys which is used in the // Note that empty passsword makes all 0x00 keys which is used in the
// tests. // tests.
func (c *Cipher) Key(password, salt string) (err error) { func (c *Cipher) Key(password, salt string) (err error) {
const keySize = len(c.dataKey) + len(c.nameKey) + len(c.nameTweak) const keySize = len(c.dataKey) + len(c.nameKey) + len(c.nameTweak)
@@ -242,9 +185,33 @@ func (c *Cipher) putBlock(buf []byte) {
c.buffers.Put(buf) c.buffers.Put(buf)
} }
// encodeFileName encodes a filename using a modified version of
// standard base32 as described in RFC4648
//
// The standard encoding is modified in two ways
// * it becomes lower case (no-one likes upper case filenames!)
// * we strip the padding character `=`
func encodeFileName(in []byte) string {
encoded := base32.HexEncoding.EncodeToString(in)
encoded = strings.TrimRight(encoded, "=")
return strings.ToLower(encoded)
}
// decodeFileName decodes a filename as encoded by encodeFileName
func decodeFileName(in string) ([]byte, error) {
if strings.HasSuffix(in, "=") {
return nil, ErrorBadBase32Encoding
}
// First figure out how many padding characters to add
roundUpToMultipleOf8 := (len(in) + 7) &^ 7
equals := roundUpToMultipleOf8 - len(in)
in = strings.ToUpper(in) + "========"[:equals]
return base32.HexEncoding.DecodeString(in)
}
// encryptSegment encrypts a path segment // encryptSegment encrypts a path segment
// //
// This uses EME with AES. // This uses EME with AES
// //
// EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the // EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the
// 2003 paper "A Parallelizable Enciphering Mode" by Halevi and // 2003 paper "A Parallelizable Enciphering Mode" by Halevi and
@@ -254,15 +221,15 @@ func (c *Cipher) putBlock(buf []byte) {
// same filename must encrypt to the same thing. // same filename must encrypt to the same thing.
// //
// This means that // This means that
// - filenames with the same name will encrypt the same // * filenames with the same name will encrypt the same
// - filenames which start the same won't have a common prefix // * filenames which start the same won't have a common prefix
func (c *Cipher) encryptSegment(plaintext string) string { func (c *Cipher) encryptSegment(plaintext string) string {
if plaintext == "" { if plaintext == "" {
return "" return ""
} }
paddedPlaintext := pkcs7.Pad(nameCipherBlockSize, []byte(plaintext)) paddedPlaintext := pkcs7.Pad(nameCipherBlockSize, []byte(plaintext))
ciphertext := eme.Transform(c.block, c.nameTweak[:], paddedPlaintext, eme.DirectionEncrypt) ciphertext := eme.Transform(c.block, c.nameTweak[:], paddedPlaintext, eme.DirectionEncrypt)
return c.fileNameEnc.EncodeToString(ciphertext) return encodeFileName(ciphertext)
} }
// decryptSegment decrypts a path segment // decryptSegment decrypts a path segment
@@ -270,7 +237,7 @@ func (c *Cipher) decryptSegment(ciphertext string) (string, error) {
if ciphertext == "" { if ciphertext == "" {
return "", nil return "", nil
} }
rawCiphertext, err := c.fileNameEnc.DecodeString(ciphertext) rawCiphertext, err := decodeFileName(ciphertext)
if err != nil { if err != nil {
return "", err return "", err
} }
@@ -475,32 +442,11 @@ func (c *Cipher) encryptFileName(in string) string {
if !c.dirNameEncrypt && i != (len(segments)-1) { if !c.dirNameEncrypt && i != (len(segments)-1) {
continue continue
} }
// Strip version string so that only the non-versioned part
// of the file name gets encrypted/obfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard { if c.mode == NameEncryptionStandard {
segments[i] = c.encryptSegment(segments[i]) segments[i] = c.encryptSegment(segments[i])
} else { } else {
segments[i] = c.obfuscateSegment(segments[i]) segments[i] = c.obfuscateSegment(segments[i])
} }
// Add back a version to the encrypted/obfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
} }
return strings.Join(segments, "/") return strings.Join(segments, "/")
} }
@@ -531,21 +477,6 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if !c.dirNameEncrypt && i != (len(segments)-1) { if !c.dirNameEncrypt && i != (len(segments)-1) {
continue continue
} }
// Strip version string so that only the non-versioned part
// of the file name gets decrypted/deobfuscated
hasVersion := false
var t time.Time
if i == (len(segments)-1) && version.Match(segments[i]) {
var s string
t, s = version.Remove(segments[i])
// version.Remove can fail, in which case it returns segments[i]
if s != segments[i] {
segments[i] = s
hasVersion = true
}
}
if c.mode == NameEncryptionStandard { if c.mode == NameEncryptionStandard {
segments[i], err = c.decryptSegment(segments[i]) segments[i], err = c.decryptSegment(segments[i])
} else { } else {
@@ -555,12 +486,6 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
if err != nil { if err != nil {
return "", err return "", err
} }
// Add back a version to the decrypted/deobfuscated
// file name, if we stripped it off earlier
if hasVersion {
segments[i] = version.Add(segments[i], t)
}
} }
return strings.Join(segments, "/"), nil return strings.Join(segments, "/"), nil
} }
@@ -569,18 +494,10 @@ func (c *Cipher) decryptFileName(in string) (string, error) {
func (c *Cipher) DecryptFileName(in string) (string, error) { func (c *Cipher) DecryptFileName(in string) (string, error) {
if c.mode == NameEncryptionOff { if c.mode == NameEncryptionOff {
remainingLength := len(in) - len(encryptedSuffix) remainingLength := len(in) - len(encryptedSuffix)
if remainingLength == 0 || !strings.HasSuffix(in, encryptedSuffix) { if remainingLength > 0 && strings.HasSuffix(in, encryptedSuffix) {
return "", ErrorNotAnEncryptedFile return in[:remainingLength], nil
} }
decrypted := in[:remainingLength] return "", ErrorNotAnEncryptedFile
if version.Match(decrypted) {
_, unversioned := version.Remove(decrypted)
if unversioned == "" {
return "", ErrorNotAnEncryptedFile
}
}
// Leave the version string on, if it was there
return decrypted, nil
} }
return c.decryptFileName(in) return c.decryptFileName(in)
} }
@@ -611,7 +528,7 @@ func (n *nonce) pointer() *[fileNonceSize]byte {
func (n *nonce) fromReader(in io.Reader) error { func (n *nonce) fromReader(in io.Reader) error {
read, err := io.ReadFull(in, (*n)[:]) read, err := io.ReadFull(in, (*n)[:])
if read != fileNonceSize { if read != fileNonceSize {
return fmt.Errorf("short read of nonce: %w", err) return errors.Wrap(err, "short read of nonce")
} }
return nil return nil
} }
@@ -716,8 +633,11 @@ func (fh *encrypter) Read(p []byte) (n int, err error) {
} }
// possibly err != nil here, but we will process the // possibly err != nil here, but we will process the
// data and the next call to ReadFull will return 0, err // data and the next call to ReadFull will return 0, err
// Write nonce to start of block
copy(fh.buf, fh.nonce[:])
// Encrypt the block using the nonce // Encrypt the block using the nonce
secretbox.Seal(fh.buf[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey) block := fh.buf
secretbox.Seal(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
fh.bufIndex = 0 fh.bufIndex = 0
fh.bufSize = blockHeaderSize + n fh.bufSize = blockHeaderSize + n
fh.nonce.increment() fh.nonce.increment()
@@ -862,7 +782,8 @@ func (fh *decrypter) fillBuffer() (err error) {
return ErrorEncryptedFileBadHeader return ErrorEncryptedFileBadHeader
} }
// Decrypt the block using the nonce // Decrypt the block using the nonce
_, ok := secretbox.Open(fh.buf[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey) block := fh.buf
_, ok := secretbox.Open(block[:0], readBuf[:n], fh.nonce.pointer(), &fh.c.dataKey)
if !ok { if !ok {
if err != nil { if err != nil {
return err // return pending error as it is likely more accurate return err // return pending error as it is likely more accurate
@@ -987,7 +908,7 @@ func (fh *decrypter) RangeSeek(ctx context.Context, offset int64, whence int, li
// Re-open the underlying object with the offset given // Re-open the underlying object with the offset given
rc, err := fh.open(ctx, underlyingOffset, underlyingLimit) rc, err := fh.open(ctx, underlyingOffset, underlyingLimit)
if err != nil { if err != nil {
return 0, fh.finish(fmt.Errorf("couldn't reopen file with offset and limit: %w", err)) return 0, fh.finish(errors.Wrap(err, "couldn't reopen file with offset and limit"))
} }
// Set the file handle // Set the file handle
@@ -1085,7 +1006,7 @@ func (c *Cipher) DecryptData(rc io.ReadCloser) (io.ReadCloser, error) {
// DecryptDataSeek decrypts the data stream from offset // DecryptDataSeek decrypts the data stream from offset
// //
// The open function must return a ReadCloser opened to the offset supplied. // The open function must return a ReadCloser opened to the offset supplied
// //
// You must use this form of DecryptData if you might want to Seek the file handle // You must use this form of DecryptData if you might want to Seek the file handle
func (c *Cipher) DecryptDataSeek(ctx context.Context, open OpenRangeSeek, offset, limit int64) (ReadSeekCloser, error) { func (c *Cipher) DecryptDataSeek(ctx context.Context, open OpenRangeSeek, offset, limit int64) (ReadSeekCloser, error) {


@@ -4,14 +4,13 @@ import (
"bytes" "bytes"
"context" "context"
"encoding/base32" "encoding/base32"
"encoding/base64"
"errors"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"strings" "strings"
"testing" "testing"
"github.com/Max-Sum/base32768" "github.com/pkg/errors"
"github.com/rclone/rclone/backend/crypt/pkcs7" "github.com/rclone/rclone/backend/crypt/pkcs7"
"github.com/rclone/rclone/lib/readers" "github.com/rclone/rclone/lib/readers"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -46,31 +45,11 @@ func TestNewNameEncryptionModeString(t *testing.T) {
assert.Equal(t, NameEncryptionMode(3).String(), "Unknown mode #3") assert.Equal(t, NameEncryptionMode(3).String(), "Unknown mode #3")
} }
type EncodingTestCase struct { func TestEncodeFileName(t *testing.T) {
in string for _, test := range []struct {
expected string in string
} expected string
}{
func testEncodeFileName(t *testing.T, encoding string, testCases []EncodingTestCase, caseInsensitive bool) {
for _, test := range testCases {
enc, err := NewNameEncoding(encoding)
assert.NoError(t, err, "There should be no error creating name encoder for base32.")
actual := enc.EncodeToString([]byte(test.in))
assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in))
recovered, err := enc.DecodeString(test.expected)
assert.NoError(t, err)
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", test.expected))
if caseInsensitive {
in := strings.ToUpper(test.expected)
recovered, err = enc.DecodeString(in)
assert.NoError(t, err)
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", in))
}
}
}
func TestEncodeFileNameBase32(t *testing.T) {
testEncodeFileName(t, "base32", []EncodingTestCase{
{"", ""}, {"", ""},
{"1", "64"}, {"1", "64"},
{"12", "64p0"}, {"12", "64p0"},
@@ -88,56 +67,20 @@ func TestEncodeFileNameBase32(t *testing.T) {
{"12345678901234", "64p36d1l6orjge9g64p36d0"}, {"12345678901234", "64p36d1l6orjge9g64p36d0"},
{"123456789012345", "64p36d1l6orjge9g64p36d1l"}, {"123456789012345", "64p36d1l6orjge9g64p36d1l"},
{"1234567890123456", "64p36d1l6orjge9g64p36d1l6o"}, {"1234567890123456", "64p36d1l6orjge9g64p36d1l6o"},
}, true) } {
actual := encodeFileName([]byte(test.in))
assert.Equal(t, actual, test.expected, fmt.Sprintf("in=%q", test.in))
recovered, err := decodeFileName(test.expected)
assert.NoError(t, err)
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", test.expected))
in := strings.ToUpper(test.expected)
recovered, err = decodeFileName(in)
assert.NoError(t, err)
assert.Equal(t, string(recovered), test.in, fmt.Sprintf("reverse=%q", in))
}
} }
func TestEncodeFileNameBase64(t *testing.T) { func TestDecodeFileName(t *testing.T) {
testEncodeFileName(t, "base64", []EncodingTestCase{
{"", ""},
{"1", "MQ"},
{"12", "MTI"},
{"123", "MTIz"},
{"1234", "MTIzNA"},
{"12345", "MTIzNDU"},
{"123456", "MTIzNDU2"},
{"1234567", "MTIzNDU2Nw"},
{"12345678", "MTIzNDU2Nzg"},
{"123456789", "MTIzNDU2Nzg5"},
{"1234567890", "MTIzNDU2Nzg5MA"},
{"12345678901", "MTIzNDU2Nzg5MDE"},
{"123456789012", "MTIzNDU2Nzg5MDEy"},
{"1234567890123", "MTIzNDU2Nzg5MDEyMw"},
{"12345678901234", "MTIzNDU2Nzg5MDEyMzQ"},
{"123456789012345", "MTIzNDU2Nzg5MDEyMzQ1"},
{"1234567890123456", "MTIzNDU2Nzg5MDEyMzQ1Ng"},
}, false)
}
func TestEncodeFileNameBase32768(t *testing.T) {
testEncodeFileName(t, "base32768", []EncodingTestCase{
{"", ""},
{"1", "㼿"},
{"12", "㻙ɟ"},
{"123", "㻙ⲿ"},
{"1234", "㻙ⲍƟ"},
{"12345", "㻙ⲍ⍟"},
{"123456", "㻙ⲍ⍆ʏ"},
{"1234567", "㻙ⲍ⍆觟"},
{"12345678", "㻙ⲍ⍆觓ɧ"},
{"123456789", "㻙ⲍ⍆觓栯"},
{"1234567890", "㻙ⲍ⍆觓栩ɣ"},
{"12345678901", "㻙ⲍ⍆觓栩朧"},
{"123456789012", "㻙ⲍ⍆觓栩朤ʅ"},
{"1234567890123", "㻙ⲍ⍆觓栩朤談"},
{"12345678901234", "㻙ⲍ⍆觓栩朤諆ɔ"},
{"123456789012345", "㻙ⲍ⍆觓栩朤諆媕"},
{"1234567890123456", "㻙ⲍ⍆觓栩朤諆媕䆿"},
}, false)
}
func TestDecodeFileNameBase32(t *testing.T) {
enc, err := NewNameEncoding("base32")
assert.NoError(t, err, "There should be no error creating name encoder for base32.")
// We've tested decoding the valid ones above, now concentrate on the invalid ones
for _, test := range []struct {
in string
@@ -147,65 +90,17 @@ func TestDecodeFileNameBase32(t *testing.T) {
{"!", base32.CorruptInputError(0)}, {"!", base32.CorruptInputError(0)},
{"hello=hello", base32.CorruptInputError(5)}, {"hello=hello", base32.CorruptInputError(5)},
} { } {
actual, actualErr := enc.DecodeString(test.in) actual, actualErr := decodeFileName(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr)) assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
} }
} }
func TestDecodeFileNameBase64(t *testing.T) { func TestEncryptSegment(t *testing.T) {
enc, err := NewNameEncoding("base64") c, _ := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err, "There should be no error creating name encoder for base32.")
// We've tested decoding the valid ones above, now concentrate on the invalid ones
for _, test := range []struct {
in string
expectedErr error
expected string
}{
{"64=", base64.CorruptInputError(2)},
{"!", base64.CorruptInputError(0)},
{"Hello=Hello", base64.CorruptInputError(5)},
} {
actual, actualErr := enc.DecodeString(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
}
}
func TestDecodeFileNameBase32768(t *testing.T) {
enc, err := NewNameEncoding("base32768")
assert.NoError(t, err, "There should be no error creating name encoder for base32.")
// We've tested decoding the valid ones above, now concentrate on the invalid ones
for _, test := range []struct {
in string
expectedErr error
}{
{"㼿c", base32768.CorruptInputError(1)},
{"!", base32768.CorruptInputError(0)},
{"㻙ⲿ=㻙ⲿ", base32768.CorruptInputError(2)},
} {
actual, actualErr := enc.DecodeString(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
}
}
func testEncryptSegment(t *testing.T, encoding string, testCases []EncodingTestCase, caseInsensitive bool) {
enc, _ := NewNameEncoding(encoding)
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
for _, test := range testCases {
actual := c.encryptSegment(test.in)
assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %q", test.in))
recovered, err := c.decryptSegment(test.expected)
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", test.expected))
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", test.expected))
if caseInsensitive {
in := strings.ToUpper(test.expected)
recovered, err = c.decryptSegment(in)
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", in))
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", in))
}
}
}
func TestEncryptSegmentBase32(t *testing.T) {
testEncryptSegment(t, "base32", []EncodingTestCase{
{"", ""}, {"", ""},
{"1", "p0e52nreeaj0a5ea7s64m4j72s"}, {"1", "p0e52nreeaj0a5ea7s64m4j72s"},
{"12", "l42g6771hnv3an9cgc8cr2n1ng"}, {"12", "l42g6771hnv3an9cgc8cr2n1ng"},
@@ -223,61 +118,26 @@ func TestEncryptSegmentBase32(t *testing.T) {
{"12345678901234", "moq0uqdlqrblrc5pa5u5c7hq9g"}, {"12345678901234", "moq0uqdlqrblrc5pa5u5c7hq9g"},
{"123456789012345", "eeam3li4rnommi3a762h5n7meg"}, {"123456789012345", "eeam3li4rnommi3a762h5n7meg"},
{"1234567890123456", "mijbj0frqf6ms7frcr6bd9h0env53jv96pjaaoirk7forcgpt70g"}, {"1234567890123456", "mijbj0frqf6ms7frcr6bd9h0env53jv96pjaaoirk7forcgpt70g"},
}, true) } {
actual := c.encryptSegment(test.in)
assert.Equal(t, test.expected, actual, fmt.Sprintf("Testing %q", test.in))
recovered, err := c.decryptSegment(test.expected)
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", test.expected))
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", test.expected))
in := strings.ToUpper(test.expected)
recovered, err = c.decryptSegment(in)
assert.NoError(t, err, fmt.Sprintf("Testing reverse %q", in))
assert.Equal(t, test.in, recovered, fmt.Sprintf("Testing reverse %q", in))
}
}
func TestEncryptSegmentBase64(t *testing.T) {
func TestDecryptSegment(t *testing.T) {
testEncryptSegment(t, "base64", []EncodingTestCase{
{"", ""},
{"1", "yBxRX25ypgUVyj8MSxJnFw"},
{"12", "qQUDHOGN_jVdLIMQzYrhvA"},
{"123", "1CxFf2Mti1xIPYlGruDh-A"},
{"1234", "RL-xOTmsxsG7kuTy2XJUxw"},
{"12345", "3FP_GHoeBJdq0yLgaED8IQ"},
{"123456", "Xc4T1Gqrs3OVYnrE6dpEWQ"},
{"1234567", "uZeEzssOnDWHEOzLqjwpog"},
{"12345678", "8noiTP5WkkbEuijsPhOpxQ"},
{"123456789", "GeNxgLA0wiaGAKU3U7qL4Q"},
{"1234567890", "x1DUhdmqoVWYVBLD3dha-A"},
{"12345678901", "iEyP_3BZR6vvv_2WM6NbZw"},
{"123456789012", "4OPGvS4SZdjvS568APUaFw"},
{"1234567890123", "Y8c5Wr8OhYYUo7fPwdojdg"},
{"12345678901234", "tjQPabXW112wuVF8Vh46TA"},
{"123456789012345", "c5Vh1kTd8WtIajmFEtz2dA"},
{"1234567890123456", "tKa5gfvTzW4d-2bMtqYgdf5Rz-k2ZqViW6HfjbIZ6cE"},
}, false)
}
func TestEncryptSegmentBase32768(t *testing.T) {
testEncryptSegment(t, "base32768", []EncodingTestCase{
{"", ""},
{"1", "詮㪗鐮僀伎作㻖㢧⪟"},
{"12", "竢朧䉱虃光塬䟛⣡蓟"},
{"123", "遶㞟鋅缕袡鲅ⵝ蝁ꌟ"},
{"1234", "䢟銮䵵狌㐜燳谒颴詟"},
{"12345", "钉Ꞇ㖃蚩憶狫朰杜㜿"},
{"123456", "啇ᚵⵕ憗䋫➫➓肤卟"},
{"1234567", "茫螓翁連劘樓㶔抉矟"},
{"12345678", "龝☳䘊辄岅較络㧩襟"},
{"123456789", "ⲱ苀㱆犂媐Ꮤ锇惫靟"},
{"1234567890", "計宁憕偵匢皫╛纺ꌟ"},
{"12345678901", "檆䨿鑫㪺藝ꡖ勇䦛婟"},
{"123456789012", "雑頏䰂䲝淚哚鹡魺⪟"},
{"1234567890123", "塃璶繁躸圅㔟䗃肃懟"},
{"12345678901234", "腺ᕚ崚鏕鏥讥鼌䑺䲿"},
{"123456789012345", "怪绕滻蕶肣但⠥荖惟"},
{"1234567890123456", "肳哀旚挶靏鏻㾭䱠慟㪳ꏆ賊兲铧敻塹魀ʟ"},
}, false)
}
func TestDecryptSegmentBase32(t *testing.T) {
// We've tested the forwards above, now concentrate on the errors
longName := make([]byte, 3328)
for i := range longName {
longName[i] = 'a'
}
enc, _ := NewNameEncoding("base32")
c, _ := newCipher(NameEncryptionStandard, "", "", true)
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
for _, test := range []struct {
in string
expectedErr error
@@ -285,371 +145,42 @@ func TestDecryptSegmentBase32(t *testing.T) {
{"64=", ErrorBadBase32Encoding}, {"64=", ErrorBadBase32Encoding},
{"!", base32.CorruptInputError(0)}, {"!", base32.CorruptInputError(0)},
{string(longName), ErrorTooLongAfterDecode}, {string(longName), ErrorTooLongAfterDecode},
{enc.EncodeToString([]byte("a")), ErrorNotAMultipleOfBlocksize}, {encodeFileName([]byte("a")), ErrorNotAMultipleOfBlocksize},
{enc.EncodeToString([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize}, {encodeFileName([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{enc.EncodeToString([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong}, {encodeFileName([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
} { } {
actual, actualErr := c.decryptSegment(test.in) actual, actualErr := c.decryptSegment(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr)) assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
} }
} }
func TestDecryptSegmentBase64(t *testing.T) { func TestEncryptFileName(t *testing.T) {
// We've tested the forwards above, now concentrate on the errors
longName := make([]byte, 2816)
for i := range longName {
longName[i] = 'a'
}
enc, _ := NewNameEncoding("base64")
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
for _, test := range []struct {
in string
expectedErr error
}{
{"6H=", base64.CorruptInputError(2)},
{"!", base64.CorruptInputError(0)},
{string(longName), ErrorTooLongAfterDecode},
{enc.EncodeToString([]byte("a")), ErrorNotAMultipleOfBlocksize},
{enc.EncodeToString([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{enc.EncodeToString([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
} {
actual, actualErr := c.decryptSegment(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
}
}
func TestDecryptSegmentBase32768(t *testing.T) {
// We've tested the forwards above, now concentrate on the errors
longName := strings.Repeat("怪", 1280)
enc, _ := NewNameEncoding("base32768")
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
for _, test := range []struct {
in string
expectedErr error
}{
{"怪=", base32768.CorruptInputError(1)},
{"!", base32768.CorruptInputError(0)},
{longName, ErrorTooLongAfterDecode},
{enc.EncodeToString([]byte("a")), ErrorNotAMultipleOfBlocksize},
{enc.EncodeToString([]byte("123456789abcdef")), ErrorNotAMultipleOfBlocksize},
{enc.EncodeToString([]byte("123456789abcdef0")), pkcs7.ErrorPaddingTooLong},
} {
actual, actualErr := c.decryptSegment(test.in)
assert.Equal(t, test.expectedErr, actualErr, fmt.Sprintf("in=%q got actual=%q, err = %v %T", test.in, actual, actualErr, actualErr))
}
}
func testStandardEncryptFileName(t *testing.T, encoding string, testCasesEncryptDir []EncodingTestCase, testCasesNoEncryptDir []EncodingTestCase) {
// First standard mode
enc, _ := NewNameEncoding(encoding)
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
for _, test := range testCasesEncryptDir {
assert.Equal(t, test.expected, c.EncryptFileName(test.in))
}
// Standard mode with directory name encryption off
c, _ = newCipher(NameEncryptionStandard, "", "", false, enc)
for _, test := range testCasesNoEncryptDir {
assert.Equal(t, test.expected, c.EncryptFileName(test.in))
}
}
// First standard mode
c, _ := newCipher(NameEncryptionStandard, "", "", true)
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
// Standard mode with directory name encryption off
c, _ = newCipher(NameEncryptionStandard, "", "", false)
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptFileName("1"))
assert.Equal(t, "1/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptFileName("1/12"))
assert.Equal(t, "1/12/qgm4avr35m5loi1th53ato71v0", c.EncryptFileName("1/12/123"))
// Now off mode
c, _ = newCipher(NameEncryptionOff, "", "", true)
func TestStandardEncryptFileNameBase32(t *testing.T) {
testStandardEncryptFileName(t, "base32", []EncodingTestCase{
{"1", "p0e52nreeaj0a5ea7s64m4j72s"},
{"1/12", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng"},
{"1/12/123", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0"},
{"1-v2001-02-03-040506-123", "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123"},
{"1/12-v2001-02-03-040506-123", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123"},
}, []EncodingTestCase{
{"1", "p0e52nreeaj0a5ea7s64m4j72s"},
{"1/12", "1/l42g6771hnv3an9cgc8cr2n1ng"},
{"1/12/123", "1/12/qgm4avr35m5loi1th53ato71v0"},
{"1-v2001-02-03-040506-123", "p0e52nreeaj0a5ea7s64m4j72s-v2001-02-03-040506-123"},
{"1/12-v2001-02-03-040506-123", "1/l42g6771hnv3an9cgc8cr2n1ng-v2001-02-03-040506-123"},
})
}
func TestStandardEncryptFileNameBase64(t *testing.T) {
testStandardEncryptFileName(t, "base64", []EncodingTestCase{
{"1", "yBxRX25ypgUVyj8MSxJnFw"},
{"1/12", "yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA"},
{"1/12/123", "yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA/1CxFf2Mti1xIPYlGruDh-A"},
{"1-v2001-02-03-040506-123", "yBxRX25ypgUVyj8MSxJnFw-v2001-02-03-040506-123"},
{"1/12-v2001-02-03-040506-123", "yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA-v2001-02-03-040506-123"},
}, []EncodingTestCase{
{"1", "yBxRX25ypgUVyj8MSxJnFw"},
{"1/12", "1/qQUDHOGN_jVdLIMQzYrhvA"},
{"1/12/123", "1/12/1CxFf2Mti1xIPYlGruDh-A"},
{"1-v2001-02-03-040506-123", "yBxRX25ypgUVyj8MSxJnFw-v2001-02-03-040506-123"},
{"1/12-v2001-02-03-040506-123", "1/qQUDHOGN_jVdLIMQzYrhvA-v2001-02-03-040506-123"},
})
}
func TestStandardEncryptFileNameBase32768(t *testing.T) {
testStandardEncryptFileName(t, "base32768", []EncodingTestCase{
{"1", "詮㪗鐮僀伎作㻖㢧⪟"},
{"1/12", "詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟"},
{"1/12/123", "詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟/遶㞟鋅缕袡鲅ⵝ蝁ꌟ"},
{"1-v2001-02-03-040506-123", "詮㪗鐮僀伎作㻖㢧⪟-v2001-02-03-040506-123"},
{"1/12-v2001-02-03-040506-123", "詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟-v2001-02-03-040506-123"},
}, []EncodingTestCase{
{"1", "詮㪗鐮僀伎作㻖㢧⪟"},
{"1/12", "1/竢朧䉱虃光塬䟛⣡蓟"},
{"1/12/123", "1/12/遶㞟鋅缕袡鲅ⵝ蝁ꌟ"},
{"1-v2001-02-03-040506-123", "詮㪗鐮僀伎作㻖㢧⪟-v2001-02-03-040506-123"},
{"1/12-v2001-02-03-040506-123", "1/竢朧䉱虃光塬䟛⣡蓟-v2001-02-03-040506-123"},
})
}
func TestNonStandardEncryptFileName(t *testing.T) {
// Off mode
c, _ := newCipher(NameEncryptionOff, "", "", true, nil)
assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123")) assert.Equal(t, "1/12/123.bin", c.EncryptFileName("1/12/123"))
// Obfuscation mode // Obfuscation mode
c, _ = newCipher(NameEncryptionObfuscated, "", "", true, nil) c, _ = newCipher(NameEncryptionObfuscated, "", "", true)
assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello")) assert.Equal(t, "49.6/99.23/150.890/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "49.6/99.23/150.890/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "49.6/99.23/150.890/162.uryyB-v2001-02-03-040506-123.GKG", c.EncryptFileName("1/12/123/hello-v2001-02-03-040506-123.txt"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1")) assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0")) assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
// Obfuscation mode with directory name encryption off // Obfuscation mode with directory name encryption off
c, _ = newCipher(NameEncryptionObfuscated, "", "", false, nil) c, _ = newCipher(NameEncryptionObfuscated, "", "", false)
assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello")) assert.Equal(t, "1/12/123/53.!!lipps", c.EncryptFileName("1/12/123/!hello"))
assert.Equal(t, "1/12/123/53-v2001-02-03-040506-123.!!lipps", c.EncryptFileName("1/12/123/!hello-v2001-02-03-040506-123"))
assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1")) assert.Equal(t, "161.\u00e4", c.EncryptFileName("\u00a1"))
assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0")) assert.Equal(t, "160.\u03c2", c.EncryptFileName("\u03a0"))
} }
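The obfuscation and off modes exercised above do not appear to use the name encoding at all, which is why these particular tests pass nil for it. A minimal round-trip sketch along the lines of the assertions above, using only identifiers from these tests (the wrapper function name is invented for the example):

func exampleObfuscatedRoundTrip() {
    // Illustrative sketch only; the name encoding argument is nil because obfuscation does not use it.
    c, _ := newCipher(NameEncryptionObfuscated, "", "", true, nil)
    encrypted := c.EncryptFileName("1/12/123/!hello") // "49.6/99.23/150.890/53.!!lipps"
    decrypted, err := c.DecryptFileName(encrypted)    // round-trips back to "1/12/123/!hello"
    fmt.Println(decrypted, err)
}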
func testStandardDecryptFileName(t *testing.T, encoding string, testCases []EncodingTestCase, caseInsensitive bool) {
func TestDecryptFileName(t *testing.T) {
enc, _ := NewNameEncoding(encoding)
for _, test := range testCases {
// Test when dirNameEncrypt=true
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
actual, actualErr := c.DecryptFileName(test.in)
assert.NoError(t, actualErr)
assert.Equal(t, test.expected, actual)
if caseInsensitive {
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
actual, actualErr := c.DecryptFileName(strings.ToUpper(test.in))
assert.NoError(t, actualErr)
assert.Equal(t, test.expected, actual)
}
// Add a character should raise ErrorNotAMultipleOfBlocksize
actual, actualErr = c.DecryptFileName(enc.EncodeToString([]byte("1")) + test.in)
assert.Equal(t, ErrorNotAMultipleOfBlocksize, actualErr)
assert.Equal(t, "", actual)
// Test when dirNameEncrypt=false
noDirEncryptIn := test.in
if strings.LastIndex(test.expected, "/") != -1 {
noDirEncryptIn = test.expected[:strings.LastIndex(test.expected, "/")] + test.in[strings.LastIndex(test.in, "/"):]
}
c, _ = newCipher(NameEncryptionStandard, "", "", false, enc)
actual, actualErr = c.DecryptFileName(noDirEncryptIn)
assert.NoError(t, actualErr)
assert.Equal(t, test.expected, actual)
}
}
func TestStandardDecryptFileNameBase32(t *testing.T) {
testStandardDecryptFileName(t, "base32", []EncodingTestCase{
{"p0e52nreeaj0a5ea7s64m4j72s", "1"},
{"p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12"},
{"p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123"},
}, true)
}
func TestStandardDecryptFileNameBase64(t *testing.T) {
testStandardDecryptFileName(t, "base64", []EncodingTestCase{
{"yBxRX25ypgUVyj8MSxJnFw", "1"},
{"yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA", "1/12"},
{"yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA/1CxFf2Mti1xIPYlGruDh-A", "1/12/123"},
}, false)
}
func TestStandardDecryptFileNameBase32768(t *testing.T) {
testStandardDecryptFileName(t, "base32768", []EncodingTestCase{
{"詮㪗鐮僀伎作㻖㢧⪟", "1"},
{"詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟", "1/12"},
{"詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟/遶㞟鋅缕袡鲅ⵝ蝁ꌟ", "1/12/123"},
}, false)
}
func TestNonStandardDecryptFileName(t *testing.T) {
for _, encoding := range []string{"base32", "base64", "base32768"} {
enc, _ := NewNameEncoding(encoding)
for _, test := range []struct {
mode NameEncryptionMode
dirNameEncrypt bool
in string
expected string
expectedErr error
}{
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, "1/12/123-v2001-02-03-040506-123.bin", "1/12/123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123", nil},
{NameEncryptionOff, true, "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt.bin", "1/12/123-v1970-01-01-010101-123-v2001-02-03-040506-123.txt", nil},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
{NameEncryptionObfuscated, false, "1/12/123/53-v2001-02-03-040506-123.!!lipps", "1/12/123/!hello-v2001-02-03-040506-123", nil},
} {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt, enc)
actual, actualErr := c.DecryptFileName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what)
assert.Equal(t, test.expectedErr, actualErr, what)
}
}
}
func TestEncDecMatches(t *testing.T) {
for _, encoding := range []string{"base32", "base64", "base32768"} {
enc, _ := NewNameEncoding(encoding)
for _, test := range []struct {
mode NameEncryptionMode
in string
}{
{NameEncryptionStandard, "1/2/3/4"},
{NameEncryptionOff, "1/2/3/4"},
{NameEncryptionObfuscated, "1/2/3/4/!hello\u03a0"},
{NameEncryptionObfuscated, "Avatar The Last Airbender"},
} {
c, _ := newCipher(test.mode, "", "", true, enc)
out, err := c.DecryptFileName(c.EncryptFileName(test.in))
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, out, test.in, what)
assert.Equal(t, err, nil, what)
}
}
}
func testStandardEncryptDirName(t *testing.T, encoding string, testCases []EncodingTestCase) {
enc, _ := NewNameEncoding(encoding)
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
// First standard mode
for _, test := range testCases {
assert.Equal(t, test.expected, c.EncryptDirName(test.in))
}
}
func TestStandardEncryptDirNameBase32(t *testing.T) {
testStandardEncryptDirName(t, "base32", []EncodingTestCase{
{"1", "p0e52nreeaj0a5ea7s64m4j72s"},
{"1/12", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng"},
{"1/12/123", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0"},
})
}
func TestStandardEncryptDirNameBase64(t *testing.T) {
testStandardEncryptDirName(t, "base64", []EncodingTestCase{
{"1", "yBxRX25ypgUVyj8MSxJnFw"},
{"1/12", "yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA"},
{"1/12/123", "yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA/1CxFf2Mti1xIPYlGruDh-A"},
})
}
func TestStandardEncryptDirNameBase32768(t *testing.T) {
testStandardEncryptDirName(t, "base32768", []EncodingTestCase{
{"1", "詮㪗鐮僀伎作㻖㢧⪟"},
{"1/12", "詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟"},
{"1/12/123", "詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟/遶㞟鋅缕袡鲅ⵝ蝁ꌟ"},
})
}
func TestNonStandardEncryptDirName(t *testing.T) {
for _, encoding := range []string{"base32", "base64", "base32768"} {
enc, _ := NewNameEncoding(encoding)
c, _ := newCipher(NameEncryptionStandard, "", "", false, enc)
assert.Equal(t, "1/12", c.EncryptDirName("1/12"))
assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123"))
// Now off mode
c, _ = newCipher(NameEncryptionOff, "", "", true, enc)
assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123"))
}
}
func testStandardDecryptDirName(t *testing.T, encoding string, testCases []EncodingTestCase, caseInsensitive bool) {
enc, _ := NewNameEncoding(encoding)
for _, test := range testCases {
// Test dirNameEncrypt=true
c, _ := newCipher(NameEncryptionStandard, "", "", true, enc)
actual, actualErr := c.DecryptDirName(test.in)
assert.Equal(t, test.expected, actual)
assert.NoError(t, actualErr)
if caseInsensitive {
actual, actualErr := c.DecryptDirName(strings.ToUpper(test.in))
assert.Equal(t, actual, test.expected)
assert.NoError(t, actualErr)
}
actual, actualErr = c.DecryptDirName(enc.EncodeToString([]byte("1")) + test.in)
assert.Equal(t, "", actual)
assert.Equal(t, ErrorNotAMultipleOfBlocksize, actualErr)
// Test dirNameEncrypt=false
c, _ = newCipher(NameEncryptionStandard, "", "", false, enc)
actual, actualErr = c.DecryptDirName(test.in)
assert.Equal(t, test.in, actual)
assert.NoError(t, actualErr)
actual, actualErr = c.DecryptDirName(test.expected)
assert.Equal(t, test.expected, actual)
assert.NoError(t, actualErr)
// Test dirNameEncrypt=false
}
}
/*
enc, _ := NewNameEncoding(encoding)
for _, test := range []struct {
mode NameEncryptionMode
dirNameEncrypt bool
in string
expected string
expectedErr error
}{
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, true, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionStandard, false, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", nil},
{NameEncryptionStandard, false, "1/12/123", "1/12/123", nil},
} {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt, enc)
actual, actualErr := c.DecryptDirName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what)
assert.Equal(t, test.expectedErr, actualErr, what)
}
*/
func TestStandardDecryptDirNameBase32(t *testing.T) {
testStandardDecryptDirName(t, "base32", []EncodingTestCase{
{"p0e52nreeaj0a5ea7s64m4j72s", "1"},
{"p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12"},
{"p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123"},
}, true)
}
func TestStandardDecryptDirNameBase64(t *testing.T) {
testStandardDecryptDirName(t, "base64", []EncodingTestCase{
{"yBxRX25ypgUVyj8MSxJnFw", "1"},
{"yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA", "1/12"},
{"yBxRX25ypgUVyj8MSxJnFw/qQUDHOGN_jVdLIMQzYrhvA/1CxFf2Mti1xIPYlGruDh-A", "1/12/123"},
}, false)
}
func TestStandardDecryptDirNameBase32768(t *testing.T) {
testStandardDecryptDirName(t, "base32768", []EncodingTestCase{
{"詮㪗鐮僀伎作㻖㢧⪟", "1"},
{"詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟", "1/12"},
{"詮㪗鐮僀伎作㻖㢧⪟/竢朧䉱虃光塬䟛⣡蓟/遶㞟鋅缕袡鲅ⵝ蝁ꌟ", "1/12/123"},
}, false)
}
func TestNonStandardDecryptDirName(t *testing.T) {
for _, test := range []struct {
mode NameEncryptionMode
dirNameEncrypt bool
@@ -657,11 +188,82 @@ func TestNonStandardDecryptDirName(t *testing.T) {
expected string
expectedErr error
}{
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, true, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionStandard, false, "1/12/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bix", "", ErrorNotAnEncryptedFile},
{NameEncryptionOff, true, ".bin", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "!.hello", "hello", nil},
{NameEncryptionObfuscated, true, "hello", "", ErrorNotAnEncryptedFile},
{NameEncryptionObfuscated, true, "161.\u00e4", "\u00a1", nil},
{NameEncryptionObfuscated, true, "160.\u03c2", "\u03a0", nil},
{NameEncryptionObfuscated, false, "1/12/123/53.!!lipps", "1/12/123/!hello", nil},
} {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt)
actual, actualErr := c.DecryptFileName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what)
assert.Equal(t, test.expectedErr, actualErr, what)
}
}
func TestEncDecMatches(t *testing.T) {
for _, test := range []struct {
mode NameEncryptionMode
in string
}{
{NameEncryptionStandard, "1/2/3/4"},
{NameEncryptionOff, "1/2/3/4"},
{NameEncryptionObfuscated, "1/2/3/4/!hello\u03a0"},
{NameEncryptionObfuscated, "Avatar The Last Airbender"},
} {
c, _ := newCipher(test.mode, "", "", true)
out, err := c.DecryptFileName(c.EncryptFileName(test.in))
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, out, test.in, what)
assert.Equal(t, err, nil, what)
}
}
func TestEncryptDirName(t *testing.T) {
// First standard mode
c, _ := newCipher(NameEncryptionStandard, "", "", true)
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s", c.EncryptDirName("1"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", c.EncryptDirName("1/12"))
assert.Equal(t, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", c.EncryptDirName("1/12/123"))
// Standard mode with dir name encryption off
c, _ = newCipher(NameEncryptionStandard, "", "", false)
assert.Equal(t, "1/12", c.EncryptDirName("1/12"))
assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123"))
// Now off mode
c, _ = newCipher(NameEncryptionOff, "", "", true)
assert.Equal(t, "1/12/123", c.EncryptDirName("1/12/123"))
}
func TestDecryptDirName(t *testing.T) {
for _, test := range []struct {
mode NameEncryptionMode
dirNameEncrypt bool
in string
expected string
expectedErr error
}{
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s", "1", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, true, "p0e52nreeAJ0A5EA7S64M4J72S/L42G6771HNv3an9cgc8cr2n1ng", "1/12", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0", "1/12/123", nil},
{NameEncryptionStandard, true, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1/qgm4avr35m5loi1th53ato71v0", "", ErrorNotAMultipleOfBlocksize},
{NameEncryptionStandard, false, "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", "p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng", nil},
{NameEncryptionStandard, false, "1/12/123", "1/12/123", nil},
{NameEncryptionOff, true, "1/12/123.bin", "1/12/123.bin", nil}, {NameEncryptionOff, true, "1/12/123.bin", "1/12/123.bin", nil},
{NameEncryptionOff, true, "1/12/123", "1/12/123", nil}, {NameEncryptionOff, true, "1/12/123", "1/12/123", nil},
{NameEncryptionOff, true, ".bin", ".bin", nil}, {NameEncryptionOff, true, ".bin", ".bin", nil},
} { } {
c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt, nil) c, _ := newCipher(test.mode, "", "", test.dirNameEncrypt)
actual, actualErr := c.DecryptDirName(test.in) actual, actualErr := c.DecryptDirName(test.in)
what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode) what := fmt.Sprintf("Testing %q (mode=%v)", test.in, test.mode)
assert.Equal(t, test.expected, actual, what) assert.Equal(t, test.expected, actual, what)
@@ -670,7 +272,7 @@ func TestNonStandardDecryptDirName(t *testing.T) {
}
func TestEncryptedSize(t *testing.T) {
c, _ := newCipher(NameEncryptionStandard, "", "", true, nil)
c, _ := newCipher(NameEncryptionStandard, "", "", true)
for _, test := range []struct {
in int64
expected int64
@@ -694,7 +296,7 @@ func TestEncryptedSize(t *testing.T) {
func TestDecryptedSize(t *testing.T) {
// Test the errors since we tested the reverse above
c, _ := newCipher(NameEncryptionStandard, "", "", true, nil)
c, _ := newCipher(NameEncryptionStandard, "", "", true)
for _, test := range []struct {
in int64
expectedErr error
@@ -1023,7 +625,7 @@ func (r *randomSource) Read(p []byte) (n int, err error) {
func (r *randomSource) Write(p []byte) (n int, err error) {
for i := range p {
if p[i] != r.next() {
return 0, fmt.Errorf("Error in stream at %d", r.counter)
return 0, errors.Errorf("Error in stream at %d", r.counter)
}
}
return len(p), nil
@@ -1065,14 +667,14 @@ func (z *zeroes) Read(p []byte) (n int, err error) {
// Test encrypt decrypt with different buffer sizes
func testEncryptDecrypt(t *testing.T, bufSize int, copySize int64) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil)
c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err)
c.cryptoRand = &zeroes{} // zero out the nonce
buf := make([]byte, bufSize)
source := newRandomSource(copySize)
encrypted, err := c.newEncrypter(source, nil)
assert.NoError(t, err)
decrypted, err := c.newDecrypter(io.NopCloser(encrypted))
decrypted, err := c.newDecrypter(ioutil.NopCloser(encrypted))
assert.NoError(t, err)
sink := newRandomSource(copySize)
n, err := io.CopyBuffer(sink, decrypted, buf)
@@ -1135,7 +737,7 @@ func TestEncryptData(t *testing.T) {
{[]byte{1}, file1}, {[]byte{1}, file1},
{[]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, file16}, {[]byte{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}, file16},
} { } {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator
@@ -1143,22 +745,22 @@ func TestEncryptData(t *testing.T) {
buf := bytes.NewBuffer(test.in) buf := bytes.NewBuffer(test.in)
encrypted, err := c.EncryptData(buf) encrypted, err := c.EncryptData(buf)
assert.NoError(t, err) assert.NoError(t, err)
out, err := io.ReadAll(encrypted) out, err := ioutil.ReadAll(encrypted)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, test.expected, out) assert.Equal(t, test.expected, out)
// Check we can decode the data properly too... // Check we can decode the data properly too...
buf = bytes.NewBuffer(out) buf = bytes.NewBuffer(out)
decrypted, err := c.DecryptData(io.NopCloser(buf)) decrypted, err := c.DecryptData(ioutil.NopCloser(buf))
assert.NoError(t, err) assert.NoError(t, err)
out, err = io.ReadAll(decrypted) out, err = ioutil.ReadAll(decrypted)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, test.in, out) assert.Equal(t, test.in, out)
} }
} }
func TestNewEncrypter(t *testing.T) { func TestNewEncrypter(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator
@@ -1174,19 +776,20 @@ func TestNewEncrypter(t *testing.T) {
fh, err = c.newEncrypter(z, nil) fh, err = c.newEncrypter(z, nil)
assert.Nil(t, fh) assert.Nil(t, fh)
assert.Error(t, err, "short read of nonce") assert.Error(t, err, "short read of nonce")
} }
// Test the stream returning 0, io.ErrUnexpectedEOF - this used to // Test the stream returning 0, io.ErrUnexpectedEOF - this used to
// cause a fatal loop // cause a fatal loop
func TestNewEncrypterErrUnexpectedEOF(t *testing.T) { func TestNewEncrypterErrUnexpectedEOF(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
in := &readers.ErrorReader{Err: io.ErrUnexpectedEOF} in := &readers.ErrorReader{Err: io.ErrUnexpectedEOF}
fh, err := c.newEncrypter(in, nil) fh, err := c.newEncrypter(in, nil)
assert.NoError(t, err) assert.NoError(t, err)
n, err := io.CopyN(io.Discard, fh, 1e6) n, err := io.CopyN(ioutil.Discard, fh, 1e6)
assert.Equal(t, io.ErrUnexpectedEOF, err) assert.Equal(t, io.ErrUnexpectedEOF, err)
assert.Equal(t, int64(32), n) assert.Equal(t, int64(32), n)
} }
@@ -1208,7 +811,7 @@ func (c *closeDetector) Close() error {
} }
func TestNewDecrypter(t *testing.T) { func TestNewDecrypter(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator c.cryptoRand = newRandomSource(1e8) // nodge the crypto rand generator
@@ -1251,36 +854,36 @@ func TestNewDecrypter(t *testing.T) {
// Test the stream returning 0, io.ErrUnexpectedEOF // Test the stream returning 0, io.ErrUnexpectedEOF
func TestNewDecrypterErrUnexpectedEOF(t *testing.T) { func TestNewDecrypterErrUnexpectedEOF(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
in2 := &readers.ErrorReader{Err: io.ErrUnexpectedEOF} in2 := &readers.ErrorReader{Err: io.ErrUnexpectedEOF}
in1 := bytes.NewBuffer(file16) in1 := bytes.NewBuffer(file16)
in := io.NopCloser(io.MultiReader(in1, in2)) in := ioutil.NopCloser(io.MultiReader(in1, in2))
fh, err := c.newDecrypter(in) fh, err := c.newDecrypter(in)
assert.NoError(t, err) assert.NoError(t, err)
n, err := io.CopyN(io.Discard, fh, 1e6) n, err := io.CopyN(ioutil.Discard, fh, 1e6)
assert.Equal(t, io.ErrUnexpectedEOF, err) assert.Equal(t, io.ErrUnexpectedEOF, err)
assert.Equal(t, int64(16), n) assert.Equal(t, int64(16), n)
} }
func TestNewDecrypterSeekLimit(t *testing.T) { func TestNewDecrypterSeekLimit(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
c.cryptoRand = &zeroes{} // nodge the crypto rand generator c.cryptoRand = &zeroes{} // nodge the crypto rand generator
// Make random data // Make random data
const dataSize = 150000 const dataSize = 150000
plaintext, err := io.ReadAll(newRandomSource(dataSize)) plaintext, err := ioutil.ReadAll(newRandomSource(dataSize))
assert.NoError(t, err) assert.NoError(t, err)
// Encrypt the data // Encrypt the data
buf := bytes.NewBuffer(plaintext) buf := bytes.NewBuffer(plaintext)
encrypted, err := c.EncryptData(buf) encrypted, err := c.EncryptData(buf)
assert.NoError(t, err) assert.NoError(t, err)
ciphertext, err := io.ReadAll(encrypted) ciphertext, err := ioutil.ReadAll(encrypted)
assert.NoError(t, err) assert.NoError(t, err)
trials := []int{0, 1, 2, 3, 4, 5, 7, 8, 9, 15, 16, 17, 31, 32, 33, 63, 64, 65, trials := []int{0, 1, 2, 3, 4, 5, 7, 8, 9, 15, 16, 17, 31, 32, 33, 63, 64, 65,
@@ -1299,7 +902,7 @@ func TestNewDecrypterSeekLimit(t *testing.T) {
end = len(ciphertext) end = len(ciphertext)
} }
} }
reader = io.NopCloser(bytes.NewBuffer(ciphertext[int(underlyingOffset):end])) reader = ioutil.NopCloser(bytes.NewBuffer(ciphertext[int(underlyingOffset):end]))
return reader, nil return reader, nil
} }
@@ -1473,7 +1076,7 @@ func TestDecrypterCalculateUnderlying(t *testing.T) {
} }
func TestDecrypterRead(t *testing.T) { func TestDecrypterRead(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
// Test truncating the file at each possible point // Test truncating the file at each possible point
@@ -1489,7 +1092,7 @@ func TestDecrypterRead(t *testing.T) {
assert.NoError(t, err, what) assert.NoError(t, err, what)
continue continue
} }
_, err = io.ReadAll(fh) _, err = ioutil.ReadAll(fh)
var expectedErr error var expectedErr error
switch { switch {
case i == fileHeaderSize: case i == fileHeaderSize:
@@ -1513,7 +1116,7 @@ func TestDecrypterRead(t *testing.T) {
cd := newCloseDetector(in) cd := newCloseDetector(in)
fh, err := c.newDecrypter(cd) fh, err := c.newDecrypter(cd)
assert.NoError(t, err) assert.NoError(t, err)
_, err = io.ReadAll(fh) _, err = ioutil.ReadAll(fh)
assert.Error(t, err, "potato") assert.Error(t, err, "potato")
assert.Equal(t, 0, cd.closed) assert.Equal(t, 0, cd.closed)
@@ -1523,13 +1126,13 @@ func TestDecrypterRead(t *testing.T) {
copy(file16copy, file16) copy(file16copy, file16)
for i := range file16copy { for i := range file16copy {
file16copy[i] ^= 0xFF file16copy[i] ^= 0xFF
fh, err := c.newDecrypter(io.NopCloser(bytes.NewBuffer(file16copy))) fh, err := c.newDecrypter(ioutil.NopCloser(bytes.NewBuffer(file16copy)))
if i < fileMagicSize { if i < fileMagicSize {
assert.Error(t, err, ErrorEncryptedBadMagic.Error()) assert.Error(t, err, ErrorEncryptedBadMagic.Error())
assert.Nil(t, fh) assert.Nil(t, fh)
} else { } else {
assert.NoError(t, err) assert.NoError(t, err)
_, err = io.ReadAll(fh) _, err = ioutil.ReadAll(fh)
assert.Error(t, err, ErrorEncryptedFileBadHeader.Error()) assert.Error(t, err, ErrorEncryptedFileBadHeader.Error())
} }
file16copy[i] ^= 0xFF file16copy[i] ^= 0xFF
@@ -1537,7 +1140,7 @@ func TestDecrypterRead(t *testing.T) {
} }
func TestDecrypterClose(t *testing.T) { func TestDecrypterClose(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
cd := newCloseDetector(bytes.NewBuffer(file16)) cd := newCloseDetector(bytes.NewBuffer(file16))
@@ -1564,7 +1167,7 @@ func TestDecrypterClose(t *testing.T) {
assert.Equal(t, 0, cd.closed) assert.Equal(t, 0, cd.closed)
// close after reading // close after reading
out, err := io.ReadAll(fh) out, err := ioutil.ReadAll(fh)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, []byte{1}, out) assert.Equal(t, []byte{1}, out)
assert.Equal(t, io.EOF, fh.err) assert.Equal(t, io.EOF, fh.err)
@@ -1575,7 +1178,7 @@ func TestDecrypterClose(t *testing.T) {
} }
func TestPutGetBlock(t *testing.T) { func TestPutGetBlock(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil) c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err) assert.NoError(t, err)
block := c.getBlock() block := c.getBlock()
@@ -1586,7 +1189,7 @@ func TestPutGetBlock(t *testing.T) {
} }
func TestKey(t *testing.T) {
c, err := newCipher(NameEncryptionStandard, "", "", true, nil)
c, err := newCipher(NameEncryptionStandard, "", "", true)
assert.NoError(t, err)
// Check zero keys OK
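Taken together, the test changes in this file exercise an encoding-agnostic cipher: build a NameEncoding, pass it to the five-argument newCipher, then encrypt and decrypt names with it. A minimal sketch under that assumption, using only identifiers that appear in this diff (the wrapper function name is invented for the example):

func exampleNameEncoding() error {
    // "base32", "base64" and "base32768" are the encodings used in the tests above.
    enc, err := NewNameEncoding("base32768")
    if err != nil {
        return err
    }
    c, err := newCipher(NameEncryptionStandard, "password", "salt", true, enc)
    if err != nil {
        return err
    }
    encrypted := c.EncryptFileName("path/to/file")
    decrypted, err := c.DecryptFileName(encrypted) // expected to round-trip back to "path/to/file"
    _ = decrypted
    return err
}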


@@ -3,16 +3,15 @@ package crypt
import (
"context"
"errors"
"fmt"
"io"
"path"
"strings"
"time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/config/obscure"
@@ -28,12 +27,9 @@ func init() {
Description: "Encrypt/Decrypt a remote", Description: "Encrypt/Decrypt a remote",
NewFs: NewFs, NewFs: NewFs,
CommandHelp: commandHelp, CommandHelp: commandHelp,
MetadataInfo: &fs.MetadataInfo{
Help: `Any metadata supported by the underlying remote is read and written.`,
},
Options: []fs.Option{{
Name: "remote",
Help: "Remote to encrypt/decrypt.\n\nNormally should contain a ':' and a path, e.g. \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Help: "Remote to encrypt/decrypt.\nNormally should contain a ':' and a path, eg \"myremote:path/to/dir\",\n\"myremote:bucket\" or maybe \"myremote:\" (not recommended).",
Required: true,
}, {
Name: "filename_encryption",
@@ -42,13 +38,13 @@ func init() {
Examples: []fs.OptionExample{
{
Value: "standard",
Help: "Encrypt the filenames.\nSee the docs for the details.",
Help: "Encrypt the filenames see the docs for the details.",
}, {
Value: "obfuscate",
Help: "Very simple filename obfuscation.",
}, {
Value: "off",
Help: "Don't encrypt the file names.\nAdds a \".bin\" extension only.",
Help: "Don't encrypt the file names. Adds a \".bin\" extension only.",
},
},
}, {
@@ -74,12 +70,12 @@ NB If filename_encryption is "off" then this option will do nothing.`,
Required: true,
}, {
Name: "password2",
Help: "Password or pass phrase for salt.\n\nOptional but recommended.\nShould be different to the previous password.",
Help: "Password or pass phrase for salt. Optional but recommended.\nShould be different to the previous password.",
IsPassword: true,
}, {
Name: "server_side_across_configs",
Default: false,
Help: `Allow server-side operations (e.g. copy) to work across different crypt configs.
Help: `Allow server side operations (eg copy) to work across different crypt configs.
Normally this option is not what you want, but if you have two crypts
pointing to the same backend you can use it.
@@ -104,44 +100,6 @@ names, or for debugging purposes.`,
Default: false, Default: false,
Hide: fs.OptionHideConfigurator, Hide: fs.OptionHideConfigurator,
Advanced: true, Advanced: true,
}, {
Name: "no_data_encryption",
Help: "Option to either encrypt file data or leave it unencrypted.",
Default: false,
Advanced: true,
Examples: []fs.OptionExample{
{
Value: "true",
Help: "Don't encrypt file data, leave it unencrypted.",
},
{
Value: "false",
Help: "Encrypt file data.",
},
},
}, {
Name: "filename_encoding",
Help: `How to encode the encrypted filename to text string.
This option could help with shortening the encrypted filename. The
suitable option would depend on the way your remote count the filename
length and if it's case sensitive.`,
Default: "base32",
Examples: []fs.OptionExample{
{
Value: "base32",
Help: "Encode using base32. Suitable for all remote.",
},
{
Value: "base64",
Help: "Encode using base64. Suitable for case sensitive remote.",
},
{
Value: "base32768",
Help: "Encode using base32768. Suitable if your remote counts UTF-16 or\nUnicode codepoint instead of UTF-8 byte length. (Eg. Onedrive)",
},
},
Advanced: true,
}},
})
}
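For orientation, the two registry entries added above correspond to the NoDataEncryption and FilenameEncoding fields of the Options struct later in this file. An illustrative literal (field names come from this diff, the values are placeholders invented for the example):

var exampleOptions = Options{
    Remote:                  "myremote:encrypted",
    FilenameEncryption:      "standard",
    DirectoryNameEncryption: true,
    NoDataEncryption:        false,       // new option: leave file data unencrypted when true
    FilenameEncoding:        "base32768", // new option: base32 (default), base64 or base32768
}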
@@ -157,22 +115,18 @@ func newCipherForConfig(opt *Options) (*Cipher, error) {
}
password, err := obscure.Reveal(opt.Password)
if err != nil {
return nil, fmt.Errorf("failed to decrypt password: %w", err)
return nil, errors.Wrap(err, "failed to decrypt password")
}
var salt string
if opt.Password2 != "" {
salt, err = obscure.Reveal(opt.Password2)
if err != nil {
return nil, fmt.Errorf("failed to decrypt password2: %w", err)
return nil, errors.Wrap(err, "failed to decrypt password2")
}
}
enc, err := NewNameEncoding(opt.FilenameEncoding)
cipher, err := newCipher(mode, password, salt, opt.DirectoryNameEncryption)
if err != nil {
return nil, err
return nil, errors.Wrap(err, "failed to make cipher")
}
cipher, err := newCipher(mode, password, salt, opt.DirectoryNameEncryption, enc)
if err != nil {
return nil, fmt.Errorf("failed to make cipher: %w", err)
}
return cipher, nil
}
@@ -189,7 +143,7 @@ func NewCipher(m configmap.Mapper) (*Cipher, error) {
}
// NewFs constructs an Fs from the path, container:path
func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs, error) {
func NewFs(name, rpath string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct
opt := new(Options)
err := configstruct.Set(m, opt)
@@ -204,25 +158,24 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
if strings.HasPrefix(remote, name+":") { if strings.HasPrefix(remote, name+":") {
return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting") return nil, errors.New("can't point crypt remote at itself - check the value of the remote setting")
} }
// Make sure to remove trailing . referring to the current dir wInfo, wName, wPath, wConfig, err := fs.ConfigFs(remote)
if err != nil {
return nil, errors.Wrapf(err, "failed to parse remote %q to wrap", remote)
}
// Make sure to remove trailing . reffering to the current dir
if path.Base(rpath) == "." { if path.Base(rpath) == "." {
rpath = strings.TrimSuffix(rpath, ".") rpath = strings.TrimSuffix(rpath, ".")
} }
// Look for a file first // Look for a file first
var wrappedFs fs.Fs remotePath := fspath.JoinRootPath(wPath, cipher.EncryptFileName(rpath))
if rpath == "" { wrappedFs, err := wInfo.NewFs(wName, remotePath, wConfig)
wrappedFs, err = cache.Get(ctx, remote) // if that didn't produce a file, look for a directory
} else { if err != fs.ErrorIsFile {
remotePath := fspath.JoinRootPath(remote, cipher.EncryptFileName(rpath)) remotePath = fspath.JoinRootPath(wPath, cipher.EncryptDirName(rpath))
wrappedFs, err = cache.Get(ctx, remotePath) wrappedFs, err = wInfo.NewFs(wName, remotePath, wConfig)
// if that didn't produce a file, look for a directory
if err != fs.ErrorIsFile {
remotePath = fspath.JoinRootPath(remote, cipher.EncryptDirName(rpath))
wrappedFs, err = cache.Get(ctx, remotePath)
}
} }
if err != fs.ErrorIsFile && err != nil { if err != fs.ErrorIsFile && err != nil {
return nil, fmt.Errorf("failed to make remote %q to wrap: %w", remote, err) return nil, errors.Wrapf(err, "failed to make remote %s:%q to wrap", wName, remotePath)
} }
f := &Fs{ f := &Fs{
Fs: wrappedFs, Fs: wrappedFs,
@@ -231,11 +184,10 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
opt: *opt, opt: *opt,
cipher: cipher, cipher: cipher,
} }
cache.PinUntilFinalized(f.Fs, f)
// the features here are ones we could support, and they are // the features here are ones we could support, and they are
// ANDed with the ones from wrappedFs // ANDed with the ones from wrappedFs
f.features = (&fs.Features{ f.features = (&fs.Features{
CaseInsensitive: !cipher.dirNameEncrypt || cipher.NameEncryptionMode() == NameEncryptionOff, CaseInsensitive: cipher.NameEncryptionMode() == NameEncryptionOff,
DuplicateFiles: true, DuplicateFiles: true,
ReadMimeType: false, // MimeTypes not supported with crypt ReadMimeType: false, // MimeTypes not supported with crypt
WriteMimeType: false, WriteMimeType: false,
@@ -244,10 +196,7 @@ func NewFs(ctx context.Context, name, rpath string, m configmap.Mapper) (fs.Fs,
SetTier: true, SetTier: true,
GetTier: true, GetTier: true,
ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs, ServerSideAcrossConfigs: opt.ServerSideAcrossConfigs,
ReadMetadata: true, }).Fill(f).Mask(wrappedFs).WrapsFs(f, wrappedFs)
WriteMetadata: true,
UserMetadata: true,
}).Fill(ctx, f).Mask(ctx, wrappedFs).WrapsFs(f, wrappedFs)
return f, err return f, err
} }
@@ -257,12 +206,10 @@ type Options struct {
Remote string `config:"remote"` Remote string `config:"remote"`
FilenameEncryption string `config:"filename_encryption"` FilenameEncryption string `config:"filename_encryption"`
DirectoryNameEncryption bool `config:"directory_name_encryption"` DirectoryNameEncryption bool `config:"directory_name_encryption"`
NoDataEncryption bool `config:"no_data_encryption"`
Password string `config:"password"` Password string `config:"password"`
Password2 string `config:"password2"` Password2 string `config:"password2"`
ServerSideAcrossConfigs bool `config:"server_side_across_configs"` ServerSideAcrossConfigs bool `config:"server_side_across_configs"`
ShowMapping bool `config:"show_mapping"` ShowMapping bool `config:"show_mapping"`
FilenameEncoding string `config:"filename_encoding"`
} }
// Fs represents a wrapped fs.Fs // Fs represents a wrapped fs.Fs
@@ -334,7 +281,7 @@ func (f *Fs) encryptEntries(ctx context.Context, entries fs.DirEntries) (newEntr
case fs.Directory: case fs.Directory:
f.addDir(ctx, &newEntries, x) f.addDir(ctx, &newEntries, x)
default: default:
return nil, fmt.Errorf("unknown object type %T", entry) return nil, errors.Errorf("Unknown object type %T", entry)
} }
} }
return newEntries, nil return newEntries, nil
@@ -396,16 +343,6 @@ type putFn func(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ..
// put implements Put or PutStream // put implements Put or PutStream
func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) { func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options []fs.OpenOption, put putFn) (fs.Object, error) {
ci := fs.GetConfig(ctx)
if f.opt.NoDataEncryption {
o, err := put(ctx, in, f.newObjectInfo(src, nonce{}), options...)
if err == nil && o != nil {
o = f.newObject(o)
}
return o, err
}
// Encrypt the data into wrappedIn // Encrypt the data into wrappedIn
wrappedIn, encrypter, err := f.cipher.encryptData(in) wrappedIn, encrypter, err := f.cipher.encryptData(in)
if err != nil { if err != nil {
@@ -415,9 +352,6 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
// Find a hash the destination supports to compute a hash of // Find a hash the destination supports to compute a hash of
// the encrypted data // the encrypted data
ht := f.Fs.Hashes().GetOne() ht := f.Fs.Hashes().GetOne()
if ci.IgnoreChecksum {
ht = hash.None
}
var hasher *hash.MultiHasher var hasher *hash.MultiHasher
if ht != hash.None { if ht != hash.None {
hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht)) hasher, err = hash.NewMultiHasherTypes(hash.NewHashSet(ht))
@@ -445,18 +379,15 @@ func (f *Fs) put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options [
var dstHash string
dstHash, err = o.Hash(ctx, ht)
if err != nil {
return nil, fmt.Errorf("failed to read destination hash: %w", err)
return nil, errors.Wrap(err, "failed to read destination hash")
}
if srcHash != "" && dstHash != "" {
if srcHash != dstHash {
// remove object
err = o.Remove(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, fmt.Errorf("corrupted on transfer: %v crypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
}
fs.Debugf(src, "%v = %s OK", ht, srcHash)
if srcHash != "" && dstHash != "" && srcHash != dstHash {
// remove object
err = o.Remove(ctx)
if err != nil {
fs.Errorf(o, "Failed to remove corrupted object: %v", err)
}
return nil, errors.Errorf("corrupted on transfer: %v crypted hash differ %q vs %q", ht, srcHash, dstHash)
}
}
}
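In outline, the hunk above makes put() verify an upload by comparing the hash of the locally encrypted stream with the destination's reported hash and deleting the object on mismatch. A sketch of that check (the helper name is invented; the logic mirrors the diff):

func verifyUploadSketch(ctx context.Context, o fs.Object, ht hash.Type, srcHash, dstHash string) error {
    // Only compare when both hashes are available.
    if srcHash != "" && dstHash != "" && srcHash != dstHash {
        // remove the corrupted object before reporting the mismatch
        if removeErr := o.Remove(ctx); removeErr != nil {
            fs.Errorf(o, "Failed to remove corrupted object: %v", removeErr)
        }
        return fmt.Errorf("corrupted on transfer: %v crypted hash differ src %q vs dst %q", ht, srcHash, dstHash)
    }
    return nil
}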
@@ -496,25 +427,25 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return f.Fs.Rmdir(ctx, f.cipher.EncryptDirName(dir)) return f.Fs.Rmdir(ctx, f.cipher.EncryptDirName(dir))
} }
// Purge all files in the directory specified // Purge all files in the root and the root directory
// //
// Implement this if you have a way of deleting all the files // Implement this if you have a way of deleting all the files
// quicker than just running Remove() on the result of List() // quicker than just running Remove() on the result of List()
// //
// Return an error if it doesn't exist // Return an error if it doesn't exist
func (f *Fs) Purge(ctx context.Context, dir string) error { func (f *Fs) Purge(ctx context.Context) error {
do := f.Fs.Features().Purge do := f.Fs.Features().Purge
if do == nil { if do == nil {
return fs.ErrorCantPurge return fs.ErrorCantPurge
} }
return do(ctx, f.cipher.EncryptDirName(dir)) return do(ctx)
} }
// Copy src to this remote using server-side copy operations. // Copy src to this remote using server side copy operations.
// //
// This is stored with the remote path given. // This is stored with the remote path given
// //
// It returns the destination Object and a possible error. // It returns the destination Object and a possible error
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -535,11 +466,11 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
return f.newObject(oResult), nil return f.newObject(oResult), nil
} }
// Move src to this remote using server-side move operations. // Move src to this remote using server side move operations.
// //
// This is stored with the remote path given. // This is stored with the remote path given
// //
// It returns the destination Object and a possible error. // It returns the destination Object and a possible error
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -561,7 +492,7 @@ func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object,
} }
// DirMove moves src, srcRemote to this remote at dstRemote // DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations. // using server side move operations.
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
@@ -608,7 +539,7 @@ func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo,
func (f *Fs) CleanUp(ctx context.Context) error { func (f *Fs) CleanUp(ctx context.Context) error {
do := f.Fs.Features().CleanUp do := f.Fs.Features().CleanUp
if do == nil { if do == nil {
return errors.New("not supported by underlying remote") return errors.New("can't CleanUp")
} }
return do(ctx) return do(ctx)
} }
@@ -617,7 +548,7 @@ func (f *Fs) CleanUp(ctx context.Context) error {
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) { func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
do := f.Fs.Features().About do := f.Fs.Features().About
if do == nil { if do == nil {
return nil, errors.New("not supported by underlying remote") return nil, errors.New("About not supported")
} }
return do(ctx) return do(ctx)
} }
@@ -655,24 +586,24 @@ func (f *Fs) computeHashWithNonce(ctx context.Context, nonce nonce, src fs.Objec
// Open the src for input // Open the src for input
in, err := src.Open(ctx) in, err := src.Open(ctx)
if err != nil { if err != nil {
return "", fmt.Errorf("failed to open src: %w", err) return "", errors.Wrap(err, "failed to open src")
} }
defer fs.CheckClose(in, &err) defer fs.CheckClose(in, &err)
// Now encrypt the src with the nonce // Now encrypt the src with the nonce
out, err := f.cipher.newEncrypter(in, &nonce) out, err := f.cipher.newEncrypter(in, &nonce)
if err != nil { if err != nil {
return "", fmt.Errorf("failed to make encrypter: %w", err) return "", errors.Wrap(err, "failed to make encrypter")
} }
// pipe into hash // pipe into hash
m, err := hash.NewMultiHasherTypes(hash.NewHashSet(hashType)) m, err := hash.NewMultiHasherTypes(hash.NewHashSet(hashType))
if err != nil { if err != nil {
return "", fmt.Errorf("failed to make hasher: %w", err) return "", errors.Wrap(err, "failed to make hasher")
} }
_, err = io.Copy(m, out) _, err = io.Copy(m, out)
if err != nil { if err != nil {
return "", fmt.Errorf("failed to hash data: %w", err) return "", errors.Wrap(err, "failed to hash data")
} }
return m.Sums()[hashType], nil return m.Sums()[hashType], nil
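The error handling in this hunk moves from github.com/pkg/errors to the standard library: errors.Wrap(err, "msg") becomes fmt.Errorf("msg: %w", err). A minimal sketch of why the two are interchangeable for callers (openSrc and errNotFound are made up for illustration):

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// openSrc is a hypothetical stand-in for the calls above; the point is the
// wrapping style, not the function itself.
func openSrc() error {
	// fmt.Errorf with %w (Go 1.13+) keeps the error chain intact, just as
	// errors.Wrap from github.com/pkg/errors did.
	return fmt.Errorf("failed to open src: %w", errNotFound)
}

func main() {
	err := openSrc()
	fmt.Println(errors.Is(err, errNotFound)) // true
}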
@@ -683,20 +614,16 @@ func (f *Fs) computeHashWithNonce(ctx context.Context, nonce nonce, src fs.Objec
// //
// Note that we break lots of encapsulation in this function. // Note that we break lots of encapsulation in this function.
func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) { func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType hash.Type) (hashStr string, err error) {
if f.opt.NoDataEncryption {
return src.Hash(ctx, hashType)
}
// Read the nonce - opening the file is sufficient to read the nonce in // Read the nonce - opening the file is sufficient to read the nonce in
// use a limited read so we only read the header // use a limited read so we only read the header
in, err := o.Object.Open(ctx, &fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1}) in, err := o.Object.Open(ctx, &fs.RangeOption{Start: 0, End: int64(fileHeaderSize) - 1})
if err != nil { if err != nil {
return "", fmt.Errorf("failed to open object to read nonce: %w", err) return "", errors.Wrap(err, "failed to open object to read nonce")
} }
d, err := f.cipher.newDecrypter(in) d, err := f.cipher.newDecrypter(in)
if err != nil { if err != nil {
_ = in.Close() _ = in.Close()
return "", fmt.Errorf("failed to open object to read nonce: %w", err) return "", errors.Wrap(err, "failed to open object to read nonce")
} }
nonce := d.nonce nonce := d.nonce
// fs.Debugf(o, "Read nonce % 2x", nonce) // fs.Debugf(o, "Read nonce % 2x", nonce)
@@ -715,7 +642,7 @@ func (f *Fs) ComputeHash(ctx context.Context, o *Object, src fs.Object, hashType
// Close d (and hence in) once we have read the nonce // Close d (and hence in) once we have read the nonce
err = d.Close() err = d.Close()
if err != nil { if err != nil {
return "", fmt.Errorf("failed to close nonce read: %w", err) return "", errors.Wrap(err, "failed to close nonce read")
} }
return f.computeHashWithNonce(ctx, nonce, src, hashType) return f.computeHashWithNonce(ctx, nonce, src, hashType)
@@ -834,7 +761,7 @@ func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[str
for _, encryptedFileName := range arg { for _, encryptedFileName := range arg {
fileName, err := f.DecryptFileName(encryptedFileName) fileName, err := f.DecryptFileName(encryptedFileName)
if err != nil { if err != nil {
return out, fmt.Errorf("failed to decrypt: %s: %w", encryptedFileName, err) return out, errors.Wrap(err, fmt.Sprintf("Failed to decrypt : %s", encryptedFileName))
} }
out = append(out, fileName) out = append(out, fileName)
} }
@@ -892,13 +819,9 @@ func (o *Object) Remote() string {
// Size returns the size of the file // Size returns the size of the file
func (o *Object) Size() int64 { func (o *Object) Size() int64 {
size := o.Object.Size() size, err := o.f.cipher.DecryptedSize(o.Object.Size())
if !o.f.opt.NoDataEncryption { if err != nil {
var err error fs.Debugf(o, "Bad size for decrypt: %v", err)
size, err = o.f.cipher.DecryptedSize(size)
if err != nil {
fs.Debugf(o, "Bad size for decrypt: %v", err)
}
} }
return size return size
} }
@@ -916,10 +839,6 @@ func (o *Object) UnWrap() fs.Object {
// Open opens the file for read. Call Close() on the returned io.ReadCloser // Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) { func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (rc io.ReadCloser, err error) {
if o.f.opt.NoDataEncryption {
return o.Object.Open(ctx, options...)
}
var openOptions []fs.OpenOption var openOptions []fs.OpenOption
var offset, limit int64 = 0, -1 var offset, limit int64 = 0, -1
for _, option := range options { for _, option := range options {
@@ -995,16 +914,6 @@ func (f *Fs) Disconnect(ctx context.Context) error {
return do(ctx) return do(ctx)
} }
// Shutdown the backend, closing any background tasks and any
// cached connections.
func (f *Fs) Shutdown(ctx context.Context) error {
do := f.Fs.Features().Shutdown
if do == nil {
return nil
}
return do(ctx)
}
// ObjectInfo describes a wrapped fs.ObjectInfo for being the source // ObjectInfo describes a wrapped fs.ObjectInfo for being the source
// //
// This encrypts the remote name and adjusts the size // This encrypts the remote name and adjusts the size
@@ -1038,9 +947,6 @@ func (o *ObjectInfo) Size() int64 {
if size < 0 { if size < 0 {
return size return size
} }
if o.f.opt.NoDataEncryption {
return size
}
return o.f.cipher.EncryptedSize(size) return o.f.cipher.EncryptedSize(size)
} }
@@ -1052,11 +958,10 @@ func (o *ObjectInfo) Hash(ctx context.Context, hash hash.Type) (string, error) {
// Get the underlying object if there is one // Get the underlying object if there is one
if srcObj, ok = o.ObjectInfo.(fs.Object); ok { if srcObj, ok = o.ObjectInfo.(fs.Object); ok {
// Prefer direct interface assertion // Prefer direct interface assertion
} else if do, ok := o.ObjectInfo.(*fs.OverrideRemote); ok { } else if do, ok := o.ObjectInfo.(fs.ObjectUnWrapper); ok {
// Unwrap if it is an operations.OverrideRemote // Otherwise likely is an operations.OverrideRemote
srcObj = do.UnWrap() srcObj = do.UnWrap()
} else { } else {
// Otherwise don't unwrap any further
return "", nil return "", nil
} }
// if this is wrapping a local object then we work out the hash // if this is wrapping a local object then we work out the hash
@@ -1068,50 +973,6 @@ func (o *ObjectInfo) Hash(ctx context.Context, hash hash.Type) (string, error) {
return "", nil return "", nil
} }
// GetTier returns storage tier or class of the Object
func (o *ObjectInfo) GetTier() string {
do, ok := o.ObjectInfo.(fs.GetTierer)
if !ok {
return ""
}
return do.GetTier()
}
// ID returns the ID of the Object if known, or "" if not
func (o *ObjectInfo) ID() string {
do, ok := o.ObjectInfo.(fs.IDer)
if !ok {
return ""
}
return do.ID()
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *ObjectInfo) Metadata(ctx context.Context) (fs.Metadata, error) {
do, ok := o.ObjectInfo.(fs.Metadataer)
if !ok {
return nil, nil
}
return do.Metadata(ctx)
}
// MimeType returns the content type of the Object if
// known, or "" if not
//
// This is deliberately unsupported so we don't leak mime type info by
// default.
func (o *ObjectInfo) MimeType(ctx context.Context) string {
return ""
}
// UnWrap returns the Object that this Object is wrapping or
// nil if it isn't wrapping anything
func (o *ObjectInfo) UnWrap() fs.Object {
return fs.UnWrapObjectInfo(o.ObjectInfo)
}
// ID returns the ID of the Object if known, or "" if not // ID returns the ID of the Object if known, or "" if not
func (o *Object) ID() string { func (o *Object) ID() string {
do, ok := o.Object.(fs.IDer) do, ok := o.Object.(fs.IDer)
@@ -1140,26 +1001,6 @@ func (o *Object) GetTier() string {
return do.GetTier() return do.GetTier()
} }
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
do, ok := o.Object.(fs.Metadataer)
if !ok {
return nil, nil
}
return do.Metadata(ctx)
}
// MimeType returns the content type of the Object if
// known, or "" if not
//
// This is deliberately unsupported so we don't leak mime type info by
// default.
func (o *Object) MimeType(ctx context.Context) string {
return ""
}
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = (*Fs)(nil) _ fs.Fs = (*Fs)(nil)
@@ -1181,7 +1022,10 @@ var (
_ fs.PublicLinker = (*Fs)(nil) _ fs.PublicLinker = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil) _ fs.UserInfoer = (*Fs)(nil)
_ fs.Disconnecter = (*Fs)(nil) _ fs.Disconnecter = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil) _ fs.ObjectInfo = (*ObjectInfo)(nil)
_ fs.FullObjectInfo = (*ObjectInfo)(nil) _ fs.Object = (*Object)(nil)
_ fs.FullObject = (*Object)(nil) _ fs.ObjectUnWrapper = (*Object)(nil)
_ fs.IDer = (*Object)(nil)
_ fs.SetTierer = (*Object)(nil)
_ fs.GetTierer = (*Object)(nil)
) )
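The var block above uses the standard Go idiom for compile-time interface checks: assigning a typed nil pointer to the blank identifier fails the build as soon as *Fs or *Object stops satisfying one of the listed interfaces. A minimal sketch with hypothetical types:

package main

import "fmt"

// Purger is a stand-in for an optional interface such as fs.Purger.
type Purger interface {
	Purge() error
}

type Fs struct{}

func (f *Fs) Purge() error { return nil }

// Compile-time check: this line stops building if *Fs loses the Purge method.
var _ Purger = (*Fs)(nil)

func main() { fmt.Println("ok") }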


@@ -17,28 +17,41 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
type testWrapper struct {
fs.ObjectInfo
}
// UnWrap returns the Object that this Object is wrapping or nil if it
// isn't wrapping anything
func (o testWrapper) UnWrap() fs.Object {
if o, ok := o.ObjectInfo.(fs.Object); ok {
return o
}
return nil
}
// Create a temporary local fs to upload things from // Create a temporary local fs to upload things from
func makeTempLocalFs(t *testing.T) (localFs fs.Fs) { func makeTempLocalFs(t *testing.T) (localFs fs.Fs, cleanup func()) {
localFs, err := fs.TemporaryLocalFs(context.Background()) localFs, err := fs.TemporaryLocalFs()
require.NoError(t, err) require.NoError(t, err)
t.Cleanup(func() { cleanup = func() {
require.NoError(t, localFs.Rmdir(context.Background(), "")) require.NoError(t, localFs.Rmdir(context.Background(), ""))
}) }
return localFs return localFs, cleanup
} }
// Upload a file to a remote // Upload a file to a remote
func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object) { func uploadFile(t *testing.T, f fs.Fs, remote, contents string) (obj fs.Object, cleanup func()) {
inBuf := bytes.NewBufferString(contents) inBuf := bytes.NewBufferString(contents)
t1 := time.Date(2012, time.December, 17, 18, 32, 31, 0, time.UTC) t1 := time.Date(2012, time.December, 17, 18, 32, 31, 0, time.UTC)
upSrc := object.NewStaticObjectInfo(remote, t1, int64(len(contents)), true, nil, nil) upSrc := object.NewStaticObjectInfo(remote, t1, int64(len(contents)), true, nil, nil)
obj, err := f.Put(context.Background(), inBuf, upSrc) obj, err := f.Put(context.Background(), inBuf, upSrc)
require.NoError(t, err) require.NoError(t, err)
t.Cleanup(func() { cleanup = func() {
require.NoError(t, obj.Remove(context.Background())) require.NoError(t, obj.Remove(context.Background()))
}) }
return obj return obj, cleanup
} }
// Test the ObjectInfo // Test the ObjectInfo
@@ -52,9 +65,11 @@ func testObjectInfo(t *testing.T, f *Fs, wrap bool) {
path = "_wrap" path = "_wrap"
} }
localFs := makeTempLocalFs(t) localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
obj := uploadFile(t, localFs, path, contents) obj, cleanupObj := uploadFile(t, localFs, path, contents)
defer cleanupObj()
// encrypt the data // encrypt the data
inBuf := bytes.NewBufferString(contents) inBuf := bytes.NewBufferString(contents)
@@ -68,17 +83,15 @@ func testObjectInfo(t *testing.T, f *Fs, wrap bool) {
var oi fs.ObjectInfo = obj var oi fs.ObjectInfo = obj
if wrap { if wrap {
// wrap the object in an fs.ObjectUnwrapper if required // wrap the object in an fs.ObjectUnwrapper if required
oi = fs.NewOverrideRemote(oi, "new_remote") oi = testWrapper{oi}
} }
// wrap the object in a crypt for upload using the nonce we // wrap the object in a crypt for upload using the nonce we
// saved from the encrypter // saved from the encryptor
src := f.newObjectInfo(oi, nonce) src := f.newObjectInfo(oi, nonce)
// Test ObjectInfo methods // Test ObjectInfo methods
if !f.opt.NoDataEncryption { assert.Equal(t, int64(outBuf.Len()), src.Size())
assert.Equal(t, int64(outBuf.Len()), src.Size())
}
assert.Equal(t, f, src.Fs()) assert.Equal(t, f, src.Fs())
assert.NotEqual(t, path, src.Remote()) assert.NotEqual(t, path, src.Remote())
@@ -101,13 +114,16 @@ func testComputeHash(t *testing.T, f *Fs) {
t.Skipf("%v: does not support hashes", f.Fs) t.Skipf("%v: does not support hashes", f.Fs)
} }
localFs := makeTempLocalFs(t) localFs, cleanupLocalFs := makeTempLocalFs(t)
defer cleanupLocalFs()
// Upload a file to localFs as a test object // Upload a file to localFs as a test object
localObj := uploadFile(t, localFs, path, contents) localObj, cleanupLocalObj := uploadFile(t, localFs, path, contents)
defer cleanupLocalObj()
// Upload the same data to the remote Fs also // Upload the same data to the remote Fs also
remoteObj := uploadFile(t, f, path, contents) remoteObj, cleanupRemoteObj := uploadFile(t, f, path, contents)
defer cleanupRemoteObj()
// Calculate the expected Hash of the remote object // Calculate the expected Hash of the remote object
computedHash, err := f.ComputeHash(ctx, remoteObj.(*Object), localObj, hashType) computedHash, err := f.ComputeHash(ctx, remoteObj.(*Object), localObj, hashType)
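The test changes above swap hand-rolled cleanup closures for t.Cleanup, which registers teardown with the testing package so it runs even when the test fails part-way through. A small sketch of the pattern (makeTempFile is a hypothetical helper, not rclone code):

package example

import (
	"os"
	"testing"
)

func makeTempFile(t *testing.T) string {
	f, err := os.CreateTemp("", "example-*")
	if err != nil {
		t.Fatal(err)
	}
	// Registered teardown runs after the test (and its subtests) finish,
	// so the call site no longer needs to defer a returned cleanup func.
	t.Cleanup(func() {
		_ = os.Remove(f.Name())
	})
	_ = f.Close()
	return f.Name()
}

func TestSomething(t *testing.T) {
	name := makeTempFile(t) // no explicit cleanup needed here
	if name == "" {
		t.Fatal("expected a file name")
	}
}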


@@ -4,7 +4,6 @@ package crypt_test
import ( import (
"os" "os"
"path/filepath" "path/filepath"
"runtime"
"testing" "testing"
"github.com/rclone/rclone/backend/crypt" "github.com/rclone/rclone/backend/crypt"
@@ -30,7 +29,7 @@ func TestIntegration(t *testing.T) {
} }
// TestStandard runs integration tests against the remote // TestStandard runs integration tests against the remote
func TestStandardBase32(t *testing.T) { func TestStandard(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
@@ -47,51 +46,6 @@ func TestStandardBase32(t *testing.T) {
}, },
UnimplementableFsMethods: []string{"OpenWriterAt"}, UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"}, UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
}
func TestStandardBase64(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
{Name: name, Key: "filename_encoding", Value: "base64"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
}
func TestStandardBase32768(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-standard")
name := "TestCrypt"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato")},
{Name: name, Key: "filename_encryption", Value: "standard"},
{Name: name, Key: "filename_encoding", Value: "base32768"},
},
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
}) })
} }
@@ -113,7 +67,6 @@ func TestOff(t *testing.T) {
}, },
UnimplementableFsMethods: []string{"OpenWriterAt"}, UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"}, UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
}) })
} }
@@ -122,9 +75,6 @@ func TestObfuscate(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set") t.Skip("Skipping as -remote set")
} }
if runtime.GOOS == "darwin" {
t.Skip("Skipping on macOS as obfuscating control characters makes filenames macOS can't cope with")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate") tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt3" name := "TestCrypt3"
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
@@ -139,33 +89,5 @@ func TestObfuscate(t *testing.T) {
SkipBadWindowsCharacters: true, SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"}, UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"}, UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
})
}
// TestNoDataObfuscate runs integration tests against the remote
func TestNoDataObfuscate(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("Skipping as -remote set")
}
if runtime.GOOS == "darwin" {
t.Skip("Skipping on macOS as obfuscating control characters makes filenames macOS can't cope with")
}
tempdir := filepath.Join(os.TempDir(), "rclone-crypt-test-obfuscate")
name := "TestCrypt4"
fstests.Run(t, &fstests.Opt{
RemoteName: name + ":",
NilObject: (*crypt.Object)(nil),
ExtraConfig: []fstests.ExtraConfigItem{
{Name: name, Key: "type", Value: "crypt"},
{Name: name, Key: "remote", Value: tempdir},
{Name: name, Key: "password", Value: obscure.MustObscure("potato2")},
{Name: name, Key: "filename_encryption", Value: "obfuscate"},
{Name: name, Key: "no_data_encryption", Value: "true"},
},
SkipBadWindowsCharacters: true,
UnimplementableFsMethods: []string{"OpenWriterAt"},
UnimplementableObjectMethods: []string{"MimeType"},
QuickTestOK: true,
}) })
} }


@@ -4,15 +4,15 @@
// buffers which are a multiple of an underlying crypto block size. // buffers which are a multiple of an underlying crypto block size.
package pkcs7 package pkcs7
import "errors" import "github.com/pkg/errors"
// Errors Unpad can return // Errors Unpad can return
var ( var (
ErrorPaddingNotFound = errors.New("bad PKCS#7 padding - not padded") ErrorPaddingNotFound = errors.New("Bad PKCS#7 padding - not padded")
ErrorPaddingNotAMultiple = errors.New("bad PKCS#7 padding - not a multiple of blocksize") ErrorPaddingNotAMultiple = errors.New("Bad PKCS#7 padding - not a multiple of blocksize")
ErrorPaddingTooLong = errors.New("bad PKCS#7 padding - too long") ErrorPaddingTooLong = errors.New("Bad PKCS#7 padding - too long")
ErrorPaddingTooShort = errors.New("bad PKCS#7 padding - too short") ErrorPaddingTooShort = errors.New("Bad PKCS#7 padding - too short")
ErrorPaddingNotAllTheSame = errors.New("bad PKCS#7 padding - not all the same") ErrorPaddingNotAllTheSame = errors.New("Bad PKCS#7 padding - not all the same")
) )
// Pad buf using PKCS#7 to a multiple of n. // Pad buf using PKCS#7 to a multiple of n.
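As the package comment says, PKCS#7 pads a buffer to a multiple of the block size by appending n copies of the byte n. A minimal worked example of that rule (not the package's own API):

package main

import "fmt"

// pad appends n copies of byte(n), where n = blockSize - len(buf)%blockSize.
// If buf is already a multiple of blockSize, a whole extra block of padding
// is added, which is what makes unpadding unambiguous.
func pad(blockSize int, buf []byte) []byte {
	n := blockSize - len(buf)%blockSize
	for i := 0; i < n; i++ {
		buf = append(buf, byte(n))
	}
	return buf
}

func main() {
	fmt.Printf("% x\n", pad(8, []byte("abc")))      // 61 62 63 05 05 05 05 05
	fmt.Printf("% x\n", pad(8, []byte("12345678"))) // a full extra block of 08s
}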

backend/drive/drive.go (1454 changed lines, Normal file → Executable file) — file diff suppressed because it is too large.


@@ -4,31 +4,22 @@ import (
"bytes" "bytes"
"context" "context"
"encoding/json" "encoding/json"
"errors"
"fmt"
"io" "io"
"io/ioutil"
"mime" "mime"
"os"
"path"
"path/filepath" "path/filepath"
"strings" "strings"
"testing" "testing"
"time"
"github.com/pkg/errors"
_ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/filter"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations" "github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fs/sync"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
"google.golang.org/api/drive/v3" "google.golang.org/api/drive/v3"
"google.golang.org/api/googleapi"
) )
func TestDriveScopes(t *testing.T) { func TestDriveScopes(t *testing.T) {
@@ -77,7 +68,7 @@ var additionalMimeTypes = map[string]string{
// Load the example export formats into exportFormats for testing // Load the example export formats into exportFormats for testing
func TestInternalLoadExampleFormats(t *testing.T) { func TestInternalLoadExampleFormats(t *testing.T) {
fetchFormatsOnce.Do(func() {}) fetchFormatsOnce.Do(func() {})
buf, err := os.ReadFile(filepath.FromSlash("test/about.json")) buf, err := ioutil.ReadFile(filepath.FromSlash("test/about.json"))
var about struct { var about struct {
ExportFormats map[string][]string `json:"exportFormats,omitempty"` ExportFormats map[string][]string `json:"exportFormats,omitempty"`
ImportFormats map[string][]string `json:"importFormats,omitempty"` ImportFormats map[string][]string `json:"importFormats,omitempty"`
@@ -115,7 +106,6 @@ func TestInternalParseExtensions(t *testing.T) {
} }
func TestInternalFindExportFormat(t *testing.T) { func TestInternalFindExportFormat(t *testing.T) {
ctx := context.Background()
item := &drive.File{ item := &drive.File{
Name: "file", Name: "file",
MimeType: "application/vnd.google-apps.document", MimeType: "application/vnd.google-apps.document",
@@ -133,7 +123,7 @@ func TestInternalFindExportFormat(t *testing.T) {
} { } {
f := new(Fs) f := new(Fs)
f.exportExtensions = test.extensions f.exportExtensions = test.extensions
gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(ctx, item) gotExtension, gotFilename, gotMimeType, gotIsDocument := f.findExportFormat(item)
assert.Equal(t, test.wantExtension, gotExtension) assert.Equal(t, test.wantExtension, gotExtension)
if test.wantExtension != "" { if test.wantExtension != "" {
assert.Equal(t, item.Name+gotExtension, gotFilename) assert.Equal(t, item.Name+gotExtension, gotFilename)
@@ -191,60 +181,6 @@ func TestExtensionsForImportFormats(t *testing.T) {
} }
} }
func (f *Fs) InternalTestShouldRetry(t *testing.T) {
ctx := context.Background()
gatewayTimeout := googleapi.Error{
Code: 503,
}
timeoutRetry, timeoutError := f.shouldRetry(ctx, &gatewayTimeout)
assert.True(t, timeoutRetry)
assert.Equal(t, &gatewayTimeout, timeoutError)
generic403 := googleapi.Error{
Code: 403,
}
rLEItem := googleapi.ErrorItem{
Reason: "rateLimitExceeded",
Message: "User rate limit exceeded.",
}
generic403.Errors = append(generic403.Errors, rLEItem)
oldStopUpload := f.opt.StopOnUploadLimit
oldStopDownload := f.opt.StopOnDownloadLimit
f.opt.StopOnUploadLimit = true
f.opt.StopOnDownloadLimit = true
defer func() {
f.opt.StopOnUploadLimit = oldStopUpload
f.opt.StopOnDownloadLimit = oldStopDownload
}()
expectedRLError := fserrors.FatalError(&generic403)
rateLimitRetry, rateLimitErr := f.shouldRetry(ctx, &generic403)
assert.False(t, rateLimitRetry)
assert.Equal(t, rateLimitErr, expectedRLError)
dQEItem := googleapi.ErrorItem{
Reason: "downloadQuotaExceeded",
}
generic403.Errors[0] = dQEItem
expectedDQError := fserrors.FatalError(&generic403)
downloadQuotaRetry, downloadQuotaError := f.shouldRetry(ctx, &generic403)
assert.False(t, downloadQuotaRetry)
assert.Equal(t, downloadQuotaError, expectedDQError)
tDFLEItem := googleapi.ErrorItem{
Reason: "teamDriveFileLimitExceeded",
}
generic403.Errors[0] = tDFLEItem
expectedTDFLError := fserrors.FatalError(&generic403)
teamDriveFileLimitRetry, teamDriveFileLimitError := f.shouldRetry(ctx, &generic403)
assert.False(t, teamDriveFileLimitRetry)
assert.Equal(t, teamDriveFileLimitError, expectedTDFLError)
qEItem := googleapi.ErrorItem{
Reason: "quotaExceeded",
}
generic403.Errors[0] = qEItem
expectedQuotaError := fserrors.FatalError(&generic403)
quotaExceededRetry, quotaExceededError := f.shouldRetry(ctx, &generic403)
assert.False(t, quotaExceededRetry)
assert.Equal(t, quotaExceededError, expectedQuotaError)
}
func (f *Fs) InternalTestDocumentImport(t *testing.T) { func (f *Fs) InternalTestDocumentImport(t *testing.T) {
oldAllow := f.opt.AllowImportNameChange oldAllow := f.opt.AllowImportNameChange
f.opt.AllowImportNameChange = true f.opt.AllowImportNameChange = true
@@ -255,7 +191,7 @@ func (f *Fs) InternalTestDocumentImport(t *testing.T) {
testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files")) testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files"))
require.NoError(t, err) require.NoError(t, err)
testFilesFs, err := fs.NewFs(context.Background(), testFilesPath) testFilesFs, err := fs.NewFs(testFilesPath)
require.NoError(t, err) require.NoError(t, err)
_, f.importMimeTypes, err = parseExtensions("odt,ods,doc") _, f.importMimeTypes, err = parseExtensions("odt,ods,doc")
@@ -269,7 +205,7 @@ func (f *Fs) InternalTestDocumentUpdate(t *testing.T) {
testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files")) testFilesPath, err := filepath.Abs(filepath.FromSlash("test/files"))
require.NoError(t, err) require.NoError(t, err)
testFilesFs, err := fs.NewFs(context.Background(), testFilesPath) testFilesFs, err := fs.NewFs(testFilesPath)
require.NoError(t, err) require.NoError(t, err)
_, f.importMimeTypes, err = parseExtensions("odt,ods,doc") _, f.importMimeTypes, err = parseExtensions("odt,ods,doc")
@@ -333,15 +269,14 @@ func (f *Fs) InternalTestDocumentLink(t *testing.T) {
} }
} }
const (
// from fstest/fstests/fstests.go
existingDir = "hello? sausage"
existingFile = `hello? sausage/êé/Hello, 世界/ " ' @ < > & ? + ≠/z.txt`
existingSubDir = "êé"
)
// TestIntegration/FsMkdir/FsPutFiles/Internal/Shortcuts // TestIntegration/FsMkdir/FsPutFiles/Internal/Shortcuts
func (f *Fs) InternalTestShortcuts(t *testing.T) { func (f *Fs) InternalTestShortcuts(t *testing.T) {
const (
// from fstest/fstests/fstests.go
existingDir = "hello? sausage"
existingFile = `hello? sausage/êé/Hello, 世界/ " ' @ < > & ? + ≠/z.txt`
existingSubDir = "êé"
)
ctx := context.Background() ctx := context.Background()
srcObj, err := f.NewObject(ctx, existingFile) srcObj, err := f.NewObject(ctx, existingFile)
require.NoError(t, err) require.NoError(t, err)
@@ -426,165 +361,6 @@ func (f *Fs) InternalTestShortcuts(t *testing.T) {
}) })
} }
// TestIntegration/FsMkdir/FsPutFiles/Internal/UnTrash
func (f *Fs) InternalTestUnTrash(t *testing.T) {
ctx := context.Background()
// Make some objects, one in a subdir
contents := random.String(100)
file1 := fstest.NewItem("trashDir/toBeTrashed", contents, time.Now())
obj1 := fstests.PutTestContents(ctx, t, f, &file1, contents, false)
file2 := fstest.NewItem("trashDir/subdir/toBeTrashed", contents, time.Now())
_ = fstests.PutTestContents(ctx, t, f, &file2, contents, false)
// Check objects
checkObjects := func() {
fstest.CheckListingWithRoot(t, f, "trashDir", []fstest.Item{
file1,
file2,
}, []string{
"trashDir/subdir",
}, f.Precision())
}
checkObjects()
// Make sure we are using the trash
require.Equal(t, true, f.opt.UseTrash)
// Remove the object and the dir
require.NoError(t, obj1.Remove(ctx))
require.NoError(t, f.Purge(ctx, "trashDir/subdir"))
// Check objects gone
fstest.CheckListingWithRoot(t, f, "trashDir", []fstest.Item{}, []string{}, f.Precision())
// Restore the object and directory
r, err := f.unTrashDir(ctx, "trashDir", true)
require.NoError(t, err)
assert.Equal(t, unTrashResult{Errors: 0, Untrashed: 2}, r)
// Check objects restored
checkObjects()
// Remove the test dir
require.NoError(t, f.Purge(ctx, "trashDir"))
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/CopyID
func (f *Fs) InternalTestCopyID(t *testing.T) {
ctx := context.Background()
obj, err := f.NewObject(ctx, existingFile)
require.NoError(t, err)
o := obj.(*Object)
dir := t.TempDir()
checkFile := func(name string) {
filePath := filepath.Join(dir, name)
fi, err := os.Stat(filePath)
require.NoError(t, err)
assert.Equal(t, int64(100), fi.Size())
err = os.Remove(filePath)
require.NoError(t, err)
}
t.Run("BadID", func(t *testing.T) {
err = f.copyID(ctx, "ID-NOT-FOUND", dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "couldn't find id")
})
t.Run("Directory", func(t *testing.T) {
rootID, err := f.dirCache.RootID(ctx, false)
require.NoError(t, err)
err = f.copyID(ctx, rootID, dir+"/")
require.Error(t, err)
assert.Contains(t, err.Error(), "can't copy directory")
})
t.Run("WithoutDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/")
require.NoError(t, err)
checkFile(path.Base(existingFile))
})
t.Run("WithDestName", func(t *testing.T) {
err = f.copyID(ctx, o.id, dir+"/potato.txt")
require.NoError(t, err)
checkFile("potato.txt")
})
}
// TestIntegration/FsMkdir/FsPutFiles/Internal/AgeQuery
func (f *Fs) InternalTestAgeQuery(t *testing.T) {
// Check set up for filtering
assert.True(t, f.Features().FilterAware)
opt := &filter.Opt{}
err := opt.MaxAge.Set("1h")
assert.NoError(t, err)
flt, err := filter.NewFilter(opt)
assert.NoError(t, err)
defCtx := context.Background()
fltCtx := filter.ReplaceConfig(defCtx, flt)
testCtx1 := fltCtx
testCtx2 := filter.SetUseFilter(testCtx1, true)
testCtx3, testCancel := context.WithCancel(testCtx2)
testCtx4 := filter.SetUseFilter(testCtx3, false)
testCancel()
assert.False(t, filter.GetUseFilter(testCtx1))
assert.True(t, filter.GetUseFilter(testCtx2))
assert.True(t, filter.GetUseFilter(testCtx3))
assert.False(t, filter.GetUseFilter(testCtx4))
subRemote := fmt.Sprintf("%s:%s/%s", f.Name(), f.Root(), "agequery-testdir")
subFsResult, err := fs.NewFs(defCtx, subRemote)
require.NoError(t, err)
subFs, isDriveFs := subFsResult.(*Fs)
require.True(t, isDriveFs)
tempDir1 := t.TempDir()
tempFs1, err := fs.NewFs(defCtx, tempDir1)
require.NoError(t, err)
tempDir2 := t.TempDir()
tempFs2, err := fs.NewFs(defCtx, tempDir2)
require.NoError(t, err)
file1 := fstest.Item{ModTime: time.Now(), Path: "agequery.txt"}
_ = fstests.PutTestContents(defCtx, t, tempFs1, &file1, "abcxyz", true)
// validate sync/copy
const timeQuery = "(modifiedTime >= '"
assert.NoError(t, sync.CopyDir(defCtx, subFs, tempFs1, false))
assert.NotContains(t, subFs.lastQuery, timeQuery)
assert.NoError(t, sync.CopyDir(fltCtx, subFs, tempFs1, false))
assert.Contains(t, subFs.lastQuery, timeQuery)
assert.NoError(t, sync.CopyDir(fltCtx, tempFs2, subFs, false))
assert.Contains(t, subFs.lastQuery, timeQuery)
assert.NoError(t, sync.CopyDir(defCtx, tempFs2, subFs, false))
assert.NotContains(t, subFs.lastQuery, timeQuery)
// validate list/walk
devNull, errOpen := os.OpenFile(os.DevNull, os.O_WRONLY, 0)
require.NoError(t, errOpen)
defer func() {
_ = devNull.Close()
}()
assert.NoError(t, operations.List(defCtx, subFs, devNull))
assert.NotContains(t, subFs.lastQuery, timeQuery)
assert.NoError(t, operations.List(fltCtx, subFs, devNull))
assert.Contains(t, subFs.lastQuery, timeQuery)
}
func (f *Fs) InternalTest(t *testing.T) { func (f *Fs) InternalTest(t *testing.T) {
// These tests all depend on each other so run them as nested tests // These tests all depend on each other so run them as nested tests
t.Run("DocumentImport", func(t *testing.T) { t.Run("DocumentImport", func(t *testing.T) {
@@ -600,10 +376,6 @@ func (f *Fs) InternalTest(t *testing.T) {
}) })
}) })
t.Run("Shortcuts", f.InternalTestShortcuts) t.Run("Shortcuts", f.InternalTestShortcuts)
t.Run("UnTrash", f.InternalTestUnTrash)
t.Run("CopyID", f.InternalTestCopyID)
t.Run("AgeQuery", f.InternalTestAgeQuery)
t.Run("ShouldRetry", f.InternalTestShouldRetry)
} }
var _ fstests.InternalTester = (*Fs)(nil) var _ fstests.InternalTester = (*Fs)(nil)


@@ -77,10 +77,11 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
return false, err return false, err
} }
var req *http.Request var req *http.Request
req, err = http.NewRequestWithContext(ctx, method, urls, body) req, err = http.NewRequest(method, urls, body)
if err != nil { if err != nil {
return false, err return false, err
} }
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
googleapi.Expand(req.URL, map[string]string{ googleapi.Expand(req.URL, map[string]string{
"fileId": fileID, "fileId": fileID,
}) })
@@ -94,7 +95,7 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
defer googleapi.CloseBody(res) defer googleapi.CloseBody(res)
err = googleapi.CheckResponse(res) err = googleapi.CheckResponse(res)
} }
return f.shouldRetry(ctx, err) return f.shouldRetry(err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -113,7 +114,8 @@ func (f *Fs) Upload(ctx context.Context, in io.Reader, size int64, contentType,
// Make an http.Request for the range passed in // Make an http.Request for the range passed in
func (rx *resumableUpload) makeRequest(ctx context.Context, start int64, body io.ReadSeeker, reqSize int64) *http.Request { func (rx *resumableUpload) makeRequest(ctx context.Context, start int64, body io.ReadSeeker, reqSize int64) *http.Request {
req, _ := http.NewRequestWithContext(ctx, "POST", rx.URI, body) req, _ := http.NewRequest("POST", rx.URI, body)
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
req.ContentLength = reqSize req.ContentLength = reqSize
totalSize := "*" totalSize := "*"
if rx.ContentLength >= 0 { if rx.ContentLength >= 0 {
@@ -202,7 +204,7 @@ func (rx *resumableUpload) Upload(ctx context.Context) (*drive.File, error) {
err = rx.f.pacer.Call(func() (bool, error) { err = rx.f.pacer.Call(func() (bool, error) {
fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize) fs.Debugf(rx.remote, "Sending chunk %d length %d", start, reqSize)
StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize) StatusCode, err = rx.transferChunk(ctx, start, chunk, reqSize)
again, err := rx.f.shouldRetry(ctx, err) again, err := rx.f.shouldRetry(err)
if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK { if StatusCode == statusResumeIncomplete || StatusCode == http.StatusCreated || StatusCode == http.StatusOK {
again = false again = false
err = nil err = nil
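The upload changes above drop the pre-Go 1.13 two-step of http.NewRequest followed by req.WithContext in favour of http.NewRequestWithContext, which attaches the context when the request is built. A minimal sketch of the newer call:

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Go 1.13+: the request carries ctx, so cancellation and deadlines
	// propagate into the round trip without a separate WithContext call.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com", nil)
	if err != nil {
		fmt.Println(err)
		return
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}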


@@ -1,346 +0,0 @@
// This file contains the implementation of the sync batcher for uploads
//
// Dropbox rules say you can start as many batches as you want, but
// you may only have one batch being committed and must wait for the
// batch to be finished before committing another.
package dropbox
import (
"context"
"errors"
"fmt"
"sync"
"time"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/async"
"github.com/dropbox/dropbox-sdk-go-unofficial/v6/dropbox/files"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/atexit"
)
const (
maxBatchSize = 1000 // max size the batch can be
defaultTimeoutSync = 500 * time.Millisecond // kick off the batch if nothing added for this long (sync)
defaultTimeoutAsync = 10 * time.Second // kick off the batch if nothing added for this long (ssync)
defaultBatchSizeAsync = 100 // default batch size if async
)
// batcher holds info about the current items waiting for upload
type batcher struct {
f *Fs // Fs this batch is part of
mode string // configured batch mode
size int // maximum size for batch
timeout time.Duration // idle timeout for batch
async bool // whether we are using async batching
in chan batcherRequest // incoming items to batch
closed chan struct{} // close to indicate batcher shut down
atexit atexit.FnHandle // atexit handle
shutOnce sync.Once // make sure we shutdown once only
wg sync.WaitGroup // wait for shutdown
}
// batcherRequest holds an incoming request with a place for a reply
type batcherRequest struct {
commitInfo *files.UploadSessionFinishArg
result chan<- batcherResponse
}
// Return true if batcherRequest is the quit request
func (br *batcherRequest) isQuit() bool {
return br.commitInfo == nil
}
// Send this to get the engine to quit
var quitRequest = batcherRequest{}
// batcherResponse holds a response to be delivered to clients waiting
// for a batch to complete.
type batcherResponse struct {
err error
entry *files.FileMetadata
}
// newBatcher creates a new batcher structure
func newBatcher(ctx context.Context, f *Fs, mode string, size int, timeout time.Duration) (*batcher, error) {
// fs.Debugf(f, "Creating batcher with mode %q, size %d, timeout %v", mode, size, timeout)
if size > maxBatchSize || size < 0 {
return nil, fmt.Errorf("dropbox: batch size must be < %d and >= 0 - it is currently %d", maxBatchSize, size)
}
async := false
switch mode {
case "sync":
if size <= 0 {
ci := fs.GetConfig(ctx)
size = ci.Transfers
}
if timeout <= 0 {
timeout = defaultTimeoutSync
}
case "async":
if size <= 0 {
size = defaultBatchSizeAsync
}
if timeout <= 0 {
timeout = defaultTimeoutAsync
}
async = true
case "off":
size = 0
default:
return nil, fmt.Errorf("dropbox: batch mode must be sync|async|off not %q", mode)
}
b := &batcher{
f: f,
mode: mode,
size: size,
timeout: timeout,
async: async,
in: make(chan batcherRequest, size),
closed: make(chan struct{}),
}
if b.Batching() {
b.atexit = atexit.Register(b.Shutdown)
b.wg.Add(1)
go b.commitLoop(context.Background())
}
return b, nil
}
// Batching returns true if batching is active
func (b *batcher) Batching() bool {
return b.size > 0
}
// finishBatch commits the batch, returning a batch status to poll or maybe complete
func (b *batcher) finishBatch(ctx context.Context, items []*files.UploadSessionFinishArg) (complete *files.UploadSessionFinishBatchResult, err error) {
var arg = &files.UploadSessionFinishBatchArg{
Entries: items,
}
err = b.f.pacer.Call(func() (bool, error) {
complete, err = b.f.srv.UploadSessionFinishBatchV2(arg)
// If error is insufficient space then don't retry
if e, ok := err.(files.UploadSessionFinishAPIError); ok {
if e.EndpointError != nil && e.EndpointError.Path != nil && e.EndpointError.Path.Tag == files.WriteErrorInsufficientSpace {
err = fserrors.NoRetryError(err)
return false, err
}
}
// after the first chunk is uploaded, we retry everything
return err != nil, err
})
if err != nil {
return nil, fmt.Errorf("batch commit failed: %w", err)
}
return complete, nil
}
// finishBatchJobStatus waits for the batch to complete returning completed entries
func (b *batcher) finishBatchJobStatus(ctx context.Context, launchBatchStatus *files.UploadSessionFinishBatchLaunch) (complete *files.UploadSessionFinishBatchResult, err error) {
if launchBatchStatus.AsyncJobId == "" {
return nil, errors.New("wait for batch completion: empty job ID")
}
var batchStatus *files.UploadSessionFinishBatchJobStatus
sleepTime := 100 * time.Millisecond
const maxSleepTime = 1 * time.Second
startTime := time.Now()
try := 1
for {
remaining := time.Duration(b.f.opt.BatchCommitTimeout) - time.Since(startTime)
if remaining < 0 {
break
}
err = b.f.pacer.Call(func() (bool, error) {
batchStatus, err = b.f.srv.UploadSessionFinishBatchCheck(&async.PollArg{
AsyncJobId: launchBatchStatus.AsyncJobId,
})
return shouldRetry(ctx, err)
})
if err != nil {
fs.Debugf(b.f, "Wait for batch: sleeping for %v after error: %v: try %d remaining %v", sleepTime, err, try, remaining)
} else {
if batchStatus.Tag == "complete" {
fs.Debugf(b.f, "Upload batch completed in %v", time.Since(startTime))
return batchStatus.Complete, nil
}
fs.Debugf(b.f, "Wait for batch: sleeping for %v after status: %q: try %d remaining %v", sleepTime, batchStatus.Tag, try, remaining)
}
time.Sleep(sleepTime)
sleepTime *= 2
if sleepTime > maxSleepTime {
sleepTime = maxSleepTime
}
try++
}
if err == nil {
err = errors.New("batch didn't complete")
}
return nil, fmt.Errorf("wait for batch failed after %d tries in %v: %w", try, time.Since(startTime), err)
}
// commit a batch
func (b *batcher) commitBatch(ctx context.Context, items []*files.UploadSessionFinishArg, results []chan<- batcherResponse) (err error) {
// If commit fails then signal clients if sync
var signalled = b.async
defer func() {
if err != nil && signalled {
// Signal to clients that there was an error
for _, result := range results {
result <- batcherResponse{err: err}
}
}
}()
desc := fmt.Sprintf("%s batch length %d starting with: %s", b.mode, len(items), items[0].Commit.Path)
fs.Debugf(b.f, "Committing %s", desc)
// finalise the batch getting either a result or a job id to poll
complete, err := b.finishBatch(ctx, items)
if err != nil {
return err
}
// Check we got the right number of entries
entries := complete.Entries
if len(entries) != len(results) {
return fmt.Errorf("expecting %d items in batch but got %d", len(results), len(entries))
}
// Report results to clients
var (
errorTag = ""
errorCount = 0
)
for i := range results {
item := entries[i]
resp := batcherResponse{}
if item.Tag == "success" {
resp.entry = item.Success
} else {
errorCount++
errorTag = item.Tag
if item.Failure != nil {
errorTag = item.Failure.Tag
if item.Failure.LookupFailed != nil {
errorTag += "/" + item.Failure.LookupFailed.Tag
}
if item.Failure.Path != nil {
errorTag += "/" + item.Failure.Path.Tag
}
if item.Failure.PropertiesError != nil {
errorTag += "/" + item.Failure.PropertiesError.Tag
}
}
resp.err = fmt.Errorf("batch upload failed: %s", errorTag)
}
if !b.async {
results[i] <- resp
}
}
// Show signalled so no need to report error to clients from now on
signalled = true
// Report an error if any failed in the batch
if errorTag != "" {
return fmt.Errorf("batch had %d errors: last error: %s", errorCount, errorTag)
}
fs.Debugf(b.f, "Committed %s", desc)
return nil
}
// commitLoop runs the commit engine in the background
func (b *batcher) commitLoop(ctx context.Context) {
var (
items []*files.UploadSessionFinishArg // current batch of uncommitted files
results []chan<- batcherResponse // current batch of clients awaiting results
idleTimer = time.NewTimer(b.timeout)
commit = func() {
err := b.commitBatch(ctx, items, results)
if err != nil {
fs.Errorf(b.f, "%s batch commit: failed to commit batch length %d: %v", b.mode, len(items), err)
}
items, results = nil, nil
}
)
defer b.wg.Done()
defer idleTimer.Stop()
idleTimer.Stop()
outer:
for {
select {
case req := <-b.in:
if req.isQuit() {
break outer
}
items = append(items, req.commitInfo)
results = append(results, req.result)
idleTimer.Stop()
if len(items) >= b.size {
commit()
} else {
idleTimer.Reset(b.timeout)
}
case <-idleTimer.C:
if len(items) > 0 {
fs.Debugf(b.f, "Batch idle for %v so committing", b.timeout)
commit()
}
}
}
// commit any remaining items
if len(items) > 0 {
commit()
}
}
// Shutdown finishes any pending batches then shuts everything down
//
// Can be called from atexit handler
func (b *batcher) Shutdown() {
if !b.Batching() {
return
}
b.shutOnce.Do(func() {
atexit.Unregister(b.atexit)
fs.Infof(b.f, "Committing uploads - please wait...")
// show that batcher is shutting down
close(b.closed)
// quit the commitLoop by sending a quitRequest message
//
// Note that we don't close b.in because that will
// cause write to closed channel in Commit when we are
// exiting due to a signal.
b.in <- quitRequest
b.wg.Wait()
})
}
// Commit commits the file using a batch call, first adding it to the
// batch and then waiting for the batch to complete in a synchronous
// way if async is not set.
func (b *batcher) Commit(ctx context.Context, commitInfo *files.UploadSessionFinishArg) (entry *files.FileMetadata, err error) {
select {
case <-b.closed:
return nil, fserrors.FatalError(errors.New("batcher is shutting down"))
default:
}
fs.Debugf(b.f, "Adding %q to batch", commitInfo.Commit.Path)
resp := make(chan batcherResponse, 1)
b.in <- batcherRequest{
commitInfo: commitInfo,
result: resp,
}
// If running async then don't wait for the result
if b.async {
return nil, nil
}
result := <-resp
return result.entry, result.err
}
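The header comment of this file states the constraint the batcher enforces: any number of batches may be filling, but only one may be committing at a time. commitLoop implements that with one goroutine draining a channel, an idle timer, and a size threshold. A stripped-down sketch of that select/timer pattern (types and values simplified, not the rclone API):

package main

import (
	"fmt"
	"time"
)

// commitLoop gathers items from in and commits them when either the batch is
// full or no new item has arrived for idle. A nil item signals shutdown.
func commitLoop(in <-chan *string, size int, idle time.Duration) {
	var batch []string
	timer := time.NewTimer(idle)
	timer.Stop()
	commit := func() {
		fmt.Printf("committing %d item(s)\n", len(batch))
		batch = nil
	}
	for {
		select {
		case item := <-in:
			if item == nil {
				if len(batch) > 0 {
					commit()
				}
				return
			}
			batch = append(batch, *item)
			timer.Stop()
			if len(batch) >= size {
				commit()
			} else {
				timer.Reset(idle)
			}
		case <-timer.C:
			if len(batch) > 0 {
				commit()
			}
		}
	}
}

func main() {
	in := make(chan *string, 10)
	go func() {
		for _, s := range []string{"a", "b", "c"} {
			s := s
			in <- &s
		}
		in <- nil // quit request
	}()
	commitLoop(in, 2, 100*time.Millisecond)
}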

backend/dropbox/dropbox.go (1149 changed lines, Normal file → Executable file) — file diff suppressed because it is too large.


@@ -1,44 +0,0 @@
package dropbox
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestInternalCheckPathLength(t *testing.T) {
rep := func(n int, r rune) (out string) {
rs := make([]rune, n)
for i := range rs {
rs[i] = r
}
return string(rs)
}
for _, test := range []struct {
in string
ok bool
}{
{in: "", ok: true},
{in: rep(maxFileNameLength, 'a'), ok: true},
{in: rep(maxFileNameLength+1, 'a'), ok: false},
{in: rep(maxFileNameLength, '£'), ok: true},
{in: rep(maxFileNameLength+1, '£'), ok: false},
{in: rep(maxFileNameLength, '☺'), ok: true},
{in: rep(maxFileNameLength+1, '☺'), ok: false},
{in: rep(maxFileNameLength, '你'), ok: true},
{in: rep(maxFileNameLength+1, '你'), ok: false},
{in: "/ok/ok", ok: true},
{in: "/ok/" + rep(maxFileNameLength, 'a') + "/ok", ok: true},
{in: "/ok/" + rep(maxFileNameLength+1, 'a') + "/ok", ok: false},
{in: "/ok/" + rep(maxFileNameLength, '£') + "/ok", ok: true},
{in: "/ok/" + rep(maxFileNameLength+1, '£') + "/ok", ok: false},
{in: "/ok/" + rep(maxFileNameLength, '☺') + "/ok", ok: true},
{in: "/ok/" + rep(maxFileNameLength+1, '☺') + "/ok", ok: false},
{in: "/ok/" + rep(maxFileNameLength, '你') + "/ok", ok: true},
{in: "/ok/" + rep(maxFileNameLength+1, '你') + "/ok", ok: false},
} {
err := checkPathLength(test.in)
assert.Equal(t, test.ok, err == nil, test.in)
}
}


@@ -2,16 +2,13 @@ package fichier
import ( import (
"context" "context"
"errors"
"fmt"
"io" "io"
"net/http" "net/http"
"net/url"
"regexp" "regexp"
"strconv" "strconv"
"strings"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/fserrors" "github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
@@ -28,95 +25,18 @@ var retryErrorCodes = []int{
509, // Bandwidth Limit Exceeded 509, // Bandwidth Limit Exceeded
} }
var errorRegex = regexp.MustCompile(`#\d{1,3}`)
func parseFichierError(err error) int {
matches := errorRegex.FindStringSubmatch(err.Error())
if len(matches) == 0 {
return 0
}
code, err := strconv.Atoi(matches[0])
if err != nil {
fs.Debugf(nil, "failed parsing fichier error: %v", err)
return 0
}
return code
}
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) { func shouldRetry(resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
// 1Fichier uses HTTP error code 403 (Forbidden) for all kinds of errors with
// responses looking like this: "{\"message\":\"Flood detected: IP Locked #374\",\"status\":\"KO\"}"
//
// We attempt to parse the actual 1Fichier error code from this body and handle it accordingly
// Most importantly #374 (Flood detected: IP locked) which the integration tests provoke
// The list below is far from complete and should be expanded if we see any more error codes.
if err != nil {
switch parseFichierError(err) {
case 93:
return false, err // No such user
case 186:
return false, err // IP blocked?
case 374:
fs.Debugf(nil, "Sleeping for 30 seconds due to: %v", err)
time.Sleep(30 * time.Second)
default:
}
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString var isAlphaNumeric = regexp.MustCompile(`^[a-zA-Z0-9]+$`).MatchString
func (f *Fs) createObject(ctx context.Context, remote string) (o *Object, leaf string, directoryID string, err error) {
// Create the directory for the object if it doesn't exist
leaf, directoryID, err = f.dirCache.FindPath(ctx, remote, true)
if err != nil {
return
}
// Temporary Object under construction
o = &Object{
fs: f,
remote: remote,
}
return o, leaf, directoryID, nil
}
func (f *Fs) readFileInfo(ctx context.Context, url string) (*File, error) {
request := FileInfoRequest{
URL: url,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/info.cgi",
}
var file File
err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &file)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't read file info: %w", err)
}
return &file, err
}
// maybe do some actual validation later if necessary
func validToken(token *GetTokenResponse) bool {
return token.Status == "OK"
}
func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) { func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenResponse, error) {
request := DownloadRequest{ request := DownloadRequest{
URL: url, URL: url,
Single: 1, Single: 1,
Pass: f.opt.FilePassword,
} }
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
@@ -126,11 +46,10 @@ func (f *Fs) getDownloadToken(ctx context.Context, url string) (*GetTokenRespons
var token GetTokenResponse var token GetTokenResponse
err := f.pacer.Call(func() (bool, error) { err := f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, &token) resp, err := f.rest.CallJSON(ctx, &opts, &request, &token)
doretry, err := shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
return doretry || !validToken(&token), err
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err) return nil, errors.Wrap(err, "couldn't list files")
} }
return &token, nil return &token, nil
@@ -146,25 +65,19 @@ func fileFromSharedFile(file *SharedFile) File {
func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntries, err error) { func (f *Fs) listSharedFiles(ctx context.Context, id string) (entries fs.DirEntries, err error) {
opts := rest.Opts{ opts := rest.Opts{
Method: "GET", Method: "GET",
RootURL: "https://1fichier.com/dir/", RootURL: "https://1fichier.com/dir/",
Path: id, Path: id,
Parameters: map[string][]string{"json": {"1"}}, Parameters: map[string][]string{"json": {"1"}},
ContentType: "application/x-www-form-urlencoded",
}
if f.opt.FolderPassword != "" {
opts.Method = "POST"
opts.Parameters = nil
opts.Body = strings.NewReader("json=1&pass=" + url.QueryEscape(f.opt.FolderPassword))
} }
var sharedFiles SharedFolderResponse var sharedFiles SharedFolderResponse
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles) resp, err := f.rest.CallJSON(ctx, &opts, nil, &sharedFiles)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err) return nil, errors.Wrap(err, "couldn't list files")
} }
entries = make([]fs.DirEntry, len(sharedFiles)) entries = make([]fs.DirEntry, len(sharedFiles))
@@ -190,10 +103,10 @@ func (f *Fs) listFiles(ctx context.Context, directoryID int) (filesList *FilesLi
filesList = &FilesList{} filesList = &FilesList{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList) resp, err := f.rest.CallJSON(ctx, &opts, &request, filesList)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't list files: %w", err) return nil, errors.Wrap(err, "couldn't list files")
} }
for i := range filesList.Items { for i := range filesList.Items {
item := &filesList.Items[i] item := &filesList.Items[i]
@@ -218,10 +131,10 @@ func (f *Fs) listFolders(ctx context.Context, directoryID int) (foldersList *Fol
foldersList = &FoldersList{} foldersList = &FoldersList{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList) resp, err := f.rest.CallJSON(ctx, &opts, &request, foldersList)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't list folders: %w", err) return nil, errors.Wrap(err, "couldn't list folders")
} }
foldersList.Name = f.opt.Enc.ToStandardName(foldersList.Name) foldersList.Name = f.opt.Enc.ToStandardName(foldersList.Name)
for i := range foldersList.SubFolders { for i := range foldersList.SubFolders {
@@ -312,10 +225,10 @@ func (f *Fs) makeFolder(ctx context.Context, leaf string, folderID int) (respons
response = &MakeFolderResponse{} response = &MakeFolderResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, &request, response) resp, err := f.rest.CallJSON(ctx, &opts, &request, response)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't create folder: %w", err) return nil, errors.Wrap(err, "couldn't create folder")
} }
// fs.Debugf(f, "Created Folder `%s` in id `%s`", name, directoryID) // fs.Debugf(f, "Created Folder `%s` in id `%s`", name, directoryID)
@@ -339,13 +252,13 @@ func (f *Fs) removeFolder(ctx context.Context, name string, folderID int) (respo
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(ctx, &opts, request, response) resp, err = f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't remove folder: %w", err) return nil, errors.Wrap(err, "couldn't remove folder")
} }
if response.Status != "OK" { if response.Status != "OK" {
return nil, fmt.Errorf("can't remove folder: %s", response.Message) return nil, errors.New("Can't remove non-empty dir")
} }
// fs.Debugf(f, "Removed Folder with id `%s`", directoryID) // fs.Debugf(f, "Removed Folder with id `%s`", directoryID)
@@ -368,11 +281,11 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
response = &GenericOKResponse{} response = &GenericOKResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response) resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't remove file: %w", err) return nil, errors.Wrap(err, "couldn't remove file")
} }
// fs.Debugf(f, "Removed file with url `%s`", url) // fs.Debugf(f, "Removed file with url `%s`", url)
@@ -380,84 +293,6 @@ func (f *Fs) deleteFile(ctx context.Context, url string) (response *GenericOKRes
return response, nil return response, nil
} }
func (f *Fs) moveFile(ctx context.Context, url string, folderID int, rename string) (response *MoveFileResponse, err error) {
request := &MoveFileRequest{
URLs: []string{url},
FolderID: folderID,
Rename: rename,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/mv.cgi",
}
response = &MoveFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err)
}
return response, nil
}
func (f *Fs) copyFile(ctx context.Context, url string, folderID int, rename string) (response *CopyFileResponse, err error) {
request := &CopyFileRequest{
URLs: []string{url},
FolderID: folderID,
Rename: rename,
}
opts := rest.Opts{
Method: "POST",
Path: "/file/cp.cgi",
}
response = &CopyFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't copy file: %w", err)
}
return response, nil
}
func (f *Fs) renameFile(ctx context.Context, url string, newName string) (response *RenameFileResponse, err error) {
request := &RenameFileRequest{
URLs: []RenameFileURL{
{
URL: url,
Filename: newName,
},
},
}
opts := rest.Opts{
Method: "POST",
Path: "/file/rename.cgi",
}
response = &RenameFileResponse{}
err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, request, response)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("couldn't rename file: %w", err)
}
return response, nil
}
func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) { func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse, err error) {
// fs.Debugf(f, "Requesting Upload node") // fs.Debugf(f, "Requesting Upload node")
@@ -470,10 +305,10 @@ func (f *Fs) getUploadNode(ctx context.Context) (response *GetUploadNodeResponse
response = &GetUploadNodeResponse{} response = &GetUploadNodeResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response) resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("didnt got an upload node: %w", err) return nil, errors.Wrap(err, "didnt got an upload node")
} }
// fs.Debugf(f, "Got Upload node") // fs.Debugf(f, "Got Upload node")
@@ -487,7 +322,7 @@ func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName,
fileName = f.opt.Enc.FromStandardName(fileName) fileName = f.opt.Enc.FromStandardName(fileName)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) { if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("invalid UploadID") return nil, errors.New("Invalid UploadID")
} }
opts := rest.Opts{ opts := rest.Opts{
@@ -513,11 +348,11 @@ func (f *Fs) uploadFile(ctx context.Context, in io.Reader, size int64, fileName,
err = f.pacer.CallNoRetry(func() (bool, error) { err = f.pacer.CallNoRetry(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, nil) resp, err := f.rest.CallJSON(ctx, &opts, nil, nil)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't upload file: %w", err) return nil, errors.Wrap(err, "couldn't upload file")
} }
// fs.Debugf(f, "Uploaded File `%s`", fileName) // fs.Debugf(f, "Uploaded File `%s`", fileName)
@@ -529,7 +364,7 @@ func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (re
// fs.Debugf(f, "Ending File Upload `%s`", uploadID) // fs.Debugf(f, "Ending File Upload `%s`", uploadID)
if len(uploadID) > 10 || !isAlphaNumeric(uploadID) { if len(uploadID) > 10 || !isAlphaNumeric(uploadID) {
return nil, errors.New("invalid UploadID") return nil, errors.New("Invalid UploadID")
} }
opts := rest.Opts{ opts := rest.Opts{
@@ -547,11 +382,11 @@ func (f *Fs) endUpload(ctx context.Context, uploadID string, nodeurl string) (re
response = &EndFileUploadResponse{} response = &EndFileUploadResponse{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.rest.CallJSON(ctx, &opts, nil, response) resp, err := f.rest.CallJSON(ctx, &opts, nil, response)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't finish file upload: %w", err) return nil, errors.Wrap(err, "couldn't finish file upload")
} }
return response, err return response, err


@@ -1,9 +1,7 @@
// Package fichier provides an interface to the 1Fichier storage system.
package fichier package fichier
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
@@ -11,6 +9,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
@@ -36,24 +35,17 @@ func init() {
fs.Register(&fs.RegInfo{ fs.Register(&fs.RegInfo{
Name: "fichier", Name: "fichier",
Description: "1Fichier", Description: "1Fichier",
NewFs: NewFs, Config: func(name string, config configmap.Mapper) {
},
NewFs: NewFs,
Options: []fs.Option{{ Options: []fs.Option{{
Help: "Your API Key, get it from https://1fichier.com/console/params.pl.", Help: "Your API Key, get it from https://1fichier.com/console/params.pl",
Name: "api_key", Name: "api_key",
}, { }, {
Help: "If you want to download a shared folder, add this parameter.", Help: "If you want to download a shared folder, add this parameter",
Name: "shared_folder", Name: "shared_folder",
Required: false,
Advanced: true, Advanced: true,
}, {
Help: "If you want to download a shared file that is password protected, add this parameter.",
Name: "file_password",
Advanced: true,
IsPassword: true,
}, {
Help: "If you want to list the files in a shared folder that is password protected, add this parameter.",
Name: "folder_password",
Advanced: true,
IsPassword: true,
}, { }, {
Name: config.ConfigEncoding, Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp, Help: config.ConfigEncodingHelp,
@@ -85,11 +77,9 @@ func init() {
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
APIKey string `config:"api_key"` APIKey string `config:"api_key"`
SharedFolder string `config:"shared_folder"` SharedFolder string `config:"shared_folder"`
FilePassword string `config:"file_password"` Enc encoder.MultiEncoder `config:"encoding"`
FolderPassword string `config:"folder_password"`
Enc encoder.MultiEncoder `config:"encoding"`
} }
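The option names registered above reach this Options struct through their config tags. A minimal sketch of that plumbing, which is essentially what NewFs does below with the configmap.Mapper rclone hands it (the api_key value here is a placeholder, not a real key):

m := configmap.Simple{
	"api_key":       "0123456789abcdef", // placeholder value
	"shared_folder": "",
}
opt := new(Options)
if err := configstruct.Set(m, opt); err != nil {
	// a value that cannot be parsed into the tagged field type ends up here
}
// opt.APIKey now holds whatever was supplied for "api_key"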
// Fs is the interface a cloud storage system must provide // Fs is the interface a cloud storage system must provide
@@ -177,7 +167,7 @@ func (f *Fs) Features() *fs.Features {
// //
// On Windows avoid single character remote names as they can be mixed // On Windows avoid single character remote names as they can be mixed
// up with drive letters. // up with drive letters.
func NewFs(ctx context.Context, name string, root string, config configmap.Mapper) (fs.Fs, error) { func NewFs(name string, root string, config configmap.Mapper) (fs.Fs, error) {
opt := new(Options) opt := new(Options)
err := configstruct.Set(config, opt) err := configstruct.Set(config, opt)
if err != nil { if err != nil {
@@ -196,17 +186,16 @@ func NewFs(ctx context.Context, name string, root string, config configmap.Mappe
name: name, name: name,
root: root, root: root,
opt: *opt, opt: *opt,
pacer: fs.NewPacer(ctx, pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant), pacer.AttackConstant(attackConstant))), pacer: fs.NewPacer(pacer.NewDefault(pacer.MinSleep(minSleep), pacer.MaxSleep(maxSleep), pacer.DecayConstant(decayConstant), pacer.AttackConstant(attackConstant))),
baseClient: &http.Client{}, baseClient: &http.Client{},
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
DuplicateFiles: true, DuplicateFiles: true,
CanHaveEmptyDirectories: true, CanHaveEmptyDirectories: true,
ReadMimeType: true, }).Fill(f)
}).Fill(ctx, f)
client := fshttp.NewClient(ctx) client := fshttp.NewClient(fs.Config)
f.rest = rest.NewClient(client).SetRoot(apiBaseURL) f.rest = rest.NewClient(client).SetRoot(apiBaseURL)
@@ -214,6 +203,8 @@ func NewFs(ctx context.Context, name string, root string, config configmap.Mappe
f.dirCache = dircache.New(root, rootID, f) f.dirCache = dircache.New(root, rootID, f)
ctx := context.Background()
// Find the current root // Find the current root
err = f.dirCache.FindRoot(ctx, false) err = f.dirCache.FindRoot(ctx, false)
if err != nil { if err != nil {
@@ -236,7 +227,7 @@ func NewFs(ctx context.Context, name string, root string, config configmap.Mappe
} }
return nil, err return nil, err
} }
f.features.Fill(ctx, &tempF) f.features.Fill(&tempF)
// XXX: update the old f here instead of returning tempF, since // XXX: update the old f here instead of returning tempF, since
// `features` were already filled with functions having *f as a receiver. // `features` were already filled with functions having *f as a receiver.
// See https://github.com/rclone/rclone/issues/2182 // See https://github.com/rclone/rclone/issues/2182
@@ -295,7 +286,7 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
path, ok := f.dirCache.GetInv(directoryID) path, ok := f.dirCache.GetInv(directoryID)
if !ok { if !ok {
return nil, errors.New("cannot find dir in dircache") return nil, errors.New("Cannot find dir in dircache")
} }
return f.newObjectFromFile(ctx, path, file), nil return f.newObjectFromFile(ctx, path, file), nil
@@ -315,10 +306,10 @@ func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
// will return the object and the error, otherwise will return // will return the object and the error, otherwise will return
// nil and the error // nil and the error
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
existingObj, err := f.NewObject(ctx, src.Remote()) exisitingObj, err := f.NewObject(ctx, src.Remote())
switch err { switch err {
case nil: case nil:
return existingObj, existingObj.Update(ctx, in, src, options...) return exisitingObj, exisitingObj.Update(ctx, in, src, options...)
case fs.ErrorObjectNotFound: case fs.ErrorObjectNotFound:
// Not found so create it // Not found so create it
return f.PutUnchecked(ctx, in, src, options...) return f.PutUnchecked(ctx, in, src, options...)
@@ -332,7 +323,7 @@ func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options .
// This will create a duplicate if we upload a new file without // This will create a duplicate if we upload a new file without
// checking to see if there is one already - use Put() for that. // checking to see if there is one already - use Put() for that.
func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size int64, options ...fs.OpenOption) (fs.Object, error) {
if size > int64(300e9) { if size > int64(100e9) {
return nil, errors.New("File too big, cant upload") return nil, errors.New("File too big, cant upload")
} else if size == 0 { } else if size == 0 {
return nil, fs.ErrorCantUploadEmptyFiles return nil, fs.ErrorCantUploadEmptyFiles
@@ -358,10 +349,8 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
return nil, err return nil, err
} }
if len(fileUploadResponse.Links) == 0 { if len(fileUploadResponse.Links) != 1 {
return nil, errors.New("upload response not found") return nil, errors.New("unexpected amount of files")
} else if len(fileUploadResponse.Links) > 1 {
fs.Debugf(remote, "Multiple upload responses found, using the first")
} }
link := fileUploadResponse.Links[0] link := fileUploadResponse.Links[0]
@@ -375,6 +364,7 @@ func (f *Fs) putUnchecked(ctx context.Context, in io.Reader, remote string, size
fs: f, fs: f,
remote: remote, remote: remote,
file: File{ file: File{
ACL: 0,
CDN: 0, CDN: 0,
Checksum: link.Whirlpool, Checksum: link.Whirlpool,
ContentType: "", ContentType: "",
@@ -427,135 +417,9 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) error {
return nil return nil
} }
// Move src to this remote using server side move operations.
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Find current directory ID
_, currentDirectoryID, err := f.dirCache.FindPath(ctx, remote, false)
if err != nil {
return nil, err
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
// If it is in the correct directory, just rename it
var url string
if currentDirectoryID == directoryID {
resp, err := f.renameFile(ctx, srcObj.file.URL, leaf)
if err != nil {
return nil, fmt.Errorf("couldn't rename file: %w", err)
}
if resp.Status != "OK" {
return nil, fmt.Errorf("couldn't rename file: %s", resp.Message)
}
url = resp.URLs[0].URL
} else {
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
resp, err := f.moveFile(ctx, srcObj.file.URL, folderID, leaf)
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
if resp.Status != "OK" {
return nil, fmt.Errorf("couldn't move file: %s", resp.Message)
}
url = resp.URLs[0]
}
file, err := f.readFileInfo(ctx, url)
if err != nil {
return nil, errors.New("couldn't read file data")
}
dstObj.setMetaData(*file)
return dstObj, nil
}
// Copy src to this remote using server side move operations.
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Create temporary object
dstObj, leaf, directoryID, err := f.createObject(ctx, remote)
if err != nil {
return nil, err
}
folderID, err := strconv.Atoi(directoryID)
if err != nil {
return nil, err
}
resp, err := f.copyFile(ctx, srcObj.file.URL, folderID, leaf)
if err != nil {
return nil, fmt.Errorf("couldn't move file: %w", err)
}
if resp.Status != "OK" {
return nil, fmt.Errorf("couldn't move file: %s", resp.Message)
}
file, err := f.readFileInfo(ctx, resp.URLs[0].ToURL)
if err != nil {
return nil, errors.New("couldn't read file data")
}
dstObj.setMetaData(*file)
return dstObj, nil
}
// About gets quota information
func (f *Fs) About(ctx context.Context) (usage *fs.Usage, err error) {
opts := rest.Opts{
Method: "POST",
Path: "/user/info.cgi",
ContentType: "application/json",
}
var accountInfo AccountInfo
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.rest.CallJSON(ctx, &opts, nil, &accountInfo)
return shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, fmt.Errorf("failed to read user info: %w", err)
}
// FIXME max upload size would be useful to use in Update
usage = &fs.Usage{
Used: fs.NewUsageValue(accountInfo.ColdStorage), // bytes in use
Total: fs.NewUsageValue(accountInfo.AvailableColdStorage), // bytes total
Free: fs.NewUsageValue(accountInfo.AvailableColdStorage - accountInfo.ColdStorage), // bytes free
}
return usage, nil
}
// PublicLink adds a "readable by anyone with link" permission on the given file or folder.
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
o, err := f.NewObject(ctx, remote)
if err != nil {
return "", err
}
return o.(*Object).file.URL, nil
}
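PublicLink above simply returns the download URL already stored on the object, since every 1Fichier file has a public link; the expire and unlink arguments are accepted only to satisfy the interface and are not acted on. A small usage sketch (the remote path and printed URL are illustrative):

link, err := f.PublicLink(ctx, "backups/archive.zip", fs.Duration(time.Hour), false)
if err == nil {
	fmt.Println(link) // e.g. https://1fichier.com/?xxxxxxxxxx - the file's existing URL
}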
// Check the interfaces are satisfied // Check the interfaces are satisfied
var ( var (
_ fs.Fs = (*Fs)(nil) _ fs.Fs = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil) _ fs.PutUncheckeder = (*Fs)(nil)
_ dircache.DirCacher = (*Fs)(nil) _ dircache.DirCacher = (*Fs)(nil)
) )


@@ -4,11 +4,13 @@ package fichier
import ( import (
"testing" "testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/fstest/fstests"
) )
// TestIntegration runs integration tests against the remote // TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) { func TestIntegration(t *testing.T) {
fs.Config.LogLevel = fs.LogLevelDebug
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: "TestFichier:", RemoteName: "TestFichier:",
}) })


@@ -2,12 +2,11 @@ package fichier
import ( import (
"context" "context"
"errors"
"fmt"
"io" "io"
"net/http" "net/http"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
@@ -73,10 +72,6 @@ func (o *Object) SetModTime(context.Context, time.Time) error {
//return errors.New("setting modtime is not supported for 1fichier remotes") //return errors.New("setting modtime is not supported for 1fichier remotes")
} }
func (o *Object) setMetaData(file File) {
o.file = file
}
// Open opens the file for read. Call Close() on the returned io.ReadCloser // Open opens the file for read. Call Close() on the returned io.ReadCloser
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) { func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
fs.FixRangeOption(options, o.file.Size) fs.FixRangeOption(options, o.file.Size)
@@ -95,7 +90,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadClo
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.rest.Call(ctx, &opts) resp, err = o.fs.rest.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
@@ -123,7 +118,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
// Delete duplicate after successful upload // Delete duplicate after successful upload
err = o.Remove(ctx) err = o.Remove(ctx)
if err != nil { if err != nil {
return fmt.Errorf("failed to remove old version: %w", err) return errors.Wrap(err, "failed to remove old version")
} }
// Replace guts of old object with new one // Replace guts of old object with new one


@@ -1,10 +1,5 @@
package fichier package fichier
// FileInfoRequest is the request structure of the corresponding request
type FileInfoRequest struct {
URL string `json:"url"`
}
// ListFolderRequest is the request structure of the corresponding request // ListFolderRequest is the request structure of the corresponding request
type ListFolderRequest struct { type ListFolderRequest struct {
FolderID int `json:"folder_id"` FolderID int `json:"folder_id"`
@@ -19,7 +14,6 @@ type ListFilesRequest struct {
type DownloadRequest struct { type DownloadRequest struct {
URL string `json:"url"` URL string `json:"url"`
Single int `json:"single"` Single int `json:"single"`
Pass string `json:"pass,omitempty"`
} }
// RemoveFolderRequest is the request structure of the corresponding request // RemoveFolderRequest is the request structure of the corresponding request
@@ -55,65 +49,6 @@ type MakeFolderResponse struct {
FolderID int `json:"folder_id"` FolderID int `json:"folder_id"`
} }
// MoveFileRequest is the request structure of the corresponding request
type MoveFileRequest struct {
URLs []string `json:"urls"`
FolderID int `json:"destination_folder_id"`
Rename string `json:"rename,omitempty"`
}
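Marshalled with encoding/json, a MoveFileRequest produces exactly the body that moveFile (earlier in this diff) posts to /file/mv.cgi. A standalone illustration, with a placeholder file URL:

body, _ := json.Marshal(&MoveFileRequest{
	URLs:     []string{"https://1fichier.com/?xxxxxxxxxx"}, // placeholder URL
	FolderID: 123,
	Rename:   "report-final.pdf",
})
fmt.Println(string(body))
// {"urls":["https://1fichier.com/?xxxxxxxxxx"],"destination_folder_id":123,"rename":"report-final.pdf"}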
// MoveFileResponse is the response structure of the corresponding request
type MoveFileResponse struct {
Status string `json:"status"`
Message string `json:"message"`
URLs []string `json:"urls"`
}
// CopyFileRequest is the request structure of the corresponding request
type CopyFileRequest struct {
URLs []string `json:"urls"`
FolderID int `json:"folder_id"`
Rename string `json:"rename,omitempty"`
}
// CopyFileResponse is the response structure of the corresponding request
type CopyFileResponse struct {
Status string `json:"status"`
Message string `json:"message"`
Copied int `json:"copied"`
URLs []FileCopy `json:"urls"`
}
// FileCopy is used in the CopyFileResponse
type FileCopy struct {
FromURL string `json:"from_url"`
ToURL string `json:"to_url"`
}
// RenameFileURL is the data structure to rename a single file
type RenameFileURL struct {
URL string `json:"url"`
Filename string `json:"filename"`
}
// RenameFileRequest is the request structure of the corresponding request
type RenameFileRequest struct {
URLs []RenameFileURL `json:"urls"`
Pretty int `json:"pretty"`
}
// RenameFileResponse is the response structure of the corresponding request
type RenameFileResponse struct {
Status string `json:"status"`
Message string `json:"message"`
Renamed int `json:"renamed"`
URLs []struct {
URL string `json:"url"`
OldFilename string `json:"old_filename"`
NewFilename string `json:"new_filename"`
} `json:"urls"`
}
// GetUploadNodeResponse is the response structure of the corresponding request // GetUploadNodeResponse is the response structure of the corresponding request
type GetUploadNodeResponse struct { type GetUploadNodeResponse struct {
ID string `json:"id"` ID string `json:"id"`
@@ -151,6 +86,7 @@ type EndFileUploadResponse struct {
// File is the structure how 1Fichier returns a File // File is the structure how 1Fichier returns a File
type File struct { type File struct {
ACL int `json:"acl"`
CDN int `json:"cdn"` CDN int `json:"cdn"`
Checksum string `json:"checksum"` Checksum string `json:"checksum"`
ContentType string `json:"content-type"` ContentType string `json:"content-type"`
@@ -182,34 +118,3 @@ type FoldersList struct {
Status string `json:"Status"` Status string `json:"Status"`
SubFolders []Folder `json:"sub_folders"` SubFolders []Folder `json:"sub_folders"`
} }
// AccountInfo is the structure how 1Fichier returns user info
type AccountInfo struct {
StatsDate string `json:"stats_date"`
MailRM string `json:"mail_rm"`
DefaultQuota int64 `json:"default_quota"`
UploadForbidden string `json:"upload_forbidden"`
PageLimit int `json:"page_limit"`
ColdStorage int64 `json:"cold_storage"`
Status string `json:"status"`
UseCDN string `json:"use_cdn"`
AvailableColdStorage int64 `json:"available_cold_storage"`
DefaultPort string `json:"default_port"`
DefaultDomain int `json:"default_domain"`
Email string `json:"email"`
DownloadMenu string `json:"download_menu"`
FTPDID int `json:"ftp_did"`
DefaultPortFiles string `json:"default_port_files"`
FTPReport string `json:"ftp_report"`
OverQuota int64 `json:"overquota"`
AvailableStorage int64 `json:"available_storage"`
CDN string `json:"cdn"`
Offer string `json:"offer"`
SubscriptionEnd string `json:"subscription_end"`
TFA string `json:"2fa"`
AllowedColdStorage int64 `json:"allowed_cold_storage"`
HotStorage int64 `json:"hot_storage"`
DefaultColdStorageQuota int64 `json:"default_cold_storage_quota"`
FTPMode string `json:"ftp_mode"`
RUReport string `json:"ru_report"`
}


@@ -1,427 +0,0 @@
// Package api has type definitions for filefabric
//
// Converted from the API responses with help from https://mholt.github.io/json-to-go/
package api
import (
"bytes"
"encoding/json"
"fmt"
"reflect"
"strings"
"time"
)
const (
// TimeFormat for parameters (UTC)
timeFormatParameters = `2006-01-02 15:04:05`
// "2020-08-11 10:10:04" for JSON parsing
timeFormatJSON = `"` + timeFormatParameters + `"`
)
// Time represents date and time information for the
// filefabric API
type Time time.Time
// MarshalJSON turns a Time into JSON (in UTC)
func (t *Time) MarshalJSON() (out []byte, err error) {
timeString := (*time.Time)(t).UTC().Format(timeFormatJSON)
return []byte(timeString), nil
}
var zeroTime = []byte(`"0000-00-00 00:00:00"`)
// UnmarshalJSON turns JSON into a Time (in UTC)
func (t *Time) UnmarshalJSON(data []byte) error {
// Set a Zero time.Time if we receive a zero time input
if bytes.Equal(data, zeroTime) {
*t = Time(time.Time{})
return nil
}
newT, err := time.Parse(timeFormatJSON, string(data))
if err != nil {
return err
}
*t = Time(newT)
return nil
}
// String turns a Time into a string in UTC suitable for the API
// parameters
func (t Time) String() string {
return time.Time(t).UTC().Format(timeFormatParameters)
}
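Together, MarshalJSON, UnmarshalJSON and String give a round trip between the API's "YYYY-MM-DD hh:mm:ss" strings and time.Time, with the API's all-zero timestamp mapped to Go's zero time. A small standalone illustration using only encoding/json and time:

var t Time
_ = json.Unmarshal([]byte(`"0000-00-00 00:00:00"`), &t)
fmt.Println(time.Time(t).IsZero()) // true: the zero marker decodes to the zero time.Time

_ = json.Unmarshal([]byte(`"2020-08-11 10:10:04"`), &t)
fmt.Println(t.String()) // 2020-08-11 10:10:04 (always rendered in UTC)

out, _ := json.Marshal(&t)
fmt.Println(string(out)) // "2020-08-11 10:10:04"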
// Int represents an integer which can be represented in JSON as a
// quoted integer or an integer.
type Int int
// MarshalJSON turns a Int into JSON
func (i *Int) MarshalJSON() (out []byte, err error) {
return json.Marshal((*int)(i))
}
// UnmarshalJSON turns JSON into a Int
func (i *Int) UnmarshalJSON(data []byte) error {
if len(data) >= 2 && data[0] == '"' && data[len(data)-1] == '"' {
data = data[1 : len(data)-1]
}
return json.Unmarshal(data, (*int)(i))
}
// String represents a string which can be represented in JSON as a
// quoted string or an integer.
type String string
// MarshalJSON turns a String into JSON
func (s *String) MarshalJSON() (out []byte, err error) {
return json.Marshal((*string)(s))
}
// UnmarshalJSON turns JSON into a String
func (s *String) UnmarshalJSON(data []byte) error {
err := json.Unmarshal(data, (*string)(s))
if err != nil {
*s = String(data)
}
return nil
}
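Int and String exist because the filefabric API is not consistent about quoting scalar values; both types accept either form when decoding. For example:

var n Int
_ = json.Unmarshal([]byte(`42`), &n)   // bare number
_ = json.Unmarshal([]byte(`"42"`), &n) // quoted number: the surrounding quotes are stripped first
// n == 42 in both cases

var s String
_ = json.Unmarshal([]byte(`"ok"`), &s) // ordinary string
_ = json.Unmarshal([]byte(`7`), &s)    // non-string input is kept as its raw text, so s == "7"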
// Status is returned in all status responses
type Status struct {
Code string `json:"status"`
Message string `json:"statusmessage"`
TaskID String `json:"taskid"`
// Warning string `json:"warning"` // obsolete
}
// Status satisfies the error interface
func (e *Status) Error() string {
return fmt.Sprintf("%s (%s)", e.Message, e.Code)
}
// OK returns true if the status is all good
func (e *Status) OK() bool {
return e.Code == "ok"
}
// GetCode returns the status code if any
func (e *Status) GetCode() string {
return e.Code
}
// OKError defines an interface for items which can be OK or be an error
type OKError interface {
error
OK() bool
GetCode() string
}
// Check Status satisfies the OKError interface
var _ OKError = (*Status)(nil)
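Because Status is embedded in the response types below, any decoded reply can be checked and reported through the OKError interface. A minimal sketch with a made-up error payload:

var st Status
_ = json.Unmarshal([]byte(`{"status":"error","statusmessage":"Session expired"}`), &st)
if !st.OK() {
	// Error() formats as "message (code)", so this prints: Session expired (error)
	fmt.Println(st.Error())
}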
// EmptyResponse is a response which just returns the error condition
type EmptyResponse struct {
Status
}
// GetTokenByAuthTokenResponse is the response to getTokenByAuthToken
type GetTokenByAuthTokenResponse struct {
Status
Token string `json:"token"`
UserID string `json:"userid"`
AllowLoginRemember string `json:"allowloginremember"`
LastLogin Time `json:"lastlogin"`
AutoLoginCode string `json:"autologincode"`
}
// ApplianceInfo is the response to getApplianceInfo
type ApplianceInfo struct {
Status
Sitetitle string `json:"sitetitle"`
OauthLoginSupport string `json:"oauthloginsupport"`
IsAppliance string `json:"isappliance"`
SoftwareVersion string `json:"softwareversion"`
SoftwareVersionLabel string `json:"softwareversionlabel"`
}
// GetFolderContentsResponse is returned from getFolderContents
type GetFolderContentsResponse struct {
Status
Total int `json:"total,string"`
Items []Item `json:"filelist"`
Folder Item `json:"folder"`
From Int `json:"from"`
//Count int `json:"count"`
Pid string `json:"pid"`
RefreshResult Status `json:"refreshresult"`
// Curfolder Item `json:"curfolder"` - sometimes returned as "ROOT"?
Parents []Item `json:"parents"`
CustomPermissions CustomPermissions `json:"custompermissions"`
}
// ItemType determines whether it is a file or a folder
type ItemType uint8
// Types of things in Item
const (
ItemTypeFile ItemType = 0
ItemTypeFolder ItemType = 1
)
// Item is a File or a Folder
type Item struct {
ID string `json:"fi_id"`
PID string `json:"fi_pid"`
// UID string `json:"fi_uid"`
Name string `json:"fi_name"`
// S3Name string `json:"fi_s3name"`
// Extension string `json:"fi_extension"`
// Description string `json:"fi_description"`
Type ItemType `json:"fi_type,string"`
// Created Time `json:"fi_created"`
Size int64 `json:"fi_size,string"`
ContentType string `json:"fi_contenttype"`
// Tags string `json:"fi_tags"`
// MainCode string `json:"fi_maincode"`
// Public int `json:"fi_public,string"`
// Provider string `json:"fi_provider"`
// ProviderFolder string `json:"fi_providerfolder"` // folder
// Encrypted int `json:"fi_encrypted,string"`
// StructType string `json:"fi_structtype"`
// Bname string `json:"fi_bname"` // folder
// OrgID string `json:"fi_orgid"`
// Favorite int `json:"fi_favorite,string"`
// IspartOf string `json:"fi_ispartof"` // folder
Modified Time `json:"fi_modified"`
// LastAccessed Time `json:"fi_lastaccessed"`
// Hits int64 `json:"fi_hits,string"`
// IP string `json:"fi_ip"` // folder
// BigDescription string `json:"fi_bigdescription"`
LocalTime Time `json:"fi_localtime"`
// OrgfolderID string `json:"fi_orgfolderid"`
// StorageIP string `json:"fi_storageip"` // folder
// RemoteTime Time `json:"fi_remotetime"`
// ProviderOptions string `json:"fi_provideroptions"`
// Access string `json:"fi_access"`
// Hidden string `json:"fi_hidden"` // folder
// VersionOf string `json:"fi_versionof"`
Trash bool `json:"trash"`
// Isbucket string `json:"isbucket"` // filelist
SubFolders int64 `json:"subfolders"` // folder
}
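The ",string" options on Type and Size matter because the API quotes these numbers; Modified and LocalTime in turn decode through the Time type defined above. A standalone sketch with invented field values:

var it Item
data := []byte(`{"fi_id":"120000001","fi_pid":"120000000","fi_name":"example.txt",` +
	`"fi_type":"0","fi_size":"12","fi_contenttype":"text/plain",` +
	`"fi_modified":"2020-08-11 10:10:04","fi_localtime":"2020-08-11 10:10:04",` +
	`"trash":false,"subfolders":0}`)
_ = json.Unmarshal(data, &it)
// it.Type == ItemTypeFile, it.Size == 12, it.Modified carries the parsed timestamp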
// ItemFields is a | separated list of fields in Item
var ItemFields = mustFields(Item{})
// fields returns the JSON fields in use by opt as a | separated
// string.
func fields(opt interface{}) (pipeTags string, err error) {
var tags []string
def := reflect.ValueOf(opt)
defType := def.Type()
for i := 0; i < def.NumField(); i++ {
field := defType.Field(i)
tag, ok := field.Tag.Lookup("json")
if !ok {
continue
}
if comma := strings.IndexRune(tag, ','); comma >= 0 {
tag = tag[:comma]
}
if tag == "" {
continue
}
tags = append(tags, tag)
}
return strings.Join(tags, "|"), nil
}
// mustFields returns the JSON fields in use by opt as a | separated
// string. It panics on failure.
func mustFields(opt interface{}) string {
tags, err := fields(opt)
if err != nil {
panic(err)
}
return tags
}
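fields walks the json struct tags with reflect and joins them with "|", which is how ItemFields above is derived from Item. For a toy struct (not part of the API) it behaves like this:

type toy struct {
	ID     string `json:"fi_id"`
	Name   string `json:"fi_name"`
	Size   int64  `json:"fi_size,string"` // the ",string" option is stripped
	hidden int    // no json tag, so it is skipped
}
tags, _ := fields(toy{})
fmt.Println(tags) // fi_id|fi_name|fi_size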
// CustomPermissions is returned as part of GetFolderContentsResponse
type CustomPermissions struct {
Upload string `json:"upload"`
CreateSubFolder string `json:"createsubfolder"`
Rename string `json:"rename"`
Delete string `json:"delete"`
Move string `json:"move"`
ManagePermissions string `json:"managepermissions"`
ListOnly string `json:"listonly"`
VisibleInTrash string `json:"visibleintrash"`
}
// DoCreateNewFolderResponse is the response from doCreateNewFolder
type DoCreateNewFolderResponse struct {
Status
Item Item `json:"file"`
}
// DoInitUploadResponse is the response from doInitUpload
type DoInitUploadResponse struct {
Status
ProviderID string `json:"providerid"`
UploadCode string `json:"uploadcode"`
FileType string `json:"filetype"`
DirectUploadSupport string `json:"directuploadsupport"`
ResumeAllowed string `json:"resumeallowed"`
}
// UploaderResponse is returned from /cgi-bin/uploader/uploader1.cgi
//
// Sometimes the response is returned as XML and sometimes as JSON
type UploaderResponse struct {
FileSize int64 `xml:"filesize" json:"filesize,string"`
MD5 string `xml:"md5" json:"md5"`
Success string `xml:"success" json:"success"`
}
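Because the upload CGI can answer in either XML or JSON, the struct above carries both sets of tags and the same value decodes either way. A standalone sketch (the payloads, and in particular the XML root element name, are illustrative):

var a, b UploaderResponse
_ = json.Unmarshal([]byte(`{"filesize":"1048576","md5":"d41d8cd98f00b204e9800998ecf8427e","success":"true"}`), &a)
_ = xml.Unmarshal([]byte(`<upload><filesize>1048576</filesize><md5>d41d8cd98f00b204e9800998ecf8427e</md5><success>true</success></upload>`), &b)
// both leave FileSize == 1048576, MD5 filled in and Success == "true"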
// UploadStatus is returned from getUploadStatus
type UploadStatus struct {
Status
UploadCode string `json:"uploadcode"`
Metafile string `json:"metafile"`
Percent int `json:"percent,string"`
Uploaded int64 `json:"uploaded,string"`
Size int64 `json:"size,string"`
Filename string `json:"filename"`
Nofile string `json:"nofile"`
Completed string `json:"completed"`
Completsuccess string `json:"completsuccess"`
Completerror string `json:"completerror"`
}
// DoCompleteUploadResponse is the response to doCompleteUpload
type DoCompleteUploadResponse struct {
Status
UploadedSize int64 `json:"uploadedsize,string"`
StorageIP string `json:"storageip"`
UploadedName string `json:"uploadedname"`
// Versioned []interface{} `json:"versioned"`
// VersionedID int `json:"versionedid"`
// Comment interface{} `json:"comment"`
File Item `json:"file"`
// UsSize string `json:"us_size"`
// PaSize string `json:"pa_size"`
// SpaceInfo SpaceInfo `json:"spaceinfo"`
}
// Providers is returned as part of UploadResponse
type Providers struct {
Max string `json:"max"`
Used string `json:"used"`
ID string `json:"id"`
Private string `json:"private"`
Limit string `json:"limit"`
Percent int `json:"percent"`
}
// Total is returned as part of UploadResponse
type Total struct {
Max string `json:"max"`
Used string `json:"used"`
ID string `json:"id"`
Priused string `json:"priused"`
Primax string `json:"primax"`
Limit string `json:"limit"`
Percent int `json:"percent"`
Pripercent int `json:"pripercent"`
}
// UploadResponse is returned as part of SpaceInfo
type UploadResponse struct {
Providers []Providers `json:"providers"`
Total Total `json:"total"`
}
// SpaceInfo is returned as part of DoCompleteUploadResponse
type SpaceInfo struct {
Response UploadResponse `json:"response"`
Status string `json:"status"`
}
// DeleteResponse is returned from doDeleteFile
type DeleteResponse struct {
Status
Deleted []string `json:"deleted"`
Errors []interface{} `json:"errors"`
ID string `json:"fi_id"`
BackgroundTask int `json:"backgroundtask"`
UsSize string `json:"us_size"`
PaSize string `json:"pa_size"`
//SpaceInfo SpaceInfo `json:"spaceinfo"`
}
// FileResponse is returned from doRenameFile
type FileResponse struct {
Status
Item Item `json:"file"`
Exists string `json:"exists"`
}
// MoveFilesResponse is returned from doMoveFiles
type MoveFilesResponse struct {
Status
Filesleft string `json:"filesleft"`
Addedtobackground string `json:"addedtobackground"`
Moved string `json:"moved"`
Item Item `json:"file"`
IDs []string `json:"fi_ids"`
Length int `json:"length"`
DirID string `json:"dir_id"`
MovedObjects []Item `json:"movedobjects"`
// FolderTasks []interface{} `json:"foldertasks"`
}
// TasksResponse is the response to getUserBackgroundTasks
type TasksResponse struct {
Status
Tasks []Task `json:"tasks"`
Total string `json:"total"`
}
// BtData is part of TasksResponse
type BtData struct {
Callback string `json:"callback"`
}
// Task describes a task returned in TasksResponse
type Task struct {
BtID string `json:"bt_id"`
UsID string `json:"us_id"`
BtType string `json:"bt_type"`
BtData BtData `json:"bt_data"`
BtStatustext string `json:"bt_statustext"`
BtStatusdata string `json:"bt_statusdata"`
BtMessage string `json:"bt_message"`
BtProcent string `json:"bt_procent"`
BtAdded string `json:"bt_added"`
BtStatus string `json:"bt_status"`
BtCompleted string `json:"bt_completed"`
BtTitle string `json:"bt_title"`
BtCredentials string `json:"bt_credentials"`
BtHidden string `json:"bt_hidden"`
BtAutoremove string `json:"bt_autoremove"`
BtDevsite string `json:"bt_devsite"`
BtPriority string `json:"bt_priority"`
BtReport string `json:"bt_report"`
BtSitemarker string `json:"bt_sitemarker"`
BtExecuteafter string `json:"bt_executeafter"`
BtCompletestatus string `json:"bt_completestatus"`
BtSubtype string `json:"bt_subtype"`
BtCanceled string `json:"bt_canceled"`
Callback string `json:"callback"`
CanBeCanceled bool `json:"canbecanceled"`
CanBeRestarted bool `json:"canberestarted"`
Type string `json:"type"`
Status string `json:"status"`
Settings string `json:"settings"`
}

File diff suppressed because it is too large.


@@ -1,17 +0,0 @@
// Test filefabric filesystem interface
package filefabric_test
import (
"testing"
"github.com/rclone/rclone/backend/filefabric"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFileFabric:",
NilObject: (*filefabric.Object)(nil),
})
}

File diff suppressed because it is too large.


@@ -1,115 +0,0 @@
package ftp
import (
"context"
"fmt"
"strings"
"testing"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/object"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/readers"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type settings map[string]interface{}
func deriveFs(ctx context.Context, t *testing.T, f fs.Fs, opts settings) fs.Fs {
fsName := strings.Split(f.Name(), "{")[0] // strip off hash
configMap := configmap.Simple{}
for key, val := range opts {
configMap[key] = fmt.Sprintf("%v", val)
}
remote := fmt.Sprintf("%s,%s:%s", fsName, configMap.String(), f.Root())
fixFs, err := fs.NewFs(ctx, remote)
require.NoError(t, err)
return fixFs
}
// test that big file uploads do not cause network i/o timeout
func (f *Fs) testUploadTimeout(t *testing.T) {
const (
fileSize = 100000000 // 100 MiB
idleTimeout = 1 * time.Second // small because test server is local
maxTime = 10 * time.Second // prevent test hangup
)
if testing.Short() {
t.Skip("not running with -short")
}
ctx := context.Background()
ci := fs.GetConfig(ctx)
saveLowLevelRetries := ci.LowLevelRetries
saveTimeout := ci.Timeout
defer func() {
ci.LowLevelRetries = saveLowLevelRetries
ci.Timeout = saveTimeout
}()
ci.LowLevelRetries = 1
ci.Timeout = idleTimeout
upload := func(concurrency int, shutTimeout time.Duration) (obj fs.Object, err error) {
fixFs := deriveFs(ctx, t, f, settings{
"concurrency": concurrency,
"shut_timeout": shutTimeout,
})
// Make test object
fileTime := fstest.Time("2020-03-08T09:30:00.000000000Z")
meta := object.NewStaticObjectInfo("upload-timeout.test", fileTime, int64(fileSize), true, nil, nil)
data := readers.NewPatternReader(int64(fileSize))
// Run upload and ensure maximum time
done := make(chan bool)
deadline := time.After(maxTime)
go func() {
obj, err = fixFs.Put(ctx, data, meta)
done <- true
}()
select {
case <-done:
case <-deadline:
t.Fatalf("Upload got stuck for %v !", maxTime)
}
return obj, err
}
// non-zero shut_timeout should fix i/o errors
obj, err := upload(f.opt.Concurrency, time.Second)
assert.NoError(t, err)
assert.NotNil(t, obj)
if obj != nil {
_ = obj.Remove(ctx)
}
}
// rclone must support precise time with ProFtpd and PureFtpd out of the box.
// The VsFtpd server does not support the MFMT command to set file time like
// other servers but by default supports the MDTM command in the non-standard
// two-argument form for the same purpose.
// See "mdtm_write" in https://security.appspot.com/vsftpd/vsftpd_conf.html
func (f *Fs) testTimePrecision(t *testing.T) {
name := f.Name()
if pos := strings.Index(name, "{"); pos != -1 {
name = name[:pos]
}
switch name {
case "TestFTPProftpd", "TestFTPPureftpd", "TestFTPVsftpd":
assert.LessOrEqual(t, f.Precision(), time.Second)
}
}
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
t.Run("UploadTimeout", f.testUploadTimeout)
t.Run("TimePrecision", f.testTimePrecision)
}
var _ fstests.InternalTester = (*Fs)(nil)

View File

@@ -9,27 +9,25 @@ import (
"github.com/rclone/rclone/fstest/fstests" "github.com/rclone/rclone/fstest/fstests"
) )
// TestIntegration runs integration tests against rclone FTP server // TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) { func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPRclone:",
NilObject: (*ftp.Object)(nil),
})
}
// TestIntegrationProftpd runs integration tests against proFTPd
func TestIntegrationProftpd(t *testing.T) {
if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{ fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPProftpd:", RemoteName: "TestFTPProftpd:",
NilObject: (*ftp.Object)(nil), NilObject: (*ftp.Object)(nil),
}) })
} }
// TestIntegrationPureftpd runs integration tests against pureFTPd func TestIntegration2(t *testing.T) {
func TestIntegrationPureftpd(t *testing.T) { if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set")
}
fstests.Run(t, &fstests.Opt{
RemoteName: "TestFTPRclone:",
NilObject: (*ftp.Object)(nil),
})
}
func TestIntegration3(t *testing.T) {
if *fstest.RemoteName != "" { if *fstest.RemoteName != "" {
t.Skip("skipping as -remote is set") t.Skip("skipping as -remote is set")
} }
@@ -39,13 +37,12 @@ func TestIntegrationPureftpd(t *testing.T) {
}) })
} }
// TestIntegrationVsftpd runs integration tests against vsFTPd // func TestIntegration4(t *testing.T) {
func TestIntegrationVsftpd(t *testing.T) { // if *fstest.RemoteName != "" {
if *fstest.RemoteName != "" { // t.Skip("skipping as -remote is set")
t.Skip("skipping as -remote is set") // }
} // fstests.Run(t, &fstests.Opt{
fstests.Run(t, &fstests.Opt{ // RemoteName: "TestFTPVsftpd:",
RemoteName: "TestFTPVsftpd:", // NilObject: (*ftp.Object)(nil),
NilObject: (*ftp.Object)(nil), // })
}) // }
}


@@ -16,17 +16,16 @@ import (
"context" "context"
"encoding/base64" "encoding/base64"
"encoding/hex" "encoding/hex"
"errors"
"fmt" "fmt"
"io" "io"
"io/ioutil"
"log"
"net/http" "net/http"
"os"
"path" "path"
"strconv"
"strings" "strings"
"sync"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/fs/config/configmap" "github.com/rclone/rclone/fs/config/configmap"
@@ -44,7 +43,6 @@ import (
"golang.org/x/oauth2" "golang.org/x/oauth2"
"golang.org/x/oauth2/google" "golang.org/x/oauth2/google"
"google.golang.org/api/googleapi" "google.golang.org/api/googleapi"
option "google.golang.org/api/option"
// NOTE: This API is deprecated // NOTE: This API is deprecated
storage "google.golang.org/api/storage/v1" storage "google.golang.org/api/storage/v1"
@@ -53,10 +51,10 @@ import (
const ( const (
rcloneClientID = "202264815644.apps.googleusercontent.com" rcloneClientID = "202264815644.apps.googleusercontent.com"
rcloneEncryptedClientSecret = "Uj7C9jGfb9gmeaV70Lh058cNkWvepr-Es9sBm0zdgil7JaOWF1VySw" rcloneEncryptedClientSecret = "Uj7C9jGfb9gmeaV70Lh058cNkWvepr-Es9sBm0zdgil7JaOWF1VySw"
timeFormat = time.RFC3339Nano timeFormatIn = time.RFC3339
metaMtime = "mtime" // key to store mtime in metadata timeFormatOut = "2006-01-02T15:04:05.000000000Z07:00"
metaMtimeGsutil = "goog-reserved-file-mtime" // key used by GSUtil to store mtime in metadata metaMtime = "mtime" // key to store mtime under in metadata
listChunks = 1000 // chunk size to read directory listings listChunks = 1000 // chunk size to read directory listings
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
) )
@@ -67,7 +65,7 @@ var (
Endpoint: google.Endpoint, Endpoint: google.Endpoint,
ClientID: rcloneClientID, ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL, RedirectURL: oauthutil.TitleBarRedirectURL,
} }
) )
@@ -78,71 +76,78 @@ func init() {
Prefix: "gcs", Prefix: "gcs",
Description: "Google Cloud Storage (this is not Google Drive)", Description: "Google Cloud Storage (this is not Google Drive)",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) { Config: func(name string, m configmap.Mapper) {
saFile, _ := m.Get("service_account_file") saFile, _ := m.Get("service_account_file")
saCreds, _ := m.Get("service_account_credentials") saCreds, _ := m.Get("service_account_credentials")
anonymous, _ := m.Get("anonymous") anonymous, _ := m.Get("anonymous")
if saFile != "" || saCreds != "" || anonymous == "true" { if saFile != "" || saCreds != "" || anonymous == "true" {
return nil, nil return
}
err := oauthutil.Config("google cloud storage", name, m, storageConfig, nil)
if err != nil {
log.Fatalf("Failed to configure token: %v", err)
} }
return oauthutil.ConfigOut("", &oauthutil.Options{
OAuth2Config: storageConfig,
})
}, },
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.",
}, {
Name: "project_number", Name: "project_number",
Help: "Project number.\n\nOptional - needed only for list/create/delete buckets - see your developer console.", Help: "Project number.\nOptional - needed only for list/create/delete buckets - see your developer console.",
}, { }, {
Name: "service_account_file", Name: "service_account_file",
Help: "Service Account Credentials JSON file path.\n\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login." + env.ShellExpandHelp, Help: "Service Account Credentials JSON file path\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login." + env.ShellExpandHelp,
}, { }, {
Name: "service_account_credentials", Name: "service_account_credentials",
Help: "Service Account Credentials JSON blob.\n\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.", Help: "Service Account Credentials JSON blob\nLeave blank normally.\nNeeded only if you want use SA instead of interactive login.",
Hide: fs.OptionHideBoth, Hide: fs.OptionHideBoth,
}, { }, {
Name: "anonymous", Name: "anonymous",
Help: "Access public buckets and objects without credentials.\n\nSet to 'true' if you just want to download files and don't configure credentials.", Help: "Access public buckets and objects without credentials\nSet to 'true' if you just want to download files and don't configure credentials.",
Default: false, Default: false,
}, { }, {
Name: "object_acl", Name: "object_acl",
Help: "Access Control List for new objects.", Help: "Access Control List for new objects.",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "authenticatedRead", Value: "authenticatedRead",
Help: "Object owner gets OWNER access.\nAll Authenticated Users get READER access.", Help: "Object owner gets OWNER access, and all Authenticated Users get READER access.",
}, { }, {
Value: "bucketOwnerFullControl", Value: "bucketOwnerFullControl",
Help: "Object owner gets OWNER access.\nProject team owners get OWNER access.", Help: "Object owner gets OWNER access, and project team owners get OWNER access.",
}, { }, {
Value: "bucketOwnerRead", Value: "bucketOwnerRead",
Help: "Object owner gets OWNER access.\nProject team owners get READER access.", Help: "Object owner gets OWNER access, and project team owners get READER access.",
}, { }, {
Value: "private", Value: "private",
Help: "Object owner gets OWNER access.\nDefault if left blank.", Help: "Object owner gets OWNER access [default if left blank].",
}, { }, {
Value: "projectPrivate", Value: "projectPrivate",
Help: "Object owner gets OWNER access.\nProject team members get access according to their roles.", Help: "Object owner gets OWNER access, and project team members get access according to their roles.",
}, { }, {
Value: "publicRead", Value: "publicRead",
Help: "Object owner gets OWNER access.\nAll Users get READER access.", Help: "Object owner gets OWNER access, and all Users get READER access.",
}}, }},
}, { }, {
Name: "bucket_acl", Name: "bucket_acl",
Help: "Access Control List for new buckets.", Help: "Access Control List for new buckets.",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "authenticatedRead", Value: "authenticatedRead",
Help: "Project team owners get OWNER access.\nAll Authenticated Users get READER access.", Help: "Project team owners get OWNER access, and all Authenticated Users get READER access.",
}, { }, {
Value: "private", Value: "private",
Help: "Project team owners get OWNER access.\nDefault if left blank.", Help: "Project team owners get OWNER access [default if left blank].",
}, { }, {
Value: "projectPrivate", Value: "projectPrivate",
Help: "Project team members get access according to their roles.", Help: "Project team members get access according to their roles.",
}, { }, {
Value: "publicRead", Value: "publicRead",
Help: "Project team owners get OWNER access.\nAll Users get READER access.", Help: "Project team owners get OWNER access, and all Users get READER access.",
}, { }, {
Value: "publicReadWrite", Value: "publicReadWrite",
Help: "Project team owners get OWNER access.\nAll Users get WRITER access.", Help: "Project team owners get OWNER access, and all Users get WRITER access.",
}}, }},
}, { }, {
Name: "bucket_policy_only", Name: "bucket_policy_only",
@@ -165,112 +170,64 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
Help: "Location for the newly created buckets.", Help: "Location for the newly created buckets.",
Examples: []fs.OptionExample{{ Examples: []fs.OptionExample{{
Value: "", Value: "",
Help: "Empty for default location (US)", Help: "Empty for default location (US).",
}, { }, {
Value: "asia", Value: "asia",
Help: "Multi-regional location for Asia", Help: "Multi-regional location for Asia.",
}, { }, {
Value: "eu", Value: "eu",
Help: "Multi-regional location for Europe", Help: "Multi-regional location for Europe.",
}, { }, {
Value: "us", Value: "us",
Help: "Multi-regional location for United States", Help: "Multi-regional location for United States.",
}, { }, {
Value: "asia-east1", Value: "asia-east1",
Help: "Taiwan", Help: "Taiwan.",
}, { }, {
Value: "asia-east2", Value: "asia-east2",
Help: "Hong Kong", Help: "Hong Kong.",
}, { }, {
Value: "asia-northeast1", Value: "asia-northeast1",
Help: "Tokyo", Help: "Tokyo.",
}, {
Value: "asia-northeast2",
Help: "Osaka",
}, {
Value: "asia-northeast3",
Help: "Seoul",
}, { }, {
Value: "asia-south1", Value: "asia-south1",
Help: "Mumbai", Help: "Mumbai.",
}, {
Value: "asia-south2",
Help: "Delhi",
}, { }, {
Value: "asia-southeast1", Value: "asia-southeast1",
Help: "Singapore", Help: "Singapore.",
}, {
Value: "asia-southeast2",
Help: "Jakarta",
}, { }, {
Value: "australia-southeast1", Value: "australia-southeast1",
Help: "Sydney", Help: "Sydney.",
}, {
Value: "australia-southeast2",
Help: "Melbourne",
}, { }, {
Value: "europe-north1", Value: "europe-north1",
Help: "Finland", Help: "Finland.",
}, { }, {
Value: "europe-west1", Value: "europe-west1",
Help: "Belgium", Help: "Belgium.",
}, { }, {
Value: "europe-west2", Value: "europe-west2",
Help: "London", Help: "London.",
}, { }, {
Value: "europe-west3", Value: "europe-west3",
Help: "Frankfurt", Help: "Frankfurt.",
}, { }, {
Value: "europe-west4", Value: "europe-west4",
Help: "Netherlands", Help: "Netherlands.",
}, {
Value: "europe-west6",
Help: "Zürich",
}, {
Value: "europe-central2",
Help: "Warsaw",
}, { }, {
Value: "us-central1", Value: "us-central1",
Help: "Iowa", Help: "Iowa.",
}, { }, {
Value: "us-east1", Value: "us-east1",
Help: "South Carolina", Help: "South Carolina.",
}, { }, {
Value: "us-east4", Value: "us-east4",
Help: "Northern Virginia", Help: "Northern Virginia.",
}, { }, {
Value: "us-west1", Value: "us-west1",
Help: "Oregon", Help: "Oregon.",
}, { }, {
Value: "us-west2", Value: "us-west2",
Help: "California", Help: "California.",
}, {
Value: "us-west3",
Help: "Salt Lake City",
}, {
Value: "us-west4",
Help: "Las Vegas",
}, {
Value: "northamerica-northeast1",
Help: "Montréal",
}, {
Value: "northamerica-northeast2",
Help: "Toronto",
}, {
Value: "southamerica-east1",
Help: "São Paulo",
}, {
Value: "southamerica-west1",
Help: "Santiago",
}, {
Value: "asia1",
Help: "Dual region: asia-northeast1 and asia-northeast2.",
}, {
Value: "eur4",
Help: "Dual region: europe-north1 and europe-west4.",
}, {
Value: "nam4",
Help: "Dual region: us-central1 and us-east1.",
}}, }},
}, { }, {
Name: "storage_class", Name: "storage_class",
@@ -297,32 +254,6 @@ Docs: https://cloud.google.com/storage/docs/bucket-policy-only
Value: "DURABLE_REDUCED_AVAILABILITY", Value: "DURABLE_REDUCED_AVAILABILITY",
Help: "Durable reduced availability storage class", Help: "Durable reduced availability storage class",
}}, }},
}, {
Name: "no_check_bucket",
Help: `If set, don't attempt to check the bucket exists or create it.
This can be useful when trying to minimise the number of transactions
rclone does if you know the bucket exists already.
`,
Default: false,
Advanced: true,
}, {
Name: "decompress",
Help: `If set this will decompress gzip encoded objects.
It is possible to upload objects to GCS with "Content-Encoding: gzip"
set. Normally rclone will download these files as compressed objects.
If this flag is set then rclone will decompress these files with
"Content-Encoding: gzip" as they are received. This means that rclone
can't check the size and hash but the file contents will be decompressed.
`,
Advanced: true,
Default: false,
}, {
Name: "endpoint",
Help: "Endpoint for the service.\n\nLeave blank normally.",
Advanced: true,
}, { }, {
Name: config.ConfigEncoding, Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp, Help: config.ConfigEncodingHelp,
@@ -330,7 +261,7 @@ can't check the size and hash but the file contents will be decompressed.
Default: (encoder.Base | Default: (encoder.Base |
encoder.EncodeCrLf | encoder.EncodeCrLf |
encoder.EncodeInvalidUtf8), encoder.EncodeInvalidUtf8),
}}...), }},
}) })
} }
@@ -345,25 +276,21 @@ type Options struct {
BucketPolicyOnly bool `config:"bucket_policy_only"` BucketPolicyOnly bool `config:"bucket_policy_only"`
Location string `config:"location"` Location string `config:"location"`
StorageClass string `config:"storage_class"` StorageClass string `config:"storage_class"`
NoCheckBucket bool `config:"no_check_bucket"`
Decompress bool `config:"decompress"`
Endpoint string `config:"endpoint"`
Enc encoder.MultiEncoder `config:"encoding"` Enc encoder.MultiEncoder `config:"encoding"`
} }
// Fs represents a remote storage server // Fs represents a remote storage server
type Fs struct { type Fs struct {
name string // name of this remote name string // name of this remote
root string // the path we are working on if any root string // the path we are working on if any
opt Options // parsed options opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
svc *storage.Service // the connection to the storage server svc *storage.Service // the connection to the storage server
client *http.Client // authorized client client *http.Client // authorized client
rootBucket string // bucket part of root (if any) rootBucket string // bucket part of root (if any)
rootDirectory string // directory part of root (if any) rootDirectory string // directory part of root (if any)
cache *bucket.Cache // cache of bucket status cache *bucket.Cache // cache of bucket status
pacer *fs.Pacer // To pace the API calls pacer *fs.Pacer // To pace the API calls
warnCompressed sync.Once // warn once about compressed files
} }
// Object describes a storage object // Object describes a storage object
@@ -377,7 +304,6 @@ type Object struct {
bytes int64 // Bytes in the object bytes int64 // Bytes in the object
modTime time.Time // Modified time of the object modTime time.Time // Modified time of the object
mimeType string mimeType string
gzipped bool // set if object has Content-Encoding: gzip
} }
// ------------------------------------------------------------ // ------------------------------------------------------------
@@ -395,7 +321,7 @@ func (f *Fs) Root() string {
// String converts this Fs to a string // String converts this Fs to a string
func (f *Fs) String() string { func (f *Fs) String() string {
if f.rootBucket == "" { if f.rootBucket == "" {
return "GCS root" return fmt.Sprintf("GCS root")
} }
if f.rootDirectory == "" { if f.rootDirectory == "" {
return fmt.Sprintf("GCS bucket %s", f.rootBucket) return fmt.Sprintf("GCS bucket %s", f.rootBucket)
@@ -409,10 +335,7 @@ func (f *Fs) Features() *fs.Features {
} }
// shouldRetry determines whether a given err rates being retried // shouldRetry determines whether a given err rates being retried
func shouldRetry(ctx context.Context, err error) (again bool, errOut error) { func shouldRetry(err error) (again bool, errOut error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
again = false again = false
if err != nil { if err != nil {
if fserrors.ShouldRetry(err) { if fserrors.ShouldRetry(err) {
@@ -453,12 +376,12 @@ func (o *Object) split() (bucket, bucketPath string) {
return o.fs.split(o.remote) return o.fs.split(o.remote)
} }
func getServiceAccountClient(ctx context.Context, credentialsData []byte) (*http.Client, error) { func getServiceAccountClient(credentialsData []byte) (*http.Client, error) {
conf, err := google.JWTConfigFromJSON(credentialsData, storageConfig.Scopes...) conf, err := google.JWTConfigFromJSON(credentialsData, storageConfig.Scopes...)
if err != nil { if err != nil {
return nil, fmt.Errorf("error processing credentials: %w", err) return nil, errors.Wrap(err, "error processing credentials")
} }
ctxWithSpecialClient := oauthutil.Context(ctx, fshttp.NewClient(ctx)) ctxWithSpecialClient := oauthutil.Context(fshttp.NewClient(fs.Config))
return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil return oauth2.NewClient(ctxWithSpecialClient, conf.TokenSource(ctxWithSpecialClient)), nil
} }
@@ -469,7 +392,8 @@ func (f *Fs) setRoot(root string) {
} }
// NewFs constructs an Fs from the path, bucket:path // NewFs constructs an Fs from the path, bucket:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
ctx := context.TODO()
var oAuthClient *http.Client var oAuthClient *http.Client
// Parse config into Options struct // Parse config into Options struct
@@ -487,26 +411,26 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
// try loading service account credentials from env variable, then from a file // try loading service account credentials from env variable, then from a file
if opt.ServiceAccountCredentials == "" && opt.ServiceAccountFile != "" { if opt.ServiceAccountCredentials == "" && opt.ServiceAccountFile != "" {
loadedCreds, err := os.ReadFile(env.ShellExpand(opt.ServiceAccountFile)) loadedCreds, err := ioutil.ReadFile(env.ShellExpand(opt.ServiceAccountFile))
if err != nil { if err != nil {
return nil, fmt.Errorf("error opening service account credentials file: %w", err) return nil, errors.Wrap(err, "error opening service account credentials file")
} }
opt.ServiceAccountCredentials = string(loadedCreds) opt.ServiceAccountCredentials = string(loadedCreds)
} }
if opt.Anonymous { if opt.Anonymous {
oAuthClient = fshttp.NewClient(ctx) oAuthClient = &http.Client{}
} else if opt.ServiceAccountCredentials != "" { } else if opt.ServiceAccountCredentials != "" {
oAuthClient, err = getServiceAccountClient(ctx, []byte(opt.ServiceAccountCredentials)) oAuthClient, err = getServiceAccountClient([]byte(opt.ServiceAccountCredentials))
if err != nil { if err != nil {
return nil, fmt.Errorf("failed configuring Google Cloud Storage Service Account: %w", err) return nil, errors.Wrap(err, "failed configuring Google Cloud Storage Service Account")
} }
} else { } else {
oAuthClient, _, err = oauthutil.NewClient(ctx, name, m, storageConfig) oAuthClient, _, err = oauthutil.NewClient(name, m, storageConfig)
if err != nil { if err != nil {
ctx := context.Background() ctx := context.Background()
oAuthClient, err = google.DefaultClient(ctx, storage.DevstorageFullControlScope) oAuthClient, err = google.DefaultClient(ctx, storage.DevstorageFullControlScope)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure Google Cloud Storage: %w", err) return nil, errors.Wrap(err, "failed to configure Google Cloud Storage")
} }
} }
} }
@@ -515,7 +439,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
name: name, name: name,
root: root, root: root,
opt: *opt, opt: *opt,
pacer: fs.NewPacer(ctx, pacer.NewS3(pacer.MinSleep(minSleep))), pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
cache: bucket.NewCache(), cache: bucket.NewCache(),
} }
f.setRoot(root) f.setRoot(root)
@@ -524,17 +448,13 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
WriteMimeType: true, WriteMimeType: true,
BucketBased: true, BucketBased: true,
BucketBasedRootOK: true, BucketBasedRootOK: true,
}).Fill(ctx, f) }).Fill(f)
// Create a new authorized Drive client. // Create a new authorized Drive client.
f.client = oAuthClient f.client = oAuthClient
gcsOpts := []option.ClientOption{option.WithHTTPClient(f.client)} f.svc, err = storage.New(f.client)
if opt.Endpoint != "" {
gcsOpts = append(gcsOpts, option.WithEndpoint(opt.Endpoint))
}
f.svc, err = storage.NewService(context.Background(), gcsOpts...)
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't create Google Cloud Storage client: %w", err) return nil, errors.Wrap(err, "couldn't create Google Cloud Storage client")
} }
if f.rootBucket != "" && f.rootDirectory != "" { if f.rootBucket != "" && f.rootDirectory != "" {
@@ -542,7 +462,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory) encodedDirectory := f.opt.Enc.FromStandardPath(f.rootDirectory)
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do() _, err = f.svc.Objects.Get(f.rootBucket, encodedDirectory).Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err == nil { if err == nil {
newRoot := path.Dir(f.root) newRoot := path.Dir(f.root)
@@ -589,7 +509,7 @@ type listFn func(remote string, object *storage.Object, isDirectory bool) error
// //
// dir is the starting directory, "" for root // dir is the starting directory, "" for root
// //
// Set recurse to read sub directories. // Set recurse to read sub directories
// //
// The remote has prefix removed from it and if addBucket is set // The remote has prefix removed from it and if addBucket is set
// then it adds the bucket to the start. // then it adds the bucket to the start.
@@ -608,7 +528,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
var objects *storage.Objects var objects *storage.Objects
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
objects, err = list.Context(ctx).Do() objects, err = list.Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
if gErr, ok := err.(*googleapi.Error); ok { if gErr, ok := err.(*googleapi.Error); ok {
@@ -651,7 +571,7 @@ func (f *Fs) list(ctx context.Context, bucket, directory, prefix string, addBuck
remote = path.Join(bucket, remote) remote = path.Join(bucket, remote)
} }
// is this a directory marker? // is this a directory marker?
if isDirectory { if isDirectory && object.Size == 0 {
continue // skip directory marker continue // skip directory marker
} }
err = fn(remote, object, false) err = fn(remote, object, false)
@@ -711,7 +631,7 @@ func (f *Fs) listBuckets(ctx context.Context) (entries fs.DirEntries, err error)
var buckets *storage.Buckets var buckets *storage.Buckets
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
buckets, err = listBuckets.Context(ctx).Do() buckets, err = listBuckets.Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -807,7 +727,7 @@ func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (
// Put the object into the bucket // Put the object into the bucket
// //
// Copy the reader in to the new object which is returned. // Copy the reader in to the new object which is returned
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
@@ -837,17 +757,17 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
// service account that only has the "Storage Object Admin" role. See #2193 for details. // service account that only has the "Storage Object Admin" role. See #2193 for details.
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
_, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do() _, err = f.svc.Objects.List(bucket).MaxResults(1).Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err == nil { if err == nil {
// Bucket already exists // Bucket already exists
return nil return nil
} else if gErr, ok := err.(*googleapi.Error); ok { } else if gErr, ok := err.(*googleapi.Error); ok {
if gErr.Code != http.StatusNotFound { if gErr.Code != http.StatusNotFound {
return fmt.Errorf("failed to get bucket: %w", err) return errors.Wrap(err, "failed to get bucket")
} }
} else { } else {
return fmt.Errorf("failed to get bucket: %w", err) return errors.Wrap(err, "failed to get bucket")
} }
if f.opt.ProjectNumber == "" { if f.opt.ProjectNumber == "" {
@@ -872,19 +792,11 @@ func (f *Fs) makeBucket(ctx context.Context, bucket string) (err error) {
insertBucket.PredefinedAcl(f.opt.BucketACL) insertBucket.PredefinedAcl(f.opt.BucketACL)
} }
_, err = insertBucket.Context(ctx).Do() _, err = insertBucket.Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
}, nil) }, nil)
} }
// checkBucket creates the bucket if it doesn't exist unless NoCheckBucket is true
func (f *Fs) checkBucket(ctx context.Context, bucket string) error {
if f.opt.NoCheckBucket {
return nil
}
return f.makeBucket(ctx, bucket)
}
// Rmdir deletes the bucket if the fs is at the root // Rmdir deletes the bucket if the fs is at the root
// //
// Returns an error if it isn't empty: Error 409: The bucket you tried // Returns an error if it isn't empty: Error 409: The bucket you tried
@@ -897,7 +809,7 @@ func (f *Fs) Rmdir(ctx context.Context, dir string) (err error) {
return f.cache.Remove(bucket, func() error { return f.cache.Remove(bucket, func() error {
return f.pacer.Call(func() (bool, error) { return f.pacer.Call(func() (bool, error) {
err = f.svc.Buckets.Delete(bucket).Context(ctx).Do() err = f.svc.Buckets.Delete(bucket).Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
}) })
} }
@@ -907,18 +819,18 @@ func (f *Fs) Precision() time.Duration {
return time.Nanosecond return time.Nanosecond
} }
// Copy src to this remote using server-side copy operations. // Copy src to this remote using server side copy operations.
// //
// This is stored with the remote path given. // This is stored with the remote path given
// //
// It returns the destination Object and a possible error. // It returns the destination Object and a possible error
// //
// Will only be called if src.Fs().Name() == f.Name() // Will only be called if src.Fs().Name() == f.Name()
// //
// If it isn't possible then return fs.ErrorCantCopy // If it isn't possible then return fs.ErrorCantCopy
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) { func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
dstBucket, dstPath := f.split(remote) dstBucket, dstPath := f.split(remote)
err := f.checkBucket(ctx, dstBucket) err := f.makeBucket(ctx, dstBucket)
if err != nil { if err != nil {
return nil, err return nil, err
} }
@@ -935,27 +847,20 @@ func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object,
remote: remote, remote: remote,
} }
rewriteRequest := f.svc.Objects.Rewrite(srcBucket, srcPath, dstBucket, dstPath, nil) var newObject *storage.Object
if !f.opt.BucketPolicyOnly { err = f.pacer.Call(func() (bool, error) {
rewriteRequest.DestinationPredefinedAcl(f.opt.ObjectACL) copyObject := f.svc.Objects.Copy(srcBucket, srcPath, dstBucket, dstPath, nil)
} if !f.opt.BucketPolicyOnly {
var rewriteResponse *storage.RewriteResponse copyObject.DestinationPredefinedAcl(f.opt.ObjectACL)
for {
err = f.pacer.Call(func() (bool, error) {
rewriteResponse, err = rewriteRequest.Context(ctx).Do()
return shouldRetry(ctx, err)
})
if err != nil {
return nil, err
} }
if rewriteResponse.Done { newObject, err = copyObject.Context(ctx).Do()
break return shouldRetry(err)
} })
rewriteRequest.RewriteToken(rewriteResponse.RewriteToken) if err != nil {
fs.Debugf(dstObj, "Continuing rewrite %d bytes done", rewriteResponse.TotalBytesRewritten) return nil, err
} }
// Set the metadata for the new object while we have it // Set the metadata for the new object while we have it
dstObj.setMetaData(rewriteResponse.Resource) dstObj.setMetaData(newObject)
return dstObj, nil return dstObj, nil
} }
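With both columns of the split view run together, the Copy hunk above is hard to follow, so here is the idea in isolation: the newer code drops the single Objects.Copy call and drives Objects.Rewrite in a loop, feeding the rewrite token back until the service reports the copy as done, which lets large or cross-location copies finish over several requests. A minimal sketch of that loop, assuming an already configured *storage.Service from google.golang.org/api/storage/v1; package and function names are illustrative, and rclone's pacer, retries and ACL handling are left out.

package gcsexample

import (
	"context"

	storage "google.golang.org/api/storage/v1"
)

// serverSideCopy copies srcBucket/srcPath to dstBucket/dstPath with the
// Rewrite API. One Rewrite call may not finish the copy, so the token from
// each response is passed back in until the response says Done.
func serverSideCopy(ctx context.Context, svc *storage.Service, srcBucket, srcPath, dstBucket, dstPath string) (*storage.Object, error) {
	call := svc.Objects.Rewrite(srcBucket, srcPath, dstBucket, dstPath, nil)
	for {
		resp, err := call.Context(ctx).Do()
		if err != nil {
			return nil, err
		}
		if resp.Done {
			return resp.Resource, nil // metadata of the newly written object
		}
		call = call.RewriteToken(resp.RewriteToken)
	}
}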
@@ -1002,7 +907,6 @@ func (o *Object) setMetaData(info *storage.Object) {
o.url = info.MediaLink o.url = info.MediaLink
o.bytes = int64(info.Size) o.bytes = int64(info.Size)
o.mimeType = info.ContentType o.mimeType = info.ContentType
o.gzipped = info.ContentEncoding == "gzip"
// Read md5sum // Read md5sum
md5sumData, err := base64.StdEncoding.DecodeString(info.Md5Hash) md5sumData, err := base64.StdEncoding.DecodeString(info.Md5Hash)
@@ -1015,7 +919,7 @@ func (o *Object) setMetaData(info *storage.Object) {
// read mtime out of metadata if available // read mtime out of metadata if available
mtimeString, ok := info.Metadata[metaMtime] mtimeString, ok := info.Metadata[metaMtime]
if ok { if ok {
modTime, err := time.Parse(timeFormat, mtimeString) modTime, err := time.Parse(timeFormatIn, mtimeString)
if err == nil { if err == nil {
o.modTime = modTime o.modTime = modTime
return return
@@ -1023,30 +927,13 @@ func (o *Object) setMetaData(info *storage.Object) {
fs.Debugf(o, "Failed to read mtime from metadata: %s", err) fs.Debugf(o, "Failed to read mtime from metadata: %s", err)
} }
// Fallback to GSUtil mtime
mtimeGsutilString, ok := info.Metadata[metaMtimeGsutil]
if ok {
unixTimeSec, err := strconv.ParseInt(mtimeGsutilString, 10, 64)
if err == nil {
o.modTime = time.Unix(unixTimeSec, 0)
return
}
fs.Debugf(o, "Failed to read GSUtil mtime from metadata: %s", err)
}
// Fallback to the Updated time // Fallback to the Updated time
modTime, err := time.Parse(timeFormat, info.Updated) modTime, err := time.Parse(timeFormatIn, info.Updated)
if err != nil { if err != nil {
fs.Logf(o, "Bad time decode: %v", err) fs.Logf(o, "Bad time decode: %v", err)
} else { } else {
o.modTime = modTime o.modTime = modTime
} }
// If gunzipping then size and md5sum are unknown
if o.gzipped && o.fs.opt.Decompress {
o.bytes = -1
o.md5sum = ""
}
} }
// readObjectInfo reads the definition for an object // readObjectInfo reads the definition for an object
@@ -1054,7 +941,7 @@ func (o *Object) readObjectInfo(ctx context.Context) (object *storage.Object, er
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do() object, err = o.fs.svc.Objects.Get(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
if gErr, ok := err.(*googleapi.Error); ok { if gErr, ok := err.(*googleapi.Error); ok {
@@ -1098,8 +985,7 @@ func (o *Object) ModTime(ctx context.Context) time.Time {
// Returns metadata for an object // Returns metadata for an object
func metadataFromModTime(modTime time.Time) map[string]string { func metadataFromModTime(modTime time.Time) map[string]string {
metadata := make(map[string]string, 1) metadata := make(map[string]string, 1)
metadata[metaMtime] = modTime.Format(timeFormat) metadata[metaMtime] = modTime.Format(timeFormatOut)
metadata[metaMtimeGsutil] = strconv.FormatInt(modTime.Unix(), 10)
return metadata return metadata
} }
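As this hunk shows, the newer code stores the modification time under two metadata keys: rclone's own key in RFC 3339 form and a second key holding whole seconds since the epoch so that gsutil can read it as well. A sketch of the resulting map follows; the literal key names and time format are assumptions standing in for the metaMtime, metaMtimeGsutil and timeFormat constants referenced in the diff, and the package/function names are illustrative.

package gcsexample

import (
	"strconv"
	"time"
)

// metadataForModTime mirrors what the newer backend attaches on upload.
// "mtime" and "goog-reserved-file-mtime" are assumed values for the
// metaMtime and metaMtimeGsutil constants shown in the hunk above.
func metadataForModTime(modTime time.Time) map[string]string {
	return map[string]string{
		"mtime":                    modTime.Format(time.RFC3339Nano),
		"goog-reserved-file-mtime": strconv.FormatInt(modTime.Unix(), 10),
	}
}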
@@ -1111,11 +997,11 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error)
return err return err
} }
// Add the mtime to the existing metadata // Add the mtime to the existing metadata
mtime := modTime.Format(timeFormatOut)
if object.Metadata == nil { if object.Metadata == nil {
object.Metadata = make(map[string]string, 1) object.Metadata = make(map[string]string, 1)
} }
object.Metadata[metaMtime] = modTime.Format(timeFormat) object.Metadata[metaMtime] = mtime
object.Metadata[metaMtimeGsutil] = strconv.FormatInt(modTime.Unix(), 10)
// Copy the object to itself to update the metadata // Copy the object to itself to update the metadata
// Using PATCH requires too many permissions // Using PATCH requires too many permissions
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
@@ -1126,7 +1012,7 @@ func (o *Object) SetModTime(ctx context.Context, modTime time.Time) (err error)
copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL) copyObject.DestinationPredefinedAcl(o.fs.opt.ObjectACL)
} }
newObject, err = copyObject.Context(ctx).Do() newObject, err = copyObject.Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1142,23 +1028,12 @@ func (o *Object) Storable() bool {
// Open an object for read // Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) { func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
req, err := http.NewRequestWithContext(ctx, "GET", o.url, nil) req, err := http.NewRequest("GET", o.url, nil)
if err != nil { if err != nil {
return nil, err return nil, err
} }
req = req.WithContext(ctx) // go1.13 can use NewRequestWithContext
fs.FixRangeOption(options, o.bytes) fs.FixRangeOption(options, o.bytes)
if o.gzipped && !o.fs.opt.Decompress {
// Allow files which are stored on the cloud storage system
// compressed to be downloaded without being decompressed. Note
// that setting this here overrides the automatic decompression
// in the Transport.
//
// See: https://cloud.google.com/storage/docs/transcoding
req.Header.Set("Accept-Encoding", "gzip")
o.fs.warnCompressed.Do(func() {
fs.Logf(o, "Not decompressing 'Content-Encoding: gzip' compressed file. Use --gcs-decompress to override")
})
}
fs.OpenOptionAddHTTPHeaders(req.Header, options) fs.OpenOptionAddHTTPHeaders(req.Header, options)
var res *http.Response var res *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
@@ -1169,7 +1044,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
_ = res.Body.Close() // ignore error _ = res.Body.Close() // ignore error
} }
} }
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1177,7 +1052,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
_, isRanging := req.Header["Range"] _, isRanging := req.Header["Range"]
if !(res.StatusCode == http.StatusOK || (isRanging && res.StatusCode == http.StatusPartialContent)) { if !(res.StatusCode == http.StatusOK || (isRanging && res.StatusCode == http.StatusPartialContent)) {
_ = res.Body.Close() // ignore error _ = res.Body.Close() // ignore error
return nil, fmt.Errorf("bad response: %d: %s", res.StatusCode, res.Status) return nil, errors.Errorf("bad response: %d: %s", res.StatusCode, res.Status)
} }
return res.Body, nil return res.Body, nil
} }
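The block removed in this hunk documents the transcoding behaviour the newer code relies on: objects stored with Content-Encoding: gzip are decompressed transparently by the Go transport, so when --gcs-decompress is not set the request asks for the raw stored bytes by setting Accept-Encoding itself (see https://cloud.google.com/storage/docs/transcoding). A small sketch of that request, assuming a plain http.Client and the object's media link URL; range handling, pacing and retries are omitted and the names are illustrative.

package gcsexample

import (
	"context"
	"io"
	"net/http"
)

// openStored fetches url without transparent gunzipping. Setting the
// Accept-Encoding header explicitly disables the automatic decompression in
// http.Transport, so a gzip-encoded object arrives exactly as stored.
func openStored(ctx context.Context, client *http.Client, url string) (io.ReadCloser, error) {
	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept-Encoding", "gzip")
	res, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	return res.Body, nil
}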
@@ -1187,7 +1062,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error { func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
err := o.fs.checkBucket(ctx, bucket) err := o.fs.makeBucket(ctx, bucket)
if err != nil { if err != nil {
return err return err
} }
@@ -1216,8 +1091,6 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
object.ContentLanguage = value object.ContentLanguage = value
case "content-type": case "content-type":
object.ContentType = value object.ContentType = value
case "x-goog-storage-class":
object.StorageClass = value
default: default:
const googMetaPrefix = "x-goog-meta-" const googMetaPrefix = "x-goog-meta-"
if strings.HasPrefix(lowerKey, googMetaPrefix) { if strings.HasPrefix(lowerKey, googMetaPrefix) {
@@ -1235,7 +1108,7 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
insertObject.PredefinedAcl(o.fs.opt.ObjectACL) insertObject.PredefinedAcl(o.fs.opt.ObjectACL)
} }
newObject, err = insertObject.Context(ctx).Do() newObject, err = insertObject.Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
if err != nil { if err != nil {
return err return err
@@ -1250,7 +1123,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
bucket, bucketPath := o.split() bucket, bucketPath := o.split()
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do() err = o.fs.svc.Objects.Delete(bucket, bucketPath).Context(ctx).Do()
return shouldRetry(ctx, err) return shouldRetry(err)
}) })
return err return err
} }


@@ -1,4 +1,3 @@
-// Package api provides types used by the Google Photos API.
 package api
 
 import (


@@ -6,9 +6,9 @@ package googlephotos
import ( import (
"context" "context"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io" "io"
golog "log"
"net/http" "net/http"
"net/url" "net/url"
"path" "path"
@@ -18,6 +18,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/googlephotos/api" "github.com/rclone/rclone/backend/googlephotos/api"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config" "github.com/rclone/rclone/fs/config"
@@ -29,7 +30,6 @@ import (
"github.com/rclone/rclone/fs/fshttp" "github.com/rclone/rclone/fs/fshttp"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/log" "github.com/rclone/rclone/fs/log"
"github.com/rclone/rclone/lib/encoder"
"github.com/rclone/rclone/lib/oauthutil" "github.com/rclone/rclone/lib/oauthutil"
"github.com/rclone/rclone/lib/pacer" "github.com/rclone/rclone/lib/pacer"
"github.com/rclone/rclone/lib/rest" "github.com/rclone/rclone/lib/rest"
@@ -55,7 +55,6 @@ const (
minSleep = 10 * time.Millisecond minSleep = 10 * time.Millisecond
scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly" scopeReadOnly = "https://www.googleapis.com/auth/photoslibrary.readonly"
scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary" scopeReadWrite = "https://www.googleapis.com/auth/photoslibrary"
scopeAccess = 2 // position of access scope in list
) )
var ( var (
@@ -64,12 +63,12 @@ var (
Scopes: []string{ Scopes: []string{
"openid", "openid",
"profile", "profile",
scopeReadWrite, // this must be at position scopeAccess scopeReadWrite,
}, },
Endpoint: google.Endpoint, Endpoint: google.Endpoint,
ClientID: rcloneClientID, ClientID: rcloneClientID,
ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret), ClientSecret: obscure.MustReveal(rcloneEncryptedClientSecret),
RedirectURL: oauthutil.RedirectURL, RedirectURL: oauthutil.TitleBarRedirectURL,
} }
) )
@@ -80,38 +79,44 @@ func init() {
Prefix: "gphotos", Prefix: "gphotos",
Description: "Google Photos", Description: "Google Photos",
NewFs: NewFs, NewFs: NewFs,
Config: func(ctx context.Context, name string, m configmap.Mapper, config fs.ConfigIn) (*fs.ConfigOut, error) { Config: func(name string, m configmap.Mapper) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't parse config into struct: %w", err) fs.Errorf(nil, "Couldn't parse config into struct: %v", err)
return
} }
switch config.State { // Fill in the scopes
case "": if opt.ReadOnly {
// Fill in the scopes oauthConfig.Scopes[0] = scopeReadOnly
if opt.ReadOnly { } else {
oauthConfig.Scopes[scopeAccess] = scopeReadOnly oauthConfig.Scopes[0] = scopeReadWrite
} else {
oauthConfig.Scopes[scopeAccess] = scopeReadWrite
}
return oauthutil.ConfigOut("warning", &oauthutil.Options{
OAuth2Config: oauthConfig,
})
case "warning":
// Warn the user as required by google photos integration
return fs.ConfigConfirm("warning_done", true, "config_warning", `Warning
IMPORTANT: All media items uploaded to Google Photos with rclone
are stored in full resolution at original quality. These uploads
will count towards storage in your Google Account.`)
case "warning_done":
return nil, nil
} }
return nil, fmt.Errorf("unknown state %q", config.State)
// Do the oauth
err = oauthutil.Config("google photos", name, m, oauthConfig, nil)
if err != nil {
golog.Fatalf("Failed to configure token: %v", err)
}
// Warn the user
fmt.Print(`
*** IMPORTANT: All media items uploaded to Google Photos with rclone
*** are stored in full resolution at original quality. These uploads
*** will count towards storage in your Google Account.
`)
}, },
Options: append(oauthutil.SharedOptions, []fs.Option{{ Options: []fs.Option{{
Name: config.ConfigClientID,
Help: "Google Application Client Id\nLeave blank normally.",
}, {
Name: config.ConfigClientSecret,
Help: "Google Application Client Secret\nLeave blank normally.",
}, {
Name: "read_only", Name: "read_only",
Default: false, Default: false,
Help: `Set to make the Google Photos backend read only. Help: `Set to make the Google Photos backend read only.
@@ -132,43 +137,17 @@ you want to read the media.`,
}, { }, {
Name: "start_year", Name: "start_year",
Default: 2000, Default: 2000,
Help: `Year limits the photos to be downloaded to those which are uploaded after the given year.`, Help: `Year limits the photos to be downloaded to those which are uploaded after the given year`,
Advanced: true, Advanced: true,
}, { }},
Name: "include_archived",
Default: false,
Help: `Also view and download archived media.
By default, rclone does not request archived media. Thus, when syncing,
archived media is not visible in directory listings or transferred.
Note that media in albums is always visible and synced, no matter
their archive status.
With this flag, archived media are always visible in directory
listings and transferred.
Without this flag, archived media will not be visible in directory
listings and won't be transferred.`,
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Base |
encoder.EncodeCrLf |
encoder.EncodeInvalidUtf8),
}}...),
}) })
} }
// Options defines the configuration for this backend // Options defines the configuration for this backend
type Options struct { type Options struct {
ReadOnly bool `config:"read_only"` ReadOnly bool `config:"read_only"`
ReadSize bool `config:"read_size"` ReadSize bool `config:"read_size"`
StartYear int `config:"start_year"` StartYear int `config:"start_year"`
IncludeArchived bool `config:"include_archived"`
Enc encoder.MultiEncoder `config:"encoding"`
} }
// Fs represents a remote storage server // Fs represents a remote storage server
@@ -178,7 +157,7 @@ type Fs struct {
opt Options // parsed options opt Options // parsed options
features *fs.Features // optional features features *fs.Features // optional features
unAuth *rest.Client // unauthenticated http client unAuth *rest.Client // unauthenticated http client
srv *rest.Client // the connection to the server srv *rest.Client // the connection to the one drive server
ts *oauthutil.TokenSource // token source for oauth2 ts *oauthutil.TokenSource // token source for oauth2
pacer *fs.Pacer // To pace the API calls pacer *fs.Pacer // To pace the API calls
startTime time.Time // time Fs was started - used for datestamps startTime time.Time // time Fs was started - used for datestamps
@@ -234,10 +213,6 @@ func (f *Fs) startYear() int {
return f.opt.StartYear return f.opt.StartYear
} }
func (f *Fs) includeArchived() bool {
return f.opt.IncludeArchived
}
// retryErrorCodes is a slice of error codes that we will retry // retryErrorCodes is a slice of error codes that we will retry
var retryErrorCodes = []int{ var retryErrorCodes = []int{
429, // Too Many Requests. 429, // Too Many Requests.
@@ -250,10 +225,7 @@ var retryErrorCodes = []int{
// shouldRetry returns a boolean as to whether this resp and err // shouldRetry returns a boolean as to whether this resp and err
// deserve to be retried. It returns the err as a convenience // deserve to be retried. It returns the err as a convenience
func shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) { func shouldRetry(resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
} }
@@ -281,7 +253,7 @@ func errorHandler(resp *http.Response) error {
} }
// NewFs constructs an Fs from the path, bucket:path // NewFs constructs an Fs from the path, bucket:path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) { func NewFs(name, root string, m configmap.Mapper) (fs.Fs, error) {
// Parse config into Options struct // Parse config into Options struct
opt := new(Options) opt := new(Options)
err := configstruct.Set(m, opt) err := configstruct.Set(m, opt)
@@ -289,10 +261,10 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
return nil, err return nil, err
} }
baseClient := fshttp.NewClient(ctx) baseClient := fshttp.NewClient(fs.Config)
oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(ctx, name, m, oauthConfig, baseClient) oAuthClient, ts, err := oauthutil.NewClientWithBaseClient(name, m, oauthConfig, baseClient)
if err != nil { if err != nil {
return nil, fmt.Errorf("failed to configure Box: %w", err) return nil, errors.Wrap(err, "failed to configure Box")
} }
root = strings.Trim(path.Clean(root), "/") root = strings.Trim(path.Clean(root), "/")
@@ -307,14 +279,14 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
unAuth: rest.NewClient(baseClient), unAuth: rest.NewClient(baseClient),
srv: rest.NewClient(oAuthClient).SetRoot(rootURL), srv: rest.NewClient(oAuthClient).SetRoot(rootURL),
ts: ts, ts: ts,
pacer: fs.NewPacer(ctx, pacer.NewGoogleDrive(pacer.MinSleep(minSleep))), pacer: fs.NewPacer(pacer.NewGoogleDrive(pacer.MinSleep(minSleep))),
startTime: time.Now(), startTime: time.Now(),
albums: map[bool]*albums{}, albums: map[bool]*albums{},
uploaded: dirtree.New(), uploaded: dirtree.New(),
} }
f.features = (&fs.Features{ f.features = (&fs.Features{
ReadMimeType: true, ReadMimeType: true,
}).Fill(ctx, f) }).Fill(f)
f.srv.SetErrorHandler(errorHandler) f.srv.SetErrorHandler(errorHandler)
_, _, pattern := patterns.match(f.root, "", true) _, _, pattern := patterns.match(f.root, "", true)
@@ -323,7 +295,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, e
var leaf string var leaf string
f.root, leaf = path.Split(f.root) f.root, leaf = path.Split(f.root)
f.root = strings.TrimRight(f.root, "/") f.root = strings.TrimRight(f.root, "/")
_, err := f.NewObject(ctx, leaf) _, err := f.NewObject(context.TODO(), leaf)
if err == nil { if err == nil {
return f, fs.ErrorIsFile return f, fs.ErrorIsFile
} }
@@ -342,16 +314,16 @@ func (f *Fs) fetchEndpoint(ctx context.Context, name string) (endpoint string, e
var openIDconfig map[string]interface{} var openIDconfig map[string]interface{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig) resp, err := f.unAuth.CallJSON(ctx, &opts, nil, &openIDconfig)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return "", fmt.Errorf("couldn't read openID config: %w", err) return "", errors.Wrap(err, "couldn't read openID config")
} }
// Find userinfo endpoint // Find userinfo endpoint
endpoint, ok := openIDconfig[name].(string) endpoint, ok := openIDconfig[name].(string)
if !ok { if !ok {
return "", fmt.Errorf("couldn't find %q from openID config", name) return "", errors.Errorf("couldn't find %q from openID config", name)
} }
return endpoint, nil return endpoint, nil
@@ -371,10 +343,10 @@ func (f *Fs) UserInfo(ctx context.Context) (userInfo map[string]string, err erro
} }
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo) resp, err := f.srv.CallJSON(ctx, &opts, nil, &userInfo)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't read user info: %w", err) return nil, errors.Wrap(err, "couldn't read user info")
} }
return userInfo, nil return userInfo, nil
} }
@@ -402,10 +374,10 @@ func (f *Fs) Disconnect(ctx context.Context) (err error) {
var res interface{} var res interface{}
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err := f.srv.CallJSON(ctx, &opts, nil, &res) resp, err := f.srv.CallJSON(ctx, &opts, nil, &res)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("couldn't revoke token: %w", err) return errors.Wrap(err, "couldn't revoke token")
} }
fs.Infof(f, "res = %+v", res) fs.Infof(f, "res = %+v", res)
return nil return nil
@@ -489,10 +461,10 @@ func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err erro
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result) resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't list albums: %w", err) return nil, errors.Wrap(err, "couldn't list albums")
} }
newAlbums := result.Albums newAlbums := result.Albums
if shared { if shared {
@@ -506,9 +478,7 @@ func (f *Fs) listAlbums(ctx context.Context, shared bool) (all *albums, err erro
lastID = newAlbums[len(newAlbums)-1].ID lastID = newAlbums[len(newAlbums)-1].ID
} }
for i := range newAlbums { for i := range newAlbums {
anAlbum := newAlbums[i] all.add(&newAlbums[i])
anAlbum.Title = f.opt.Enc.FromStandardPath(anAlbum.Title)
all.add(&anAlbum)
} }
if result.NextPageToken == "" { if result.NextPageToken == "" {
break break
@@ -534,22 +504,16 @@ func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err
} }
filter.PageSize = listChunks filter.PageSize = listChunks
filter.PageToken = "" filter.PageToken = ""
if filter.AlbumID == "" { // album ID and filters cannot be set together, else error 400 INVALID_ARGUMENT
if filter.Filters == nil {
filter.Filters = &api.Filters{}
}
filter.Filters.IncludeArchivedMedia = &f.opt.IncludeArchived
}
lastID := "" lastID := ""
for { for {
var result api.MediaItems var result api.MediaItems
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result) resp, err = f.srv.CallJSON(ctx, &opts, &filter, &result)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("couldn't list files: %w", err) return errors.Wrap(err, "couldn't list files")
} }
items := result.MediaItems items := result.MediaItems
if len(items) > 0 && items[0].ID == lastID { if len(items) > 0 && items[0].ID == lastID {
@@ -562,7 +526,7 @@ func (f *Fs) list(ctx context.Context, filter api.SearchFilter, fn listFn) (err
for i := range items { for i := range items {
item := &result.MediaItems[i] item := &result.MediaItems[i]
remote := item.Filename remote := item.Filename
remote = strings.ReplaceAll(remote, "/", "") remote = strings.Replace(remote, "/", "", -1)
err = fn(remote, item, false) err = fn(remote, item, false)
if err != nil { if err != nil {
return err return err
@@ -661,7 +625,7 @@ func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err e
// Put the object into the bucket // Put the object into the bucket
// //
// Copy the reader in to the new object which is returned. // Copy the reader in to the new object which is returned
// //
// The new object may have been created if an error is returned // The new object may have been created if an error is returned
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) { func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
@@ -690,10 +654,10 @@ func (f *Fs) createAlbum(ctx context.Context, albumTitle string) (album *api.Alb
var resp *http.Response var resp *http.Response
err = f.pacer.Call(func() (bool, error) { err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, request, &result) resp, err = f.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, fmt.Errorf("couldn't create album: %w", err) return nil, errors.Wrap(err, "couldn't create album")
} }
f.albums[false].add(&result) f.albums[false].add(&result)
return &result, nil return &result, nil
@@ -825,7 +789,7 @@ func (o *Object) Size() int64 {
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
fs.Debugf(o, "Reading size failed: %v", err) fs.Debugf(o, "Reading size failed: %v", err)
@@ -876,10 +840,10 @@ func (o *Object) readMetaData(ctx context.Context) (err error) {
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item) resp, err = o.fs.srv.CallJSON(ctx, &opts, nil, &item)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("couldn't get media item: %w", err) return errors.Wrap(err, "couldn't get media item")
} }
o.setMetaData(&item) o.setMetaData(&item)
return nil return nil
@@ -953,7 +917,7 @@ func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.Read
} }
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return nil, err return nil, err
@@ -1008,13 +972,13 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
err = o.fs.pacer.CallNoRetry(func() (bool, error) { err = o.fs.pacer.CallNoRetry(func() (bool, error) {
resp, err = o.fs.srv.Call(ctx, &opts) resp, err = o.fs.srv.Call(ctx, &opts)
if err != nil { if err != nil {
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
} }
token, err = rest.ReadBody(resp) token, err = rest.ReadBody(resp)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("couldn't upload file: %w", err) return errors.Wrap(err, "couldn't upload file")
} }
uploadToken := strings.TrimSpace(string(token)) uploadToken := strings.TrimSpace(string(token))
if uploadToken == "" { if uploadToken == "" {
@@ -1039,17 +1003,17 @@ func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, op
var result api.BatchCreateResponse var result api.BatchCreateResponse
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result) resp, err = o.fs.srv.CallJSON(ctx, &opts, request, &result)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("failed to create media item: %w", err) return errors.Wrap(err, "failed to create media item")
} }
if len(result.NewMediaItemResults) != 1 { if len(result.NewMediaItemResults) != 1 {
return errors.New("bad response to BatchCreate wrong number of items") return errors.New("bad response to BatchCreate wrong number of items")
} }
mediaItemResult := result.NewMediaItemResults[0] mediaItemResult := result.NewMediaItemResults[0]
if mediaItemResult.Status.Code != 0 { if mediaItemResult.Status.Code != 0 {
return fmt.Errorf("upload failed: %s (%d)", mediaItemResult.Status.Message, mediaItemResult.Status.Code) return errors.Errorf("upload failed: %s (%d)", mediaItemResult.Status.Message, mediaItemResult.Status.Code)
} }
o.setMetaData(&mediaItemResult.MediaItem) o.setMetaData(&mediaItemResult.MediaItem)
@@ -1071,7 +1035,7 @@ func (o *Object) Remove(ctx context.Context) (err error) {
albumTitle, fileName := match[1], match[2] albumTitle, fileName := match[1], match[2]
album, ok := o.fs.albums[false].get(albumTitle) album, ok := o.fs.albums[false].get(albumTitle)
if !ok { if !ok {
return fmt.Errorf("couldn't file %q in album %q for delete", fileName, albumTitle) return errors.Errorf("couldn't file %q in album %q for delete", fileName, albumTitle)
} }
opts := rest.Opts{ opts := rest.Opts{
Method: "POST", Method: "POST",
@@ -1084,10 +1048,10 @@ func (o *Object) Remove(ctx context.Context) (err error) {
var resp *http.Response var resp *http.Response
err = o.fs.pacer.Call(func() (bool, error) { err = o.fs.pacer.Call(func() (bool, error) {
resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil) resp, err = o.fs.srv.CallJSON(ctx, &opts, &request, nil)
return shouldRetry(ctx, resp, err) return shouldRetry(resp, err)
}) })
if err != nil { if err != nil {
return fmt.Errorf("couldn't delete item from album: %w", err) return errors.Wrap(err, "couldn't delete item from album")
} }
return nil return nil
} }


@@ -3,7 +3,7 @@ package googlephotos
import ( import (
"context" "context"
"fmt" "fmt"
"io" "io/ioutil"
"net/http" "net/http"
"path" "path"
"testing" "testing"
@@ -12,6 +12,7 @@ import (
_ "github.com/rclone/rclone/backend/local" _ "github.com/rclone/rclone/backend/local"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash" "github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest" "github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/lib/random" "github.com/rclone/rclone/lib/random"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
@@ -34,14 +35,14 @@ func TestIntegration(t *testing.T) {
if *fstest.RemoteName == "" { if *fstest.RemoteName == "" {
*fstest.RemoteName = "TestGooglePhotos:" *fstest.RemoteName = "TestGooglePhotos:"
} }
f, err := fs.NewFs(ctx, *fstest.RemoteName) f, err := fs.NewFs(*fstest.RemoteName)
if err == fs.ErrorNotFoundInConfigFile { if err == fs.ErrorNotFoundInConfigFile {
t.Skipf("Couldn't create google photos backend - skipping tests: %v", err) t.Skip(fmt.Sprintf("Couldn't create google photos backend - skipping tests: %v", err))
} }
require.NoError(t, err) require.NoError(t, err)
// Create local Fs pointing at testfiles // Create local Fs pointing at testfiles
localFs, err := fs.NewFs(ctx, "testfiles") localFs, err := fs.NewFs("testfiles")
require.NoError(t, err) require.NoError(t, err)
t.Run("CreateAlbum", func(t *testing.T) { t.Run("CreateAlbum", func(t *testing.T) {
@@ -55,7 +56,7 @@ func TestIntegration(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
in, err := srcObj.Open(ctx) in, err := srcObj.Open(ctx)
require.NoError(t, err) require.NoError(t, err)
dstObj, err := f.Put(ctx, in, fs.NewOverrideRemote(srcObj, remote)) dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote))
require.NoError(t, err) require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote()) assert.Equal(t, remote, dstObj.Remote())
_ = in.Close() _ = in.Close()
@@ -98,7 +99,7 @@ func TestIntegration(t *testing.T) {
t.Run("ObjectOpen", func(t *testing.T) { t.Run("ObjectOpen", func(t *testing.T) {
in, err := dstObj.Open(ctx) in, err := dstObj.Open(ctx)
require.NoError(t, err) require.NoError(t, err)
buf, err := io.ReadAll(in) buf, err := ioutil.ReadAll(in)
require.NoError(t, err) require.NoError(t, err)
require.NoError(t, in.Close()) require.NoError(t, in.Close())
assert.True(t, len(buf) > 1000) assert.True(t, len(buf) > 1000)
@@ -114,7 +115,7 @@ func TestIntegration(t *testing.T) {
assert.Equal(t, "2013-07-26 08:57:21 +0000 UTC", entries[0].ModTime(ctx).String()) assert.Equal(t, "2013-07-26 08:57:21 +0000 UTC", entries[0].ModTime(ctx).String())
}) })
// Check it is there in the date/month/year hierarchy // Check it is there in the date/month/year heirachy
// 2013-07-13 is the creation date of the folder // 2013-07-13 is the creation date of the folder
checkPresent := func(t *testing.T, objPath string) { checkPresent := func(t *testing.T, objPath string) {
entries, err := f.List(ctx, objPath) entries, err := f.List(ctx, objPath)
@@ -154,7 +155,7 @@ func TestIntegration(t *testing.T) {
}) })
t.Run("NewFsIsFile", func(t *testing.T) { t.Run("NewFsIsFile", func(t *testing.T) {
fNew, err := fs.NewFs(ctx, *fstest.RemoteName+remote) fNew, err := fs.NewFs(*fstest.RemoteName + remote)
assert.Equal(t, fs.ErrorIsFile, err) assert.Equal(t, fs.ErrorIsFile, err)
leaf := path.Base(remote) leaf := path.Base(remote)
o, err := fNew.NewObject(ctx, leaf) o, err := fNew.NewObject(ctx, leaf)
@@ -220,7 +221,7 @@ func TestIntegration(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
in, err := srcObj.Open(ctx) in, err := srcObj.Open(ctx)
require.NoError(t, err) require.NoError(t, err)
dstObj, err := f.Put(ctx, in, fs.NewOverrideRemote(srcObj, remote)) dstObj, err := f.Put(ctx, in, operations.NewOverrideRemote(srcObj, remote))
require.NoError(t, err) require.NoError(t, err)
assert.Equal(t, remote, dstObj.Remote()) assert.Equal(t, remote, dstObj.Remote())
_ = in.Close() _ = in.Close()


@@ -11,6 +11,7 @@ import (
"strings" "strings"
"time" "time"
"github.com/pkg/errors"
"github.com/rclone/rclone/backend/googlephotos/api" "github.com/rclone/rclone/backend/googlephotos/api"
"github.com/rclone/rclone/fs" "github.com/rclone/rclone/fs"
) )
@@ -23,7 +24,6 @@ type lister interface {
listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error) listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error)
dirTime() time.Time dirTime() time.Time
startYear() int startYear() int
includeArchived() bool
} }
// dirPattern describes a single directory pattern // dirPattern describes a single directory pattern
@@ -269,7 +269,7 @@ func days(ctx context.Context, f lister, prefix string, match []string) (entries
year := match[1] year := match[1]
current, err := time.Parse("2006", year) current, err := time.Parse("2006", year)
if err != nil { if err != nil {
return nil, fmt.Errorf("bad year %q", match[1]) return nil, errors.Errorf("bad year %q", match[1])
} }
currentYear := current.Year() currentYear := current.Year()
for current.Year() == currentYear { for current.Year() == currentYear {
@@ -283,7 +283,7 @@ func days(ctx context.Context, f lister, prefix string, match []string) (entries
func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.SearchFilter, err error) { func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.SearchFilter, err error) {
year, err := strconv.Atoi(match[1]) year, err := strconv.Atoi(match[1])
if err != nil || year < 1000 || year > 3000 { if err != nil || year < 1000 || year > 3000 {
return sf, fmt.Errorf("bad year %q", match[1]) return sf, errors.Errorf("bad year %q", match[1])
} }
sf = api.SearchFilter{ sf = api.SearchFilter{
Filters: &api.Filters{ Filters: &api.Filters{
@@ -299,14 +299,14 @@ func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.S
if len(match) >= 3 { if len(match) >= 3 {
month, err := strconv.Atoi(match[2]) month, err := strconv.Atoi(match[2])
if err != nil || month < 1 || month > 12 { if err != nil || month < 1 || month > 12 {
return sf, fmt.Errorf("bad month %q", match[2]) return sf, errors.Errorf("bad month %q", match[2])
} }
sf.Filters.DateFilter.Dates[0].Month = month sf.Filters.DateFilter.Dates[0].Month = month
} }
if len(match) >= 4 { if len(match) >= 4 {
day, err := strconv.Atoi(match[3]) day, err := strconv.Atoi(match[3])
if err != nil || day < 1 || day > 31 { if err != nil || day < 1 || day > 31 {
return sf, fmt.Errorf("bad day %q", match[3]) return sf, errors.Errorf("bad day %q", match[3])
} }
sf.Filters.DateFilter.Dates[0].Day = day sf.Filters.DateFilter.Dates[0].Day = day
} }
@@ -315,7 +315,7 @@ func yearMonthDayFilter(ctx context.Context, f lister, match []string) (sf api.S
// featureFilter creates a filter for the Feature enum // featureFilter creates a filter for the Feature enum
// //
// The API only supports one feature, FAVORITES, so hardcode that feature. // The API only supports one feature, FAVORITES, so hardcode that feature
// //
// https://developers.google.com/photos/library/reference/rest/v1/mediaItems/search#FeatureFilter // https://developers.google.com/photos/library/reference/rest/v1/mediaItems/search#FeatureFilter
func featureFilter(ctx context.Context, f lister, match []string) (sf api.SearchFilter) { func featureFilter(ctx context.Context, f lister, match []string) (sf api.SearchFilter) {


@@ -50,7 +50,7 @@ func (f *testLister) listAlbums(ctx context.Context, shared bool) (all *albums,
 
 // mock listUploads for testing
 func (f *testLister) listUploads(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
-	entries = f.uploaded[dir]
+	entries, _ = f.uploaded[dir]
 	return entries, nil
 }
 
@@ -64,11 +64,6 @@ func (f *testLister) startYear() int {
 	return 2000
 }
 
-// mock includeArchived for testing
-func (f *testLister) includeArchived() bool {
-	return false
-}
-
 func TestPatternMatch(t *testing.T) {
 	for testNumber, test := range []struct {
 		// input


@@ -1,180 +0,0 @@
package hasher
import (
"context"
"errors"
"fmt"
"path"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/kv"
)
// Command the backend to run a named command
//
// The command run is name
// args may be used to read arguments from
// opts may be used to read optional arguments from
//
// The result should be capable of being JSON encoded
// If it is a string or a []string it will be shown to the user
// otherwise it will be JSON encoded and shown to the user like that
func (f *Fs) Command(ctx context.Context, name string, arg []string, opt map[string]string) (out interface{}, err error) {
switch name {
case "drop":
return nil, f.db.Stop(true)
case "dump", "fulldump":
return nil, f.dbDump(ctx, name == "fulldump", "")
case "import", "stickyimport":
sticky := name == "stickyimport"
if len(arg) != 2 {
return nil, errors.New("please provide checksum type and path to sum file")
}
return nil, f.dbImport(ctx, arg[0], arg[1], sticky)
default:
return nil, fs.ErrorCommandNotFound
}
}
var commandHelp = []fs.CommandHelp{{
Name: "drop",
Short: "Drop cache",
Long: `Completely drop checksum cache.
Usage Example:
rclone backend drop hasher:
`,
}, {
Name: "dump",
Short: "Dump the database",
Long: "Dump cache records covered by the current remote",
}, {
Name: "fulldump",
Short: "Full dump of the database",
Long: "Dump all cache records in the database",
}, {
Name: "import",
Short: "Import a SUM file",
Long: `Amend hash cache from a SUM file and bind checksums to files by size/time.
Usage Example:
rclone backend import hasher:subdir md5 /path/to/sum.md5
`,
}, {
Name: "stickyimport",
Short: "Perform fast import of a SUM file",
Long: `Fill hash cache from a SUM file without verifying file fingerprints.
Usage Example:
rclone backend stickyimport hasher:subdir md5 remote:path/to/sum.md5
`,
}}
func (f *Fs) dbDump(ctx context.Context, full bool, root string) error {
if root == "" {
remoteFs, err := cache.Get(ctx, f.opt.Remote)
if err != nil {
return err
}
root = fspath.JoinRootPath(remoteFs.Root(), f.Root())
}
op := &kvDump{
full: full,
root: root,
path: f.db.Path(),
fs: f,
}
err := f.db.Do(false, op)
if err == kv.ErrEmpty {
fs.Infof(op.path, "empty")
err = nil
}
return err
}
func (f *Fs) dbImport(ctx context.Context, hashName, sumRemote string, sticky bool) error {
var hashType hash.Type
if err := hashType.Set(hashName); err != nil {
return err
}
if hashType == hash.None {
return errors.New("please provide a valid hash type")
}
if !f.suppHashes.Contains(hashType) {
return errors.New("unsupported hash type")
}
if !f.keepHashes.Contains(hashType) {
fs.Infof(nil, "Need not import hashes of this type")
return nil
}
_, sumPath, err := fspath.SplitFs(sumRemote)
if err != nil {
return err
}
sumFs, err := cache.Get(ctx, sumRemote)
switch err {
case fs.ErrorIsFile:
// ok
case nil:
return fmt.Errorf("not a file: %s", sumRemote)
default:
return err
}
sumObj, err := sumFs.NewObject(ctx, path.Base(sumPath))
if err != nil {
return fmt.Errorf("cannot open sum file: %w", err)
}
hashes, err := operations.ParseSumFile(ctx, sumObj)
if err != nil {
return fmt.Errorf("failed to parse sum file: %w", err)
}
if sticky {
rootPath := f.Fs.Root()
for remote, hashVal := range hashes {
key := path.Join(rootPath, remote)
hashSums := operations.HashSums{hashName: hashVal}
if err := f.putRawHashes(ctx, key, anyFingerprint, hashSums); err != nil {
fs.Errorf(nil, "%s: failed to import: %v", remote, err)
}
}
fs.Infof(nil, "Summary: %d checksum(s) imported", len(hashes))
return nil
}
const longImportThreshold = 100
if len(hashes) > longImportThreshold {
fs.Infof(nil, "Importing %d checksums. Please wait...", len(hashes))
}
doneCount := 0
err = operations.ListFn(ctx, f, func(obj fs.Object) {
remote := obj.Remote()
hash := hashes[remote]
hashes[remote] = "" // mark as handled
o, ok := obj.(*Object)
if ok && hash != "" {
if err := o.putHashes(ctx, hashMap{hashType: hash}); err != nil {
fs.Errorf(nil, "%s: failed to import: %v", remote, err)
}
accounting.Stats(ctx).NewCheckingTransfer(obj, "importing").Done(ctx, err)
doneCount++
}
})
if err != nil {
fs.Errorf(nil, "Import failed: %v", err)
}
skipCount := 0
for remote, emptyOrDone := range hashes {
if emptyOrDone != "" {
fs.Infof(nil, "Skip vanished object: %s", remote)
skipCount++
}
}
fs.Infof(nil, "Summary: %d imported, %d skipped", doneCount, skipCount)
return err
}
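The import and stickyimport commands above hand the named file to operations.ParseSumFile, i.e. a sum file in the usual md5sum/sha1sum layout of one "<hex digest>  <path>" pair per line. A simplified sketch of reading that layout into the path-to-digest map the code works with; the real parser handles more edge cases than this, and the package and function names here are illustrative.

package hasherexample

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// parseSumFile reads md5sum-style lines into a map of path -> hex digest.
// Deliberately minimal: blank lines are skipped and anything without the
// two-space separator is treated as an error.
func parseSumFile(r io.Reader) (map[string]string, error) {
	sums := make(map[string]string)
	scanner := bufio.NewScanner(r)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		if line == "" {
			continue
		}
		parts := strings.SplitN(line, "  ", 2)
		if len(parts) != 2 {
			return nil, fmt.Errorf("malformed checksum line: %q", line)
		}
		sums[parts[1]] = parts[0]
	}
	return sums, scanner.Err()
}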


@@ -1,530 +0,0 @@
// Package hasher implements a checksum handling overlay backend
package hasher
import (
"context"
"encoding/gob"
"errors"
"fmt"
"io"
"path"
"strings"
"sync"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/cache"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/fspath"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/kv"
)
// Register with Fs
func init() {
fs.Register(&fs.RegInfo{
Name: "hasher",
Description: "Better checksums for other remotes",
NewFs: NewFs,
MetadataInfo: &fs.MetadataInfo{
Help: `Any metadata supported by the underlying remote is read and written.`,
},
CommandHelp: commandHelp,
Options: []fs.Option{{
Name: "remote",
Required: true,
Help: "Remote to cache checksums for (e.g. myRemote:path).",
}, {
Name: "hashes",
Default: fs.CommaSepList{"md5", "sha1"},
Advanced: false,
Help: "Comma separated list of supported checksum types.",
}, {
Name: "max_age",
Advanced: false,
Default: fs.DurationOff,
Help: "Maximum time to keep checksums in cache (0 = no cache, off = cache forever).",
}, {
Name: "auto_size",
Advanced: true,
Default: fs.SizeSuffix(0),
Help: "Auto-update checksum for files smaller than this size (disabled by default).",
}},
})
}
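Taken together, the options registered above describe a remote of type "hasher" that wraps another remote and caches the listed checksum types. An illustrative, made-up rclone.conf entry using only those options and their documented defaults might look like:

[cached-sums]
type = hasher
remote = myRemote:path
hashes = md5,sha1
max_age = off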
// Options defines the configuration for this backend
type Options struct {
Remote string `config:"remote"`
Hashes fs.CommaSepList `config:"hashes"`
AutoSize fs.SizeSuffix `config:"auto_size"`
MaxAge fs.Duration `config:"max_age"`
}
// Fs represents a wrapped fs.Fs
type Fs struct {
fs.Fs
name string
root string
wrapper fs.Fs
features *fs.Features
opt *Options
db *kv.DB
// fingerprinting
fpTime bool // true if using time in fingerprints
fpHash hash.Type // hash type to use in fingerprints or None
// hash types triaged by groups
suppHashes hash.Set // all supported checksum types
passHashes hash.Set // passed directly to the base without caching
slowHashes hash.Set // passed to the base and then cached
autoHashes hash.Set // calculated in-house and cached
keepHashes hash.Set // checksums to keep in cache (slow + auto)
}
var warnExperimental sync.Once
// NewFs constructs an Fs from the remote:path string
func NewFs(ctx context.Context, fsname, rpath string, cmap configmap.Mapper) (fs.Fs, error) {
if !kv.Supported() {
return nil, errors.New("hasher is not supported on this OS")
}
warnExperimental.Do(func() {
fs.Infof(nil, "Hasher is EXPERIMENTAL!")
})
opt := &Options{}
err := configstruct.Set(cmap, opt)
if err != nil {
return nil, err
}
if strings.HasPrefix(opt.Remote, fsname+":") {
return nil, errors.New("can't point remote at itself")
}
remotePath := fspath.JoinRootPath(opt.Remote, rpath)
baseFs, err := cache.Get(ctx, remotePath)
if err != nil && err != fs.ErrorIsFile {
return nil, fmt.Errorf("failed to derive base remote %q: %w", opt.Remote, err)
}
f := &Fs{
Fs: baseFs,
name: fsname,
root: rpath,
opt: opt,
}
baseFeatures := baseFs.Features()
f.fpTime = baseFs.Precision() != fs.ModTimeNotSupported
if baseFeatures.SlowHash {
f.slowHashes = f.Fs.Hashes()
} else {
f.passHashes = f.Fs.Hashes()
f.fpHash = f.passHashes.GetOne()
}
f.suppHashes = f.passHashes
f.suppHashes.Add(f.slowHashes.Array()...)
for _, hashName := range opt.Hashes {
var ht hash.Type
if err := ht.Set(hashName); err != nil {
return nil, fmt.Errorf("invalid token %q in hash string %q", hashName, opt.Hashes.String())
}
if !f.slowHashes.Contains(ht) {
f.autoHashes.Add(ht)
}
f.keepHashes.Add(ht)
f.suppHashes.Add(ht)
}
fs.Debugf(f, "Groups by usage: cached %s, passed %s, auto %s, slow %s, supported %s",
f.keepHashes, f.passHashes, f.autoHashes, f.slowHashes, f.suppHashes)
var nilSet hash.Set
if f.keepHashes == nilSet {
return nil, errors.New("configured hash_names have nothing to keep in cache")
}
if f.opt.MaxAge > 0 {
gob.Register(hashRecord{})
db, err := kv.Start(ctx, "hasher", f.Fs)
if err != nil {
return nil, err
}
f.db = db
}
stubFeatures := &fs.Features{
CanHaveEmptyDirectories: true,
IsLocal: true,
ReadMimeType: true,
WriteMimeType: true,
SetTier: true,
GetTier: true,
ReadMetadata: true,
WriteMetadata: true,
UserMetadata: true,
}
f.features = stubFeatures.Fill(ctx, f).Mask(ctx, f.Fs).WrapsFs(f, f.Fs)
cache.PinUntilFinalized(f.Fs, f)
return f, err
}
//
// Filesystem
//
// Name of the remote (as passed into NewFs)
func (f *Fs) Name() string { return f.name }
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string { return f.root }
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features { return f.features }
// Hashes returns the supported hash sets.
func (f *Fs) Hashes() hash.Set { return f.suppHashes }
// String returns a description of the FS
// The "hasher::" prefix is a distinctive feature.
func (f *Fs) String() string {
return fmt.Sprintf("hasher::%s:%s", f.name, f.root)
}
// UnWrap returns the Fs that this Fs is wrapping
func (f *Fs) UnWrap() fs.Fs { return f.Fs }
// WrapFs returns the Fs that is wrapping this Fs
func (f *Fs) WrapFs() fs.Fs { return f.wrapper }
// SetWrapper sets the Fs that is wrapping this Fs
func (f *Fs) SetWrapper(wrapper fs.Fs) { f.wrapper = wrapper }
// Wrap base entries into hasher entries.
func (f *Fs) wrapEntries(baseEntries fs.DirEntries) (hashEntries fs.DirEntries, err error) {
hashEntries = baseEntries[:0] // work inplace
for _, entry := range baseEntries {
switch x := entry.(type) {
case fs.Object:
obj, err := f.wrapObject(x, nil)
if err != nil {
return nil, err
}
hashEntries = append(hashEntries, obj)
default:
hashEntries = append(hashEntries, entry) // trash in - trash out
}
}
return hashEntries, nil
}
// List the objects and directories in dir into entries.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
if entries, err = f.Fs.List(ctx, dir); err != nil {
return nil, err
}
return f.wrapEntries(entries)
}
// ListR lists the objects and directories recursively into out.
func (f *Fs) ListR(ctx context.Context, dir string, callback fs.ListRCallback) (err error) {
return f.Fs.Features().ListR(ctx, dir, func(baseEntries fs.DirEntries) error {
hashEntries, err := f.wrapEntries(baseEntries)
if err != nil {
return err
}
return callback(hashEntries)
})
}
// Purge a directory
func (f *Fs) Purge(ctx context.Context, dir string) error {
if do := f.Fs.Features().Purge; do != nil {
if err := do(ctx, dir); err != nil {
return err
}
err := f.db.Do(true, &kvPurge{
dir: path.Join(f.Fs.Root(), dir),
})
if err != nil {
fs.Errorf(f, "Failed to purge some hashes: %v", err)
}
return nil
}
return fs.ErrorCantPurge
}
// PutStream uploads to the remote path with indeterminate size.
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if do := f.Fs.Features().PutStream; do != nil {
_ = f.pruneHash(src.Remote())
oResult, err := do(ctx, in, src, options...)
return f.wrapObject(oResult, err)
}
return nil, errors.New("PutStream not supported")
}
// PutUnchecked uploads the object, allowing duplicates.
func (f *Fs) PutUnchecked(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
if do := f.Fs.Features().PutUnchecked; do != nil {
_ = f.pruneHash(src.Remote())
oResult, err := do(ctx, in, src, options...)
return f.wrapObject(oResult, err)
}
return nil, errors.New("PutUnchecked not supported")
}
// pruneHash deletes hash for a path
func (f *Fs) pruneHash(remote string) error {
return f.db.Do(true, &kvPrune{
key: path.Join(f.Fs.Root(), remote),
})
}
// CleanUp the trash in the Fs
func (f *Fs) CleanUp(ctx context.Context) error {
if do := f.Fs.Features().CleanUp; do != nil {
return do(ctx)
}
return errors.New("not supported by underlying remote")
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
if do := f.Fs.Features().About; do != nil {
return do(ctx)
}
return nil, errors.New("not supported by underlying remote")
}
// ChangeNotify calls the passed function with a path that has had changes.
func (f *Fs) ChangeNotify(ctx context.Context, notifyFunc func(string, fs.EntryType), pollIntervalChan <-chan time.Duration) {
if do := f.Fs.Features().ChangeNotify; do != nil {
do(ctx, notifyFunc, pollIntervalChan)
}
}
// UserInfo returns info about the connected user
func (f *Fs) UserInfo(ctx context.Context) (map[string]string, error) {
if do := f.Fs.Features().UserInfo; do != nil {
return do(ctx)
}
return nil, fs.ErrorNotImplemented
}
// Disconnect the current user
func (f *Fs) Disconnect(ctx context.Context) error {
if do := f.Fs.Features().Disconnect; do != nil {
return do(ctx)
}
return fs.ErrorNotImplemented
}
// MergeDirs merges the contents of all the directories passed
// in into the first one and rmdirs the other directories.
func (f *Fs) MergeDirs(ctx context.Context, dirs []fs.Directory) error {
if do := f.Fs.Features().MergeDirs; do != nil {
return do(ctx, dirs)
}
return errors.New("MergeDirs not supported")
}
// DirCacheFlush resets the directory cache - used in testing
// as an optional interface
func (f *Fs) DirCacheFlush() {
if do := f.Fs.Features().DirCacheFlush; do != nil {
do()
}
}
// PublicLink generates a public link to the remote path (usually readable by anyone)
func (f *Fs) PublicLink(ctx context.Context, remote string, expire fs.Duration, unlink bool) (string, error) {
if do := f.Fs.Features().PublicLink; do != nil {
return do(ctx, remote, expire, unlink)
}
return "", errors.New("PublicLink not supported")
}
// Copy src to this remote using server-side copy operations.
func (f *Fs) Copy(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.Fs.Features().Copy
if do == nil {
return nil, fs.ErrorCantCopy
}
o, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantCopy
}
oResult, err := do(ctx, o.Object, remote)
return f.wrapObject(oResult, err)
}
// Move src to this remote using server-side move operations.
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
do := f.Fs.Features().Move
if do == nil {
return nil, fs.ErrorCantMove
}
o, ok := src.(*Object)
if !ok {
return nil, fs.ErrorCantMove
}
oResult, err := do(ctx, o.Object, remote)
if err != nil {
return nil, err
}
_ = f.db.Do(true, &kvMove{
src: path.Join(f.Fs.Root(), src.Remote()),
dst: path.Join(f.Fs.Root(), remote),
dir: false,
fs: f,
})
return f.wrapObject(oResult, nil)
}
// DirMove moves src, srcRemote to this remote at dstRemote using server-side move operations.
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) error {
do := f.Fs.Features().DirMove
if do == nil {
return fs.ErrorCantDirMove
}
srcFs, ok := src.(*Fs)
if !ok {
return fs.ErrorCantDirMove
}
err := do(ctx, srcFs.Fs, srcRemote, dstRemote)
if err == nil {
_ = f.db.Do(true, &kvMove{
src: path.Join(srcFs.Fs.Root(), srcRemote),
dst: path.Join(f.Fs.Root(), dstRemote),
dir: true,
fs: f,
})
}
return err
}
// Shutdown the backend, closing any background tasks and any cached connections.
func (f *Fs) Shutdown(ctx context.Context) (err error) {
err = f.db.Stop(false)
if do := f.Fs.Features().Shutdown; do != nil {
if err2 := do(ctx); err2 != nil {
err = err2
}
}
return
}
// NewObject finds the Object at remote.
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
o, err := f.Fs.NewObject(ctx, remote)
return f.wrapObject(o, err)
}
//
// Object
//
// Object represents a hasher object wrapping the base object
type Object struct {
fs.Object
f *Fs
}
// Wrap base object into hasher object
func (f *Fs) wrapObject(o fs.Object, err error) (obj fs.Object, outErr error) {
// log.Trace(o, "err=%v", err)("obj=%#v, outErr=%v", &obj, &outErr)
if err != nil {
return nil, err
}
if o == nil {
return nil, fs.ErrorObjectNotFound
}
return &Object{Object: o, f: f}, nil
}
// Fs returns read only access to the Fs that this object is part of
func (o *Object) Fs() fs.Info { return o.f }
// UnWrap returns the wrapped Object
func (o *Object) UnWrap() fs.Object { return o.Object }
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Object.String()
}
// ID returns the ID of the Object if possible
func (o *Object) ID() string {
if doer, ok := o.Object.(fs.IDer); ok {
return doer.ID()
}
return ""
}
// GetTier returns the Tier of the Object if possible
func (o *Object) GetTier() string {
if doer, ok := o.Object.(fs.GetTierer); ok {
return doer.GetTier()
}
return ""
}
// SetTier set the Tier of the Object if possible
func (o *Object) SetTier(tier string) error {
if doer, ok := o.Object.(fs.SetTierer); ok {
return doer.SetTier(tier)
}
return errors.New("SetTier not supported")
}
// MimeType of an Object if known, "" otherwise
func (o *Object) MimeType(ctx context.Context) string {
if doer, ok := o.Object.(fs.MimeTyper); ok {
return doer.MimeType(ctx)
}
return ""
}
// Metadata returns metadata for an object
//
// It should return nil if there is no Metadata
func (o *Object) Metadata(ctx context.Context) (fs.Metadata, error) {
do, ok := o.Object.(fs.Metadataer)
if !ok {
return nil, nil
}
return do.Metadata(ctx)
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.Copier = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
_ fs.Commander = (*Fs)(nil)
_ fs.PutUncheckeder = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.CleanUpper = (*Fs)(nil)
_ fs.UnWrapper = (*Fs)(nil)
_ fs.ListRer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Wrapper = (*Fs)(nil)
_ fs.MergeDirser = (*Fs)(nil)
_ fs.DirCacheFlusher = (*Fs)(nil)
_ fs.ChangeNotifier = (*Fs)(nil)
_ fs.PublicLinker = (*Fs)(nil)
_ fs.UserInfoer = (*Fs)(nil)
_ fs.Disconnecter = (*Fs)(nil)
_ fs.Shutdowner = (*Fs)(nil)
_ fs.FullObject = (*Object)(nil)
)


@@ -1,78 +0,0 @@
package hasher
import (
"context"
"fmt"
"os"
"testing"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/obscure"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/kv"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func putFile(ctx context.Context, t *testing.T, f fs.Fs, name, data string) fs.Object {
mtime1 := fstest.Time("2001-02-03T04:05:06.499999999Z")
item := fstest.Item{Path: name, ModTime: mtime1}
o := fstests.PutTestContents(ctx, t, f, &item, data, true)
require.NotNil(t, o)
return o
}
func (f *Fs) testUploadFromCrypt(t *testing.T) {
// make a temporary local remote
tempRoot, err := fstest.LocalRemote()
require.NoError(t, err)
defer func() {
_ = os.RemoveAll(tempRoot)
}()
// make a temporary crypt remote
ctx := context.Background()
pass := obscure.MustObscure("crypt")
remote := fmt.Sprintf(`:crypt,remote="%s",password="%s":`, tempRoot, pass)
cryptFs, err := fs.NewFs(ctx, remote)
require.NoError(t, err)
// make a test file on the crypt remote
const dirName = "from_crypt_1"
const fileName = dirName + "/file_from_crypt_1"
const longTime = fs.ModTimeNotSupported
src := putFile(ctx, t, cryptFs, fileName, "doggy froggy")
// ensure that hash does not exist yet
_ = f.pruneHash(fileName)
hashType := f.keepHashes.GetOne()
hash, err := f.getRawHash(ctx, hashType, fileName, anyFingerprint, longTime)
assert.Error(t, err)
assert.Empty(t, hash)
// upload file to hasher
in, err := src.Open(ctx)
require.NoError(t, err)
dst, err := f.Put(ctx, in, src)
require.NoError(t, err)
assert.NotNil(t, dst)
// check that hash was created
hash, err = f.getRawHash(ctx, hashType, fileName, anyFingerprint, longTime)
assert.NoError(t, err)
assert.NotEmpty(t, hash)
//t.Logf("hash is %q", hash)
_ = operations.Purge(ctx, f, dirName)
}
// InternalTest dispatches all internal tests
func (f *Fs) InternalTest(t *testing.T) {
if !kv.Supported() {
t.Skip("hasher is not supported on this OS")
}
t.Run("UploadFromCrypt", f.testUploadFromCrypt)
}
var _ fstests.InternalTester = (*Fs)(nil)


@@ -1,39 +0,0 @@
package hasher_test
import (
"os"
"path/filepath"
"testing"
"github.com/rclone/rclone/backend/hasher"
"github.com/rclone/rclone/fstest"
"github.com/rclone/rclone/fstest/fstests"
"github.com/rclone/rclone/lib/kv"
_ "github.com/rclone/rclone/backend/all" // for integration tests
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
if !kv.Supported() {
t.Skip("hasher is not supported on this OS")
}
opt := fstests.Opt{
RemoteName: *fstest.RemoteName,
NilObject: (*hasher.Object)(nil),
UnimplementableFsMethods: []string{
"OpenWriterAt",
},
UnimplementableObjectMethods: []string{},
}
if *fstest.RemoteName == "" {
tempDir := filepath.Join(os.TempDir(), "rclone-hasher-test")
opt.ExtraConfig = []fstests.ExtraConfigItem{
{Name: "TestHasher", Key: "type", Value: "hasher"},
{Name: "TestHasher", Key: "remote", Value: tempDir},
}
opt.RemoteName = "TestHasher:"
opt.QuickTestOK = true
}
fstests.Run(t, &opt)
}


@@ -1,315 +0,0 @@
package hasher
import (
"bytes"
"context"
"encoding/gob"
"errors"
"fmt"
"strings"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
"github.com/rclone/rclone/lib/kv"
)
const (
timeFormat = "2006-01-02T15:04:05.000000000-0700"
anyFingerprint = "*"
)
type hashMap map[hash.Type]string
type hashRecord struct {
Fp string // fingerprint
Hashes operations.HashSums
Created time.Time
}
func (r *hashRecord) encode(key string) ([]byte, error) {
var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(r); err != nil {
fs.Debugf(key, "hasher encoding %v: %v", r, err)
return nil, err
}
return buf.Bytes(), nil
}
func (r *hashRecord) decode(key string, data []byte) error {
if err := gob.NewDecoder(bytes.NewBuffer(data)).Decode(r); err != nil {
fs.Debugf(key, "hasher decoding %q failed: %v", data, err)
return err
}
return nil
}
// kvPrune: prune a single hash
type kvPrune struct {
key string
}
func (op *kvPrune) Do(ctx context.Context, b kv.Bucket) error {
return b.Delete([]byte(op.key))
}
// kvPurge: delete a subtree
type kvPurge struct {
dir string
}
func (op *kvPurge) Do(ctx context.Context, b kv.Bucket) error {
dir := op.dir
if !strings.HasSuffix(dir, "/") {
dir += "/"
}
var items []string
cur := b.Cursor()
bkey, _ := cur.Seek([]byte(dir))
for bkey != nil {
key := string(bkey)
if !strings.HasPrefix(key, dir) {
break
}
items = append(items, key[len(dir):])
bkey, _ = cur.Next()
}
nerr := 0
for _, sub := range items {
if err := b.Delete([]byte(dir + sub)); err != nil {
nerr++
}
}
fs.Debugf(dir, "%d hashes purged, %d failed", len(items)-nerr, nerr)
return nil
}
// kvMove: assign hashes to new path
type kvMove struct {
src string
dst string
dir bool
fs *Fs
}
func (op *kvMove) Do(ctx context.Context, b kv.Bucket) error {
src, dst := op.src, op.dst
if !op.dir {
err := moveHash(b, src, dst)
fs.Debugf(op.fs, "moving cached hash %s to %s (err: %v)", src, dst, err)
return err
}
if !strings.HasSuffix(src, "/") {
src += "/"
}
if !strings.HasSuffix(dst, "/") {
dst += "/"
}
var items []string
cur := b.Cursor()
bkey, _ := cur.Seek([]byte(src))
for bkey != nil {
key := string(bkey)
if !strings.HasPrefix(key, src) {
break
}
items = append(items, key[len(src):])
bkey, _ = cur.Next()
}
nerr := 0
for _, suffix := range items {
srcKey, dstKey := src+suffix, dst+suffix
err := moveHash(b, srcKey, dstKey)
fs.Debugf(op.fs, "Rename cache record %s -> %s (err: %v)", srcKey, dstKey, err)
if err != nil {
nerr++
}
}
fs.Debugf(op.fs, "%d hashes moved, %d failed", len(items)-nerr, nerr)
return nil
}
func moveHash(b kv.Bucket, src, dst string) error {
data := b.Get([]byte(src))
err := b.Delete([]byte(src))
if err != nil || len(data) == 0 {
return err
}
return b.Put([]byte(dst), data)
}
// kvGet: get single hash from database
type kvGet struct {
key string
fp string
hash string
val string
age time.Duration
}
func (op *kvGet) Do(ctx context.Context, b kv.Bucket) error {
data := b.Get([]byte(op.key))
if len(data) == 0 {
return errors.New("no record")
}
var r hashRecord
if err := r.decode(op.key, data); err != nil {
return errors.New("invalid record")
}
if !(r.Fp == anyFingerprint || op.fp == anyFingerprint || r.Fp == op.fp) {
return errors.New("fingerprint changed")
}
if time.Since(r.Created) > op.age {
return errors.New("record timed out")
}
if r.Hashes != nil {
op.val = r.Hashes[op.hash]
}
return nil
}
// kvPut: set hashes for an object by key
type kvPut struct {
key string
fp string
hashes operations.HashSums
age time.Duration
}
func (op *kvPut) Do(ctx context.Context, b kv.Bucket) (err error) {
data := b.Get([]byte(op.key))
var r hashRecord
if len(data) > 0 {
err = r.decode(op.key, data)
if err != nil || r.Fp != op.fp || time.Since(r.Created) > op.age {
r.Hashes = nil
}
}
if len(r.Hashes) == 0 {
r.Created = time.Now()
r.Hashes = operations.HashSums{}
r.Fp = op.fp
}
for hashType, hashVal := range op.hashes {
r.Hashes[hashType] = hashVal
}
if data, err = r.encode(op.key); err != nil {
return fmt.Errorf("marshal failed: %w", err)
}
if err = b.Put([]byte(op.key), data); err != nil {
return fmt.Errorf("put failed: %w", err)
}
return err
}
// kvDump: dump the database.
// Note: long dump can cause concurrent operations to fail.
type kvDump struct {
full bool
root string
path string
fs *Fs
num int
total int
}
func (op *kvDump) Do(ctx context.Context, b kv.Bucket) error {
f, baseRoot, dbPath := op.fs, op.root, op.path
if op.full {
total := 0
num := 0
_ = b.ForEach(func(bkey, data []byte) error {
total++
key := string(bkey)
include := (baseRoot == "" || key == baseRoot || strings.HasPrefix(key, baseRoot+"/"))
var r hashRecord
if err := r.decode(key, data); err != nil {
fs.Errorf(nil, "%s: invalid record: %v", key, err)
return nil
}
fmt.Println(f.dumpLine(&r, key, include, nil))
if include {
num++
}
return nil
})
fs.Infof(dbPath, "%d records out of %d", num, total)
op.num, op.total = num, total // for unit tests
return nil
}
num := 0
cur := b.Cursor()
var bkey, data []byte
if baseRoot != "" {
bkey, data = cur.Seek([]byte(baseRoot))
} else {
bkey, data = cur.First()
}
for bkey != nil {
key := string(bkey)
if !(baseRoot == "" || key == baseRoot || strings.HasPrefix(key, baseRoot+"/")) {
break
}
var r hashRecord
if err := r.decode(key, data); err != nil {
fs.Errorf(nil, "%s: invalid record: %v", key, err)
continue
}
if key = strings.TrimPrefix(key[len(baseRoot):], "/"); key == "" {
key = "/"
}
fmt.Println(f.dumpLine(&r, key, true, nil))
num++
bkey, data = cur.Next()
}
fs.Infof(dbPath, "%d records", num)
op.num = num // for unit tests
return nil
}
func (f *Fs) dumpLine(r *hashRecord, path string, include bool, err error) string {
var status string
switch {
case !include:
status = "ext"
case err != nil:
status = "bad"
case r.Fp == anyFingerprint:
status = "stk"
default:
status = "ok "
}
var hashes []string
for _, hashType := range f.keepHashes.Array() {
hashName := hashType.String()
hashVal := r.Hashes[hashName]
if hashVal == "" || err != nil {
hashVal = "-"
}
hashVal = fmt.Sprintf("%-*s", hash.Width(hashType, false), hashVal)
hashes = append(hashes, hashName+":"+hashVal)
}
hashesStr := strings.Join(hashes, " ")
age := time.Since(r.Created).Round(time.Second)
if age > 24*time.Hour {
age = age.Round(time.Hour)
}
if err != nil {
age = 0
}
ageStr := age.String()
if strings.HasSuffix(ageStr, "h0m0s") {
ageStr = strings.TrimSuffix(ageStr, "0m0s")
}
return fmt.Sprintf("%s %s %9s %s", status, hashesStr, ageStr, path)
}
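// Editor's illustration, not part of the original source: with keepHashes set to
// {md5, sha1} a dump line produced above looks roughly like
//
//	ok  md5:d41d8cd98f00b204e9800998ecf8427e sha1:da39a3ee5e6b4b0d3255bfef95601890afd80709   2h3m10s some/dir/file.bin
//
// i.e. a status column (ok/stk/bad/ext), the cached sums padded to their hash
// widths, the age of the record and the object path (values here are made up).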


@@ -1,304 +0,0 @@
package hasher
import (
"context"
"errors"
"fmt"
"io"
"path"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/fs/operations"
)
// obtain hash for an object
func (o *Object) getHash(ctx context.Context, hashType hash.Type) (string, error) {
maxAge := time.Duration(o.f.opt.MaxAge)
if maxAge <= 0 {
return "", nil
}
fp := o.fingerprint(ctx)
if fp == "" {
return "", errors.New("fingerprint failed")
}
return o.f.getRawHash(ctx, hashType, o.Remote(), fp, maxAge)
}
// obtain hash for a path
func (f *Fs) getRawHash(ctx context.Context, hashType hash.Type, remote, fp string, age time.Duration) (string, error) {
key := path.Join(f.Fs.Root(), remote)
op := &kvGet{
key: key,
fp: fp,
hash: hashType.String(),
age: age,
}
err := f.db.Do(false, op)
return op.val, err
}
// put new hashes for an object
func (o *Object) putHashes(ctx context.Context, rawHashes hashMap) error {
if o.f.opt.MaxAge <= 0 {
return nil
}
fp := o.fingerprint(ctx)
if fp == "" {
return nil
}
key := path.Join(o.f.Fs.Root(), o.Remote())
hashes := operations.HashSums{}
for hashType, hashVal := range rawHashes {
hashes[hashType.String()] = hashVal
}
return o.f.putRawHashes(ctx, key, fp, hashes)
}
// set hashes for a path without any validation
func (f *Fs) putRawHashes(ctx context.Context, key, fp string, hashes operations.HashSums) error {
return f.db.Do(true, &kvPut{
key: key,
fp: fp,
hashes: hashes,
age: time.Duration(f.opt.MaxAge),
})
}
// Hash returns the selected checksum of the file or "" if unavailable.
func (o *Object) Hash(ctx context.Context, hashType hash.Type) (hashVal string, err error) {
f := o.f
if f.passHashes.Contains(hashType) {
fs.Debugf(o, "pass %s", hashType)
return o.Object.Hash(ctx, hashType)
}
if !f.suppHashes.Contains(hashType) {
fs.Debugf(o, "unsupp %s", hashType)
return "", hash.ErrUnsupported
}
if hashVal, err = o.getHash(ctx, hashType); err != nil {
fs.Debugf(o, "getHash: %v", err)
err = nil
hashVal = ""
}
if hashVal != "" {
fs.Debugf(o, "cached %s = %q", hashType, hashVal)
return hashVal, nil
}
if f.slowHashes.Contains(hashType) {
fs.Debugf(o, "slow %s", hashType)
hashVal, err = o.Object.Hash(ctx, hashType)
if err == nil && hashVal != "" && f.keepHashes.Contains(hashType) {
if err = o.putHashes(ctx, hashMap{hashType: hashVal}); err != nil {
fs.Debugf(o, "putHashes: %v", err)
err = nil
}
}
return hashVal, err
}
if f.autoHashes.Contains(hashType) && o.Size() < int64(f.opt.AutoSize) {
_ = o.updateHashes(ctx)
if hashVal, err = o.getHash(ctx, hashType); err != nil {
fs.Debugf(o, "auto %s = %q (%v)", hashType, hashVal, err)
err = nil
}
}
return hashVal, err
}
// updateHashes performs implicit "rclone hashsum --download" and updates cache.
func (o *Object) updateHashes(ctx context.Context) error {
r, err := o.Open(ctx)
if err != nil {
fs.Infof(o, "update failed (open): %v", err)
return err
}
defer func() {
_ = r.Close()
}()
if _, err = io.Copy(io.Discard, r); err != nil {
fs.Infof(o, "update failed (copy): %v", err)
return err
}
return nil
}
// Update the object with the given data, time and size.
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
_ = o.f.pruneHash(src.Remote())
return o.Object.Update(ctx, in, src, options...)
}
// Remove an object.
func (o *Object) Remove(ctx context.Context) error {
_ = o.f.pruneHash(o.Remote())
return o.Object.Remove(ctx)
}
// SetModTime sets the modification time of the file.
// Also prunes the cache entry when modtime changes so that
// touching a file will trigger checksum recalculation even
// on backends that don't provide modTime with fingerprint.
func (o *Object) SetModTime(ctx context.Context, mtime time.Time) error {
if mtime != o.Object.ModTime(ctx) {
_ = o.f.pruneHash(o.Remote())
}
return o.Object.SetModTime(ctx, mtime)
}
// Open opens the file for read.
// Full reads will also update object hashes.
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (r io.ReadCloser, err error) {
size := o.Size()
var offset, limit int64 = 0, -1
for _, option := range options {
switch opt := option.(type) {
case *fs.SeekOption:
offset = opt.Offset
case *fs.RangeOption:
offset, limit = opt.Decode(size)
}
}
if offset < 0 {
return nil, errors.New("invalid offset")
}
if limit < 0 {
limit = size - offset
}
if r, err = o.Object.Open(ctx, options...); err != nil {
return nil, err
}
if offset != 0 || limit < size {
// It's a partial read
return r, err
}
return o.f.newHashingReader(ctx, r, func(sums hashMap) {
if err := o.putHashes(ctx, sums); err != nil {
fs.Infof(o, "auto hashing error: %v", err)
}
})
}
// Put data into the remote path with given modTime and size
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
var (
o fs.Object
common hash.Set
rehash bool
hashes hashMap
)
if fsrc := src.Fs(); fsrc != nil {
common = fsrc.Hashes().Overlap(f.keepHashes)
// Rehash if source does not have all required hashes or hashing is slow
rehash = fsrc.Features().SlowHash || common != f.keepHashes
}
wrapIn := in
if rehash {
r, err := f.newHashingReader(ctx, in, func(sums hashMap) {
hashes = sums
})
fs.Debugf(src, "Rehash in-fly due to incomplete or slow source set %v (err: %v)", common, err)
if err == nil {
wrapIn = r
} else {
rehash = false
}
}
_ = f.pruneHash(src.Remote())
oResult, err := f.Fs.Put(ctx, wrapIn, src, options...)
o, err = f.wrapObject(oResult, err)
if err != nil {
return nil, err
}
if !rehash {
hashes = hashMap{}
for _, ht := range common.Array() {
if h, e := src.Hash(ctx, ht); e == nil && h != "" {
hashes[ht] = h
}
}
}
if len(hashes) > 0 {
err := o.(*Object).putHashes(ctx, hashes)
fs.Debugf(o, "Applied %d source hashes, err: %v", len(hashes), err)
}
return o, err
}
type hashingReader struct {
rd io.Reader
hasher *hash.MultiHasher
fun func(hashMap)
}
func (f *Fs) newHashingReader(ctx context.Context, rd io.Reader, fun func(hashMap)) (*hashingReader, error) {
hasher, err := hash.NewMultiHasherTypes(f.keepHashes)
if err != nil {
return nil, err
}
hr := &hashingReader{
rd: rd,
hasher: hasher,
fun: fun,
}
return hr, nil
}
func (r *hashingReader) Read(p []byte) (n int, err error) {
n, err = r.rd.Read(p)
if err != nil && err != io.EOF {
r.hasher = nil
}
if r.hasher != nil {
if _, errHash := r.hasher.Write(p[:n]); errHash != nil {
r.hasher = nil
err = errHash
}
}
if err == io.EOF && r.hasher != nil {
r.fun(r.hasher.Sums())
r.hasher = nil
}
return
}
func (r *hashingReader) Close() error {
if rc, ok := r.rd.(io.ReadCloser); ok {
return rc.Close()
}
return nil
}
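// Editor's sketch, not part of the original source: newHashingReader is used as a
// transparent tee around an upload or download stream, for example
//
//	hr, err := f.newHashingReader(ctx, in, func(sums hashMap) {
//		fs.Debugf(nil, "computed sums: %v", sums)
//	})
//
// and hr is then read in place of in. The callback fires exactly once, when the
// wrapped stream has been read to io.EOF, so partial or failed reads never
// populate the hash cache.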
// Return object fingerprint or empty string in case of errors
//
// Note that we can't use the generic `fs.Fingerprint` here because
// this fingerprint is used to pick _derived hashes_ that are slow
// to calculate or completely unsupported by the base remote.
//
// The hasher fingerprint must be based on `fpHash`, the first _fast_
// hash supported _by the underlying remote_ (if there is one),
// while `fs.Fingerprint` would select a hash _produced by hasher_,
// creating an unresolvable fingerprint loop.
func (o *Object) fingerprint(ctx context.Context) string {
size := o.Object.Size()
timeStr := "-"
if o.f.fpTime {
timeStr = o.Object.ModTime(ctx).UTC().Format(timeFormat)
if timeStr == "" {
return ""
}
}
hashStr := "-"
if o.f.fpHash != hash.None {
var err error
hashStr, err = o.Object.Hash(ctx, o.f.fpHash)
if hashStr == "" || err != nil {
return ""
}
}
return fmt.Sprintf("%d,%s,%s", size, timeStr, hashStr)
}
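// Editor's illustration, not part of the original source: for a hypothetical
// 1048576-byte object on a base remote that reports modification times and a
// fast MD5, the fingerprint above looks like
//
//	1048576,2021-05-04T12:34:56.000000000+0000,d41d8cd98f00b204e9800998ecf8427e
//
// If the base remote has no usable modtime the middle field is "-", and if it
// has no fast hash the last field is "-".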


@@ -1,415 +0,0 @@
//go:build !plan9
// +build !plan9
package hdfs
import (
"context"
"fmt"
"io"
"os"
"os/user"
"path"
"strings"
"time"
"github.com/colinmarc/hdfs/v2"
krb "github.com/jcmturner/gokrb5/v8/client"
"github.com/jcmturner/gokrb5/v8/config"
"github.com/jcmturner/gokrb5/v8/credentials"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config/configmap"
"github.com/rclone/rclone/fs/config/configstruct"
"github.com/rclone/rclone/fs/hash"
)
// Fs represents a HDFS server
type Fs struct {
name string
root string
features *fs.Features // optional features
opt Options // options for this backend
ci *fs.ConfigInfo // global config
client *hdfs.Client
}
// copy-paste from https://github.com/colinmarc/hdfs/blob/master/cmd/hdfs/kerberos.go
func getKerberosClient() (*krb.Client, error) {
configPath := os.Getenv("KRB5_CONFIG")
if configPath == "" {
configPath = "/etc/krb5.conf"
}
cfg, err := config.Load(configPath)
if err != nil {
return nil, err
}
// Determine the ccache location from the environment, falling back to the
// default location.
ccachePath := os.Getenv("KRB5CCNAME")
if strings.Contains(ccachePath, ":") {
if strings.HasPrefix(ccachePath, "FILE:") {
ccachePath = strings.SplitN(ccachePath, ":", 2)[1]
} else {
return nil, fmt.Errorf("unusable ccache: %s", ccachePath)
}
} else if ccachePath == "" {
u, err := user.Current()
if err != nil {
return nil, err
}
ccachePath = fmt.Sprintf("/tmp/krb5cc_%s", u.Uid)
}
ccache, err := credentials.LoadCCache(ccachePath)
if err != nil {
return nil, err
}
client, err := krb.NewFromCCache(ccache, cfg)
if err != nil {
return nil, err
}
return client, nil
}
// NewFs constructs an Fs from the path
func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
opt := new(Options)
err := configstruct.Set(m, opt)
if err != nil {
return nil, err
}
options := hdfs.ClientOptions{
Addresses: []string{opt.Namenode},
UseDatanodeHostname: false,
}
if opt.ServicePrincipalName != "" {
options.KerberosClient, err = getKerberosClient()
if err != nil {
return nil, fmt.Errorf("problem with kerberos authentication: %w", err)
}
options.KerberosServicePrincipleName = opt.ServicePrincipalName
if opt.DataTransferProtection != "" {
options.DataTransferProtection = opt.DataTransferProtection
}
} else {
options.User = opt.Username
}
client, err := hdfs.NewClient(options)
if err != nil {
return nil, err
}
f := &Fs{
name: name,
root: root,
opt: *opt,
ci: fs.GetConfig(ctx),
client: client,
}
f.features = (&fs.Features{
CanHaveEmptyDirectories: true,
}).Fill(ctx, f)
info, err := f.client.Stat(f.realpath(""))
if err == nil && !info.IsDir() {
f.root = path.Dir(f.root)
return f, fs.ErrorIsFile
}
return f, nil
}
// Name of this fs
func (f *Fs) Name() string {
return f.name
}
// Root of the remote (as passed into NewFs)
func (f *Fs) Root() string {
return f.root
}
// String returns a description of the FS
func (f *Fs) String() string {
return fmt.Sprintf("hdfs://%s", f.opt.Namenode)
}
// Features returns the optional features of this Fs
func (f *Fs) Features() *fs.Features {
return f.features
}
// Precision returns the precision of this Fs
func (f *Fs) Precision() time.Duration {
return time.Second
}
// Hashes are not supported
func (f *Fs) Hashes() hash.Set {
return hash.Set(hash.None)
}
// NewObject finds the file at remote or returns fs.ErrorObjectNotFound
func (f *Fs) NewObject(ctx context.Context, remote string) (fs.Object, error) {
realpath := f.realpath(remote)
fs.Debugf(f, "new [%s]", realpath)
info, err := f.ensureFile(realpath)
if err != nil {
return nil, err
}
return &Object{
fs: f,
remote: remote,
size: info.Size(),
modTime: info.ModTime(),
}, nil
}
// List the objects and directories in dir into entries.
func (f *Fs) List(ctx context.Context, dir string) (entries fs.DirEntries, err error) {
realpath := f.realpath(dir)
fs.Debugf(f, "list [%s]", realpath)
err = f.ensureDirectory(realpath)
if err != nil {
return nil, err
}
list, err := f.client.ReadDir(realpath)
if err != nil {
return nil, err
}
for _, x := range list {
stdName := f.opt.Enc.ToStandardName(x.Name())
remote := path.Join(dir, stdName)
if x.IsDir() {
entries = append(entries, fs.NewDir(remote, x.ModTime()))
} else {
entries = append(entries, &Object{
fs: f,
remote: remote,
size: x.Size(),
modTime: x.ModTime()})
}
}
return entries, nil
}
// Put the object
func (f *Fs) Put(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
o := &Object{
fs: f,
remote: src.Remote(),
}
err := o.Update(ctx, in, src, options...)
return o, err
}
// PutStream uploads to the remote path with the modTime given of indeterminate size
func (f *Fs) PutStream(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) (fs.Object, error) {
return f.Put(ctx, in, src, options...)
}
// Mkdir makes a directory
func (f *Fs) Mkdir(ctx context.Context, dir string) error {
fs.Debugf(f, "mkdir [%s]", f.realpath(dir))
return f.client.MkdirAll(f.realpath(dir), 0755)
}
// Rmdir deletes the directory
func (f *Fs) Rmdir(ctx context.Context, dir string) error {
realpath := f.realpath(dir)
fs.Debugf(f, "rmdir [%s]", realpath)
err := f.ensureDirectory(realpath)
if err != nil {
return err
}
// refuse to remove a non-empty directory
list, err := f.client.ReadDir(realpath)
if err != nil {
return err
}
if len(list) > 0 {
return fs.ErrorDirectoryNotEmpty
}
return f.client.Remove(realpath)
}
// Purge deletes all the files in the directory
func (f *Fs) Purge(ctx context.Context, dir string) error {
realpath := f.realpath(dir)
fs.Debugf(f, "purge [%s]", realpath)
err := f.ensureDirectory(realpath)
if err != nil {
return err
}
return f.client.RemoveAll(realpath)
}
// Move src to this remote using server-side move operations.
//
// This is stored with the remote path given.
//
// It returns the destination Object and a possible error.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantMove
func (f *Fs) Move(ctx context.Context, src fs.Object, remote string) (fs.Object, error) {
srcObj, ok := src.(*Object)
if !ok {
fs.Debugf(src, "Can't move - not same remote type")
return nil, fs.ErrorCantMove
}
// Get the real paths from the remote specs:
sourcePath := srcObj.fs.realpath(srcObj.remote)
targetPath := f.realpath(remote)
fs.Debugf(f, "rename [%s] to [%s]", sourcePath, targetPath)
// Make sure the target's parent folder exists:
dirname := path.Dir(targetPath)
err := f.client.MkdirAll(dirname, 0755)
if err != nil {
return nil, err
}
// Do the move
// Note that the underlying HDFS library hard-codes Overwrite=True, but this is expected rclone behaviour.
err = f.client.Rename(sourcePath, targetPath)
if err != nil {
return nil, err
}
// Look up the resulting object
info, err := f.client.Stat(targetPath)
if err != nil {
return nil, err
}
// And return it:
return &Object{
fs: f,
remote: remote,
size: info.Size(),
modTime: info.ModTime(),
}, nil
}
// DirMove moves src, srcRemote to this remote at dstRemote
// using server-side move operations.
//
// Will only be called if src.Fs().Name() == f.Name()
//
// If it isn't possible then return fs.ErrorCantDirMove
//
// If destination exists then return fs.ErrorDirExists
func (f *Fs) DirMove(ctx context.Context, src fs.Fs, srcRemote, dstRemote string) (err error) {
srcFs, ok := src.(*Fs)
if !ok {
return fs.ErrorCantDirMove
}
// Get the real paths from the remote specs:
sourcePath := srcFs.realpath(srcRemote)
targetPath := f.realpath(dstRemote)
fs.Debugf(f, "rename [%s] to [%s]", sourcePath, targetPath)
// Check if the destination exists:
info, err := f.client.Stat(targetPath)
if err == nil {
fs.Debugf(f, "target directory already exits, IsDir = [%t]", info.IsDir())
return fs.ErrorDirExists
}
// Make sure the target's parent folder exists:
dirname := path.Dir(targetPath)
err = f.client.MkdirAll(dirname, 0755)
if err != nil {
return err
}
// Do the move
err = f.client.Rename(sourcePath, targetPath)
if err != nil {
return err
}
return nil
}
// About gets quota information from the Fs
func (f *Fs) About(ctx context.Context) (*fs.Usage, error) {
info, err := f.client.StatFs()
if err != nil {
return nil, err
}
return &fs.Usage{
Total: fs.NewUsageValue(int64(info.Capacity)),
Used: fs.NewUsageValue(int64(info.Used)),
Free: fs.NewUsageValue(int64(info.Remaining)),
}, nil
}
func (f *Fs) ensureDirectory(realpath string) error {
info, err := f.client.Stat(realpath)
if e, ok := err.(*os.PathError); ok && e.Err == os.ErrNotExist {
return fs.ErrorDirNotFound
}
if err != nil {
return err
}
if !info.IsDir() {
return fs.ErrorDirNotFound
}
return nil
}
func (f *Fs) ensureFile(realpath string) (os.FileInfo, error) {
info, err := f.client.Stat(realpath)
if e, ok := err.(*os.PathError); ok && e.Err == os.ErrNotExist {
return nil, fs.ErrorObjectNotFound
}
if err != nil {
return nil, err
}
if info.IsDir() {
return nil, fs.ErrorObjectNotFound
}
return info, nil
}
func (f *Fs) realpath(dir string) string {
return f.opt.Enc.FromStandardPath(xPath(f.Root(), dir))
}
// Check the interfaces are satisfied
var (
_ fs.Fs = (*Fs)(nil)
_ fs.Purger = (*Fs)(nil)
_ fs.PutStreamer = (*Fs)(nil)
_ fs.Abouter = (*Fs)(nil)
_ fs.Mover = (*Fs)(nil)
_ fs.DirMover = (*Fs)(nil)
)


@@ -1,78 +0,0 @@
//go:build !plan9
// +build !plan9
// Package hdfs provides an interface to the HDFS storage system.
package hdfs
import (
"path"
"strings"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/config"
"github.com/rclone/rclone/lib/encoder"
)
func init() {
fsi := &fs.RegInfo{
Name: "hdfs",
Description: "Hadoop distributed file system",
NewFs: NewFs,
Options: []fs.Option{{
Name: "namenode",
Help: "Hadoop name node and port.\n\nE.g. \"namenode:8020\" to connect to host namenode at port 8020.",
Required: true,
}, {
Name: "username",
Help: "Hadoop user name.",
Examples: []fs.OptionExample{{
Value: "root",
Help: "Connect to hdfs as root.",
}},
}, {
Name: "service_principal_name",
Help: `Kerberos service principal name for the namenode.
Enables KERBEROS authentication. Specifies the Service Principal Name
(SERVICE/FQDN) for the namenode. E.g. "hdfs/namenode.hadoop.docker"
for namenode running as service 'hdfs' with FQDN 'namenode.hadoop.docker'.`,
Advanced: true,
}, {
Name: "data_transfer_protection",
Help: `Kerberos data transfer protection: authentication|integrity|privacy.
Specifies whether or not authentication, data signature integrity
checks, and wire encryption are required when communicating with the
datanodes. Possible values are 'authentication', 'integrity' and
'privacy'. Used only with KERBEROS enabled.`,
Examples: []fs.OptionExample{{
Value: "privacy",
Help: "Ensure authentication, integrity and encryption enabled.",
}},
Advanced: true,
}, {
Name: config.ConfigEncoding,
Help: config.ConfigEncodingHelp,
Advanced: true,
Default: (encoder.Display | encoder.EncodeInvalidUtf8 | encoder.EncodeColon),
}},
}
fs.Register(fsi)
}
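// Editor's sketch, not part of the original source: the options registered above
// correspond to a configuration section such as the following, where the remote
// name and namenode address are hypothetical.
//
//	[hdfsExample]
//	type = hdfs
//	namenode = namenode.example.com:8020
//	username = root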
// Options for this backend
type Options struct {
Namenode string `config:"namenode"`
Username string `config:"username"`
ServicePrincipalName string `config:"service_principal_name"`
DataTransferProtection string `config:"data_transfer_protection"`
Enc encoder.MultiEncoder `config:"encoding"`
}
// xPath make correct file path with leading '/'
func xPath(root string, tail string) string {
if !strings.HasPrefix(root, "/") {
root = "/" + root
}
return path.Join(root, tail)
}
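// Editor's illustration, not part of the original source:
//
//	xPath("data", "dir/file.txt") // -> "/data/dir/file.txt"
//	xPath("/data", "")            // -> "/data"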


@@ -1,21 +0,0 @@
// Test HDFS filesystem interface
//go:build !plan9
// +build !plan9
package hdfs_test
import (
"testing"
"github.com/rclone/rclone/backend/hdfs"
"github.com/rclone/rclone/fstest/fstests"
)
// TestIntegration runs integration tests against the remote
func TestIntegration(t *testing.T) {
fstests.Run(t, &fstests.Opt{
RemoteName: "TestHdfs:",
NilObject: (*hdfs.Object)(nil),
})
}


@@ -1,7 +0,0 @@
// Build stub for hdfs on unsupported platforms to stop go complaining
// about "no buildable Go source files"
//go:build plan9
// +build plan9
package hdfs


@@ -1,178 +0,0 @@
//go:build !plan9
// +build !plan9
package hdfs
import (
"context"
"io"
"path"
"time"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/hash"
"github.com/rclone/rclone/lib/readers"
)
// Object describes an HDFS file
type Object struct {
fs *Fs
remote string
size int64
modTime time.Time
}
// Fs returns the parent Fs
func (o *Object) Fs() fs.Info {
return o.fs
}
// Remote returns the remote path
func (o *Object) Remote() string {
return o.remote
}
// Size returns the size of an object in bytes
func (o *Object) Size() int64 {
return o.size
}
// ModTime returns the modification time of the object
func (o *Object) ModTime(ctx context.Context) time.Time {
return o.modTime
}
// SetModTime sets the modification time of the local fs object
func (o *Object) SetModTime(ctx context.Context, modTime time.Time) error {
realpath := o.fs.realpath(o.Remote())
err := o.fs.client.Chtimes(realpath, modTime, modTime)
if err != nil {
return err
}
o.modTime = modTime
return nil
}
// Storable returns whether this object is storable
func (o *Object) Storable() bool {
return true
}
// Return a string version
func (o *Object) String() string {
if o == nil {
return "<nil>"
}
return o.Remote()
}
// Hash is not supported
func (o *Object) Hash(ctx context.Context, r hash.Type) (string, error) {
return "", hash.ErrUnsupported
}
// Open an object for read
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (in io.ReadCloser, err error) {
realpath := o.realpath()
fs.Debugf(o.fs, "open [%s]", realpath)
f, err := o.fs.client.Open(realpath)
if err != nil {
return nil, err
}
var offset, limit int64 = 0, -1
for _, option := range options {
switch x := option.(type) {
case *fs.SeekOption:
offset = x.Offset
case *fs.RangeOption:
offset, limit = x.Decode(o.Size())
}
}
_, err = f.Seek(offset, io.SeekStart)
if err != nil {
return nil, err
}
if limit != -1 {
in = readers.NewLimitedReadCloser(f, limit)
} else {
in = f
}
return in, err
}
// Update object
func (o *Object) Update(ctx context.Context, in io.Reader, src fs.ObjectInfo, options ...fs.OpenOption) error {
realpath := o.fs.realpath(src.Remote())
dirname := path.Dir(realpath)
fs.Debugf(o.fs, "update [%s]", realpath)
err := o.fs.client.MkdirAll(dirname, 0755)
if err != nil {
return err
}
_, err = o.fs.client.Stat(realpath)
if err == nil {
err = o.fs.client.Remove(realpath)
if err != nil {
return err
}
}
out, err := o.fs.client.Create(realpath)
if err != nil {
return err
}
cleanup := func() {
rerr := o.fs.client.Remove(realpath)
if rerr != nil {
fs.Errorf(o.fs, "failed to remove [%v]: %v", realpath, rerr)
}
}
_, err = io.Copy(out, in)
if err != nil {
cleanup()
return err
}
err = out.Close()
if err != nil {
cleanup()
return err
}
info, err := o.fs.client.Stat(realpath)
if err != nil {
return err
}
err = o.SetModTime(ctx, src.ModTime(ctx))
if err != nil {
return err
}
o.size = info.Size()
return nil
}
// Remove an object
func (o *Object) Remove(ctx context.Context) error {
realpath := o.fs.realpath(o.remote)
fs.Debugf(o.fs, "remove [%s]", realpath)
return o.fs.client.Remove(realpath)
}
func (o *Object) realpath() string {
return o.fs.opt.Enc.FromStandardPath(xPath(o.Fs().Root(), o.remote))
}
// Check the interfaces are satisfied
var (
_ fs.Object = (*Object)(nil)
)


@@ -1,81 +0,0 @@
package api
import (
"encoding/json"
"net/url"
"path"
"strings"
"time"
)
// Some presets for different amounts of information that can be requested for fields;
// it is recommended to only request the information that is actually needed.
var (
HiDriveObjectNoMetadataFields = []string{"name", "type"}
HiDriveObjectWithMetadataFields = append(HiDriveObjectNoMetadataFields, "id", "size", "mtime", "chash")
HiDriveObjectWithDirectoryMetadataFields = append(HiDriveObjectWithMetadataFields, "nmembers")
DirectoryContentFields = []string{"nmembers"}
)
// QueryParameters represents the parameters passed to an API-call.
type QueryParameters struct {
url.Values
}
// NewQueryParameters initializes an instance of QueryParameters and
// returns a pointer to it.
func NewQueryParameters() *QueryParameters {
return &QueryParameters{url.Values{}}
}
// SetFileInDirectory sets the appropriate parameters
// to specify a path to a file in a directory.
// This is used by requests that work with paths for files that do not exist yet.
// (For example when creating a file).
// Most requests use the format produced by SetPath(...).
func (p *QueryParameters) SetFileInDirectory(filePath string) {
directory, file := path.Split(path.Clean(filePath))
p.Set("dir", path.Clean(directory))
p.Set("name", file)
// NOTE: It would be possible to switch to pid-based requests
// by modifying this function.
}
// SetPath sets the appropriate parameters to access the given path.
func (p *QueryParameters) SetPath(objectPath string) {
p.Set("path", path.Clean(objectPath))
// NOTE: It would be possible to switch to pid-based requests
// by modifying this function.
}
// SetTime sets the key to the time-value. It replaces any existing values.
func (p *QueryParameters) SetTime(key string, value time.Time) error {
valueAPI := Time(value)
valueBytes, err := json.Marshal(&valueAPI)
if err != nil {
return err
}
p.Set(key, string(valueBytes))
return nil
}
// AddList adds the given values as a list
// with each value separated by the separator.
// It appends to any existing values associated with key.
func (p *QueryParameters) AddList(key string, separator string, values ...string) {
original := p.Get(key)
p.Set(key, strings.Join(values, separator))
if original != "" {
p.Set(key, original+separator+p.Get(key))
}
}
// AddFields sets the appropriate parameter to access the given fields.
// The given fields will be appended to any other existing fields.
func (p *QueryParameters) AddFields(prefix string, fields ...string) {
modifiedFields := make([]string, len(fields))
for i, field := range fields {
modifiedFields[i] = prefix + field
}
p.AddList("fields", ",", modifiedFields...)
}
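// exampleDirectoryQuery is an editor's sketch and not part of the original source.
// It shows how the helpers above are typically combined to build the query for a
// directory listing; the path used here is hypothetical.
func exampleDirectoryQuery() *QueryParameters {
	p := NewQueryParameters()
	p.SetPath("/users/example/some/dir")
	p.AddFields("members.", HiDriveObjectWithMetadataFields...)
	p.AddFields("", DirectoryContentFields...)
	p.AddList("sort", ",", "name")
	return p
}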


@@ -1,135 +0,0 @@
// Package api has type definitions and code related to API-calls for the HiDrive-API.
package api
import (
"encoding/json"
"fmt"
"net/url"
"strconv"
"time"
)
// Time represents date and time information for the API.
type Time time.Time
// MarshalJSON turns Time into JSON (in Unix-time/UTC).
func (t *Time) MarshalJSON() ([]byte, error) {
secs := time.Time(*t).Unix()
return []byte(strconv.FormatInt(secs, 10)), nil
}
// UnmarshalJSON turns JSON into Time.
func (t *Time) UnmarshalJSON(data []byte) error {
secs, err := strconv.ParseInt(string(data), 10, 64)
if err != nil {
return err
}
*t = Time(time.Unix(secs, 0))
return nil
}
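// Editor's illustration, not part of the original source: a Time holding
// 2021-05-04T12:34:56Z marshals to the Unix timestamp 1620131696, and
// unmarshaling that number yields the same instant back (with second precision).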
// Error is returned from the API when things go wrong.
type Error struct {
Code json.Number `json:"code"`
ContextInfo json.RawMessage
Message string `json:"msg"`
}
// Error returns a string for the error and satisfies the error interface.
func (e *Error) Error() string {
out := fmt.Sprintf("Error %q", e.Code.String())
if e.Message != "" {
out += ": " + e.Message
}
if e.ContextInfo != nil {
out += fmt.Sprintf(" (%+v)", e.ContextInfo)
}
return out
}
// Check Error satisfies the error interface.
var _ error = (*Error)(nil)
// possible types for HiDriveObject
const (
HiDriveObjectTypeDirectory = "dir"
HiDriveObjectTypeFile = "file"
HiDriveObjectTypeSymlink = "symlink"
)
// HiDriveObject describes a folder, a symlink or a file.
// Depending on the type and content, not all fields are present.
type HiDriveObject struct {
Type string `json:"type"`
ID string `json:"id"`
ParentID string `json:"parent_id"`
Name string `json:"name"`
Path string `json:"path"`
Size int64 `json:"size"`
MemberCount int64 `json:"nmembers"`
ModifiedAt Time `json:"mtime"`
ChangedAt Time `json:"ctime"`
MetaHash string `json:"mhash"`
MetaOnlyHash string `json:"mohash"`
NameHash string `json:"nhash"`
ContentHash string `json:"chash"`
IsTeamfolder bool `json:"teamfolder"`
Readable bool `json:"readable"`
Writable bool `json:"writable"`
Shareable bool `json:"shareable"`
MIMEType string `json:"mime_type"`
}
// ModTime returns the modification time of the HiDriveObject.
func (i *HiDriveObject) ModTime() time.Time {
t := time.Time(i.ModifiedAt)
if t.IsZero() {
t = time.Time(i.ChangedAt)
}
return t
}
// UnmarshalJSON turns JSON into HiDriveObject and
// introduces specific default-values where necessary.
func (i *HiDriveObject) UnmarshalJSON(data []byte) error {
type objectAlias HiDriveObject
defaultObject := objectAlias{
Size: -1,
MemberCount: -1,
}
err := json.Unmarshal(data, &defaultObject)
if err != nil {
return err
}
name, err := url.PathUnescape(defaultObject.Name)
if err == nil {
defaultObject.Name = name
}
*i = HiDriveObject(defaultObject)
return nil
}
// DirectoryContent describes the content of a directory.
type DirectoryContent struct {
TotalCount int64 `json:"nmembers"`
Entries []HiDriveObject `json:"members"`
}
// UnmarshalJSON turns JSON into DirectoryContent and
// introduces specific default-values where necessary.
func (d *DirectoryContent) UnmarshalJSON(data []byte) error {
type directoryContentAlias DirectoryContent
defaultDirectoryContent := directoryContentAlias{
TotalCount: -1,
}
err := json.Unmarshal(data, &defaultDirectoryContent)
if err != nil {
return err
}
*d = DirectoryContent(defaultDirectoryContent)
return nil
}


@@ -1,888 +0,0 @@
package hidrive
// This file is for helper-functions which may provide more general and
// specialized functionality than the generic interfaces.
// There are two sections:
// 1. methods bound to Fs
// 2. other functions independent from Fs used throughout the package
// NOTE: Functions accessing paths expect any relative paths
// to be resolved prior to execution with resolvePath(...).
import (
"bytes"
"context"
"errors"
"io"
"net/http"
"path"
"strconv"
"sync"
"time"
"github.com/rclone/rclone/backend/hidrive/api"
"github.com/rclone/rclone/fs"
"github.com/rclone/rclone/fs/accounting"
"github.com/rclone/rclone/fs/fserrors"
"github.com/rclone/rclone/lib/ranges"
"github.com/rclone/rclone/lib/readers"
"github.com/rclone/rclone/lib/rest"
"golang.org/x/sync/errgroup"
"golang.org/x/sync/semaphore"
)
const (
// MaximumUploadBytes represents the maximum number of bytes
// a single upload-operation will support.
MaximumUploadBytes = 2147483647 // = 2GiB - 1
// iterationChunkSize represents the chunk size used to iterate directory contents.
iterationChunkSize = 5000
)
var (
// retryErrorCodes is a slice of error codes that we will always retry.
retryErrorCodes = []int{
429, // Too Many Requests
500, // Internal Server Error
502, // Bad Gateway
503, // Service Unavailable
504, // Gateway Timeout
509, // Bandwidth Limit Exceeded
}
// ErrorFileExists is returned when a query tries to create a file
// that already exists.
ErrorFileExists = errors.New("destination file already exists")
)
// MemberType represents the possible types of entries a directory can contain.
type MemberType string
// possible values for MemberType
const (
AllMembers MemberType = "all"
NoMembers MemberType = "none"
DirectoryMembers MemberType = api.HiDriveObjectTypeDirectory
FileMembers MemberType = api.HiDriveObjectTypeFile
SymlinkMembers MemberType = api.HiDriveObjectTypeSymlink
)
// SortByField represents possible fields to sort entries of a directory by.
type SortByField string
// possible values for SortByField
const (
descendingSort string = "-"
SortByName SortByField = "name"
SortByModTime SortByField = "mtime"
SortByObjectType SortByField = "type"
SortBySize SortByField = "size"
SortByNameDescending SortByField = SortByField(descendingSort) + SortByName
SortByModTimeDescending SortByField = SortByField(descendingSort) + SortByModTime
SortByObjectTypeDescending SortByField = SortByField(descendingSort) + SortByObjectType
SortBySizeDescending SortByField = SortByField(descendingSort) + SortBySize
)
var (
// Unsorted disables sorting and can therefore not be combined with other values.
Unsorted = []SortByField{"none"}
// DefaultSorted does not specify how to sort and
// therefore implies the default sort order.
DefaultSorted = []SortByField{}
)
// CopyOrMoveOperationType represents the possible types of copy- and move-operations.
type CopyOrMoveOperationType int
// possible values for CopyOrMoveOperationType
const (
MoveOriginal CopyOrMoveOperationType = iota
CopyOriginal
CopyOriginalPreserveModTime
)
// OnExistAction represents possible actions the API should take,
// when a request tries to create a path that already exists.
type OnExistAction string
// possible values for OnExistAction
const (
// IgnoreOnExist instructs the API not to execute
// the request in case of a conflict, but to return an error.
IgnoreOnExist OnExistAction = "ignore"
// AutoNameOnExist instructs the API to automatically rename
// any conflicting request-objects.
AutoNameOnExist OnExistAction = "autoname"
// OverwriteOnExist instructs the API to overwrite any conflicting files.
// This can only be used, if the request operates on files directly.
// (For example when moving/copying a file.)
// For most requests this action will simply be ignored.
OverwriteOnExist OnExistAction = "overwrite"
)
// shouldRetry returns a boolean as to whether this resp and err deserve to be retried.
// It tries to expire/invalidate the token, if necessary.
// It returns the err as a convenience.
func (f *Fs) shouldRetry(ctx context.Context, resp *http.Response, err error) (bool, error) {
if fserrors.ContextError(ctx, &err) {
return false, err
}
if resp != nil && (resp.StatusCode == 401 || isHTTPError(err, 401)) && len(resp.Header["Www-Authenticate"]) > 0 {
fs.Debugf(f, "Token might be invalid: %v", err)
if f.tokenRenewer != nil {
iErr := f.tokenRenewer.Expire()
if iErr == nil {
return true, err
}
}
}
return fserrors.ShouldRetry(err) || fserrors.ShouldRetryHTTP(resp, retryErrorCodes), err
}
// resolvePath resolves the given (relative) path and
// returns a path suitable for API-calls.
// This will consider the root-path of the fs and any needed prefixes.
//
// Any relative paths passed to functions that access these paths should
// be resolved with this first!
func (f *Fs) resolvePath(objectPath string) string {
resolved := path.Join(f.opt.RootPrefix, f.root, f.opt.Enc.FromStandardPath(objectPath))
return resolved
}
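// Editor's illustration (hypothetical values, not part of the original source):
// with RootPrefix "/users/example", an Fs root of "backup" and the relative path
// "photos/a.jpg", resolvePath returns "/users/example/backup/photos/a.jpg".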
// iterateOverDirectory calls the given function callback
// on each item found in a given directory.
//
// If callback ever returns true then this exits early with found = true.
func (f *Fs) iterateOverDirectory(ctx context.Context, directory string, searchOnly MemberType, callback func(*api.HiDriveObject) bool, fields []string, sortBy []SortByField) (found bool, err error) {
parameters := api.NewQueryParameters()
parameters.SetPath(directory)
parameters.AddFields("members.", fields...)
parameters.AddFields("", api.DirectoryContentFields...)
parameters.Set("members", string(searchOnly))
for _, v := range sortBy {
// The explicit conversion is necessary for each element.
parameters.AddList("sort", ",", string(v))
}
opts := rest.Opts{
Method: "GET",
Path: "/dir",
Parameters: parameters.Values,
}
iterateContent := func(result *api.DirectoryContent, err error) (bool, error) {
if err != nil {
return false, err
}
for _, item := range result.Entries {
item.Name = f.opt.Enc.ToStandardName(item.Name)
if callback(&item) {
return true, nil
}
}
return false, nil
}
return f.paginateDirectoryAccess(ctx, &opts, iterationChunkSize, 0, iterateContent)
}
// paginateDirectoryAccess executes requests specified via ctx and opts
// which should produce api.DirectoryContent.
// This will paginate the requests using limit starting at the given offset.
//
// The given function callback is called on each api.DirectoryContent found
// along with any errors that occurred.
// If callback ever returns true then this exits early with found = true.
// If callback ever returns an error then this exits early with that error.
func (f *Fs) paginateDirectoryAccess(ctx context.Context, opts *rest.Opts, limit int64, offset int64, callback func(*api.DirectoryContent, error) (bool, error)) (found bool, err error) {
for {
opts.Parameters.Set("limit", strconv.FormatInt(offset, 10)+","+strconv.FormatInt(limit, 10))
var result api.DirectoryContent
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
found, err = callback(&result, err)
if found || err != nil {
return found, err
}
offset += int64(len(result.Entries))
if offset >= result.TotalCount || limit > int64(len(result.Entries)) {
break
}
}
return false, nil
}
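// Editor's illustration, not part of the original source: with iterationChunkSize
// (5000) as the limit, the "limit" parameter above takes the successive values
// "0,5000", "5000,5000", "10000,5000", ... (assuming full pages are returned)
// until every entry has been seen or the callback stops the iteration early.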
// fetchMetadataForPath reads the metadata from the path.
func (f *Fs) fetchMetadataForPath(ctx context.Context, path string, fields []string) (*api.HiDriveObject, error) {
parameters := api.NewQueryParameters()
parameters.SetPath(path)
parameters.AddFields("", fields...)
opts := rest.Opts{
Method: "GET",
Path: "/meta",
Parameters: parameters.Values,
}
var result api.HiDriveObject
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
return &result, nil
}
// copyOrMove copies or moves a directory or file
// from the source-path to the destination-path.
//
// The operation will only be successful
// if the parent-directory of the destination-path exists.
//
// NOTE: Use the explicit methods instead of directly invoking this method.
// (Those are: copyDirectory, moveDirectory, copyFile, moveFile.)
func (f *Fs) copyOrMove(ctx context.Context, isDirectory bool, operationType CopyOrMoveOperationType, source string, destination string, onExist OnExistAction) (*api.HiDriveObject, error) {
parameters := api.NewQueryParameters()
parameters.Set("src", source)
parameters.Set("dst", destination)
if onExist == AutoNameOnExist ||
(onExist == OverwriteOnExist && !isDirectory) {
parameters.Set("on_exist", string(onExist))
}
endpoint := "/"
if isDirectory {
endpoint += "dir"
} else {
endpoint += "file"
}
switch operationType {
case MoveOriginal:
endpoint += "/move"
case CopyOriginalPreserveModTime:
parameters.Set("preserve_mtime", strconv.FormatBool(true))
fallthrough
case CopyOriginal:
endpoint += "/copy"
}
opts := rest.Opts{
Method: "POST",
Path: endpoint,
Parameters: parameters.Values,
}
var result api.HiDriveObject
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
if err != nil {
return nil, err
}
return &result, nil
}
// copyDirectory copies the directory at the source-path to the destination-path and
// returns the resulting api-object if successful.
//
// The operation will only be successful
// if the parent-directory of the destination-path exists.
func (f *Fs) copyDirectory(ctx context.Context, source string, destination string, onExist OnExistAction) (*api.HiDriveObject, error) {
return f.copyOrMove(ctx, true, CopyOriginalPreserveModTime, source, destination, onExist)
}
// moveDirectory moves the directory at the source-path to the destination-path and
// returns the resulting api-object if successful.
//
// The operation will only be successful
// if the parent-directory of the destination-path exists.
func (f *Fs) moveDirectory(ctx context.Context, source string, destination string, onExist OnExistAction) (*api.HiDriveObject, error) {
return f.copyOrMove(ctx, true, MoveOriginal, source, destination, onExist)
}
// copyFile copies the file at the source-path to the destination-path and
// returns the resulting api-object if successful.
//
// The operation will only be successful
// if the parent-directory of the destination-path exists.
//
// NOTE: This operation will expand sparse areas in the content of the source-file
// to blocks of 0-bytes in the destination-file.
func (f *Fs) copyFile(ctx context.Context, source string, destination string, onExist OnExistAction) (*api.HiDriveObject, error) {
return f.copyOrMove(ctx, false, CopyOriginalPreserveModTime, source, destination, onExist)
}
// moveFile moves the file at the source-path to the destination-path and
// returns the resulting api-object if successful.
//
// The operation will only be successful
// if the parent-directory of the destination-path exists.
//
// NOTE: This operation may expand sparse areas in the content of the source-file
// to blocks of 0-bytes in the destination-file.
func (f *Fs) moveFile(ctx context.Context, source string, destination string, onExist OnExistAction) (*api.HiDriveObject, error) {
return f.copyOrMove(ctx, false, MoveOriginal, source, destination, onExist)
}
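// renameFile is an illustrative sketch (not part of the backend) of how the wrappers
// above might be combined: it server-side moves a file and maps a 404 from the API to
// fs.ErrorObjectNotFound. Whether OverwriteOnExist is the right policy depends on the caller.
func (f *Fs) renameFile(ctx context.Context, source string, destination string) (*api.HiDriveObject, error) {
	info, err := f.moveFile(ctx, source, destination, OverwriteOnExist)
	if isHTTPError(err, 404) {
		return nil, fs.ErrorObjectNotFound
	}
	return info, err
}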
// createDirectory creates the directory at the given path and
// returns the resulting api-object if successful.
//
// The directory will only be created if its parent-directory exists.
// This returns fs.ErrorDirNotFound if the parent-directory is not found.
// This returns fs.ErrorDirExists if the directory already exists.
func (f *Fs) createDirectory(ctx context.Context, directory string, onExist OnExistAction) (*api.HiDriveObject, error) {
parameters := api.NewQueryParameters()
parameters.SetPath(directory)
if onExist == AutoNameOnExist {
parameters.Set("on_exist", string(onExist))
}
opts := rest.Opts{
Method: "POST",
Path: "/dir",
Parameters: parameters.Values,
}
var result api.HiDriveObject
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
switch {
case err == nil:
return &result, nil
case isHTTPError(err, 404):
return nil, fs.ErrorDirNotFound
case isHTTPError(err, 409):
return nil, fs.ErrorDirExists
}
return nil, err
}
// createDirectories creates the directory at the given path
// along with any missing parent directories and
// returns the resulting api-object (of the created directory) if successful.
//
// This returns fs.ErrorDirExists if the directory already exists.
//
// If an error occurs while the parent directories are being created,
// any directories already created will NOT be deleted again.
func (f *Fs) createDirectories(ctx context.Context, directory string, onExist OnExistAction) (*api.HiDriveObject, error) {
result, err := f.createDirectory(ctx, directory, onExist)
if err == nil {
return result, nil
}
if err != fs.ErrorDirNotFound {
return nil, err
}
parentDirectory := path.Dir(directory)
_, err = f.createDirectories(ctx, parentDirectory, onExist)
if err != nil && err != fs.ErrorDirExists {
return nil, err
}
// NOTE: Ignoring fs.ErrorDirExists does no harm,
// since it does not mean the child directory cannot be created.
return f.createDirectory(ctx, directory, onExist)
}
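// Illustration: createDirectories(ctx, "a/b/c", onExist) first tries to create "a/b/c".
// If that fails with fs.ErrorDirNotFound it recurses on "a/b" (and, if necessary, on "a")
// and then retries "a/b/c". An fs.ErrorDirExists returned for a parent is ignored,
// since the parent already existing is exactly what the retry needs.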
// deleteDirectory deletes the directory at the given path.
//
// If recursive is false, the directory will only be deleted if it is empty.
// If recursive is true, the directory will be deleted regardless of its content.
// This returns fs.ErrorDirNotFound if the directory is not found.
// This returns fs.ErrorDirectoryNotEmpty if the directory is not empty and
// recursive is false.
func (f *Fs) deleteDirectory(ctx context.Context, directory string, recursive bool) error {
parameters := api.NewQueryParameters()
parameters.SetPath(directory)
parameters.Set("recursive", strconv.FormatBool(recursive))
opts := rest.Opts{
Method: "DELETE",
Path: "/dir",
Parameters: parameters.Values,
NoResponse: true,
}
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(ctx, resp, err)
})
switch {
case isHTTPError(err, 404):
return fs.ErrorDirNotFound
case isHTTPError(err, 409):
return fs.ErrorDirectoryNotEmpty
}
return err
}
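// removeDirectory is an illustrative sketch (not the backend's actual Rmdir/Purge code):
// path resolution and name encoding are omitted; the recursive flag is what distinguishes
// removing an empty directory from purging a whole subtree.
func (f *Fs) removeDirectory(ctx context.Context, directory string, purge bool) error {
	// With purge == false this fails with fs.ErrorDirectoryNotEmpty if the directory has content.
	return f.deleteDirectory(ctx, directory, purge)
}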
// deleteObject deletes the object/file at the given path.
//
// This returns fs.ErrorObjectNotFound if the object is not found.
func (f *Fs) deleteObject(ctx context.Context, path string) error {
parameters := api.NewQueryParameters()
parameters.SetPath(path)
opts := rest.Opts{
Method: "DELETE",
Path: "/file",
Parameters: parameters.Values,
NoResponse: true,
}
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.Call(ctx, &opts)
return f.shouldRetry(ctx, resp, err)
})
if isHTTPError(err, 404) {
return fs.ErrorObjectNotFound
}
return err
}
// createFile creates a file at the given path
// with the content of the io.ReadSeeker.
// This guarantees that existing files will not be overwritten.
// The maximum size of the content is limited by MaximumUploadBytes.
// The io.ReadSeeker should be resettable by seeking to its start.
// If modTime is not the zero time instant,
// it will be set as the file's modification time after the operation.
//
// This returns fs.ErrorDirNotFound
// if the parent directory of the file is not found.
// This returns ErrorFileExists if a file already exists at the specified path.
func (f *Fs) createFile(ctx context.Context, path string, content io.ReadSeeker, modTime time.Time, onExist OnExistAction) (*api.HiDriveObject, error) {
parameters := api.NewQueryParameters()
parameters.SetFileInDirectory(path)
if onExist == AutoNameOnExist {
parameters.Set("on_exist", string(onExist))
}
var err error
if !modTime.IsZero() {
err = parameters.SetTime("mtime", modTime)
if err != nil {
return nil, err
}
}
opts := rest.Opts{
Method: "POST",
Path: "/file",
Body: content,
ContentType: "application/octet-stream",
Parameters: parameters.Values,
}
var result api.HiDriveObject
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
// Reset the reading index (in case this is a retry).
if _, err = content.Seek(0, io.SeekStart); err != nil {
return false, err
}
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
switch {
case err == nil:
return &result, nil
case isHTTPError(err, 404):
return nil, fs.ErrorDirNotFound
case isHTTPError(err, 409):
return nil, ErrorFileExists
}
return nil, err
}
// overwriteFile updates the content of the file at the given path
// with the content of the io.ReadSeeker.
// If the file does not exist it will be created.
// The maximum size of the content is limited by MaximumUploadBytes.
// The io.ReadSeeker should be resettable by seeking to its start.
// If modTime is not the zero time instant,
// it will be set as the file's modification time after the operation.
//
// This returns fs.ErrorDirNotFound
// if the parent directory of the file is not found.
func (f *Fs) overwriteFile(ctx context.Context, path string, content io.ReadSeeker, modTime time.Time) (*api.HiDriveObject, error) {
parameters := api.NewQueryParameters()
parameters.SetFileInDirectory(path)
var err error
if !modTime.IsZero() {
err = parameters.SetTime("mtime", modTime)
if err != nil {
return nil, err
}
}
opts := rest.Opts{
Method: "PUT",
Path: "/file",
Body: content,
ContentType: "application/octet-stream",
Parameters: parameters.Values,
}
var result api.HiDriveObject
var resp *http.Response
err = f.pacer.Call(func() (bool, error) {
// Reset the reading index (in case this is a retry).
if _, err = content.Seek(0, io.SeekStart); err != nil {
return false, err
}
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
switch {
case err == nil:
return &result, nil
case isHTTPError(err, 404):
return nil, fs.ErrorDirNotFound
}
return nil, err
}
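// uploadSmallFile is an illustrative sketch (not part of the backend): it shows how a plain
// io.Reader can be made retry-safe for createFile/overwriteFile by wrapping it with
// cachedReader (defined further below), which provides the required io.ReadSeeker.
// The content must still fit within MaximumUploadBytes.
func (f *Fs) uploadSmallFile(ctx context.Context, path string, in io.Reader, modTime time.Time, onExist OnExistAction) (*api.HiDriveObject, error) {
	if onExist == AutoNameOnExist {
		// Create a new (possibly auto-renamed) file instead of touching an existing one.
		return f.createFile(ctx, path, cachedReader(in), modTime, onExist)
	}
	return f.overwriteFile(ctx, path, cachedReader(in), modTime)
}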
// uploadFileChunked updates the content of the existing file at the given path
// with the content of the io.Reader.
// Returns the size of the contiguous prefix of the file that was written successfully,
// stopping before the first failed write. If nothing was written this will be 0.
// Returns the resulting api-object if successful.
//
// Replaces the file contents by uploading multiple chunks of the given size in parallel.
// Therefore this can be used to upload files of any size efficiently.
// The number of parallel transfers is limited by transferLimit which should be larger than 0.
// If modTime is not the zero time instant,
// it will be set as the file's modification time after the operation.
//
// NOTE: This method uses updateFileChunked and may create sparse files,
// if the upload of a chunk fails unexpectedly.
// See note about sparse files in patchFile.
// If any of the uploads fail, the process will be aborted and
// the first error that occurred will be returned.
// This is not an atomic operation,
// therefore if the upload fails the file may be partially modified.
//
// This returns fs.ErrorObjectNotFound if the object is not found.
func (f *Fs) uploadFileChunked(ctx context.Context, path string, content io.Reader, modTime time.Time, chunkSize int, transferLimit int64) (okSize uint64, info *api.HiDriveObject, err error) {
okSize, err = f.updateFileChunked(ctx, path, content, 0, chunkSize, transferLimit)
if err == nil {
info, err = f.resizeFile(ctx, path, okSize, modTime)
}
return okSize, info, err
}
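// uploadOrTruncate is an illustrative sketch (an assumed clean-up strategy, not necessarily
// what the backend does): it uses okSize to cut off the partially written tail after a
// failed chunked upload, so no half-written, sparse region is left behind.
func (f *Fs) uploadOrTruncate(ctx context.Context, path string, in io.Reader, modTime time.Time, chunkSize int, transferLimit int64) (*api.HiDriveObject, error) {
	okSize, info, err := f.uploadFileChunked(ctx, path, in, modTime, chunkSize, transferLimit)
	if err != nil {
		// Truncate back to the last byte that is known to have been written correctly.
		if _, truncateErr := f.resizeFile(ctx, path, okSize, modTime); truncateErr != nil {
			fs.Errorf(f, "Failed to truncate after failed upload: %v", truncateErr)
		}
		return nil, err
	}
	return info, nil
}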
// updateFileChunked updates the content of the existing file at the given path
// starting at the given offset.
// Returns the size of the contiguous prefix of the file content that is known to have been
// written correctly, stopping before the first failed write.
// If none of the chunks could be written this will be no larger than the given offset.
//
// Replaces the file contents starting from the given byte offset
// with the content of the io.Reader.
// If the offset is beyond the file end, the file is extended up to the offset.
//
// The upload is done in multiple chunks of the given size in parallel.
// Therefore this can be used to upload files of any size efficiently.
// The number of parallel transfers is limited by transferLimit which should be larger than 0.
//
// NOTE: Because it is inefficient to set the modification time with every chunk,
// setting it to a specific value must be done in a separate request
// after this operation finishes.
//
// NOTE: This method uses patchFile and may create sparse files,
// especially if the upload of a chunk fails unexpectedly.
// See note about sparse files in patchFile.
// If any of the uploads fail, the process will be aborted and
// the first error that occurred will be returned.
// This is not an atomic operation,
// therefore if the upload fails the file may be partially modified.
//
// This returns fs.ErrorObjectNotFound if the object is not found.
func (f *Fs) updateFileChunked(ctx context.Context, path string, content io.Reader, offset uint64, chunkSize int, transferLimit int64) (okSize uint64, err error) {
var (
okChunksMu sync.Mutex // protects the variables below
okChunks []ranges.Range
)
g, gCtx := errgroup.WithContext(ctx)
transferSemaphore := semaphore.NewWeighted(transferLimit)
var readErr error
startMoreTransfers := true
zeroTime := time.Time{}
for chunk := uint64(0); startMoreTransfers; chunk++ {
// Acquire semaphore to limit number of transfers in parallel.
readErr = transferSemaphore.Acquire(gCtx, 1)
if readErr != nil {
break
}
// Read a chunk of data.
chunkReader, bytesRead, readErr := readerForChunk(content, chunkSize)
if bytesRead < chunkSize {
startMoreTransfers = false
}
if readErr != nil || bytesRead <= 0 {
break
}
// Transfer the chunk.
chunkOffset := uint64(chunkSize)*chunk + offset
g.Go(func() error {
// After this upload is done,
// signal that another transfer can be started.
defer transferSemaphore.Release(1)
uploadErr := f.patchFile(gCtx, path, cachedReader(chunkReader), chunkOffset, zeroTime)
if uploadErr == nil {
// Remember successfully written chunks.
okChunksMu.Lock()
okChunks = append(okChunks, ranges.Range{Pos: int64(chunkOffset), Size: int64(bytesRead)})
okChunksMu.Unlock()
fs.Debugf(f, "Done uploading chunk of size %v at offset %v.", bytesRead, chunkOffset)
} else {
fs.Infof(f, "Error while uploading chunk at offset %v. Error is %v.", chunkOffset, uploadErr)
}
return uploadErr
})
}
if readErr != nil {
// Log the error in case it is later ignored because of an upload-error.
fs.Infof(f, "Error while reading/preparing to upload a chunk. Error is %v.", readErr)
}
err = g.Wait()
// Compute the first continuous range of the file content,
// which does not contain any failed chunks.
// Do not forget to add the file content up to the starting offset,
// which is presumed to be already correct.
rs := ranges.Ranges{}
rs.Insert(ranges.Range{Pos: 0, Size: int64(offset)})
for _, chunkRange := range okChunks {
rs.Insert(chunkRange)
}
if len(rs) > 0 && rs[0].Pos == 0 {
okSize = uint64(rs[0].Size)
}
if err != nil {
return okSize, err
}
if readErr != nil {
return okSize, readErr
}
return okSize, nil
}
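// Illustration of the chunk arithmetic above (assumed values): with offset=0 and a
// chunkSize of 48 MiB, chunk 0 is written at byte 0, chunk 1 at 48 MiB, chunk 2 at 96 MiB,
// and so on (chunkOffset = chunkSize*chunk + offset). If chunk 1 fails while chunks 0 and 2
// succeed, the first continuous range only covers chunk 0, so okSize is 48 MiB.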
// patchFile updates the content of the existing file at the given path
// starting at the given offset.
//
// Replaces the file contents starting from the given byte offset
// with the content of the io.ReadSeeker.
// If the offset is beyond the file end, the file is extended up to the offset.
// The maximum size of the update is limited by MaximumUploadBytes.
// The io.ReadSeeker should be resettable by seeking to its start.
// If modTime is not the zero time instant,
// it will be set as the file's modification time after the operation.
//
// NOTE: By extending the file up to the offset this may create sparse files,
// which allocate less space on the file system than their apparent size indicates,
// since holes between data chunks are "real" holes
// and not regions made up of consecutive 0-bytes.
// Subsequent operations (such as copying data)
// usually expand the holes into regions of 0-bytes.
//
// This returns fs.ErrorObjectNotFound if the object is not found.
func (f *Fs) patchFile(ctx context.Context, path string, content io.ReadSeeker, offset uint64, modTime time.Time) error {
parameters := api.NewQueryParameters()
parameters.SetPath(path)
parameters.Set("offset", strconv.FormatUint(offset, 10))
if !modTime.IsZero() {
err := parameters.SetTime("mtime", modTime)
if err != nil {
return err
}
}
opts := rest.Opts{
Method: "PATCH",
Path: "/file",
Body: content,
ContentType: "application/octet-stream",
Parameters: parameters.Values,
NoResponse: true,
}
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
// Reset the reading index (in case this is a retry).
_, err = content.Seek(0, io.SeekStart)
if err != nil {
return false, err
}
resp, err = f.srv.Call(ctx, &opts)
if isHTTPError(err, 423) {
return true, err
}
return f.shouldRetry(ctx, resp, err)
})
if isHTTPError(err, 404) {
return fs.ErrorObjectNotFound
}
return err
}
// resizeFile updates the existing file at the given path to be of the given size
// and returns the resulting api-object if successful.
//
// If the given size is smaller than the current filesize,
// the file is cut/truncated at that position.
// If the given size is larger, the file is extended up to that position.
// If modTime is not the zero time instant,
// it will be set as the file's modification time after the operation.
//
// NOTE: By extending the file this may create sparse files,
// which allocate less space on the file system than their apparent size indicates,
// since holes between data chunks are "real" holes
// and not regions made up of consecutive 0-bytes.
// Subsequent operations (such as copying data)
// usually expand the holes into regions of 0-bytes.
//
// This returns fs.ErrorObjectNotFound if the object is not found.
func (f *Fs) resizeFile(ctx context.Context, path string, size uint64, modTime time.Time) (*api.HiDriveObject, error) {
parameters := api.NewQueryParameters()
parameters.SetPath(path)
parameters.Set("size", strconv.FormatUint(size, 10))
if !modTime.IsZero() {
err := parameters.SetTime("mtime", modTime)
if err != nil {
return nil, err
}
}
opts := rest.Opts{
Method: "POST",
Path: "/file/truncate",
Parameters: parameters.Values,
}
var result api.HiDriveObject
var resp *http.Response
var err error
err = f.pacer.Call(func() (bool, error) {
resp, err = f.srv.CallJSON(ctx, &opts, nil, &result)
return f.shouldRetry(ctx, resp, err)
})
switch {
case err == nil:
return &result, nil
case isHTTPError(err, 404):
return nil, fs.ErrorObjectNotFound
}
return nil, err
}
// ------------------------------------------------------------
// isHTTPError compares the numerical status code
// of an api.Error to the given HTTP status.
//
// If the given error is not an api.Error or
// a numerical status code could not be determined, this returns false.
// Otherwise this returns whether the status code of the error is equal to the given status.
func isHTTPError(err error, status int64) bool {
if apiErr, ok := err.(*api.Error); ok {
errStatus, decodeErr := apiErr.Code.Int64()
if decodeErr == nil && errStatus == status {
return true
}
}
return false
}
// createHiDriveScopes creates oauth-scopes
// from the given user-role and access-permissions.
//
// If the arguments are empty, they will not be included in the result.
func createHiDriveScopes(role string, access string) []string {
switch {
case role != "" && access != "":
return []string{access + "," + role}
case role != "":
return []string{role}
case access != "":
return []string{access}
}
return []string{}
}
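// Illustration: createHiDriveScopes("user", "rw") returns []string{"rw,user"}, the combined
// "access,role" form; createHiDriveScopes("", "ro") returns []string{"ro"}; two empty
// arguments yield an empty slice. The values "user", "rw" and "ro" are examples only.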
// cachedReader returns a version of the reader that caches its contents and
// can therefore be reset using Seek.
func cachedReader(reader io.Reader) io.ReadSeeker {
bytesReader, ok := reader.(*bytes.Reader)
if ok {
return bytesReader
}
repeatableReader, ok := reader.(*readers.RepeatableReader)
if ok {
return repeatableReader
}
return readers.NewRepeatableReader(reader)
}
// readerForChunk reads a chunk of bytes from reader (after handling any accounting).
// Returns a new io.Reader (chunkReader) for that chunk
// and the number of bytes that have been read from reader.
func readerForChunk(reader io.Reader, length int) (chunkReader io.Reader, bytesRead int, err error) {
// Unwrap any accounting from the input if present.
reader, wrap := accounting.UnWrap(reader)
// Read a chunk of data.
buffer := make([]byte, length)
bytesRead, err = io.ReadFull(reader, buffer)
if err == io.EOF || err == io.ErrUnexpectedEOF {
err = nil
}
if err != nil {
return nil, bytesRead, err
}
// Truncate unused capacity.
buffer = buffer[:bytesRead]
// Use wrap to put any accounting back for chunkReader.
return wrap(bytes.NewReader(buffer)), bytesRead, nil
}
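// Illustration (assumed input): for a 10-byte reader and length=4, successive calls to
// readerForChunk return chunks of 4, 4 and 2 bytes, each with err == nil (short reads at
// the end of the input are not treated as errors). A short or zero-byte read is what
// updateFileChunked above uses as its signal to stop starting new transfers.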
